Apple quietly releases MLX AI framework to build foundation AI models


Apple’s machine learning research team has quietly released a new machine learning framework called MLX, designed to streamline the development of machine learning models on Apple Silicon. The framework is engineered to make building and training models on Apple devices simpler and more efficient, and it draws inspiration from frameworks such as PyTorch, JAX, and ArrayFire.

A key difference between these frameworks and MLX is the unified memory model. Arrays in MLX live in shared memory, so operations on MLX arrays can be performed on any of the supported device types without copying data. The currently supported device types are the CPU and GPU.
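As a rough illustration of the unified memory model, here is a minimal sketch (assuming the mlx.core API as documented by Apple): the same arrays are handed to operations on both devices, selected per operation, with no explicit copies.

```python
import mlx.core as mx

# Arrays are allocated in unified memory; there is no .to(device) step.
a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

# The same arrays can be consumed by operations on either device,
# chosen per operation via the stream argument, with no data copies.
c_gpu = mx.matmul(a, b, stream=mx.gpu)
c_cpu = mx.add(a, b, stream=mx.cpu)

mx.eval(c_gpu, c_cpu)  # computations are lazy; this forces them to run
```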

What is Apple MLX?

MLX is a NumPy-like array framework designed for efficient and flexible machine learning on Apple silicon, brought to you by Apple machine learning research. The Python API closely follows NumPy with a few exceptions, and MLX also has a fully featured C++ API which closely mirrors the Python API. The main differences between MLX and NumPy, illustrated in the short sketch after this list, are:

  • Composable function transformations: MLX has composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.
  • Lazy computation: Computations in MLX are lazy. Arrays are only materialized when needed.
  • Multi-device: Operations can run on any of the supported devices (CPU, GPU, …)
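As a minimal sketch of the first two points, lazy evaluation and the mx.grad transformation might be used like this (the function and variable names here are illustrative only):

```python
import mlx.core as mx

def loss(w, x, y):
    # Plain squared-error loss written with NumPy-style operations.
    return mx.mean((x @ w - y) ** 2)

# Composable transformation: mx.grad returns a new function that computes
# the gradient of `loss` with respect to its first argument, w.
grad_fn = mx.grad(loss)

x = mx.random.normal((32, 8))
y = mx.random.normal((32,))
w = mx.zeros((8,))

g = grad_fn(w, x, y)  # lazy: g is a node in the compute graph, not yet materialized
mx.eval(g)            # the array is only materialized when explicitly evaluated (or printed)
```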

The MLX framework is a significant advancement, especially for those working with Apple’s M-series chips, which are known for their strong performance in AI tasks. This new framework is not only a step forward for Apple but also for the broader AI community, as it has been released as open source, marking a shift from Apple’s typically closed-off software development practices. MLX is available on PyPI; all you have to do to use MLX on your own Apple silicon computer is run: pip install mlx
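After installing, a quick smoke test along these lines should confirm that MLX is working (the device shown in the comment is typical on Apple silicon, not guaranteed):

```python
# Quick smoke test after `pip install mlx` (requires an Apple silicon Mac).
import mlx.core as mx

print(mx.default_device())  # typically the GPU device on Apple silicon
a = mx.arange(10) ** 2
print(a)                    # printing forces evaluation of the lazy array
```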

Apple MLX AI framework

The MLX framework is designed to work in harmony with the M-series chips, including the advanced M3 chip, which features a dedicated neural engine for AI operations. This synergy between hardware and software improves the efficiency and speed of machine learning tasks such as processing text, generating images, and recognizing speech. The framework also fits alongside popular machine learning platforms like PyTorch and JAX: its companion MLX data package is framework agnostic, easing the process of loading and managing data and integrating MLX into existing workflows.
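As a rough sketch only, a small pipeline with the separately installed MLX data package (pip install mlx-data) might look like the following; the calls shown (buffer_from_vector, to_stream, batch, prefetch) are drawn from the mlx-data documentation and should be checked against the current release, and the toy data is purely hypothetical.

```python
# Sketch only: uses the separately installed `mlx-data` package (pip install mlx-data).
import numpy as np
import mlx.data as dx

# A buffer is an indexable collection of sample dictionaries (hypothetical toy data).
samples = [
    {"x": np.random.randn(8).astype(np.float32), "y": np.array(i % 2, dtype=np.int32)}
    for i in range(1000)
]
buffer = dx.buffer_from_vector(samples)

# Streams add shuffling, batching and background prefetching; the resulting
# batches are plain dicts of arrays that MLX, PyTorch or JAX can consume.
stream = buffer.shuffle().to_stream().batch(32).prefetch(4, 2)

for batch in stream:
    x, y = batch["x"], batch["y"]
    # ... feed x and y into a training step ...
```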

Developers can access MLX through a Python API that is as approachable as NumPy, making it accessible to a wide range of users. For those looking for even faster performance, there is also a C++ API that takes advantage of lower-level programming. The framework’s composable function transformations and lazy computation lead to code that is not only more efficient but also easier to maintain, while its multi-device support and unified memory model ensure that memory and compute are shared efficiently between the CPU and GPU.
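To give a feel for how these transformations compose in the Python API, here is a minimal, illustrative sketch that vectorizes a gradient function over a batch (assuming mx.vmap behaves like its JAX counterpart):

```python
import mlx.core as mx

def f(x):
    # A scalar function of a scalar input.
    return mx.sin(x) * x

# Transformations compose: take the derivative, then vectorize it over a batch.
df = mx.grad(f)
per_example_df = mx.vmap(df)

xs = mx.linspace(0.0, 3.0, 16)
print(per_example_df(xs))  # evaluated lazily; materialized when printed
```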

Apple MLX

Apple is committed to supporting developers who are interested in using MLX. They have provided a GitHub repository that contains sample code and comprehensive documentation. This is an invaluable resource for those who want to explore the capabilities of MLX and integrate it into their machine learning projects.

The introduction of the MLX framework is a clear indication of Apple’s commitment to advancing machine learning technology. Its compatibility with the M-series chips, open-source nature, and ability to support a variety of machine learning tasks make it a potent tool for developers. The MLX data package’s compatibility with other frameworks, coupled with the availability of both Python and C++ APIs, positions MLX to become a staple in the machine learning community.

The Apple MLX framework’s additional features, such as composable function transformations, lazy computation, multi-device support, and a unified memory model, further enhance its appeal. As developers begin to utilize the resources provided on GitHub, we can expect to see innovative machine learning applications that fully leverage the capabilities of Apple Silicon.
