Apple’s machine learning (ML) teams have quietly flexed their muscle with the release of a new ML framework developed for Apple Silicon. MLX, or ML Explore, arrives after being tested over the summer and is now available through GitHub.

Machine Learning for Apple Silicon

In a post on X, Awni Hannun of Apple’s ML team calls the software “…an efficient machine learning framework specifically designed for Apple silicon (i.e. your laptop!)”

The idea is to streamline the training and deployment of ML models for researchers who use Apple hardware. MLX is a NumPy-like array framework designed for efficient and flexible machine learning on Apple’s processors.

This isn’t a consumer-facing tool; it equips developers with what appears to be a powerful environment within which to build ML models. The company also seems to have worked to embrace the languages developers already want to use, rather than force one on them, and it appears to have built some powerful LLM tooling in the process.

Familiar to developers

MLX’s design is inspired by existing frameworks such as PyTorch, JAX, and ArrayFire. However, MLX adds support for a unified memory model, which means arrays live in shared memory and operations can be performed on any of the supported device types without copying data.

The team explains: “The Python API closely follows NumPy with a few exceptions. MLX also has a fully featured C++ API which closely follows the Python API.” (A minimal sketch of that Python API appears after the examples list below.)

Notes accompanying the release also say: “The framework is intended to be user-friendly, but still efficient to train and deploy models…. We intend to make it easy for researchers to extend and improve MLX with the goal of quickly exploring new ideas.”

Pretty good at first glance

At first glance, MLX seems relatively good and (as explained on GitHub) is equipped with several features that set it apart. Beyond the familiar APIs, these include:

- Composable function transformations: MLX has composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization (illustrated in the second sketch below).
- Lazy computation: Computations in MLX are lazy; arrays are only materialized when needed.
- Dynamic graph construction: Computation graphs in MLX are built dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive.
- Multi-device: Operations can run on any of the supported devices (currently, the CPU and GPU).
- Unified memory: Under the unified memory model, arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without moving data.

What it can already achieve

Apple has provided a collection of examples of what MLX can do. These appear to confirm the company now has a highly efficient language model, powerful tools for image generation using Stable Diffusion, and highly accurate speech recognition. This tallies with claims made earlier this year, and with some speculation concerning infinite virtual world creation for future Vision Pro experiences. Examples include:

- Training a Transformer language model, or fine-tuning with LoRA.
- Text generation with Mistral.
- Image generation with Stable Diffusion.
- Speech recognition with Whisper.
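For a sense of what the framework looks like in practice, here is a minimal sketch of the NumPy-like Python API, lazy evaluation, and unified memory model described above. It follows the patterns shown in the project’s GitHub README at the time of writing rather than Apple’s own example code, so treat it as illustrative.

```python
# Illustrative sketch only: array creation, lazy evaluation, and
# per-operation device selection with MLX's NumPy-like Python API.
import mlx.core as mx

# Arrays are created much as they would be in NumPy.
a = mx.array([1.0, 2.0, 3.0])
b = mx.random.normal((3,))

# Computation is lazy: this line records the work but does not run it yet.
c = a * b + 2.0

# Arrays are only materialized when needed; mx.eval() (or printing,
# or converting to NumPy) forces the computation to run.
mx.eval(c)
print(c)

# Unified memory: the same arrays can be used by operations on either
# device, with no explicit copies; the device is chosen per operation.
d = mx.add(a, b, stream=mx.cpu)  # run this op on the CPU
e = mx.add(a, b, stream=mx.gpu)  # run this op on the GPU, same arrays
mx.eval(d, e)
```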
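The composable function transformations mentioned in the feature list work along similar lines to those in JAX. The second sketch below, again assuming the API documented in the README (mx.grad and related transforms), applies automatic differentiation to a tiny, made-up squared-error loss; the model and data here are hypothetical, not drawn from Apple’s examples.

```python
# Illustrative sketch only: composable function transformations in MLX.
import mlx.core as mx

def loss(w, x, y):
    # Squared-error loss for a simple linear model, y ~ x @ w.
    return mx.mean((x @ w - y) ** 2)

# mx.grad returns a new function that computes the gradient of `loss`
# with respect to its first argument (the weights, w).
grad_fn = mx.grad(loss)

x = mx.random.normal((8, 4))   # hypothetical inputs
y = mx.random.normal((8,))     # hypothetical targets
w = mx.zeros((4,))

g = grad_fn(w, x, y)           # gradients are computed lazily, too
mx.eval(g)
print(g)

# Transformations compose: nesting mx.grad gives higher-order derivatives.
d2_sin = mx.grad(mx.grad(mx.sin))
print(d2_sin(mx.array(0.5)))   # should equal -sin(0.5)
```

According to the release notes, the same transformation machinery also covers automatic vectorization and computation graph optimization.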
Developers, developers….

Ultimately, Apple seems to want to democratize machine learning. “MLX is designed by machine learning researchers for machine learning researchers,” the team explains. In other words, Apple has recognized the need to build open, easy-to-use development environments for machine learning in order to nurture further work in that space.

That MLX runs on Apple Silicon also matters, given that Apple’s processors now sit across all its products, including Mac, iPhone, and iPad. The use of the GPU, CPU and, conceivably at some point, the Neural Engine on those chips could translate into on-device execution of ML models (good for privacy) with performance other edge processors cannot match.

Is it too little, too late?

Given the big buzz around OpenAI’s ChatGPT when it appeared around this time last year, is Apple truly late to the party? I don’t think so. The company has clearly decided to focus on equipping ML researchers with the best tools it can make, including powerful M3 Macs to build models on. Now it wants to translate that attention into viable, human-focused AI tools for the rest of us to enjoy. It is much too early to declare Apple defeated in an AI industry war that has really only just begun.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.