Metadata-Version: 2.4
Name: mlx
Version: 0.29.2.dev20251007+0073096d
Summary: A framework for machine learning on Apple silicon.
Home-page: https://github.com/ml-explore/mlx
Author: MLX Contributors
Author-email: mlx@group.apple.com
License: MIT
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Provides-Extra: dev
Requires-Dist: nanobind==2.4.0; extra == "dev"
Requires-Dist: numpy; extra == "dev"
Requires-Dist: pre-commit; extra == "dev"
Requires-Dist: setuptools>=80; extra == "dev"
Requires-Dist: torch; extra == "dev"
Requires-Dist: typing_extensions; extra == "dev"
Dynamic: author
Dynamic: author-email
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license
Dynamic: license-file
Dynamic: provides-extra
Dynamic: requires-python
Dynamic: summary

[**Quickstart**](https://ml-explore.github.io/mlx/build/html/usage/quick_start.html) |
[**Documentation**](https://ml-explore.github.io/mlx/build/html/index.html) |
[**Examples**](https://github.com/ml-explore/mlx-examples)

[CircleCI](https://circleci.com/gh/ml-explore/mlx)

MLX is an array framework for machine learning on Apple silicon,
brought to you by Apple machine learning research.

Some key features of MLX include:

- **Familiar APIs**: MLX has a Python API that closely follows NumPy. MLX
  also has fully featured C++, [C](https://github.com/ml-explore/mlx-c), and
  [Swift](https://github.com/ml-explore/mlx-swift/) APIs, which closely mirror
  the Python API. MLX has higher-level packages like `mlx.nn` and
  `mlx.optimizers` with APIs that closely follow PyTorch to simplify building
  more complex models.

- **Composable function transformations**: MLX supports composable function
  transformations for automatic differentiation, automatic vectorization,
  and computation graph optimization.

- **Lazy computation**: Computations in MLX are lazy. Arrays are only
  materialized when needed.

- **Dynamic graph construction**: Computation graphs in MLX are constructed
  dynamically. Changing the shapes of function arguments does not trigger
  slow compilations, and debugging is simple and intuitive.

- **Multi-device**: Operations can run on any of the supported devices
  (currently the CPU and the GPU).

- **Unified memory**: A notable difference between MLX and other frameworks
  is the *unified memory model*. Arrays in MLX live in shared memory.
  Operations on MLX arrays can be performed on any of the supported
  device types without transferring data.

MLX is designed by machine learning researchers for machine learning
researchers. The framework is intended to be user-friendly, while remaining
efficient for training and deploying models. The design of the framework
itself is also conceptually simple. We intend to make it easy for researchers
to extend and improve MLX with the goal of quickly exploring new ideas.

The design of MLX is inspired by frameworks like
[NumPy](https://numpy.org/doc/stable/index.html),
[PyTorch](https://pytorch.org/), [JAX](https://github.com/google/jax), and
[ArrayFire](https://arrayfire.org/).

## Examples

The [MLX examples repo](https://github.com/ml-explore/mlx-examples) has a
variety of examples, including:

- [Transformer language model](https://github.com/ml-explore/mlx-examples/tree/main/transformer_lm) training.
- Large-scale text generation with
  [LLaMA](https://github.com/ml-explore/mlx-examples/tree/main/llms/llama) and
  fine-tuning with [LoRA](https://github.com/ml-explore/mlx-examples/tree/main/lora).
- Generating images with [Stable Diffusion](https://github.com/ml-explore/mlx-examples/tree/main/stable_diffusion).
- Speech recognition with [OpenAI's Whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper).

## Quickstart

See the [quick start
guide](https://ml-explore.github.io/mlx/build/html/usage/quick_start.html)
in the documentation.

## Installation

MLX is available on [PyPI](https://pypi.org/project/mlx/). To install MLX on
macOS, run:

```bash
pip install mlx
```

To install the CUDA backend on Linux, run:

```bash
pip install "mlx[cuda]"
```

To install a CPU-only Linux package, run:

```bash
pip install "mlx[cpu]"
```

Check out the
[documentation](https://ml-explore.github.io/mlx/build/html/install.html)
for more information on building the C++ and Python APIs from source.

## Contributing

Check out the [contribution guidelines](https://github.com/ml-explore/mlx/tree/main/CONTRIBUTING.md) for more information
on contributing to MLX. See the
[docs](https://ml-explore.github.io/mlx/build/html/install.html) for more
information on building from source and running tests.

We are grateful for all of [our
contributors](https://github.com/ml-explore/mlx/tree/main/ACKNOWLEDGMENTS.md#Individual-Contributors). If you contribute
to MLX and wish to be acknowledged, please add your name to the list in your
pull request.

## Citing MLX

The MLX software suite was initially developed with equal contribution by Awni
Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. If you find
MLX useful in your research and wish to cite it, please use the following
BibTeX entry:

```bibtex
@software{mlx2023,
  author = {Awni Hannun and Jagrit Digani and Angelos Katharopoulos and Ronan Collobert},
  title = {{MLX}: Efficient and flexible machine learning on Apple silicon},
  url = {https://github.com/ml-explore},
  version = {0.0},
  year = {2023},
}
```