# Convert to MLX
This model is saved in Hugging Face format.
To convert it to MLX format on Apple Silicon, run:
```bash
pip install mlx-lm
python -m mlx_lm.convert --hf-path ./hub_staging/functiongemma-270m-it-4bit-mlx --mlx-path ./mlx_model
```
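
After conversion, you can sanity-check the model from the command line with mlx-lm's bundled generation script (a sketch; the prompt and token count are illustrative, and this requires Apple Silicon with `mlx-lm` installed):

```bash
# Generate a short completion from the converted MLX model
python -m mlx_lm.generate \
  --model ./mlx_model \
  --prompt "Hello, how are you?" \
  --max-tokens 64
```

If this prints a coherent completion, the conversion succeeded and the model can be loaded with `mlx_lm.load("./mlx_model")` in Python as well.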