Instructions to use mlx-community/Soprano-80M-bf16 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- MLX
How to use mlx-community/Soprano-80M-bf16 with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir Soprano-80M-bf16 mlx-community/Soprano-80M-bf16
```
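The same download can also be scripted from Python with `huggingface_hub.snapshot_download`, which fetches the full repository to a local directory. This is a minimal sketch; the local directory name simply mirrors the CLI example above and is otherwise arbitrary.

```python
from huggingface_hub import snapshot_download

# Fetch every file in the model repo from the Hugging Face Hub.
# local_dir mirrors the --local-dir flag in the CLI example; any path works.
path = snapshot_download(
    repo_id="mlx-community/Soprano-80M-bf16",
    local_dir="Soprano-80M-bf16",
)
print(path)  # local directory containing the downloaded weights and config
```

`snapshot_download` is idempotent: files already present in the local cache are reused rather than re-downloaded, so it is safe to call at the top of a script.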
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
Community discussions:
- #3 Fix README metadata (opened 3 months ago by solarpunkin)
- #2 Update MLX weights and patched config (opened 3 months ago by solarpunkin)
- #1 Update to 1.1? (opened 4 months ago by dylanbay11)