Instructions to use lightsofapollo/omnivoice-mlx-q4-g64 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- MLX
How to use lightsofapollo/omnivoice-mlx-q4-g64 with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir omnivoice-mlx-q4-g64 lightsofapollo/omnivoice-mlx-q4-g64
```
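The same download can be done from Python with the official `huggingface_hub` API; a minimal sketch (the `local_dir` mirrors the CLI example above):

```python
# Download a full snapshot of the repo programmatically.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="lightsofapollo/omnivoice-mlx-q4-g64",
    local_dir="omnivoice-mlx-q4-g64",
)
print(local_path)  # directory containing the downloaded files
```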
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Xet hash: `de958080b31f91278c16378b08dcbb8191321719065a32399b5e5ec663a790ec`
- Size of remote file: 11.4 MB
- SHA256: `408f669b7e2b045fdf54201d815bd364e6667dbd845115da81239c40bc6dcfd1`
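The published SHA-256 can be verified after download; a minimal stdlib sketch (the file path is an assumed example, substitute the actual downloaded file and the hash listed above):

```python
# Verify a downloaded file against a published SHA-256 checksum.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in streamed chunks so large files never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (hypothetical path): compare against the published digest.
# assert sha256_of("omnivoice-mlx-q4-g64/model.safetensors") == "408f66..."
```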
Xet efficiently stores large files inside Git, intelligently splitting files into unique chunks to accelerate uploads and downloads.
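The chunk-splitting idea can be illustrated with content-defined chunking via a rolling hash: a boundary is declared wherever the low bits of the hash are zero, so identical content yields identical chunks regardless of its position in the file, which is what enables deduplication. This is a toy sketch with illustrative parameters, not Xet's actual algorithm:

```python
# Toy content-defined chunking with a polynomial rolling hash.
from typing import List

WINDOW = 16          # number of trailing bytes the rolling hash covers
MASK = (1 << 6) - 1  # boundary when low 6 bits are zero: ~64-byte average chunks
BASE = 257
MOD = (1 << 31) - 1

def chunk(data: bytes) -> List[bytes]:
    chunks: List[bytes] = []
    start, h = 0, 0
    power = pow(BASE, WINDOW - 1, MOD)  # coefficient of the byte leaving the window
    for i, b in enumerate(data):
        if i >= WINDOW:
            h = (h - data[i - WINDOW] * power) % MOD  # drop the oldest byte
        h = (h * BASE + b) % MOD                      # add the newest byte
        if i >= WINDOW and (h & MASK) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])                   # trailing remainder
    return chunks
```

Because boundaries depend only on local content, editing one part of a file leaves most chunks unchanged, so only the new chunks need to be uploaded.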