
lightsofapollo/omnivoice-mlx-q8-g64

Tags: Text-to-Speech · MLX · Safetensors · omnivoice · tts · quantized · 8-bit precision

Instructions for using lightsofapollo/omnivoice-mlx-q8-g64 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • MLX

    How to use lightsofapollo/omnivoice-mlx-q8-g64 with MLX:

    # Install the Hub client (with Xet-backed download support);
    # the extras spec is quoted so shells like zsh don't expand the brackets
    pip install "huggingface_hub[hf_xet]"

    # Download the model from the Hub into ./omnivoice-mlx-q8-g64
    huggingface-cli download lightsofapollo/omnivoice-mlx-q8-g64 --local-dir omnivoice-mlx-q8-g64
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • LM Studio
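The CLI download above can also be done programmatically with the `huggingface_hub` Python package, which the pip step already installs. A minimal sketch (the `fetch_model` helper name is ours, not part of the repo):

```python
# Programmatic alternative to `huggingface-cli download`:
# snapshot_download fetches every file in the repo and returns the local path.
from huggingface_hub import snapshot_download

repo_id = "lightsofapollo/omnivoice-mlx-q8-g64"

def fetch_model(local_dir: str = "omnivoice-mlx-q8-g64") -> str:
    """Download the full model snapshot and return the directory it landed in."""
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)
```

Calling `fetch_model()` mirrors the CLI invocation, placing config.json, model.safetensors, and the rest of the repo under ./omnivoice-mlx-q8-g64.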
omnivoice-mlx-q8-g64 / audio_tokenizer (403 MB)
  • 1 contributor
History: 1 commit
lightsofapollo
OmniVoice MLX q8-g64: Qwen3 backbone quantized via mlx-lm; Higgs tokenizer bf16
b50cab9 verified 4 days ago
  • config.json (2.53 kB, 4 days ago)
  • model.safetensors (403 MB, 4 days ago)
  • preprocessor_config.json (206 Bytes, 4 days ago)