How to use gafiatulin/vibevoice-semantic-encoder-mlpackage with VibeVoice:

```python
import torch
import soundfile as sf
import librosa

from vibevoice.processor.vibevoice_processor import VibeVoiceProcessor
from vibevoice.modular.modeling_vibevoice_inference import VibeVoiceForConditionalGenerationInference

# Load voice sample (should be 24kHz mono)
voice, sr = sf.read("path/to/voice_sample.wav")
if voice.ndim > 1:
    voice = voice.mean(axis=1)
if sr != 24000:
    voice = librosa.resample(voice, orig_sr=sr, target_sr=24000)

processor = VibeVoiceProcessor.from_pretrained("gafiatulin/vibevoice-semantic-encoder-mlpackage")
model = VibeVoiceForConditionalGenerationInference.from_pretrained(
    "gafiatulin/vibevoice-semantic-encoder-mlpackage", torch_dtype=torch.bfloat16
).to("cuda").eval()
model.set_ddpm_inference_steps(5)

inputs = processor(
    text=["Speaker 0: Hello!\nSpeaker 1: Hi there!"],
    voice_samples=[[voice]],
    return_tensors="pt",
)
audio = model.generate(**inputs, cfg_scale=1.3, tokenizer=processor.tokenizer).speech_outputs[0]
sf.write("output.wav", audio.cpu().numpy().squeeze(), 24000)
```
VibeVoice Semantic Encoder (CoreML)
Streaming semantic encoder for VibeVoice TTS, exported as a stateful CoreML MLPackage.
Shared between 1.5B and 7B models (identical encoder weights, 128-dim output).
Usage
Auto-downloaded by vibevoice-mlx when CoreML is available:
```bash
pip install mlx coremltools soundfile transformers huggingface_hub safetensors
git clone https://github.com/gafiatulin/vibevoice-mlx && cd vibevoice-mlx

# CoreML semantic encoder is auto-downloaded on first use
python run/e2e_pipeline.py --model microsoft/VibeVoice-1.5B --text "Hello!" --output hello.wav
```
Without CoreML (Linux, or no coremltools), the pipeline falls back to a pure MLX semantic encoder.
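The backend choice can be sketched as a simple capability check. This is an illustrative sketch, not the actual vibevoice-mlx code; the function name and exact conditions are assumptions:

```python
import importlib.util
import platform


def coreml_available() -> bool:
    """Hypothetical sketch: prefer the CoreML encoder only on macOS
    with coremltools importable; otherwise fall back to pure MLX."""
    if platform.system() != "Darwin":
        return False
    return importlib.util.find_spec("coremltools") is not None


backend = "coreml" if coreml_available() else "mlx"
print(backend)
```

On Linux, or on macOS without coremltools installed, this selects the pure-MLX path.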
Architecture
- Type: Causal σ-VAE encoder with streaming conv caches
- Input: 3200 audio samples (one speech frame at 24kHz)
- Output: 128-dim semantic features
- State: 34 conv cache buffers (ct.StateType, requires iOS 18+)
- Compute units: CPU_AND_GPU (ANE not supported for stateful models)
- Size: 657 MB (fp16 weights)
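Because the encoder consumes fixed 3200-sample frames at 24 kHz (about 133 ms of audio each) and carries its conv-cache state across calls, a streaming client feeds audio frame by frame. A minimal framing sketch (the function name is illustrative, not the actual API):

```python
import numpy as np

FRAME = 3200   # samples per encoder call
SR = 24000     # expected input sample rate (24 kHz)


def frames(audio: np.ndarray) -> np.ndarray:
    """Split audio into successive 3200-sample frames, zero-padding the tail."""
    n = int(np.ceil(len(audio) / FRAME))
    padded = np.zeros(n * FRAME, dtype=audio.dtype)
    padded[: len(audio)] = audio
    return padded.reshape(n, FRAME)


audio = np.zeros(SR * 2, dtype=np.float32)  # 2 s of dummy audio
batch = frames(audio)
print(batch.shape)  # (15, 3200): 48000 / 3200 = 15 frames
```

Each row of `batch` would then be passed through the encoder in order, so the stateful conv caches see a contiguous stream.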
Performance
| Backend | Latency (per frame) | Pipeline RTF (1.5B INT8) |
|---|---|---|
| CoreML | 4.8 ms | 3.1x |
| Pure MLX | 11.5 ms | 2.6x |
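As a sanity check on the table: one 3200-sample frame covers 3200 / 24000 ≈ 133 ms of audio, so the encoder alone runs far faster than real time, and the pipeline RTF column is dominated by the rest of the 1.5B model. The arithmetic:

```python
FRAME_MS = 3200 / 24000 * 1000   # ≈ 133.3 ms of audio per frame

# Encoder-only real-time factor = audio duration per frame / compute time per frame
coreml_rtf = FRAME_MS / 4.8      # CoreML backend
mlx_rtf = FRAME_MS / 11.5        # pure MLX fallback
print(round(coreml_rtf, 1), round(mlx_rtf, 1))  # 27.8 11.6
```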
Source
Built from microsoft/VibeVoice-1.5B using vibevoice-coreml conversion scripts.