Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-to-speech", model="microsoft/VibeVoice-Realtime-0.5B")
```
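Transformers text-to-speech pipelines typically return a dict containing a float waveform and its sampling rate. The sketch below shows one way to write that output to a 16-bit PCM WAV file with the standard-library `wave` module; the `result` dict here is a synthetic stand-in (a 440 Hz sine wave), not actual model output, so the saving logic can be shown without downloading the model.

```python
import wave

import numpy as np

# Stand-in for a pipeline result such as `pipe("Hello world")`.
# Assumed shape: {"audio": float waveform in [-1, 1], "sampling_rate": int}.
result = {
    "audio": np.sin(np.linspace(0, 2 * np.pi * 440, 16000, dtype=np.float32)),
    "sampling_rate": 16000,
}

def save_wav(path, audio, sampling_rate):
    """Write a mono float waveform in [-1, 1] to a 16-bit PCM WAV file."""
    pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)       # mono
        f.setsampwidth(2)       # 16-bit samples
        f.setframerate(sampling_rate)
        f.writeframes(pcm.tobytes())

save_wav("speech.wav", result["audio"], result["sampling_rate"])
```

With a real pipeline, you would replace the synthetic `result` with the dict returned by `pipe(...)` and pass its fields to `save_wav` the same way.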

Load the model directly:

```python
from transformers import VibeVoiceStreamingForConditionalGenerationInference

model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
    "microsoft/VibeVoice-Realtime-0.5B", dtype="auto"
)
```
