ABDALLALSWAITI/vibevoice-arabic-Z

Tags: Text-to-Speech · VibeVoice · Safetensors · Arabic · English · tts · audio · lora · arabic
Instructions to use ABDALLALSWAITI/vibevoice-arabic-Z with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • VibeVoice

    How to use ABDALLALSWAITI/vibevoice-arabic-Z with VibeVoice:

    import torch, soundfile as sf, librosa, numpy as np
    from vibevoice.processor.vibevoice_processor import VibeVoiceProcessor
    from vibevoice.modular.modeling_vibevoice_inference import VibeVoiceForConditionalGenerationInference
    
    # Load voice sample (should be 24kHz mono)
    voice, sr = sf.read("path/to/voice_sample.wav")
    if voice.ndim > 1: voice = voice.mean(axis=1)
    if sr != 24000: voice = librosa.resample(voice, orig_sr=sr, target_sr=24000)
    
    processor = VibeVoiceProcessor.from_pretrained("ABDALLALSWAITI/vibevoice-arabic-Z")
    model = VibeVoiceForConditionalGenerationInference.from_pretrained(
        "ABDALLALSWAITI/vibevoice-arabic-Z", torch_dtype=torch.bfloat16
    ).to("cuda").eval()
    model.set_ddpm_inference_steps(5)
    
    inputs = processor(text=["Speaker 0: Hello!\nSpeaker 1: Hi there!"],
                       voice_samples=[[voice]], return_tensors="pt")
    audio = model.generate(**inputs, cfg_scale=1.3,
                           tokenizer=processor.tokenizer).speech_outputs[0]
    sf.write("output.wav", audio.cpu().numpy().squeeze(), 24000)
  • Notebooks
  • Google Colab
  • Kaggle

Community discussions:
  • lora_wrap_diffusion_head problem (#2, opened 6 months ago by PakdamanAli)
  • How good is the arabic language using only LORA training? (#1, opened 7 months ago by Viewegger)