
🦜 VieNeu-TTS-0.3B-Q8-GGUF (High-Quality CPU)


VieNeu-TTS-0.3B-Q8-GGUF is a Q8_0 quantized version of VieNeu-TTS-0.3B. It offers a strong balance between CPU performance and voice quality, retaining nearly all of the precision of the original PyTorch checkpoint.

Author: Phạm Nguyễn Ngọc Bảo

☕ Support This Project

Training high-quality TTS models requires significant GPU resources. If you find this model useful, please consider supporting the development:

Buy Me a Coffee


🛠️ Requirements (eSpeak NG)

eSpeak NG is mandatory for phonemization.

  • Windows: Download .msi from eSpeak NG Releases.
  • macOS: brew install espeak
  • Linux: sudo apt install espeak-ng
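After installing, a quick way to confirm an eSpeak binary is visible on your PATH is a stdlib-only lookup. This is a minimal sketch; the phonemizer itself may resolve the binary differently on your platform:

```python
import shutil

def espeak_available() -> bool:
    """Return True if an eSpeak binary is on PATH under either common name."""
    return any(shutil.which(exe) is not None for exe in ("espeak-ng", "espeak"))

print("eSpeak found:", espeak_available())
```

If this prints `False`, revisit the installation step for your OS before running the model.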

🚀 How to Use

Use the source code from GitHub for the best experience with full text preprocessing support:

```shell
# 1. Clone repository
git clone https://github.com/pnnbao97/VieNeu-TTS.git
cd VieNeu-TTS

# 2. Sync environment (requires uv)
uv sync

# 3. Launch Web UI
uv run gradio_app.py
```

In the UI, select Backbone: VieNeu-TTS-0.3B-q8-gguf and Device: CPU.


📦 Using Python SDK (vieneu)

Install the SDK to integrate VieNeu-TTS-0.3B into your research or applications:

```shell
# Windows (avoids llama-cpp-python build errors)
pip install vieneu --extra-index-url https://pnnbao97.github.io/llama-cpp-python-v0.3.16/cpu/

# Linux / macOS
pip install vieneu
```
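To verify the package installed correctly before running the examples, a small stdlib-only check (it assumes only that the package is importable as `vieneu`, and does not trigger any model downloads):

```python
import importlib.util

# Look up the 'vieneu' package without importing it.
spec = importlib.util.find_spec("vieneu")
print("vieneu installed:", spec is not None)
```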

Full Features Guide

```python
from vieneu import Vieneu
import os

# Initialization
tts = Vieneu()  # Default: 0.3B-Q4 GGUF for CPU
os.makedirs("outputs", exist_ok=True)

# 1. List preset voices
available_voices = tts.list_preset_voices()
for desc, name in available_voices:
    print(f"   - {desc} (ID: {name})")

# 2. Use a specific voice (dynamically select the second voice if present)
if available_voices:
    _, my_voice_id = available_voices[1] if len(available_voices) > 1 else available_voices[0]
    voice_data = tts.get_preset_voice(my_voice_id)
    audio_spec = tts.infer(text="Chào bạn, tôi đang nói bằng giọng của bác sĩ Tuyên.", voice=voice_data)
    tts.save(audio_spec, f"outputs/standard_{my_voice_id}.wav")
    print(f"💾 Saved synthesis to: outputs/standard_{my_voice_id}.wav")

# 3. Standard synthesis (uses the default voice)
text = "Xin chào, tôi là VieNeu. Tôi có thể giúp bạn đọc sách, làm chatbot thời gian thực, hoặc thậm chí clone giọng nói của bạn."
audio = tts.infer(text=text)
tts.save(audio, "outputs/standard_output.wav")
print("💾 Saved synthesis to: outputs/standard_output.wav")

# 4. Zero-shot voice cloning
if os.path.exists("examples/audio_ref/example_ngoc_huyen.wav"):
    cloned_audio = tts.infer(
        text="Đây là giọng nói đã được clone thành công từ file mẫu.",
        ref_audio="examples/audio_ref/example_ngoc_huyen.wav",
        ref_text="Tác phẩm dự thi bảo đảm tính khoa học, tính đảng, tính chiến đấu, tính định hướng."
    )
    tts.save(cloned_audio, "outputs/standard_cloned_output.wav")
    print("💾 Saved cloned voice to: outputs/standard_cloned_output.wav")

# 5. Cleanup
tts.close()
```
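Note that `tts.close()` is skipped if an earlier call raises; wrapping the engine in `contextlib.closing` guarantees cleanup on any exit path. The sketch below uses a hypothetical stand-in class with the same `infer()`/`close()` surface, so it runs without loading the model:

```python
from contextlib import closing

class FakeTTS:
    """Hypothetical stand-in mimicking the infer()/close() surface of Vieneu."""
    def __init__(self):
        self.closed = False
    def infer(self, text):
        return f"<audio for: {text}>"
    def close(self):
        self.closed = True

# closing() calls .close() on exit, even if the body raises.
with closing(FakeTTS()) as tts:
    audio = tts.infer("Xin chào")

print(tts.closed)  # True: the engine was released
```

The same pattern should work with the real `Vieneu()` object, since `closing` only requires a `close()` method.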

📊 Technical Specifications

  • Format: GGUF (Q8_0)
  • Size: ~300 MB
  • Architecture: Qwen3 backbone, 0.3B parameters
  • Quality: ⭐⭐⭐⭐ (near original PyTorch quality)
  • Performance: real-time synthesis on modern consumer CPUs

⚠️ Licensing & Copyright

This model is released under the CC BY-NC 4.0 license.

  • Free: For students, researchers, and non-profit purposes.
  • ⚠️ Commercial/Enterprise: Use for businesses or commercial products is strictly prohibited without prior authorization.
  • Commercial Licensing: Please contact the author (Phạm Nguyễn Ngọc Bảo) for licensing terms (Estimated: 5,000 USD/year - negotiable).

📑 Citation

@misc{vieneutts03bggufq82026,
  title        = {VieNeu-TTS-0.3B-Q8-GGUF: High-Quality CPU-Optimized Vietnamese Text-to-Speech},
  author       = {Pham Nguyen Ngoc Bao},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/pnnbao-ump/VieNeu-TTS-0.3B-q8-gguf}}
}

Made with ❤️ for the Vietnamese TTS community
