NeuTTS Nano
Available in Q8 GGUF and Q4 GGUF versions
Created by Neuphonic - building faster, smaller, on-device voice AI
State-of-the-art Voice AI has been locked behind web APIs for too long. NeuTTS Nano is a super-fast, highly realistic, on-device TTS speech language model with instant voice cloning, built to run smoothly on CPUs and edge devices. With a compact backbone and an efficient LM + codec design, Nano delivers strong naturalness and cloning quality at a fraction of the compute, making it ideal for embedded voice agents, assistants, toys, and privacy-sensitive applications.
Key Features
- Ultra-fast for on-device: built for real-time or better-than-real-time generation on laptop-class CPUs
- High realism for its size: natural, expressive speech in a compact footprint
- Instant voice cloning: create a new speaker from just a few seconds of audio
- GGUF/GGML-friendly deployment: easy to run locally via CPU-first tooling
- Local-first and compliance-friendly: keep audio and text on-device
Websites like neutts.com are popping up; they are not affiliated with Neuphonic, our GitHub, or this repo.
We are on neuphonic.com only. Please be careful out there!
Model Details
NeuTTS Nano is designed for maximum speed per parameter while retaining strong speaker similarity and naturalness:
- Backbone: compact LM backbone tuned for TTS token generation (Nano class)
- Audio Codec: NeuCodec - our open-source neural audio codec that achieves exceptional audio quality at low bitrates using a single codebook
- Format: Available in GGUF/GGML-friendly formats for efficient on-device inference
- Responsibility: Watermarked outputs
- Inference Speed: Optimised for real-time generation on CPUs
- Power Consumption: Designed for mobile and embedded devices
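Since the model targets real-time generation on CPUs, a useful way to quantify speed is the real-time factor (RTF): wall-clock synthesis time divided by the duration of the audio produced. The helper below is an illustrative stdlib sketch, not part of the NeuTTS API; it works with any TTS callable.

```python
import time

def real_time_factor(synthesize, text, sample_rate=24000):
    """Illustrative helper: measure RTF for any TTS callable.

    `synthesize` takes a text string and returns a 1-D sequence of
    audio samples. An RTF below 1.0 means generation is faster than
    real time.
    """
    start = time.perf_counter()
    wav = synthesize(text)
    elapsed = time.perf_counter() - start
    audio_seconds = len(wav) / sample_rate
    return elapsed / audio_seconds
```

For example, passing a `NeuTTS.infer`-style callable (with the reference codes and text already bound) would report how much faster than real time a given machine runs the model.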
Parameter Count (Nano)
- Active params (backbone only): ~116.8M
- Total params (backbone + tied embeddings/head): ~228.7M
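These counts give a back-of-envelope estimate of the quantized weight sizes. The figures below are rough: real GGUF files also store metadata and per-block quantization scales, so actual file sizes will differ somewhat.

```python
# Rough weight-size estimate for the Q8/Q4 GGUF builds, using the
# total parameter count (backbone + tied embeddings/head) from above.
TOTAL_PARAMS = 228.7e6

def approx_weight_mb(params, bits_per_weight):
    """Bytes of raw weights in megabytes, ignoring GGUF overhead."""
    return params * bits_per_weight / 8 / 1e6

q8_mb = approx_weight_mb(TOTAL_PARAMS, 8)  # roughly 229 MB
q4_mb = approx_weight_mb(TOTAL_PARAMS, 4)  # roughly 114 MB
```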
Get Started
Clone the Git Repo
```shell
git clone https://github.com/neuphonic/neutts.git
cd neutts
```

Install espeak (required dependency)

Please refer to the following link for instructions on how to install espeak: https://github.com/espeak-ng/espeak-ng/blob/master/docs/guide.md

```shell
# Mac OS
brew install espeak-ng
# Ubuntu/Debian
sudo apt install espeak-ng
# Arch Linux
paru -S extra/espeak-ng
```

Install Python dependencies

The requirements file includes the dependencies needed to run the model with PyTorch. When using an ONNX decoder or a GGML model, some dependencies (such as PyTorch) are no longer required.

Inference is compatible with and tested on python>=3.11.

```shell
pip install -r requirements.txt
```
Basic Example
Run the basic example script to synthesize speech:
```shell
python -m examples.basic_example \
  --input_text "My name is Andy. I'm 25 and I just moved to London. The underground is pretty confusing, but it gets me around in no time at all." \
  --ref_audio samples/jo.wav \
  --ref_text samples/jo.txt
```
To specify a particular model repo for the backbone or codec, add the --backbone argument. Available backbones are listed in the NeuTTS Nano Hugging Face collection (see repo page for up-to-date options).
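As a sketch, a run that pins the backbone explicitly might look like the following; the repo id passed to --backbone here is an assumption, so check the collection page for the current options.

```shell
# Hypothetical invocation: the --backbone value assumes the default
# Nano repo id. Substitute any backbone from the collection.
python -m examples.basic_example \
  --input_text "Hello from NeuTTS Nano." \
  --ref_audio samples/jo.wav \
  --ref_text samples/jo.txt \
  --backbone neuphonic/neutts-nano
```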
Simple One-Code Block Usage
```python
from neutts import NeuTTS
import soundfile as sf

tts = NeuTTS(
    backbone_repo="neuphonic/neutts-nano",
    backbone_device="cpu",
    codec_repo="neuphonic/neucodec",
    codec_device="cpu",
)

input_text = "My name is Andy. I'm 25 and I just moved to London. The underground is pretty confusing, but it gets me around in no time at all."
ref_text_path = "samples/jo.txt"
ref_audio_path = "samples/jo.wav"

# Read the transcript of the reference clip.
with open(ref_text_path, "r") as f:
    ref_text = f.read().strip()

# Encode the reference audio once, then synthesize.
ref_codes = tts.encode_reference(ref_audio_path)
wav = tts.infer(input_text, ref_codes, ref_text)
sf.write("test.wav", wav, 24000)
```
Tips
NeuTTS Nano requires two inputs:
- A reference audio sample (a .wav file)
- A text string

The model then synthesises the text as speech in the style of the reference audio. This is what enables NeuTTS Nano's instant voice cloning capability.
Example Reference Files
You can find some ready-to-use samples in the examples folder:
- samples/dave.wav
- samples/jo.wav
Guidelines for Best Results
For optimal performance, reference audio samples should be:
- Mono channel
- 16-44 kHz sample rate
- 3β15 seconds in length
- Saved as a .wav file
- Clean: minimal to no background noise
- Natural, continuous speech (like a monologue or conversation) with few pauses, so the model can capture tone effectively
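The measurable guidelines above (mono, 16-44 kHz, 3-15 seconds, .wav) can be sanity-checked with the Python stdlib alone. The `check_reference` helper below is a hypothetical utility, not part of the NeuTTS API.

```python
import wave

def check_reference(path):
    """Return a list of guideline violations for a reference .wav file.

    Checks only the properties readable from the file header: channel
    count, sample rate, and duration. Noise level and speech style
    still need a human ear.
    """
    problems = []
    with wave.open(path, "rb") as w:
        channels = w.getnchannels()
        rate = w.getframerate()
        duration = w.getnframes() / rate
    if channels != 1:
        problems.append(f"expected mono, got {channels} channels")
    if not (16_000 <= rate <= 44_100):
        problems.append(f"sample rate {rate} Hz outside 16-44 kHz")
    if not (3.0 <= duration <= 15.0):
        problems.append(f"duration {duration:.1f} s outside 3-15 s")
    return problems
```

An empty list means the clip passes the header-level checks and is worth trying as a cloning reference.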