
NeuTTS Nano


🚀 Spaces Demo, 🔧 GitHub

Q8 GGUF version, Q4 GGUF version

Created by Neuphonic - building faster, smaller, on-device voice AI

State-of-the-art voice AI has been locked behind web APIs for too long. NeuTTS Nano is a super-fast, highly realistic, on-device TTS speech language model with instant voice cloning, built to run smoothly on CPUs and edge devices. With a compact backbone and an efficient LM + codec design, Nano delivers strong naturalness and cloning quality at a fraction of the compute, making it ideal for embedded voice agents, assistants, toys, and privacy-sensitive applications.

Key Features

  • ⚡️ Ultra-fast on-device — built for real-time or better-than-real-time generation on laptop-class CPUs
  • 🗣 High realism for its size — natural, expressive speech in a compact footprint
  • 👫 Instant voice cloning — create a new speaker from just a few seconds of audio
  • 📦 GGUF/GGML-friendly deployment — easy to run locally via CPU-first tooling
  • 🔒 Local-first + compliance-friendly — keep audio and text on-device

Websites like neutts.com are popping up, and they are not affiliated with Neuphonic, our GitHub, or this repo.

We are on neuphonic.com only. Please be careful out there! 🙏

Model Details

NeuTTS Nano is designed for maximum speed per parameter while retaining strong speaker similarity and naturalness:

  • Backbone: compact LM backbone tuned for TTS token generation (Nano class)
  • Audio Codec: NeuCodec - our open-source neural audio codec that achieves exceptional audio quality at low bitrates using a single codebook
  • Format: Available in GGUF/GGML-friendly formats for efficient on-device inference
  • Responsibility: Watermarked outputs
  • Inference Speed: Optimised for real-time generation on CPUs
  • Power Consumption: Designed for mobile and embedded devices

Parameter Count (Nano)

  • Active params (backbone only): ~116.8M
  • Total params (backbone + tied embeddings/head): ~228.7M
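As a back-of-envelope sketch of why "active" and "total" differ: with weight tying, the input embedding table and the output LM head share a single matrix, so the tied copy is counted once in the total. The function and sizes below are a hypothetical illustration, not the actual NeuTTS Nano configuration:

```python
# Hypothetical illustration of how tied embeddings are counted.
# The sizes used below are toy values, not the real NeuTTS Nano config.
def total_params(backbone_params: int, vocab_size: int,
                 hidden_size: int, tied: bool = True) -> int:
    embedding = vocab_size * hidden_size            # input embedding table
    head = 0 if tied else vocab_size * hidden_size  # LM head (shared when tied)
    return backbone_params + embedding + head

print(total_params(100_000_000, 50_000, 1_000))              # tied: 150000000
print(total_params(100_000_000, 50_000, 1_000, tied=False))  # untied: 200000000
```

With tying, the head adds no parameters beyond the embedding table, which is why the gap between active and total params is roughly one vocab-by-hidden matrix.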

Get Started

  1. Clone the Git Repo

    git clone https://github.com/neuphonic/neutts.git
    cd neutts
    
  2. Install espeak (required dependency)

    Please refer to the following link for instructions on how to install espeak:

    https://github.com/espeak-ng/espeak-ng/blob/master/docs/guide.md

    # macOS
    brew install espeak-ng
    
    # Ubuntu/Debian
    sudo apt install espeak-ng
    
    # Arch Linux
    paru -S extra/espeak-ng
    
  3. Install Python dependencies

    The requirements file includes the dependencies needed to run the model with PyTorch. When using an ONNX decoder or a GGML model, some dependencies (such as PyTorch) are no longer required.

    Inference is compatible with and tested on Python >= 3.11.

    pip install -r requirements.txt
    

Basic Example

Run the basic example script to synthesize speech:

python -m examples.basic_example \
  --input_text "My name is Andy. I'm 25 and I just moved to London. The underground is pretty confusing, but it gets me around in no time at all." \
  --ref_audio samples/jo.wav \
  --ref_text samples/jo.txt

To specify a particular model repo for the backbone or codec, add the --backbone argument. Available backbones are listed in the NeuTTS Nano Hugging Face collection (see repo page for up-to-date options).

Simple Python Usage

from neutts import NeuTTS
import soundfile as sf

tts = NeuTTS(
    backbone_repo="neuphonic/neutts-nano",
    backbone_device="cpu",
    codec_repo="neuphonic/neucodec",
    codec_device="cpu",
)

input_text = "My name is Andy. I'm 25 and I just moved to London. The underground is pretty confusing, but it gets me around in no time at all."

ref_text_path = "samples/jo.txt"
ref_audio_path = "samples/jo.wav"

with open(ref_text_path, "r") as f:
    ref_text = f.read().strip()
ref_codes = tts.encode_reference(ref_audio_path)

wav = tts.infer(input_text, ref_codes, ref_text)
sf.write("test.wav", wav, 24000)

Tips

NeuTTS Nano requires two inputs:

  1. A reference audio sample (.wav file)
  2. A text string

The model then synthesises the text as speech in the style of the reference audio. This is what enables NeuTTS Nano's instant voice cloning capability.

Example Reference Files

You can find some ready-to-use samples in the examples folder:

  • samples/dave.wav
  • samples/jo.wav

Guidelines for Best Results

For optimal performance, reference audio samples should be:

  • Mono channel
  • 16-44 kHz sample rate
  • 3-15 seconds in length
  • Saved as a .wav file
  • Clean — minimal to no background noise
  • Natural, continuous speech — like a monologue or conversation, with few pauses, so the model can capture tone effectively
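The mechanical checks above (channels, sample rate, duration) can be automated before synthesis. The helper below is a hypothetical sketch, not part of the NeuTTS API; it validates a reference file against the guidelines using only the Python standard library (44 kHz is taken as 44,100 Hz):

```python
# Hypothetical helper - not part of the NeuTTS API.
# Checks a reference .wav file against the guidelines above.
import wave

def validate_reference(path: str) -> list[str]:
    """Return a list of guideline violations for a reference .wav file."""
    issues = []
    with wave.open(path, "rb") as wf:
        channels = wf.getnchannels()
        rate = wf.getframerate()
        duration = wf.getnframes() / rate
    if channels != 1:
        issues.append(f"expected mono, got {channels} channels")
    if not 16_000 <= rate <= 44_100:
        issues.append(f"sample rate {rate} Hz outside 16-44 kHz")
    if not 3.0 <= duration <= 15.0:
        issues.append(f"duration {duration:.1f}s outside 3-15 s")
    return issues
```

For a sample that meets the guidelines, such as the bundled references, `validate_reference("samples/jo.wav")` should return an empty list; background noise and speech continuity still need a listen.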