arxiv:2604.00688

OmniVoice: Towards Omnilingual Zero-Shot Text-to-Speech with Diffusion Language Models

Published on Apr 1
AI-generated summary

OmniVoice is a multilingual zero-shot text-to-speech model that uses a novel diffusion language model-style discrete non-autoregressive architecture to directly map text to acoustic tokens with improved efficiency and performance.

Abstract

We present OmniVoice, a massive multilingual zero-shot text-to-speech (TTS) model that scales to over 600 languages. At its core is a novel diffusion language model-style discrete non-autoregressive (NAR) architecture. Unlike conventional discrete NAR models that suffer from performance bottlenecks in complex two-stage (text-to-semantic-to-acoustic) pipelines, OmniVoice directly maps text to multi-codebook acoustic tokens. This simplified approach is facilitated by two key technical innovations: (1) a full-codebook random masking strategy for efficient training, and (2) initialization from a pre-trained LLM to ensure superior intelligibility. By leveraging a 581k-hour multilingual dataset curated entirely from open-source data, OmniVoice achieves the broadest language coverage to date and delivers state-of-the-art performance across Chinese, English, and diverse multilingual benchmarks. Our code and pre-trained models are publicly available at https://github.com/k2-fsa/OmniVoice.
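The full-codebook random masking strategy mentioned above can be sketched as follows: at each masked timestep, every codebook's token is replaced, so the model must predict complete acoustic frames rather than single codebook streams. This is a minimal illustrative sketch only; the function name, the reserved `MASK_ID`, and the per-step mask-ratio sampling are assumptions, not the paper's exact recipe.

```python
import numpy as np

MASK_ID = 0  # hypothetical reserved mask token id (assumption)

def full_codebook_random_mask(tokens, mask_ratio, rng):
    """Mask a random subset of timesteps; at each masked timestep,
    ALL codebooks are replaced by MASK_ID ("full-codebook" masking).

    tokens: (T, C) int array of acoustic token ids, C codebooks.
    Returns (masked_tokens, mask) with mask a boolean (T,) array.
    """
    T, _ = tokens.shape
    n_mask = max(1, int(round(mask_ratio * T)))
    idx = rng.choice(T, size=n_mask, replace=False)
    mask = np.zeros(T, dtype=bool)
    mask[idx] = True
    masked = tokens.copy()
    masked[mask, :] = MASK_ID  # every codebook at the masked positions
    return masked, mask

# Toy training-step usage: 50 frames, 8 codebooks,
# with the mask ratio resampled each step.
rng = np.random.default_rng(0)
toks = rng.integers(1, 1024, size=(50, 8))
ratio = rng.uniform(0.1, 1.0)
masked, mask = full_codebook_random_mask(toks, ratio, rng)
```

Masking all codebooks at a position jointly (rather than masking each codebook stream independently) is one way a single NAR pass can be trained to fill in whole acoustic frames; the training loss would then be computed only over the masked positions.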

