---
license: apache-2.0
---

# **Introduction**

**`XY-Tokenizer`** is a speech codec that models both the semantic and acoustic aspects of speech, converting audio into discrete tokens and decoding them back into high-quality audio. With RVQ8 quantization at a 12.5 Hz frame rate, it represents speech at only 1 kbps.

- **Paper:** [Read on arXiv](https://arxiv.org/abs/2506.23325)
- **Source Code:**
  - [GitHub Repo](https://github.com/OpenMOSS/MOSS-TTSD/tree/main/XY_Tokenizer)
  - [Hugging Face Repo](https://huggingface.co/spaces/fnlp/MOSS-TTSD/tree/main/XY_Tokenizer)

## 📚 Related Project: **[MOSS-TTSD](https://huggingface.co/fnlp/MOSS-TTSD-v0.5)**

**`XY-Tokenizer`** serves as the underlying neural codec for **`MOSS-TTSD`**, our 1.7B audio language model. \
Explore **`MOSS-TTSD`** for advanced text-to-speech and other audio generation tasks on [GitHub](https://github.com/OpenMOSS/MOSS-TTSD), the [Blog](http://www.open-moss.com/en/moss-ttsd/) ([Chinese version](https://www.open-moss.com/cn/moss-ttsd/)), and the [Space Demo](https://huggingface.co/spaces/fnlp/MOSS-TTSD).

## ✨ Features

- **Dual-channel modeling**: simultaneously captures semantic meaning and acoustic detail
- **Efficient representation**: 1 kbps bitrate with RVQ8 quantization at a 12.5 Hz frame rate
- **High-quality audio tokenization**: converts speech to discrete tokens and back with minimal quality loss
- **Long-audio support**: processes audio longer than 30 seconds via overlapping chunks
- **Batch processing**: efficiently encodes and decodes multiple audio files per batch
- **24 kHz output**: reconstructs high-quality 24 kHz audio

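The 1 kbps figure follows directly from the quantizer configuration: 8 residual codebooks per 12.5 Hz frame. The codebook size below is an assumption (1024 entries, i.e. 10 bits per index) chosen to be consistent with the stated bitrate; check the paper for the exact value.

```python
import math

# Assumed configuration: codebook_size is an assumption consistent
# with the stated 1 kbps figure, not taken from the paper.
num_quantizers = 8     # RVQ8: 8 residual codebooks per frame
codebook_size = 1024   # 1024 entries -> 10 bits per code index
frame_rate_hz = 12.5   # frames per second

bits_per_frame = num_quantizers * math.log2(codebook_size)  # 8 * 10 = 80 bits
bitrate_bps = bits_per_frame * frame_rate_hz                # 80 * 12.5 = 1000 bps
print(f"{bitrate_bps:.0f} bps")  # -> 1000 bps = 1 kbps
```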
## 💻 Quick Start

Here's how to use **`XY-Tokenizer`** with `transformers` to encode an audio file into discrete tokens and decode it back into a waveform.

```python
import torch
import torchaudio
from transformers import AutoFeatureExtractor, AutoModel

# 1. Load the feature extractor and the codec model
model_id = "OpenMOSS-Team/XY_Tokenizer_TTSD_V0_hf"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id, trust_remote_code=True)
codec = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval().to("cuda")

# 2. Load and preprocess the audio
# The model expects a 16 kHz sample rate.
waveform, sampling_rate = torchaudio.load("examples/m1.wav")
if sampling_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sampling_rate, new_freq=16000)

# 3. Encode the audio into discrete codes
# Move the features to the same device as the model.
input_features = feature_extractor(
    waveform, sampling_rate=16000, return_attention_mask=True, return_tensors="pt"
).to("cuda")
with torch.no_grad():
    # The returned dictionary contains the discrete audio codes
    code = codec.encode(input_features)

    # 4. Decode the codes back to an audio waveform
    # Long inputs are handled via overlapping chunks; the output is 24 kHz audio.
    output_wav = codec.decode(code["audio_codes"], overlap_seconds=10)

# 5. Save the reconstructed audio
for i, audio in enumerate(output_wav["audio_values"]):
    torchaudio.save(f"audio_{i}.wav", audio.cpu(), 24000)
```
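The `overlap_seconds=10` argument above reflects how long audio is supported: the signal is processed in fixed-length chunks that overlap, and the decoded chunks are blended back together. The actual logic lives inside the model's `decode`; the NumPy sketch below is illustrative only, showing the general split-then-crossfade idea on a raw signal.

```python
import numpy as np

def chunk_with_overlap(x, chunk_len, overlap_len):
    """Split a 1-D signal into windows of chunk_len samples,
    with consecutive windows overlapping by overlap_len samples."""
    hop = chunk_len - overlap_len
    return [x[start:start + chunk_len]
            for start in range(0, max(len(x) - overlap_len, 1), hop)]

def stitch_with_crossfade(chunks, overlap_len):
    """Reassemble chunks, linearly cross-fading each overlap region."""
    out = np.array(chunks[0], dtype=np.float64)
    fade_in = np.linspace(0.0, 1.0, overlap_len)
    for c in chunks[1:]:
        c = np.asarray(c, dtype=np.float64)
        out[-overlap_len:] = out[-overlap_len:] * (1.0 - fade_in) + c[:overlap_len] * fade_in
        out = np.concatenate([out, c[overlap_len:]])
    return out

signal = np.arange(100.0)                    # stand-in for a long waveform
chunks = chunk_with_overlap(signal, 30, 10)  # 30-sample chunks, 10-sample overlap
restored = stitch_with_crossfade(chunks, 10)
assert np.allclose(restored, signal)         # exact slices blend back losslessly
```

In the real codec the chunks are decoded (not copied verbatim) before blending, so the overlap smooths over small boundary artifacts between independently decoded segments.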