---
language:
- en
license: mit
library_name: transformers
datasets:
- fixie-ai/librispeech_asr
- fixie-ai/common_voice_17_0
pipeline_tag: audio-text-to-text
---

# Model Card for Ultravox
Ultravox is a multimodal Speech LLM built around a pretrained [Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Whisper-small](https://huggingface.co/openai/whisper-small) backbone.

See https://ultravox.ai for the GitHub repo and more information.
## Model Details

### Model Description

Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and a voice user message).
The input to the model is given as a text prompt with a special `<|audio|>` pseudo-token, and the model processor replaces this magic token with embeddings derived from the input audio.
Using the merged embeddings as input, the model then generates output text as usual.
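
For instance, a user turn can carry the placeholder inside its text. The snippet below is a hypothetical illustration of the mechanism only; the runnable pipeline example in the Usage section inserts the placeholder for you:

```python
# Hypothetical illustration only: the user turn's text contains the special
# <|audio|> pseudo-token, which the Ultravox processor replaces with
# embeddings computed from the input audio before the LLM generates text.
turns = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Please summarize this recording: <|audio|>"},
]
```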
In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output.
No preference tuning has been applied to this revision of the model.

- **Developed by:** Fixie.ai
- **License:** MIT
### Model Sources

- **Repository:** https://ultravox.ai
- **Demo:** See repo
## Usage

Think of the model as an LLM that can also hear and understand speech. As such, it can be used as a voice agent, as well as for speech-to-speech translation, analysis of spoken audio, and more.
To use the model, try the following:
```python
# pip install transformers peft librosa

import transformers
import numpy as np
import librosa

# trust_remote_code is required because Ultravox ships custom model and
# processor code.
pipe = transformers.pipeline(model='fixie-ai/ultravox-v0_3', trust_remote_code=True)

path = "<path-to-input-audio>"  # TODO: pass the audio here
audio, sr = librosa.load(path, sr=16000)  # resample to 16 kHz, the rate the Whisper encoder expects

turns = [
    {
        "role": "system",
        "content": "You are a friendly and helpful character. You love to answer questions for people."
    },
]
pipe({'audio': audio, 'turns': turns, 'sampling_rate': sr}, max_new_tokens=30)
```
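
As a sketch of the speech translation use case mentioned above, the same pipeline can be steered purely through the system prompt. The prompt wording here is illustrative, not an official example:

```python
# Reuses `pipe`, `audio`, and `sr` from the snippet above; the system
# prompt below is a hypothetical example of steering the model to a task.
translation_turns = [
    {
        "role": "system",
        "content": "You are a translator. Translate the user's spoken English into German.",
    },
]
pipe({'audio': audio, 'turns': translation_turns, 'sampling_rate': sr}, max_new_tokens=60)
```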
## Training Details

The model uses a pre-trained [Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) backbone as well as the encoder part of [Whisper-small](https://huggingface.co/openai/whisper-small).
Only the multimodal adapter is trained, while the Whisper encoder and Llama are kept frozen.
We use a knowledge-distillation loss in which Ultravox tries to match the logits of the text-based Llama backbone.
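
A minimal sketch of such an objective, assuming a standard temperature-scaled KL-divergence formulation (the exact loss in the Ultravox training code may differ):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    # KL divergence between the speech model's next-token distribution
    # (student) and the text-based Llama backbone's distribution (teacher),
    # computed over the vocabulary dimension.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```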
### Training Data

The training dataset is a mix of ASR datasets, extended by adding a "continuation" generated by Llama 3.1 8B.
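
As a rough sketch of how such continuations might be produced (the actual prompting and generation settings live in the Ultravox repo; the prompt below is hypothetical):

```python
import transformers

# Hypothetical data-extension step: ask the text-only Llama backbone to
# continue an ASR transcript. The real pipeline's prompt and decoding
# parameters may differ.
generator = transformers.pipeline(
    "text-generation", model="meta-llama/Meta-Llama-3.1-8B-Instruct"
)

prompt = ("Continue this passage naturally:\n\n"
          "I was walking to the station when I realized")
out = generator(prompt, max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```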
### Training Procedure

Supervised speech-to-text finetuning. For more info, see the [training code in the Ultravox repo](https://github.com/fixie-ai/ultravox/blob/main/ultravox/training/train.py).
#### Training Hyperparameters

- **Training regime:** BF16 mixed precision training
- **Hardware used:** 8x H100 GPUs
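
For illustration, BF16 mixed precision can be enabled in a standard Hugging Face `Trainer` configuration like this (a generic sketch, not Ultravox's actual setup; see the training code linked above):

```python
from transformers import TrainingArguments

# Generic illustration of BF16 mixed precision with the HF Trainer;
# the values below are placeholders, not Ultravox's real hyperparameters.
args = TrainingArguments(
    output_dir="ultravox-adapter",
    bf16=True,                       # BF16 mixed precision
    per_device_train_batch_size=8,   # placeholder value
)
```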
#### Speeds, Sizes, Times

The current version of Ultravox, when invoked with audio content, has a time-to-first-token (TTFT) of approximately 200 ms and a decode rate of ~50-100 tokens per second, measured on an A100-40GB GPU with the Llama 3.1 8B backbone.
Check out the audio tab on [TheFastest.ai](https://thefastest.ai/?m=audio) for daily benchmarks and a comparison with other existing models.
## Evaluation

|                      | en_de (BLEU) | es_en (BLEU) | LibriSpeech clean.test (WER) |
|:---------------------|:-------------|:-------------|:-----------------------------|
| Ultravox v0.2        | 12.07        | 15.17        | 6.07                         |
| **Ultravox v0.3**    | 22.68        | 24.10        | 6.67                         |
| Whisper-Llama3.1     | 24.89        | 28.67        | 3.4                          |
| Llama3.1 (text-only) | 31.95        | 38.28        | -                            |