Improve model card: add paper link, GitHub repo, and sample usage
This PR improves the model card for AudioX. It now includes:
- A link to the [paper](https://huggingface.co/papers/2503.10522).
- A link to the official [GitHub repository](https://github.com/ZeyueT/AudioX) and [project page](https://zeyuet.github.io/AudioX/).
- A "Sample Usage" section based on the script inference provided in the GitHub README.
- Updated metadata including the arXiv ID and relevant tags.
README.md
CHANGED
```diff
@@ -1,6 +1,105 @@
 ---
-license: cc-by-nc-4.0
 base_model:
 - HKUSTAudio/AudioX
+license: cc-by-nc-4.0
 pipeline_tag: text-to-audio
----
+arxiv: 2503.10522
+tags:
+- audio-generation
+- music-generation
+---
```
# AudioX: A Unified Framework for Anything-to-Audio Generation

AudioX is a unified framework for anything-to-audio generation that integrates varied multimodal conditions (text, video, and audio signals). Its core design is a Multimodal Adaptive Fusion module, which fuses these diverse inputs effectively, strengthening cross-modal alignment and improving overall generation quality.
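To make the fusion idea concrete, here is a minimal, purely illustrative sketch — not the actual AudioX implementation; all class names, dimensions, and sequence lengths below are hypothetical. Each modality is projected into a shared embedding space, absent modalities can be passed as zero tensors, and the resulting token sequences are concatenated into one conditioning sequence.

```python
import torch
import torch.nn as nn

# Toy illustration only -- NOT the actual AudioX implementation.
# All names, dimensions, and sequence lengths here are hypothetical.
class ToyMultimodalFusion(nn.Module):
    """Project each modality into a shared space and concatenate the
    resulting token sequences into a single conditioning sequence."""
    def __init__(self, text_dim=768, video_dim=1024, audio_dim=64, d_model=1024):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)
        self.video_proj = nn.Linear(video_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)

    def forward(self, text_emb, video_emb, audio_emb):
        # Each input: (batch, seq_len_modality, modality_dim).
        # Absent modalities can be passed as zero tensors, mirroring the
        # zero-audio fallback used in the sample script below.
        tokens = [
            self.text_proj(text_emb),
            self.video_proj(video_emb),
            self.audio_proj(audio_emb),
        ]
        return torch.cat(tokens, dim=1)  # (batch, total_seq, d_model)

fusion = ToyMultimodalFusion()
cond = fusion(torch.randn(1, 77, 768), torch.randn(1, 240, 1024), torch.zeros(1, 215, 64))
print(cond.shape)  # torch.Size([1, 532, 1024])
```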

- **Paper:** [AudioX: Diffusion Transformer for Anything-to-Audio Generation](https://huggingface.co/papers/2503.10522)
- **Project Page:** [https://zeyuet.github.io/AudioX/](https://zeyuet.github.io/AudioX/)
- **Repository:** [https://github.com/ZeyueT/AudioX](https://github.com/ZeyueT/AudioX)
- **Demo:** [Hugging Face Space](https://huggingface.co/spaces/Zeyue7/AudioX)

## Sample Usage

To use this model programmatically, run the script below. Note that you first need to install the `audiox` package as described in the [official repository](https://github.com/ZeyueT/AudioX).
```python
import torch
import torchaudio
from einops import rearrange
from audiox import get_pretrained_model
from audiox.inference.generation import generate_diffusion_cond
from audiox.data.utils import read_video, load_and_process_audio, encode_video_with_synchformer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the pretrained model.
# Choose one: "HKUSTAudio/AudioX", "HKUSTAudio/AudioX-MAF", or "HKUSTAudio/AudioX-MAF-MMDiT"
model_name = "HKUSTAudio/AudioX"
model, model_config = get_pretrained_model(model_name)
sample_rate = model_config["sample_rate"]
sample_size = model_config["sample_size"]
target_fps = model_config["video_fps"]
seconds_start = 0
seconds_total = 10

model = model.to(device)

# Example: video-to-music generation
video_path = "example/V2M_sample-1.mp4"
text_prompt = "Generate music for the video"
audio_path = None

# Prepare inputs
video_tensor = read_video(video_path, seek_time=seconds_start, duration=seconds_total, target_fps=target_fps)
if audio_path:
    audio_tensor = load_and_process_audio(audio_path, sample_rate, seconds_start, seconds_total)
else:
    # Use a zero tensor (stereo, seconds_total long) when no audio prompt is provided
    audio_tensor = torch.zeros((2, int(sample_rate * seconds_total)))

# For AudioX-MAF and AudioX-MAF-MMDiT: encode the video with Synchformer
video_sync_frames = None
if "MAF" in model_name:
    video_sync_frames = encode_video_with_synchformer(
        video_path, model_name, seconds_start, seconds_total, device
    )

# Create the conditioning
conditioning = [{
    "video_prompt": {"video_tensors": video_tensor.unsqueeze(0), "video_sync_frames": video_sync_frames},
    "text_prompt": text_prompt,
    "audio_prompt": audio_tensor.unsqueeze(0),
    "seconds_start": seconds_start,
    "seconds_total": seconds_total
}]

# Generate audio with the conditional diffusion sampler
output = generate_diffusion_cond(
    model,
    steps=250,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=sample_size,
    sigma_min=0.3,
    sigma_max=500,
    sampler_type="dpmpp-3m-sde",
    device=device
)

# Post-process: fold the batch into the time axis, peak-normalize, and convert to 16-bit PCM
output = rearrange(output, "b d n -> d (b n)")
output = output.to(torch.float32).div(torch.max(torch.abs(output))).clamp(-1, 1).mul(32767).to(torch.int16).cpu()
torchaudio.save("output.wav", output, sample_rate)
```
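
The same pipeline is meant to cover other tasks by changing only the inputs. As a hedged example based on the task examples in the GitHub README (verify that `read_video` accepts `video_path=None` in your installed version), pure text-to-audio changes only the input block:

```python
# Hypothetical text-to-audio setup, based on the repository's task examples;
# verify that read_video handles video_path=None in your version.
video_path = None
text_prompt = "Typing on a keyboard"
audio_path = None
```

`audiox.data.utils` also provides a `merge_video_audio` helper for muxing the generated track back into the source clip. A minimal sketch, assuming a `(video_path, audio_path, output_path, start, duration)` signature (hypothetical — check the repository for the real one):

```python
from audiox.data.utils import merge_video_audio

# Hypothetical call -- the exact signature may differ; check the repository.
merge_video_audio(video_path, "output.wav", "output.mp4", seconds_start, seconds_total)
```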

## Citation

```bibtex
@article{tian2025audiox,
  title={AudioX: Diffusion Transformer for Anything-to-Audio Generation},
  author={Tian, Zeyue and Jin, Yizhu and Liu, Zhaoyang and Yuan, Ruibin and Tan, Xu and Chen, Qifeng and Xue, Wei and Guo, Yike},
  journal={arXiv preprint arXiv:2503.10522},
  year={2025}
}
```