---
license: cc-by-nc-4.0
---


# AudioX

## 🎧 AudioX: Diffusion Transformer for Anything-to-Audio Generation

**TL;DR:** AudioX is a unified Diffusion Transformer for Anything-to-Audio and Music Generation. It produces high-quality general audio and music, offers flexible natural-language control, and handles multiple input modalities, including text, video, image, music, and audio.

### Links
- **[Paper](https://arxiv.org/abs/2503.10522)**: Explore the research behind AudioX.
- **[Project](https://zeyuet.github.io/AudioX/)**: Visit the official project page for more information and updates.


## Installation
```bash
# Clone the model repository (skip downloading the LFS weights up front)
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/Zeyue7/AudioX
cd AudioX

# Create and activate the conda environment
conda create -n AudioX python=3.8.20
conda activate AudioX

# Install AudioX and its Python dependencies
pip install git+https://github.com/ZeyueT/AudioX.git

# System libraries for audio/video I/O
conda install -c conda-forge ffmpeg libsndfile
```
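
As a quick sanity check of the environment (optional, and only an illustrative sketch), the following should run without errors once installation has finished; the model download itself happens in the Usage example below.

```py
# Optional post-install check: these imports should succeed and report
# whether a CUDA device is visible. Purely illustrative.
import torch
import stable_audio_tools  # provided by the pip install above

print("CUDA available:", torch.cuda.is_available())
```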

## Usage

```py
import os

import torch
import torchaudio
from einops import rearrange
from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond
from stable_audio_tools.data.utils import read_video, merge_video_audio, load_and_process_audio

device = "cuda" if torch.cuda.is_available() else "cpu"

# Download the pretrained model and read its configuration
model, model_config = get_pretrained_model("Zeyue7/AudioX")
sample_rate = model_config["sample_rate"]
sample_size = model_config["sample_size"]
target_fps = model_config["video_fps"]
seconds_start = 0
seconds_total = 10

model = model.to(device)

# Inputs for video-to-music generation
video_path = "video.mp4"
text_prompt = "Generate music for the video"
audio_path = None

video_tensor = read_video(video_path, seek_time=0, duration=seconds_total, target_fps=target_fps)
audio_tensor = load_and_process_audio(audio_path, sample_rate, seconds_start, seconds_total)

conditioning = [{
    "video_prompt": [video_tensor.unsqueeze(0)],
    "text_prompt": text_prompt,
    "audio_prompt": audio_tensor.unsqueeze(0),
    "seconds_start": seconds_start,
    "seconds_total": seconds_total
}]

# Generate stereo audio
output = generate_diffusion_cond(
    model,
    steps=250,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=sample_size,
    sigma_min=0.3,
    sigma_max=500,
    sampler_type="dpmpp-3m-sde",
    device=device
)

# Rearrange audio batch to a single sequence
output = rearrange(output, "b d n -> d (b n)")

# Peak normalize, clip, convert to int16, and save to file
output = output.to(torch.float32).div(torch.max(torch.abs(output))).clamp(-1, 1).mul(32767).to(torch.int16).cpu()
torchaudio.save("output.wav", output, sample_rate)

# Mux the generated audio back onto the source video
if video_path is not None and os.path.exists(video_path):
    merge_video_audio(video_path, "output.wav", "output.mp4", 0, seconds_total)
```
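
Other modality combinations only change the conditioning inputs. Below is a minimal sketch of text-to-audio generation that reuses the model, config, and generation call from the example above; it assumes `read_video` and `load_and_process_audio` accept `None` for absent inputs and return placeholder tensors, mirroring how `audio_path = None` is handled above. The prompt string is just an illustration.

```py
# Text-to-audio: no video or audio prompt, only a natural-language description.
# Assumption: read_video(None, ...) and load_and_process_audio(None, ...) return
# placeholder tensors for absent modalities, as with audio_path=None above.
video_path = None
text_prompt = "A dog barking in the distance while rain falls on a tin roof"
audio_path = None

video_tensor = read_video(video_path, seek_time=0, duration=seconds_total, target_fps=target_fps)
audio_tensor = load_and_process_audio(audio_path, sample_rate, seconds_start, seconds_total)

conditioning = [{
    "video_prompt": [video_tensor.unsqueeze(0)],
    "text_prompt": text_prompt,
    "audio_prompt": audio_tensor.unsqueeze(0),
    "seconds_start": seconds_start,
    "seconds_total": seconds_total
}]

output = generate_diffusion_cond(
    model,
    steps=250,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=sample_size,
    sigma_min=0.3,
    sigma_max=500,
    sampler_type="dpmpp-3m-sde",
    device=device
)

# Collapse the batch, normalize, and save as 16-bit WAV
output = rearrange(output, "b d n -> d (b n)")
output = output.to(torch.float32).div(torch.max(torch.abs(output))).clamp(-1, 1).mul(32767).to(torch.int16).cpu()
torchaudio.save("output_text.wav", output, sample_rate)
```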



## Citation
If you find our work useful, please consider citing:

```bibtex
@article{tian2025audiox,
  title={AudioX: Diffusion Transformer for Anything-to-Audio Generation},
  author={Tian, Zeyue and Jin, Yizhu and Liu, Zhaoyang and Yuan, Ruibin and Tan, Xu and Chen, Qifeng and Xue, Wei and Guo, Yike},
  journal={arXiv preprint arXiv:2503.10522},
  year={2025}
}
```