---
license: apache-2.0
---

<h1 align="center">PrismAudio</h1>

<p align="center">
  <img src="https://img.shields.io/badge/ICLR%202026-Main%20Conference-blue.svg" alt="ICLR 2026"/>
</p>

<p align="center">
  <a href="https://arxiv.org/abs/2511.18833">
    <img src="https://img.shields.io/badge/arXiv-2511.18833-b31b1b.svg" alt="arXiv"/>
  </a>
  &nbsp;
  <a href="http://prismaudio-project.github.io/">
    <img src="https://img.shields.io/badge/Online%20Demo-🌐-blue" alt="Online Demo"/>
  </a>
</p>

<p align="center">
  If you find this project useful,<br>
  a star ⭐ on GitHub would be greatly appreciated!
</p>

---
**PrismAudio** is the first framework to integrate reinforcement learning (RL) into Video-to-Audio (V2A) generation with specialized Chain-of-Thought (CoT) planning. Building upon [ThinkSound](https://arxiv.org/pdf/2506.21448)'s pioneering CoT-based V2A framework, PrismAudio decomposes monolithic reasoning into four specialized CoT modules (Semantic, Temporal, Aesthetic, and Spatial), each paired with a targeted reward function. This enables multi-dimensional RL optimization that jointly improves reasoning across all perceptual dimensions.

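To make the multi-dimensional RL idea concrete, the four per-dimension rewards can be combined and normalized group-relatively, in the general style of GRPO. The sketch below is a hypothetical illustration under stated assumptions: the function name `grpo_advantages`, the equal weights, and the reward values are all invented for exposition and are not the paper's implementation.

```python
# Hypothetical sketch: group-relative advantages over four reward dimensions.
# GRPO-style methods score a group of sampled candidates and normalize each
# candidate's reward against the group mean/std, so no learned value function
# is needed. Dimension names follow the README; everything else is illustrative.
import statistics

def grpo_advantages(group_rewards, weights):
    """group_rewards: one dict per sampled candidate, mapping
    dimension name -> scalar reward. Returns one advantage per candidate."""
    scores = [sum(weights[d] * r[d] for d in weights) for r in group_rewards]
    mean = statistics.fmean(scores)
    std = statistics.pstdev(scores) or 1.0  # avoid div-by-zero for flat groups
    return [(s - mean) / std for s in scores]

# Example with two candidates and uniform weights (both made up):
w = {"semantic": 0.25, "temporal": 0.25, "aesthetic": 0.25, "spatial": 0.25}
group = [
    {"semantic": 1.0, "temporal": 0.8, "aesthetic": 0.6, "spatial": 0.4},
    {"semantic": 0.2, "temporal": 0.2, "aesthetic": 0.2, "spatial": 0.2},
]
advantages = grpo_advantages(group, w)
```

In this sketch the better candidate receives a positive advantage and the worse one a negative advantage, which is what pushes the policy toward jointly higher rewards across dimensions.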
---

## 📰 News

- **2026.03.22** &nbsp; 🔥 We have released **PrismAudio**, our next-generation video-to-audio generation model! For details, see the [`prismaudio`](https://github.com/liuhuadai/ThinkSound/tree/prismaudio) branch.
- **2026.01.26** &nbsp; 🎉 PrismAudio has been accepted to the **ICLR 2026 Main Conference**! We plan to release the project in February 2026.
- **2025.11.25** &nbsp; 🔥 The [online PrismAudio demo](http://prismaudio-project.github.io/) is live - try it now!
- **2025.11.25** &nbsp; 🔥 The [PrismAudio paper](https://arxiv.org/pdf/2511.18833) is released on arXiv: the first multi-dimensional CoT-RL framework for Video-to-Audio generation!
- **2025.09.19** &nbsp; 🎉 ThinkSound has been accepted to the **NeurIPS 2025 Main Conference**!
- **2025.09.01** &nbsp; Our AudioCoT dataset is now open-sourced and available on [Hugging Face](https://huggingface.co/datasets/liuhuadai/AudioCoT)!
- **2025.07.17** &nbsp; 🧠 Finetuning enabled: training and finetuning code is now publicly available, with clear usage instructions to help you customize and extend ThinkSound with your own data.
- **2025.07.15** &nbsp; 📦 Simplified installation and usability: dependencies are on PyPI for easy cross-platform setup, and Windows `.bat` scripts automate environment creation and script running.
- **2025.07.08** &nbsp; 🔧 Major update: the model is now lighter, with optimized memory and GPU usage, and supports high-throughput audio generation at scale!
- **2025.07.01** &nbsp; Online demos are up on [Hugging Face Spaces](https://huggingface.co/spaces/FunAudioLLM/ThinkSound) and [ModelScope](https://modelscope.cn/studios/iic/ThinkSound) for an interactive experience!
- **2025.07.01** &nbsp; Released inference scripts and web interface.
- **2025.06** &nbsp; The [ThinkSound paper](https://arxiv.org/pdf/2506.21448) is released on arXiv!
- **2025.06** &nbsp; The [online ThinkSound demo](http://thinksound-project.github.io/) is live - try it now!

---

## 🚀 Features

- **V2A SOTA**: Achieves state-of-the-art results across all four perceptual dimensions on both the VGGSound and AudioCanvas benchmarks.
- **Decomposed CoT Reasoning**: Four specialized CoT modules (Semantic, Temporal, Aesthetic, and Spatial), each providing focused, interpretable reasoning for its corresponding perceptual dimension.
- **Multi-Dimensional RL**: Fast-GRPO enables efficient multi-dimensional reward optimization without compromising generation quality.
- **New Benchmark**: AudioCanvas, a rigorous V2A benchmark with 300 single-event classes and 501 multi-event samples covering diverse and challenging scenarios.
- **Efficient**: 518M parameters with faster inference than prior state-of-the-art models.

---

## ✨ Method Overview

PrismAudio consists of three main components:

1. **CoT-Aware Audio Foundation Model**: Built on a Multimodal Diffusion Transformer (MM-DiT) with flow matching, enhanced with VideoPrism for video understanding and T5-Gemma for structured CoT text encoding.
2. **Decomposed Multi-Dimensional CoT Reasoning**: Four specialized CoT modules (Semantic, Temporal, Aesthetic, and Spatial), each providing targeted reasoning for its corresponding perceptual dimension.
3. **Fast-GRPO Multi-Dimensional RL Framework**: A hybrid ODE-SDE sampling strategy that dramatically reduces training overhead while enabling reward optimization across all perceptual dimensions.

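The hybrid ODE-SDE idea can be illustrated with a toy flow-matching sampler: most integration steps follow the cheap, deterministic ODE, while a few designated steps inject noise (SDE) so that RL rollouts stay diverse. Everything below is an illustrative assumption, not the paper's implementation: the velocity field is a toy stand-in, and the step counts, noise scale, and function names are invented.

```python
# Hypothetical sketch of hybrid ODE-SDE sampling for a flow-matching model.
# Deterministic Euler steps integrate the learned velocity field; on a small
# subset of steps, Gaussian noise is added, trading a little stochasticity
# (needed for RL exploration) for much lower cost than full SDE sampling.
import numpy as np

def velocity(x, t):
    """Toy velocity field: drives samples toward the origin as t -> 1."""
    return -x / (1.0 - t + 1e-3)

def hybrid_sample(x0, n_steps=50, sde_steps=(45, 46, 47, 48, 49),
                  sigma=0.1, seed=0):
    """Integrate from t=0 to t=1 with Euler steps; add noise on sde_steps."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + velocity(x, t) * dt                 # deterministic ODE step
        if i in sde_steps:                          # stochastic (SDE) step
            x = x + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x
```

With `sde_steps=()` the sampler is fully deterministic and reproducible; enabling a few late SDE steps yields distinct rollouts from the same start, which is what a GRPO-style group comparison needs.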
---

## ⚡ Quick Start

For installation and usage instructions, please refer to [ThinkSound](https://github.com/FunAudioLLM/ThinkSound).

---

## ▶️ Run Demo

For instructions on running the demo, please refer to [ThinkSound](https://github.com/FunAudioLLM/ThinkSound).

---

## 🏋️ Train the Model

For training and finetuning instructions, please refer to [ThinkSound](https://github.com/FunAudioLLM/ThinkSound).

---

## 📄 License

This project is released under the Apache License 2.0.

> **Note:**
> The code, models, and dataset are **for research and educational purposes only**.
> **Commercial use is NOT permitted.**
> For commercial licensing, please contact the authors.

**📦 Third-Party Components**

- **Stable Audio Open VAE** (by Stability AI): Licensed under the [Stability AI Community License](./third_party/LICENSE_StabilityAI.md). **Commercial use and redistribution require prior permission from Stability AI.**
- 📘 **All other code and models** are released under the Apache License 2.0.

---

## Acknowledgements

Many thanks to:

- **stable-audio-tools** (by Stability AI): For providing an easy-to-use framework for audio generation, as well as the VAE module and weights.
- **MMAudio**: For the implementation of the MM-DiT backbone in the audio domain.
- **ThinkSound**: For the foundational CoT-based V2A generation framework that PrismAudio builds upon.

---

## 📖 Citation

If you find PrismAudio useful in your research or work, please cite our paper:

```bibtex
@misc{liu2025prismaudiodecomposedchainofthoughtsmultidimensional,
  title={PrismAudio: Decomposed Chain-of-Thoughts and Multi-dimensional Rewards for Video-to-Audio Generation},
  author={Huadai Liu and Kaicheng Luo and Wen Wang and Qian Chen and Peiwen Sun and Rongjie Huang and Xiangang Li and Jieping Ye and Wei Xue},
  year={2025},
  eprint={2511.18833},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2511.18833},
}
```

---

## 📬 Contact

✨ Feel free to [open an issue](https://github.com/liuhuadai/ThinkSound/issues) or contact us via email at [huadai.liu@connect.ust.hk](mailto:huadai.liu@connect.ust.hk) if you have any questions or suggestions!