liuhuadai committed on
Commit c25ab87 · verified · 1 Parent(s): c0c1224

Update README.md

Files changed (1)
  1. README.md +43 -74
README.md CHANGED
@@ -11,7 +11,6 @@ pinned: false
  ---
 
  <h1 align="center">PrismAudio</h1>
-
  <p align="center">
  <img src="https://img.shields.io/badge/ICLR 2026-Main Conference-blue.svg" alt="ICLR 2026"/>
  </p>
@@ -38,100 +37,70 @@ pinned: false
  </a>
  </p>
 
- <p align="center">
- If you find this project useful,<br>
- a star ⭐ on GitHub would be greatly appreciated!
- </p>
-
-
  ---
 
- **PrismAudio** is the first framework to integrate Reinforcement Learning into Video-to-Audio (V2A) generation with specialized Chain-of-Thought (CoT) planning. Building upon [ThinkSound](https://arxiv.org/pdf/2506.21448)'s pioneering CoT-based V2A framework, PrismAudio further decomposes monolithic reasoning into four specialized CoT modules (Semantic, Temporal, Aesthetic, and Spatial), each paired with targeted reward functions, enabling multi-dimensional RL optimization that jointly improves reasoning across all perceptual dimensions.
-
- ---
-
- ## 📰 News
-
- - **2026.03.22** 🔥 We have released **PrismAudio**, our next-generation video-to-audio generation model! Model weights are available on [Hugging Face](https://huggingface.co/FunAudioLLM/PrismAudio) and [ModelScope](https://www.modelscope.cn/models/iic/PrismAudio). For more details, please refer to the [`prismaudio`](https://github.com/liuhuadai/ThinkSound/tree/prismaudio) branch!
- - **2026.01.26** 🎉 PrismAudio has been accepted to the **ICLR 2026 Main Conference**!
- - **2025.11.25** 🔥 The [PrismAudio online demo](http://prismaudio-project.github.io/) is live!
- - **2025.11.25** 🔥 The [PrismAudio paper](https://arxiv.org/pdf/2511.18833) is released on arXiv!
- - **2025.09.19** 🎉 ThinkSound has been accepted to the **NeurIPS 2025 Main Conference**!
- - **2025.09.01** The AudioCoT dataset is now open-sourced on [Hugging Face](https://huggingface.co/datasets/liuhuadai/AudioCoT)!
- - **2025.07.17** 🧠 Finetuning enabled: training and finetuning code is now publicly available!
- - **2025.07.15** 📦 Simplified installation with Windows `.bat` scripts for one-click setup!
- - **2025.07.08** 🔧 Major update: the model is now lighter, memory and GPU usage are optimized, and large-scale, high-throughput audio generation is supported!
- - **2025.07.01** Online demo on [Hugging Face Spaces](https://huggingface.co/spaces/FunAudioLLM/ThinkSound) and [ModelScope](https://modelscope.cn/studios/iic/ThinkSound)!
- - **2025.07.01** Released inference scripts and the web interface!
- - **2025.06** The [ThinkSound paper](https://arxiv.org/pdf/2506.21448) is released on arXiv!
- - **2025.06** The [online demo](http://thinksound-project.github.io/) is live!
-
- ---
-
- ## ⚡ Quick Start
-
- For detailed training and inference code, please refer to [ThinkSound (prismaudio branch)](https://github.com/FunAudioLLM/ThinkSound/tree/prismaudio).
-
  ---
 
- ## 🚀 Features
-
- - **V2A SOTA**: Achieves state-of-the-art results across all four perceptual dimensions on both the VGGSound and AudioCanvas benchmarks.
- - **Decomposed CoT Reasoning**: Four specialized CoT modules (Semantic, Temporal, Aesthetic, Spatial), each providing focused, interpretable reasoning for its corresponding perceptual dimension.
- - **Multi-dimensional RL**: Fast-GRPO enables efficient multi-dimensional reward optimization without compromising generation quality.
- - **New Benchmark, AudioCanvas**: A rigorous V2A benchmark with 300 single-event classes and 501 multi-event samples covering diverse and challenging scenarios.
- - **Efficient**: 518M parameters with faster inference than prior SOTA models.
-
- ---
-
- ## ✨ Method Overview
-
- PrismAudio consists of three main components:
-
- 1. **CoT-Aware Audio Foundation Model**: Built on a Multimodal Diffusion Transformer with flow matching, enhanced with VideoPrism for video understanding and T5-Gemma for structured CoT text encoding.
- 2. **Decomposed Multi-Dimensional CoT Reasoning**: Four specialized CoT modules (Semantic, Temporal, Aesthetic, and Spatial), each providing targeted reasoning for its corresponding perceptual dimension.
- 3. **Fast-GRPO Multi-Dimensional RL Framework**: A hybrid ODE-SDE sampling strategy that dramatically reduces training overhead while enabling multi-dimensional reward optimization across all perceptual dimensions.
 
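The flow-matching objective named in component 1 can be sketched in a few lines. This is a toy illustration, not the authors' code: a constant predictor stands in for the Multimodal Diffusion Transformer, and 2-D points stand in for audio latents.

```python
import numpy as np

# Toy flow matching: regress the conditional velocity along a straight-line
# path between noise samples x0 and "data" samples x1.
rng = np.random.default_rng(0)
x0 = rng.normal(size=(256, 2))          # noise samples
x1 = rng.normal(size=(256, 2)) + 3.0    # stand-in "data" samples
t = rng.uniform(size=(256, 1))          # interpolation times in [0, 1]

xt = (1 - t) * x0 + t * x1              # point on the straight-line path
target_v = x1 - x0                      # conditional velocity the model regresses

loss_zero = np.mean(target_v ** 2)                        # predict-zero baseline
loss_const = np.mean((target_v.mean(0) - target_v) ** 2)  # best constant predictor
print(round(loss_zero, 2), round(loss_const, 2))
```

Even the best constant predictor beats the zero baseline here; a real model conditions its velocity prediction on `xt`, `t`, and the video/CoT inputs.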
  ---
 
- ## 📄 License
-
- This project is released under the Apache 2.0 License.
-
- > **Note:**
- > The code, models, and dataset are **for research and educational purposes only**.
- > **Commercial use is NOT permitted.**
- > For commercial licensing, please contact the authors.
-
- **📦 Third-Party Components**
-
- - **Stable Audio Open VAE** (by Stability AI): Licensed under the [Stability AI Community License](./third_party/LICENSE_StabilityAI.md). **Commercial use and redistribution require prior permission from Stability AI.**
- - 📘 **All other code and models** are released under the Apache License 2.0.
 
  ---
 
- ## Acknowledgements
-
- Many thanks to:
-
- - **stable-audio-tools** (by Stability AI): For providing an easy-to-use framework for audio generation, as well as the VAE module and weights.
-
- ---
-
- ## 📖 Citation
-
- If you find PrismAudio useful in your research or work, please cite our paper:
 
  ```bibtex
  @misc{liu2025prismaudiodecomposedchainofthoughtsmultidimensional,
-   title={PrismAudio: Decomposed Chain-of-Thoughts and Multi-dimensional Rewards for Video-to-Audio Generation},
-   author={Huadai Liu and Kaicheng Luo and Wen Wang and Qian Chen and Peiwen Sun and Rongjie Huang and Xiangang Li and Jieping Ye and Wei Xue},
-   year={2025},
-   eprint={2511.18833},
-   archivePrefix={arXiv},
-   primaryClass={cs.SD},
-   url={https://arxiv.org/abs/2511.18833},
  }
  ```
- 📬 Contact
- ✨ Feel free to open an issue or contact us via email (huadai.liu@connect.ust.hk) if you have any questions or suggestions!
 
 
 
 
+ **PrismAudio** is the first framework to integrate reinforcement learning into video-to-audio (V2A) generation, equipped with a dedicated Chain-of-Thought (CoT) planning mechanism. Building on the pioneering CoT-based V2A framework of ThinkSound, PrismAudio further decomposes single-step reasoning into four specialized CoT modules (**semantic**, **temporal**, **aesthetic**, and **spatial**), each with targeted reward functions, enabling multi-dimensional RL optimization that simultaneously improves reasoning across all perceptual dimensions.
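The multi-dimensional RL idea can be sketched with a GRPO-style group-relative advantage computed over per-dimension rewards. This is a toy illustration only: the reward values and equal weights below are invented for the example, not PrismAudio's actual reward functions.

```python
import numpy as np

# GRPO-style advantage: fuse four per-dimension rewards into a scalar,
# then normalize within the group of samples drawn for the same video.
def group_relative_advantages(rewards_per_dim, weights):
    """rewards_per_dim: (group_size, n_dims) rewards for one video's samples."""
    scalar = rewards_per_dim @ weights                    # fuse the four dimensions
    return (scalar - scalar.mean()) / (scalar.std() + 1e-8)

rewards = np.array([
    # semantic, temporal, aesthetic, spatial  (one row per sampled audio)
    [0.9, 0.7, 0.8, 0.6],
    [0.4, 0.5, 0.6, 0.5],
    [0.7, 0.9, 0.5, 0.8],
])
weights = np.full(4, 0.25)  # invented equal weighting
adv = group_relative_advantages(rewards, weights)
print(adv.round(3))
```

Samples that score above the group mean across the fused dimensions get positive advantages and are reinforced; the group-relative normalization removes the need for a learned value baseline.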
  ---
 
+ ## Quick Start
 
+ For full training and inference details, please refer to the [ThinkSound `prismaudio` branch](https://github.com/FunAudioLLM/ThinkSound/tree/prismaudio).
+
+ ```bash
+ git clone -b prismaudio https://github.com/liuhuadai/ThinkSound.git
+ cd ThinkSound
+
+ conda create -n prismaudio python=3.10
+ conda activate prismaudio
+ chmod +x scripts/PrismAudio/setup/build_env.sh
+ ./scripts/PrismAudio/setup/build_env.sh
+
+ # Download pretrained weights to ckpts/
+ # From Hugging Face: https://huggingface.co/FunAudioLLM/PrismAudio
+ # From ModelScope: https://www.modelscope.cn/models/iic/PrismAudio
+ git lfs install
+ git clone https://huggingface.co/FunAudioLLM/PrismAudio ckpts
+ ```
  ---
 
+ ## License
 
+ This project is released under the [MIT License](https://opensource.org/licenses/MIT).
 
+ > **Note:** The code, model weights, and datasets are intended for **research and educational purposes only**. Commercial use is not permitted without explicit authorization from the authors.
  ---
 
+ ## Citation
 
+ If you find PrismAudio useful in your research, please consider citing our papers:
 
  ```bibtex
+ @misc{liu2025thinksoundchainofthoughtreasoningmultimodal,
+   title={ThinkSound: Chain-of-Thought Reasoning in Multimodal Large Language Models for Audio Generation and Editing},
+   author={Huadai Liu and Jialei Wang and Kaicheng Luo and Wen Wang and Qian Chen and Zhou Zhao and Wei Xue},
+   year={2025},
+   eprint={2506.21448},
+   archivePrefix={arXiv},
+   primaryClass={eess.AS},
+   url={https://arxiv.org/abs/2506.21448},
+ }
+
  @misc{liu2025prismaudiodecomposedchainofthoughtsmultidimensional,
+   title={PrismAudio: Decomposed Chain-of-Thoughts and Multi-dimensional Rewards for Video-to-Audio Generation},
+   author={Huadai Liu and Kaicheng Luo and Wen Wang and Qian Chen and Peiwen Sun and Rongjie Huang and Xiangang Li and Jieping Ye and Wei Xue},
+   year={2025},
+   eprint={2511.18833},
+   archivePrefix={arXiv},
+   primaryClass={cs.SD},
+   url={https://arxiv.org/abs/2511.18833},
  }
  ```
+
+ ---
+
+ ## Contact
+
+ If you have any questions or suggestions, feel free to [open an issue](https://github.com/liuhuadai/ThinkSound/issues) or reach out via email: [huadai.liu@connect.ust.hk](mailto:huadai.liu@connect.ust.hk).