liuhuadai committed · Commit 226945c · verified · 1 Parent(s): 0df2ce7

Update README.md

Files changed (1)
  1. README.md +69 -42
README.md CHANGED
@@ -1,63 +1,95 @@
 <h1 align="center">PrismAudio</h1>

 ---

- **PrismAudio** is the first framework to integrate reinforcement learning into video-to-audio (V2A) generation, equipped with a dedicated Chain-of-Thought (CoT) planning mechanism. Building on [ThinkSound](https://arxiv.org/pdf/2506.21448)'s pioneering CoT-based V2A framework, PrismAudio further decomposes monolithic reasoning into four specialized CoT modules (Semantic, Temporal, Aesthetic, and Spatial), each paired with a targeted reward function, enabling multi-dimensional RL optimization that jointly improves reasoning across all perceptual dimensions.

 ---

- ## 📰 News
-
- - **2026.03.22** 🔥 **PrismAudio** is officially released, our next-generation video-to-audio generation model!
- - **2026.01.26** 🎉 PrismAudio accepted to the **ICLR 2026 Main Conference**!
- - **2025.11.25** 🔥 [PrismAudio Online Demo](http://prismaudio-project.github.io/) is live!
- - **2025.11.25** 🔥 [PrismAudio paper](https://arxiv.org/pdf/2511.18833) released on arXiv!
- - **2025.09.19** 🎉 ThinkSound accepted to the **NeurIPS 2025 Main Conference**!
- - **2025.09.01** The AudioCoT dataset is open-sourced on [Hugging Face](https://huggingface.co/datasets/liuhuadai/AudioCoT)!
- - **2025.07.17** 🧠 Finetuning enabled: training and finetuning code is now public!
- - **2025.07.15** 📦 Simplified installation, with Windows `.bat` scripts for one-click setup!
- - **2025.07.08** 🔧 Major update: lighter model, optimized memory and GPU usage, support for large-scale high-throughput audio generation!
- - **2025.07.01** Online demos live on [Hugging Face Spaces](https://huggingface.co/spaces/FunAudioLLM/ThinkSound) and [ModelScope](https://modelscope.cn/studios/iic/ThinkSound)!
- - **2025.07.01** Released inference scripts and web interface!
- - **2025.06** [ThinkSound paper](https://arxiv.org/pdf/2506.21448) released on arXiv!
- - **2025.06** [Online Demo](http://thinksound-project.github.io/) is live!

 ---

- ## 🚀 Features

- - **Best-in-class V2A performance**: Achieves state-of-the-art results across all four perceptual dimensions on the VGGSound and AudioCanvas benchmarks.
- - **Decomposed CoT reasoning**: Four specialized CoT modules (Semantic, Temporal, Aesthetic, Spatial), each providing focused, interpretable reasoning.
- - **Multi-dimensional RL**: Fast-GRPO enables efficient multi-dimensional reward optimization without sacrificing generation quality.
- - **New benchmark AudioCanvas**: A rigorous V2A benchmark with 300 single-event classes and 501 multi-event samples.
- - **Efficient and lightweight**: Only 518M parameters, with faster inference than prior state-of-the-art methods.

 ---

- ## Method Overview

- PrismAudio consists of three main components:

- 1. **CoT-Aware Audio Foundation Model**: Built on a Multimodal Diffusion Transformer with flow matching, combining VideoPrism for video understanding and T5-Gemma for structured CoT text encoding.
- 2. **Decomposed Multi-Dimensional CoT Reasoning**: Four specialized CoT modules (Semantic, Temporal, Aesthetic, and Spatial), each providing targeted reasoning for its corresponding perceptual dimension.
- 3. **Fast-GRPO Multi-Dimensional RL Framework**: A hybrid ODE-SDE sampling strategy that sharply reduces training overhead while enabling multi-dimensional reward optimization across all perceptual dimensions.

 ---

- ## 📄 License

- This project is released under the Apache 2.0 License.

- > **Note:**
- > The code, models, and dataset are **for research and educational purposes only**.
- > **Commercial use is NOT permitted.**
- > For commercial licensing, please contact the authors.

 ---

- ## 📖 Citation

- If you find PrismAudio helpful for your research, please cite our paper:

 ```bibtex
 @misc{liu2025prismaudiodecomposedchainofthoughtsmultidimensional,
@@ -69,10 +101,5 @@ PrismAudio consists of three main components:
   primaryClass={cs.SD},
   url={https://arxiv.org/abs/2511.18833},
 }
- ```
-
- ---
-
- ## 📬 Contact Us
-
- ✨ For any questions or suggestions, feel free to [open an issue](https://github.com/liuhuadai/ThinkSound/issues) or reach us by email: [huadai.liu@connect.ust.hk](mailto:huadai.liu@connect.ust.hk)

+ ---
+ license: apache-2.0
+ base_model:
+ - google/videoprism-large-f8r288
+ - google/t5gemma-l-l-ul2-it
+ tags:
+ - audio
+ - music
+ - generation
+ - video2audio
+ ---
 <h1 align="center">PrismAudio</h1>

 ---

+ **PrismAudio** is the first framework to integrate Reinforcement Learning into Video-to-Audio (V2A) generation with specialized Chain-of-Thought (CoT) planning. Building upon [ThinkSound](https://arxiv.org/pdf/2506.21448)'s pioneering CoT-based V2A framework, PrismAudio further decomposes monolithic reasoning into four specialized CoT modules (Semantic, Temporal, Aesthetic, and Spatial), each paired with targeted reward functions, enabling multi-dimensional RL optimization that jointly improves reasoning across all perceptual dimensions.

 ---

+ ## 📰 News
+
+ - **2026.03.22** 🔥 **PrismAudio** is officially released — our next-generation video-to-audio generation model!
+ - **2026.01.26** 🎉 PrismAudio has been accepted to the **ICLR 2026 Main Conference**!
+ - **2025.11.25** 🔥 [PrismAudio Online Demo](http://prismaudio-project.github.io/) is live!
+ - **2025.11.25** 🔥 [PrismAudio paper](https://arxiv.org/pdf/2511.18833) released on arXiv!
+ - **2025.09.19** 🎉 ThinkSound has been accepted to the **NeurIPS 2025 Main Conference**!
+ - **2025.09.01** AudioCoT dataset is now open-sourced on [Hugging Face](https://huggingface.co/datasets/liuhuadai/AudioCoT)!
+ - **2025.07.17** 🧠 Finetuning enabled: training and finetuning code is now publicly available!
+ - **2025.07.15** 📦 Simplified installation with Windows `.bat` scripts for one-click setup!
+ - **2025.07.08** 🔧 Major update: lighter model, optimized memory and GPU usage, support for large-scale high-throughput audio generation!
+ - **2025.07.01** Online demo on [Hugging Face Spaces](https://huggingface.co/spaces/FunAudioLLM/ThinkSound) and [ModelScope](https://modelscope.cn/studios/iic/ThinkSound)!
+ - **2025.07.01** Released inference scripts and web interface!
+ - **2025.06** [ThinkSound paper](https://arxiv.org/pdf/2506.21448) released on arXiv!
+ - **2025.06** [Online Demo](http://thinksound-project.github.io/) is live!

 ---

+ ## Quick Start
+
+ For detailed training and inference code, please refer to [ThinkSound (prismaudio branch)](https://github.com/FunAudioLLM/ThinkSound/tree/prismaudio).

 ---

+ ## 🚀 Features
+
+ - **V2A SOTA**: Achieves state-of-the-art results across all four perceptual dimensions on both VGGSound and AudioCanvas benchmarks.
+ - **Decomposed CoT Reasoning**: Four specialized CoT modules (Semantic, Temporal, Aesthetic, Spatial), each providing focused, interpretable reasoning for its corresponding perceptual dimension.
+ - **Multi-dimensional RL**: Fast-GRPO enables efficient multi-dimensional reward optimization without compromising generation quality.
+ - **New Benchmark AudioCanvas**: A rigorous V2A benchmark with 300 single-event classes and 501 multi-event samples covering diverse and challenging scenarios.
+ - **Efficient**: 518M parameters with faster inference than prior SOTA methods.
+
+ ---

+ ## ✨ Method Overview

+ PrismAudio consists of three main components:
+
+ 1. **CoT-Aware Audio Foundation Model**: Built on a Multimodal Diffusion Transformer with flow matching, enhanced with VideoPrism for video understanding and T5-Gemma for structured CoT text encoding.
+ 2. **Decomposed Multi-Dimensional CoT Reasoning**: Four specialized CoT modules — Semantic, Temporal, Aesthetic, and Spatial — each providing targeted reasoning for its corresponding perceptual dimension.
+ 3. **Fast-GRPO Multi-Dimensional RL Framework**: A hybrid ODE-SDE sampling strategy that dramatically reduces training overhead while enabling multi-dimensional reward optimization across all perceptual dimensions.
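At the heart of GRPO-style optimization, which Fast-GRPO extends to multiple reward dimensions, is a group-relative advantage: each sample's reward is normalized against the other samples drawn for the same input. A minimal sketch, assuming an equal-weight combination of the four dimension rewards; the function names and the weighting are illustrative, not taken from the repository:

```python
import statistics

# Illustrative only: the four reward dimensions named in the paper.
DIMENSIONS = ("semantic", "temporal", "aesthetic", "spatial")

def combined_reward(scores):
    """Equal-weight average of the per-dimension rewards (an assumption)."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

def group_advantages(group):
    """GRPO-style group-relative advantages:
    advantage_i = (r_i - mean(group)) / std(group)."""
    rewards = [combined_reward(s) for s in group]
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero-variance groups
    return [(r - mean) / std for r in rewards]
```

In an RL loop these advantages would weight the policy-gradient update for each sampled audio clip; the actual Fast-GRPO implementation lives in the ThinkSound repository linked above.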

 ---

+ ## 📄 License
+
+ This project is released under the Apache 2.0 License.

+ > **Note:**
+ > The code, models, and dataset are **for research and educational purposes only**.
+ > **Commercial use is NOT permitted.**
+ > For commercial licensing, please contact the authors.

+ **📦 Third-Party Components**
+
+ - **Stable Audio Open VAE** (by Stability AI): Licensed under the [Stability AI Community License](./third_party/LICENSE_StabilityAI.md). **Commercial use and redistribution require prior permission from Stability AI.**
+ - 📘 **All other code and models** are released under the Apache License 2.0.

 ---

+ ## Acknowledgements
+
+ Many thanks to:

+ - **stable-audio-tools** (by Stability AI): For providing an easy-to-use framework for audio generation, as well as the VAE module and weights.
+ - **MMAudio**: For the implementation of the MM-DiT backbone in the audio domain.
+ - **ThinkSound**: For the foundational CoT-based V2A generation framework that PrismAudio builds upon.
+
+ ---
+
+ ## 📖 Citation
+
+ If you find PrismAudio useful in your research or work, please cite our paper:

 ```bibtex
 @misc{liu2025prismaudiodecomposedchainofthoughtsmultidimensional,

   primaryClass={cs.SD},
   url={https://arxiv.org/abs/2511.18833},
 }
+ ```
+
+ ---
+
+ ## 📬 Contact
+
+ ✨ Feel free to open an issue or contact us via email (huadai.liu@connect.ust.hk) if you have any questions or suggestions!