Update README_zh.md

README_zh.md (+22 -6)
@@ -5,19 +5,16 @@ language:
 license: apache-2.0
 library_name: transformers
 tags:
-- audio
-- speech
 - audio-language-model
-- speech-to-text
 - speech-to-speech
 - voice-chat
-pipeline_tag: audio-text-to-text
+pipeline_tag: any-to-any
 ---
 
 # Fun-Audio-Chat-8B
 
 <p align="right">
-  <a href="README.md">English</a> | <a href="README_zh.md">中文</a>
+  <a href="Fun-Audio-Chat-8B/blob/main/README.md">English</a> | <a href="Fun-Audio-Chat-8B/blob/main/README_zh.md">中文</a>
 </p>
 
 <div align="center">
@@ -36,12 +33,20 @@ pipeline_tag: audio-text-to-text
 
 Fun-Audio-Chat is a large audio-language model built for natural, low-latency voice interaction. It introduces **dual-resolution speech representations** (an efficient 5Hz shared backbone plus a 25Hz refinement head), greatly reducing computational cost while preserving high speech quality, and adopts the **Core-Cocktail training strategy** to retain strong text-LLM capabilities. The model achieves top results on benchmarks covering spoken question answering, audio understanding, speech function calling, speech instruction following, and empathetic speech interaction.
 
+<p align="center">
+  <img width="95%" src="https://github.com/FunAudioLLM/Fun-Audio-Chat/blob/main/assets/Results.png?raw=true">
+</p>
+
 ### Core Features
 
 - **Dual-resolution speech representations**: an efficient 5Hz frame rate (versus the 12.5Hz or 25Hz used by other models) cuts GPU training time by nearly 50% while preserving high speech quality
 - **Leading performance**: ranks at the top among models of comparable size (~8B parameters) on OpenAudioBench, VoiceBench, UltraEval-Audio, MMAU, MMAU-Pro, MMSU, Speech-ACEBench, Speech-BFCL, Speech-SmartInteract, VStyle, and other benchmarks
 - **Comprehensive capability coverage**: supports spoken question answering, audio understanding, speech function calling, speech instruction following, and empathetic speech interaction
 
+<p align="center">
+  <img width="95%" src="https://github.com/FunAudioLLM/Fun-Audio-Chat/blob/main/assets/Architecture.png?raw=true">
+</p>
+
 ## Model Details
 
 | Property | Value |
@@ -117,12 +122,23 @@ python examples/infer_s2s.py
 
 ```bibtex
 @article{funaudiochat2025,
-  title={Fun-Audio-Chat},
+  title={Fun-Audio-Chat Technical Report},
   author={Tongyi Fun Team},
   year={2025}
 }
+
+@misc{tan2025drvoiceparallelspeechtextvoice,
+  title={DrVoice: Parallel Speech-Text Voice Conversation Model via Dual-Resolution Speech Representations},
+  author={Chao-Hong Tan and Qian Chen and Wen Wang and Chong Deng and Qinglin Zhang and Luyao Cheng and Hai Yu and Xin Zhang and Xiang Lv and Tianyu Zhao and Chong Zhang and Yukun Ma and Yafeng Chen and Hui Wang and Jiaqing Liu and Xiangang Li and Jieping Ye},
+  year={2025},
+  eprint={2506.09349},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL},
+  url={https://arxiv.org/abs/2506.09349},
+}
 ```
 
+
 ## License
 
 This model is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
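As a back-of-envelope illustration of the dual-resolution design described above, the sketch below compares how many representation frames per utterance each frame rate produces. This is not code from the Fun-Audio-Chat repository; the `frames` helper and the 10-second utterance are illustrative assumptions, while the 5Hz, 12.5Hz, and 25Hz rates come from the README itself.

```python
def frames(duration_s: float, rate_hz: float) -> int:
    """Number of representation frames for an utterance at a given frame rate."""
    return int(duration_s * rate_hz)

duration = 10.0  # a hypothetical 10-second utterance

backbone = frames(duration, 5.0)    # 5Hz shared backbone
head = frames(duration, 25.0)       # 25Hz refinement head
baseline = frames(duration, 12.5)   # 12.5Hz rate used by some other models

print(backbone, baseline, head)          # 50 125 250
print(head / backbone)                   # 5.0 (backbone sees 5x fewer frames)
```

The backbone processing 5x fewer frames than a 25Hz representation is what makes the README's claim of roughly halved GPU training time plausible: most transformer compute scales with sequence length, and only the lighter head runs at 25Hz.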