---
license: apache-2.0
---
<h1 align="center">CoGenAV: Contrastive-Generative Audio-Visual Representation Learning</h1>

<div align="center" style="display: flex; justify-content: center; align-items: center; gap: 10px;">

<a href="https://arxiv.org/pdf/2505.03186" target="_blank">
<img src="https://img.shields.io/badge/arXiv-Paper-b31b1b.svg?logo=arXiv" alt="arXiv Paper">
</a>

<a href="https://huggingface.co/detao/CoGenAV" target="_blank">
<img src="https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow" alt="HuggingFace Model">
</a>

<a href="https://modelscope.cn/models/hongqi/cogenav" target="_blank">
<img src="https://img.shields.io/badge/🤖%20ModelScope-Model-yellow" alt="ModelScope Model">
</a>

</div>

---
#### 🚀 Project Overview
CoGenAV is an audio-visual representation learning framework built on **contrastive-generative synchronization**. It learns efficient, generalizable audio-visual representations by aligning speech, lip movements, and text across modalities, and performs strongly on a range of audio-visual tasks, including:
- **Audio-Visual Speech Recognition (AVSR)**
- **Visual Speech Recognition (VSR)**
- **Audio-Visual Speech Enhancement and Separation (AVSE/AVSS)**
- **Active Speaker Detection (ASD)**

---

## 🏗️ Framework

<p align="center">
<img src="https://huggingface.co/detao/CoGenAV/resolve/main/cogenav_arch/cogen_arch.png" width=100%>
</p>

The left panel depicts the Audio-Visual Feature Representation framework and the Contrastive-Generative Synchronization Training methodology. For generative synchronization, we design a Feature Adaptation Module and employ a [frozen pre-trained ASR model](https://github.com/openai/whisper) as the Speech Recognition (SR) head. The right panel demonstrates the application of CoGenAV to diverse downstream tasks, including Visual Speech Recognition (VSR), Audio-Visual Speech Recognition (AVSR), Audio-Visual Speech Separation (AVSS), Audio-Visual Speech Enhancement (AVSE), and Active Speaker Detection (ASD).
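The contrastive-synchronization idea can be illustrated in a few lines: temporally aligned audio/visual feature pairs are positives, all other pairings in the batch are negatives, and a symmetric InfoNCE-style objective scores them. The sketch below is illustrative only, not the actual CoGenAV loss; the embedding size and temperature are assumptions.

```python
import numpy as np

def info_nce(audio_emb, video_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired audio/visual embeddings.

    audio_emb, video_emb: (batch, dim) arrays; row i of each forms a positive pair.
    """
    # L2-normalize so dot products become cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature  # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (aligned pairs) as targets, both directions
    log_softmax_av = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_softmax_va = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_a2v = -np.mean(np.diag(log_softmax_av))
    loss_v2a = -np.mean(np.diag(log_softmax_va))
    return (loss_a2v + loss_v2a) / 2

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 16))
# Aligned pairs yield a low loss; misaligned (shuffled) pairs score much higher
aligned = info_nce(emb, emb)
shuffled = info_nce(emb, emb[::-1])
```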
---

#### 🌟 Key Advantages
1. **Efficient Learning**: High-performance models can be trained with only **223 hours of labeled data** (from the LRS2 dataset).
2. **Cross-Task Generalizability**: Unified representation learning allows direct adaptation to various downstream tasks without task-specific architectural adjustments.
3. **Robustness**: Performance improves by **70%+** in noisy environments (0 dB SNR), significantly outperforming traditional audio-only models.

---
#### Usage
1. Install dependencies:
```bash
pip install -r requirements.txt
# Make sure that whisper and fairseq are installed
pip install -U openai-whisper
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
```

2. Run VSR/AVSR inference with CoGenAV:
```python
import whisper
from whisper.model import AudioEncoder

from infer_vsr_avsr import cogenav_forward
from models.cogenav import CoGenAV

# Override the Whisper encoder's forward function
AudioEncoder.forward = cogenav_forward

# Load the CoGenAV model
cogenav = CoGenAV(cfg_file="config/base.yaml", model_tensor="weights/base_cogenav.pt")

# Load a Whisper model as the SR head
SR_Head = whisper.load_model("small", download_root="weights/whisper/")
SR_Head.encoder.adapter = cogenav.adapter.half()

# Prepare input using CoGenAV; `video` and `audio` are the preprocessed inputs
input_ids = cogenav(video, audio).permute(0, 2, 1)   # For cogenav_av
# input_ids = cogenav(video, None).permute(0, 2, 1)  # For cogenav_v
# input_ids = cogenav(None, audio).permute(0, 2, 1)  # For cogenav_a
# input_ids = audio                                  # For whisper_a

# Decode with the Whisper model (`options` is a whisper.DecodingOptions instance)
result = whisper.decode(SR_Head, input_ids, options)[0]
```

3. Run AVSS/AVSE inference with CoGenAV:
```python
from models.cogenav import CoGenAV
from models.sepformer import build_Sepformer

# Load the CoGenAV model
cogenav = CoGenAV(cfg_file="config/base.yaml", model_tensor="weights/base_cogenav.pt")

# Load a Sepformer model as the AVSS/AVSE head
sepformer_head = build_Sepformer().cuda()

# Separate the target speech from the mixed waveform using lip features
lip_feature = cogenav(video, None, use_upsampler=False)
sep_wav = sepformer_head.forward(audio_mix, lip_feature)
```

4. Inference scripts:
```bash
python infer_vsr_avsr.py --input_type cogenav_av --model_size large --cogenav_ckpt weights/large_cogenav.pt
python infer_avse_avss.py --task_type avse
```

## 🎬 Demo
### Demo for AVSR/VSR
<table class="center">
<tr>
<td colspan="2" style="text-align: center; font-weight: bold;">
AVSR/VSR
</td>
</tr>
<tr>
<td colspan="2" style="text-align: center;">
<video src="https://github.com/user-attachments/assets/e44e4606-9ef0-4fc7-a1e0-0add000f8e5f" controls preload></video>
<video src="https://github.com/user-attachments/assets/6c0cfe05-e82e-4b05-bd07-f4e0ebf2375f" controls preload></video>
<video src="https://github.com/user-attachments/assets/d1190323-dd31-4a74-b2f7-25ce3ec72c35" controls preload></video>
</td>
</tr>
</table>

### Demo for AVSS/AVSE

<table style="width:100%; text-align:center;">
<tr>
<td colspan="2" style="font-weight: bold; font-size: 1.5em; text-align: center;">
AVSS (Audio-Visual Speech Separation)
</td>
</tr>
<tr>
<td width="50%">
<video src="https://github.com/user-attachments/assets/13181ace-bb1e-4a6a-97b5-440caa1c93ef" controls preload></video>
</td>
<td width="50%">
<video src="https://github.com/user-attachments/assets/24a128fb-9686-4c48-955c-8f48c98847a8" controls preload></video>
</td>
</tr>
</table>

<table style="width:100%; text-align:center;">
<tr>
<td colspan="4" style="font-weight: bold; font-size: 1.5em; text-align: center;">
AVSE (Audio-Visual Speech Enhancement)
</td>
</tr>
<tr>
<td width="25%">
<video src="https://github.com/user-attachments/assets/bd7205e8-4eac-4f24-b5a3-251c35b35429" controls preload></video>
</td>
<td width="25%">
<video src="https://github.com/user-attachments/assets/3101da59-b535-43dc-b58f-8d62625a4b8b" controls preload></video>
</td>
<td width="25%">
<video src="https://github.com/user-attachments/assets/7f2011bf-ad67-4a67-b7b9-619e3bf04692" controls preload></video>
</td>
<td width="25%">
<video src="https://github.com/user-attachments/assets/e37e19d6-9a63-422b-b200-d827b4e9b317" controls preload></video>
</td>
</tr>
</table>

---
## Results
### CoGenAV for VSR/AVSR

| Size | SR Head | Modalities | VSR | AVSR @noise | AVSR @clean | AVSR @clean (SFT Whisper) |
|------|---------|------------|-----|-------------|-------------|---------------------------|
| - | Whisper medium | A | - | 34.2 | 6.4 | 1.5 |
| **Base** | Whisper small | AV | 24.8 | 5.2 | 2.5 | - |
| **Large** | Whisper medium | AV | 20.4 | 2.6 | 1.8 | **1.27** |

> **Note:** VSR/AVSR results on LRS2, reported as WER (%); lower is better. All results were obtained from training solely on the LRS2 dataset.
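
The WER numbers above are word-level edit distance divided by reference length. As a minimal sketch (not the project's evaluation script), WER can be computed like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # → 1/6 ≈ 0.167
```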

### CoGenAV Base for AVSS/AVSE

| Task | SS Head | Test Dataset | SI-SNRi | SDRi | PESQ |
|------|---------|--------------|---------|------|------|
| **AVSS** | AV-Sepformer | mix_2_spk_tt | 15.7 | 16.0 | 3.23 |
| **AVSE** | AV-Sepformer | lrs2_test+noise | 8.3 | 9.0 | 2.56 |

> **Note:** AVSS/AVSE results on LRS2. Metrics are averaged over all speakers in each test set; higher SI-SNRi, SDRi, and PESQ are better.

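SI-SNRi is the improvement in scale-invariant SNR of the separated output over the unprocessed mixture. A minimal NumPy sketch of SI-SNR, assuming 1-D float arrays (illustrative, not the repo's metric code):

```python
import numpy as np

def si_snr(estimate, reference, eps=1e-8):
    """Scale-invariant SNR in dB between an estimated and a reference signal."""
    # Zero-mean both signals so the measure is offset-invariant
    est = estimate - estimate.mean()
    ref = reference - reference.mean()
    # Project the estimate onto the reference to isolate the target component
    s_target = (est @ ref) / (ref @ ref + eps) * ref
    e_noise = est - s_target
    return 10 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps))

# SI-SNRi = gain of the separated output over the raw mixture:
# si_snri = si_snr(separated, clean) - si_snr(mixture, clean)
```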
### CoGenAV Base for ASD

| Task | SD Head | Test Dataset | mAP |
|------|---------|--------------|-----|
| **ASD** | LRASD | Talkies | 96.3 |