digital-avatar committed on
Commit 3f43ef4 · verified · 1 Parent(s): b1436e7

Update readme

Files changed (1)
  1. README.md +201 -6
README.md CHANGED
@@ -17,12 +17,207 @@ hugging_face_paper_page: https://huggingface.co/papers/2508.10576
17
 
18
  <div align="center" style="font-family: charter;">
19
 
 
20
 
21
- <p align="center">
22
- <img src="pic.png" width="400"/>
23
- <p>
24
 
25
- <!-- <h1></br>From Multimodal Perception to Empathetic Context-Aware Responses through Reasoning MLLMs</h1> -->
26
 
27
- <div>
28
- <a href="https://scholar.google.com/citations?user=sPQ
17
 
18
  <div align="center" style="font-family: charter;">
19
 
20
+ <h1><br>HumanSense: From Multimodal Perception to Empathetic Context-Aware Responses through Reasoning MLLMs</h1>
21
 
22
+ <div>
23
+ <a href="https://scholar.google.com/citations?user=sPQqpXsAAAAJ&hl=en&oi=sra">Zheng Qin<sup>1</sup></a>,
24
+ <a href="https://scholar.google.com/citations?user=S8FmqTUAAAAJ&hl=en">Ruobing Zheng<sup>*</sup><sup>2</sup></a>,
25
+ <a href="https://scholar.google.com/citations?user=3WVFdMUAAAAJ&hl=en">Yabing Wang<sup>1</sup></a>,
26
+ <a href="https://scholar.google.com/citations?user=yOtsVWQAAAAJ&hl=en&oi=sra">Tianqi Li<sup>2</sup></a>,
27
+ <a href="https://yuanyi.pub/">Yi Yuan<sup>2</sup></a>,
28
+ <a href="https://scholar.google.com/citations?hl=en&user=8SCEv-YAAAAJ&view_op=list_works&sortby=pubdate">Jingdong Chen<sup>2</sup></a>,
29
+ <a href="https://scholar.google.com/citations?user=RypRCUQAAAAJ&hl=en">Le Wang<sup>†</sup><sup>1</sup></a> <br>
30
+ <span style="font-size: 13px; margin-top: 0.8em">
31
+ <br>
32
+ <sup>*</sup>Co-first authors. Project Lead.
33
+ <sup>†</sup>Corresponding Author.
34
+ <br>
35
+ <sup>1</sup>Xi'an Jiaotong University. <sup>2</sup>Ant Group.
36
+ <br>
37
+ </span>
38
+ </div>
39
 
40
+ <a target="_blank" href="https://huggingface.co/papers/2508.10576" ><button><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg" alt="Hugging Face Paper" style="height:1em; vertical-align:middle;"> Hugging Face Paper</button></a>
41
+ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
42
+ <a target="_blank" href="https://arxiv.org/abs/2508.10576" ><button><i class="ai ai-arxiv"></i> arXiv:2508.10576</button></a>
43
+ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
44
+ <a target="_blank" href="https://digital-avatar.github.io/ai/HumanSense/" ><button><i class="ai ai-arxiv"></i> Homepage</button></a>
45
+ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
46
+ <a target="_blank" href="https://github.com/antgroup/HumanSense" ><button><i class="ai ai-arxiv"></i> GitHub</button></a>
47
+ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
48
+ <a target="_blank" href="https://huggingface.co/datasets/antgroup/HumanSense_Benchmark">
49
+ <button>
50
+ <img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg"
51
+ alt="Hugging Face" style="height:1em; vertical-align:middle;">
52
+ Hugging Face (data)
53
+ </button>
54
+ </a>
55
+ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
56
+ <a target="_blank" href="https://huggingface.co/antgroup/HumanSense_Omni_Reasoning">
57
+ <button>
58
+ <img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg"
59
+ alt="Hugging Face" style="height:1em; vertical-align:middle;">
60
+ Hugging Face (model)
61
+ </button>
62
+ </a>
63
 
64
+
65
+ <img src="https://github.com/antgroup/HumanSense/blob/main/docs/figure1.png?raw=true" width="100%"/>
66
+ <p align="justify"><i>While Multimodal Large Language Models (MLLMs) show immense promise for achieving truly human-like interactions, progress is hindered by the lack of fine-grained evaluation frameworks for human-centered scenarios, encompassing both the understanding of complex human intentions and the provision of empathetic, context-aware responses. Here we introduce HumanSense, a comprehensive benchmark designed to evaluate the human-centered perception and interaction capabilities of MLLMs, with a particular focus on deep understanding of extended multimodal contexts and the formulation of rational feedback. Our evaluation reveals that leading MLLMs still have considerable room for improvement, particularly for advanced interaction-oriented tasks. Supplementing visual input with audio and text information yields substantial improvements, and Omni-modal models show advantages on these tasks. Furthermore, we argue that appropriate feedback stems from a contextual analysis of the interlocutor's needs and emotions, with reasoning ability serving as the key to unlocking it. Accordingly, we employ a multi-stage, modality-progressive reinforcement learning to enhance the reasoning abilities of an Omni model, achieving substantial gains on evaluation results. Additionally, we observe that successful reasoning processes exhibit highly consistent thought patterns. By designing corresponding prompts, we also enhance the performance of non-reasoning models in a training-free manner. Project page: [HumanSense Homepage](https://digital-avatar.github.io/ai/HumanSense/)
67
+ </i></p>
68
+
69
+ </div>
70
+
71
+ ## Release
72
+ - `2025-11-07` HumanSense is accepted by AAAI 2026!
73
+ - `2025-08-27` :hearts: We released both the training code and the dataset!
74
+ - `2025-08-27` :hearts: We released the benchmark and evaluation code!
75
+ - `2025-08-15` :rocket: We released our paper!
76
+
77
+ ## Contents
78
+
79
+ - [Release](#release)
80
+ - [Contents](#contents)
81
+ - [HumanSense](#humansense)
82
+ - [Results](#results)
83
+ - [Run Your Own Evaluation](#run-your-own-evaluation)
84
+ - [Training Omni Model](#training-omni-model)
85
+ - [Citation](#citation)
86
+
87
+
88
+ ## HumanSense
89
+ The evaluation tasks are organized into a four-tier pyramid (L1–L4) of increasing difficulty:
90
+ <img src="https://github.com/antgroup/HumanSense/blob/main/docs/figure2.png?raw=true" width="100%"/>
91
+
92
+
93
+ ## Results
94
+
95
+ **Evaluation Setups:** We conduct a comprehensive evaluation of leading Multimodal Large Language Models (MLLMs) with sizes up to 10B, including: (1) Visual LLMs, which represent the most mainstream branch of MLLMs today; (2) Audio LLMs; and (3) Omni-modal LLMs that are natively designed for integrating vision, audio, and text.
96
+ <img src="https://github.com/antgroup/HumanSense/blob/main/docs/table1.png?raw=true" width="100%"/>
97
+
98
+ ## Run Your Own Evaluation
99
+
100
+ Download the evaluation code from [here](https://github.com/antgroup/HumanSense).
101
+
102
+ ### Requirements
103
+ - Configure the environment required for the model to be tested; the benchmark has no special requirements.
104
+
105
+ - ffmpeg
106
+ ```bash
107
+ conda activate Modelxx_env  # the environment corresponding to the model under test
108
+ cd HumanSense-main
109
+ wget https://ffmpeg.org/releases/ffmpeg-4.4.tar.gz
110
+ tar -xvf ffmpeg-4.4.tar.gz
111
+ cd ffmpeg-4.4
112
+ ./configure
113
+ make
114
+ sudo make install
115
+ ```
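+ If a recent ffmpeg is already installed system-wide (for example via your package manager), you can most likely skip the source build above. A quick check that the binaries resolve on your `PATH`:
+ ```bash
+ # confirm ffmpeg and ffprobe are available and print their versions
+ ffmpeg -version | head -n 1
+ ffprobe -version | head -n 1
+ ```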
116
+
117
+ ### Installation
118
+ - **Download Dataset**: Retrieve all necessary files from the folder `bench_data` in [🤗 HumanSense_Benchmark](https://huggingface.co/datasets/antgroup/HumanSense_Benchmark).
119
+
120
+ - **Decompress Files**: Extract the downloaded files and organize them in the `./HumanSense_bench` directory as follows:
121
+
122
+ ```
123
+ HumanSense-main/
124
+ ├── HumanSense_bench/src/data
125
+ │   ├── audios/
126
+ │   ├── videos/
127
+ │   ├── HumanSense_AQA.json
128
+ │   └── HumanSense_VQA.json
129
+ ```
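+ One way to fetch only the `bench_data` folder is the `huggingface_hub` CLI (a suggestion, not the only route; downloading through the web UI works just as well). The local directory name below is a placeholder:
+ ```bash
+ # sketch: download bench_data from the dataset repo (requires: pip install -U huggingface_hub)
+ huggingface-cli download antgroup/HumanSense_Benchmark \
+     --repo-type dataset \
+     --include "bench_data/*" \
+     --local-dir ./hf_download
+ # then extract the downloaded archives into HumanSense_bench/src/data so the layout matches the tree above
+ ```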
130
+
131
+ ### Evaluation
132
+ - **Model Preparation**: Prepare your own model for evaluation by following the instructions provided [here](https://github.com/antgroup/HumanSense/blob/main/docs/model_guide.md). This guide will help you set up and configure your model to ensure it is ready for testing against the dataset.
133
+ Now you can run the benchmark:
134
+
135
+ - **Run and score**:
136
+ ```sh
137
+ cd HumanSense-main
138
+ sh HumanSense_bench/eval.sh
139
+ sh HumanSense_bench/eval_audio.sh
140
+ sh HumanSense_bench/score.sh
141
+ ```
142
+ ## Training Omni Model
143
+ We train [Qwen2.5-Omni-7B](https://huggingface.co/Qwen/Qwen2.5-Omni-7B) using 8 x H20 (96 GB) GPUs.
144
+
145
+ ### Requirements
146
+ ```bash
147
+ # First, configure the environment required to run Qwen2.5-Omni-7B.
148
+ conda activate omni
149
+ pip install accelerate
150
+ # It's highly recommended to use the `[decord]` extra for faster video loading.
151
+ pip install qwen-omni-utils[decord] -U
152
+
153
+
154
+ # configure the training requirements
155
+ cd HumanSense-main/Open-R1-Video
156
+ pip3 install -e ".[dev]"
157
+ pip uninstall -y transformers
158
+ unzip transformers-main.zip
159
+ cd transformers-main
160
+ pip install -e .
161
+ cd ..
162
+ pip install nvidia-cublas-cu12 -U
163
+ pip3 install flash_attn --no-build-isolation
164
+ pip uninstall -y qwen-omni-utils
165
+ cd qwen-omni-utils
166
+ pip install -e .
167
+ cd ..
168
+ pip uninstall -y qwen-vl-utils
169
+ cd qwen-vl-utils
170
+ pip install -e .
171
+ cd ..
172
+
173
+ pip install qwen-omni-utils[decord] -U
174
+ pip install trl==0.14.0
175
+ pip install tensorboardX
176
+ ```
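+ After the installs, a quick sanity check that the patched packages import cleanly (a sketch; module names are inferred from the pip commands above):
+ ```bash
+ # verify the local transformers build and the Qwen utils / TRL / flash-attn installs
+ python -c "import transformers, trl, qwen_omni_utils, qwen_vl_utils; print(transformers.__version__, trl.__version__)"
+ python -c "import flash_attn; print('flash_attn OK')"
+ ```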
177
+
178
+ ### Data Installation
179
+ - **Download Dataset**: Retrieve all necessary files from the folder `train_data` in [🤗 HumanSense_Benchmark](https://huggingface.co/datasets/antgroup/HumanSense_Benchmark).
180
+
181
+
182
+ - **Decompress Files**: Extract the downloaded files and organize them in the `./Open-R1-Video` directory as follows:
183
+
184
+ ```
185
+ HumanSense-main/
186
+ ├── Open-R1-Video/data
187
+ │   ├── audios/
188
+ │   ├── videos/
189
+ │   ├── merged_video_wo_audio.json
190
+ │   ├── merged_video_audio.json
191
+ │   └── merged_video_w_audio.json
192
+ ```
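+ As with `bench_data` above, the `train_data` folder can be fetched either through the web UI or, for example, with the `huggingface_hub` CLI (local directory is a placeholder):
+ ```bash
+ # sketch: download only train_data from the same dataset repo
+ huggingface-cli download antgroup/HumanSense_Benchmark \
+     --repo-type dataset \
+     --include "train_data/*" \
+     --local-dir ./hf_download
+ ```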
193
+ ### Training
194
+ - **Run**:
195
+ ```sh
196
+ cd HumanSense-main
197
+ sh Open-R1-Video/framework1/qwen-7b_omni_1video_wo_audio.sh
198
+ sh Open-R1-Video/qwen-7b_omni_2audio.sh
199
+ sh Open-R1-Video/framework2/qwen-7b_omni_3video_w_audio.sh
200
+ ```
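+ Since `tensorboardX` is installed above, training curves can be inspected with TensorBoard, assuming the viewer itself is installed (`pip install tensorboard`); the log directory below is a placeholder, use the output directory configured in the stage scripts:
+ ```bash
+ # point TensorBoard at the run's output directory (placeholder path)
+ tensorboard --logdir Open-R1-Video/experiments/<run_name> --port 6006
+ ```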
201
+ At any stage of training, if the loaded checkpoint does not contain `spk_dict.pt`, copy `Open-R1-Video/experiments/spk_dict.pt` into that checkpoint directory.
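+ For example (the checkpoint directory below is a placeholder):
+ ```bash
+ # copy the speaker dictionary next to the saved weights if it is missing
+ CKPT_DIR=Open-R1-Video/experiments/your_checkpoint_dir
+ test -f "$CKPT_DIR/spk_dict.pt" || cp Open-R1-Video/experiments/spk_dict.pt "$CKPT_DIR/"
+ ```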
202
+
203
+ We release the trained model at [🤗 HumanSense_Omni_Reasoning](https://huggingface.co/antgroup/HumanSense_Omni_Reasoning).
204
+ - **Inference**: Set the model name to "rivideo-omni7B" and point the loaded weights to your trained checkpoint, then run:
205
+ ```sh
206
+ cd HumanSense-main
207
+ sh HumanSense_bench/eval.sh
208
+ sh HumanSense_bench/eval_audio.sh
209
+ sh HumanSense_bench/score.sh
210
+ ```
211
+
212
+
213
+ ## Citation
214
+
215
+ If you find our paper and code useful in your research, please consider giving us a star :star: and citing our work :pencil: :)
216
+ ```bibtex
217
+ @article{qin2025humansense,
218
+ title={HumanSense: From Multimodal Perception to Empathetic Context-Aware Responses through Reasoning MLLMs},
219
+ author={Qin, Zheng and Zheng, Ruobing and Wang, Yabing and Li, Tianqi and Yuan, Yi and Chen, Jingdong and Wang, Le},
220
+ journal={arXiv preprint arXiv:2508.10576},
221
+ year={2025}
222
+ }
223
+ ```