---
license: cc-by-4.0
tags:
- emotion
- reasoning
- omnillm
- multimodal
---
<div align="center">
<img src="./assets/avere_logo_cropped.png" width="400">
<h1>Improving Audiovisual Emotion Reasoning with Preference Optimization</h1>
<h2>EmoReAlM Benchmark</h2>
<h3>ICLR 2026</h3>
<p>
<a href="https://arxiv.org/abs/2602.07054">
<img src="https://img.shields.io/badge/arXiv-2602.07054-b31b1b.svg?logo=arxiv" alt="arXiv">
</a>
<!-- <a href="https://github.com/ihp-lab/AVERE">
<img src="https://img.shields.io/badge/Github-AVERE-black?logo=github" alt="GitHub">
</a> -->
<a href="https://avere-iclr.github.io/">
<img src="https://img.shields.io/badge/Website-avere--iclr.github.io-purple?logo=data:image/svg%2bxml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz4NCjwhLS0gU3ZnIFZlY3RvciBJY29ucyA6IGh0dHA6Ly93d3cub25saW5ld2ViZm9udHMuY29tL2ljb24gLS0+DQo8IURPQ1RZUEUgc3ZnIFBVQkxJQyAiLS8vVzNDLy9EVEQgU1ZHIDEuMS8vRU4iICJodHRwOi8vd3d3LnczLm9yZy9HcmFwaGljcy9TVkcvMS4xL0RURC9zdmcxMS5kdGQiPg0KPHN2ZyB2ZXJzaW9uPSIxLjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgeG1sbnM6eGxpbms9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkveGxpbmsiIHg9IjBweCIgeT0iMHB4IiB2aWV3Qm94PSIwIDAgMjU2IDI1NiIgZW5hYmxlLWJhY2tncm91bmQ9Im5ldyAwIDAgMjU2IDI1NiIgeG1sOnNwYWNlPSJwcmVzZXJ2ZSI+DQo8bWV0YWRhdGE+IFN2ZyBWZWN0b3IgSWNvbnMgOiBodHRwOi8vd3d3Lm9ubGluZXdlYmZvbnRzLmNvbS9pY29uIDwvbWV0YWRhdGE+DQo8Zz48Zz48cGF0aCBmaWxsPSIjRkZGRkZGIiBkPSJNMTI4LDI0NmMtNjUuMiwwLTExOC01Mi44LTExOC0xMThDMTAsNjIuOCw2Mi44LDEwLDEyOCwxMGM2NS4yLDAsMTE4LDUyLjgsMTE4LDExOEMyNDYsMTkzLjIsMTkzLjIsMjQ2LDEyOCwyNDZMMTI4LDI0NnogTTIzMS4zLDEyOGgtNTljMCwxNS43LTEuMiwzMC42LTMuMyw0NC4yaDUxLjlDMjI3LjMsMTU4LjgsMjMxLjMsMTQzLjksMjMxLjMsMTI4TDIzMS4zLDEyOHogTTIxMi4yLDE4N2gtNDUuOWMtMy43LDE2LjktOC45LDMxLjItMTUuMSw0MS40QzE3Ni40LDIyMi42LDE5Ny44LDIwNy41LDIxMi4yLDE4N0wyMTIuMiwxODd6IE05OC45LDExMy4zaDU4LjFjLTAuNC0xMC41LTEuNC0yMC4zLTIuNi0yOS41aC01Mi45QzEwMC4zLDkzLDk5LjQsMTAyLjgsOTguOSwxMTMuM0w5OC45LDExMy4zeiBNOTguNSwxMjhjMCwxNS45LDEuMSwzMC44LDMsNDQuMmg1My4xYzEuOC0xMy41LDIuOS0yOC4zLDIuOS00NC4ySDk4LjVMOTguNSwxMjh6IE0xMjgsMjMxLjNjMTAsMCwxOC44LTE3LjYsMjQuMi00NC4zaC00OC4zQzEwOS4yLDIxMy43LDExOCwyMzEuMywxMjgsMjMxLjNMMTI4LDIzMS4zeiBNMTA0LjksMjI4LjRjLTYuMy0xMC4yLTExLjUtMjQuNS0xNS4xLTQxLjRINDMuOEM1OC4yLDIwNy41LDc5LjYsMjIyLjYsMTA0LjksMjI4LjRMMTA0LjksMjI4LjR6IE0yNC44LDEyOEwyNC44LDEyOEwyNC44LDEyOEwyNC44LDEyOEwyNC44LDEyOHogTTM1LjEsMTcyLjNIODdjLTIuMS0xMy43LTMuMy0yOC42LTMuMy00NC4yaC01OUMyNC44LDE0My45LDI4LjcsMTU4LjgsMzUuMSwxNzIuM0wzNS4xLDE3Mi4zeiBNODQuMSwxMTMuM2MwLjUtMTAuNCwxLjUtMjAuMiwyLjktMjkuNUgzNWMtNC40LDkuMi03LjQsMTkuMS05LDI5LjVIODQuMUw4NC4xLDExMy4zeiBNNDMuNiw2OWg0Ni4xYzMuNy0xNi45LDguOC0zMS4yLDE1LjE
tNDEuNEM3OS42LDMzLjQsNTgsNDguNSw0My42LDY5TDQzLjYsNjl6IE0xMjgsMjQuN2MtMTAsMC0xOC43LDE3LjctMjQsNDQuM0gxNTJDMTQ2LjcsNDIuNCwxMzgsMjQuNywxMjgsMjQuN0wxMjgsMjQuN3ogTTE1MS4xLDI3LjZjNi4yLDEwLjIsMTEuNCwyNC42LDE1LjEsNDEuNGg0Ni4xQzE5OCw0OC41LDE3Ni40LDMzLjQsMTUxLjEsMjcuNkwxNTEuMSwyNy42eiBNMjIxLDgzLjhoLTUyLjFjMS40LDkuMywyLjUsMTkuMiwzLDI5LjVIMjMwQzIyOC41LDEwMi44LDIyNS40LDkyLjksMjIxLDgzLjhMMjIxLDgzLjh6Ii8+PC9nPjwvZz4NCjwvc3ZnPg==" alt="Website">
</a>
<a href="https://huggingface.co/datasets/chaubeyG/EmoReAlM">
<img src="https://img.shields.io/badge/Benchmark-EmoReAlM-orange?logo=huggingface" alt="Benchmark">
</a>
<a href="https://huggingface.co/datasets/chaubeyG/EmoReAlM/blob/main/LICENSE.rst">
<img src="https://img.shields.io/badge/License-USC%20Research-green?logo=data:image/svg%2bxml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz4NCjwhLS0gU3ZnIFZlY3RvciBJY29ucyA6IGh0dHA6Ly93d3cub25saW5ld2ViZm9udHMuY29tL2ljb24gLS0+DQo8IURPQ1RZUEUgc3ZnIFBVQkxJQyAiLS8vVzNDLy9EVEQgU1ZHIDEuMS8vRU4iICJodHRwOi8vd3d3LnczLm9yZy9HcmFwaGljcy9TVkcvMS4xL0RURC9zdmcxMS5kdGQiPg0KPHN2ZyB2ZXJzaW9uPSIxLjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgeG1sbnM6eGxpbms9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkveGxpbmsiIHg9IjBweCIgeT0iMHB4IiB2aWV3Qm94PSIwIDAgMjU2IDI1NiIgZW5hYmxlLWJhY2tncm91bmQ9Im5ldyAwIDAgMjU2IDI1NiIgeG1sOnNwYWNlPSJwcmVzZXJ2ZSI+DQo8bWV0YWRhdGE+IFN2ZyBWZWN0b3IgSWNvbnMgOiBodHRwOi8vd3d3Lm9ubGluZXdlYmZvbnRzLmNvbS9pY29uIDwvbWV0YWRhdGE+DQo8Zz4gPHBhdGggZmlsbD0iI0ZGRkZGRiIgZD0iTTIyOC4zLDIyMi40SDI3LjdjLTkuOCwwLTE3LjctNy45LTE3LjctMTcuN1Y1MS4zYzAtOS44LDcuOS0xNy43LDE3LjctMTcuN2gyMDAuNiBjOS44LDAsMTcuNyw3LjksMTcuNywxNy43djE1My40QzI0NiwyMTQuNSwyMzguMSwyMjIuNCwyMjguMywyMjIuNHogTTI3LjcsNDUuNGMtMy4zLDAtNS45LDIuNi01LjksNS45bDAsMHYxNTMuNCBjMCwzLjMsMi42LDUuOSw1LjksNS45aDIwMC42YzMuMywwLDUuOS0yLjYsNS45LTUuOVY1MS4zYzAtMy4zLTIuNi01LjktNS45LTUuOUgyNy43eiIvPiA8cGF0aCBmaWxsPSIjRkZGRkZGIiBkPSJNMTIyLjEsODAuOEg1MS4zYy0zLjMsMC01LjktMi42LTUuOS01LjljMC0zLjMsMi42LTUuOSw1LjktNS45aDcwLjhjMy4zLDAsNS45LDIuNiw1LjksNS45IEMxMjgsNzguMiwxMjUuNCw4MC44LDEyMi4xLDgwLjh6IE0xMjIuMSwxMTYuMkg1MS4zYy0zLjMsMC01LjktMi42LTUuOS01LjlzMi42LTUuOSw1LjktNS45aDcwLjhjMy4zLDAsNS45LDIuNiw1LjksNS45IFMxMjUuNCwxMTYuMiwxMjIuMSwxMTYuMnogTTEyMi4xLDEzOS44SDUxLjNjLTMuMywwLTUuOS0yLjYtNS45LTUuOXMyLjYtNS45LDUuOS01LjloNzAuOGMzLjMsMCw1LjksMi42LDUuOSw1LjkgUzEyNS40LDEzOS44LDEyMi4xLDEzOS44eiBNMTIyLjEsMTYzLjRINTEuM2MtMy4zLDAtNS45LTIuNi01LjktNS45czIuNi01LjksNS45LTUuOWg3MC44YzMuMywwLDUuOSwyLjYsNS45LDUuOSBTMTI1LjQsMTYzLjQsMTIyLjEsMTYzLjR6IE0xMTAuMywxODdoLTU5Yy0zLjMsMC01LjktMi42LTUuOS01LjljMC0zLjMsMi42LTUuOSw1LjktNS45aDU5YzMuMywwLDUuOSwyLjYsNS45LDUuOSBDMTE2LjIsMTg0LjQsMTEzLjYsMTg3LDExMC4zLDE4N3ogTTIyMS43LDg3LjJsLTkuNi03TDIwOC41LDY5aC0xMS45bC05LjYtN2wtOS42LDd
oLTExLjlsLTMuNywxMS4zbC05LjYsN2wzLjcsMTEuM2wtMy43LDExLjMgbDkuNiw3bDEuNiw0LjhjMCwwLjIsMCwwLjQsMCwwLjZ2NTljMCwzLjMsMi42LDUuOSw1LjksNS45YzEuNiwwLDMuMS0wLjYsNC4yLTEuN2wxMy41LTEzLjVsMTMuNSwxMy41YzEuNywxLjcsNC4yLDIuMiw2LjQsMS4zIGMyLjItMC45LDMuNi0zLjEsMy42LTUuNXYtNTljMC0wLjIsMC0wLjQsMC0wLjZsMS42LTQuOGw5LjYtN2wtMy43LTExLjNMMjIxLjcsODcuMkwyMjEuNyw4Ny4yeiBNMTY2LjEsOTEuN2w1LjgtNC4ybDIuMi02LjhoNy4xIGw1LjgtNC4ybDUuOCw0LjJoNy4xbDIuMiw2LjhsNS44LDQuMmwtMi4yLDYuOGwyLjIsNi44bC01LjgsNC4ybC0yLjIsNi44aC03LjFsLTUuOCw0LjJsLTUuOC00LjJoLTcuMWwtMi4yLTYuOGwtNS44LTQuMmwyLjItNi44IEwxNjYuMSw5MS43eiBNMTkxLjIsMTU5LjJjLTIuMy0yLjMtNi0yLjMtOC4zLDBsLTcuNiw3LjZWMTI4aDIuMmw5LjYsN2w5LjYtN2gyLjJ2MzguOEwxOTEuMiwxNTkuMkwxOTEuMiwxNTkuMnoiLz48L2c+DQo8L3N2Zz4=" alt="License">
</a>
<!-- <a href="https://www.python.org/">
<img src="https://img.shields.io/badge/Python-3.10+-blue.svg?logo=python" alt="Python Version">
</a> -->
</p>
<br>
</div>
This is the official benchmark dataset for the **ICLR 2026** paper [AVERE: Improving Audiovisual Emotion Reasoning with Preference Optimization](https://arxiv.org/abs/2602.07054).
Refer to our [project page](https://avere-iclr.github.io/) for more information on the method.
---
## Overview
**EmoReAlM** is a benchmark designed to evaluate multimodal large language models (MLLMs) on audiovisual emotion understanding. It specifically targets two critical failure modes of current MLLMs:
1. **Reasoning errors** — spurious associations between emotions and irrelevant audiovisual cues.
2. **Perception errors** — hallucination of audiovisual cues driven by text priors in the language model backbone.
EmoReAlM consists of **4,000 multiple-choice questions** spanning five evaluation tasks across audio and visual modalities, built on top of video clips from DFEW.
---
## Benchmark Tasks
EmoReAlM evaluates MLLMs across five tasks:
| Task | Key | Description | # Samples |
|------|-----|-------------|-----------|
| **Reasoning Basic (Audio)** | `reasoning_basic_audio` | Tests whether the model can correctly associate speech semantics and paralinguistic cues (e.g., tone, pitch) with the expressed emotion. | 972 |
| **Reasoning Basic (Visual)** | `reasoning_basic_video` | Tests whether the model can correctly associate facial expressions and body language with the expressed emotion. | 1,024 |
| **Modality Agreement** | `modality_agreement` | Tests whether the model can determine if visual and audio cues are consistent in conveying the same emotion. | 456 |
| **Reasoning Stress Test (Audio)** | `reasoning_stress_audio` | Probes the model for audio hallucinations — whether the model fabricates or agrees with non-existent audio cues (e.g., affirming a "somber tone" that is not present). | 820 |
| **Reasoning Stress Test (Visual)** | `reasoning_stress_video` | Probes the model for visual hallucinations — whether the model fabricates or agrees with non-existent visual cues (e.g., affirming "clenched fists" that are not present). | 728 |
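The per-task sample counts above can be recomputed from the `task` field of the annotation file described under Data Format below; a minimal sketch:

```python
import json
from collections import Counter

def task_counts(path):
    """Count benchmark samples per task key in the annotation file."""
    with open(path) as f:
        samples = json.load(f)
    return Counter(s["task"] for s in samples)

# e.g. task_counts("emorealm_v1.json")
# expected keys: reasoning_basic_audio, reasoning_basic_video,
# modality_agreement, reasoning_stress_audio, reasoning_stress_video
```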
---
## Data Format
Each sample in `emorealm_v1.json` follows this structure:
```json
{
  "id": 77172,
  "video": "part_1/1252.mp4",
  "question": "Does a somber tone or soft-spoken dialogue enhance the feeling of sadness conveyed by the person in the video?",
  "answer": "A",
  "choices": [
    "(A) No",
    "(B) Yes"
  ],
  "task": "reasoning_stress_audio"
}
```
| Field | Description |
|-------|-------------|
| `id` | Unique sample identifier |
| `video` | Relative path to the video file (sourced from DFEW) |
| `question` | The multiple-choice question |
| `answer` | The correct answer key (e.g., `"A"` or `"B"`) |
| `choices` | List of answer choices |
| `task` | One of: `reasoning_basic_audio`, `reasoning_basic_video`, `modality_agreement`, `reasoning_stress_audio`, `reasoning_stress_video` |
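Given this format, per-task and overall accuracy can be computed by matching a model's predicted answer key against `answer`. A minimal scoring sketch (the `predictions` mapping from sample `id` to answer letter is an illustrative assumption, not an official interface; unanswered samples count as incorrect):

```python
from collections import Counter

def score(samples, predictions):
    """Compute per-task and overall accuracy (%) on EmoReAlM samples.

    `samples` is the parsed annotation list; `predictions` maps
    sample id -> predicted answer key ("A", "B", ...).
    """
    correct, total = Counter(), Counter()
    for s in samples:
        total[s["task"]] += 1
        if predictions.get(s["id"]) == s["answer"]:
            correct[s["task"]] += 1
    per_task = {t: 100.0 * correct[t] / total[t] for t in total}
    overall = 100.0 * sum(correct.values()) / sum(total.values())
    return per_task, overall
```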
---
## Leaderboard
For the full leaderboard (including vision-only and audio-only models), visit our [project page](https://avere-iclr.github.io/#leaderboard).
Accuracy (%) on EmoReAlM. Higher is better.
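The Avg. Acc. column in the tables below is consistent with an unweighted mean of the five per-task accuracies (a sketch for reproducing the column, not an official script):

```python
def avg_acc(task_accs):
    """Unweighted mean of the per-task accuracies, rounded to one decimal."""
    return round(sum(task_accs) / len(task_accs), 1)

# Gemini 2.5 Flash row from the table below:
avg_acc([78.0, 88.9, 57.0, 63.5, 73.2])  # -> 72.1
```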
### Proprietary Models
| Model | Reas. Basic (A) | Reas. Basic (V) | Mod. Agree. | Stress (A) | Stress (V) | Avg. Acc. |
|-------|:---:|:---:|:---:|:---:|:---:|:---:|
| Gemini 2.5 Flash | 78.0 | 88.9 | 57.0 | 63.5 | 73.2 | 72.1 |
| Gemini 2.5 Pro | 72.7 | 87.0 | 54.7 | 63.8 | 73.1 | 70.3 |
### Open-source Omni (Audiovisual) Models
| Model | Reas. Basic (A) | Reas. Basic (V) | Mod. Agree. | Stress (A) | Stress (V) | Avg. Acc. |
|-------|:---:|:---:|:---:|:---:|:---:|:---:|
| VideoLLaMA | 21.7 | 22.2 | 34.1 | 46.1 | 48.8 | 37.1 |
| PandaGPT | 37.4 | 35.7 | 53.7 | 45.8 | 47.1 | 44.0 |
| OneLLM | 42.0 | 55.6 | 54.8 | 56.8 | 62.0 | 54.2 |
| VideoLLaMA2 | 63.1 | 66.8 | 52.6 | 53.7 | 59.4 | 59.1 |
| OLA | 63.2 | 60.4 | 51.7 | 63.5 | 62.3 | 60.2 |
| VITA-1.5 | 63.1 | 84.3 | 51.7 | 63.0 | 66.1 | 65.6 |
| Qwen 2.5 Omni | 76.8 | 89.2 | 52.2 | 64.0 | 67.8 | 70.0 |
### AVEm-DPO (Ours)
| Model | Reas. Basic (A) | Reas. Basic (V) | Mod. Agree. | Stress (A) | Stress (V) | Avg. Acc. |
|-------|:---:|:---:|:---:|:---:|:---:|:---:|
| Our base | 69.2 | 85.3 | 51.4 | 53.1 | 66.4 | 65.1 |
| Our base + AVEm-DPO | **77.9** | **92.5** | **68.9** | **82.6** | **94.6** | **83.3** |
| Emot.-LLaMA* | 64.8 | 84.9 | 51.2 | 48.9 | 69.1 | 63.8 |
| Emot.-LLaMA* + AVEm-DPO | **76.5** | **89.1** | **65.6** | **77.3** | **91.8** | **80.1** |
---
## Video Data
The video clips used in EmoReAlM are sourced from the [DFEW](https://dfew-dataset.github.io/) dataset. We provide only the benchmark annotations (questions, answers, and task labels). Users must obtain the original DFEW videos separately under the appropriate license from the DFEW authors.
---
## License
This dataset is distributed under the USC Research license. See [LICENSE.rst](LICENSE.rst) for more details. The benchmark annotations (questions, answer choices, and task labels) are provided by us. The underlying video data is sourced from the DFEW dataset, and users are requested to obtain the videos from the original data source under the appropriate license.
---
## Acknowledgement
Research was sponsored by the Army Research Office and was accomplished under Cooperative Agreement Number W911NF-25-2-0040. Work was also in part supported by the National Science Foundation under Grant IIS-2211550 and the National Institute of Mental Health of the National Institutes of Health under Award Number R61MH135407. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office, NSF, NIH, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
## Citation
```bibtex
@inproceedings{chaubey2026avere,
  title={AVERE: Improving Audiovisual Emotion Reasoning with Preference Optimization},
  author={Chaubey, Ashutosh and Pang, Jiacheng and Siniukov, Maksim and Soleymani, Mohammad},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2026},
  url={https://openreview.net/forum?id=td682AAuPr}
}
```