---
license: cc-by-4.0
tags:
  - emotion
  - reasoning
  - omnillm
  - multimodal
---

# EmoReAlM Benchmark

**Improving Audiovisual Emotion Reasoning with Preference Optimization** (ICLR 2026)



This is the official benchmark dataset for the ICLR 2026 paper *AVERE: Improving Audiovisual Emotion Reasoning with Preference Optimization*.

Refer to our project page for more information on the method.


## Overview

EmoReAlM is a benchmark designed to evaluate multimodal large language models (MLLMs) on audiovisual emotion understanding. It specifically targets two critical failure modes of current MLLMs:

  1. Reasoning errors — spurious associations between emotions and irrelevant audiovisual cues.
  2. Perception errors — hallucination of audiovisual cues driven by text priors in the language model backbone.

EmoReAlM consists of 4,000 multiple-choice questions spanning five evaluation tasks across audio and visual modalities, built on top of video clips from DFEW.


## Benchmark Tasks

EmoReAlM evaluates MLLMs across five tasks:

# Samples |
| Task | Key | Description | # Samples |
|---|---|---|---|
| Reasoning Basic (Audio) | `reasoning_basic_audio` | Tests whether the model can correctly associate speech semantics and paralinguistic cues (e.g., tone, pitch) with the expressed emotion. | 972 |
| Reasoning Basic (Visual) | `reasoning_basic_video` | Tests whether the model can correctly associate facial expressions and body language with the expressed emotion. | 1024 |
| Modality Agreement | `modality_agreement` | Tests whether the model can determine if visual and audio cues are consistent in conveying the same emotion. | 456 |
| Reasoning Stress Test (Audio) | `reasoning_stress_audio` | Probes the model for audio hallucinations: whether it fabricates or agrees with non-existent audio cues (e.g., affirming a "somber tone" that is not present). | 820 |
| Reasoning Stress Test (Visual) | `reasoning_stress_video` | Probes the model for visual hallucinations: whether it fabricates or agrees with non-existent visual cues (e.g., affirming "clenched fists" that are not present). | 728 |

## Data Format

Each sample in emorealm_v1.json follows this structure:

```json
{
    "id": 77172,
    "video": "part_1/1252.mp4",
    "question": "Does a somber tone or soft-spoken dialogue enhance the feeling of sadness conveyed by the person in the video?",
    "answer": "A",
    "choices": [
        "(A) No",
        "(B) Yes"
    ],
    "task": "reasoning_stress_audio"
}
```
| Field | Description |
|---|---|
| `id` | Unique sample identifier |
| `video` | Relative path to the video file (sourced from DFEW) |
| `question` | The multiple-choice question |
| `answer` | The correct answer key (e.g., `"A"` or `"B"`) |
| `choices` | List of answer choices |
| `task` | One of `reasoning_basic_audio`, `reasoning_basic_video`, `modality_agreement`, `reasoning_stress_audio`, `reasoning_stress_video` |
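
For example, a minimal loading sketch in Python, assuming `emorealm_v1.json` is a JSON list of such records sitting at the repository root (the path is an assumption; adjust it to your local copy):

```python
import json
from collections import defaultdict

# Load the benchmark annotations (file location assumed).
with open("emorealm_v1.json", "r", encoding="utf-8") as f:
    samples = json.load(f)  # expected: a list of sample dicts as shown above

# Group samples by evaluation task and report counts.
by_task = defaultdict(list)
for sample in samples:
    by_task[sample["task"]].append(sample)

for task, task_samples in sorted(by_task.items()):
    print(f"{task}: {len(task_samples)} samples")
```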

## Leaderboard

For the full leaderboard (including vision-only and audio-only models), visit our project page.

*Accuracy (%) on EmoReAlM. Higher is better.*

### Proprietary Models

| Model | Reas. Basic (A) | Reas. Basic (V) | Mod. Agree. | Stress (A) | Stress (V) | Avg. Acc. |
|---|---|---|---|---|---|---|
| Gemini 2.5 Flash | 78.0 | 88.9 | 57.0 | 63.5 | 73.2 | 72.1 |
| Gemini 2.5 Pro | 72.7 | 87.0 | 54.7 | 63.8 | 73.1 | 70.3 |

### Open-source Omni (Audiovisual) Models

| Model | Reas. Basic (A) | Reas. Basic (V) | Mod. Agree. | Stress (A) | Stress (V) | Avg. Acc. |
|---|---|---|---|---|---|---|
| VideoLLaMA | 21.7 | 22.2 | 34.1 | 46.1 | 48.8 | 37.1 |
| PandaGPT | 37.4 | 35.7 | 53.7 | 45.8 | 47.1 | 44.0 |
| OneLLM | 42.0 | 55.6 | 54.8 | 56.8 | 62.0 | 54.2 |
| VideoLLaMA2 | 63.1 | 66.8 | 52.6 | 53.7 | 59.4 | 59.1 |
| OLA | 63.2 | 60.4 | 51.7 | 63.5 | 62.3 | 60.2 |
| VITA-1.5 | 63.1 | 84.3 | 51.7 | 63.0 | 66.1 | 65.6 |
| Qwen 2.5 Omni | 76.8 | 89.2 | 52.2 | 64.0 | 67.8 | 70.0 |

### AVEm-DPO (Ours)

| Model | Reas. Basic (A) | Reas. Basic (V) | Mod. Agree. | Stress (A) | Stress (V) | Avg. Acc. |
|---|---|---|---|---|---|---|
| Our base | 69.2 | 85.3 | 51.4 | 53.1 | 66.4 | 65.1 |
| Our base + AVEm-DPO | 77.9 | 92.5 | 68.9 | 82.6 | 94.6 | 83.3 |
| Emot.-LLaMA* | 64.8 | 84.9 | 51.2 | 48.9 | 69.1 | 63.8 |
| Emot.-LLaMA* + AVEm-DPO | 76.5 | 89.1 | 65.6 | 77.3 | 91.8 | 80.1 |
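
The scores above are multiple-choice accuracy. A minimal scoring sketch that computes per-task accuracy and its unweighted mean over tasks; the `predictions` mapping (sample `id` to predicted answer key, e.g., `"A"`) is a hypothetical interface, not official evaluation code:

```python
from collections import defaultdict

def score(samples: list[dict], predictions: dict[int, str]) -> tuple[dict, float]:
    """Return per-task accuracy (%) and the unweighted mean across tasks."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for sample in samples:
        task = sample["task"]
        total[task] += 1
        # A prediction counts as correct only if it matches the answer key exactly.
        if predictions.get(sample["id"]) == sample["answer"]:
            correct[task] += 1
    per_task = {t: 100.0 * correct[t] / total[t] for t in total}
    avg = sum(per_task.values()) / len(per_task)
    return per_task, avg
```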

## Video Data

The video clips used in EmoReAlM are sourced from the DFEW dataset. We provide only the benchmark annotations (questions, answers, and task labels). Users must obtain the original DFEW videos separately under the appropriate license from the DFEW authors.
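
A small sketch of resolving a sample's relative `video` path against a locally obtained DFEW copy (the `DFEW_ROOT` location below is hypothetical):

```python
from pathlib import Path

# Hypothetical location of your locally obtained DFEW clips.
DFEW_ROOT = Path("/data/DFEW/videos")

def resolve_video(sample: dict) -> Path:
    """Map a benchmark sample to its local DFEW video file."""
    path = DFEW_ROOT / sample["video"]  # e.g., "part_1/1252.mp4"
    if not path.exists():
        raise FileNotFoundError(f"Missing DFEW clip: {path}")
    return path
```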


## License

This dataset is distributed under the USC Research license; see LICENSE.rst for details. The benchmark annotations (questions, answer choices, and task labels) are provided by us. The underlying video data is sourced from the DFEW dataset; users must obtain the videos from the original source under the appropriate license.


## Acknowledgement

Research was sponsored by the Army Research Office and was accomplished under Cooperative Agreement Number W911NF-25-2-0040. Work was also in part supported by the National Science Foundation under Grant IIS-2211550 and the National Institute of Mental Health of the National Institutes of Health under Award Number R61MH135407. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office, NSF, NIH, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

## Citation

```bibtex
@inproceedings{chaubey2026avere,
  title={AVERE: Improving Audiovisual Emotion Reasoning with Preference Optimization},
  author={Chaubey, Ashutosh and Pang, Jiacheng and Siniukov, Maksim and Soleymani, Mohammad},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2026},
  url={https://openreview.net/forum?id=td682AAuPr}
}
```