---
tags:
- multimodal
- emotion-recognition
- emotional-intelligence
- video
- conversation
language:
- en
license: apache-2.0
pretty_name: EmoBench-M
task_categories:
- video-classification
- video-text-to-text
configs:
- config_name: default
---
# EmoBench-M: Benchmarking Emotional Intelligence for Multimodal Large Language Models
## Dataset Description
**EmoBench-M** is a comprehensive benchmark designed to evaluate the Emotional Intelligence (EI) of Multimodal Large Language Models (MLLMs). It provides a challenging testbed for assessing a model's ability to understand and interpret human emotions from video, a critical step towards developing more empathetic and human-like AI systems.
The dataset consists of video clips featuring individuals expressing various emotions. Each video is paired with a conversational prompt that asks the model to determine the emotion conveyed. This structure pushes models to integrate visual, auditory, and textual information to make a correct assessment, moving beyond simple text-based sentiment analysis.
If you find this dataset helpful, feel free to ⭐ it on GitHub: [EmoBench-M](https://github.com/Emo-gml/EmoBench-M).
## 📦 Dataset
Each JSON file contains conversation-style prompts and labels aligned with the corresponding video clips. The structure looks like this (the prompt and answer values below are illustrative placeholders):
```json
[
  {
    "id": "0",
    "video": "videos/ch-simsv2s/aqgy4_0004/00023.mp4",
    "conversations": [
      {
        "from": "human",
        "value": "<video>\nWhich emotion does the speaker convey in this clip?"
      },
      {
        "from": "gpt",
        "value": "<emotion label>"
      }
    ]
  }
]
```