---
license: cc-by-nc-4.0
task_categories:
  - video-classification
  - audio-classification
  - text-classification
  - question-answering
  - visual-question-answering
language:
  - en
  - zh
tags:
  - multimodal
  - emotion-recognition
  - sentiment-analysis
  - humor-detection
  - mental-health
  - video-qa
  - reinforcement-learning
  - verl
  - rl-training
  - qwen2.5-omni
  - audio
  - video
  - pose-estimation
  - opensmile
pretty_name: Human Behavior Atlas v2
arxiv: 2510.04899
size_categories:
  - 100K<n<1M
configs:
  - config_name: default
    data_files:
      - split: train
        path: train-*.parquet
      - split: validation
        path: validation-*.parquet
      - split: test
        path: test-*.parquet
dataset_info:
  features:
    - name: problem
      dtype: string
    - name: answer
      dtype: string
    - name: images
      sequence: binary
    - name: videos
      sequence: binary
    - name: audios
      sequence: binary
    - name: dataset
      dtype: string
    - name: modality_signature
      dtype: string
    - name: ext_video_feats
      sequence: binary
    - name: ext_audio_feats
      sequence: binary
    - name: task
      dtype: string
    - name: class_label
      dtype: string
  splits:
    - name: train
      num_examples: 74449
    - name: validation
      num_examples: 7646
    - name: test
      num_examples: 18204
---

# Human Behavior Atlas v2

A large-scale multimodal dataset for human behavior understanding, spanning emotion recognition, sentiment analysis, humor detection, mental health screening, and video question answering. The dataset integrates 16 source datasets into a unified schema with embedded audio, video, and pre-extracted features, and is designed for reinforcement learning (RL) training with the [verl](https://github.com/volcengine/verl) framework and multimodal language models such as Qwen2.5-Omni-7B.

## Dataset Summary

| Property | Value |
|---|---|
| Total samples | 100,299 |
| Train split | 74,449 |
| Validation split | 7,646 |
| Test split | 18,204 |
| Source datasets | 16 |
| Modalities | Text, Audio (.wav bytes), Video (.mp4 bytes), OpenSmile features (.pt bytes), Pose features (.pt bytes) — all embedded in parquet |
| Languages | English, Chinese (CHSIMSv2) |
| License | CC BY-NC 4.0 |

## Modality Distribution

| Modality Signature | Samples | Percentage |
|---|---|---|
| text_video_audio | 87,318 | 87.1% |
| text_audio | 10,431 | 10.4% |
| text | 2,550 | 2.5% |

## Source Datasets

| Dataset | Samples | Task | Modality | Description |
|---|---|---|---|---|
| **mosei_senti** | 22,740 | Sentiment classification | text_video_audio | CMU-MOSEI sentiment analysis (negative/neutral/positive) |
| **intentqa** | 14,158 | Video QA | text_video_audio | Intent-driven video question answering |
| **meld_senti** | 13,518 | Sentiment classification | text_video_audio | MELD multimodal sentiment (from Friends TV series) |
| **meld_emotion** | 13,350 | Emotion classification | text_video_audio | MELD multimodal emotion recognition (7 classes) |
| **mosei_emotion** | 8,545 | Emotion classification | text_video_audio | CMU-MOSEI emotion recognition (6 classes) |
| **cremad** | 7,442 | Emotion classification | text_audio | CREMA-D acted emotional speech recognition |
| **siq2** | 6,394 | Video QA | text_video_audio | Social IQ 2.0 social intelligence QA |
| **chsimsv2** | 4,384 | Sentiment classification | text_video_audio | CH-SIMS v2 Chinese multimodal sentiment |
| **tess** | 2,800 | Emotion classification | text_audio | Toronto Emotional Speech Set |
| **urfunny** | 2,113 | Humor classification | text_video_audio | UR-Funny multimodal humor detection |
| **mmpsy_depression** | 1,275 | Depression screening | text_video_audio | Multimodal depression assessment |
| **mmpsy_anxiety** | 1,275 | Anxiety screening | text_video_audio | Multimodal anxiety assessment |
| **mimeqa** | 801 | Video QA | text_video_audio | MimeQA nonverbal video question answering over mime performances |
| **mmsd** | 687 | Humor classification | text | Multimodal sarcasm detection (text only) |
| **ptsd_in_the_wild** | 628 | PTSD detection | text_video_audio | PTSD detection from video interviews |
| **daicwoz** | 189 | Depression screening | text_video_audio | DAIC-WOZ clinical depression interviews |

## Task Types

| Task ID | Description | Datasets |
|---|---|---|
| `emotion_cls` | Emotion classification | mosei_emotion, meld_emotion, cremad, tess |
| `sentiment_cls` | Sentiment classification / regression | mosei_senti, meld_senti, chsimsv2 |
| `humor_cls` | Humor and sarcasm detection | urfunny, mmsd |
| `depression` | Depression screening | mmpsy_depression, daicwoz |
| `anxiety` | Anxiety screening | mmpsy_anxiety |
| `ptsd` | PTSD detection | ptsd_in_the_wild |
| `video_qa` | Video question answering | intentqa, siq2, mimeqa |

## Schema

Each row in the Parquet files contains the following columns:

| Column | Type | Description |
|---|---|---|
| `problem` | string | Prompt text with modality markers (`<audio>`, `<video>`) |
| `answer` | string | Ground truth answer |
| `audios` | list[bytes] | Raw .wav audio bytes (embedded) |
| `videos` | list[bytes] | Raw .mp4 video bytes (embedded) |
| `images` | list[bytes] | Image bytes (currently unused) |
| `dataset` | string | Source dataset name |
| `modality_signature` | string | Modality combination: `text_video_audio`, `text_audio`, or `text` |
| `ext_video_feats` | list[bytes] | Pose estimation feature tensors (.pt bytes, embedded) |
| `ext_audio_feats` | list[bytes] | OpenSmile audio feature tensors (.pt bytes, embedded) |
| `task` | string | Task type identifier |
| `class_label` | string | Classification label |
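
As a quick consistency check, the `<audio>` and `<video>` markers in `problem` should line up one-to-one with the embedded media lists. A minimal sketch (the one-marker-per-item correspondence is inferred from the schema above):

```python
from datasets import load_dataset

ds = load_dataset("sboughorbel/human_behavior_atlas_v2", split="train", streaming=True)
sample = next(iter(ds))

# Each modality marker in the prompt should match one embedded media item;
# text-only samples carry empty (or missing) media lists.
assert sample["problem"].count("<audio>") == len(sample["audios"] or [])
assert sample["problem"].count("<video>") == len(sample["videos"] or [])
print(sample["dataset"], sample["task"], sample["modality_signature"])
```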

## Repository Structure

```
sboughorbel/human_behavior_atlas_v2/
  train-00000-of-XXXXX.parquet    # Sharded parquet with embedded audio/video
  train-00001-of-XXXXX.parquet
  ...
  validation-*.parquet
  test-*.parquet
```

All data — including audio, video, and pre-extracted features — is fully embedded in the parquet files; no separate downloads or extraction steps are needed.
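
Since everything lives in the parquet shards, a single shard can be inspected directly with pyarrow to confirm the column layout (a sketch assuming the shards have already been downloaded to the current directory):

```python
import glob
import pyarrow.parquet as pq

# Inspect the first train shard without decoding any media bytes
shard = sorted(glob.glob("train-*.parquet"))[0]
pf = pq.ParquetFile(shard)
print(pf.schema_arrow)       # column names and types
print(pf.metadata.num_rows)  # rows in this shard
```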

## Usage

### Loading with HuggingFace Datasets

```python
from datasets import load_dataset

# Stream without downloading everything
ds = load_dataset("sboughorbel/human_behavior_atlas_v2", split="train", streaming=True)
sample = next(iter(ds))

# Load a subset
ds_100 = load_dataset("sboughorbel/human_behavior_atlas_v2", split="train[:100]")

# Filter by task or modality
emotion_ds = ds_100.filter(lambda x: x["task"] == "emotion_cls")
```

### Accessing Embedded Media

```python
import io
import soundfile as sf

sample = ds_100[0]

# Audio is raw bytes — decode with soundfile or torchaudio
if sample["audios"]:
    audio_data, sr = sf.read(io.BytesIO(sample["audios"][0]))

# Video is raw bytes — decode with decord, opencv, or write to temp file
if sample["videos"]:
    video_bytes = sample["videos"][0]
    # e.g., with decord:
    # from decord import VideoReader
    # vr = VideoReader(io.BytesIO(video_bytes))
```
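
The pre-extracted feature columns follow the same pattern: per the schema, they hold serialized PyTorch `.pt` tensors, so they can be restored with `torch.load` over a byte stream (a sketch assuming plain tensor payloads):

```python
import io
import torch

# OpenSmile and pose features are serialized .pt tensor bytes
if sample["ext_audio_feats"]:
    opensmile = torch.load(io.BytesIO(sample["ext_audio_feats"][0]), map_location="cpu")
if sample["ext_video_feats"]:
    pose = torch.load(io.BytesIO(sample["ext_video_feats"][0]), map_location="cpu")
    print(pose.shape)
```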

### Download and Setup

```bash
# Download full dataset
huggingface-cli download sboughorbel/human_behavior_atlas_v2 \
    --repo-type dataset --local-dir /path/to/data

# Or download specific splits only
huggingface-cli download sboughorbel/human_behavior_atlas_v2 \
    --repo-type dataset --local-dir /path/to/data \
    --include "train-*.parquet"
```

### Integration with verl RL Training

This dataset is designed for RL training with [verl](https://github.com/volcengine/verl) using Qwen2.5-Omni-7B. The `problem` field contains structured prompts with `<audio>` and `<video>` modality markers, and all media and feature tensors are read directly from the parquet files, so no path resolution is needed.

```bash
# verl training config
python3 -m verl.trainer.main_ppo \
    data.train_files=/path/to/data/train-*.parquet \
    data.val_files=/path/to/data/validation-*.parquet \
    data.prompt_key=problem \
    data.image_key=images \
    data.video_key=videos \
    data.modalities='audio,videos' \
    ...
```
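
For the classification-style tasks, a simple exact-match reward is often enough to get started. Below is a minimal, hypothetical sketch assuming verl's custom reward function hook (configured via `custom_reward_function.path` and `custom_reward_function.name`); verify the exact signature against your verl version:

```python
# reward.py: a hypothetical exact-match reward for the classification tasks
def compute_score(data_source, solution_str, ground_truth, extra_info=None):
    """Return 1.0 when the model's answer matches the ground-truth label."""
    return 1.0 if solution_str.strip().lower() == ground_truth.strip().lower() else 0.0
```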

## Citation

If you use this dataset, please cite the following paper:

```bibtex
@article{Ong2025HumanBehavior,
  title={Human Behavior Atlas: Benchmarking Unified Psychological and Social Behavior Understanding},
  author={Ong, Keane and Dai, Wei and Li, Carol and Feng, Dewei and Li, Hengzhi and Wu, Jingyao and Cheong, Jiaee and Mao, Rui and Mengaldo, Gianmarco and Cambria, Erik and Liang, Paul Pu},
  journal={arXiv preprint arXiv:2510.04899},
  year={2025}
}
```

> Keane Ong, Wei Dai, Carol Li, Dewei Feng, Hengzhi Li, Jingyao Wu, Jiaee Cheong, Rui Mao, Gianmarco Mengaldo, Erik Cambria, Paul Pu Liang. "Human Behavior Atlas: Benchmarking Unified Psychological and Social Behavior Understanding." ICLR 2026. [arXiv:2510.04899](https://arxiv.org/abs/2510.04899)

Please also cite the individual source datasets as appropriate:

- CMU-MOSEI: Zadeh et al., "Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph", ACL 2018
- MELD: Poria et al., "MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations", ACL 2019
- CREMA-D: Cao et al., "CREMA-D: Crowd-Sourced Emotional Multimodal Actors Dataset", IEEE TAC 2014
- DAIC-WOZ: Gratch et al., "The Distress Analysis Interview Corpus of Human and Computer Interviews", LREC 2014
- CH-SIMS v2: Liu et al., "Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module", ICMI 2022

## License

This dataset is released under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) license. Individual source datasets may have their own licensing terms; please consult the original dataset publications for details.