---
pretty_name: Human Behavior Atlas (HBA)
language:
  - en
license: other
task_categories:
  - text-classification
  - audio-classification
  - image-classification
  - video-classification
  - text-generation
tags:
  - human-behavior
  - social-intelligence
  - multimodal
  - benchmarking
  - psychology
  - affective-computing
  - emotion-recognition
  - intent-recognition
  - sarcasm-detection
  - depression-detection
  - anxiety-detection
  - ptsd
  - nonverbal-behavior
  - pose-estimation
  - opensmile
size_categories:
  - 100K<n<1M
---

# Human Behavior Atlas (HBA)

Human Behavior Atlas (HBA) is a unified benchmark for multimodal behavioral understanding.
It aggregates and standardizes multiple behavioral datasets into a single framework, enabling consistent training and evaluation of foundation models on psychological and social behavior tasks (e.g., emotion, intent, sarcasm, mental health signals, nonverbal behavior).


## What’s inside

When you download the dataset, you will find:

### 1) JSONL splits (centralized indices)

These JSONL files sit at the root level and define all benchmark samples:

- `final_v8_train_cleaned_2.jsonl` — training set
- `final_v8_val_cleaned.jsonl` — validation set
- `final_v8_test_cleaned.jsonl` — test set

Each line is a self-contained sample that provides:

- the prompt (problem statement),
- the ground-truth answer/label,
- pointers to any raw media (video/audio/text/image),
- pointers to any pre-extracted behavioral features (`.pt`),
- miscellaneous information such as the source dataset, task, class label, and modalities present (i.e., the modality signature).
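Because every line is independent JSON, a single sample can be inspected without loading the whole split. A minimal sketch (the line below is a trimmed, illustrative variant of the example sample shown later):

```python
import json

# One JSONL line; field values are illustrative, not taken from the release.
line = '{"problem": "What emotion is the speaker expressing?", "answer": "sad", "images": [], "videos": [], "audios": ["cremad_dataset_audio/1077_DFA_SAD_XX.wav"], "texts": [], "dataset": "cremad", "task": "emotion_cls", "class_label": "sad"}'

# Each line parses to one self-contained sample dict.
sample = json.loads(line)
print(sample["task"], "->", sample["answer"])
```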

### 2) Raw media files

Subdirectories contain the raw media referenced by each JSONL sample:

- video / audio files
- (optional) associated text files

### 3) Behavioral feature files (.pt)

Pre-extracted features for common behavioral signals, such as:

- pose features (video)
- OpenSMILE features (audio)
- (and other feature types included in the release)

## Data format

HBA uses JSONL to provide a unified sample schema across heterogeneous source datasets.
All file paths are relative to the dataset root (the same directory as the JSONL files).
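For example, resolving a sample's relative paths against the dataset root (the root directory name here is an assumption, matching the download example later in this card):

```python
from pathlib import Path

# Hypothetical dataset root; every path in the JSONL resolves against it.
root = Path("hba_download")

# Relative paths as they appear in a JSONL sample.
audio_rel = "cremad_dataset_audio/1077_DFA_SAD_XX.wav"
feat_rel = "opensmile/cremad_dataset_audio/1077_DFA_SAD_XX.pt"

# Joining with the root yields the on-disk locations.
audio_path = root / audio_rel
feat_path = root / feat_rel
print(audio_path)
print(feat_path)
```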

### Example sample

```json
{
  "problem": "<audio>\nDon't forget a jacket.\nThe above is a speech recording along with the transcript from a clinical context. What emotion is the speaker expressing? Answer with one word from the following: anger, disgust, fear, happy, neutral, sad",
  "answer": "sad",
  "images": [],
  "videos": [],
  "audios": ["cremad_dataset_audio/1077_DFA_SAD_XX.wav"],
  "dataset": "cremad",
  "texts": [],
  "modality_signature": "text_audio",
  "ext_video_feats": ["pose/cremad_dataset_audio/1077_DFA_SAD_XX.pt"],
  "ext_audio_feats": ["opensmile/cremad_dataset_audio/1077_DFA_SAD_XX.pt"],
  "task": "emotion_cls",
  "class_label": "sad"
}
```

### Field-Level Explanation

Key fields:

- `problem` / `answer` — the prompt and ground-truth label
- `images`, `videos`, `audios`, `texts` — relative paths to raw media files
- `ext_video_feats`, `ext_audio_feats` — relative paths to pre-extracted behavioral feature files (`.pt`)
- `modality_signature` — indicates which modalities are present (e.g., `text_audio`, `video`, `text_video`)
- `dataset` — source dataset name
- `task` / `class_label` — behavioral task type and label

Detailed explanation:

- `problem`: The full prompt presented to the model. May contain modality markers such as `<audio>` or `<video>` and includes the task instruction.
- `answer`: The ground-truth label expected during evaluation.
- `images`, `videos`, `audios`, `texts`: Lists of relative paths to raw media files stored in subdirectories under the dataset root. Empty lists indicate that the modality is not present.
- `ext_video_feats`, `ext_audio_feats`: Lists of relative paths to pre-extracted behavioral feature tensors (`.pt` files), also stored in subdirectories under the dataset root.
- `modality_signature`: A compact indicator of which modalities are present for the sample (e.g., `text_audio`, `video`, `text_video`).
- `dataset`: The original source dataset name (e.g., `cremad`), enabling provenance tracking.
- `task`: High-level behavioral task identifier (e.g., `emotion_cls`).
- `class_label`: Canonical ground-truth class associated with the sample.

The dataloader uses this JSONL as the centralized index to locate and load each sample's raw media and feature files before passing them to the model.


## Data Loading Workflow

1. Read a JSONL entry.
2. Parse `problem`, `task`, and `class_label`.
3. Load raw media using the relative paths.
4. Optionally load `.pt` behavioral feature tensors.
5. Construct the multimodal sample for training or evaluation.
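The steps above can be sketched as follows. This is an illustrative sketch, not the release's actual dataloader: the function and variable names are ours, and the step that would load feature tensors (e.g., with `torch.load` on each `.pt` path) is left as path resolution so the sketch stays self-contained.

```python
import json
from pathlib import Path

def build_sample(jsonl_line: str, root: Path) -> dict:
    # 1. Read a JSONL entry.
    entry = json.loads(jsonl_line)
    # 2. Parse problem, task, and class_label.
    sample = {
        "problem": entry["problem"],
        "task": entry["task"],
        "class_label": entry["class_label"],
    }
    # 3. Resolve raw media paths relative to the dataset root.
    for key in ("images", "videos", "audios", "texts"):
        sample[key] = [root / p for p in entry.get(key, [])]
    # 4. Resolve feature paths; a real loader would call torch.load(path)
    #    on each .pt file here.
    for key in ("ext_video_feats", "ext_audio_feats"):
        sample[key] = [root / p for p in entry.get(key, [])]
    # 5. The assembled dict is the multimodal sample handed to the model.
    return sample

# Illustrative entry, trimmed from the example sample above.
line = ('{"problem": "<audio>\\nWhat emotion is the speaker expressing?", '
        '"answer": "sad", '
        '"audios": ["cremad_dataset_audio/1077_DFA_SAD_XX.wav"], '
        '"ext_audio_feats": ["opensmile/cremad_dataset_audio/1077_DFA_SAD_XX.pt"], '
        '"task": "emotion_cls", "class_label": "sad"}')
sample = build_sample(line, Path("hba_download"))
print(sample["audios"][0])
```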

This unified indexing structure enables heterogeneous behavioral datasets to be standardized under a single multimodal evaluation framework.


## Instructions to Download the Full Benchmark (JSONLs + Raw Media)

The dataset consists of:

- JSONL split files at the repository root
- a multipart tar archive under `parts/`

Files:

```
parts/human_behaviour_data.tar.part-000
...
parts/human_behaviour_data.tar.part-009
```

You must download both the JSONLs and the archive parts.

If you have sufficient disk space, the simplest and safest approach is to use `huggingface-cli`:

```shell
pip install -U "huggingface_hub[cli]"
huggingface-cli login

huggingface-cli download keentomato/human_behavior_atlas \
  --repo-type dataset \
  --local-dir hba_download \
  --local-dir-use-symlinks False
```

Then merge and extract:

```shell
cd hba_download
cat parts/human_behaviour_data.tar.part-* > human_behaviour_data.tar
tar -xf human_behaviour_data.tar
```

After extraction, your directory structure will look like:

```
hba_download/
├── final_v8_train_cleaned_2.jsonl
├── final_v8_val_cleaned.jsonl
├── final_v8_test_cleaned.jsonl
├── pose/
├── opensmile/
├── cremad_dataset_audio/
├── ...
```
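After extraction, a quick sanity check is to confirm that every media and feature path referenced by a split actually exists on disk. A sketch, with an illustrative helper name, demonstrated here on a tiny synthetic index rather than the real splits:

```python
import json
import tempfile
from pathlib import Path

def find_missing_files(jsonl_path: Path, root: Path) -> list[str]:
    """Return every referenced media/feature path absent under root."""
    keys = ("images", "videos", "audios", "texts",
            "ext_video_feats", "ext_audio_feats")
    missing = []
    for line in jsonl_path.read_text().splitlines():
        entry = json.loads(line)
        for key in keys:
            for rel in entry.get(key, []):
                if not (root / rel).exists():
                    missing.append(rel)
    return missing

# Demo on a synthetic two-sample index: one file present, one absent.
root = Path(tempfile.mkdtemp())
(root / "audio").mkdir()
(root / "audio" / "a.wav").write_bytes(b"")
index = root / "demo.jsonl"
index.write_text(
    '{"audios": ["audio/a.wav"]}\n'
    '{"audios": ["audio/b.wav"]}\n'
)
print(find_missing_files(index, root))  # ['audio/b.wav']
```

Running the same check against `final_v8_test_cleaned.jsonl` with `root=Path("hba_download")` should return an empty list after a complete extraction.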