---
license: cc-by-4.0
task_categories:
  - video-classification
  - visual-question-answering
language:
  - en
tags:
  - video
  - direction-recognition
  - multiple-choice
  - hand-motion
  - vlm-evaluation
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: test
        path: tomato_direction.json
---

# Hand Direction Recognition Video Dataset

## Overview

This dataset is designed to evaluate the directional reasoning capabilities of Video-Language Models (VLMs). Each sample consists of a short video clip of a human hand movement paired with a multiple-choice question about the direction of motion.

The dataset is intended for use with evaluation frameworks such as lmms-eval.

## Dataset Structure

### Data Fields

| Field | Type | Description |
| --- | --- | --- |
| `video` | string | Relative path to the video file (e.g., `videos/human/0231-04.mp4`) |
| `question` | string | A natural-language question about the direction of motion |
| `candidates` | list[string] | Five multiple-choice options (order may vary per sample) |
| `answer` | string | The correct answer string (must match one of the candidates exactly) |

### Candidate Options

Each sample includes 5 candidate answers drawn from the following set:

- `Not moving at all`
- `Left.`
- `Right.`
- `First to the right then to the left.`
- `First to the left then to the right.`

> **Note:** The order of candidates is shuffled per sample to avoid position bias.
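The card does not specify how the per-sample shuffle is produced. As an illustrative sketch only (not the dataset's actual procedure), one way to get an order that is stable across runs but varies between samples is to seed a RNG with the sample's video path:

```python
import random

CANDIDATE_POOL = [
    "Not moving at all",
    "Left.",
    "Right.",
    "First to the right then to the left.",
    "First to the left then to the right.",
]

def shuffled_candidates(video_path: str) -> list[str]:
    """Return the five options in a deterministic per-sample order.

    Illustrative only: seeding by the video path keeps the order
    reproducible per sample while still varying it across samples.
    """
    rng = random.Random(video_path)  # deterministic seed per sample
    options = CANDIDATE_POOL.copy()
    rng.shuffle(options)
    return options
```

Any scheme with these two properties (reproducible per sample, varied across samples) would equally avoid position bias.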

### Example

```json
{
    "video": "videos/human/0231-04.mp4",
    "question": "In which direction(s) did the person's hand move?",
    "candidates": [
        "Not moving at all",
        "Left.",
        "Right.",
        "First to the right then to the left.",
        "First to the left then to the right."
    ],
    "answer": "Right."
}
```
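Because the answer must match one of the candidates exactly, a small sanity check over the JSON file can catch formatting drift (for instance, a missing trailing period). A minimal sketch, assuming the record layout shown above (`validate_record` and `validate_file` are hypothetical helpers, not part of the dataset):

```python
import json

def validate_record(rec: dict) -> None:
    """Assert the schema described in the Data Fields table."""
    assert isinstance(rec["video"], str) and rec["video"].endswith(".mp4")
    assert isinstance(rec["question"], str)
    assert len(rec["candidates"]) == 5
    # Exact string match is required: "Right" would NOT match "Right.".
    assert rec["answer"] in rec["candidates"]

def validate_file(path: str) -> int:
    """Validate every record in the JSON file; return the record count."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    for rec in records:
        validate_record(rec)
    return len(records)
```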

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

ds = load_dataset("KHUjongseo/TOMATO_direction")
```

### Using with lmms-eval

This dataset is compatible with the lmms-eval framework. A corresponding task configuration (.yaml) can be found in the lmms-eval task directory.

```bash
python -m lmms_eval \
    --model llava_vid \
    --tasks tomato_direction \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs
```
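The scoring rule for this task lives in its lmms-eval configuration, but for a multiple-choice set like this one, accuracy typically reduces to exact string matching against the gold answer. A minimal sketch of that metric (an assumption about the scoring, not the framework's actual code):

```python
def exact_match(prediction: str, answer: str) -> bool:
    """Strict string comparison after trimming surrounding whitespace."""
    return prediction.strip() == answer.strip()

def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Fraction of predictions that exactly match the gold answers."""
    assert len(predictions) == len(answers)
    hits = sum(exact_match(p, a) for p, a in zip(predictions, answers))
    return hits / len(answers)
```

Note that under strict matching, `"Right"` does not score against `"Right."`, which is why the answer string must reproduce a candidate exactly.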

## Motivation

Recent work has shown that state-of-the-art VLMs often fail to correctly identify absolute directions (left/right) in video, even when the motion is visually unambiguous. This dataset probes that specific capability with controlled, minimal stimuli — isolated hand motion clips — to avoid confounding factors from complex scenes.

## Data Collection

Videos are sourced from motion capture / RGB recordings of human hand movements. Each clip is trimmed to contain a single, unambiguous directional motion (or no motion). Ground-truth labels were annotated manually.

## Citation

If you use this dataset, please cite:

```bibtex
@misc{lee2025handdirection,
  author       = {Hyuntak Lee},
  title        = {Hand Direction Recognition Video Dataset for VLM Evaluation},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/KHUjongseo/TOMATO_direction}},
}
```

## License

This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.