---
dataset_name: clarus_handgrip_room_v1
summary: Hand-grip clips in explicit room context for testing spatial grounding.
tasks:
  - grip_outcome_prediction
  - persistence_under_occlusion
  - container_aware_action_recognition
modalities:
  - video
  - csv_annotations
license: cc-by-4.0
---
## Overview

Short video clips of hand grips recorded in explicit room context. Each clip links a grip to its object and to that object's container. The dataset is designed to expose the drift that appears when models treat spatial context as optional.
## Why this matters

- grip choice changes when the room is known
- object persistence matters when objects move off-frame
- mis-grip rates rise when container boundaries are ignored
- current models track objects, not the rooms that contain them
## Data

- 14 example clips in v0.1
- MP4 videos, 2–8 seconds each
- one CSV annotations file with columns: id, path, class, grip, container, outcome, occlusion, persistence
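The annotation schema above can be loaded with the standard library alone. This is a minimal sketch: the sample rows, the `train/clip_001.mp4`-style paths, and the assumption that `occlusion` and `persistence` are 0/1 flags are all hypothetical, not guaranteed by the dataset.

```python
import csv
import io

# Hypothetical sample matching the documented column order; real rows
# come from annotations.csv.
SAMPLE = """id,path,class,grip,container,outcome,occlusion,persistence
clip_001,train/clip_001.mp4,mug,power,shelf,success,0,1
clip_002,train/clip_002.mp4,mug,pinch,drawer,failure,1,1
"""

def load_annotations(fh):
    """Parse the annotation CSV into a list of dicts, coercing the
    occlusion/persistence flags to int (assumed 0/1 encoding)."""
    rows = []
    for row in csv.DictReader(fh):
        row["occlusion"] = int(row["occlusion"])
        row["persistence"] = int(row["persistence"])
        rows.append(row)
    return rows

rows = load_annotations(io.StringIO(SAMPLE))
print(len(rows), rows[0]["grip"])  # 2 power
```

In practice you would pass an open file handle for `annotations.csv` instead of the in-memory sample.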
## Use cases

- robotics calibration
- warehouse automation
- prosthetic training loops
- spatial benchmarks for world models
## Suggested evaluations

- error rate before/after adding container features
- persistence score across occlusion events
- boundary-aware grip outcome accuracy
- drift cost over repeated mis-placement cycles
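One of the evaluations above, the persistence score, can be sketched as the fraction of occluded clips whose annotation reports persistence. This assumes the 0/1 flag encoding for `occlusion` and `persistence` described earlier, which is an assumption about the annotation format, not a documented fact.

```python
def persistence_score(rows):
    """Fraction of occluded clips that maintain object persistence.

    Assumes each row has integer 0/1 'occlusion' and 'persistence'
    fields (hypothetical encoding). Returns None if no clip is occluded.
    """
    occluded = [r for r in rows if r["occlusion"] == 1]
    if not occluded:
        return None
    return sum(r["persistence"] for r in occluded) / len(occluded)

rows = [
    {"occlusion": 1, "persistence": 1},
    {"occlusion": 1, "persistence": 0},
    {"occlusion": 0, "persistence": 1},
]
print(persistence_score(rows))  # 0.5
```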
## File structure

```
data/
  train/
  annotations.csv
README.md
```
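Given that layout, clip paths in the annotations can be resolved against the data root. A minimal sketch, assuming the CSV `path` column is relative to `data/` (an assumption, not stated by the dataset) and using a hypothetical `resolve_clip_paths` helper:

```python
from pathlib import Path

def resolve_clip_paths(rows, root=Path("data")):
    """Map each annotation row's id to the full path of its clip.

    Assumes the 'path' field is relative to the data root
    (hypothetical layout inferred from the file structure above).
    """
    return {r["id"]: root / r["path"] for r in rows}

rows = [{"id": "clip_001", "path": "train/clip_001.mp4"}]
paths = resolve_clip_paths(rows)
print(paths["clip_001"])
```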
## Citation

If you publish results using this dataset, please include a link to this page.
## Contact

Pull requests with improvements or new clips are welcome.