---
license: cc-by-4.0
task_categories:
  - visual-question-answering
  - other
language:
  - en
tags:
  - motion-reasoning
  - video-understanding
  - human-motion
  - benchmark
pretty_name: MotionHalluc Benchmark
size_categories:
  - 1K<n<10K
---

# MotionHalluc Benchmark

This repository contains the MotionHalluc benchmark, a dataset designed for evaluating motion hallucination and motion reasoning in video-based multimodal models.


## 📌 Overview

MotionHalluc introduces three evaluation tasks that require models to compare, reason about, and verify human motion patterns across videos. The benchmark is constructed using curated annotations and motion representations derived from human motion estimation pipelines.

We provide:

- Structured QA annotations for motion reasoning
- Motion representations extracted using a state-of-the-art motion reconstruction model
- Evaluation-ready dataset splits

πŸ“ Dataset Structure

### 1. `MotionHalluc/`

Contains all annotation files, including:

- Three MotionHalluc tasks (QA-based evaluation)
- Original curated annotations used to construct the benchmark

Each file is in JSON format.

### 2. `motion_4dHumans/`

Contains motion representations corresponding to each video sample.

- Format: `.npy`
- Extracted using a pretrained 4D human motion reconstruction pipeline
- Each file corresponds to a video ID used in the QA annotations
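For example, a motion file can be loaded with NumPy's standard `np.load`. The snippet below is a minimal sketch: the array and its frames × joints × xyz layout are illustrative assumptions, since the exact shape depends on the 4D-Humans export.

```python
import numpy as np

# Illustrative stand-in for one motion file in motion_4dHumans/.
# Real files are keyed by the video IDs used in the QA annotations;
# the frames x joints x xyz layout below is an assumed example, not
# a guaranteed format.
dummy_motion = np.random.rand(120, 24, 3)
np.save("example_motion.npy", dummy_motion)

# The same call works for the provided .npy files.
motion = np.load("example_motion.npy")
print(motion.shape)  # (120, 24, 3)
```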

## 🎥 Video Data

We distribute annotations and motion representations only.

Because we do not own the original videos, users must download them separately.

The videos are used solely as input references for motion extraction and evaluation alignment.


βš™οΈ Motion Extraction

Motion representations are obtained using the pretrained 4D-Humans reconstruction method (Goel et al., *Humans in 4D: Reconstructing and Tracking Humans with Transformers*, ICCV 2023), which extracts 3D human motion trajectories from video inputs. The full BibTeX entry is given in the Citation section below.


## 💻 Preprocessing & Evaluation Code

We provide video processing and evaluation scripts in the official code repository:

👉 GitHub repository: https://anonymous.4open.science/r/MotionHalluc-4E96

This includes:

- Video preprocessing pipeline
- Evaluation scripts for all three MotionHalluc tasks
- Accuracy calculation script
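At its core, the accuracy calculation reduces to comparing predicted answer letters against each sample's ground-truth `"a"` field. A minimal sketch, assuming per-sample letter predictions (the function name and the example data are illustrative, not the repository's actual API):

```python
def accuracy(predictions, ground_truth):
    """Fraction of samples whose predicted letter matches the ground-truth 'a' field."""
    correct = sum(1 for sid, ans in ground_truth.items() if predictions.get(sid) == ans)
    return correct / len(ground_truth) if ground_truth else 0.0

# Illustrative predictions and answers; each letter indexes into a sample's "c" choices.
gt = {"0001": "A", "0002": "B", "0003": "A"}
pred = {"0001": "A", "0002": "A", "0003": "A"}
print(f"accuracy = {accuracy(pred, gt):.3f}")  # accuracy = 0.667
```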

## 🔄 Fit3D Ground-Truth Motion Processing

Due to dataset licensing restrictions, we do not redistribute Fit3D-derived motion data.

However:

- We will release the full Fit3D ground-truth motion processing pipeline upon acceptance.
- This includes conversion from raw motion capture format to our benchmark representation.

In the current submission, all experiments are conducted using the 4D-Humans-based motion representation, which already demonstrates strong performance and serves as a reliable proxy for kinematic evaluation.


## 📊 Benchmark Usage

Each sample in MotionHalluc contains:

- A question about motion comparison or reasoning
- Multiple-choice or binary answers
- A corresponding motion representation for each video

Example format:

```json
{
  "0001": {
    "v1": "Bench/s03/band_pull_apart/band_pull_apart_front_215_304.mp4",
    "v2": "Bench/s04/band_pull_apart/band_pull_apart_front_236_345.mp4",
    "q": "You are given a query motion in Video1 and a reference motion in Video2. Which of the following corrections is accurate and necessary to improve the query motion in Video1 based on the reference motion in Video2?",
    "c": [
      "Hands level with your head at the beginning",
      "At the beginning, keep your hands below head level"
    ],
    "a": "A"
  }
}
```
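Such annotation files can be parsed with the standard `json` module. The sketch below uses an inline snippet mirroring the format above (the shortened question text is illustrative); the letter answer `"a"` indexes into the `"c"` choice list:

```python
import json

# Hypothetical annotation snippet mirroring the example format above;
# real files live under MotionHalluc/ in this repository.
sample_json = """
{
  "0001": {
    "v1": "Bench/s03/band_pull_apart/band_pull_apart_front_215_304.mp4",
    "v2": "Bench/s04/band_pull_apart/band_pull_apart_front_236_345.mp4",
    "q": "Which correction improves the query motion in Video1?",
    "c": ["Hands level with your head at the beginning",
          "At the beginning, keep your hands below head level"],
    "a": "A"
  }
}
"""

annotations = json.loads(sample_json)  # or json.load(open(...)) for a file on disk
for sample_id, item in annotations.items():
    # Map choice letters to choice strings: "A" -> c[0], "B" -> c[1], ...
    choices = {chr(ord("A") + i): text for i, text in enumerate(item["c"])}
    print(sample_id, item["a"], choices[item["a"]])
```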

## 📜 License

This dataset is released for non-commercial scientific research purposes only.

- The annotation data and benchmark design are released under the CC BY 4.0 License.
- Motion representations are derived from a pretrained human motion reconstruction model.
- Video data is not redistributed due to licensing restrictions and must be obtained from the original sources.

Users must comply with the license terms of all underlying datasets used in this benchmark, including the Fit3D dataset.


## 📌 Citation

If you use this benchmark, please also cite Fit3D and 4D-Humans:

```bibtex
@inproceedings{fieraru2021aifit,
  title={AIFit: Automatic 3D human-interpretable feedback models for fitness training},
  author={Fieraru, Mihai and Zanfir, Mihai and Pirlea, Silviu Cristian and Olaru, Vlad and Sminchisescu, Cristian},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={9919--9928},
  year={2021}
}
```

```bibtex
@inproceedings{goel2023humans,
  title={Humans in 4D: Reconstructing and tracking humans with transformers},
  author={Goel, Shubham and Pavlakos, Georgios and Rajasegaran, Jathushan and Kanazawa, Angjoo and Malik, Jitendra},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={14783--14794},
  year={2023}
}
```