---
license: cc-by-4.0
task_categories:
- visual-question-answering
- other
language:
- en
tags:
- motion-reasoning
- video-understanding
- human-motion
- benchmark
pretty_name: MotionHalluc Benchmark
size_categories:
- 1K<n<10K
---
# MotionHalluc Benchmark
This repository contains the **MotionHalluc benchmark**, a dataset designed for evaluating motion hallucination and motion reasoning in video-based multimodal models.
---
## Overview
MotionHalluc introduces three evaluation tasks that require models to compare, reason about, and verify human motion patterns across videos. The benchmark is constructed using curated annotations and motion representations derived from human motion estimation pipelines.
We provide:
- Structured QA annotations for motion reasoning
- Motion representations extracted using a state-of-the-art motion reconstruction model
- Evaluation-ready dataset splits
---
## Dataset Structure
### 1. `MotionHalluc/`
Contains all annotation files, including:
- Three MotionHalluc tasks (QA-based evaluation)
- Original curated annotations used to construct the benchmark
Each file is in JSON format.
### 2. `motion_4dHumans/`
Contains motion representations corresponding to each video sample.
- Format: `.npy`
- Extracted using a pretrained 4D human motion reconstruction pipeline
- Each file corresponds to a video ID used in the QA annotations
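As a minimal sketch of how these files can be consumed (the array layout below is an assumption for illustration; consult the extraction pipeline for the actual shape and file naming):

```python
import numpy as np

# Hypothetical layout: (num_frames, num_params) per-frame pose parameters.
# We mock one file to illustrate the save/load round trip; real files are
# named after the video IDs used in the QA annotations.
dummy_motion = np.zeros((120, 72), dtype=np.float32)
np.save("example_motion.npy", dummy_motion)

motion = np.load("example_motion.npy")
print(motion.shape)  # (120, 72)
```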
---
## Video Data
We distribute **annotations and motion representations only**.
Because we do not own the original videos, they must be downloaded separately from the original source:
- Fit3D dataset: https://fit3d.imar.ro/
The videos are used solely as input references for motion extraction and evaluation alignment.
---
## Motion Extraction
Motion representations are obtained using a pretrained 4D human motion reconstruction method:
```bibtex
@inproceedings{goel2023humans,
title={Humans in 4d: Reconstructing and tracking humans with transformers},
author={Goel, Shubham and Pavlakos, Georgios and Rajasegaran, Jathushan and Kanazawa, Angjoo and Malik, Jitendra},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={14783--14794},
year={2023}
}
```
This approach is used to extract 3D human motion trajectories from video inputs.
---
## Preprocessing & Evaluation Code
We provide video processing and evaluation scripts in the official code repository:
GitHub repository: https://anonymous.4open.science/r/MotionHalluc-4E96
This includes:
- Video preprocessing pipeline
- Evaluation scripts for all three MotionHalluc tasks
- Accuracy calculation script
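The accuracy computation can be sketched as follows; the field names (`"a"` for the ground-truth letter) follow the annotation format shown in the Benchmark Usage section, and the authoritative implementation lives in the repository above:

```python
def accuracy(predictions, annotations):
    """Fraction of samples answered correctly.

    predictions: dict mapping sample id -> predicted choice letter ("A", "B", ...)
    annotations: dict mapping sample id -> sample dict with ground-truth key "a"
    """
    correct = sum(
        1 for sid, sample in annotations.items()
        if predictions.get(sid) == sample["a"]
    )
    return correct / len(annotations)

# Toy illustration with two samples, one answered correctly.
annotations = {"0001": {"a": "A"}, "0002": {"a": "B"}}
predictions = {"0001": "A", "0002": "A"}
print(accuracy(predictions, annotations))  # 0.5
```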
---
## Fit3D Ground-Truth Motion Processing
Due to dataset licensing restrictions, we do **not redistribute Fit3D-derived motion data**.
However:
- We will release the full Fit3D ground-truth motion processing pipeline upon acceptance.
- This includes conversion from raw motion capture format to our benchmark representation.
In the current submission, all experiments are conducted using the **4D-Humans-based motion representation**, which already demonstrates strong performance and serves as a reliable proxy for kinematic evaluation.
---
## Benchmark Usage
Each sample in MotionHalluc contains:
- A question about motion comparison or reasoning
- Multiple-choice or binary answers
- Corresponding motion representation for each video
Example format:
```json
{
"0001": {
"v1": "Bench/s03/band_pull_apart/band_pull_apart_front_215_304.mp4",
"v2": "Bench/s04/band_pull_apart/band_pull_apart_front_236_345.mp4",
"q": "You are given a query motion in Video1 and a reference motion in Video2. Which of the following correction accurate and necessary to improve the query motion in Video1 based on the reference motion in Video2?",
"c": [
"Hands level with your head at the beginning",
"At the beginning, keep your hands below head level"
],
"a": "A"
  }
}
```
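A minimal sketch of consuming this schema, e.g. turning a sample into a multiple-choice prompt (the paths and texts below are placeholders, not real dataset entries):

```python
import json

# Placeholder sample mirroring the schema above; real files live in MotionHalluc/.
sample_json = """
{
  "0001": {
    "v1": "path/to/video1.mp4",
    "v2": "path/to/video2.mp4",
    "q": "Which correction improves the query motion?",
    "c": ["Option one", "Option two"],
    "a": "A"
  }
}
"""
data = json.loads(sample_json)
for sid, s in data.items():
    # Choice index 0 -> "A", 1 -> "B", ...
    lettered = [f"{chr(65 + i)}. {c}" for i, c in enumerate(s["c"])]
    prompt = s["q"] + "\n" + "\n".join(lettered)
    print(sid, "expected answer:", s["a"])
```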
## License
This dataset is released for **non-commercial scientific research purposes only**.
- The annotation data and benchmark design are released under the **CC BY 4.0 License**.
- Motion representations are derived from a pretrained human motion reconstruction model.
- Video data is not redistributed due to licensing restrictions and must be obtained from the original source.
Users must comply with the license terms of all underlying datasets used in this benchmark, including the Fit3D dataset.
---
## Citation
If you use this benchmark, please also cite the Fit3D and 4D-Humans papers:
```bibtex
@inproceedings{fieraru2021aifit,
title={Aifit: Automatic 3d human-interpretable feedback models for fitness training},
author={Fieraru, Mihai and Zanfir, Mihai and Pirlea, Silviu Cristian and Olaru, Vlad and Sminchisescu, Cristian},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={9919--9928},
year={2021}
}
@inproceedings{goel2023humans,
title={Humans in 4d: Reconstructing and tracking humans with transformers},
author={Goel, Shubham and Pavlakos, Georgios and Rajasegaran, Jathushan and Kanazawa, Angjoo and Malik, Jitendra},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={14783--14794},
year={2023}
}
``` |