---
license: other
task_categories:
- image-to-3d
- video-classification
tags:
- 3d
- video
- point-cloud
- animation
- benchmark
- synthetic
pretty_name: ActionBench
---
<div align="center">
<h1>ActionBench: Paired Video-3D Synthetic Benchmark</h1>
<img src="actionbench.gif" alt="ActionBench" width="100%">
</div>
## Overview
ActionBench is a benchmark dataset of **128 paired video → animated point-cloud samples** for evaluating animated 3D mesh generation from video.
Each sample contains:
- **Video**: 16 RGBA frames with alpha mask
- **Animated Point Cloud**: Surface points sampled on the animated object with shape `(T, V, 6)` where:
- `T=16`: number of keyframes
- `V`: number of vertices (points randomly sampled on the mesh surface)
- `6`: position `(x, y, z)` + normal `(nx, ny, nz)` for each point
> **Note:** The point cloud is **tracked**: each point index corresponds to the same surface point deformed across timesteps, providing dense correspondences over time.
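As a concrete illustration, a tracked point cloud with this layout can be handled as a plain array. This is a minimal sketch using fabricated data; `V = 1024` and the variable names are assumptions for illustration, not part of the dataset specification:

```python
import numpy as np

# Fabricate an array with the documented shape (T, V, 6) to illustrate
# how one sample's animated point cloud is structured.
T, V = 16, 1024                          # 16 keyframes, V sampled surface points
pts = np.random.rand(T, V, 6).astype(np.float32)

positions = pts[..., :3]                 # (x, y, z) per point per keyframe
normals = pts[..., 3:]                   # (nx, ny, nz) per point per keyframe

# Because the cloud is tracked, index v refers to the same surface point
# in every keyframe, so per-point motion is a difference along the time axis:
motion = positions[1:] - positions[:-1]  # shape (T-1, V, 3)
```

The dense correspondences mean temporal metrics can be computed point-by-point, without any matching step between frames.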
The dataset consists of synthetic scenes of animated objects from [ObjaverseXL](https://objaverse.allenai.org/), rendered using **Blender 3.5.1**.
## Evaluation
To evaluate on ActionBench, save your predicted animated meshes as per-frame `.glb` files, with one subdirectory per sample named after the corresponding `uid` from ActionBench:
```
predictions/
├── <uid_1>/
│   ├── mesh_00.glb
│   ├── mesh_01.glb
│   └── ...
├── <uid_2>/
│   ├── mesh_00.glb
│   └── ...
└── ...
```
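A sketch of producing this layout in Python. The helper name `save_predictions` is hypothetical, and the per-frame meshes are represented as raw bytes for brevity; in practice you would export actual `.glb` data (e.g. with a mesh library such as trimesh):

```python
from pathlib import Path

def save_predictions(meshes_by_uid, root="predictions"):
    """Write animated meshes into the directory layout the evaluator expects.

    `meshes_by_uid` maps an ActionBench uid to an ordered list of per-frame
    mesh blobs (bytes of a .glb file).
    """
    root = Path(root)
    for uid, meshes in meshes_by_uid.items():
        out_dir = root / uid
        out_dir.mkdir(parents=True, exist_ok=True)
        for i, blob in enumerate(meshes):
            # mesh_00.glb, mesh_01.glb, ... in frame order
            (out_dir / f"mesh_{i:02d}.glb").write_bytes(blob)
```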
Download the ActionBench dataset:
```bash
pip install huggingface_hub
huggingface-cli download facebook/actionbench --repo-type dataset --local-dir data/actionbench/
```
Then run the evaluation script in [ActionMesh](https://github.com/facebookresearch/actionmesh):
```bash
python actionbench/evaluate.py \
--pred_root predictions/ \
--gt_root data/actionbench/data/ \
--output_csv results.csv \
--device cuda
```
Metrics are described in the [ActionMesh paper](https://arxiv.org/abs/2601.16148):
- **CD-3D** (Chamfer Distance 3D): measures per-frame geometric accuracy
- **CD-4D** (Chamfer Distance 4D): measures spatio-temporal consistency
- **CD-M** (Motion Chamfer Distance): measures motion fidelity
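For intuition, a symmetric Chamfer Distance between two point sets can be sketched as below. This is a generic NumPy version, not the official metric code; the implementation in the ActionMesh repo may differ in sampling, normalization, and the exact definitions of CD-4D and CD-M:

```python
import numpy as np

def chamfer_3d(pred, gt):
    """Symmetric Chamfer Distance between point sets pred (N, 3) and gt (M, 3)."""
    # Pairwise squared distances, shape (N, M)
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)
    # Nearest-neighbor term in each direction, then sum
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

def cd_3d(pred_seq, gt_seq):
    """One reading of per-frame CD-3D: average Chamfer over (T, N, 3) sequences."""
    return float(np.mean([chamfer_3d(p, g) for p, g in zip(pred_seq, gt_seq)]))
```

Identical point sets give a distance of zero, and the measure grows as the predicted surface drifts from the ground truth.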
## License
See the LICENSE file for details about the license under which this dataset is made available.
## Citation
If you use ActionBench, please cite the following paper:
```bibtex
@inproceedings{ActionMesh2025,
author = {Remy Sabathier and David Novotny and Niloy Mitra and Tom Monnier},
title = {ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion},
year = {2025},
}
```