---
license: cc-by-4.0
task_categories:
- image-to-3d
tags:
- 4d
- multiview
- mesh
- animation
- benchmark
size_categories:
- n<1K
---
# MV4DBench
A 31-item multi-view 4D animated-mesh benchmark for evaluating image/video → 4D mesh
reconstruction methods. Each item is rendered from 8 azimuths × N source frames and
ships with a tracked ground-truth point cloud (T=16, N=100k, xyz + normal) for
Chamfer-distance evaluation.
## Items
31 animated GLBs sourced from a curated subset of 3D animations (Mixamo characters,
animal/object scans, etc.). Frame counts range from 11 to 757; renders are 1024×1024
at 24 fps, with camera radius 2.2 and elevation 15°.
## Layout
```
<uid>/
├── source.glb # original animated mesh
├── meta.json # camera matrices, frame count, fps, etc.
├── rgba_8views/view_NN_azDDD/frame_*.png # 8 azimuths, transparent bg
├── rgb_8views/view_NN_azDDD/frame_*.png # 8 azimuths, white bg
├── anim_4views/view_azDDD.mp4 # 4 azimuths, MP4 videos
└── gt_surfaces.npy # (16, 100000, 6) tracked GT point cloud, normalized to [-1,1]^3
```
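A minimal loader for the GT array, splitting positions from normals (the path is a placeholder for a concrete item directory):

```python
import numpy as np

def load_gt_surfaces(path: str):
    """Load a tracked GT point cloud and split positions from normals.

    Expects a (T, N, 6) array: xyz in [-1, 1]^3 plus a per-point normal
    (T=16, N=100000 for this benchmark).
    """
    surf = np.load(path)
    assert surf.ndim == 3 and surf.shape[-1] == 6, surf.shape
    xyz, normals = surf[..., :3], surf[..., 3:]
    return xyz, normals
```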
`gt_surfaces.npy` is sampled at 16 evenly spaced source-frame indices, with
barycentric tracking on the rest-pose mesh, then normalized so the entire animation
fits in a [-1,1]^3 cube (matching the bpyrenderer scene normalization used during
RGBA rendering, so Chamfer scores stay scale-aligned).
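For reference, a brute-force symmetric Chamfer distance between a predicted and a GT point set might look like the sketch below (squared-distance variant; the accompanying paper's exact formulation, subsampling, and normal handling may differ):

```python
import numpy as np

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point sets.

    Brute-force O(N*M) memory; subsample the 100k-point GT clouds (or use a
    KD-tree) before calling this on full-size data.
    """
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)  # (N, M) squared dists
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```

Because both prediction and GT live in the same [-1,1]^3 cube, no per-item rescaling is needed before comparison.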
## UID mapping
Some original UIDs included whitespace or `|` characters; those are sanitized here to
the leading hex stem. `uid_mapping.json` records the original full names for traceability.
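A small lookup helper, assuming `uid_mapping.json` maps the sanitized stem to the original full name (the actual key direction may be the reverse; check the file):

```python
import json

def original_uid(sanitized: str, mapping_path: str = "uid_mapping.json") -> str:
    """Look up the original (unsanitized) name for a sanitized UID stem.

    Assumes the JSON maps sanitized stem -> original full name.
    """
    with open(mapping_path) as f:
        mapping = json.load(f)
    return mapping[sanitized]
```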
## Citation
This benchmark serves as the held-out evaluation set for the multi-view
test-time-adaptation extension of ActionMesh (Meta, 2024). See the accompanying
paper for Chamfer benchmark numbers.