# MASIV Multi-Sequence Dataset

> **Toward Material-Agnostic System Identification from Videos**
>
> [Yizhou Zhao](https://scholar.google.com/citations?user=nVKRaf4AAAAJ&hl=en)<sup>1</sup>, [Haoyu Chen](https://tonychen050400.github.io/)<sup>1</sup>, [Chunjiang Liu](https://chunjiangliu.com/)<sup>1</sup>, [Zhenyang Li](https://scholar.google.com/citations?hl=en&user=r9f4mLMAAAAJ)<sup>2</sup>, [Charles Herrmann](https://scholar.google.com/citations?user=LQvi5XAAAAAJ&hl=en)<sup>3</sup>, [Junhwa Hur](https://hurjunhwa.github.io/)<sup>3</sup>, [Yinxiao Li](https://scholar.google.com/citations?user=kZsIU74AAAAJ&hl=en)<sup>3</sup>, [Ming-Hsuan Yang](https://scholar.google.com/citations?user=p9-ohHsAAAAJ&hl=en)<sup>4</sup>, [Bhiksha Raj](https://scholar.google.com/citations?user=IWcGY98AAAAJ&hl=en)<sup>1</sup>, [Min Xu](https://scholar.google.com/citations?user=Y3Cqt0cAAAAJ&hl=en)<sup>1*</sup>
>
> <sup>1</sup>Carnegie Mellon University <sup>2</sup>University of Alabama at Birmingham <sup>3</sup>Google <sup>4</sup>UC Merced

[arXiv](https://arxiv.org/abs/2508.01112)
[GitHub](https://github.com/Skaldak/MASIV)

## Introduction

The MASIV Multi-Sequence Dataset is a synthetic dataset generated with [Genesis](https://genesis-embodied-ai.github.io/) to evaluate the generalization of data-driven constitutive models. It contains 10 objects spanning 5 distinct materials (Elastic, Elastoplastic, Liquid, Sand, and Snow), with 10 multi-view sequences per object. The sequences have randomized initial location, pose, and velocity, and each sequence comprises 11 views of 30 frames each.

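The counts above pin down the dataset's size. A minimal sketch of enumerating the sequence folders and totals, assuming the `<ObjectID>_<SequenceID>` naming used in the directory layout with both IDs running 0 through 9:

```python
# Enumerate the expected sequence folders: 10 objects x 10 sequences,
# named <ObjectID>_<SequenceID> (0_0/ through 9_9/).
folders = [f"{obj}_{seq}" for obj in range(10) for seq in range(10)]

views_per_sequence = 11
frames_per_view = 30
total_frames = len(folders) * views_per_sequence * frames_per_view

print(len(folders), total_frames)  # 100 33000
```

So the full dataset holds 100 sequences and 33,000 rendered frames (100 sequences × 11 views × 30 frames).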
## Dataset Structure

```
MASIV/
├── 0_0/                      # <ObjectID>_<SequenceID>
│   ├── data/                 # Per-frame images
│   ├── point_clouds/         # Point cloud data for the 30 frames
│   ├── videos/               # 11 videos, one per view
│   ├── all_data.json         # Camera information
│   ├── metadata.json         # Global simulation and object-specific parameters
│   ├── transforms_test.json  # Camera transformation matrices and image paths for the test set
│   ├── transforms_train.json # Camera transformation matrices and image paths for the training set
│   └── transforms_val.json   # Camera transformation matrices and image paths for the validation set
├── 0_1/
...
├── 9_9/
```

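The `transforms_*.json` files can be read with a few lines of Python. A minimal sketch, assuming the common NeRF/Blender layout (a top-level `frames` list whose entries carry `file_path` and a 4×4 `transform_matrix`); this README does not spell out the exact field names, so treat them as assumptions and adjust if the MASIV files differ:

```python
import json
import os
import tempfile

def load_transforms(path):
    """Return (transform matrices, image paths) from a transforms_*.json file.

    Assumes the NeRF/Blender convention: a top-level "frames" list whose
    entries hold "file_path" and a 4x4 "transform_matrix" (assumed keys,
    not confirmed by this README).
    """
    with open(path) as f:
        meta = json.load(f)
    matrices = [frame["transform_matrix"] for frame in meta["frames"]]
    paths = [frame["file_path"] for frame in meta["frames"]]
    return matrices, paths

# Self-contained demo on a synthetic file mimicking the assumed layout.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
demo = {"frames": [{"file_path": "./data/r_0", "transform_matrix": identity}]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(demo, f)
    tmp_path = f.name
matrices, paths = load_transforms(tmp_path)
os.unlink(tmp_path)
```

For a real sequence, point `load_transforms` at, e.g., `MASIV/0_0/transforms_train.json` and pair each returned matrix with its image path.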
## Citing MASIV

If you find this dataset useful in your work, please consider citing our paper:

```
@article{zhao2025masiv,
  title={MASIV: Toward Material-Agnostic System Identification from Videos},
  author={Zhao, Yizhou and Chen, Haoyu and Liu, Chunjiang and Li, Zhenyang and Herrmann, Charles and Hur, Junhwa and Li, Yinxiao and Yang, Ming-Hsuan and Raj, Bhiksha and Xu, Min},
  journal={arXiv preprint arXiv:2508.01112},
  year={2025}
}
```