---
license: apache-2.0
language:
- en
tags:
- video-generation
- image-to-video
pretty_name: PIRA-Falling-Objects

---

# PIRA-Falling-Objects: A Synthetic Dataset for Physical Reasoning

## Dataset Summary

This repository contains the **PIRA-Falling-Objects** dataset, a large-scale, synthetically generated video dataset designed to instill and evaluate physical reasoning in Video Generative Models (VGMs). It was created for the accompanying paper; the associated arXiv preprint and code repository will be released soon, and links will be added here upon publication.

The dataset addresses the common "physics gap" in generative AI, where models produce visually impressive but physically implausible videos. By focusing on the fundamental dynamics of objects in free-fall, this dataset provides a controlled environment for training models to respect the laws of gravity and for rigorously evaluating their understanding.

It includes **10,000 training videos** and **two distinct test sets** (In-Distribution and Out-of-Distribution) for comprehensive evaluation, along with rich, noise-free annotations crucial for physics-informed training methodologies like **PIRA (Physics-Informed Representation Alignment)**.

## Release Status

- Current release: the ID and OOD test splits only.
- The 10,000-sample SFT training split will be released soon.
- The arXiv paper and codebase are planned for public release soon (links forthcoming).

## Dataset Creation

As detailed in our paper, we advocate for a principled use of synthetic data to effectively instill knowledge of a specific physical law. A clean, controlled, and large-scale data source is vital for this purpose. Our methodology is built on three key advantages of synthetic generation:

#### 1. Fine-Grained Control and Rich Annotations
Simulators provide complete control over the physical process. We can precisely vary initial conditions and, more importantly, obtain perfect, noise-free, multi-modal ground-truth annotations at no cost. This includes:
- Per-frame segmentation masks
- Dense depth maps
- Optical flow fields
- Object and camera trajectories

These annotations serve as the explicit, interpretable "teacher signals" required for physics-informed distillation, a task that is prohibitively difficult and expensive with real-world data.

#### 2. Stable Model Fine-Tuning
Fine-tuning open-source VGMs is often sensitive to the data format used during their original pre-training. Real-world videos with variable formats require extensive and often lossy pre-processing. Our synthetic approach sidesteps this entirely by generating a dataset that **perfectly and natively matches the exact specifications** of our target VGM backbone (CogVideoX), ensuring a stable and maximally effective fine-tuning process.

#### 3. Rigorous Generalization Testing
A primary goal is to ensure the model learns the underlying physical principle, not just memorizes the training data. Our synthetic environment enables the creation of carefully controlled test splits. By reserving a subset of 3D assets and backgrounds exclusively for evaluation, we construct distinct **in-distribution (ID)** and **out-of-distribution (OOD)** test sets to rigorously measure the model's ability to generalize the learned physical laws to unseen scenarios.

### Generation Process
We use Google's [**Kubric**](https://github.com/google-research/kubric) framework to generate all videos. Kubric provides a powerful interface that combines the [**PyBullet**](https://pybullet.org/wordpress/) physics engine for simulation and [**Blender**](https://www.blender.org/) for photorealistic rendering.

- **Objects**: Sourced from the [**Google Scanned Objects (GSO)**](https://research.google/blog/scanned-objects-by-google-research-a-dataset-of-3d-scanned-common-household-items/) dataset, providing a diverse set of ~1000 high-quality 3D models of everyday items.
- **Backgrounds**: Rendered against a set of 426 unique HDRI backgrounds.
- **Physics**: All objects fall under a uniform gravitational field simulating normal Earth gravity (9.81 m/s²).
- **Video Specifications**: Each sample is configured to match the CogVideoX model's requirements:
    - **Duration**: 49 frames
    - **Frame Rate**: 8 FPS
- **Camera**: The camera remains stationary and is oriented parallel to the ground plane to ensure the learning signal is focused exclusively on object dynamics.

## Dataset Structure
The dataset is organized into per-sample subfolders. At the top level:

```
synthetic_single_falling_items_data_10k-480x720-8fps-49frames/
β”œβ”€β”€ 00000/
β”œβ”€β”€ 00001/
β”œβ”€β”€ ...
└── 09999/
```

Each sample folder contains exactly 49 per-frame files for each modality (depth arrays, segmentation arrays, and RGBA JPEGs), plus two rendered videos and a metadata JSON. Filenames are zero-indexed and zero-padded to 5 digits, from `00000` to `00048`:

- depth per-frame arrays: `depth_00000.npy` … `depth_00048.npy`
- segmentation per-frame arrays: `segmentation_00000.npy` … `segmentation_00048.npy`
- per-frame RGBA renders: `rgba_00000.jpg` … `rgba_00048.jpg`
- rendered RGBA videos: `rgba.gif`, `rgba.mp4`
- scene metadata: `metadata.json`

Notes:
- All per-frame files correspond to the same frame index across modalities (e.g., `depth_00017.npy`, `segmentation_00017.npy`, and `rgba_00017.jpg` are the same frame).
- Resolution is 480x720 (H x W) at 8 FPS for 49 frames.
- `depth_*.npy` and `segmentation_*.npy` are NumPy arrays of shape [H, W].

Example contents of one sample folder (abridged):

```
all_00000/
β”œβ”€β”€ depth_00000.npy
β”œβ”€β”€ ...
β”œβ”€β”€ depth_00048.npy
β”œβ”€β”€ metadata.json
β”œβ”€β”€ rgba_00000.jpg
β”œβ”€β”€ ...
β”œβ”€β”€ rgba_00048.jpg
β”œβ”€β”€ rgba.gif
β”œβ”€β”€ rgba.mp4
β”œβ”€β”€ segmentation_00000.npy
β”œβ”€β”€ ...
└── segmentation_00048.npy
```
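The per-frame `.npy` files can be stacked into `[T, H, W]` arrays for training or evaluation. Below is a minimal sketch of such a loader, assuming only NumPy and the filename scheme described above; the `load_sample` helper is illustrative and not part of any released codebase:

```python
from pathlib import Path

import numpy as np

NUM_FRAMES = 49  # frames per sample, per the dataset specification


def load_sample(sample_dir):
    """Load one sample's depth and segmentation arrays in frame order.

    Expects files named depth_00000.npy ... depth_00048.npy and
    segmentation_00000.npy ... segmentation_00048.npy, each of shape [H, W].
    Returns two arrays of shape [NUM_FRAMES, H, W].
    """
    sample_dir = Path(sample_dir)
    depth = np.stack(
        [np.load(sample_dir / f"depth_{i:05d}.npy") for i in range(NUM_FRAMES)]
    )
    seg = np.stack(
        [np.load(sample_dir / f"segmentation_{i:05d}.npy") for i in range(NUM_FRAMES)]
    )
    return depth, seg
```

For the full dataset, the resulting arrays should have shape `(49, 480, 720)` per sample.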

## Data Splits

In addition to the **10,000-sample training set**, we provide two carefully curated **test sets of 64 samples each** for rigorous generalization evaluation:

#### Current Release Folder Layout (test splits only)

The current release includes only the ID and OOD test splits. Each split resides in a directory named like:

```
evaluation_single_falling_items_[seen|unseen]_size_64_resolution_480x720_fps_8_frames_49/
```

Inside each split directory, you will find the following top-level structure (aggregated across the 64 samples):

```
evaluation_single_falling_items_.../
β”œβ”€β”€ first_frames/
β”œβ”€β”€ first_gt_masks/
β”œβ”€β”€ gt_masks/
β”œβ”€β”€ input_files/
β”œβ”€β”€ labels/
β”œβ”€β”€ mask_visualizations/
β”œβ”€β”€ metadata/
β”œβ”€β”€ videos/
└── info.json
```

- `first_frames/`: the first rendered RGB frame of each sample (one image per sample) for quick preview and inspection.
- `first_gt_masks/`: the ground-truth mask for the first frame of each sample (one mask per sample).
- `gt_masks/`: the full per-sample, per-frame ground-truth segmentation masks used for evaluation.
- `input_files/`: the inputs used to run evaluation for each sample (e.g., prompts or conditioning files, depending on the task setup).
- `labels/`: compact per-sample labels/annotations summarizing the evaluation targets.
- `mask_visualizations/`: overlays of masks on the corresponding frames for qualitative inspection.
- `metadata/`: per-sample metadata files with generation parameters and split information.
- `videos/`: the rendered videos used in evaluation, one per sample.
- `info.json`: a split-level manifest summarizing the split (sample count, resolution, FPS, etc.).
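After downloading, it can be useful to confirm a split directory is complete before running evaluation. The sketch below checks for the top-level entries listed above; the `missing_entries` helper is an illustrative assumption, not part of any released tooling:

```python
from pathlib import Path

# Expected top-level entries of an evaluation split directory, as listed above.
EXPECTED_SUBDIRS = [
    "first_frames",
    "first_gt_masks",
    "gt_masks",
    "input_files",
    "labels",
    "mask_visualizations",
    "metadata",
    "videos",
]


def missing_entries(split_dir):
    """Return the expected entries that are absent from split_dir."""
    split_dir = Path(split_dir)
    missing = [d for d in EXPECTED_SUBDIRS if not (split_dir / d).is_dir()]
    if not (split_dir / "info.json").is_file():
        missing.append("info.json")
    return missing
```

An empty return value means the split layout matches the structure documented here.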

#### 1. In-Distribution (ID) Test Set
**`synthetic_single_falling_items_test_split_data-64-480x720-8fps-49frames-seen/`**

This test set uses 3D objects and HDRI backgrounds drawn from the same pool as the training data. It validates the model's ability to accurately reproduce the physics of falling objects in scenarios similar to those seen during training.

```
synthetic_single_falling_items_test_split_data-64-480x720-8fps-49frames-seen/
β”œβ”€β”€ all_00000/
β”œβ”€β”€ all_00001/
β”œβ”€β”€ ...
└── all_00063/
```

#### 2. Out-of-Distribution (OOD) Test Set
**`synthetic_single_falling_items_test_split_data-64-480x720-8fps-49frames-unseen/`**

This test set uses a **reserved subset of 3D objects and backgrounds that were never seen during training**. It provides a more stringent test of whether the model has learned the underlying physical law (gravity) rather than simply memorizing training examples. Success on this split demonstrates true generalization of physical reasoning.

```
synthetic_single_falling_items_test_split_data-64-480x720-8fps-49frames-unseen/
β”œβ”€β”€ all_00000/
β”œβ”€β”€ all_00001/
β”œβ”€β”€ ...
└── all_00063/
```

Both test sets follow the **same per-sample structure** as the training set (49 frames of depth, segmentation, and RGBA files, plus videos and metadata), ensuring seamless evaluation. The `metadata.json` of each test sample contains `"object_split": "test"` and `"background_split": "test"` to distinguish it from training data.
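These metadata fields make it straightforward to tell test samples apart from training samples programmatically. A minimal sketch, assuming only the `"object_split"` / `"background_split"` fields described above (the `is_test_sample` helper is illustrative):

```python
import json
from pathlib import Path


def is_test_sample(sample_dir):
    """True if a sample's metadata marks both object and background as test.

    Samples whose metadata.json lacks these fields are treated as
    training data.
    """
    meta = json.loads((Path(sample_dir) / "metadata.json").read_text())
    return (
        meta.get("object_split") == "test"
        and meta.get("background_split") == "test"
    )
```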

## Languages

English (`en`).

## How to use

The dataset can be downloaded with the Hugging Face CLI using the following commands:

```bash
# Download entire dataset repo locally (recommended)
hf download physics-informed-REPA/pira_dataset \
  --repo-type dataset \
  --local-dir ./pira_dataset

# Or download only the ID (seen) evaluation split
hf download physics-informed-REPA/pira_dataset \
  --repo-type dataset \
  --local-dir ./pira_dataset_seen \
  --include "evaluation_single_falling_items_seen_size_64_resolution_480x720_fps_8_frames_49/**"

# Or download only the OOD (unseen) evaluation split
hf download physics-informed-REPA/pira_dataset \
  --repo-type dataset \
  --local-dir ./pira_dataset_unseen \
  --include "evaluation_single_falling_items_unseen_size_64_resolution_480x720_fps_8_frames_49/**"
```
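The same downloads can be done from Python via `huggingface_hub.snapshot_download`, which accepts the identical glob patterns through `allow_patterns`. The `split_pattern` helper below is an illustrative convenience for building those patterns, not part of any released tooling:

```python
def split_pattern(kind):
    """Build the allow-pattern for one evaluation split ("seen" or "unseen")."""
    assert kind in ("seen", "unseen")
    return (
        f"evaluation_single_falling_items_{kind}"
        "_size_64_resolution_480x720_fps_8_frames_49/**"
    )


if __name__ == "__main__":
    from huggingface_hub import snapshot_download

    # Download only the OOD (unseen) evaluation split;
    # omit allow_patterns to fetch the entire dataset repo.
    snapshot_download(
        repo_id="physics-informed-REPA/pira_dataset",
        repo_type="dataset",
        local_dir="./pira_dataset_unseen",
        allow_patterns=[split_pattern("unseen")],
    )
```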