---
license: mit
---
# Streaming3D Dataset
This dataset contains assets used by the Streaming3D benchmark. The current
release documents the `GSO30` subset; other subsets may be added later.
## GSO30
`GSO30` is a 30-object subset derived from Google Scanned Objects. Each object
directory contains training renders, evaluation assets, and the original object
mesh/material files.
### Object List
```text
alarm backpack bell blocks chicken cream elephant grandfather grandmother hat
leather lion lunch_bag mario oil school_bus1 school_bus2 shoe shoe1 shoe2
shoe3 soap sofa sorter sorting_board stucking_cups teapot toaster train turtle
```
### Directory Structure
```text
GSO30/
  <object_id>/
    meshes/
      model.glb
      model.obj
      model.mtl
      texture.png
    render_spiral_100/
      images/
        000.png ... 099.png
      masks/
        000.png ... 099.png
      model/
        000.png ... 099.png
        000.npy ... 099.npy
      transforms.json
      model_norm.obj
      model_norm.mtl
    render_mvs_25/
      model_norm.glb
      model_norm.obj
      model_norm.mtl
      model/
        000.png ... 024.png
        000.npy ... 024.npy
```
Some object folders also include auxiliary metadata, thumbnails, or legacy
render folders. The benchmark protocol uses the paths above.
### Usage
For training or reconstruction input, use all 100 images from:
```text
GSO30/<object_id>/render_spiral_100/images/{000..099}.png
```
The corresponding masks are stored in:
```text
GSO30/<object_id>/render_spiral_100/masks/{000..099}.png
```
Camera metadata for the 100 spiral views is available in:
```text
GSO30/<object_id>/render_spiral_100/transforms.json
GSO30/<object_id>/render_spiral_100/model/{000..099}.npy
```
For evaluation, use the normalized GLB mesh and the 25 provided camera views
from `render_mvs_25`:
```text
GSO30/<object_id>/render_mvs_25/model_norm.glb
GSO30/<object_id>/render_mvs_25/model/{000..024}.npy
```
The matching reference renders for those views are:
```text
GSO30/<object_id>/render_mvs_25/model/{000..024}.png
```
In short, the default protocol is:
1. Train or reconstruct from all `render_spiral_100/images` frames.
2. Evaluate by rendering or comparing against `render_mvs_25/model_norm.glb`
using the 25 camera poses in `render_mvs_25/model/*.npy`.
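A minimal sketch of step 2's camera loading, assuming only what the paths above state (the function name is ours, and the README does not specify the `.npy` array layout, so inspect a file's shape before assuming a convention such as a 4x4 pose matrix):

```python
import numpy as np
from pathlib import Path

def load_mvs_cameras(root, object_id, n_views=25):
    """Load the 25 per-view camera arrays from render_mvs_25/model/.

    The array layout is not documented here; check e.g. load_mvs_cameras(...)[0].shape
    before interpreting the contents.
    """
    base = Path(root) / "GSO30" / object_id / "render_mvs_25" / "model"
    return [np.load(base / f"{i:03d}.npy") for i in range(n_views)]
```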