---
license: mit
---
# Streaming3D Dataset
This dataset contains assets used by the Streaming3D benchmark. The current
release documents the `GSO30` subset; other subsets may be added later.
## GSO30
`GSO30` is a 30-object subset derived from Google Scanned Objects. Each object
directory contains training renders, evaluation assets, and the original object
mesh/material files.
### Object List
```text
alarm backpack bell blocks chicken cream elephant grandfather grandmother hat
leather lion lunch_bag mario oil school_bus1 school_bus2 shoe shoe1 shoe2
shoe3 soap sofa sorter sorting_board stucking_cups teapot toaster train turtle
```
### Directory Structure
```text
GSO30/
  <object_id>/
    meshes/
      model.glb
      model.obj
      model.mtl
      texture.png
    render_spiral_100/
      images/
        000.png ... 099.png
      masks/
        000.png ... 099.png
      model/
        000.png ... 099.png
        000.npy ... 099.npy
      transforms.json
      model_norm.obj
      model_norm.mtl
    render_mvs_25/
      model_norm.glb
      model_norm.obj
      model_norm.mtl
      model/
        000.png ... 024.png
        000.npy ... 024.npy
```
Some object folders also include auxiliary metadata, thumbnails, or legacy
render folders; the benchmark protocol relies only on the paths listed above.
### Usage
For training or reconstruction input, use all 100 images from:
```text
GSO30/<object_id>/render_spiral_100/images/{000..099}.png
```
The corresponding masks are stored in:
```text
GSO30/<object_id>/render_spiral_100/masks/{000..099}.png
```
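As a convenience, the zero-padded image and mask paths can be enumerated
programmatically. The helper below is an illustrative sketch (the function
name `spiral_frame_paths` is ours, not part of the dataset), assuming the
`000.png ... 099.png` naming scheme shown above.

```python
from pathlib import Path

def spiral_frame_paths(root, object_id, n_frames=100):
    """Build (image, mask) path pairs for one object's spiral renders.

    `root` is the GSO30 directory; frames follow the zero-padded
    000.png ... 099.png naming scheme documented above.
    """
    base = Path(root) / object_id / "render_spiral_100"
    return [
        (base / "images" / f"{i:03d}.png", base / "masks" / f"{i:03d}.png")
        for i in range(n_frames)
    ]

# Example: all 100 (image, mask) pairs for the "teapot" object.
pairs = spiral_frame_paths("GSO30", "teapot")
```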
Camera metadata for the 100 spiral views is available in:
```text
GSO30/<object_id>/render_spiral_100/transforms.json
GSO30/<object_id>/render_spiral_100/model/{000..099}.npy
```
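A loader for the camera metadata might look like the sketch below. Note the
assumption: we treat each `model/NNN.npy` as a per-frame camera array (e.g. a
4x4 pose matrix), but this README does not document the array contents, so
check `np.load(path).shape` on a real file before relying on it. The demo
round-trips synthetic data so it runs without the dataset downloaded.

```python
import json
import tempfile
from pathlib import Path

import numpy as np

def load_spiral_cameras(base, n_frames=100):
    """Load transforms.json and the per-frame .npy arrays for one object.

    `base` is GSO30/<object_id>/render_spiral_100. We assume each
    model/NNN.npy stores a camera array for frame NNN (unverified here).
    """
    base = Path(base)
    with open(base / "transforms.json") as f:
        transforms = json.load(f)
    poses = [np.load(base / "model" / f"{i:03d}.npy") for i in range(n_frames)]
    return transforms, poses

# Round-trip demo on synthetic files (identity 4x4 poses).
with tempfile.TemporaryDirectory() as tmp:
    base = Path(tmp)
    (base / "model").mkdir()
    (base / "transforms.json").write_text(json.dumps({"frames": []}))
    for i in range(3):
        np.save(base / "model" / f"{i:03d}.npy", np.eye(4))
    transforms, poses = load_spiral_cameras(base, n_frames=3)
```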
For evaluation, use the normalized GLB mesh and the 25 provided camera views
from `render_mvs_25`:
```text
GSO30/<object_id>/render_mvs_25/model_norm.glb
GSO30/<object_id>/render_mvs_25/model/{000..024}.npy
```
The matching reference renders for those views are:
```text
GSO30/<object_id>/render_mvs_25/model/{000..024}.png
```
In short, the default protocol is:
1. Train or reconstruct from all `render_spiral_100/images` frames.
2. Evaluate by rendering or comparing against `render_mvs_25/model_norm.glb`
using the 25 camera poses in `render_mvs_25/model/*.npy`.
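Step 2 typically ends in an image-space comparison between your renders and
the 25 reference frames. This README does not fix the metric, so the PSNR
function below is only a common illustrative choice, not the benchmark's
official scoring code.

```python
import numpy as np

def psnr(rendered, reference, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant offset of 0.1 gives MSE = 0.01, i.e. PSNR = 20 dB.
a = np.zeros((4, 4, 3))
b = np.full((4, 4, 3), 0.1)
```

In practice you would evaluate `psnr(render, reference)` per frame over the
25 `render_mvs_25` views and average the results.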