---
license: cc0-1.0
task_categories:
  - image-to-3d
  - depth-estimation
tags:
  - nerf
  - 3d-gaussian-splatting
  - 3dgs
  - nerfstudio
  - multi-view
  - depth-maps
  - normal-maps
  - point-cloud
  - computer-vision
  - 3d-reconstruction
pretty_name: "DX.GL Multi-View Datasets"
size_categories:
  - 1K<n<10K
---

# DX.GL Multi-View Datasets for NeRF & 3D Gaussian Splatting

Multi-view training datasets rendered from CC0 3D models via [DX.GL](https://dx.gl). Each dataset includes calibrated camera poses, depth maps, normal maps, binary masks, and point clouds — ready for [nerfstudio](https://docs.nerf.studio/) out of the box.

**10 objects × 196 views × 1024×1024 resolution × full sphere coverage.**

## Quick Start

```bash
# Download a dataset (Apple, 196 views, 1024x1024)
wget https://dx.gl/api/v/EJbs8npt2RVM/vCHDLxjWG65d/dataset -O apple.zip
unzip apple.zip -d apple

# Train with nerfstudio
pip install nerfstudio
ns-train splatfacto --data ./apple \
  --max-num-iterations 20000 \
  --pipeline.model.sh-degree 3 \
  --pipeline.model.background-color white
```

Or use the download script:

```bash
pip install requests
python download_all.py
```

## What's in Each Dataset ZIP

```
dataset/
├── images/           # RGB frames (PNG, transparent background)
│   ├── frame_00000.png
│   └── ...
├── depth/            # 8-bit grayscale depth maps
├── depth_16bit/      # 16-bit grayscale depth maps (higher precision)
├── normals/          # World-space normal maps
├── masks/            # Binary alpha masks
├── transforms.json   # Camera poses (nerfstudio / instant-ngp format)
└── points3D.ply      # Sparse point cloud for initialization
```

### transforms.json Format

Compatible with both **nerfstudio** and **instant-ngp**:

```json
{
  "camera_angle_x": 0.857,
  "camera_angle_y": 0.857,
  "fl_x": 693.5,
  "fl_y": 693.5,
  "cx": 400,
  "cy": 400,
  "w": 800,
  "h": 800,
  "depth_near": 0.85,
  "depth_far": 2.35,
  "ply_file_path": "points3D.ply",
  "frames": [
    {
      "file_path": "images/frame_00000.png",
      "depth_file_path": "depth/frame_00000.png",
      "normal_file_path": "normals/frame_00000.png",
      "mask_file_path": "masks/frame_00000.png",
      "transform_matrix": [[...], [...], [...], [0, 0, 0, 1]]
    }
  ]
}
```

## Specs

| Property | Value |
|---|---|
| **Views** | 196 per object |
| **Resolution** | 1024×1024 |
| **Coverage** | Full sphere (±89° elevation) |
| **Point cloud** | ~200k points |
| **Camera distribution** | Fibonacci golden-angle spiral |
| **Background** | Transparent (RGBA) |
| **Lighting** | Studio HDRI + directional lights |

## Camera Distribution

Views are distributed over a full sphere (±89° elevation) using a golden-angle Fibonacci spiral. The distribution is uniform in solid angle: each elevation band receives views in proportion to its surface area, so more views fall near the equator than near the poles. This even angular coverage works well for NeRF/3DGS training.

![Camera Distribution](camera-distribution.png)
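
A golden-angle Fibonacci spiral like the one described can be sketched as follows. This is an illustrative reconstruction, not the pipeline's exact code; in particular, the way the ±89° elevation clamp is applied here is an assumption:

```python
import numpy as np

def fibonacci_sphere(n=196, max_elev_deg=89.0):
    """Golden-angle spiral on the unit sphere, uniform in solid angle.

    Sampling uniformly in z gives uniform area density; max_elev_deg
    keeps points off the exact poles (clamping scheme is an assumption).
    """
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))  # ~137.5 degrees
    i = np.arange(n)
    z_max = np.sin(np.deg2rad(max_elev_deg))
    z = z_max * (1.0 - 2.0 * (i + 0.5) / n)      # uniform in z => uniform area
    theta = golden_angle * i                     # azimuth advances by golden angle
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
```

Each row is a unit vector; scaling by the camera radius and pointing the camera at the origin yields the pose set.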

## Objects

| # | Object | Category | Download | Browse |
|---|---|---|---|---|
| 1 | Apple | organic | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/vCHDLxjWG65d/dataset) | [View](https://dx.gl/datasets/vCHDLxjWG65d) |
| 2 | Cash Register | electronics | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/JfjLRexr6J7z/dataset) | [View](https://dx.gl/datasets/JfjLRexr6J7z) |
| 3 | Drill | tool | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/A0dcsk7HHgAg/dataset) | [View](https://dx.gl/datasets/A0dcsk7HHgAg) |
| 4 | Fire Extinguisher | metallic | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/cLgyqM5mhQoq/dataset) | [View](https://dx.gl/datasets/cLgyqM5mhQoq) |
| 5 | LED Lightbulb | glass | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/ZuYmv3K9xN7u/dataset) | [View](https://dx.gl/datasets/ZuYmv3K9xN7u) |
| 6 | Measuring Tape | tool | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/qqvDYx7RtHZd/dataset) | [View](https://dx.gl/datasets/qqvDYx7RtHZd) |
| 7 | Modern Arm Chair | furniture | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/KLBJAuie9JaB/dataset) | [View](https://dx.gl/datasets/KLBJAuie9JaB) |
| 8 | Multi Cleaner 5L | product | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/79gDW15Gw9Ft/dataset) | [View](https://dx.gl/datasets/79gDW15Gw9Ft) |
| 9 | Potted Plant | organic | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/o4c5zRyGuT7W/dataset) | [View](https://dx.gl/datasets/o4c5zRyGuT7W) |
| 10 | Wet Floor Sign | plastic | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/tHdRul1GzzoU/dataset) | [View](https://dx.gl/datasets/tHdRul1GzzoU) |

All source models from [Polyhaven](https://polyhaven.com) (CC0).
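
If you prefer not to use the bundled `download_all.py`, the table's URLs follow one pattern and can be fetched with the standard library alone. This stdlib-only sketch is ours, built from the per-object ZIP links above:

```python
import os
import urllib.request

# Dataset IDs copied from the table above.
DATASET_IDS = {
    "apple": "vCHDLxjWG65d",
    "cash_register": "JfjLRexr6J7z",
    "drill": "A0dcsk7HHgAg",
    "fire_extinguisher": "cLgyqM5mhQoq",
    "led_lightbulb": "ZuYmv3K9xN7u",
    "measuring_tape": "qqvDYx7RtHZd",
    "modern_arm_chair": "KLBJAuie9JaB",
    "multi_cleaner_5l": "79gDW15Gw9Ft",
    "potted_plant": "o4c5zRyGuT7W",
    "wet_floor_sign": "tHdRul1GzzoU",
}

def zip_url(dataset_id):
    """ZIP endpoint, matching the per-object links in the table."""
    return f"https://dx.gl/api/v/EJbs8npt2RVM/{dataset_id}/dataset"

def download_all(dest="datasets"):
    """Fetch every dataset ZIP, skipping files that already exist."""
    os.makedirs(dest, exist_ok=True)
    for name, dataset_id in DATASET_IDS.items():
        out = os.path.join(dest, f"{name}.zip")
        if not os.path.exists(out):
            urllib.request.urlretrieve(zip_url(dataset_id), out)
```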

## Pre-trained 3DGS Splats

We include pre-trained Gaussian Splat `.ply` files (nerfstudio splatfacto, 20k iterations, SH degree 3) for each object. Download them with:

```bash
python download_all.py --splats
```

Or view them directly:

- [DX.GL Splat Viewer](https://dx.gl/splat/index.html) (all 10 models, use ← → to browse)
- [SuperSplat Editor](https://superspl.at/editor) (drag-drop the .ply)
- nerfstudio viewer: `ns-viewer --load-config outputs/*/config.yml`

### Training Parameters

```bash
ns-train splatfacto --data ./dataset \
  --max-num-iterations 20000 \
  --pipeline.model.sh-degree 3 \
  --pipeline.model.background-color white \
  --pipeline.model.cull-alpha-thresh 0.2 \
  --pipeline.model.densify-size-thresh 0.005 \
  --pipeline.model.use-scale-regularization True \
  --pipeline.model.max-gauss-ratio 5.0
```

Training time: ~10 minutes on an RTX 4000 Pro Ada (70W) at the 196-view, 1024×1024 tier.

## Rendering Pipeline

Datasets are rendered using [DX.GL](https://dx.gl)'s cloud GPU rendering pipeline:

- **Lighting**: Studio HDRI environment with PBR materials
- **Camera**: Fibonacci golden-angle sphere distribution
- **Depth**: Tight near/far planes from model bounding sphere for maximum precision
- **Point cloud**: Back-projected from depth maps, ~1000 points per view
- **Background**: Transparent (RGBA)
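
Back-projecting a depth map with the `transforms.json` intrinsics looks roughly like this. A sketch under stated assumptions: we assume the OpenGL-style camera axes (x right, y up, camera looking down −z) that nerfstudio's `transforms.json` uses; flip the `y`/`z` signs if your conventions differ:

```python
import numpy as np

def backproject(depth, c2w, fl_x, fl_y, cx, cy):
    """Back-project a metric depth map (H, W) to world-space points.

    Assumes OpenGL-style camera axes (camera looks down -z), as in
    nerfstudio's transforms.json; returns an (H*W, 3) array.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fl_x
    y = -(v - cy) * depth / fl_y   # image v grows downward, camera y points up
    z = -depth                     # camera looks down -z
    pts_cam = np.stack([x, y, z, np.ones_like(depth)], axis=-1)
    return (pts_cam.reshape(-1, 4) @ c2w.T)[:, :3]
```

Subsampling the result per view (and coloring from the RGB frame) gives a point cloud like `points3D.ply`.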

## Modalities

| Modality | Format | Notes |
|---|---|---|
| **RGB** | PNG, RGBA | Transparent background, PBR-lit |
| **Depth (8-bit)** | PNG, grayscale | Normalized to near/far range |
| **Depth (16-bit)** | PNG, RG-encoded | 16-bit depth packed into two 8-bit channels for higher precision |
| **Normals** | PNG, RGB | World-space, MeshNormalMaterial |
| **Masks** | PNG, grayscale | Binary masks thresholded from the RGBA alpha channel |
| **Point Cloud** | PLY, binary | XYZ + RGB, ~200k points |
| **Camera Poses** | JSON | 4×4 camera-to-world matrices |
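
Recovering metric depth from the normalized 8-bit maps uses the `depth_near`/`depth_far` values in `transforms.json`. A minimal sketch, assuming 0 maps to `depth_near` and 255 to `depth_far` (verify the direction against your renders before relying on it):

```python
import numpy as np

def depth_to_metric(depth_png, depth_near, depth_far):
    """Map an 8-bit normalized depth image (0..255) back to metric
    depth using the near/far planes from transforms.json.

    Assumes a linear mapping with 0 -> depth_near and 255 -> depth_far.
    """
    t = depth_png.astype(np.float64) / 255.0
    return depth_near + t * (depth_far - depth_near)
```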

## License

All source 3D models are **CC0** (public domain) from [Polyhaven](https://polyhaven.com). The rendered datasets inherit this license — use them for anything, no attribution required.

## Citation

```bibtex
@misc{dxgl_multiview_2026,
  title  = {DX.GL Multi-View Datasets for NeRF and 3D Gaussian Splatting},
  author = {DXGL},
  year   = {2026},
  url    = {https://huggingface.co/datasets/dxgl/multiview-datasets},
  note   = {Multi-view datasets with depth, normals, masks, and point clouds. Rendered via DX.GL.}
}
```

## Links

- **This collection**: [dx.gl/datasets/polyhaven-10](https://dx.gl/datasets/polyhaven-10)
- **Browse all datasets**: [dx.gl/datasets](https://dx.gl/datasets)
- **Pipeline details**: [dx.gl/for-research](https://dx.gl/for-research)
- **API documentation**: [dx.gl/portal/docs](https://dx.gl/portal/docs)
- **Generate your own**: [dx.gl/signup](https://dx.gl/signup) (2 free renders included)

## Feedback

We're actively improving the rendering pipeline. If you find issues with depth accuracy, mask quality, camera calibration, or view distribution — please open a Discussion on this repo. Specific feedback we're looking for:

- Depth map accuracy at object edges
- Mask quality for transparent/reflective materials
- Point cloud alignment with RGB views
- View distribution quality for your training method
- Missing modalities or metadata
- Any other issues or suggestions?