---
license: cc0-1.0
task_categories:
- image-to-3d
- depth-estimation
tags:
- nerf
- 3d-gaussian-splatting
- 3dgs
- nerfstudio
- multi-view
- depth-maps
- normal-maps
- point-cloud
- computer-vision
- 3d-reconstruction
pretty_name: "DX.GL Multi-View Datasets"
size_categories:
- 1K<n<10K
---

# DX.GL Multi-View Datasets for NeRF & 3D Gaussian Splatting

Multi-view training datasets rendered from CC0 3D models via [DX.GL](https://dx.gl). Each dataset includes calibrated camera poses, depth maps, normal maps, binary masks, and point clouds — ready for [nerfstudio](https://docs.nerf.studio/) out of the box.

**10 objects × 196 views × 1024×1024 resolution × full-sphere coverage.**

## Quick Start

```bash
# Download a dataset (Apple, 196 views, 1024x1024)
wget https://dx.gl/api/v/EJbs8npt2RVM/vCHDLxjWG65d/dataset -O apple.zip
unzip apple.zip -d apple

# Train with nerfstudio
pip install nerfstudio
ns-train splatfacto --data ./apple \
  --max-num-iterations 20000 \
  --pipeline.model.sh-degree 3 \
  --pipeline.model.background-color white
```

Or use the download script:

```bash
pip install requests
python download_all.py
```

## What's in Each Dataset ZIP

```
dataset/
├── images/          # RGB frames (PNG, transparent background)
│   ├── frame_00000.png
│   └── ...
├── depth/           # 8-bit grayscale depth maps
├── depth_16bit/     # 16-bit grayscale depth maps (higher precision)
├── normals/         # World-space normal maps
├── masks/           # Binary alpha masks
├── transforms.json  # Camera poses (nerfstudio / instant-ngp format)
└── points3D.ply     # Sparse point cloud for initialization
```

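After unzipping, the layout above can be sanity-checked with a short script. This is a sketch: `frame_paths` and `missing_files` are hypothetical helper names (not shipped with the dataset), and the zero-padded `frame_00000.png` naming is taken from the `images/` listing above.

```python
from pathlib import Path

# Hypothetical helpers: build the per-modality paths implied by the ZIP
# layout above, then report any frames with missing files.
def frame_paths(root, idx):
    name = f"frame_{idx:05d}.png"      # zero-padded naming from images/
    root = Path(root)
    return {
        "rgb": root / "images" / name,
        "depth": root / "depth" / name,
        "depth16": root / "depth_16bit" / name,
        "normal": root / "normals" / name,
        "mask": root / "masks" / name,
    }

def missing_files(root, num_frames=196):
    return [(i, kind)
            for i in range(num_frames)
            for kind, path in frame_paths(root, i).items()
            if not path.exists()]

# Usage after extraction: assert not missing_files("apple"), "incomplete ZIP"
```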
### transforms.json Format

Compatible with both **nerfstudio** and **instant-ngp**:

```json
{
  "camera_angle_x": 0.857,
  "camera_angle_y": 0.857,
  "fl_x": 1120.9,
  "fl_y": 1120.9,
  "cx": 512,
  "cy": 512,
  "w": 1024,
  "h": 1024,
  "depth_near": 0.85,
  "depth_far": 2.35,
  "ply_file_path": "points3D.ply",
  "frames": [
    {
      "file_path": "images/frame_00000.png",
      "depth_file_path": "depth/frame_00000.png",
      "normal_file_path": "normals/frame_00000.png",
      "mask_file_path": "masks/frame_00000.png",
      "transform_matrix": [[...], [...], [...], [0, 0, 0, 1]]
    }
  ]
}
```

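Outside nerfstudio, the file is plain JSON. A minimal sketch of pulling out the pinhole intrinsics and poses follows; `parse_transforms` is a hypothetical helper name, and the field names are taken from the example above.

```python
import numpy as np

# Hypothetical helper: extract the intrinsics matrix K and per-frame
# camera-to-world poses from a parsed transforms.json dict.
def parse_transforms(meta):
    K = np.array([
        [meta["fl_x"], 0.0, meta["cx"]],
        [0.0, meta["fl_y"], meta["cy"]],
        [0.0, 0.0, 1.0],
    ])
    # Stack one 4x4 camera-to-world matrix per frame into an (N, 4, 4) array.
    poses = np.stack([np.asarray(f["transform_matrix"], dtype=np.float64)
                      for f in meta["frames"]])
    return K, poses

# Usage:
#   import json
#   with open("apple/transforms.json") as f:
#       K, poses = parse_transforms(json.load(f))
```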
## Specs

| Property | Value |
|---|---|
| **Views** | 196 per object |
| **Resolution** | 1024×1024 |
| **Coverage** | Full sphere (±89° elevation) |
| **Point cloud** | ~200k points |
| **Camera distribution** | Fibonacci golden-angle spiral |
| **Background** | Transparent (RGBA) |
| **Lighting** | Studio HDRI + directional lights |

## Camera Distribution

Views are distributed on a full sphere (±89° elevation) using a golden-angle Fibonacci spiral. The distribution is uniform in solid angle — more views near the equator, fewer near the poles — optimized for NeRF/3DGS training.

![Camera distribution](https://dx.gl/img/docs/fibonacci-sphere.png)

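The sampling scheme above can be sketched as follows. Treating the spiral as uniform in z with elevation clamped to ±89° is an assumption based on the spec table; the exact DX.GL sampler may differ in detail.

```python
import numpy as np

# Sketch of a golden-angle Fibonacci spiral on the sphere (an assumption
# of how the views are laid out, not the production sampler).
def fibonacci_sphere(n, max_elev_deg=89.0):
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))   # ~2.39996 rad
    i = np.arange(n)
    z_max = np.sin(np.radians(max_elev_deg))      # clamp elevation to +/-89 deg
    z = z_max * (1.0 - 2.0 * (i + 0.5) / n)       # uniform in z => uniform in solid angle
    r = np.sqrt(1.0 - z * z)                      # radius of the xy circle at height z
    theta = golden_angle * i                      # spiral azimuth
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

dirs = fibonacci_sphere(196)   # (196, 3) unit vectors toward each camera
```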
## Objects

| # | Object | Category | Download | Browse |
|---|---|---|---|---|
| 1 | Apple | organic | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/vCHDLxjWG65d/dataset) | [View](https://dx.gl/datasets/vCHDLxjWG65d) |
| 2 | Cash Register | electronics | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/JfjLRexr6J7z/dataset) | [View](https://dx.gl/datasets/JfjLRexr6J7z) |
| 3 | Drill | tool | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/A0dcsk7HHgAg/dataset) | [View](https://dx.gl/datasets/A0dcsk7HHgAg) |
| 4 | Fire Extinguisher | metallic | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/cLgyqM5mhQoq/dataset) | [View](https://dx.gl/datasets/cLgyqM5mhQoq) |
| 5 | LED Lightbulb | glass | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/ZuYmv3K9xN7u/dataset) | [View](https://dx.gl/datasets/ZuYmv3K9xN7u) |
| 6 | Measuring Tape | tool | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/qqvDYx7RtHZd/dataset) | [View](https://dx.gl/datasets/qqvDYx7RtHZd) |
| 7 | Modern Arm Chair | furniture | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/KLBJAuie9JaB/dataset) | [View](https://dx.gl/datasets/KLBJAuie9JaB) |
| 8 | Multi Cleaner 5L | product | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/79gDW15Gw9Ft/dataset) | [View](https://dx.gl/datasets/79gDW15Gw9Ft) |
| 9 | Potted Plant | organic | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/o4c5zRyGuT7W/dataset) | [View](https://dx.gl/datasets/o4c5zRyGuT7W) |
| 10 | Wet Floor Sign | plastic | [ZIP](https://dx.gl/api/v/EJbs8npt2RVM/tHdRul1GzzoU/dataset) | [View](https://dx.gl/datasets/tHdRul1GzzoU) |

All source models from [Polyhaven](https://polyhaven.com) (CC0).

## Pre-trained 3DGS Splats

We include pre-trained Gaussian Splat `.ply` files (nerfstudio splatfacto, 20k iterations, SH degree 3) for each object. Download them with:

```bash
python download_all.py --splats
```

Or view them directly:

- [DX.GL Splat Viewer](https://dx.gl/splat/index.html) (all 10 models, use ← → to browse)
- [SuperSplat Editor](https://superspl.at/editor) (drag-drop the .ply)
- nerfstudio viewer: `ns-viewer --load-config outputs/*/config.yml`

### Training Parameters

```bash
ns-train splatfacto --data ./dataset \
  --max-num-iterations 20000 \
  --pipeline.model.sh-degree 3 \
  --pipeline.model.background-color white \
  --pipeline.model.cull-alpha-thresh 0.2 \
  --pipeline.model.densify-size-thresh 0.005 \
  --pipeline.model.use-scale-regularization True \
  --pipeline.model.max-gauss-ratio 5.0
```

Training time: ~10 minutes on an RTX 4000 Pro Ada (70 W) for 196 views at 1024×1024.

## Rendering Pipeline

Datasets are rendered using [DX.GL](https://dx.gl)'s cloud GPU rendering pipeline:

- **Lighting**: Studio HDRI environment with PBR materials
- **Camera**: Fibonacci golden-angle sphere distribution
- **Depth**: Tight near/far planes from the model's bounding sphere for maximum precision
- **Point cloud**: Back-projected from depth maps, ~1000 points per view
- **Background**: Transparent (RGBA)

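The back-projection step can be sketched like this. The OpenGL-style camera (looking down −Z, y up) is an assumption carried over from the nerfstudio/instant-ngp convention, and `backproject` is a hypothetical helper name:

```python
import numpy as np

# Sketch: lift a metric depth map into world-space points using pinhole
# intrinsics K and a 4x4 camera-to-world pose c2w. Assumes an OpenGL-style
# camera (looks down -Z, y up).
def backproject(depth, K, c2w):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)  # pixel centers
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = -(v - K[1, 2]) / K[1, 1] * depth   # image y grows downward; camera y up
    z = -depth                             # camera looks along -Z
    pts = np.stack([x, y, z, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    return (pts @ c2w.T)[:, :3]            # world-space XYZ, one row per pixel
```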
## Modalities

| Modality | Format | Notes |
|---|---|---|
| **RGB** | PNG, RGBA | Transparent background, PBR-lit |
| **Depth (8-bit)** | PNG, grayscale | Normalized to near/far range |
| **Depth (16-bit)** | PNG, grayscale | RG-encoded, higher precision |
| **Normals** | PNG, RGB | World-space, MeshNormalMaterial |
| **Masks** | PNG, grayscale | Binary alpha from the RGBA alpha channel |
| **Point Cloud** | PLY, binary | XYZ + RGB, ~200k points |
| **Camera Poses** | JSON | 4×4 camera-to-world matrices |

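For the 8-bit maps, metric depth can be recovered by un-mapping pixel values onto the `[depth_near, depth_far]` range from transforms.json. A linear encoding is an assumption here, and `decode_depth8` is a hypothetical helper; check against the 16-bit maps if precision matters.

```python
import numpy as np

# Sketch: map a normalized 8-bit depth image back to metric depth, assuming
# values were linearly normalized onto [near, far].
def decode_depth8(depth_u8, near, far):
    t = depth_u8.astype(np.float32) / 255.0
    return near + t * (far - near)

# With the depth_near / depth_far values from the transforms.json example:
metric = decode_depth8(np.array([0, 128, 255], dtype=np.uint8), 0.85, 2.35)
```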
## License

All source 3D models are **CC0** (public domain) from [Polyhaven](https://polyhaven.com). The rendered datasets inherit this license — use them for anything, no attribution required.

## Citation

```bibtex
@misc{dxgl_multiview_2026,
  title = {DX.GL Multi-View Datasets for NeRF and 3D Gaussian Splatting},
  author = {DXGL},
  year = {2026},
  url = {https://huggingface.co/datasets/dxgl/multiview-datasets},
  note = {Multi-view datasets with depth, normals, masks, and point clouds. Rendered via DX.GL.}
}
```

## Links

- **This collection**: [dx.gl/datasets/polyhaven-10](https://dx.gl/datasets/polyhaven-10)
- **Browse all datasets**: [dx.gl/datasets](https://dx.gl/datasets)
- **Pipeline details**: [dx.gl/for-research](https://dx.gl/for-research)
- **API documentation**: [dx.gl/portal/docs](https://dx.gl/portal/docs)
- **Generate your own**: [dx.gl/signup](https://dx.gl/signup) (2 free renders included)

## Feedback

We're actively improving the rendering pipeline. If you find issues with depth accuracy, mask quality, camera calibration, or view distribution, please open a Discussion on this repo. Specific feedback we're looking for:

- Depth map accuracy at object edges
- Mask quality for transparent/reflective materials
- Point cloud alignment with RGB views
- View distribution quality for your training method
- Missing modalities or metadata
- Any other issues or suggestions