---
license: cc0-1.0
task_categories:
  - image-to-3d
  - depth-estimation
tags:
  - nerf
  - 3d-gaussian-splatting
  - 3dgs
  - nerfstudio
  - multi-view
  - depth-maps
  - normal-maps
  - point-cloud
  - computer-vision
  - 3d-reconstruction
pretty_name: DX.GL Multi-View Datasets
size_categories:
  - 1K<n<10K
---

# DX.GL Multi-View Datasets for NeRF & 3D Gaussian Splatting

Multi-view training datasets rendered from CC0 3D models via DX.GL. Each dataset includes calibrated camera poses, depth maps, normal maps, binary masks, and point clouds — ready for nerfstudio out of the box.

10 objects × 196 views × 1024×1024 resolution × full sphere coverage.

## Quick Start

```bash
# Download a dataset (Apple, 196 views, 1024x1024)
wget https://dx.gl/api/v/EJbs8npt2RVM/vCHDLxjWG65d/dataset -O apple.zip
unzip apple.zip -d apple

# Train with nerfstudio
pip install nerfstudio
ns-train splatfacto --data ./apple \
  --max-num-iterations 20000 \
  --pipeline.model.sh-degree 3 \
  --pipeline.model.background-color white
```

Or use the download script:

```bash
pip install requests
python download_all.py
```

## What's in Each Dataset ZIP

```
dataset/
├── images/           # RGB frames (PNG, transparent background)
│   ├── frame_00000.png
│   └── ...
├── depth/            # 8-bit grayscale depth maps
├── depth_16bit/      # 16-bit depth maps (higher precision)
├── normals/          # World-space normal maps
├── masks/            # Binary alpha masks
├── transforms.json   # Camera poses (nerfstudio / instant-ngp format)
└── points3D.ply      # Sparse point cloud for initialization
```
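
After unzipping, a quick layout check like the following confirms a download matches the tree above. This is a stdlib-only sketch; the `check_dataset` helper is ours, not part of the dataset tooling:

```python
from pathlib import Path

# Directories and files every dataset ZIP should contain (from the tree above)
EXPECTED_DIRS = ["images", "depth", "depth_16bit", "normals", "masks"]
EXPECTED_FILES = ["transforms.json", "points3D.ply"]

def check_dataset(root):
    """Return a list of missing entries; an empty list means the layout is complete."""
    root = Path(root)
    missing = [d for d in EXPECTED_DIRS if not (root / d).is_dir()]
    missing += [f for f in EXPECTED_FILES if not (root / f).is_file()]
    return missing
```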

## `transforms.json` Format

Compatible with both nerfstudio and instant-ngp:

```json
{
  "camera_angle_x": 0.857,
  "camera_angle_y": 0.857,
  "fl_x": 693.5,
  "fl_y": 693.5,
  "cx": 400,
  "cy": 400,
  "w": 800,
  "h": 800,
  "depth_near": 0.85,
  "depth_far": 2.35,
  "ply_file_path": "points3D.ply",
  "frames": [
    {
      "file_path": "images/frame_00000.png",
      "depth_file_path": "depth/frame_00000.png",
      "normal_file_path": "normals/frame_00000.png",
      "mask_file_path": "masks/frame_00000.png",
      "transform_matrix": [[...], [...], [...], [0, 0, 0, 1]]
    }
  ]
}
```
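
As a sketch of how these intrinsics are consumed, the snippet below builds the 3×3 intrinsics matrix and unprojects a pixel to a camera-space viewing ray. The helper names are ours (not nerfstudio API), and the sign flips assume the OpenGL camera convention used by nerfstudio/instant-ngp (x right, y up, camera looking down −z) with pixel `v` increasing downward; verify against your loader:

```python
import math

def load_intrinsics(transforms):
    """Build a 3x3 intrinsics matrix K from a transforms.json-style dict."""
    fx, fy = transforms["fl_x"], transforms["fl_y"]
    cx, cy = transforms["cx"], transforms["cy"]
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]

def pixel_to_camera_ray(K, u, v):
    """Unproject pixel (u, v) to a unit-length camera-space ray direction."""
    x = (u - K[0][2]) / K[0][0]
    y = (v - K[1][2]) / K[1][1]
    # OpenGL convention (assumed): flip y, camera looks down -z
    d = (x, -y, -1.0)
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

# Using the sample intrinsics shown above
sample = {"fl_x": 693.5, "fl_y": 693.5, "cx": 400, "cy": 400}
K = load_intrinsics(sample)
ray = pixel_to_camera_ray(K, 400, 400)  # principal point -> straight ahead
```

Multiplying the resulting ray by a frame's `transform_matrix` (camera-to-world) yields the world-space ray used for NeRF sampling.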

## Specs

| Property | Value |
|----------|-------|
| Views | 196 per object |
| Resolution | 1024×1024 |
| Coverage | Full sphere (±89° elevation) |
| Point cloud | ~200k points |
| Camera distribution | Fibonacci golden-angle spiral |
| Background | Transparent (RGBA) |
| Lighting | Studio HDRI + directional lights |

## Camera Distribution

Views are distributed over a full sphere (±89° elevation) using a golden-angle Fibonacci spiral. The distribution is uniform in solid angle (equal area per view), which places more views per elevation ring near the equator and fewer near the poles, a good fit for NeRF/3DGS training.
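
The distribution can be sketched in a few lines of stdlib Python. This is our illustration of a golden-angle Fibonacci spiral clamped to the stated elevation band; DX.GL's exact implementation may differ in details such as ordering or band handling:

```python
import math

def fibonacci_sphere(n, max_elev_deg=89.0):
    """n unit-sphere camera positions on a golden-angle spiral within +/- max_elev_deg."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    s = math.sin(math.radians(max_elev_deg))   # clamp of |z| for the elevation band
    pts = []
    for i in range(n):
        # z uniform in (-s, s) -> uniform in solid angle within the band
        z = s * (1.0 - 2.0 * (i + 0.5) / n)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        theta = golden * i
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts

views = fibonacci_sphere(196)  # matches the per-object view count above
```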


## Objects

| # | Object | Category | Download | Browse |
|---|--------|----------|----------|--------|
| 1 | Apple | organic | ZIP | View |
| 2 | Cash Register | electronics | ZIP | View |
| 3 | Drill | tool | ZIP | View |
| 4 | Fire Extinguisher | metallic | ZIP | View |
| 5 | LED Lightbulb | glass | ZIP | View |
| 6 | Measuring Tape | tool | ZIP | View |
| 7 | Modern Arm Chair | furniture | ZIP | View |
| 8 | Multi Cleaner 5L | product | ZIP | View |
| 9 | Potted Plant | organic | ZIP | View |
| 10 | Wet Floor Sign | plastic | ZIP | View |

All source models from Polyhaven (CC0).

## Pre-trained 3DGS Splats

We include pre-trained Gaussian Splat .ply files (nerfstudio splatfacto, 20k iterations, SH degree 3) for each object. Download them with:

```bash
python download_all.py --splats
```

Or view them directly:

## Training Parameters

```bash
ns-train splatfacto --data ./dataset \
  --max-num-iterations 20000 \
  --pipeline.model.sh-degree 3 \
  --pipeline.model.background-color white \
  --pipeline.model.cull-alpha-thresh 0.2 \
  --pipeline.model.densify-size-thresh 0.005 \
  --pipeline.model.use-scale-regularization True \
  --pipeline.model.max-gauss-ratio 5.0
```

Training time: ~10 minutes on an RTX 4000 Pro Ada (70 W) at the 196-view, 1024×1024 tier.

## Rendering Pipeline

Datasets are rendered using DX.GL's cloud GPU rendering pipeline:

- **Lighting:** Studio HDRI environment with PBR materials
- **Camera:** Fibonacci golden-angle sphere distribution
- **Depth:** Tight near/far planes from the model bounding sphere for maximum precision
- **Point cloud:** Back-projected from depth maps, ~1,000 points per view
- **Background:** Transparent (RGBA)
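
The back-projection step can be sketched as follows. This assumes the 8-bit depth values are linear between `depth_near` and `depth_far` (as the normalization above suggests) and uses the OpenGL camera convention; the function name and stride parameter are ours:

```python
def backproject_depth(depth, fx, fy, cx, cy, near, far, stride=8):
    """Back-project a normalized depth map (values in [0, 1]) to camera-space points.

    Sketch only: assumes depth is linear in view-space z between near and far,
    which may differ from DX.GL's exact encoding.
    """
    h, w = len(depth), len(depth[0])
    points = []
    for v in range(0, h, stride):
        for u in range(0, w, stride):
            d = depth[v][u]
            if d <= 0.0:                    # background pixels carry no depth
                continue
            z = near + d * (far - near)     # de-normalize to metric depth
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, -y, -z))      # OpenGL: y up, camera looks down -z
    return points

# Toy example: a flat 16x16 depth map halfway between the near/far planes
depth = [[0.5] * 16 for _ in range(16)]
pts = backproject_depth(depth, fx=16, fy=16, cx=8, cy=8, near=0.85, far=2.35)
```

Transforming each point by the frame's `transform_matrix` and sampling the RGB image at the same pixel yields the colored `points3D.ply` cloud.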

## Modalities

| Modality | Format | Notes |
|----------|--------|-------|
| RGB | PNG, RGBA | Transparent background, PBR-lit |
| Depth (8-bit) | PNG, grayscale | Normalized to the near/far range |
| Depth (16-bit) | PNG | RG-encoded, higher precision |
| Normals | PNG, RGB | World-space, MeshNormalMaterial |
| Masks | PNG, grayscale | Binary mask from the RGBA alpha channel |
| Point cloud | PLY, binary | XYZ + RGB, ~200k points |
| Camera poses | JSON | 4×4 camera-to-world matrices |
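
Decoding an RG-encoded 16-bit depth pixel might look like the sketch below. The byte order (high byte in R, low byte in G) is our assumption, since the encoding is not specified here; verify against the data before relying on it:

```python
def decode_rg_depth(r, g, near=0.85, far=2.35):
    """Decode one RG-encoded depth pixel (two 8-bit channels) to metric depth.

    Assumption: R holds the high byte and G the low byte of a 16-bit value
    normalized to the [depth_near, depth_far] range from transforms.json.
    """
    t = (r * 256 + g) / 65535.0   # reassemble 16-bit value, map to [0, 1]
    return near + t * (far - near)
```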

## License

All source 3D models are CC0 (public domain) from Polyhaven. The rendered datasets inherit this license — use them for anything, no attribution required.

## Citation

```bibtex
@misc{dxgl_multiview_2026,
  title  = {DX.GL Multi-View Datasets for NeRF and 3D Gaussian Splatting},
  author = {DXGL},
  year   = {2026},
  url    = {https://huggingface.co/datasets/dxgl/multiview-datasets},
  note   = {Multi-view datasets with depth, normals, masks, and point clouds. Rendered via DX.GL.}
}
```

## Links

## Feedback

We're actively improving the rendering pipeline. If you find issues with depth accuracy, mask quality, camera calibration, or view distribution — please open a Discussion on this repo. Specific feedback we're looking for:

- Depth map accuracy at object edges
- Mask quality for transparent/reflective materials
- Point cloud alignment with RGB views
- View distribution quality for your training method
- Missing modalities or metadata
- Any other issues or suggestions