Cloth BRDF Dataset (Multi-View Multi-Light HDR)
Multi-view multi-light HDR captures of cloth materials with associated 3D geometry, designed for inverse rendering and BRDF estimation.
Snapshot: 500 materials, 3.62 TB total (3,622 GB), 9,853 files.
- License: CC-BY-4.0 (see LICENSE)
- Croissant metadata: croissant.jsonld
- Sample subset (≤4 GB): sample/ — for quick reviewer inspection
- Globals: globals/ — calibration, sample-size table, train/test splits
- Examples: examples/load_material.py
Dataset summary
Cloth-BRDF is a large-scale dataset of densely sampled cloth material appearances captured under controlled multi-view, multi-light conditions. Each material is represented by a structured set of HDR observations, per-camera and per-light pose information, and a sparse 3D point cloud reconstructed from the calibration imagery.
The dataset is intended to enable BRDF / SVBRDF estimation, photometric stereo benchmarks, and inverse-rendering research that requires reliable multi-view, multi-light ground truth on real-world cloth samples.
Repository layout
```
materials/{id}/
  hdr.tar                      # 500-585 16-bit HDR PNGs (multi-view, multi-light)
  observations_structured.npz  # xyz, point_ids, rgbs, cam_pos, light_pos arrays
  point_positions.npz          # sparse 3D point cloud
  rotated_camera.json          # per-view camera poses (registered to rig frame)
  scan_log.json                # per-scan camera + light pose log
  point_metadata.json          # observation count, point count
  bbox.json                    # sample bounding box
  hdr_crop_bboxes.json         # per-image crop polygon + dilation metadata
  unmatched_scan_ids.json      # scans without camera/light correspondence
globals/
  camera_factor.json           # per-camera intensity correction factors
  emitter_calibration.json     # light emitter intensity profile (degrees)
  sample_size.json             # material physical-size groups
  training_list_{N}.txt        # training splits at N = 100, 300, 442, 500
  test_list_{N}.txt            # corresponding test splits
sample/                        # ≤4 GB downsampled subset for reviewer inspection
examples/load_material.py      # minimal loader
croissant.jsonld               # MLCommons Croissant metadata (core + RAI)
LICENSE                        # CC-BY-4.0
```
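A quick way to see what each per-material file holds is to print keys and shapes rather than assume them. The sketch below walks one hypothetical local material directory (`materials/0/`); it takes no internal key names for granted.

```python
import json
import pathlib

import numpy as np

mat_dir = pathlib.Path("materials/0")  # hypothetical local copy of one material

# The .npz archives expose named arrays; print their names and shapes.
for npz_name in ("observations_structured.npz", "point_positions.npz"):
    with np.load(mat_dir / npz_name) as data:
        print(npz_name, {k: data[k].shape for k in data.files})

# The JSON sidecars are small; print top-level keys to see what each holds.
for json_name in ("rotated_camera.json", "scan_log.json", "point_metadata.json", "bbox.json"):
    with open(mat_dir / json_name) as f:
        meta = json.load(f)
    summary = list(meta)[:5] if isinstance(meta, dict) else f"list of {len(meta)}"
    print(json_name, summary)
```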
Data collection
Captures performed with a custom rig combining a robot-arm-mounted
camera, an array of LED light sources at calibrated positions, and a
sample-holding platform with markers for pose recovery. Each material
sample is mounted flat and imaged from <FILL_IN: number> camera viewpoints
under <FILL_IN: number> lighting conditions, producing roughly 500-585
16-bit HDR PNG images per material. A sparse 3D reconstruction (COLMAP)
recovers point geometry and registers cameras into the rig coordinate
frame. Per-pixel observations are then assembled into a structured npz
with (point, camera, light) indexing.
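To illustrate the (point, camera, light) indexing, the sketch below gathers every observation of a single surface point — the input a per-point BRDF fit would consume. The array names come from observations_structured.npz as listed in the layout above, but the shapes noted in the comments are assumptions to verify against the actual files.

```python
import numpy as np

obs = np.load("materials/0/observations_structured.npz")
xyz = obs["xyz"]              # assumed (P, 3): sparse point positions
point_ids = obs["point_ids"]  # assumed (N,): point index of each observation
rgbs = obs["rgbs"]            # assumed (N, 3): HDR radiance samples
cam_pos = obs["cam_pos"]      # assumed (N, 3): camera position per observation
light_pos = obs["light_pos"]  # assumed (N, 3): light position per observation

# Gather every observation of one surface point, as a per-point BRDF fit would.
pid = 0
mask = point_ids == pid
view_dirs = cam_pos[mask] - xyz[pid]
light_dirs = light_pos[mask] - xyz[pid]
view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
print(rgbs[mask].shape, view_dirs.shape, light_dirs.shape)
```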
Annotations
No human annotation. All metadata (camera poses, light positions, per-pixel observations, point cloud) is derived from the calibration pipeline.
Loading
```
pip install huggingface_hub numpy pillow
python examples/load_material.py --mid 0
```
For batched / streamed loading, treat each materials/{id}/hdr.tar as a WebDataset shard.
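As a minimal alternative to a full WebDataset pipeline, you can stream a shard with the standard library. This sketch downloads one material's hdr.tar via huggingface_hub (repo id from this card) and decodes the first image; note that Pillow's support for 16-bit-per-channel RGB PNGs is limited, so imageio or OpenCV may be needed depending on the exact encoding.

```python
import io
import tarfile

import numpy as np
from huggingface_hub import hf_hub_download
from PIL import Image

tar_path = hf_hub_download(
    repo_id="koalapenguin/cloth-brdf",
    filename="materials/0/hdr.tar",
    repo_type="dataset",
)

with tarfile.open(tar_path) as tar:
    for member in tar:
        if not member.name.endswith(".png"):
            continue
        img = Image.open(io.BytesIO(tar.extractfile(member).read()))
        arr = np.asarray(img)  # Pillow may truncate 16-bit RGB PNGs to 8-bit;
        print(member.name, arr.shape, arr.dtype)  # use imageio/OpenCV if so
        break  # inspect just the first image
```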
Splits
Training/test splits are provided at four scales. Choose the variant matching your experiment:

| Split file | Materials | Use case |
|---|---|---|
| globals/training_list_100.txt + test_list_100.txt | 100 | small-scale ablations |
| globals/training_list_300.txt + test_list_300.txt | 300 | medium-scale benchmarks |
| globals/training_list_442.txt + test_list_442.txt | 442 | full set prior to material 400 (small-npz outlier) |
| globals/training_list_500.txt + test_list_500.txt | 500 | full dataset |
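A split file can be fetched and read directly. The file names below come from the table above; the one-material-id-per-line format is an assumption.

```python
from huggingface_hub import hf_hub_download

train_file = hf_hub_download(
    repo_id="koalapenguin/cloth-brdf",
    filename="globals/training_list_100.txt",
    repo_type="dataset",
)
with open(train_file) as f:
    train_ids = [line.strip() for line in f if line.strip()]
print(len(train_ids), train_ids[:5])
```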
Limitations
- Single capture rig: rig-specific calibration assumptions (lens model, light intensity profile, geometric layout) are baked into the data.
- Cloth deformations are not modelled — samples are flat-mounted and imaged in a planar configuration.
- Specular highlights at grazing angles may be clipped despite the 16-bit HDR encoding.
- Sparse 3D points (typically <FILL_IN: range> per material) are derived from feature-matched calibration imagery rather than dense scanning.
Biases
- Material distribution is biased toward fabrics readily available in <FILL_IN: e.g. North American retail / lab partner suppliers>; not a representative cross-section of global textile diversity.
- Lighting hemisphere only (no transmissive setups, no sub-surface scattering captures).
- HDR capture, while wide-range, may saturate on very specular or very dark materials.
Personal / sensitive information
None. Data consists exclusively of cloth material captures. No people, no faces, no identifiable subjects, no personally-identifying metadata.
Intended use cases
- BRDF / SVBRDF estimation
- Photometric stereo benchmarks
- Multi-view inverse rendering
- Neural appearance models conditioned on geometry + lighting
- Material classification or retrieval research
Social impact
- Intended for graphics, vision, and inverse-rendering research.
- No known dual-use risk: cloth material captures are physical-object measurements with no human subjects, no personally-identifying metadata, and no operational security implications.
- Indirect downstream uses might include realistic cloth rendering for games, films, or virtual try-on; the dataset itself does not enable surveillance or harm.
Citation
```bibtex
@misc{clothbrdf2026,
  title  = {<FILL_IN: paper title>},
  author = {<FILL_IN: anonymous during review>},
  year   = {2026},
  note   = {NeurIPS 2026 Datasets \& Benchmarks Track submission},
  url    = {https://huggingface.co/datasets/koalapenguin/cloth-brdf}
}
```
Provenance
Pipeline scripts are under scripts/dataset_submission/ in the source repository:
- Capture: capture/capture_pipeline_fixed_center.py
- Reconstruction: COLMAP feature matching + triangulation
- HDR cropping: scripts/dataset_submission/crop_hdr_by_mask.py
- Observation structuring: scripts/dataset_submission/process_all_materials.py
- Upload: scripts/dataset_submission/upload_hf_debug.py
Detailed Croissant prov:wasGeneratedBy records are in croissant.jsonld.