---
license: cc-by-nc-sa-4.0
task_categories:
  - depth-estimation
  - image-to-3d
tags:
  - dust3r
  - 3d-reconstruction
  - stereo-vision
  - pointcloud
  - depth
pretty_name: DUSt3R Preprocessed Training Data
size_categories:
  - n>1T
---

# DUSt3R Preprocessed Training Data

Preprocessed training datasets for [DUSt3R: Geometric 3D Vision Made Easy](https://github.com/naver/dust3r) (CVPR 2024).

These datasets have been preprocessed using the scripts provided in the official DUSt3R repository (`datasets_preprocess/`) and are ready for training.

## Datasets

| Dataset | Size | Parts | Description | Original License |
|---|---|---|---|---|
| `arkitscenes_processed` | 153 GB | 8 (aa~ah) | ARKitScenes - Apple's indoor 3D scene dataset | CC BY-NC-SA 4.0 |
| `blendedmvs_processed` | 79 GB | 4 (aa~ad) | BlendedMVS - Large-scale multi-view stereo dataset | CC BY 4.0 |
| `co3d_processed` | 119 GB | 6 (aa~af) | CO3Dv2 - Common Objects in 3D | CC BY-NC 4.0 |
| `habitat_processed` | 611 GB | 29 (aa~bc) | Habitat-Sim - Photorealistic 3D simulator | See Habitat |
| `megadepth_processed` | 42 GB | 3 (aa~ac) | MegaDepth - Single-view depth prediction from internet photos | See MegaDepth |
| `scannetpp_processed` | 36 GB | 2 (aa~ab) | ScanNet++ - High-fidelity indoor scene dataset | Non-commercial |
| `staticthings3d_processed` | 43 GB | 3 (aa~ac) | StaticThings3D - Synthetic stereo dataset | See StaticThings3D |
| `waymo_processed` | 24 GB | 2 (aa~ab) | Waymo Open Dataset - Autonomous driving dataset | Non-commercial |
| `wildrgbd_processed` | 38 GB | 2 (aa~ab) | WildRGB-D - RGB-D object videos captured in the wild | See WildRGB-D |

**Total:** ~1.1 TB (59 parts)

## File Structure

Each dataset is split into 20 GB parts with the following naming convention:

```
<dataset_name>.tar.part_aa
<dataset_name>.tar.part_ab
<dataset_name>.tar.part_ac
...
<dataset_name>.sha256          # SHA256 checksums for integrity verification
```
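
For reference, this layout is what `tar` piped through `split` produces. A minimal sketch of how one dataset's parts and checksum file could be regenerated (the exact upstream commands are an assumption; `split`'s default two-letter suffixes give `aa`, `ab`, ...):

```bash
# Assumption: parts were produced roughly like this (shown for arkitscenes_processed)
tar -cf - arkitscenes_processed/ | split -b 20G - arkitscenes_processed.tar.part_
sha256sum arkitscenes_processed.tar.part_* > arkitscenes_processed.sha256
```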

## Download

### Download All (bash)

```bash
# Using git lfs
git lfs install
git clone https://huggingface.co/datasets/Yong-Hoon/dust3r-dataset

# Or using huggingface-cli
huggingface-cli download Yong-Hoon/dust3r-dataset --repo-type dataset --local-dir ./data
```

### Download a Specific Dataset (bash)

```bash
huggingface-cli download Yong-Hoon/dust3r-dataset --repo-type dataset --include "arkitscenes_processed.*" --local-dir ./data
```
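
Before committing to a multi-hundred-gigabyte download, you can fetch just the small `.sha256` files to preview each dataset's part list; a sketch using the same `--include` filter:

```bash
# Fetch only the checksum files to see what parts exist before downloading them
huggingface-cli download Yong-Hoon/dust3r-dataset --repo-type dataset --include "*.sha256" --local-dir ./data
```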

### Download All (Python)

```python
from huggingface_hub import snapshot_download

# Download entire dataset
snapshot_download(
    repo_id="Yong-Hoon/dust3r-dataset",
    repo_type="dataset",
    local_dir="./data",
)
```

### Download a Specific Dataset (Python)

```python
from huggingface_hub import snapshot_download

# Download only a specific dataset (e.g., arkitscenes_processed)
snapshot_download(
    repo_id="Yong-Hoon/dust3r-dataset",
    repo_type="dataset",
    local_dir="./data",
    allow_patterns=["arkitscenes_processed.*"],
)
```

### Download Multiple Datasets (Python)

```python
from huggingface_hub import snapshot_download

# Choose datasets to download
datasets_to_download = [
    "arkitscenes_processed",
    "co3d_processed",
    "megadepth_processed",
]

patterns = [f"{name}.*" for name in datasets_to_download]
snapshot_download(
    repo_id="Yong-Hoon/dust3r-dataset",
    repo_type="dataset",
    local_dir="./data",
    allow_patterns=patterns,
)
```

## Checksum Verification

Each dataset includes a `.sha256` file for integrity verification.

### Verify (bash)

```bash
cd data

# Verify a specific dataset
sha256sum -c arkitscenes_processed.sha256

# Verify all datasets
for f in *.sha256; do
    echo "Verifying $f ..."
    sha256sum -c "$f"
done
```

### Verify (Python)

```python
import hashlib
from pathlib import Path


def verify_dataset(data_dir: str, dataset_name: str) -> bool:
    """Verify SHA256 checksums for a dataset."""
    data_path = Path(data_dir)
    sha256_file = data_path / f"{dataset_name}.sha256"

    if not sha256_file.exists():
        print(f"Checksum file not found: {sha256_file}")
        return False

    all_ok = True
    for line in sha256_file.read_text().strip().splitlines():
        # sha256sum lines look like "<hash>  <filename>"; binary mode prefixes "*"
        expected_hash, filename = line.split(maxsplit=1)
        filepath = data_path / filename.strip().lstrip("*")

        if not filepath.exists():
            print(f"MISSING: {filename}")
            all_ok = False
            continue

        sha256 = hashlib.sha256()
        with open(filepath, "rb") as f:
            for chunk in iter(lambda: f.read(8192 * 1024), b""):
                sha256.update(chunk)

        if sha256.hexdigest() == expected_hash:
            print(f"OK: {filename}")
        else:
            print(f"FAILED: {filename}")
            all_ok = False

    return all_ok


# Verify a specific dataset
verify_dataset("./data", "arkitscenes_processed")

# Verify all datasets
datasets = [
    "arkitscenes_processed", "blendedmvs_processed", "co3d_processed",
    "habitat_processed", "megadepth_processed", "scannetpp_processed",
    "staticthings3d_processed", "waymo_processed", "wildrgbd_processed",
]
for name in datasets:
    print(f"\n=== {name} ===")
    verify_dataset("./data", name)
```

## Decompression

### Decompress a Single Dataset

```bash
# Merge split parts and extract
cat arkitscenes_processed.tar.part_* | tar xf -
```

### Decompress All Datasets

```bash
cd data

for name in arkitscenes blendedmvs co3d habitat megadepth scannetpp staticthings3d waymo wildrgbd; do
    echo "Extracting ${name}_processed ..."
    cat ${name}_processed.tar.part_* | tar xf -
done
```

### Decompress a Specific Dataset to a Custom Directory

```bash
cat arkitscenes_processed.tar.part_* | tar xf - -C /path/to/output/
```
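
Extraction temporarily needs roughly twice a dataset's size on disk, since the parts and the extracted files coexist. A sketch that verifies, extracts, and then deletes each dataset's parts one at a time to cap peak usage (deleting the parts is optional; keep them if you might re-extract later):

```bash
# Sketch: verify, extract, then remove parts, one dataset at a time
set -e  # stop on the first checksum or extraction failure
cd data
for name in arkitscenes blendedmvs co3d habitat megadepth scannetpp staticthings3d waymo wildrgbd; do
    ds="${name}_processed"
    sha256sum -c "${ds}.sha256"
    cat "${ds}".tar.part_* | tar xf -
    rm "${ds}".tar.part_*  # optional: free space before the next dataset
done
```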

After extraction, the directory structure will be:

```
data/
  arkitscenes_processed/
  blendedmvs_processed/
  co3d_processed/
  habitat_processed/
  megadepth_processed/
  scannetpp_processed/
  staticthings3d_processed/
  waymo_processed/
  wildrgbd_processed/
```
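
A quick sanity check that every expected directory is present (assumes extraction into `data/`):

```bash
# Confirm each expected dataset directory exists after extraction
for name in arkitscenes blendedmvs co3d habitat megadepth scannetpp staticthings3d waymo wildrgbd; do
    [ -d "data/${name}_processed" ] && echo "OK: ${name}_processed" || echo "MISSING: ${name}_processed"
done
```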

## License

- DUSt3R code: CC BY-NC-SA 4.0 (Naver Corporation)
- Each dataset retains its own license, as listed in the table above. Make sure you agree to each dataset's license before use.
- This data is provided for non-commercial research purposes only.

## Citation

```bibtex
@inproceedings{dust3r_cvpr24,
    title={DUSt3R: Geometric 3D Vision Made Easy},
    author={Shuzhe Wang and Vincent Leroy and Yohann Cabon and Boris Chidlovskii and Jerome Revaud},
    booktitle={CVPR},
    year={2024}
}
```

## Acknowledgements

This dataset collection is based on the preprocessing scripts and pair lists provided by the DUSt3R authors. All original datasets remain the property of their respective authors and institutions.