dREPA Collections
Pre-processed datasets, pretrained encoder weights, and evaluation resources for training and evaluating dREPA (SiT / JiT / MMDiT architectures).
Repository Structure
.
├── data/
│   ├── coco256_features_sdvae_ft_ema/            # MMDiT: MSCOCO 256 (SD-VAE latents + CLIP/T5 embeddings)
│   │   ├── empty_context.npy                     # empty text token for CFG
│   │   ├── train_part00.tar ... train_partNN.tar # train split (~82K samples, tar archives)
│   │   └── val_part00.tar ... val_partNN.tar     # val split (~40K samples, tar archives)
│   ├── imagenet256/                              # SiT-256: ImageNet (HuggingFace Arrow format)
│   │   ├── imagenet-latents-sdvae-ft-mse-f8d4/   # VAE latents (87 arrow files)
│   │   └── imagenet-latents-images/              # raw images (505 arrow files)
│   └── imagenet512/                              # SiT-512: ImageNet (HuggingFace Arrow format)
│       ├── imagenet-latents-sdvae-ft-mse-f8d4/   # VAE latents (342 arrow files)
│       └── imagenet-latents-images/              # raw images (988 arrow files)
│
├── pretrained_models/                            # encoder & VAE weights
│   ├── dinov2_vitb14_pretrain.pth
│   ├── dinov3_vit{s16,s16plus,b16,l16,h16plus,7b16}_pretrain_lvd1689m-*.pth
│   ├── mocov3_vit{b,l}.pth
│   ├── mae_vitl.pth
│   ├── ijepa_vith.pth
│   ├── sdvae-ft-mse-f8d4.pt                      # SD-VAE F8D4 decoder weights
│   ├── sdvae-ft-mse-f8d4-latents-stats.pt
│   ├── sd-vae-ft-ema/                            # SD-VAE-FT-EMA (for MMDiT)
│   ├── weights-inception-2015-12-05-6726825d.pth # InceptionV3 for FID
│   └── dinov2/                                   # DINOv2 torch.hub code
│
├── dinov3/                                       # DINOv3 torch.hub code (for torch.hub.load source='local')
│
├── metrics/                                      # spatial metrics evaluation
│   ├── spatial_metrics.py
│   ├── data.tar                                  # metrics data (~2 GB, tar archive)
│   └── ...
│
└── eval_references/                              # FID reference statistics
    ├── VIRTUAL_imagenet256_labeled.npz
    └── VIRTUAL_imagenet512.npz
Quick Start
1. Download
# Install huggingface_hub
pip install huggingface_hub
# Download everything
huggingface-cli download AIPeanutman/dREPA_collections --repo-type dataset --local-dir dREPA_collections
# Or download specific groups
huggingface-cli download AIPeanutman/dREPA_collections --repo-type dataset --local-dir dREPA_collections --include "pretrained_models/*"
huggingface-cli download AIPeanutman/dREPA_collections --repo-type dataset --local-dir dREPA_collections --include "data/coco256_features_sdvae_ft_ema/*"
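The same selective download can be scripted from Python with huggingface_hub's snapshot_download (same repo id and patterns as the CLI commands above):

```python
from huggingface_hub import snapshot_download

# Download only the pretrained encoder weights; adjust or drop
# allow_patterns to fetch other groups or the whole repository.
snapshot_download(
    repo_id="AIPeanutman/dREPA_collections",
    repo_type="dataset",
    local_dir="dREPA_collections",
    allow_patterns=["pretrained_models/*"],
)
```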
2. Unpack tar archives
The COCO256 and metrics data are packed as .tar archives (split into parts) to avoid pushing hundreds of thousands of small files to HuggingFace. You must unpack them after downloading.
cd dREPA_collections
# --- Unpack COCO256 train ---
mkdir -p data/coco256_features_sdvae_ft_ema/train
for f in data/coco256_features_sdvae_ft_ema/train*.tar; do
tar xf "$f" -C data/coco256_features_sdvae_ft_ema/train/
done
# --- Unpack COCO256 val ---
mkdir -p data/coco256_features_sdvae_ft_ema/val
for f in data/coco256_features_sdvae_ft_ema/val*.tar; do
tar xf "$f" -C data/coco256_features_sdvae_ft_ema/val/
done
# --- Unpack metrics data ---
mkdir -p metrics/data
tar xf metrics/data.tar -C metrics/data/
# --- (Optional) Remove tar files after unpacking ---
rm -f data/coco256_features_sdvae_ft_ema/train*.tar
rm -f data/coco256_features_sdvae_ft_ema/val*.tar
rm -f metrics/data.tar
Or use the one-liner:
cd dREPA_collections && bash unpack.sh
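If a POSIX shell is not available, the same unpacking can be done with Python's standard tarfile module; a minimal sketch mirroring the commands above:

```python
import tarfile
from pathlib import Path

root = Path("dREPA_collections")

def unpack(pattern: str, dest: Path) -> None:
    """Extract every tar part matching `pattern` (relative to root) into `dest`."""
    dest.mkdir(parents=True, exist_ok=True)
    for part in sorted(root.glob(pattern)):
        with tarfile.open(part) as tf:
            tf.extractall(dest)

unpack("data/coco256_features_sdvae_ft_ema/train*.tar",
       root / "data/coco256_features_sdvae_ft_ema/train")
unpack("data/coco256_features_sdvae_ft_ema/val*.tar",
       root / "data/coco256_features_sdvae_ft_ema/val")
unpack("metrics/data.tar", root / "metrics/data")
```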
3. Verify
After unpacking, the directory should look like:
data/coco256_features_sdvae_ft_ema/
├── empty_context.npy
├── train/
│   ├── 0.png, 0.npy, 0_0.npy, 0_1.npy, ...   # 82,783 samples
│   └── ...
└── val/
    ├── 0.png, 0.npy, 0_0.npy, 0_1.npy, ...   # 40,504 samples
    └── ...
Each sample consists of:
- {idx}.png: original image (256x256)
- {idx}.npy: SD-VAE latent features
- {idx}_{k}.npy: text embeddings (k = 0..4, multiple captions per image)
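A single unpacked sample can be sanity-checked with numpy and Pillow; a small sketch assuming the layout above (array shapes depend on the preprocessing, so they are simply printed):

```python
import numpy as np
from PIL import Image

root = "dREPA_collections/data/coco256_features_sdvae_ft_ema/train"
idx = 0

image = Image.open(f"{root}/{idx}.png")      # original 256x256 image
latent = np.load(f"{root}/{idx}.npy")        # SD-VAE latent features
text_emb = np.load(f"{root}/{idx}_0.npy")    # embedding of the first caption (k = 0)

print(image.size, latent.shape, text_emb.shape)
```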
4. Link to dREPA project
# Assuming dREPA repo is at ./dREPA
ln -s $(pwd)/dREPA_collections/pretrained_models/* dREPA/pretrained_models/
ln -s $(pwd)/dREPA_collections/dinov3 dREPA/dinov3
ln -s $(pwd)/dREPA_collections/eval_references/VIRTUAL_imagenet256_labeled.npz dREPA/
# Set data dir in training scripts
export DATA_DIR=$(pwd)/dREPA_collections/data/coco256_features_sdvae_ft_ema # for MMDiT
export DATA_DIR=$(pwd)/dREPA_collections/data/imagenet256 # for SiT-256
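The linking step can also be scripted; a rough pathlib sketch under the same assumption that the dREPA repo lives at ./dREPA:

```python
import os
from pathlib import Path

collections = Path("dREPA_collections").resolve()
drepa = Path("dREPA")

# Symlink every pretrained model file into the dREPA repo.
(drepa / "pretrained_models").mkdir(parents=True, exist_ok=True)
for src in (collections / "pretrained_models").iterdir():
    dst = drepa / "pretrained_models" / src.name
    if not dst.exists():
        dst.symlink_to(src)

# DATA_DIR for training scripts launched from this process (MMDiT data here).
os.environ["DATA_DIR"] = str(collections / "data/coco256_features_sdvae_ft_ema")
```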
Data Formats
| Dataset | Format | Used by | Size (approx) |
|---|---|---|---|
| coco256_features_sdvae_ft_ema | .png + .npy | MMDiT (train_mmdit.py) | ~50-80 GB |
| imagenet256 | HF Arrow | SiT-256 (train_sit.py) | ~150 GB |
| imagenet512 | HF Arrow | SiT-512 (train_sit.py) | ~620 GB |
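The imagenet256/512 directories are HuggingFace Arrow datasets. Assuming each subdirectory was written with datasets.save_to_disk, it loads directly; a minimal sketch:

```python
from datasets import load_from_disk

# Load the SiT-256 VAE latents (87 Arrow shards); adapt the path for
# imagenet512 or the raw-image directories.
latents = load_from_disk(
    "dREPA_collections/data/imagenet256/imagenet-latents-sdvae-ft-mse-f8d4"
)
print(latents)  # prints the number of rows and the column names
```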
Pretrained Encoder Weights
| File | Model | Size |
|---|---|---|
| dinov2_vitb14_pretrain.pth | DINOv2 ViT-B/14 | 331 MB |
| dinov3_vitb16_pretrain_lvd1689m-*.pth | DINOv3 ViT-B/16 | 327 MB |
| dinov3_vitl16_pretrain_lvd1689m-*.pth | DINOv3 ViT-L/16 | 1.1 GB |
| dinov3_vith16plus_pretrain_lvd1689m-*.pth | DINOv3 ViT-H/16+ | 3.1 GB |
| dinov3_vit7b16_pretrain_lvd1689m-*.pth | DINOv3 ViT-7B/16 | 25 GB |
| mocov3_vitb.pth | MoCo v3 ViT-B | 823 MB |
| mocov3_vitl.pth | MoCo v3 ViT-L | 2.4 GB |
| mae_vitl.pth | MAE ViT-L | 1.1 GB |
| ijepa_vith.pth | I-JEPA ViT-H | 9.6 GB |
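All checkpoints are ordinary PyTorch weight files. For DINOv3, the vendored dinov3/ directory is meant to be used with torch.hub.load(source='local') (see the repository tree). A rough sketch, where the entry-point name and the weights argument are assumptions based on the upstream DINOv3 hubconf:

```python
import glob
import torch

# Locate the DINOv3 ViT-B/16 checkpoint (the file name carries a hash suffix).
ckpt = glob.glob(
    "dREPA_collections/pretrained_models/dinov3_vitb16_pretrain_lvd1689m-*.pth"
)[0]

# Build the encoder from the vendored hub code and load the local weights.
encoder = torch.hub.load(
    "dREPA_collections/dinov3",   # local torch.hub repo
    "dinov3_vitb16",              # entry point (assumed from the upstream hubconf)
    source="local",
    weights=ckpt,
)
encoder.eval()
```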
License
Please respect the original licenses of the pretrained models and datasets:
- DINOv2: Meta Platforms (Apache 2.0)
- DINOv3: Meta Platforms (DINOv3 License)
- MoCo v3, MAE, I-JEPA: Meta Platforms
- ImageNet: Academic use only (ILSVRC license)
- MSCOCO: Creative Commons Attribution 4.0