# ImageNet-1k-256
A preprocessed dataset of ImageNet-1k for image generation.
- Each image is resized to 256x256.
- Resized images are encoded by stabilityai/sd-vae-ft-mse to produce latents.
The scripts used for data preprocessing are from REPA & EDM.
The commands used to generate this dataset:
```shell
python dataset_tools.py convert --source=/path/to/imagenet/train --dest=/path/to/imagenet_256/images.zip --resolution=256x256 --transform=center-crop-dhariwal
python dataset_tools.py encode --source=/path/to/imagenet_256/images.zip --dest=/path/to/imagenet_256/vae-sd.zip

# Chunk the .zip files into multiple smaller segments
split -b 40G --numeric-suffixes=1 --suffix-length=3 images.zip images.zip.chunk_
split -b 20G --numeric-suffixes=1 --suffix-length=3 vae-sd.zip vae-sd.zip.chunk_
```
The original ImageNet-1k dataset is expected to be stored in /path/to/imagenet/train, with the following file structure:
```
/path/to/imagenet/train
+ ...
+ n12768682/
  + n12768682_9750.JPEG
  + n12768682_9768.JPEG
  + ...
+ n12985857/
+ n12998815/
+ ...
+ n15075141/
```
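As a quick sanity check before preprocessing, the tree above can be verified by counting the synset directories (ImageNet-1k has exactly 1,000 classes). A minimal sketch; the helper name is illustrative:

```python
import os

def count_synset_dirs(train_dir):
    """Count class (synset) directories like 'n12768682' under the train split."""
    return sum(
        1
        for name in os.listdir(train_dir)
        if name.startswith('n') and os.path.isdir(os.path.join(train_dir, name))
    )

# For a complete ImageNet-1k train split this should return 1000:
# count_synset_dirs('/path/to/imagenet/train')
```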
## How to use
- Download the dataset:

```shell
huggingface-cli download --local-dir /path/to/imagenet_256 --repo-type dataset Yuehao/imagenet-1k-256
```
- Join the chunks back into .zip files:

```shell
cat images.zip.chunk_* > images.zip
cat vae-sd.zip.chunk_* > vae-sd.zip
```
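If `cat` is unavailable (e.g. on Windows), the chunks can also be joined in Python. A minimal sketch, assuming chunk files named as produced by the `split` commands above; the helper name is illustrative:

```python
import glob
import shutil

def join_chunks(pattern, dest):
    """Concatenate chunk files (sorted by numeric suffix) into one file."""
    with open(dest, 'wb') as out:
        for chunk in sorted(glob.glob(pattern)):
            with open(chunk, 'rb') as f:
                shutil.copyfileobj(f, out)

# join_chunks('images.zip.chunk_*', 'images.zip')
# join_chunks('vae-sd.zip.chunk_*', 'vae-sd.zip')
```

Sorting the matched names reproduces the order of `split`'s numeric suffixes (`chunk_001`, `chunk_002`, ...), which all have the same length.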
- Data loading example:
```python
'''
reference:
https://github.com/NVlabs/edm2/blob/main/training/dataset.py#L165
https://github.com/sihyun-yu/REPA/blob/main/dataset.py#L17
'''
import json
import os
import zipfile

import numpy as np
import PIL.Image

data_dir = '/path/to/imagenet_256'
images_path = os.path.join(data_dir, 'images.zip')
features_path = os.path.join(data_dir, 'vae-sd.zip')
images_zipfile = zipfile.ZipFile(images_path)
features_zipfile = zipfile.ZipFile(features_path)

# List archive contents, skipping the metadata file (dataset.json)
image_fnames = sorted(fname for fname in images_zipfile.namelist() if not fname.endswith('.json'))
feature_fnames = sorted(fname for fname in features_zipfile.namelist() if fname.endswith('.npy'))

# Load labels: dataset.json maps each file name to its class index
with features_zipfile.open('dataset.json', 'r') as f:
    labels = json.load(f)['labels']
labels = dict(labels)
labels = [labels[fname.replace('\\', '/')] for fname in feature_fnames]
labels = np.array(labels)
labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim])

# Load one sample
idx = 0
image_fname = image_fnames[idx]
feature_fname = feature_fnames[idx]
with images_zipfile.open(image_fname, 'r') as f:
    image = np.array(PIL.Image.open(f))
feature = np.load(features_zipfile.open(feature_fname, 'r'))

# RGB image, image latent, image class: (image, feature, labels[idx])
```
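For training, the loading logic above can be wrapped in a map-style dataset (the `__len__`/`__getitem__` protocol that e.g. PyTorch's `DataLoader` consumes). A minimal sketch assuming the same archive layout; the class name and the `.json`/`.npy` filtering convention are illustrative:

```python
import json
import os
import zipfile

import numpy as np
import PIL.Image

class ImageNet256:
    """Map-style dataset over images.zip / vae-sd.zip (illustrative sketch)."""

    def __init__(self, data_dir):
        self.images_zip = zipfile.ZipFile(os.path.join(data_dir, 'images.zip'))
        self.features_zip = zipfile.ZipFile(os.path.join(data_dir, 'vae-sd.zip'))
        # Skip the dataset.json metadata entry in each archive.
        self.image_fnames = sorted(
            f for f in self.images_zip.namelist() if not f.endswith('.json'))
        self.feature_fnames = sorted(
            f for f in self.features_zip.namelist() if f.endswith('.npy'))
        with self.features_zip.open('dataset.json', 'r') as f:
            labels = dict(json.load(f)['labels'])
        self.labels = np.array(
            [labels[f.replace('\\', '/')] for f in self.feature_fnames])

    def __len__(self):
        return len(self.feature_fnames)

    def __getitem__(self, idx):
        with self.images_zip.open(self.image_fnames[idx], 'r') as f:
            image = np.array(PIL.Image.open(f))
        feature = np.load(self.features_zip.open(self.feature_fnames[idx], 'r'))
        return image, feature, self.labels[idx]
```

Note that `zipfile.ZipFile` handles are not safe to share across worker processes; with multi-worker data loading, each worker should open its own handles (e.g. lazily on first access).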