# reLAIONet
reLAIONet is a manually proofread, web-sourced image classification benchmark aligned to ImageNet's label space. It is designed for out-of-distribution evaluation of class-conditional generative and discriminative models, providing a challenging complement to ImageNet val and ImageNetV2.
This dataset is introduced in:

*One-Step Diffusion Models Are Zero-Shot Generative Classifiers*, https://arxiv.org/abs/2603.14186
## Motivation
Standard ImageNet evaluation sets — ImageNet val and ImageNetV2 — share the same Flickr-dominated photographic distribution as ImageNet training data, limiting their ability to measure genuine generalization. One-step class-conditional models trained on ImageNet particularly suffer from this: all available benchmarks either overlap with training data or depart from ImageNet's integer class-ID conditioning.
reLAIONet fills this gap. By sourcing images from open web crawls rather than Flickr, it exhibits substantially different photographic style, context, and composition, making it a challenging out-of-distribution test bed — while retaining full compatibility with ImageNet's 1,000-class label-ID conditioning.
## Construction
reLAIONet is built by applying the LAIONet methodology to reLAION-400M, a re-crawled and deduplicated successor to LAION-400M with restored URL availability. The original LAION-400M is not used because many of its URLs are no longer accessible.
### Pipeline

1. **Synset matching:** All 48 reLAION-400M parquet files are scanned. Image-caption pairs are filtered for case-insensitive substring matches against WordNet lemmas unique to a single ImageNet synset; lemmas shared across multiple synsets are excluded to prevent ambiguous assignments.
2. **NSFW filtering:** Entries flagged as NSFW are removed.
3. **Multi-label filtering:** Captions matching more than one ImageNet class are discarded.
4. **CLIP similarity filtering:** Each caption is encoded with CLIP ViT-B/32. Only pairs whose cosine similarity to the synset description exceeds 0.82 (the threshold from the original LAIONet paper) are retained.
5. **Ranked download:** For each qualifying class, up to 70 images are downloaded in descending order of CLIP similarity. This step yields images for up to 997 of the 1,000 ImageNet classes.
6. **Manual proofreading:** All downloaded images are hand-reviewed to remove mislabeled, visually ambiguous, or low-quality samples. This is the key addition over the original LAIONet construction.
## Dataset Statistics
| Property | Value |
|---|---|
| Total images | 25,252 |
| ImageNet classes covered | 757 / 1000 |
| Images per class | 1–69 |
| Image source | Open web crawl (reLAION-400M) |
| Label format | ImageNet class index, WordNet ID (WNID), synset name |
| Curation | Manually proofread |
## Data Format

Images are organized as `images/<synset>/<synset>_XXXX.png`.
Each entry in `metadata_imagenet.json` maps a file path to its ImageNet label metadata:
```json
{
  "images/abacus/abacus_0002.png": {
    "imagenet_class_idx": 397,
    "wnid": "n02666196",
    "synset": "abacus"
  }
}
```
| Field | Type | Description |
|---|---|---|
| `imagenet_class_idx` | int | ImageNet class index (0–999) |
| `wnid` | string | WordNet synset ID (e.g. `n02666196`) |
| `synset` | string | Human-readable synset name |
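A minimal loader for this layout might look like the following sketch; the grouping helper and the inline metadata literal are illustrative, not part of the dataset's tooling.

```python
# Sketch: parse metadata_imagenet.json and group image paths by class index.
# The metadata shape follows the example above; group_by_class is an
# illustrative helper name, not an API shipped with the dataset.
import json
from collections import defaultdict

def group_by_class(metadata: dict) -> dict[int, list[str]]:
    """Map each ImageNet class index to the list of image paths carrying it."""
    by_class: dict[int, list[str]] = defaultdict(list)
    for path, info in metadata.items():
        by_class[info["imagenet_class_idx"]].append(path)
    return dict(by_class)

# In practice this would be json.load(open("metadata_imagenet.json")).
metadata = json.loads("""
{
  "images/abacus/abacus_0002.png": {
    "imagenet_class_idx": 397,
    "wnid": "n02666196",
    "synset": "abacus"
  }
}
""")
print(group_by_class(metadata))  # → {397: ['images/abacus/abacus_0002.png']}
```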
## Intended Use
reLAIONet is intended as an evaluation-only benchmark. Its primary use cases are:
- Out-of-distribution generalization assessment for models trained on ImageNet
- Class-conditional generative model evaluation (diffusion models, flow-matching models, GANs) that require integer class-ID conditioning
- Comparative benchmarking alongside ImageNet val and ImageNetV2 to separate in-distribution from out-of-distribution performance
## Relationship to Prior Work
| Dataset | Source | Distribution | Label space |
|---|---|---|---|
| ImageNet val | Flickr + web (curated) | In-distribution | ImageNet 1K |
| ImageNetV2 | Flickr | Near in-distribution | ImageNet 1K |
| reLAIONet | Open web crawl (reLAION-400M) | Out-of-distribution | ImageNet 1K |
reLAIONet is the only publicly available ImageNet-compatible evaluation set sourced entirely from open web crawls with manual proofreading.
## Limitations

- **Class coverage:** 243 ImageNet classes have no images, typically rare biological species or highly specialized objects that are too infrequent or ambiguous in web-crawl data to pass all filters.
- **Imbalanced class sizes:** Classes range from 1 to 69 images, depending on reLAION-400M availability and filter attrition.
- **Web-crawl biases:** As with any web-sourced dataset, reLAIONet inherits the biases present in reLAION-400M, including geographic and cultural skew in what is photographed and captioned online.
## Citation
If you use reLAIONet in your work, please cite:
```bibtex
@misc{ravishankar2026fairbenchmarkingemergingonestep,
  title={Fair Benchmarking of Emerging One-Step Generative Models Against Multistep Diffusion and Flow Models},
  author={Advaith Ravishankar and Serena Liu and Mingyang Wang and Todd Zhou and Jeffrey Zhou and Arnav Sharma and Ziling Hu and Léopold Das and Abdulaziz Sobirov and Faizaan Siddique and Freddy Yu and Seungjoo Baek and Yan Luo and Mengyu Wang},
  year={2026},
  eprint={2603.14186},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2603.14186},
}
```
Data sourced from reLAION-400M (Schuhmann et al.).