---
license: other
---

# Dataset Card for SmallNORB

## Dataset Description

The **SmallNORB dataset** is a **real-world stereo image dataset** designed for benchmarking algorithms in **disentangled representation learning** and **unsupervised representation learning**. It was introduced by **LeCun et al. (2004)** to evaluate **generic object recognition** with **invariance to pose and lighting**.

Unlike synthetic datasets such as **dSprites** or **MPI3D**, which are generated as a **complete Cartesian product of factors** (i.e., every possible combination is present), SmallNORB consists of **real photographs** of physical toy objects under controlled variations. **Not every combination of factors is present**: object instances are sampled randomly, and the views (azimuth, elevation, lighting) do not form an exact grid.

Each sample contains **two views**:

- **Left image** (96×96 grayscale)
- **Right image** (96×96 grayscale)

Each image pair is associated with **4 known factors of variation** and an **instance index**:

- **category** (object type)
- **instance** (specific object instance)
- **elevation** (camera tilt angle)
- **azimuth** (camera rotation angle)
- **lighting** (lighting condition)

The dataset allows researchers to evaluate **representation learning on real-world 3D objects** under complex lighting and pose variations. SmallNORB provides an **official train/test split**. Typically, **instance** is not treated as a factor of variation.

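
Because the official split separates object *instances* (each instance should appear in only one split), a quick sanity check is to count which instance indices occur in each split. Below is a minimal sketch; the toy lists stand in for real splits loaded with `load_dataset`, and it only assumes each example exposes an integer `instance` field as shown later in this card:

```python
from collections import Counter

def instances_per_split(split_examples):
    """Count how often each instance index appears in a split.

    `split_examples` is any iterable of dicts with an integer
    "instance" field (e.g. a loaded `datasets` split).
    """
    return Counter(ex["instance"] for ex in split_examples)

# Toy stand-in data (real splits come from `load_dataset`)
fake_train = [{"instance": i % 5 + 5} for i in range(20)]
fake_test = [{"instance": i % 5} for i in range(20)]

train_counts = instances_per_split(fake_train)
test_counts = instances_per_split(fake_test)

# Instance sets should be disjoint between train and test
assert set(train_counts) & set(test_counts) == set()
```
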
## Dataset Source

- **Homepage**: [https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/](https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/)
- **License**: other. SmallNORB is in the public domain, for research use.
- **Paper**: Yann LeCun et al. _Learning methods for generic object recognition with invariance to pose and lighting_. CVPR 2004.

## Dataset Structure

| Factor | Possible Classes (Indices) | Values |
|---|---|---|
| category | 0,...,4 | airplane=0, car=1, truck=2, human=3, animal=4 |
| instance | 0,...,9 | specific instance of the object |
| elevation | 0,...,8 | 9 elevation angles |
| azimuth | 0,...,17 | original even labels 0, 2, ..., 34, divided by 2 → 0-17 |
| lighting | 0,...,5 | 6 lighting conditions |

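
The integer indices above map back to physical camera settings: in the original NORB release, elevations run from 30° to 70° in 5° steps, and azimuths from 0° to 340° in 20° steps (the stored even labels 0, 2, ..., 34 are tens of degrees). A small helper sketch under those assumptions; the function names are illustrative and not part of the dataset API:

```python
def elevation_degrees(index: int) -> int:
    """Map elevation index 0-8 to degrees (30, 35, ..., 70)."""
    if not 0 <= index <= 8:
        raise ValueError("elevation index must be in 0-8")
    return 30 + 5 * index

def azimuth_degrees(index: int) -> int:
    """Map azimuth index 0-17 to degrees (0, 20, ..., 340).

    The index is the original even label (0, 2, ..., 34)
    divided by 2; each original unit is 10 degrees.
    """
    if not 0 <= index <= 17:
        raise ValueError("azimuth index must be in 0-17")
    return 20 * index

print(elevation_degrees(0), elevation_degrees(8))  # 30 70
print(azimuth_degrees(17))  # 340
```
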
**Note:** The dataset is not a complete Cartesian product: **instances and views are sampled** in the original design. Each sample contains a **left image** and a **right image**, both corresponding to the same factors.
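
When **instance** is dropped, as is typical in disentanglement benchmarks, the four remaining factors span a 5 × 9 × 18 × 6 grid (category × elevation × azimuth × lighting) that can be flattened into a single index, e.g. for building factor-wise evaluation tables. A minimal sketch; the factor ordering below is an assumption for illustration, not a convention defined by this dataset:

```python
# Factor sizes: category, elevation, azimuth, lighting
FACTOR_SIZES = (5, 9, 18, 6)

def to_flat_index(factors):
    """Flatten (category, elevation, azimuth, lighting) into one index."""
    flat = 0
    for value, size in zip(factors, FACTOR_SIZES):
        assert 0 <= value < size
        flat = flat * size + value
    return flat

def from_flat_index(flat):
    """Invert to_flat_index back to a factor tuple."""
    factors = []
    for size in reversed(FACTOR_SIZES):
        factors.append(flat % size)
        flat //= size
    return tuple(reversed(factors))

idx = to_flat_index((4, 8, 17, 5))  # last combination in the grid
assert idx == 5 * 9 * 18 * 6 - 1
assert from_flat_index(idx) == (4, 8, 17, 5)
```
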

## Example Usage

Below is a quick example of loading this dataset with the Hugging Face Datasets library:

```python
from datasets import load_dataset

# Load the train split
train_ds = load_dataset("randall-lab/small-norb", split="train", trust_remote_code=True)

# Load the test split
# test_ds = load_dataset("randall-lab/small-norb", split="test", trust_remote_code=True)

# Access a sample
example = train_ds[0]
left_image = example["left_image"]
right_image = example["right_image"]
label = example["label"]  # [category, elevation, azimuth, lighting]

# Label breakdown
category = example["category"]    # 0-4
instance = example["instance"]    # 0-9
elevation = example["elevation"]  # 0-8
azimuth = example["azimuth"]      # 0-17
lighting = example["lighting"]    # 0-5

# Visualize
left_image.show()
right_image.show()

print(f"Label (factors): {label}")
```
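
For training, the two views are often stacked into a single stereo array. A minimal sketch with NumPy, using dummy 96×96 PIL images in place of the `left_image`/`right_image` returned by the loading example above:

```python
import numpy as np
from PIL import Image

# Dummy grayscale views standing in for example["left_image"] / example["right_image"]
left_image = Image.new("L", (96, 96), color=0)
right_image = Image.new("L", (96, 96), color=255)

# Stack into a (2, 96, 96) float array scaled to [0, 1]
stereo = np.stack(
    [np.asarray(left_image), np.asarray(right_image)]
).astype(np.float32) / 255.0

print(stereo.shape)  # (2, 96, 96)
```
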

If you are running on Google Colab, update the `datasets` library first to avoid loading errors:

```bash
pip install -U datasets
```


## Citation

```bibtex
@inproceedings{lecun2004learning,
  title={Learning methods for generic object recognition with invariance to pose and lighting},
  author={LeCun, Yann and Huang, Fu Jie and Bottou, Leon},
  booktitle={Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004)},
  volume={2},
  pages={II--104},
  year={2004},
  organization={IEEE}
}
```