[BMVC2025] An Exploratory Study on Abstract Images and Visual Representations Learned from Them
HAID (Hierarchical Abstraction Image Dataset) is a collection of SVG images generated from existing raster datasets at multiple levels of abstraction (controlled by the number of geometric primitives). HAID is designed to enable systematic study of how abstraction and vectorized representations affect learned visual features and downstream vision tasks.
Highlights
- One-to-one correspondence between each SVG and its original raster image.
- Multiple abstraction levels (controlled by the number of shapes) per image, so you can study the effect of visual abstraction.
- Subsets derived from three standard datasets: MiniImageNet, Caltech-256, and CIFAR-10.
Contents & structure
The repository follows a clear, hierarchical layout.
Important: For HAID-MiniImageNet and HAID-Caltech-256, the next-level folders are class folders, and inside each class folder are the abstraction-level folders (e.g., 10_shapes_mode0, 30_shapes_mode0, ...), where the mode indicates the type of shapes used to generate the images.
Exception: HAID-CIFAR-10 uses train_cifar10/test_cifar10 as the next-level split (matching CIFAR), and inside train_cifar10/ and test_cifar10/ you will find abstraction-level folders.
HAID/
├── HAID-MiniImageNet/
│   ├── n01532829/
│   │   ├── 10_shapes_mode0/
│   │   │   ├── n01532829_28.svg
│   │   │   ├── n01532829_47.svg
│   │   │   └── ...
│   │   ├── 10_shapes_mode1/
│   │   └── ...
│   ├── n01558993/
│   │   ├── 10_shapes_mode0/
│   │   └── ...
│   └── ...
├── HAID-Caltech-256/
│   ├── 001.ak47/
│   │   ├── 10_shapes_mode0/
│   │   │   ├── 001_0001.svg
│   │   │   └── ...
│   │   ├── 10_shapes_mode1/
│   │   └── ...
│   └── ...
├── HAID-CIFAR-10/
│   ├── train_cifar10/
│   │   ├── 10_shapes_mode0/
│   │   │   ├── 010029.svg
│   │   │   └── ...
│   │   ├── 10_shapes_mode1/
│   │   └── ...
│   └── test_cifar10/
│       ├── 10_shapes_mode0/
│       └── ...
└── ...
Notes:
- Each X_shapes_modeY directory contains the SVG files for one class (or split) at that abstraction level.
- The datasets share the original datasets' configuration in two respects: each SVG keeps the original raster image ID (one-to-one mapping) and the original label/class.
- Folder names encode the generation setting (number of shapes and generation mode) and follow a consistent convention: {number of shapes}_shapes_mode{mode number}.
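For illustration, here is a minimal Python sketch (not part of the official tooling) that indexes a subset using the naming convention above. The root path and helper name are hypothetical, and note that for HAID-CIFAR-10 the first level is the train/test split rather than a class:

from pathlib import Path
import re

# Matches abstraction-level folders named {number of shapes}_shapes_mode{mode number}.
LEVEL_RE = re.compile(r"^(\d+)_shapes_mode(\d+)$")

def index_haid(subset_root):
    """Yield (group_name, n_shapes, mode, svg_path) for every SVG under subset_root."""
    for group_dir in sorted(Path(subset_root).iterdir()):  # class folder (or train/test split)
        if not group_dir.is_dir():
            continue
        for level_dir in sorted(group_dir.iterdir()):
            match = LEVEL_RE.match(level_dir.name)
            if match is None:
                continue
            n_shapes, mode = int(match.group(1)), int(match.group(2))
            for svg_path in sorted(level_dir.glob("*.svg")):
                yield group_dir.name, n_shapes, mode, svg_path

# Example: collect all 100-shape, mode-0 SVGs from HAID-MiniImageNet.
subset = [(cls, path) for cls, n, m, path in index_haid("HAID/HAID-MiniImageNet")
          if n == 100 and m == 0]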
Subset details
HAID-MiniImageNet
Derived from MiniImageNet. Contains 60,000 images across 100 classes. Abstraction levels: 10, 30, 50, 100, 500, and 1000 shapes (note that the 500- and 1000-shape levels are in a separate archive named HAID_MiniImageNet_500+1000); each level is available in modes 0, 1, 2, 5, 7, and 8.
HAID-Caltech-256
Derived from Caltech-256. Abstraction levels: 10, 30, 50, and 100 shapes; each level is available in modes 0, 1, 2, 5, 7, and 8.
HAID-CIFAR-10
Derived from CIFAR-10. Abstraction levels: 10, 30, 50, and 100 shapes. The directory first splits into train_cifar10/ and test_cifar10/ (matching the original CIFAR splits), and each of those contains the abstraction-level folders.
Dataset sizes:
- HAID-MiniImageNet: ~16 GB
- HAID-MiniImageNet_500+1000: ~64 GB
- HAID-Caltech-256: ~8.1 GB
- HAID-CIFAR-10: ~4 GB
How the SVGs were generated
HAID SVGs were generated with Primitive, a primitive-shape-based reconstruction tool. Primitive iteratively adds geometric primitives to a canvas to approximate the original raster image; the number of primitives controls the abstraction level, and the mode number selects the primitive type(s) used (for example, mode 0 uses all primitive types and mode 1 uses triangles only). See the paper and supplementary material for further generation details and algorithm settings.
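For a rough idea of the generation step (not the authors' exact pipeline), the snippet below calls the open-source Primitive CLI from Python. The -i, -o, -n, and -m flags are Primitive's documented options for input, output, shape count, and shape mode; the file names and helper are hypothetical, and HAID's exact settings are given in the paper's supplementary material:

import subprocess

def raster_to_svg(input_path, output_path, n_shapes=100, mode=0):
    # Requires the `primitive` binary (github.com/fogleman/primitive) on PATH.
    subprocess.run(
        ["primitive", "-i", input_path, "-o", output_path,
         "-n", str(n_shapes), "-m", str(mode)],
        check=True,
    )

raster_to_svg("bird.png", "bird_100_shapes_mode0.svg", n_shapes=100, mode=0)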
Usage
Fully zipped file
HAID-MiniImageNet.zip, HAID-MiniImageNet_500+1000.zip, HAID-caltech.zip, and HAID-cifar.zip contain the complete datasets, covering all abstraction levels of the primitive-based images. If you want to work with every abstraction level, simply download these zip files and extract them.
Zipped by level
We also provide a version of the dataset in which each abstraction level is zipped separately. To use it, download the folders named XXX_zipped_by_level. We also provide a script for extracting the images at a specific abstraction level; example usage is shown below:
pip install huggingface_hub
mkdir datasets
cd datasets
hf download --repo-type dataset Froink/HAID_zipped HAID-MiniImageNet_zipped_by_level --local-dir .
bash unzip_by_level.sh ./HAID-MiniImageNet_zipped_by_level --shapes 100 --mode 0 --dest ./HAID-MiniImageNet
Use --shapes and --mode to select the abstraction level you want to extract, and --dest to set the target path for the extracted images. For training and evaluating models on these images, please check our GitHub page.
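Because most vision models take raster inputs, the SVGs are typically rasterized before training. Below is a minimal sketch using the cairosvg library (one possible choice, not the official pipeline; the paths and 224x224 resolution are assumptions):

import cairosvg

# Rasterize one HAID SVG to a 224x224 PNG for model input.
cairosvg.svg2png(
    url="HAID-MiniImageNet/n01532829/100_shapes_mode0/n01532829_28.svg",
    write_to="n01532829_28.png",
    output_width=224,
    output_height=224,
)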
Recommended uses
- Study representation differences between raster and vector/abstract images.
- Pretraining / transfer-learning experiments (classification → segmentation / detection).
- Research on efficient visual encoding, SVG-aware models, or transmission-efficient learning.
- Human perception / psychophysics studies of abstraction vs. recognizability.
Key empirical findings (summary)
These are results from experiments using HAID (details in the associated paper):
Representation gap is driven by fine-grained details: as the number of primitives increases, learned representations from SVGs move steadily closer to raster-trained representations. In HAID-MiniImageNet, the 500–1000 shape levels produce embeddings that largely overlap with raster embeddings (near-parity in semantic content).
Downstream tasks: representations learned from abstract images can transfer to segmentation and detection. In some detection experiments (e.g., Faster R-CNN), pretraining on moderately abstract images (around 100 shapes) focused attention on object geometry and improved localization compared to random initialization; this effect weakens as abstraction increases further.
Human perceptual results: mean opinion scores (MOS) for recognizability increase monotonically with the number of primitives; "easy" images require fewer shapes for recognition than "hard" images.
(Please cite the dataset/paper when using HAID β see the Citation section below.)
Citation
If you use HAID in your research, please cite the paper that introduces the dataset:
@inproceedings{li2025explorative,
title={An Explorative Study on Abstract Images and Visual Representations Learned from Them},
author={Li, Haotian and Jiao, Jianbo},
booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025, Sheffield, UK, November 24-27, 2025},
publisher = {BMVA},
year = {2025}
}
Contact & project page
- Project page: https://fronik-lihaotian.github.io/HAID_page/
- For questions, bug reports, or collaborations, please open an issue in this repository or contact the authors listed in the paper.