---
task_categories:
- image-classification
---

# [BMVC2025] An Exploratory Study on Abstract Images and Visual Representations Learned from Them

**HAID** (Hierarchical Abstraction Image Dataset) is a collection of SVG images generated from existing raster datasets at multiple levels of abstraction (controlled by the number of geometric primitives). HAID is designed to enable systematic study of how abstraction and vectorized representations affect learned visual features and downstream vision tasks.

---

## Highlights

- One-to-one correspondence between each SVG and its original raster image.
- Multiple abstraction levels (numbers of shapes) per image, enabling systematic study of the effect of visual abstraction.
- Subsets derived from three standard datasets: [MiniImageNet](https://www.kaggle.com/datasets/arjunashok33/miniimagenet/data), [Caltech-256](https://data.caltech.edu/records/nyy15-4j048), and [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html).

---

## Contents & structure

The repository follows a hierarchical layout.

**Important:** For HAID-MiniImageNet and HAID-Caltech-256, the next level of folders consists of *class folders*; inside each class folder are the abstraction-level folders (e.g., `10_shapes_mode0`, `30_shapes_mode0`, ...), where the mode indicates the type of shapes used to generate the images.

**Exception:** `HAID-CIFAR-10` is first split into `train_cifar10/` and `test_cifar10/` (matching the original CIFAR-10 split); the abstraction-level folders sit inside each split folder.

```
HAID/
├─ HAID-MiniImageNet/
│  ├─ n01532829/
│  │  ├─ 10_shapes_mode0/
│  │  │  ├─ n01532829_28.svg
│  │  │  ├─ n01532829_47.svg
│  │  │  └─ ...
│  │  ├─ 10_shapes_mode1/
│  │  └─ ...
│  ├─ n01558993/
│  │  ├─ 10_shapes_mode0/
│  │  └─ ...
│  └─ ...
├─ HAID-Caltech-256/
│  ├─ 001.ak47/
│  │  ├─ 10_shapes_mode0/
│  │  │  ├─ 001_0001.svg
│  │  │  └─ ...
│  │  ├─ 10_shapes_mode1/
│  │  └─ ...
│  └─ ...
└─ HAID-CIFAR-10/
   ├─ train_cifar10/
   │  ├─ 10_shapes_mode0/
   │  │  ├─ 010029.svg
   │  │  └─ ...
   │  ├─ 10_shapes_mode1/
   │  └─ ...
   └─ test_cifar10/
      ├─ 10_shapes_mode0/
      └─ ...
```

Notes:

Each `X_shapes_modeY` directory contains the SVG files for that class. Every subset keeps the same configuration as its original dataset in two respects:

- the original raster image ID (one-to-one mapping),
- the original label/class.

The folder names encode the generation setting:

- the number of shapes and the generation [mode](https://github.com/fogleman/primitive?tab=readme-ov-file#command-line-usage);
- naming follows the consistent convention `{number of shapes}_shapes_mode{mode number}`.

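The layout and naming convention above can be exercised with a short loader sketch. Everything below (`parse_level`, `collect_samples`) is our own illustrative naming rather than an official API, and it assumes the class-folder layout of HAID-MiniImageNet / HAID-Caltech-256:

```python
import re
from pathlib import Path

LEVEL_RE = re.compile(r"(\d+)_shapes_mode(\d+)")

def parse_level(name):
    """Parse '{N}_shapes_mode{M}' into (num_shapes, mode), or None if it doesn't match."""
    m = LEVEL_RE.fullmatch(name)
    return (int(m.group(1)), int(m.group(2))) if m else None

def collect_samples(root, num_shapes, mode):
    """Collect (svg_path, class_label) pairs for one abstraction level.

    Assumes root/<class>/<N>_shapes_mode<M>/*.svg, i.e. the
    HAID-MiniImageNet / HAID-Caltech-256 layout. (HAID-CIFAR-10 differs:
    its top level is train_cifar10/ and test_cifar10/, not class folders.)
    """
    samples = []
    for class_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        level_dir = class_dir / f"{num_shapes}_shapes_mode{mode}"
        if level_dir.is_dir():
            samples.extend((svg, class_dir.name) for svg in sorted(level_dir.glob("*.svg")))
    return samples
```

To train on a given abstraction level, point `root` at one subset folder and rasterize the collected SVGs with your renderer of choice.
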
---
## Subset details

**HAID-MiniImageNet**

Derived from MiniImageNet. Contains ~60,000 images across 100 classes. Abstraction levels: **10, 30, 50, 100, 500, 1000** shapes, each available in modes **0 and 1**.

**HAID-Caltech-256**

Derived from Caltech-256. Abstraction levels: **10, 30, 50, 100** shapes, each available in modes **0 and 1**.

**HAID-CIFAR-10**

Derived from CIFAR-10. Abstraction levels: **10, 30, 50, 100** shapes. The directory is first split into `train_cifar10/` and `test_cifar10/` (matching the original CIFAR-10 split), each containing abstraction-level folders.

---

## How the SVGs were generated

HAID SVGs were generated with [**Primitive**](https://github.com/fogleman/primitive), a tool that reconstructs a raster image by iteratively adding geometric primitives to a canvas; the number of primitives controls the abstraction level. Two generation modes are provided: **mode 0** uses all primitive types, and **mode 1** uses triangles only. See the paper and supplementary material for further generation details and algorithm settings.

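As a rough sanity check of an abstraction level, the primitives in a generated file can be counted by parsing the SVG XML. The snippet below is a sketch on a toy, hypothetical SVG (not an actual HAID file), and the set of shape tag names is an assumption about what the generator emits:

```python
import xml.etree.ElementTree as ET

SHAPE_TAGS = {"rect", "ellipse", "circle", "polygon", "path"}  # assumed primitive tags

def count_primitives(svg_text):
    """Count shape elements anywhere in a primitive-style SVG document."""
    root = ET.fromstring(svg_text)
    # Tags may carry an XML namespace prefix like '{http://www.w3.org/2000/svg}rect';
    # strip it before comparing.
    return sum(1 for el in root.iter() if el.tag.split("}")[-1] in SHAPE_TAGS)

toy = """<svg xmlns="http://www.w3.org/2000/svg" width="64" height="64">
  <rect width="64" height="64" fill="#888"/>
  <g>
    <polygon points="0,0 32,64 64,0" fill="#f00" fill-opacity="0.5"/>
    <ellipse cx="32" cy="32" rx="10" ry="6" fill="#0f0" fill-opacity="0.5"/>
  </g>
</svg>"""
print(count_primitives(toy))  # 3
```
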
---
## Recommended uses

- Studying representation differences between raster and vector/abstract images.
- Pretraining / transfer-learning experiments (classification → segmentation / detection).
- Research on efficient visual encoding, SVG-aware models, or transmission-efficient learning.
- Human perception / psychophysics studies of abstraction vs. recognizability.

---

## Key empirical findings (summary)

These results come from experiments using HAID (details in the associated paper):

* **The representation gap is driven by fine-grained details:** as the number of primitives increases, representations learned from SVGs move steadily closer to raster-trained representations. On HAID-MiniImageNet, the **500–1000 shape** levels produce embeddings that largely overlap with raster embeddings (near-parity in semantic content).

* **Downstream tasks:** representations learned from abstract images transfer to segmentation and detection. In some detection experiments (e.g., Faster R-CNN), pretraining on moderately abstract images (around 100 shapes) focused attention on object geometry and improved localization over random initialization; the effect weakens as abstraction increases further.

* **Human perceptual results:** mean opinion scores (MOS) for recognizability increase monotonically with the number of primitives; "easy" images require fewer shapes to be recognized than "hard" images.

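One standard way to quantify how close two sets of embeddings are is linear centered kernel alignment (CKA); the sketch below is a generic illustration, not necessarily the similarity measure used in the paper:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representation matrices of shape (n_samples, dim).
    Returns 1.0 when the two representations are identical."""
    X = X - X.mean(axis=0)  # center features
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
raster_emb = rng.normal(size=(256, 64))                   # stand-in for raster-trained embeddings
svg_emb = raster_emb + 0.1 * rng.normal(size=(256, 64))   # stand-in for high-shape-count SVG embeddings
print(round(linear_cka(raster_emb, raster_emb), 3))       # 1.0
print(linear_cka(raster_emb, svg_emb))                    # close to 1 for near-identical embeddings
```
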
*(Please cite the dataset/paper when using HAID — see the Citation section below.)*

---
## Citation

If you use HAID in your research, please cite the paper that introduces the dataset:

```bibtex
@inproceedings{li2025explorative,
  title     = {An Explorative Study on Abstract Images and Visual Representations Learned from Them},
  author    = {Li, Haotian and Jiao, Jianbo},
  booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025, Sheffield, UK, November 24-27, 2025},
  publisher = {BMVA},
  year      = {2025}
}
```

---

## Contact & project page

* Project page: `https://fronik-lihaotian.github.io/HAID_page/`
* For questions, bug reports, or collaborations, please open an issue in this repository or contact the authors listed in the paper.