**HAID-CIFAR-10**

Derived from CIFAR-10. Abstraction levels: **10, 30, 50, 100** shapes. The directory first splits into `train_cifar10/` and `test_cifar10/` (matching the CIFAR-10 splits), and each of those contains the abstraction-level folders.
Dataset sizes:

- HAID-MiniImageNet: ~16 GB
- HAID-MiniImageNet_500+1000: ~64 GB
- HAID-caltech-256: ~8.1 GB
- HAID-cifar-10: ~4 GB
## How the SVGs were generated

HAID SVGs were generated with [**Primitive**](https://github.com/fogleman/primitive), a tool that reconstructs a raster image from simple geometric shapes. Primitive iteratively adds primitives to a canvas to approximate the original image; the number of primitives controls the abstraction level. Some experiments used two generation modes: (1) all primitive types (**mode 0**) and (2) triangles only (**mode 1**). See the paper and supplementary material for further generation details and algorithm settings.
## Usage

### Fully zipped files

`HAID-MiniImageNet.zip`, `HAID-MiniImageNet_500+1000.zip`, `HAID-caltech.zip`, and `HAID-cifar.zip` contain the complete datasets, including every abstraction level of the primitive-based images. If you want all abstraction levels, simply download these zip files and extract them.
### Zipped by level

We also provide a version of the dataset in which each abstraction level is zipped separately. To use it, download the folders named `XXX_zipped_by_level`. We also provide a script that extracts the images at a specific abstraction level. Example usage:

```bash
pip install huggingface_hub
mkdir datasets
cd datasets
hf download --repo-type dataset Froink/HAID_zipped HAID-MiniImageNet_zipped_by_level --local-dir .
bash unzip_by_level.sh ./HAID-MiniImageNet_zipped_by_level --shapes 100 --mode 0 --dest ./HAID-MiniImageNet
```
Use `--shapes` and `--mode` to select the abstraction level you want to extract, and `--dest` to set the target path for the extracted images. For training and evaluating models on these images, please see our [GitHub page](https://github.com/Fronik-Lihaotian/HAID).
---

## Recommended uses

- Study representation differences between raster and vector/abstract images.
- Pretraining / transfer-learning experiments (classification → segmentation / detection).
- Research on efficient visual encoding, SVG-aware models, or transmission-efficient learning.
- Human perception / psychophysics studies of abstraction vs. recognizability.

---
## Contact & project page

* Project page: [https://fronik-lihaotian.github.io/HAID_page/](https://fronik-lihaotian.github.io/HAID_page/)
* For questions, bug reports, or collaborations, please open an issue in this repository or contact the authors listed in the paper.