---
license: mit
---
# HC-Bench
**HC-Bench** is a compact multi-part image benchmark for evaluating recognition and prompting robustness, especially in **hidden-content** scenes. It contains:
- **object/** — 56 base images and 56 *hidden* variants of the same lemmas, plus prompts and metadata.
- **text/** — 56 English and 56 Chinese lemma–description pairs (JSON), with 28 matching PNGs per language in `Latin/` and `Chinese/`.
- **wild/** — 53 in-the-wild images for additional generalization checks.
---
## Repository structure
```
HC-Bench/
├─ object/
│  ├─ base/                      # 56 base images (7 types × 8 lemmas)
│  ├─ hidden/                    # 56 hidden-content variants (same lemmas)
│  ├─ image_base.txt             # 7 types and their 8 lemmas each
│  ├─ image_generate_prompts.txt # per-lemma scene prompts used for generation
│  └─ lemmas_descriptions.json   # [{Type, Lemma, Description}] × 56
├─ text/
│  ├─ Latin/                     # 28 English PNGs
│  ├─ Chinese/                   # 28 Chinese PNGs
│  ├─ English_text.json          # 56 entries (Type, Length, Rarity, Lemma, Description)
│  └─ Chinese_text.json          # 56 entries (Type, Length, Rarity, Lemma, Description)
└─ wild/                         # 53 PNGs
```
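If you prefer working with the raw files locally rather than through 🤗 Datasets, a minimal sketch using `huggingface_hub` (the print is just to show where the snapshot lands):

```python
from huggingface_hub import snapshot_download

# Mirror the full dataset repo (object/, text/, wild/) into the local HF cache.
local_dir = snapshot_download(
    repo_id="JohnnyZeppelin/HC-Bench",
    repo_type="dataset",
)
print(local_dir)  # root directory containing object/, text/, wild/
```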
---
## Contents
### `object/`
- **`base/`**: Canonical image per lemma (e.g., `Apple.jpg`, `Einstein.png`).
- **`hidden/`**: Composite/camouflaged image for the *same* lemma set (e.g., `apple.png`, `einstein.png`).
- **`image_base.txt`**: The 7 high-level types and their 8 lemmas each (Humans, Species, Buildings, Cartoon, Furniture, Transports, Food).
- **`image_generate_prompts.txt`**: Per-lemma prompts used to compose/generate scenes (e.g., *“A monorail cutting through a futuristic city with elevated walkways”* for `notredame`).
- **`lemmas_descriptions.json`**: Minimal metadata with `{Type, Lemma, Description}` aligned 1:1 with the 56 lemmas (see the illustrative entry below).
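For orientation, each record in `lemmas_descriptions.json` has the following shape; the values in this sketch are hypothetical, not copied from the file:

```python
# Illustrative record shape (field names from the schema above; values hypothetical).
entry = {
    "Type": "Buildings",
    "Lemma": "notredame",
    "Description": "A Gothic cathedral facade with twin towers.",
}
```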
### `text/`
- **`Latin/`** & **`Chinese/`**: 28 images each (total 56).
- **`English_text.json`** & **`Chinese_text.json`**: 56-entry lists pairing lemmas to descriptions in two languages.
(Note: `English_text.json` and `Chinese_text.json` also carry the extra fields `Length` and `Rarity` alongside `Type`, `Lemma`, and `Description`.)
### `wild/`
- 53 natural/urban scenes for robustness and transfer evaluation.
---
## Quick start (🤗 Datasets)
> HC-Bench uses the 🤗 Datasets **ImageFolder** loader (`"imagefolder"`). Class labels are inferred from directory names when present (e.g., `base`, `hidden`); if you prefer raw images without labels, pass `drop_labels=True`.
### Load **object/base** and **object/hidden**
```python
from datasets import load_dataset

base = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/base/*",
    split="train",
    drop_labels=True,  # skip automatic label inference
)
hidden = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/hidden/*",
    split="train",
    drop_labels=True,
)
```
### Load **wild/**
```python
wild = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/wild/*",
    split="train",
    drop_labels=True,
)
```
### Load the **JSON** metadata (English/Chinese)
```python
from datasets import load_dataset

en = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/text/English_text.json",
    split="train",
)
zh = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/text/Chinese_text.json",
    split="train",
)
```
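Each record exposes the five fields listed above. A quick way to inspect them and tabulate the auxiliary `Length`/`Rarity` fields (whose value sets are not documented here):

```python
# Peek at one record: {'Type': ..., 'Length': ..., 'Rarity': ..., 'Lemma': ..., 'Description': ...}
print(en[0])

# Count the auxiliary fields without assuming what values they take.
df = en.to_pandas()
print(df["Length"].value_counts())
print(df["Rarity"].value_counts())
```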
> See the 🤗 Datasets documentation for `load_dataset` (JSON and remote files) and the ImageFolder builder for image datasets.
---
## Pairing base/hidden with metadata
Filenames differ in casing (and may contain spaces) between `base/` (`Apple.jpg`) and `hidden/` (`apple.png`). Use `object/lemmas_descriptions.json` as the canonical list of 56 lemmas, normalize filenames and the `Lemma` field the same way, and join:
```python
import os
import re

from datasets import load_dataset

def to_key(name):
    """Normalize a filename stem or a Lemma to a join key: lowercase, no whitespace."""
    stem = os.path.splitext(os.path.basename(name))[0]
    return re.sub(r"\s+", "", stem).lower()

# 1) Canonical lemma list
lemmas = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/lemmas_descriptions.json",
    split="train",
).to_pandas()

# 2) Build (key -> image) maps from the two image folders
base_ds = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/base/*",
    split="train",
    drop_labels=True,
)
hidden_ds = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/hidden/*",
    split="train",
    drop_labels=True,
)
base_map = {to_key(x["image"].filename): x["image"] for x in base_ds}
hidden_map = {to_key(x["image"].filename): x["image"] for x in hidden_ds}

# 3) Join, applying the same normalization to both sides
lemmas["base_image"] = lemmas["Lemma"].apply(lambda L: base_map.get(to_key(L)))
lemmas["hidden_image"] = lemmas["Lemma"].apply(lambda L: hidden_map.get(to_key(L)))
```
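A quick completeness check after the join; any leftover `None` means a filename did not normalize to its lemma:

```python
# Lemmas whose base or hidden image failed to pair up.
missing_base = lemmas[lemmas["base_image"].isna()]["Lemma"].tolist()
missing_hidden = lemmas[lemmas["hidden_image"].isna()]["Lemma"].tolist()
print("unmatched base:", missing_base)
print("unmatched hidden:", missing_hidden)
assert not missing_base and not missing_hidden, "some lemmas failed to pair"
```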
---
## Statistics
* `object/base`: 56 images
* `object/hidden`: 56 images
* `text/Latin`: 28 images
* `text/Chinese`: 28 images
* `wild`: 53 images
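
These counts can be verified programmatically, assuming the splits were loaded as in the Quick start above:

```python
# Sanity-check the advertised sizes (uses base/hidden/wild/en/zh from earlier snippets).
assert len(base) == 56
assert len(hidden) == 56
assert len(wild) == 53
assert len(en) == 56 and len(zh) == 56
```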
---
## Citation
If you use **HC-Bench**, please cite:
```bibtex
@misc{li2025semvinkadvancingvlmssemantic,
title={SemVink: Advancing VLMs' Semantic Understanding of Optical Illusions via Visual Global Thinking},
author={Sifan Li and Yujun Cai and Yiwei Wang},
year={2025},
eprint={2506.02803},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.02803},
}
```