---
language:
- en
license: apache-2.0
task_categories:
- visual-question-answering
- image-classification
- image-to-text
pretty_name: FineSightBench-Large
size_categories:
- 10K<n<100K
tags:
- VLM-evaluation
- fine-grained-visual-perception
- fine-grained-visual-reasoning
- text-in-the-wild
- scene-text-recognition
dataset_info:
  features:
  - name: image
    dtype: image
  - name: image_id
    dtype: string
  - name: task_type
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: difficulty
    dtype: string
  - name: metadata
    dtype: string
  splits:
  - name: perception
    num_bytes: 2269804611
    num_examples: 42000
  - name: reasoning
    num_bytes: 4913242781
    num_examples: 39200
  download_size: 7117057625
  dataset_size: 7183047392
configs:
- config_name: default
  data_files:
  - split: perception
    path: data/perception-*
  - split: reasoning
    path: data/reasoning-*
---

# FineSightBench-Large

**FineSightBench-Large** is a **10× scaled** edition of [FineSightBench](https://huggingface.co/datasets/Volavion/FineSightBench) — identical task design, difficulty sweep, answer schemas, and image regimes, with every base sample count multiplied by ten for higher statistical power and more robust per-(task, size, count) evaluation.

**FineSightBench** is a fine-grained visual benchmark for evaluating Vision-Language Models (VLMs) on pixel-level perception and reasoning tasks. It combines two complementary image regimes:

1. **Synthetic canvas** — controlled white-background images with precisely-sized geometric/semantic targets (letters, animals, shapes, blocks, dots).
2. **Text in the wild** (SynthText-style) — English words rendered onto real natural-scene photographs from the [SynthText](https://github.com/ankush-me/SynthText) `bg_img` set, with **pixel-accurate control of character cap-height**.

All images are **448 × 448 px**. The primary difficulty axis is the **target pixel size** (cap-height for text), swept over `[4, 8, 12, 16, 24, 32, 48]` and mapped to `extreme / hard / medium / easy`.
## Dataset Summary

| Split | #Samples | #Task types | Regimes |
|-------|---------:|:-----------:|---------|
| `perception` | 42 000 | 6 | synthetic canvas + text-in-the-wild |
| `reasoning` | 39 200 | 6 | synthetic canvas + text-in-the-wild |
## Dataset Structure

### `perception` split — 42 000 samples

Single-target identification tasks: 7 000 samples per task (1 000 samples per pixel size × 7 sizes).

| `task_type` | Description | Source |
|-------------|-------------|--------|
| `letter_recognition` | Identify a rendered uppercase letter (A–Z) | synthetic canvas |
| `animal_recognition` | Identify an animal silhouette (cat/dog/fish/bird/rabbit/turtle) | synthetic canvas |
| `shape_recognition` | Identify a geometric shape (circle/triangle/square/star/diamond/pentagon/hexagon/cross) | synthetic canvas |
| `block_recognition` | Detect / count square blocks | synthetic canvas |
| `color_block_recognition` | Identify the color of a block | synthetic canvas |
| `text_recognition` | Read a single English word overlaid on a natural scene | **text in the wild** |
### `reasoning` split — 39 200 samples

Chain-reasoning tasks requiring counting, ordering, and spatial reasoning across multiple targets.

| `task_type` | Description | Source |
|-------------|-------------|--------|
| `spatial_chain` | List all objects left→right or top→bottom | synthetic canvas |
| `comparison_chain` | List all objects smallest→largest by size | synthetic canvas |
| `counting_chain` | Count objects per type + total | synthetic canvas |
| `blur_chain` | Count objects on a blurred/textured background | synthetic canvas |
| `text_reading_chain` | Read multiple overlaid words in left→right / top→bottom order | **text in the wild** |
| `text_counting_chain` | Total word count + # words containing a queried letter | **text in the wild** |
### Difficulty levels

| Difficulty | Target / cap-height |
|------------|---------------------|
| `extreme` | ≤ 5 px |
| `hard` | 6–12 px |
| `medium` | 13–24 px |
| `easy` | 25–48 px |
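
The size-to-band mapping is simple enough to inline when bucketing your own results. A minimal sketch (the helper name `difficulty_for_size` is ours, not part of the dataset):

```python
# Sketch: map a target pixel size (cap-height for text) to the difficulty
# band from the table above. `difficulty_for_size` is a hypothetical helper,
# not something shipped with the dataset.
def difficulty_for_size(px: int) -> str:
    if px <= 5:
        return "extreme"
    if px <= 12:
        return "hard"
    if px <= 24:
        return "medium"
    return "easy"

# The swept sizes land in the four bands as:
# extreme: 4 | hard: 8, 12 | medium: 16, 24 | easy: 32, 48
assert [difficulty_for_size(s) for s in (4, 8, 12, 16, 24, 32, 48)] == \
    ["extreme", "hard", "hard", "medium", "medium", "easy", "easy"]
```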
## Fields

| Field | Type | Description |
|-------|------|-------------|
| `image` | Image | 448×448 PNG |
| `image_id` | string | Unique identifier (encodes task, size, count) |
| `task_type` | string | See tables above |
| `question` | string | Prompt for the VLM (asks for a structured JSON answer) |
| `answer` | string | Ground-truth answer, JSON-encoded (see below) |
| `difficulty` | string | `easy` / `medium` / `hard` / `extreme` |
| `metadata` | string | JSON with canvas size, target pixel size, positions, colors, bounding boxes, sub-answers, etc. |
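
Because `answer` and `metadata` are stored as JSON strings rather than nested features, decode them with `json.loads` before use. A minimal sketch; the exact keys inside `metadata` vary by task, so only the fields from the table above are assumed:

```python
import json

from datasets import load_dataset

ds = load_dataset("Volavion/FineSightBench-Large", split="perception")
sample = ds[0]

answer = json.loads(sample["answer"])      # e.g. {"letter": "A"} for letter_recognition
metadata = json.loads(sample["metadata"])  # task-dependent keys: target size, boxes, ...

print(sample["task_type"], sample["difficulty"], answer)
```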
### Answer schemas (examples)

| Task | Answer JSON |
|------|-------------|
| `letter_recognition` | `{"letter": "H"}` |
| `animal_recognition` | `{"animal": "rabbit"}` |
| `shape_recognition` | `{"shape": "triangle"}` |
| `color_block_recognition` | `{"color": "blue"}` |
| `text_recognition` | `{"word": "HOME"}` |
| `spatial_chain` | `{"objects": ["red A", "blue K", ...]}` |
| `comparison_chain` | `{"objects": ["blue dog", "magenta bird"]}` |
| `counting_chain` | `{"counts": {"red": 2, "blue": 1}, "total": 3}` |
| `blur_chain` | `{"counts": {"circle": 1, "square": 2}, "total": 3}` |
| `text_reading_chain` | `{"words": ["HOME", "CITY", "EXIT"]}` |
| `text_counting_chain` | `{"total": 6, "with_letter": 3}` |
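
One straightforward way to score a model against these schemas is strict equality of the parsed JSON. The sketch below is our illustration of that rule, not an official metric shipped with the benchmark:

```python
import json

def score_exact(prediction_json: str, answer_json: str) -> bool:
    """Strict-match sketch: parse both sides and compare the decoded objects,
    so key order and whitespace do not matter. An unparseable prediction
    counts as wrong. Illustrative only, not the benchmark's official metric."""
    try:
        pred = json.loads(prediction_json)
    except json.JSONDecodeError:
        return False
    return pred == json.loads(answer_json)

# Key order and whitespace differences do not affect the match:
assert score_exact('{"counts": {"red": 2, "blue": 1}, "total": 3}',
                   '{"total": 3, "counts": {"blue": 1, "red": 2}}')
```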
## Usage

```python
from datasets import load_dataset

ds = load_dataset("Volavion/FineSightBench-Large")
print(ds)
# DatasetDict({
#     perception: Dataset({features: [...], num_rows: 42000}),
#     reasoning: Dataset({features: [...], num_rows: 39200})
# })

sample = ds["perception"][0]
sample["image"].show()
print(sample["question"])
print(sample["answer"])  # JSON string, e.g. '{"letter": "A"}'
```
Filter by task or difficulty:

```python
text_subset = ds["perception"].filter(lambda x: x["task_type"] == "text_recognition")
extreme = ds["perception"].filter(lambda x: x["difficulty"] == "extreme")
```
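
The full download is around 7 GB; for a quick look you can stream instead of materializing the archive first (assumes a reasonably recent `datasets` release with `streaming=True` and `IterableDataset.take`):

```python
from datasets import load_dataset

# Stream samples on the fly instead of downloading the full ~7 GB up front.
stream = load_dataset("Volavion/FineSightBench-Large", split="perception", streaming=True)

for sample in stream.take(3):
    print(sample["image_id"], sample["task_type"], sample["difficulty"])
```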
## Design Philosophy

* **Pixel size is the primary difficulty axis.** Targets (objects or characters) are rendered at exact cap-heights across `[4, 8, 12, 16, 24, 32, 48]` px so that the same semantic task can be probed from *easily readable* to *near-imperceptible* scales on a single fixed 448×448 canvas.
* **Controlled composition.** Every sample exposes pixel-precise target positions, bounding boxes, colors (with RGB), and sub-answers in `metadata`, enabling per-task, per-size, per-color, and positional analyses.
* **Two image regimes.** The synthetic canvas removes distribution confounders, while the SynthText-style text-in-the-wild regime stresses models with the same text task on varied, real photographs.
## Generation

Generated with the [FineSightBench repository](https://github.com/Volavion/FineSightBench):

```bash
# 10× base counts (perception: --num-per-config 1000, reasoning: N_PER_CONFIG=200)
python scripts/generate_large_dataset.py  # FSB_LARGE_SCALE=10 by default
```

**Text-in-the-wild backgrounds**: the first ~1 500 JPEGs from the SynthText `bg_img.tar.gz` set ([mirror](https://thor.robots.ox.ac.uk/scenetext/preproc/bg_img.tar.gz)) are center-cropped and resized to 448×448. Text glyphs use system sans-serif fonts; cap-height is calibrated per render to match the requested pixel size exactly.
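
For reference, the background preparation described above amounts to a square center-crop followed by a resize. The PIL sketch below reconstructs that step from the prose and may differ from the generator's actual code:

```python
from PIL import Image

def prepare_background(path: str, size: int = 448) -> Image.Image:
    """Sketch of the preprocessing described above: square center-crop,
    then resize to size×size. Reconstructed from the prose; the generator's
    actual implementation may differ."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    return img.crop((left, top, left + side, top + side)).resize((size, size), Image.LANCZOS)
```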
## Citation

If you use FineSightBench, please cite the repository and the SynthText background source:

```bibtex
@misc{finesightbench_large2026,
  title = {FineSightBench-Large: 10$\times$ Scaled Fine-grained Visual Perception \& Reasoning Benchmark for VLMs},
  year  = {2026},
  url   = {https://huggingface.co/datasets/Volavion/FineSightBench-Large}
}

@inproceedings{Gupta16,
  author    = {A. Gupta and A. Vedaldi and A. Zisserman},
  title     = {Synthetic Data for Text Localisation in Natural Images},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2016}
}
```
## License

Apache-2.0 for the FineSightBench benchmark code, annotations, and synthetic images. The natural-scene backgrounds for the text-in-the-wild tasks are derived from the SynthText `bg_img` set; please refer to the original SynthText dataset for the background-image license/terms.