tylerxdurden committed on
Commit fd39e9b · verified · 1 Parent(s): b7fc6b4

docs: add comprehensive dataset card

Files changed (1): README.md (+203 −3)
---
license: apache-2.0
task_categories:
- visual-question-answering
- image-classification
language:
- en
- zh
tags:
- benchmark
- visual-reasoning
- puzzle
- multimodal
- evaluation
- generative
- discriminative
- deterministic
size_categories:
- 100K<n<1M
pretty_name: "TACIT Benchmark"
configs:
- config_name: default
  data_files:
  - split: test
    path: "task_*/**/*.png"
---

# TACIT Benchmark v0.1.0

**Transformation-Aware Capturing of Implicit Thought**

A programmatic visual reasoning benchmark for evaluating the generative and discriminative capabilities of multimodal models across 10 tasks and 6 reasoning domains.

**Author:** [Daniel Nobrega Medeiros](https://www.linkedin.com/in/daniel-nobrega-187272124)
&nbsp;|&nbsp; [arXiv paper](https://arxiv.org/abs/2602.07061)
&nbsp;|&nbsp; [GitHub](https://github.com/danielxmed/tacit-benchmark)

## Overview

TACIT presents visual puzzles that require genuine spatial, logical, and structural reasoning — not pattern matching on text. Each puzzle is generated programmatically with deterministic seeding, ensuring full reproducibility. Evaluation is **programmatic** (no LLM-as-judge): solutions are verified through computer vision algorithms (pixel sampling, SSIM, BFS path detection, color counting).
41
+
42
+ ### Key Features
43
+
44
+ - **6,000 puzzles** across 10 tasks and 3 difficulty levels
45
+ - **Dual-track evaluation**: generative (produce a solution image) and discriminative (select from candidates)
46
+ - **Multi-resolution**: every puzzle rendered at 512px, 1024px, and 2048px
47
+ - **Deterministic**: seeded generation (seed=42) for exact reproducibility
48
+ - **Programmatic verification**: CV-based solution checking, no subjective evaluation
49
+

## Tasks

| # | Task | Domain | Easy | Medium | Hard |
|---|------|--------|------|--------|------|
| 01 | **Multi-layer Mazes** | Spatial Reasoning | 8×8, 1 layer | 16×16, 2 layers, 2 portals | 32×32, 3 layers, 5 portals |
| 02 | **Raven's Progressive Matrices** | Abstract Reasoning | 1 rule | 2 rules | 3 rules, compositional |
| 03 | **Cellular Automata Forward** | Causal Reasoning | 8×8, 1 step | 16×16, 3 steps | 32×32, 5 steps |
| 04 | **Cellular Automata Inverse** | Causal Reasoning | 8×8, 4 rules | 16×16, 8 rules | 32×32, 16 rules |
| 05 | **Visual Logic Grids** | Logical Reasoning | 4×4, 6 constraints | 5×5, 10 constraints | 6×6, 16 constraints |
| 06 | **Planar Graph k-Coloring** | Graph Theory | 6 nodes, k=4 | 12 nodes, k=4 | 20 nodes, k=3 |
| 07 | **Graph Isomorphism** | Graph Theory | 5 nodes | 8 nodes | 12 nodes |
| 08 | **Unknot Detection** | Topology | 3 crossings | 6 crossings | 10 crossings |
| 09 | **Orthographic Projection** | Spatial Reasoning | 6 faces | 10 faces, 1 concavity | 16 faces, 3 concavities |
| 10 | **Isometric Reconstruction** | Spatial Reasoning | 6 faces | 10 faces, 1 ambiguity | 16 faces, 2 ambiguities |

Each task has **200 puzzles per difficulty level** (easy / medium / hard) = **600 per task**, **6,000 total**.
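
To make the cellular-automaton tasks concrete, here is a single forward step of a Conway-style outer-totalistic rule (B3/S23) on a wrap-around grid. The benchmark's actual per-puzzle rule sets are not documented here, so this particular rule and the `ca_step` helper are assumptions chosen purely for illustration.

```python
import numpy as np

def ca_step(grid):
    """One forward step of a B3/S23 rule on a toroidal boolean grid.

    Illustrative only: TACIT draws from task-specific rule sets,
    which this sketch does not reproduce.
    """
    # Count live neighbours by summing the 8 shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on 3 neighbours; survival on 2 or 3.
    return (n == 3) | (grid & (n == 2))
```

For example, a vertical 3-cell "blinker" becomes horizontal after one step and returns to vertical after two, which gives a quick sanity check of any forward-prediction output.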

## Evaluation Tracks

### Track 1 — Generative

The model receives a puzzle image and must **produce a solution image** (e.g., a solved maze, colored graph, completed matrix). Verification is fully programmatic using computer vision:

| Task | Verification Method |
|------|---------------------|
| Maze | BFS path detection on rendered solution |
| Raven | SSIM comparison (threshold 0.997) |
| CA Forward / Inverse | Pixel sampling of cell states |
| Logic Grid | Pixel sampling of grid cells |
| Graph Coloring | Occlusion-aware node color sampling |
| Graph Isomorphism | Color counting + structural validation |
| Unknot | Color region counting |
| Ortho Projection | Pixel sampling of projection views |
| Iso Reconstruction | SSIM comparison (threshold 0.99999) |
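
As a rough illustration of the pixel-sampling methods in the table, the sketch below reads an N×N grid of cell states by sampling the centre pixel of each cell and compares it to the expected grid. The function names, the dark-pixel convention, and the threshold are hypothetical; the benchmark's own verifiers are in the GitHub repository.

```python
import numpy as np

def read_grid_states(img, n_cells, threshold=128):
    """Sample the centre pixel of each cell in a grayscale image.

    Convention assumed here: a dark pixel (< threshold) marks a
    filled/live cell. `img` is an HxW uint8 array.
    """
    h, w = img.shape
    states = np.zeros((n_cells, n_cells), dtype=bool)
    for r in range(n_cells):
        for c in range(n_cells):
            y = int((r + 0.5) * h / n_cells)
            x = int((c + 0.5) * w / n_cells)
            states[r, c] = img[y, x] < threshold
    return states

def verify_grid(candidate_img, expected_states, n_cells):
    """Track 1 style check: sampled states must match the ground truth exactly."""
    return np.array_equal(read_grid_states(candidate_img, n_cells),
                          np.asarray(expected_states, dtype=bool))
```

Sampling cell centres rather than comparing whole images makes the check robust to anti-aliasing and minor rendering differences at cell borders.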
### Track 2 — Discriminative

The model receives a puzzle image plus **4 distractor images** and **1 correct solution**, and must identify the correct answer. This is a 5-way multiple-choice visual task, so chance accuracy is 20%.
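
A Track 2 scoring loop might look like the following. The `score_discriminative` helper, the `model_choose` callback, and the item layout are hypothetical scaffolding, not part of the TACIT harness; the seeded shuffle keeps candidate order reproducible across runs.

```python
import random

def score_discriminative(items, model_choose, seed=42):
    """Score 5-way multiple choice: fraction of puzzles answered correctly.

    items: iterable of (puzzle, solution, distractors) where distractors
    is a list of 4 wrong candidates. model_choose(puzzle, candidates)
    returns the index of the model's pick.
    """
    rng = random.Random(seed)  # reproducible candidate order
    correct = 0
    items = list(items)
    for puzzle, solution, distractors in items:
        candidates = [solution] + list(distractors)
        rng.shuffle(candidates)
        answer_idx = candidates.index(solution)
        if model_choose(puzzle, candidates) == answer_idx:
            correct += 1
    return correct / len(items)
```

Any accuracy significantly above the 20% chance baseline indicates genuine discriminative signal.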

## Dataset Structure

```
snapshot/
├── metadata.json                  # Generation config and parameters
├── README.md                      # This file
├── task_01_maze/
│   ├── task_info.json             # Task parameters
│   ├── easy/
│   │   ├── 512/                   # 512px resolution
│   │   │   ├── puzzle_0000.png
│   │   │   ├── solution_0000.png
│   │   │   ├── distractors_0000/
│   │   │   │   ├── distractor_00.png
│   │   │   │   ├── distractor_01.png
│   │   │   │   ├── distractor_02.png
│   │   │   │   └── distractor_03.png
│   │   │   ├── puzzle_0001.png
│   │   │   ├── solution_0001.png
│   │   │   ├── distractors_0001/
│   │   │   │   └── ...
│   │   │   └── ... (200 puzzles)
│   │   ├── 1024/                  # 1024px resolution
│   │   │   └── ... (same structure)
│   │   └── 2048/                  # 2048px resolution
│   │       └── ... (same structure)
│   ├── medium/
│   │   └── ... (same structure)
│   └── hard/
│       └── ... (same structure)
├── task_02_raven/
│   └── ...
└── ... (10 tasks total)
```

### File Naming Convention

- `puzzle_NNNN.png` — the input puzzle image
- `solution_NNNN.png` — the ground-truth solution (Track 1 target)
- `distractors_NNNN/distractor_0X.png` — 4 wrong answers (Track 2 candidates)
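
The naming convention above can be turned into repository-relative paths with a small helper. `puzzle_paths` is an illustrative function written for this card, not part of the benchmark's code; it only encodes the directory layout shown in "Dataset Structure".

```python
def puzzle_paths(task, difficulty, resolution, index):
    """Build repo-relative paths for one puzzle, following the card's layout.

    task:       e.g. "task_01_maze"
    difficulty: "easy" | "medium" | "hard"
    resolution: 512 | 1024 | 2048
    index:      0..199 (zero-padded to 4 digits in filenames)
    """
    base = f"{task}/{difficulty}/{resolution}"
    return {
        "puzzle": f"{base}/puzzle_{index:04d}.png",
        "solution": f"{base}/solution_{index:04d}.png",
        "distractors": [
            f"{base}/distractors_{index:04d}/distractor_{k:02d}.png"
            for k in range(4)
        ],
    }
```

These strings can be passed directly as the `filename` argument of `hf_hub_download` shown in the Usage section.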

### Statistics

| Metric | Value |
|--------|-------|
| Total puzzles | 6,000 |
| Total PNG files | 108,008 |
| Resolutions | 512, 1024, 2048 px |
| Difficulties | easy, medium, hard |
| Distractors per puzzle | 4 |
| Dataset size | ~3.9 GB |
| Generation seed | 42 |

## Usage

### Loading with Hugging Face

```python
from datasets import load_dataset

# Load the full dataset
ds = load_dataset("tylerxdurden/TACIT-benchmark")

# Or download specific files
from huggingface_hub import hf_hub_download

puzzle = hf_hub_download(
    repo_id="tylerxdurden/TACIT-benchmark",
    filename="task_01_maze/easy/1024/puzzle_0000.png",
    repo_type="dataset",
)
```

### Using the Evaluation Harness

```python
from tacit.registry import GENERATORS

# Regenerate a specific puzzle (deterministic)
gen = GENERATORS["maze"]
puzzle = gen.generate(seed=42, difficulty="easy", index=0)

# Verify a candidate solution (Track 1)
is_correct = gen.verify(puzzle, candidate_png=model_output_bytes)
```

See the [GitHub repository](https://github.com/danielxmed/tacit-benchmark) for full evaluation documentation.

## Reasoning Domains

The 10 tasks span **6 reasoning domains**, chosen to probe different aspects of visual cognition:

1. **Spatial Reasoning** — Mazes, orthographic projection, isometric reconstruction
2. **Abstract Reasoning** — Raven's progressive matrices
3. **Causal Reasoning** — Cellular automata (forward prediction and inverse inference)
4. **Logical Reasoning** — Visual logic grids
5. **Graph Theory** — Graph coloring, graph isomorphism
6. **Topology** — Unknot detection

## Citation

```bibtex
@misc{medeiros_2026,
  author    = {Daniel Nobrega Medeiros},
  title     = {TACIT-benchmark},
  year      = 2026,
  url       = {https://huggingface.co/datasets/tylerxdurden/TACIT-benchmark},
  doi       = {10.57967/hf/7904},
  publisher = {Hugging Face}
}
```

## License

Apache 2.0