  - split: validation
    path: data/validation-*
---

# Dataset Card for ESCAIP

**ESCAIP** (Evaluating Symbolic Compositionality from Aligned Inductive Priors) is a procedurally generated benchmark designed to evaluate the compositional reasoning capabilities of language models through systematic string-manipulation tasks. Each problem requires a model to learn abstract symbolic operations from a provably minimal set of examples, then compose them to solve target expressions containing more operations than any demonstration. Such length generalization is a critical capability for robust generalization in AI systems.

**Note:** this dataset is a preview featuring only 3 operations and no modifiers.

## Motivation

The goal is to provide a generalization benchmark that tests compositionality in an in-context-learning setting. Benchmarks like ARC-AGI also do this, but human performance on them is predicated on smuggling in inductive priors from our physical, embodied, multimodal intuitions. By using text-only string-manipulation tasks, ESCAIP offers greater alignment between human and LLM inductive priors, measuring compositional performance more faithfully.

We hope to help the research community answer a fundamental question in AI: can models compose learned primitives in novel ways?

## Dataset Details

### Dataset Description

ESCAIP presents models with symbolic string-manipulation puzzles in which they must:
1. **Learn operations** from minimal examples (e.g., `A+B = concatenation`)
2. **Understand symbols** that map to concrete strings (e.g., `$ = "abc"`)
3. **Compose operations** to solve complex target expressions (e.g., `$+%^&+$ = ?`)

The dataset systematically controls compositional complexity through:
- **Number of symbols** (3-5): Basic building blocks
- **Number of operations** (2-3): Available transformations (concatenation, reverse concatenation, reverse-then-concatenate)
- **Operation count in targets** (3-5): Length of compositional chains
- **String lengths** (3-5 characters): Concrete symbol mappings

The mappings from characters to strings, and from characters to operations, are randomized so that performance is governed purely by in-context learning rather than memorization.

- **Created by:** Ilija Lichkovski
- **License:** MIT
- **Language:** English (symbolic reasoning)
- **Size:** 1,440 problems (1,296 train, 144 validation)

### Dataset Sources

- **Repository:** [upcoming]
- **Paper:** [upcoming]
- **HuggingFace:** https://huggingface.co/datasets/ilijalichkovski/compositional-puzzles

## Uses

### Direct Use

**Primary Use Cases:**
- **Compositional reasoning evaluation** for language models
- **Length generalization** testing (can models handle longer compositions than seen in training?)
- **Abstract reasoning** benchmarks for AI systems
- **Few-shot learning** evaluation (learning operations from minimal examples)
- **Systematic generalization** research in neural networks

**Ideal for RL Researchers:**
- Testing whether models can compose learned "skills" (operations) in novel ways
- Evaluating systematic generalization beyond the training distribution
- Studying how models learn abstract rules from concrete examples
- Benchmarking the few-shot learning capabilities critical for adaptive RL agents

### Out-of-Scope Use

This dataset focuses specifically on symbolic reasoning and may not directly evaluate:
- Natural language understanding
- Mathematical reasoning beyond string operations
- Visual or multimodal reasoning
- Real-world task performance

## Dataset Structure

### Data Fields

- **`problem`** (string): The complete puzzle text, including definitions, operation examples, and the target expression
- **`answer`** (string): The correct solution to the target expression
- **`num_symbols`** (int): Number of symbols available (3-5)
- **`num_operations`** (int): Number of operations available (2-3)
- **`num_modifiers`** (int): Number of modifiers (always 0 in this dataset)
- **`target_ops_count`** (int): Number of operations in the target expression (3-5)
- **`string_length`** (string): Length range of the symbol mappings (e.g., "3-4")
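
A single record can be represented as a plain dictionary. The field names below follow the schema above; the values mirror the worked example problem shown later in this card and are an illustration, not a verbatim dataset row.

```python
# Illustrative ESCAIP record; field names follow the schema above,
# values mirror the card's worked example rather than an actual row.
record = {
    "problem": (
        "Definitions:\n% = pzm\n[ = imkw\n{ = rmpj\n\n"
        "Operations:\n%^[ = mzpimkw\n%~[ = pzmimkw\n\n"
        "Solve: {~{^%^[ = ?"
    ),
    "answer": "rmpjjpmrmzpimkw",
    "num_symbols": 3,
    "num_operations": 2,
    "num_modifiers": 0,       # always 0 in this preview
    "target_ops_count": 3,
    "string_length": "3-4",   # symbol mappings are 3-4 characters long
}
```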

### Data Splits

| Split      | Examples |
|------------|----------|
| train      | 1,296    |
| validation | 144      |

### Example Problem

```
Definitions:
% = pzm
[ = imkw
{ = rmpj

Operations:
%^[ = mzpimkw
%~[ = pzmimkw

Solve: {~{^%^[ = ?
```

**Answer:** `rmpjjpmrmzpimkw`

This example demonstrates:
- **Symbol learning:** `%`, `[`, `{` map to specific strings
- **Operation learning:** `^` (reverse-then-concatenate: `%^[` gives `reverse("pzm") + "imkw" = "mzpimkw"`) and `~` (concatenation: `%~[` gives `"pzm" + "imkw" = "pzmimkw"`), inferred from the worked examples
- **Composition:** The target requires chaining 3 operations
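
A small helper for reading the plain-text problem format above might look like this. It is a sketch: the layout is assumed from the example shown, and this is not the dataset's official parser.

```python
def parse_problem(text: str):
    """Split an ESCAIP-style problem into symbol definitions, operation
    examples, and the target expression (sketch, layout assumed)."""
    definitions, op_examples, target = {}, {}, None
    section = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.endswith(":") and "=" not in line:
            section = line[:-1]            # "Definitions" or "Operations"
        elif line.startswith("Solve:"):
            # "Solve: {~{^%^[ = ?" -> "{~{^%^["
            target = line.removeprefix("Solve:").strip().rstrip("= ?").strip()
        else:
            lhs, rhs = (part.strip() for part in line.split("=", 1))
            if section == "Definitions":
                definitions[lhs] = rhs
            elif section == "Operations":
                op_examples[lhs] = rhs
    return definitions, op_examples, target
```

Applied to the example above, this yields `{"%": "pzm", "[": "imkw", "{": "rmpj"}` as definitions and `{~{^%^[` as the target expression.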

## Dataset Creation

### Curation Rationale

The dataset addresses a critical gap in AI evaluation: **systematic compositional reasoning**. While models excel at pattern matching, they often fail when required to compose learned concepts in novel ways, a fundamental requirement for generally intelligent systems.

**Key Design Principles:**
1. **Minimal examples:** Each operation is demonstrated with just enough examples for unique identification
2. **Systematic variation:** Controlled complexity progression across symbol count, operation count, and composition length
3. **Unambiguous parsing:** Left-to-right evaluation removes parsing complexity, isolating compositional reasoning
4. **Random assignment:** Operation symbols are randomly assigned to prevent memorization
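
Under the left-to-right evaluation described in principle 3, solving reduces to a simple fold over the expression. This sketch uses hypothetical names and a plain concatenation operation for illustration; it is not the generator's solver.

```python
def evaluate_left_to_right(expr, symbols, ops):
    # Fold over the expression left to right: acc = op(acc, next_symbol).
    # expr alternates symbol and operator characters, e.g. "a+b+c".
    acc = symbols[expr[0]]
    for i in range(1, len(expr), 2):
        op, sym = ops[expr[i]], symbols[expr[i + 1]]
        acc = op(acc, sym)
    return acc

# Toy illustration with concatenation as the only operation:
symbols = {"a": "xy", "b": "zw", "c": "q"}
ops = {"+": lambda left, right: left + right}
result = evaluate_left_to_right("a+b+c", symbols, ops)  # "xy" + "zw" + "q"
```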

### Source Data

#### Data Collection and Processing

**Generation Process:**
1. **DAG-based optimization:** Dependency graphs are used to find the minimal definition set required for solvability
2. **Systematic sampling:** Covers all combinations of (symbols: 3-5, operations: 2-3, target length: 3-5)
3. **Verification:** Each puzzle is verified to be uniquely solvable given its definitions
4. **Clean generation:** No modifiers are included (a simpler baseline for initial evaluation)

**Operations Available:**
- **Concatenation:** `A + B → AB`
- **Reverse concatenation:** `A + B → BA`
- **Reverse-then-concatenate:** `A + B → A_reversed + B`
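
The three primitives are straightforward to express in code. This sketch follows the definitions above; the function names are ours, not the generator's.

```python
def concatenate(a: str, b: str) -> str:
    """A + B -> AB"""
    return a + b

def reverse_concatenate(a: str, b: str) -> str:
    """A + B -> BA"""
    return b + a

def reverse_then_concatenate(a: str, b: str) -> str:
    """A + B -> reverse(A) + B"""
    return a[::-1] + b
```

With the example card's mappings (`% = pzm`, `[ = imkw`), `reverse_then_concatenate("pzm", "imkw")` yields `mzpimkw`, matching the worked example `%^[ = mzpimkw`.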

#### Who are the source data producers?

The data is generated algorithmically by a DAG-based puzzle generator. No human annotation is required, owing to the deterministic symbolic nature of the task.

### Annotations

There are no annotations beyond the algorithmic generation process. Each puzzle is automatically verified for correctness and minimal solvability.

#### Personal and Sensitive Information

The dataset contains only randomly generated symbolic strings (e.g., "pzm", "imkw"). No personal, sensitive, or private information is included.

## Bias, Risks, and Limitations

**Limitations:**
- **Scope:** Limited to string operations; does not test broader reasoning
- **Symbolic only:** Abstract symbols may not transfer to real-world reasoning
- **Left-to-right parsing:** Simplified parsing may not reflect natural-language complexity
- **Operation set:** Limited to 3 string operations

**For RL Applications:**
- Results may not directly predict performance in continuous-control or complex environments
- The string domain may not capture the spatial or temporal reasoning critical to many RL tasks
- Success here does not guarantee compositional reasoning in other modalities

### Recommendations

**Best Practices:**
- Use alongside other reasoning benchmarks for comprehensive evaluation
- Focus on compositional patterns rather than absolute performance scores
- Treat this as a necessary but not sufficient condition for general reasoning
- Examine model behavior on out-of-distribution composition lengths

**For RL Researchers:**
- Treat the dataset as a controlled testbed for compositional-reasoning principles
- Use it to validate architectural choices before deploying in complex environments
- Consider performance here a lower bound on compositional capabilities

## Citation

**BibTeX:**

```bibtex
@dataset{lichkovski2024escaip,
  title={Evaluating Symbolic Compositionality from Aligned Inductive Priors},
  author={Lichkovski, Ilija},
  year={2024},
  url={https://huggingface.co/datasets/ilijalichkovski/compositional-puzzles},
  note={1,440 symbolic reasoning problems testing compositional generalization}
}
```

## Dataset Card Authors

Ilija Lichkovski

## Dataset Card Contact

ilija@manifold.mk