---
dataset_info:
  features:
  - name: problem
    dtype: string
  - name: answer
    dtype: string
  - name: num_symbols
    dtype: int64
  - name: num_operations
    dtype: int64
  - name: num_modifiers
    dtype: int64
  - name: target_ops_count
    dtype: int64
  - name: string_length
    dtype: string
  splits:
  - name: train
    num_bytes: 240459
    num_examples: 1296
  - name: validation
    num_bytes: 26664
    num_examples: 144
  download_size: 134208
  dataset_size: 267123
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
# Dataset Card for ESCAIP
<!-- Provide a quick summary of the dataset. -->
**ESCAIP** (Evaluating Symbolic Compositionality from Aligned Inductive Priors) is a procedurally generated benchmark designed to evaluate the compositional reasoning capabilities of language models through systematic string-manipulation tasks. Each problem requires a model to learn abstract symbolic operations from a provably minimal set of examples, then compose them to solve target expressions containing more operations than any demonstration. This kind of length generalization is a critical ingredient of robust generalization in AI systems.
NOTE: This dataset is a preview featuring only 3 operations and no modifiers.
## Motivation
The goal is to provide a generalization benchmark that tests compositionality in an in-context learning setting. Benchmarks like ARC-AGI also do this, but human performance on them is predicated on smuggling in human inductive priors from our physical, embodied, multimodal intuitions. By using text-only string-manipulation tasks, we aim to present a task with greater alignment between human and LLM inductive priors, and thus to measure performance more faithfully.
We hope to aid the research community in answering a fundamental challenge in AI: can models learn to compose learned primitives in novel ways?
## Dataset Details
### Dataset Description
ESCAIP presents models with symbolic string manipulation puzzles where they must:
1. **Learn operations** from minimal examples (e.g., `A+B = concatenation`)
2. **Understand symbols** that map to concrete strings (e.g., `$ = "abc"`)
3. **Compose operations** to solve complex target expressions (e.g., `$+%^&+$ = ?`)
The dataset systematically controls compositional complexity through:
- **Number of symbols** (3-5): Basic building blocks
- **Number of operations** (2-3): Available transformations (concatenation, reverse concatenation, reverse-then-concatenate)
- **Operation count in targets** (3-5): Length of compositional chains
- **String lengths** (3-5 characters): Concrete symbol mappings
The mappings from characters to strings, and from characters to operations, are randomized to ensure that performance is governed purely by in-context learning rather than by memorized associations.
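As an illustration of this randomization, here is a minimal sketch. The generator's actual implementation is not published, so the symbol pool, operator pool, and function name are assumptions:

```python
import random

# Hypothetical pools; the real generator's choices are not documented.
SYMBOL_POOL = list("%[{$&@#")
OPERATOR_POOL = list("^~+*")
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def random_mappings(num_symbols, num_operations, min_len=3, max_len=5, rng=None):
    """Assign random symbol glyphs to random strings, and random operator
    glyphs to operation names, so nothing about a glyph predicts its meaning."""
    rng = rng or random.Random()
    symbols = rng.sample(SYMBOL_POOL, num_symbols)
    strings = {
        s: "".join(rng.choices(ALPHABET, k=rng.randint(min_len, max_len)))
        for s in symbols
    }
    op_names = ["concat", "reverse_concat", "reverse_then_concat"][:num_operations]
    operators = dict(zip(rng.sample(OPERATOR_POOL, num_operations), op_names))
    return strings, operators
```

Because every mapping is resampled per puzzle, a model cannot rely on any fixed glyph-to-meaning association across problems.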
- **Created by:** Ilija Lichkovski
- **License:** MIT
- **Language:** English (symbolic reasoning)
- **Size:** 1,440 problems (1,296 train, 144 validation)
### Dataset Sources
- **Repository:** [upcoming]
- **Paper:** [upcoming]
- **HuggingFace:** https://huggingface.co/datasets/ilijalichkovski/compositional-puzzles
## Uses
### Direct Use
**Primary Use Cases:**
- **Compositional reasoning evaluation** for language models
- **Length generalization** testing (can models handle longer compositions than seen in training?)
- **Abstract reasoning** benchmarks for AI systems
- **Few-shot learning** evaluation (learning operations from minimal examples)
- **Systematic generalization** research in neural networks
**Ideal for RL Researchers:**
- Testing whether models can compose learned "skills" (operations) in novel ways
- Evaluating systematic generalization beyond training distribution
- Studying how models learn abstract rules from concrete examples
- Benchmarking few-shot learning capabilities critical for adaptive RL agents
### Out-of-Scope Use
This dataset focuses specifically on symbolic reasoning and may not directly evaluate:
- Natural language understanding
- Mathematical reasoning beyond string operations
- Visual or multimodal reasoning
- Real-world task performance
## Dataset Structure
### Data Fields
- **`problem`** (string): The complete puzzle text including definitions, operation examples, and target expression
- **`answer`** (string): The correct solution to the target expression
- **`num_symbols`** (int): Number of symbols available (3-5)
- **`num_operations`** (int): Number of operations available (2-3)
- **`num_modifiers`** (int): Number of modifiers (always 0 in this dataset)
- **`target_ops_count`** (int): Number of operations in the target expression (3-5)
- **`string_length`** (string): Length range of symbol mappings (e.g., "3-4")
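Given this schema, the `problem` string can be split into its parts. A minimal parsing sketch, assuming the plain-text layout shown in the example problem further down (section headers `Definitions:`, `Operations:`, `Solve:`):

```python
def parse_problem(problem: str):
    """Split a problem string into symbol definitions, operation
    demonstrations, and the target expression."""
    definitions, demos, target = {}, {}, None
    section = None
    for line in problem.splitlines():
        line = line.strip()
        if not line:
            continue
        if line == "Definitions:":
            section = "defs"
        elif line == "Operations:":
            section = "ops"
        elif line.startswith("Solve:"):
            # e.g. "Solve: {~{^%^[ = ?"
            target = line.removeprefix("Solve:").split("=")[0].strip()
        elif section == "defs":
            sym, val = (part.strip() for part in line.split("="))
            definitions[sym] = val
        elif section == "ops":
            expr, val = (part.strip() for part in line.split("="))
            demos[expr] = val
    return definitions, demos, target
```

The exact text layout is an assumption based on the example shown in this card; verify against a loaded sample before relying on this parser.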
### Data Splits
| Split | Examples |
|-------|----------|
| train | 1,296 |
| validation | 144 |
### Example Problem
```
Definitions:
% = pzm
[ = imkw
{ = rmpj
Operations:
%^[ = mzpimkw
%~[ = pzmimkw
Solve: {~{^%^[ = ?
```
**Answer:** `rmpjjpmrmzpimkw`
This example demonstrates:
- **Symbol learning:** `%`, `[`, `{` map to specific strings
- **Operation learning:** from the demonstrations, `~` is concatenation (`%~[ = pzm + imkw`) and `^` is reverse-then-concatenate (`%^[ = reverse(pzm) + imkw = mzp + imkw`)
- **Composition:** the target chains 3 operations, which must be evaluated in a consistent order
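The worked answer above can be reproduced in code. A minimal solver sketch: the operator-to-function mapping is read off the two demonstrations, and note that the stated answer is reproduced by right-to-left (right-associative) evaluation:

```python
defs = {"%": "pzm", "[": "imkw", "{": "rmpj"}
ops = {
    "~": lambda a, b: a + b,        # concatenation: %~[ = pzm + imkw
    "^": lambda a, b: a[::-1] + b,  # reverse-then-concatenate: %^[ = mzp + imkw
}

def solve(expr: str) -> str:
    """Evaluate an alternating symbol/operator expression right-to-left,
    the order that reproduces the worked answer above."""
    tokens = list(expr)  # e.g. ['{', '~', '{', '^', '%', '^', '[']
    result = defs[tokens[-1]]
    for i in range(len(tokens) - 2, 0, -2):
        result = ops[tokens[i]](defs[tokens[i - 1]], result)
    return result

solve("{~{^%^[")  # → "rmpjjpmrmzpimkw"
```

Tracing it: `%^[` gives `mzpimkw`, then `{^...` gives `jpmrmzpimkw`, then `{~...` gives `rmpjjpmrmzpimkw`, matching the answer.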
## Dataset Creation
### Curation Rationale
The dataset addresses a critical gap in AI evaluation: **systematic compositional reasoning**. While models excel at pattern matching, they often fail when required to compose learned concepts in novel ways—a fundamental requirement for generally intelligent systems.
**Key Design Principles:**
1. **Minimal examples:** Each operation demonstrated with just enough examples for unique identification
2. **Systematic variation:** Controlled complexity progression across symbol count, operation count, and composition length
3. **Unambiguous parsing:** A fixed evaluation order removes parsing ambiguity, isolating compositional reasoning
4. **Random assignment:** Operation symbols randomly assigned to prevent memorization
### Source Data
#### Data Collection and Processing
**Generation Process:**
1. **DAG-based optimization:** Uses dependency graphs to find minimal definition sets required for solvability
2. **Systematic sampling:** Covers all combinations of (symbols: 3-5, operations: 2-3, target length: 3-5)
3. **Verification:** Each puzzle verified for unique solvability given its definitions
4. **Clean generation:** No modifiers included (simpler baseline for initial evaluation)
**Operations Available:**
- **Concatenation:** `A + B → AB`
- **Reverse concatenation:** `A + B → BA`
- **Reverse-then-concatenate:** `A + B → A_reversed + B`
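The three operations above can be sketched directly as pure functions (the function names are illustrative, not the generator's own):

```python
def concat(a: str, b: str) -> str:
    """A + B -> AB"""
    return a + b

def reverse_concat(a: str, b: str) -> str:
    """A + B -> BA"""
    return b + a

def reverse_then_concat(a: str, b: str) -> str:
    """A + B -> reverse(A) + B"""
    return a[::-1] + b
```

Each operation is deterministic, which is what makes fully automatic answer verification possible.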
#### Who are the source data producers?
The data is generated algorithmically by a DAG-based puzzle generator; no human annotation is required, since the puzzles are deterministic and symbolic.
### Annotations
No additional annotations beyond the algorithmic generation process. Each puzzle is automatically verified for correctness and minimal solvability.
#### Personal and Sensitive Information
The dataset contains only randomly generated symbolic strings (e.g., "pzm", "imkw"). No personal, sensitive, or private information is included.
## Bias, Risks, and Limitations
**Limitations:**
- **Scope:** Limited to string operations; doesn't test broader reasoning
- **Symbolic only:** Abstract symbols may not transfer to real-world reasoning
- **Left-to-right parsing:** Simplified parsing may not reflect natural language complexity
- **Operation set:** Limited to 3 string operations
**For RL Applications:**
- Results may not directly predict performance on continuous control or complex environments
- String domain may not capture spatial or temporal reasoning critical for many RL tasks
- Success here doesn't guarantee compositional reasoning in other modalities
### Recommendations
**Best Practices:**
- Use alongside other reasoning benchmarks for comprehensive evaluation
- Focus on compositional patterns rather than absolute performance scores
- Consider this a necessary but not sufficient condition for general reasoning
- Examine model behavior on out-of-distribution composition lengths
**For RL Researchers:**
- Treat as a controlled testbed for compositional reasoning principles
- Use to validate architectural choices before deploying in complex environments
- Consider performance here as a lower bound on compositional capabilities
## Citation
**BibTeX:**
```bibtex
@dataset{lichkovski2024escaip,
  title={Evaluating Symbolic Compositionality from Aligned Inductive Priors},
  author={Lichkovski, Ilija},
  year={2024},
  url={https://huggingface.co/datasets/ilijalichkovski/compositional-puzzles},
  note={1,440 symbolic reasoning problems testing compositional generalization}
}
```
## Dataset Card Authors
Ilija Lichkovski
## Dataset Card Contact
ilija@manifold.mk