# ARC Prize 2026 - ARC-AGI-2 Solver
Multi-strategy ensemble solver for the ARC Prize 2026 Kaggle competition.
## Architecture
Three-pronged ensemble:
- **DSL Solver** – 32 primitive transforms (rotations, flips, cropping, border extraction, hole filling, object detection, color operations) plus depth-2 composition. Solves geometric/structural tasks in milliseconds.
- **Object Solver** – connected-component detection: extracts the largest objects and converts object sets to color bars.
- **TTT Solver** – a 236K-parameter encoder-decoder Transformer trained from scratch per task via test-time training, with 16 augmentations per example (D8 symmetries + color permutations).
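The D8 + color-permutation augmentation used by the TTT solver can be sketched as follows. This is a minimal illustration, not the repo's actual code; the function names and the choice of two color permutations per symmetry are assumptions made to reach 16 augmentations per example.

```python
import numpy as np

def d8_views(grid: np.ndarray):
    """Yield the 8 dihedral (D8) symmetries of a 2-D grid."""
    for k in range(4):
        rot = np.rot90(grid, k)
        yield rot
        yield np.fliplr(rot)

def augment(grid: np.ndarray, rng: np.random.Generator, n_color_perms: int = 2):
    """D8 symmetries x color permutations -> 16 augmented grids."""
    out = []
    for view in d8_views(grid):
        for _ in range(n_color_perms):
            perm = rng.permutation(10)   # ARC grids use colors 0-9
            out.append(perm[view])       # remap every cell's color
    return out

aug = augment(np.array([[0, 1], [2, 3]]), np.random.default_rng(0))
print(len(aug))  # 8 symmetries x 2 color perms = 16
```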
Ensemble logic: DSL → Object → TTT → Identity fallback
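A minimal sketch of this cascade, assuming each solver returns a grid-to-grid function (or `None`) and a candidate is accepted only if it reproduces every training pair. The toy `dsl_search` below mirrors the depth-2 composition idea with just four primitives; solver names and signatures are placeholders, not the repo's actual API.

```python
import numpy as np

def fits_all(fn, train_pairs) -> bool:
    """Accept a candidate only if it maps every train input to its output."""
    return all(np.array_equal(fn(np.array(i)), np.array(o)) for i, o in train_pairs)

def solve(train_pairs, test_input, solvers):
    for solver in solvers:            # e.g. [dsl_search, object_solver, ttt_solver]
        fn = solver(train_pairs)      # each returns a grid->grid function or None
        if fn is not None and fits_all(fn, train_pairs):
            return fn(np.array(test_input))
    return np.array(test_input)       # identity fallback

# Toy DSL search: try each primitive, then every depth-2 composition.
PRIMS = [np.fliplr, np.flipud, lambda g: np.rot90(g, 1), lambda g: np.rot90(g, 2)]

def dsl_search(train_pairs):
    candidates = PRIMS + [lambda g, a=a, b=b: b(a(g)) for a in PRIMS for b in PRIMS]
    for fn in candidates:
        if fits_all(fn, train_pairs):
            return fn
    return None

train = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]   # a horizontal mirror task
print(solve(train, [[5, 6], [7, 8]], [dsl_search]).tolist())  # [[6, 5], [8, 7]]
```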
## Performance
- DSL accuracy on ARC-AGI-2 training: ~1.8% (18/1000 tasks)
- TTT model: 236K parameters, ~5-10 s per task on an L4 GPU
- Optimized for: Kaggle 4×L4 GPUs, 12-hour time limit
## Usage
- Download `kaggle_notebook.py` and upload it to Kaggle as a notebook
- Run it: the notebook reads from `/kaggle/input/arc-prize-2026/` and writes `/kaggle/working/submission.json`
## Files
| File | Description |
|---|---|
| `kaggle_notebook.py` | Complete Kaggle submission notebook |
| `kaggle_solver.py` | Standalone Python module (more features) |
| `train_sft_barc.py` | Optional: SFT pre-training on the BARC dataset |
## References
Built on research from:
- NVARC (2025 winner, 24% on ARC-AGI-2): Test-time training + heavy augmentation
- Product-of-Experts (Franzen et al., 2025): DFS + probability threshold + PoE scoring
- SOAR (2025, 2nd place paper): Self-improving evolutionary program synthesis
- MARC (NeurIPS 2025): The Surprising Effectiveness of TTT
- CompressARC (2025, 3rd place paper): MDL-based code golf, 76K parameters
## License
MIT – free to use, modify, and distribute.