---
license: apache-2.0
task_categories:
- table-question-answering
tags:
- arc
- agi
- arc-agi
pretty_name: ARC AGI v1
size_categories:
- 1K<n<10K
---
# ARC-AGI-V1 Dataset (A Take On Format)
This dataset is a reorganized version of the ARC-AGI v1 (Abstraction and Reasoning Corpus) benchmark, formatted for HuggingFace Datasets.
## Dataset Structure
The original ARC-AGI dataset has been transformed from its file-based JSON structure into a standardized HuggingFace dataset with two splits:
- **train** (400 examples): Tasks from the original `training` directory
- **test** (400 examples): Tasks from the original `evaluation` directory
### Original Structure
The original ARC-AGI dataset consisted of:
- A `training` directory with JSON files (one per task)
- An `evaluation` directory with JSON files (one per task)
- Each JSON file named with a task ID (e.g., `007bbfb7.json`)
- Each file containing:
  - `train`: an array of input/output example pairs for learning the pattern
  - `test`: an array of input/output pairs representing the actual task to solve
### Transformed Structure
Each row in this dataset represents a single ARC-AGI task with the following schema:
```json
{
  "id": string,    // Task ID from the original filename
  "list": [        // Combined training examples and test inputs
    [              // Training example inputs (from original 'train')
      [[int]], [[int]], ...
    ],
    [              // Training example outputs (from original 'train')
      [[int]], [[int]], ...
    ],
    [              // Test inputs (from original 'test')
      [[int]], [[int]], ...
    ]
  ],
  "label": [       // Test outputs (from original 'test')
    [[int]], [[int]], ...
  ]
}
```
### Field Descriptions
- `id`: The unique task identifier from the original filename
- `list`: A nested list containing three components, in order:
  - Example inputs (`list[0]`): All input grids from the original `train` array
  - Example outputs (`list[1]`): All output grids from the original `train` array (paired with the example inputs)
  - Test inputs (`list[2]`): All input grids from the original `test` array
- `label`: The correct output grids for the test inputs (from the original `test` array outputs)
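Given this layout, a row can be unpacked back into ARC-style example and test pairs. A minimal sketch (`unpack_task` is a hypothetical helper, not part of the dataset):

```python
def unpack_task(row):
    """Split a dataset row into ARC-style (train_pairs, test_pairs).

    Assumes the schema described above: row['list'] holds
    [example_inputs, example_outputs, test_inputs], and row['label']
    holds the test outputs.
    """
    example_inputs, example_outputs, test_inputs = row['list']
    train_pairs = [{'input': i, 'output': o}
                   for i, o in zip(example_inputs, example_outputs)]
    test_pairs = [{'input': i, 'output': o}
                  for i, o in zip(test_inputs, row['label'])]
    return train_pairs, test_pairs
```

This restores the `{'input': ..., 'output': ...}` pairing of the original per-task JSON files.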
## Data Format
Each grid is represented as a 2D array of integers, where:
- Each integer is a value from 0 to 9, representing a cell's color/state
- Grid dimensions vary from 1×1 to 30×30
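Because every cell is a single digit, a grid can be dumped as plain text for quick inspection. A throwaway sketch (`render_grid` is a hypothetical helper):

```python
def render_grid(grid):
    """Return a printable string for a grid of ints 0-9.

    Each cell becomes its digit; each grid row becomes one line.
    """
    return '\n'.join(' '.join(str(cell) for cell in row) for row in grid)

print(render_grid([[0, 7, 7],
                   [7, 7, 7],
                   [0, 7, 7]]))
```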
### Example
```json
{
  "id": "007bbfb7",
  "list": [
    [
      [[0, 7, 7],                        // Example input 1
       [7, 7, 7],
       [0, 7, 7]],
      [[4, 0, 4], [0, 0, 0], [0, 4, 0]], // Example input 2
      [[0, 0, 0], [0, 0, 2], [2, 0, 2]]  // Example input 3
    ],
    [
      [[0, 0, 0, 0, 7, 7, 0, 7, 7],      // Example output 1
       [0, 0, 0, 7, 7, 7, 7, 7, 7],
       [0, 0, 0, 0, 7, 7, 0, 7, 7],
       [0, 7, 7, 0, 7, 7, 0, 7, 7],
       [7, 7, 7, 7, 7, 7, 7, 7, 7],
       [0, 7, 7, 0, 7, 7, 0, 7, 7],
       [0, 0, 0, 0, 7, 7, 0, 7, 7],
       [0, 0, 0, 7, 7, 7, 7, 7, 7],
       [0, 0, 0, 0, 7, 7, 0, 7, 7]],
      [[], [], [], [], [], [], [], [], []] // Example outputs 2 and 3 elided
    ],
    [
      [[7, 0, 7], [7, 0, 7], [7, 7, 0]]  // Test input 1
    ]
  ],
  "label": [
    [[7, 0, 7, 0, 0, 0, 7, 0, 7],        // Test output 1 (ground truth)
     [7, 0, 7, 0, 0, 0, 7, 0, 7],
     [7, 7, 0, 0, 0, 0, 7, 7, 0],
     [7, 0, 7, 0, 0, 0, 7, 0, 7],
     [7, 0, 7, 0, 0, 0, 7, 0, 7],
     [7, 7, 0, 0, 0, 0, 7, 7, 0],
     [7, 0, 7, 7, 0, 7, 0, 0, 0],
     [7, 0, 7, 7, 0, 7, 0, 0, 0],
     [7, 7, 0, 7, 7, 0, 0, 0, 0]]
  ]
}
```
## Usage Philosophy
```python
from pprint import pprint

pprint(dataset['train']['list'][0][0][0])  # first example input of task 0
pprint(dataset['train']['list'][0][1][0])  # first example output of task 0
print('')
pprint(dataset['train']['list'][0][2][0])  # first test input of task 0
pprint(dataset['train']['label'][0][0])    # ground-truth output for that test input
```
This ARC-AGI dataset format allows (me, at least) to think about the tasks this way:
- **Learn from examples**: Study the input/output pairs:
  - input: `dataset['train']['list'][0][0][0]`
  - output: `dataset['train']['list'][0][1][0]`
  - input: `dataset['train']['list'][0][0][1]`
  - output: `dataset['train']['list'][0][1][1]`
  - where:
    - 1st index: task number
    - 2nd index: 0 for example inputs, 1 for example outputs
    - 3rd index: which example
- **Then, get the tests**: `dataset['train']['list'][0][2][0]`
- **Apply the pattern**: Use the learned rule to make your two guesses
- **Evaluate performance**: Compare model predictions against the `label` field: `dataset['train']['label'][0][0]`
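The evaluate step can be sketched as an exact-match scorer. Assumptions: `predict` is any callable returning a list of output grids aligned with `task['label']`; official ARC-AGI scoring allows two attempts per test input, while this sketch scores a single attempt:

```python
def score(split, predict):
    """Fraction of tasks where the prediction matches 'label' exactly.

    'predict' is a hypothetical callable: task row -> list of output grids.
    A task counts as solved only if every cell of every test grid matches.
    """
    solved = sum(predict(task) == task['label'] for task in split)
    return solved / len(split)
```

An oracle predictor (`lambda t: t['label']`) scores 1.0, giving a quick sanity check for the evaluation loop.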
### Training Split
- Contains all tasks from the original `training` directory
- Intended for model training and development
- Both example pairs and test solutions are provided
### Test Split
- Contains all tasks from the original `evaluation` directory
- Intended for final model evaluation
- In competition settings, test labels may be withheld
## Dataset Features
```python
Features({
    'id': Value('string'),
    'list': List(List(List(List(Value('int64'))))),
    'label': List(List(List(Value('int64'))))
})
```
## Loading the Dataset
```python
from datasets import load_dataset

dataset = load_dataset("ardea/arc_agi_v1")

# Access splits
train_data = dataset['train']
test_data = dataset['test']

# Example: Get a single task
task = train_data[0]
task_id = task['id']
example_inputs = task['list'][0]
example_outputs = task['list'][1]
test_inputs = task['list'][2]
test_outputs = task['label']

# Example: Get a task by id (next() returns the row itself,
# not a one-element list)
task = next(t for t in train_data if t['id'] == '007bbfb7')
```
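Looking a task up by id this way scans the whole split on every call; for repeated lookups, building an index over the `id` column once is cheaper. A sketch (`build_index` is a hypothetical helper, shown on a stand-in dict of columns, since a HuggingFace split exposes its `id` column the same way):

```python
def build_index(split):
    """Map task id -> row position.

    Works on anything where split['id'] yields the id column
    (e.g. a datasets.Dataset or a plain dict of columns).
    """
    return {task_id: i for i, task_id in enumerate(split['id'])}

# Usage against the real split (assuming 'train_data' loaded as above):
# idx = build_index(train_data)
# task = train_data[idx['007bbfb7']]
```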
## Transparency
I've left the script I used on the original dataset here as `arc_to_my_hf.py`.
## Citation
If you use this dataset, please cite the original ARC-AGI work:
```bibtex
@misc{chollet2019measure,
  title={On the Measure of Intelligence},
  author={François Chollet},
  year={2019},
  eprint={1911.01547},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
```
## License
This dataset maintains the Apache 2.0 license from the original ARC-AGI corpus.