getting the data right #2
by Sinjhin - opened
.claude/settings.local.json ADDED
@@ -0,0 +1,10 @@
+{
+  "permissions": {
+    "allow": [
+      "Bash(tree:*)",
+      "Bash(find:*)"
+    ],
+    "deny": [],
+    "ask": []
+  }
+}
.gitignore ADDED
@@ -0,0 +1 @@
+.venv
.python-version ADDED
@@ -0,0 +1 @@
+3.12
README.md CHANGED
@@ -1,3 +1,196 @@
 ---
 license: apache-2.0
 ---
+
+# ARC-AGI-V1 Dataset (A Take On Format)
+
+This dataset is a reorganized version of the [ARC-AGI v1](https://github.com/fchollet/ARC-AGI) (Abstraction and Reasoning Corpus) benchmark, formatted for HuggingFace Datasets.
+
+## Dataset Structure
+
+The original ARC-AGI dataset has been transformed from its file-based JSON structure into a standardized HuggingFace dataset with two splits:
+
+- **train** (400 examples): Tasks from the original `training` directory
+- **test** (400 examples): Tasks from the original `evaluation` directory
+
+### Original Structure
+
+The original ARC-AGI dataset consisted of:
+- A `training` directory with JSON files (one per task)
+- An `evaluation` directory with JSON files (one per task)
+- Each JSON file named with a task ID (e.g., `007bbfb7.json`)
+- Each file containing:
+  - `train`: Array of input/output example pairs for learning the pattern
+  - `test`: Array of input/output pairs representing the actual task to solve
+
+### Transformed Structure
+
+Each row in this dataset represents a single ARC-AGI task with the following schema:
+
+```
+{
+  "id": string,   // Task ID from the original filename
+  "list": [       // Combined training examples and test inputs
+    [             // Training example inputs (from original 'train')
+      [[int]], [[int]], ...
+    ],
+    [             // Training example outputs (from original 'train')
+      [[int]], [[int]], ...
+    ],
+    [             // Test inputs (from original 'test')
+      [[int]], [[int]], ...
+    ]
+  ],
+  "label": [      // Test outputs (from original 'test')
+    [[int]], [[int]], ...
+  ]
+}
+```
+
+#### Field Descriptions
+
+- **`id`**: The unique task identifier from the original filename
+- **`list`**: A nested list containing three components in order:
+  1. **Example inputs** (`list[0]`): All input grids from the original `train` array
+  2. **Example outputs** (`list[1]`): All output grids from the original `train` array (paired with the example inputs)
+  3. **Test inputs** (`list[2]`): All input grids from the original `test` array
+- **`label`**: The correct output grids for the test inputs (from the original `test` array's outputs); a sketch for rebuilding the original JSON layout from a row follows below
+
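+To go back to the original per-task JSON layout from a row, here is a minimal sketch (the function name is mine, not part of the dataset):
+
+```python
+# Illustrative inverse mapping (not shipped with the dataset): rebuild the
+# original ARC-AGI JSON structure from one dataset row.
+def to_arc_json(task: dict) -> dict:
+    ex_inputs, ex_outputs, test_inputs = task["list"]
+    return {
+        "train": [{"input": i, "output": o} for i, o in zip(ex_inputs, ex_outputs)],
+        "test": [{"input": i, "output": o} for i, o in zip(test_inputs, task["label"])],
+    }
+```
+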
+### Data Format
+
+Each grid is represented as a 2D array of integers (0-9), where:
+- Values range from 0 to 9 (representing different colors/states)
+- Grid dimensions vary from 1×1 to 30×30
+- Each integer represents a colored cell in the grid
+
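+For quick visual inspection of a grid, a tiny sketch (the helper name is mine, not part of the dataset):
+
+```python
+# Illustrative helper: print a grid one row per line, cells space-separated.
+def show_grid(grid: list[list[int]]) -> None:
+    for row in grid:
+        print(" ".join(str(cell) for cell in row))
+```
+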
+### Example
+
+```json
+{
+  "id": "007bbfb7",
+  "list": [
+    [
+      [[0, 7, 7],                          // Example input 1
+       [7, 7, 7],
+       [0, 7, 7]],
+      [[4, 0, 4], [0, 0, 0], [0, 4, 0]],   // Example input 2
+      [[0, 0, 0], [0, 0, 2], [2, 0, 2]]    // Example input 3
+    ],
+    [
+      [[0, 0, 0, 0, 7, 7, 0, 7, 7],        // Example output 1
+       [0, 0, 0, 7, 7, 7, 7, 7, 7],
+       [0, 0, 0, 0, 7, 7, 0, 7, 7],
+       [0, 7, 7, 0, 7, 7, 0, 7, 7],
+       [7, 7, 7, 7, 7, 7, 7, 7, 7],
+       [0, 7, 7, 0, 7, 7, 0, 7, 7],
+       [0, 0, 0, 0, 7, 7, 0, 7, 7],
+       [0, 0, 0, 7, 7, 7, 7, 7, 7],
+       [0, 0, 0, 0, 7, 7, 0, 7, 7]],
+      [[], [], [], [], [], [], [], [], []] // etc..
+    ],
+    [
+      [[7, 0, 7], [7, 0, 7], [7, 7, 0]]    // Test input 1
+    ]
+  ],
+  "label": [
+    [[7, 0, 7, 0, 0, 0, 7, 0, 7],          // Test output 1 (ground truth)
+     [7, 0, 7, 0, 0, 0, 7, 0, 7],
+     [7, 7, 0, 0, 0, 0, 7, 7, 0],
+     [7, 0, 7, 0, 0, 0, 7, 0, 7],
+     [7, 0, 7, 0, 0, 0, 7, 0, 7],
+     [7, 7, 0, 0, 0, 0, 7, 7, 0],
+     [7, 0, 7, 7, 0, 7, 0, 0, 0],
+     [7, 0, 7, 7, 0, 7, 0, 0, 0],
+     [7, 7, 0, 7, 7, 0, 0, 0, 0]]
+  ]
+}
+```
+
+## Usage Philosophy
+
+```python
+from pprint import pprint
+
+# assuming `dataset` was loaded as shown in "Loading the Dataset" below
+pprint(dataset['train']['list'][0][0][0])  # example input 1 of task 0
+pprint(dataset['train']['list'][0][1][0])  # example output 1 of task 0
+print('')
+pprint(dataset['train']['list'][0][2][0])  # test input 1 of task 0
+pprint(dataset['train']['label'][0][0])    # test output 1 of task 0
+```
+
+This ARC-AGI dataset format lets me (at least) think about the tasks this way:
+1. **Learn from examples**: Study the input/output pairs:
+   - input: `dataset['train']['list'][0][0][0]`
+   - output: `dataset['train']['list'][0][1][0]`
+   - input: `dataset['train']['list'][0][0][1]`
+   - output: `dataset['train']['list'][0][1][1]`
+   - where:
+     - 1st num: `task number`
+     - 2nd num: `either 0: example input || 1: example output`
+     - 3rd num: `which example?`
+   - (a small accessor for this convention is sketched below)
+2. **Then get the tests**:
+   - `dataset['train']['list'][0][2][0]`
+3. **Apply the pattern**: Use the learned rule to make your two guesses
+4. **Evaluate performance**: Compare model predictions against the `label` field
+   - `dataset['train']['label'][0][0]`
+
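+As a sketch of that indexing convention (the helper name is mine, not an official API):
+
+```python
+# Illustrative accessor for the index convention described above.
+# Note: indexing a column like split['list'] materializes it as a plain
+# Python list, which is fine at this dataset's size (<1K rows).
+def get_example_pair(split, task_idx: int, pair_idx: int):
+    """Return the (input, output) grids of one demonstration pair."""
+    return (split['list'][task_idx][0][pair_idx],
+            split['list'][task_idx][1][pair_idx])
+
+# e.g. inp, out = get_example_pair(dataset['train'], 0, 0)
+```
+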
+### Training Split
+- Contains all tasks from the original `training` directory
+- Intended for model training and development
+- Both example pairs and test solutions are provided
+
+### Test Split
+- Contains all tasks from the original `evaluation` directory
+- Intended for final model evaluation
+- In competition settings, test labels may be withheld
+
+## Dataset Features
+
+```python
+Features({
+    'id': Value('string'),
+    'list': List(List(List(List(Value('int64'))))),
+    'label': List(List(List(Value('int64'))))
+})
+```
+
+## Loading the Dataset
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("ardea/arc_agi_v1")
+
+# Access splits
+train_data = dataset['train']
+test_data = dataset['test']
+
+# Example: Get a single task
+task = train_data[0]
+task_id = task['id']
+example_inputs = task['list'][0]
+example_outputs = task['list'][1]
+test_inputs = task['list'][2]
+test_outputs = task['label']
+
+# Example: Get a task by id (filter returns a list, so take the first match)
+task = list(filter(lambda t: t['id'] == '007bbfb7', train_data))[0]
+```
+
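+To score predictions against `label`, a hedged sketch of exact-match grading (two attempts per test input, mirroring the ARC two-guess rule; all names here are illustrative):
+
+```python
+# Illustrative scoring: a test output counts as solved if any of up to two
+# attempted grids equals the ground-truth grid exactly.
+def score_task(attempts: list, labels: list) -> float:
+    """attempts[i] is a list of candidate grids for test input i."""
+    solved = sum(any(a == gold for a in cand[:2])
+                 for cand, gold in zip(attempts, labels))
+    return solved / len(labels)
+```
+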
+## Transparency
+
+I've left the script I used to transform the original dataset here as `arc_to_my_hf.py`.
+
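+Since the script carries inline (PEP 723) metadata, it should run directly with `uv run arc_to_my_hf.py <path-to-arc-data>`, where the argument is whatever directory holds the `training/` and `evaluation/` subdirectories; per the code, output lands in a sibling `hf_<dirname>/data/` directory as the two parquet shards.
+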
+## Citation
+
+If you use this dataset, please cite the original ARC-AGI work:
+
+```bibtex
+@misc{chollet2019measure,
+  title={On the Measure of Intelligence},
+  author={François Chollet},
+  year={2019},
+  eprint={1911.01547},
+  archivePrefix={arXiv},
+  primaryClass={cs.AI}
+}
+```
+
+## License
+
+This dataset maintains the Apache 2.0 license from the original ARC-AGI corpus.
arc_to_my_hf.py ADDED
@@ -0,0 +1,138 @@
+#!/usr/bin/env python3
+# /// script
+# requires-python = ">=3.12,<3.14"
+# dependencies = [
+#     "datasets",
+#     "pyarrow",
+# ]
+# ///
+
+import json
+from pathlib import Path
+from typing import Dict
+import argparse
+
+from datasets import Dataset, DatasetDict, load_dataset
+
+
+class ARCToHFConverter:
+    """Converts ARC-AGI task JSON files to HuggingFace Parquet format."""
+
+    def __init__(self, input_dir: Path):
+        self.input_dir = Path(input_dir)
+        self.output_dir = self.input_dir.parent / f"hf_{self.input_dir.name}"
+
+    def load_task(self, json_path: Path) -> Dict:
+        """Load a single task JSON file."""
+        with open(json_path, 'r') as f:
+            return json.load(f)
+
+    def convert_task(self, task_data: Dict, task_id: str) -> Dict:
+        """Convert a single task to the HF schema.
+
+        Returns:
+            {
+                "id": str,
+                "list": [
+                    [grid, grid, ...],  # example inputs
+                    [grid, grid, ...],  # example outputs
+                    [grid, ...]         # test inputs
+                ],
+                "label": [grid, ...]    # test outputs
+            }
+        """
+        return {
+            "id": task_id,
+            "list": [
+                [ex["input"] for ex in task_data["train"]],   # index 0: example inputs
+                [ex["output"] for ex in task_data["train"]],  # index 1: example outputs
+                [ex["input"] for ex in task_data["test"]]     # index 2: test inputs
+            ],
+            "label": [ex["output"] for ex in task_data["test"]]  # test outputs
+        }
+
+    def convert_directory(self, subdir_name: str) -> Dataset:
+        """Convert all JSON files in a subdirectory to an HF Dataset."""
+        subdir = self.input_dir / subdir_name
+        json_files = sorted(subdir.glob("*.json"))
+
+        print(f"Converting {subdir_name}/ directory ({len(json_files)} tasks)...")
+        tasks = []
+        for json_path in json_files:
+            task_id = json_path.stem  # filename without .json
+            task_data = self.load_task(json_path)
+            converted = self.convert_task(task_data, task_id)
+            tasks.append(converted)
+
+        return Dataset.from_list(tasks)
+
+    def convert_all(self) -> DatasetDict:
+        """Convert both training and evaluation subdirectories."""
+        train_dataset = self.convert_directory("training")
+        test_dataset = self.convert_directory("evaluation")
+
+        return DatasetDict({
+            "train": train_dataset,
+            "test": test_dataset
+        })
+
+    def save(self, dataset_dict: DatasetDict):
+        """Save dataset to disk in Parquet format for HuggingFace Hub."""
+        # Create output directory structure
+        self.output_dir.mkdir(parents=True, exist_ok=True)
+        data_dir = self.output_dir / "data"
+        data_dir.mkdir(exist_ok=True)
+
+        # Export to parquet files (HuggingFace Hub standard format)
+        print(f"Saving train split to {data_dir / 'train-00000-of-00001.parquet'}...")
+        dataset_dict['train'].to_parquet(data_dir / 'train-00000-of-00001.parquet')
+
+        print(f"Saving test split to {data_dir / 'test-00000-of-00001.parquet'}...")
+        dataset_dict['test'].to_parquet(data_dir / 'test-00000-of-00001.parquet')
+
+        print(f"\n✓ Dataset saved to {self.output_dir}")
+        print(f"  - Train: {len(dataset_dict['train'])} examples")
+        print(f"  - Test: {len(dataset_dict['test'])} examples")
+
+
+def look_at_data():
+    # Load the dataset back from the exported parquet files (sanity check)
+    print("Loading dataset from parquet files...")
+    dataset = load_dataset('parquet', data_files={
+        'train': 'data/train-00000-of-00001.parquet',
+        'test': 'data/test-00000-of-00001.parquet'
+    })
+
+    print("\nDataset loaded successfully!")
+    print(f"Splits: {list(dataset.keys())}")
+    print(f"Train size: {len(dataset['train'])}")
+    print(f"Test size: {len(dataset['test'])}")
+    print(f"\nFeatures: {dataset['train'].features}")
+    print(f"\nFirst example ID: {dataset['train'][0]['id']}")
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Convert ARC-AGI JSON tasks to a HuggingFace dataset"
+    )
+    parser.add_argument(
+        "input_dir",
+        type=str,
+        help="Parent directory containing training/ and evaluation/ subdirectories"
+    )
+
+    args = parser.parse_args()
+
+    print(f"Input directory: {args.input_dir}")
+    converter = ARCToHFConverter(args.input_dir)
+    print(f"Output directory: {converter.output_dir}\n")
+
+    dataset_dict = converter.convert_all()
+    converter.save(dataset_dict)
+
+
+if __name__ == "__main__":
+    main()
train/data-00000-of-00001.arrow → data/test-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3f17dbaa408fe9b62da0b22e1bf5209641af2dbc109da398205b0522b66f2e89
-size 3706560
+oid sha256:d19223191e69d1a79cae0aa86cb93a55a5f3a1bd4b4454267ad77f040b83d4a3
+size 204509
test/data-00000-of-00001.arrow → data/train-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0ca84d7ed723e549ba97e53b616ed84e937a621e22c9e2c0c4fe6daaf7e3a697
-size 6145248
+oid sha256:dadee44b84e36ebc7d1cb6841a0d9fb1e542373c1e58e7cff415f841e38b0151
+size 129834
dataset_dict.json DELETED
@@ -1 +0,0 @@
-{"splits": ["train", "test"]}
test/dataset_info.json DELETED
@@ -1,41 +0,0 @@
-{
-  "citation": "",
-  "description": "",
-  "features": {
-    "id": {
-      "dtype": "string",
-      "_type": "Value"
-    },
-    "list": {
-      "feature": {
-        "feature": {
-          "feature": {
-            "feature": {
-              "dtype": "int64",
-              "_type": "Value"
-            },
-            "_type": "List"
-          },
-          "_type": "List"
-        },
-        "_type": "List"
-      },
-      "_type": "List"
-    },
-    "label": {
-      "feature": {
-        "feature": {
-          "feature": {
-            "dtype": "int64",
-            "_type": "Value"
-          },
-          "_type": "List"
-        },
-        "_type": "List"
-      },
-      "_type": "List"
-    }
-  },
-  "homepage": "",
-  "license": ""
-}
test/state.json DELETED
@@ -1,13 +0,0 @@
-{
-  "_data_files": [
-    {
-      "filename": "data-00000-of-00001.arrow"
-    }
-  ],
-  "_fingerprint": "8a2c8ef92930ac7c",
-  "_format_columns": null,
-  "_format_kwargs": {},
-  "_format_type": null,
-  "_output_all_columns": false,
-  "_split": null
-}
train/dataset_info.json DELETED
@@ -1,41 +0,0 @@
-{
-  "citation": "",
-  "description": "",
-  "features": {
-    "id": {
-      "dtype": "string",
-      "_type": "Value"
-    },
-    "list": {
-      "feature": {
-        "feature": {
-          "feature": {
-            "feature": {
-              "dtype": "int64",
-              "_type": "Value"
-            },
-            "_type": "List"
-          },
-          "_type": "List"
-        },
-        "_type": "List"
-      },
-      "_type": "List"
-    },
-    "label": {
-      "feature": {
-        "feature": {
-          "feature": {
-            "dtype": "int64",
-            "_type": "Value"
-          },
-          "_type": "List"
-        },
-        "_type": "List"
-      },
-      "_type": "List"
-    }
-  },
-  "homepage": "",
-  "license": ""
-}
train/state.json DELETED
@@ -1,13 +0,0 @@
-{
-  "_data_files": [
-    {
-      "filename": "data-00000-of-00001.arrow"
-    }
-  ],
-  "_fingerprint": "9bdcf8a374134b57",
-  "_format_columns": null,
-  "_format_kwargs": {},
-  "_format_type": null,
-  "_output_all_columns": false,
-  "_split": null
-}
uv.lock ADDED
The diff for this file is too large to render.