---
language:
- en
pretty_name: COGITAO
tags:
- COGITAO
- compositionality
- generalization
- visual reasoning
license: cc-by-4.0
task_categories:
- text2text-generation
- image-to-image
annotations_creators:
- machine-generated
source_datasets:
- original
size_categories:
- 10K<n<100K
author: Yassine Taoudi Benchekroun
---

# Before ARC Dataset
This dataset contains `.parquet` files organized in nested subfolders under `COGITAO/`, split into two main categories: generalization and compositionality. Each category contains data for several experiment settings and experiments, with `.parquet` files for the training, validation, and test splits. The nested structure is intentional, for clarity.
## Dataset Structure
- `COGITAO/`: Root folder
  - `generalization/`: Data for generalization experiments
    - `exp_setting_[1-5]/`: Five settings (e.g., different conditions or parameters)
      - `experiment_[1-5]/`: Five experiments per setting
        - `train.parquet`: Training data
        - `train_val.parquet`: Training validation data
        - `test_val.parquet`: Test validation data
        - `test.parquet`: Test data
  - `compositionality/`: Data for compositionality experiments
    - `exp_setting_[1-5]/`: Five settings (e.g., different combinations of transformations)
      - `experiment_[N]/`: N experiments per setting (N varies by experiment setting)
        - `train.parquet`: Training data
        - `train_val.parquet`: Training validation data
        - `test_val.parquet`: Test validation data
        - `test.parquet`: Test data
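Given this layout, the path to any split file can be built from the category, setting, experiment, and split names. A minimal sketch (the folder naming follows the pattern used in the usage example at the bottom of this card; setting/experiment indices here are arbitrary):

```python
# Enumerate the four split files for one (hypothetical) experiment.
splits = ["train", "train_val", "test_val", "test"]
paths = [
    f"generalization/exp_setting_1/experiment_1/{split}.parquet"
    for split in splits
]
print(paths[0])  # generalization/exp_setting_1/experiment_1/train.parquet
```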
We provide instructions on how to read the data files in the `open_data.ipynb` notebook, as well as in the original repository that was used to create this dataset.
## Content
Each record in a `.parquet` file is a dict containing the following keys: `'input'`, `'output'`, `'transformation_suite'`, and `'task_key'`. The `input` is the input grid, while the `output` is the output grid after applying the `transformation_suite`. The `task_key` is simply an identifier for the task instance. NOTE: in the compositionality study, we provide additional `demo_input` and `demo_output` fields holding demonstration examples of the task, in case the user would like to pass a demonstration example (in-context-learning style) instead of the `transformation_suite` to specify which transformation the model should apply.
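As a sketch of what handling one record might look like (the record below is a hypothetical mock mirroring the keys above; the grid values, sizes, and `transformation_suite` string are illustrative, not taken from the dataset):

```python
# Hypothetical record with the keys described in this section.
record = {
    "input": [[0, 1, 0], [1, 1, 1], [0, 1, 0]],
    "output": [[0, 1, 0], [1, 2, 1], [0, 1, 0]],
    "transformation_suite": "recolor_center",  # assumed identifier format
    "task_key": "demo_task_0",
}

def grid_shape(grid):
    # (rows, columns) of a grid stored as a list of lists.
    return (len(grid), len(grid[0]))

print(grid_shape(record["input"]), grid_shape(record["output"]))
# (3, 3) (3, 3)
```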
## Usage

Load the dataset using the `datasets` library:
```python
from datasets import load_dataset

# Load the test split of one experiment (generalization, setting 3, experiment 2).
gen_set3_exp2_test = load_dataset(
    "yassinetb/COGITAO",
    data_files={"data": "generalization/exp_setting_3/experiment_2/test.parquet"},
)

# Print the keys of the first sample. Should output:
# dict_keys(['input', 'output', 'transformation_suite', 'task_key'])
print(gen_set3_exp2_test["data"][0].keys())
```
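For the compositionality splits, the extra `demo_input`/`demo_output` fields can be used to build an in-context prompt rather than passing the `transformation_suite`. A minimal sketch, using a hypothetical record (the grids and prompt wording are illustrative, not a prescribed format):

```python
# Hypothetical compositionality record; real ones come from the parquet files.
record = {
    "demo_input": [[1, 0], [0, 1]],
    "demo_output": [[0, 1], [1, 0]],
    "input": [[1, 1], [0, 0]],
}

def format_grid(grid):
    # Render a grid as space-separated cells, one row per line.
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

prompt = (
    "Example input:\n" + format_grid(record["demo_input"]) + "\n"
    "Example output:\n" + format_grid(record["demo_output"]) + "\n"
    "Now apply the same transformation.\n"
    "Input:\n" + format_grid(record["input"]) + "\n"
    "Output:\n"
)
print(prompt)
```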