---
language:
- en
tags:
- VLMs
- Reasoning
- Language
- Vision
- Image
- Understanding
pretty_name: FLIP Reasoning Challenge
---
# FLIP Reasoning Challenge Dataset
This repository contains the FLIP dataset, a benchmark for evaluating AI reasoning capabilities based on human verification tasks from the Idena blockchain. The dataset focuses on testing sequential reasoning, visual storytelling, and common sense understanding in multimodal AI systems.
Paper: https://arxiv.org/abs/2504.12256
## Dataset Description
FLIP challenges present users with two orderings (stacks) of 4 images, requiring them to identify which ordering forms a coherent story. These tasks are designed to test complex reasoning abilities rather than simple recognition.
Key features of the FLIP dataset:
- Created from human-generated and human-verified tasks from the Idena blockchain
- Tests sequential reasoning and visual storytelling abilities
- Provides clear ground truth, making it easy to diagnose model failures
- High human performance baseline (95.3% accuracy)
## Dataset Structure and Overview
```
flip_dataset/
├── train/
│   ├── images/
│   │   ├── image1.png
│   │   ├── image2.png
│   │   └── ...
│   └── tasks/
│       ├── task1.json
│       ├── task2.json
│       └── ...
├── validation/
│   ├── images/
│   └── tasks/
└── test/
    ├── images/
    └── tasks/
```
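The layout above can be traversed with standard library tools; a minimal sketch (the concrete file names and the temporary directory are illustrative, mirroring the tree shown):

```python
import json
import tempfile
from pathlib import Path

# Recreate the split layout in a temporary directory for illustration.
root = Path(tempfile.mkdtemp()) / "flip_dataset"
for split in ("train", "validation", "test"):
    (root / split / "images").mkdir(parents=True)
    (root / split / "tasks").mkdir(parents=True)

# Write one dummy task so the walk below has something to find.
task = {"task_id": "demo", "images": {"0": "img0"}}
(root / "train" / "tasks" / "task1.json").write_text(json.dumps(task))

# Each split pairs a tasks/ folder of JSON files with an images/ folder.
for task_file in sorted((root / "train" / "tasks").glob("*.json")):
    data = json.loads(task_file.read_text())
    print(task_file.name, "->", data["task_id"])  # task1.json -> demo
```

The same loop applies unchanged to the `validation/` and `test/` splits.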
---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: task_data
    dtype: string
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 1381093867.37
    num_examples: 34210
  - name: test
    num_bytes: 289834313.958
    num_examples: 7354
  - name: validation
    num_bytes: 297405372.216
    num_examples: 7317
  download_size: 1834376645
  dataset_size: 1968333553.544
---
### Task Format
Each task is stored as a JSON file with the following structure:
```json
{
  "task_id": "_flip_bafkreianuvtem5nababzw5z4iscr5ocvgaviilmemwn3o73jkak7bqrjde",
  "images": {
    "0": "46efd91c-be17-42b8-8f5e-2a84b96d21af",
    "1": "9d1fac84-0c9f-4ab7-9d3b-a3b4c61dc390",
    "2": "ceecdc8b-840c-46d7-b694-74f05839447f",
    "3": "cbdf27d1-aa84-405b-86db-cb336d0bc4a7"
  },
  "left_stack": ["2", "3", "1", "0"],
  "right_stack": ["3", "0", "2", "1"],
  "agreed_answer": ["Right", "Strong"],
  "votes": {"Left": "1", "Right": "4", "Reported": "0"},
  "details": {
    "Author:": "0x63f7aa6C19A0f7D4BBB4177000Af671ED212e490",
    "Epoch:": "#0027",
    "Size:": "86140 bytes",
    "Created:": "12/24/2019 13:23:51",
    "Block:": "669858",
    "Tx:": "0xdbca60c3d10770f4bc2f73fd9119d9509117a8db08196f128382bffbf3d8c79f"
  }
}
```
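Given a task in this format, the ground-truth ordering can be resolved by following the `agreed_answer` field to the matching stack. A minimal sketch (the sample uses shortened image IDs for brevity):

```python
import json

# Abbreviated task mirroring the format above.
raw = """
{
  "images": {"0": "a", "1": "b", "2": "c", "3": "d"},
  "left_stack": ["2", "3", "1", "0"],
  "right_stack": ["3", "0", "2", "1"],
  "agreed_answer": ["Right", "Strong"],
  "votes": {"Left": "1", "Right": "4", "Reported": "0"}
}
"""
task = json.loads(raw)

# The agreed side ("Left" or "Right") names the coherent stack.
side = task["agreed_answer"][0].lower()        # "right"
stack = task[f"{side}_stack"]                  # ["3", "0", "2", "1"]

# Map the stack positions to actual image IDs.
ordered_images = [task["images"][i] for i in stack]
print(ordered_images)  # ['d', 'a', 'c', 'b']
```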
When processing tasks:
- The task ID is derived from the `name` field by replacing "/" with "_"
- Image IDs are extracted by removing the prefix "blob:https://scan.idena.io/"
- The dataset stores the image orderings as "left stack" and "right stack"
- Images are shuffled to prevent any accidental ordering cues
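The ID derivations above amount to two string transformations; a sketch using hypothetical raw values as they would appear on the Idena explorer:

```python
# Hypothetical raw values; the transformations follow the rules above.
raw_name = "/flip/bafkreianuvtem5nababzw5z4iscr5ocvgaviilmemwn3o73jkak7bqrjde"
raw_image_url = "blob:https://scan.idena.io/46efd91c-be17-42b8-8f5e-2a84b96d21af"

# Task ID: replace "/" with "_" in the name field.
task_id = raw_name.replace("/", "_")

# Image ID: strip the blob URL prefix.
image_id = raw_image_url.removeprefix("blob:https://scan.idena.io/")

print(task_id)   # _flip_bafkreianuvtem5nababzw5z4iscr5ocvgaviilmemwn3o73jkak7bqrjde
print(image_id)  # 46efd91c-be17-42b8-8f5e-2a84b96d21af
```

Note that `str.removeprefix` requires Python 3.9+; on older versions, slice off `len(prefix)` characters after checking `startswith`.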
## Dataset Statistics
- Total flips: 11,674
- Train set: 3,502 flips (30%)
- Validation set: 3,502 flips (30%)
- Test set: 4,670 flips (40%)
- Small subsets are also provided for computationally intensive experiments
Solutions are nearly evenly distributed between Left (49.4%) and Right (50.6%), with most challenges having strong consensus (95.7%).
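The split sizes above are internally consistent; a quick arithmetic check:

```python
# Split sizes as stated in the statistics above.
total = 11_674
splits = {"train": 3_502, "validation": 3_502, "test": 4_670}

# The three splits account for every flip.
assert sum(splits.values()) == total

# Percentages match the quoted 30% / 30% / 40% breakdown.
for name, n in splits.items():
    print(f"{name}: {n} flips ({n / total:.1%})")
```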
## Research Findings
The FLIP dataset has been used to evaluate various state-of-the-art AI models:
- Best open-source models achieve 75.5% accuracy in zero-shot settings
- Best closed-source models reach 77.9% accuracy
- Human performance reaches 95.3% accuracy
- Captioning models aid reasoning models by providing text descriptions
- Ensemble methods can boost performance to 85.2%
These findings highlight the gap between current AI capabilities and human-level reasoning on complex multimodal tasks.
## Citation
If you use this dataset in your research, please cite:
```
@inproceedings{plesner2025flip,
  title={FLIP Reasoning Challenge},
  author={Plesner, Andreas and Kuzhagaliyev, Turlan and Wattenhofer, Roger},
  booktitle={First Workshop on Open Science for Foundation Models at ICLR 2025},
  year={2025}
}
```
## Acknowledgements
This dataset is derived from the Idena blockchain. We thank the Idena community for creating and validating these challenges.
## Contact
For questions or feedback, please contact:
- Andreas Plesner (aplesner@ethz.ch)