aplesner-eth committed (verified)
Commit 5720a46 · 1 parent: 8bcccbf

Update README.md

Files changed (1): README.md (+115 −9)
README.md CHANGED
```diff
@@ -1,3 +1,40 @@
 ---
 dataset_info:
   features:
@@ -21,13 +58,82 @@ dataset_info:
   num_examples: 7317
 download_size: 1834376645
 dataset_size: 1968333553.544
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-  - split: test
-    path: data/test-*
-  - split: validation
-    path: data/validation-*
 ---
```

Updated README.md:
# FLIP Reasoning Challenge Dataset

This repository contains the FLIP dataset, a benchmark for evaluating AI reasoning capabilities built from human verification tasks on the Idena blockchain. The dataset focuses on testing sequential reasoning, visual storytelling, and common-sense understanding in multimodal AI systems.

Paper: [FLIP Reasoning Challenge](https://arxiv.org/abs/2504.12256)
## Dataset Description

FLIP challenges present users with two orderings (stacks) of four images and ask which ordering forms a coherent story. These tasks are designed to test complex reasoning abilities rather than simple recognition.

Key features of the FLIP dataset:
- Created from human-generated and human-verified tasks on the Idena blockchain
- Tests sequential reasoning and visual storytelling abilities
- Provides clear ground truth, making it easy to diagnose model failures
- High human performance baseline (95.3% accuracy)

## Dataset Structure and Overview
```
flip_dataset/
├── train/
│   ├── images/
│   │   ├── image1.png
│   │   ├── image2.png
│   │   └── ...
│   └── tasks/
│       ├── task1.json
│       ├── task2.json
│       └── ...
├── validation/
│   ├── images/
│   └── tasks/
└── test/
    ├── images/
    └── tasks/
```
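The layout above can be traversed per split. Below is a minimal sketch, assuming the directory names shown and that each image file is stored as `<image_id>.png` under the split's `images/` directory (the file-naming convention is an assumption, not confirmed by the repository):

```python
import json
from pathlib import Path

def iter_tasks(root: str, split: str):
    """Yield (task_dict, image_paths) pairs for one split.

    Assumes the flip_dataset/ layout shown above; assumes image files
    are named <image_id>.png under the split's images/ directory.
    """
    split_dir = Path(root) / split
    for task_file in sorted((split_dir / "tasks").glob("*.json")):
        task = json.loads(task_file.read_text())
        # Resolve each referenced image id to a file path
        image_paths = [split_dir / "images" / f"{img_id}.png"
                       for img_id in task["images"].values()]
        yield task, image_paths
```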
---
dataset_info:
  features:
  num_examples: 7317
  download_size: 1834376645
  dataset_size: 1968333553.544
---
### Task Format

Each task is stored as a JSON file with the following structure:

```json
{
  "task_id": "_flip_bafkreianuvtem5nababzw5z4iscr5ocvgaviilmemwn3o73jkak7bqrjde",
  "images": {
    "0": "46efd91c-be17-42b8-8f5e-2a84b96d21af",
    "1": "9d1fac84-0c9f-4ab7-9d3b-a3b4c61dc390",
    "2": "ceecdc8b-840c-46d7-b694-74f05839447f",
    "3": "cbdf27d1-aa84-405b-86db-cb336d0bc4a7"
  },
  "left_stack": ["2", "3", "1", "0"],
  "right_stack": ["3", "0", "2", "1"],
  "agreed_answer": ["Right", "Strong"],
  "votes": {"Left": "1", "Right": "4", "Reported": "0"},
  "details": {
    "Author:": "0x63f7aa6C19A0f7D4BBB4177000Af671ED212e490",
    "Epoch:": "#0027",
    "Size:": "86140 bytes",
    "Created:": "12/24/2019 13:23:51",
    "Block:": "669858",
    "Tx:": "0xdbca60c3d10770f4bc2f73fd9119d9509117a8db08196f128382bffbf3d8c79f"
  }
}
```

When processing tasks:
- The task ID is derived from the `name` field by replacing `/` with `_`
- Image IDs are extracted by removing the prefix `blob:https://scan.idena.io/`
- The dataset stores the image orderings as "left stack" and "right stack"
- Images are shuffled to prevent any accidental ordering cues
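The preprocessing described in the bullets above can be sketched as follows. The raw record shape (a `name` field plus blob-URL image references) is an assumption reconstructed from the description, not a documented API:

```python
BLOB_PREFIX = "blob:https://scan.idena.io/"

def parse_task(raw: dict) -> dict:
    """Normalize a raw scraped flip record into the stored task format.

    `raw` is a hypothetical scraped record; its field names are
    assumed from the processing rules described above.
    """
    return {
        # Task ID: the `name` field with "/" replaced by "_"
        "task_id": raw["name"].replace("/", "_"),
        # Image IDs: strip the scan.idena.io blob prefix
        "images": {pos: url.removeprefix(BLOB_PREFIX)
                   for pos, url in raw["images"].items()},
        "left_stack": raw["left_stack"],
        "right_stack": raw["right_stack"],
        "agreed_answer": raw["agreed_answer"],
    }

raw = {
    "name": "flip/bafkreianuvtem",
    "images": {"0": BLOB_PREFIX + "46efd91c-be17-42b8-8f5e-2a84b96d21af"},
    "left_stack": ["0"],
    "right_stack": ["0"],
    "agreed_answer": ["Right", "Strong"],
}
task = parse_task(raw)
print(task["task_id"])  # flip_bafkreianuvtem
```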
## Dataset Statistics

- Total flips: 11,674
- Train set: 3,502 flips (30%)
- Validation set: 3,502 flips (30%)
- Test set: 4,670 flips (40%)
- Small subsets are also available for computationally intensive experiments

Solutions are nearly evenly distributed between Left (49.4%) and Right (50.6%), and most challenges have strong consensus (95.7%).
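As a quick sanity check, the split sizes above are consistent with the stated percentages:

```python
# Verify that the published split sizes partition the full set
# and match the stated 30/30/40 percentages.
total = 11_674
splits = {"train": 3_502, "validation": 3_502, "test": 4_670}

assert sum(splits.values()) == total
shares = {name: round(100 * n / total, 1) for name, n in splits.items()}
print(shares)  # {'train': 30.0, 'validation': 30.0, 'test': 40.0}
```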
## Research Findings

The FLIP dataset has been used to evaluate various state-of-the-art AI models:

- The best open-source models achieve 75.5% accuracy in zero-shot settings
- The best closed-source models reach 77.9% accuracy
- Human performance is 95.3%
- Captioning models aid reasoning models by providing text descriptions of the images
- Ensemble methods can boost performance to 85.2%

These findings highlight the gap between current AI capabilities and human-level reasoning on complex multimodal tasks.
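One simple way to ensemble Left/Right predictions is majority voting over several models' answers; this is an illustrative sketch, not necessarily the exact ensembling method used in the paper:

```python
from collections import Counter

def majority_vote(predictions: list[str]) -> str:
    """Majority vote over per-model "Left"/"Right" predictions.

    Ties resolve to the first-inserted answer, since Counter
    preserves insertion order and most_common's sort is stable.
    """
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical model outputs for one flip:
print(majority_vote(["Right", "Left", "Right"]))  # Right
```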
## Citation

If you use this dataset in your research, please cite:

```bibtex
@inproceedings{plesner2025flip,
  title={FLIP Reasoning Challenge},
  author={Plesner, Andreas and Kuzhagaliyev, Turlan and Wattenhofer, Roger},
  booktitle={First Workshop on Open Science for Foundation Models at ICLR 2025},
  year={2025}
}
```
## Acknowledgements

This dataset is derived from the Idena blockchain. We thank the Idena community for creating and validating these challenges.

## Contact

For questions or feedback, please contact:
- Andreas Plesner (aplesner@ethz.ch)