SamuelOrt25 committed on
Commit 1824c8e · verified · 1 Parent(s): ddd92a7

Update README.md

Files changed (1):
  1. README.md +132 -84
README.md CHANGED
@@ -44,32 +44,22 @@ Building upon previous DocVQA benchmarks, this evaluation dataset introduces cha
44
 
45
  By expanding coverage to new document domains and introducing richer question types, this benchmark seeks to push the boundaries of multimodal reasoning and promote the development of more general, robust document understanding models.
46
 
47
- # Datasets
48
 
49
- This dataset card corresponds to the **DocVQA 2026** benchmark used in the **ICDAR 2026 competition on multimodal reasoning over documents in multiple domains**.
50
 
51
- The benchmark includes:
52
-
53
- - **Validation set** contains public answers and is intended for local development and experimentation.
54
- - **Test set** — contains private answers and is used for the official competition ranking.
55
-
56
- The official competition is hosted on the **Robust Reading Competition (RRC)** platform: https://rrc.cvc.uab.es/?ch=34
57
-
58
- Participants interested in the leaderboard and official submissions should register and submit their predictions through the RRC server.
59
-
60
- ### Validation Set
61
-
62
- The validation split includes public ground-truth answers and can be evaluated:
63
-
64
- - **Locally**, using the official evaluation code: https://github.com/VLR-CVC/DocVQA2026
65
-
66
- - **Online**, by submitting predictions to the RRC platform: https://rrc.cvc.uab.es/?ch=34&com=mymethods&task=1
67
 
68
- ### Test Set
69
 
70
- The test split contains **private answers** and therefore can only be evaluated through the official RRC platform: https://rrc.cvc.uab.es/?ch=34&com=mymethods&task=1
 
71
 
72
- # Participation Requirements
73
 
74
  To participate in the competition:
75
 
@@ -78,11 +68,11 @@ To participate in the competition:
78
 
79
  These reports will be included in the competition publication in the proceedings of the **International Conference on Document Analysis and Recognition (ICDAR)**, held in **Vienna, Austria**.
80
 
81
- # Competition Categories
82
 
83
  There are **three participation categories**, depending on the total number of parameters of the submitted method.
84
 
85
- This count must include, all parameters whether active or not, and all parameters across all models used in agentic systems
86
 
87
  Categories:
88
 
@@ -90,9 +80,7 @@ Categories:
90
  - **Over 8B parameters and up to 35B**
91
  - **Over 35B parameters**
92
 
93
- ---
94
-
95
- # Load & Inspect the Data
96
 
97
  ```python
98
  from datasets import load_dataset
@@ -111,7 +99,7 @@ print(f"Document ID: {doc_id} ({category})")
111
  # 'document' is a list of PIL Images (one for each page)
112
  images = sample["document"]
113
  print(f"Number of pages: {len(images)}")
114
- images[0].show()
115
 
116
  # 4. Access Questions and Answers
117
  questions = sample["questions"]
@@ -126,12 +114,11 @@ for q, q_id, a in zip(questions['question'], questions['question_id'], answers['
126
  print("-" * 50)
127
  ```
128
 
129
-
130
- # Structure of a Sample
131
 
132
  <details>
133
  <summary><b>Click to expand the JSON structure</b></summary>
134
-
135
  ```json
136
  {
137
  "doc_id": "maps_2",
@@ -173,13 +160,10 @@ for q, q_id, a in zip(questions['question'], questions['question_id'], answers['
173
  ]
174
  }
175
  }
176
-
177
  ```
178
  </details>
179
 
180
- ---
181
-
182
- # Results
183
 
184
  <p align="center">
185
  <img src="./assets/results_chart.jpg" alt="DocVQA 2026 Results Chart" width="80%">
@@ -188,71 +172,135 @@ for q, q_id, a in zip(questions['question'], questions['question_id'], answers['
188
  </p>
189
 
190
  <div align="center">
191
 
192
- | Category | Gemini 3 Pro Preview | GPT-5.2 | Gemini 3 Flash Preview | GPT-5 Mini |
193
- |---|---|---|---|---|
194
- | **Overall Accuracy** | **0.375** | 0.350 | 0.3375 | 0.225 |
195
- | Business Report | 0.400 | **0.600** | 0.200 | 0.300 |
196
- | Comics | 0.300 | 0.200 | **0.400** | 0.100 |
197
- | Engineering Drawing | 0.300 | 0.300 | **0.500** | 0.200 |
198
- | Infographics | **0.700** | 0.600 | 0.500 | 0.500 |
199
- | Maps | 0.000 | **0.200** | 0.000 | 0.100 |
200
- | Science Paper | 0.300 | 0.400 | **0.500** | 0.100 |
201
- | Science Poster | **0.300** | 0.000 | 0.200 | 0.000 |
202
- | Slide | **0.700** | 0.500 | 0.400 | 0.500 |
203
 
204
- </div>
 
205
 
206
- > **Evaluation Parameters**
207
- >
208
- > - GPT models: "High thinking" enabled, temperature = 1.0
209
- > - Gemini models: "High thinking" enabled, temperature = 0.0
210
 
211
- > **API Constraints**
212
- >
213
- > Both model families were evaluated via their respective APIs. If a sample fails because the input files are too large, the result counts as a failure.
214
- > For example, several comics exceed the 50MB file input limit present in some API models.
215
 
216
- ---
217
 
218
- # Answer Formatting Rules
219
- - **Source Adherence**
220
- Only provide answers found directly within the document.
221
- If the question is unanswerable given the provided image, the response must be exactly:
222
- `Unknown`
223
 
224
- - **Multiple Answers**
225
- List multiple answers in their order of appearance, separated by a comma and a single space.
226
- Do not use the word "and".
227
- Example: `Answer A, Answer B`
228
 
229
- - **Numbers & Units**
230
- Convert units to standardized abbreviations (`kg`, `m`, etc.).
231
- Always place a single space between number and unit.
232
- Example: `50 kg`, `10 USD`
233
 
234
- - **Percentages**
235
- Attach `%` directly to the number with no space.
236
- Example: `50%`
237
 
238
- - **Dates**
239
- Convert dates to `YYYY-MM-DD`.
240
- Example: `Jan 1st 24` → `2024-01-01`
241
 
242
- - **Decimals**
243
- Use a period `.` as decimal separator.
244
- Example: `3.14`
245
 
246
- - **Thousands Separator**
247
- Do not use commas.
248
- Example: `1000`
249
 
250
- - **No Filler Text**
251
- Output only the requested data.
252
 
253
- ---
254
 
255
- # Contact
256
 
257
  For questions, technical support, or inquiries regarding the DocVQA 2026 dataset and competition framework: **docvqa@cvc.uab.cat**
258
 
 
44
 
45
  By expanding coverage to new document domains and introducing richer question types, this benchmark seeks to push the boundaries of multimodal reasoning and promote the development of more general, robust document understanding models.
46
 
47
+ ## 🏆 Competition Hosting & Datasets
48
 
49
+ The official DocVQA 2026 competition is hosted on the **Robust Reading Competition (RRC)** platform, which provides the standardized framework for our leaderboards, submissions, and result tracking.
50
 
51
+ <p align="center">
52
+ <a href="https://rrc.cvc.uab.es/?ch=34" style="background-color: #007bff; color: white; padding: 12px 24px; text-decoration: none; border-radius: 6px; font-weight: bold; font-size: 18px; display: inline-block;">
53
+ Join the Challenge on the RRC Platform
54
+ </a>
55
+ </p>
56
 
57
+ The benchmark includes:
58
 
59
+ - **Validation set** — contains public answers and is intended for local development and experimentation. It can be evaluated locally using the official evaluation code or online via the RRC platform.
60
+ - **Test set** — contains **private answers** and is used for the official competition ranking. It can only be evaluated through the official RRC platform.
61
 
62
+ ## 📋 Participation Requirements
63
 
64
  To participate in the competition:
65
 
 
68
 
69
  These reports will be included in the competition publication in the proceedings of the **International Conference on Document Analysis and Recognition (ICDAR)**, held in **Vienna, Austria**.
70
 
71
+ ## 📊 Competition Categories
72
 
73
  There are **three participation categories**, depending on the total number of parameters of the submitted method.
74
 
75
+ This count must include all parameters, whether active or not, and all parameters across all models used in agentic systems.
76
 
77
  Categories:
78
 
 
80
  - **Over 8B parameters and up to 35B**
81
  - **Over 35B parameters**
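The bracket rule above can be expressed as a small helper. This is an illustrative sketch, not official tooling, and the label for the smallest bracket is assumed from the "over 8B" wording of the other two:

```python
def competition_category(total_params: int) -> str:
    """Illustrative sketch of the bracket rule. `total_params` must count
    every parameter, active or not, across all models in the system."""
    if total_params <= 8_000_000_000:
        return "up to 8B"  # assumed label for the smallest bracket
    if total_params <= 35_000_000_000:
        return "over 8B and up to 35B"
    return "over 35B"
```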
82
 
83
+ ## Load & Inspect the Data
84
 
85
  ```python
86
  from datasets import load_dataset
 
99
  # 'document' is a list of PIL Images (one for each page)
100
  images = sample["document"]
101
  print(f"Number of pages: {len(images)}")
102
+ images[0].show()
103
 
104
  # 4. Access Questions and Answers
105
  questions = sample["questions"]
 
114
  print("-" * 50)
115
  ```
116
 
117
+ ## Structure of a Sample
 
118
 
119
  <details>
120
  <summary><b>Click to expand the JSON structure</b></summary>
121
+
122
  ```json
123
  {
124
  "doc_id": "maps_2",
 
160
  ]
161
  }
162
  }
 
163
  ```
164
  </details>
165
 
166
+ ## Results
167
 
168
  <p align="center">
169
  <img src="./assets/results_chart.jpg" alt="DocVQA 2026 Results Chart" width="80%">
 
172
  </p>
173
 
174
  <div align="center">
175
+ <table>
176
+ <thead>
177
+ <tr>
178
+ <th align="left">Category</th>
179
+ <th align="center">Gemini 3 Pro Preview</th>
180
+ <th align="center">GPT-5.2</th>
181
+ <th align="center">Gemini 3 Flash Preview</th>
182
+ <th align="center">GPT-5 Mini</th>
183
+ </tr>
184
+ </thead>
185
+ <tbody>
186
+ <tr>
187
+ <td align="left"><b>Overall Accuracy</b></td>
188
+ <td align="center"><b>0.375</b></td>
189
+ <td align="center">0.350</td>
190
+ <td align="center">0.3375</td>
191
+ <td align="center">0.225</td>
192
+ </tr>
193
+ <tr>
194
+ <td align="left">Business Report</td>
195
+ <td align="center">0.400</td>
196
+ <td align="center"><b>0.600</b></td>
197
+ <td align="center">0.200</td>
198
+ <td align="center">0.300</td>
199
+ </tr>
200
+ <tr>
201
+ <td align="left">Comics</td>
202
+ <td align="center">0.300</td>
203
+ <td align="center">0.200</td>
204
+ <td align="center"><b>0.400</b></td>
205
+ <td align="center">0.100</td>
206
+ </tr>
207
+ <tr>
208
+ <td align="left">Engineering Drawing</td>
209
+ <td align="center">0.300</td>
210
+ <td align="center">0.300</td>
211
+ <td align="center"><b>0.500</b></td>
212
+ <td align="center">0.200</td>
213
+ </tr>
214
+ <tr>
215
+ <td align="left">Infographics</td>
216
+ <td align="center"><b>0.700</b></td>
217
+ <td align="center">0.600</td>
218
+ <td align="center">0.500</td>
219
+ <td align="center">0.500</td>
220
+ </tr>
221
+ <tr>
222
+ <td align="left">Maps</td>
223
+ <td align="center">0.000</td>
224
+ <td align="center"><b>0.200</b></td>
225
+ <td align="center">0.000</td>
226
+ <td align="center">0.100</td>
227
+ </tr>
228
+ <tr>
229
+ <td align="left">Science Paper</td>
230
+ <td align="center">0.300</td>
231
+ <td align="center">0.400</td>
232
+ <td align="center"><b>0.500</b></td>
233
+ <td align="center">0.100</td>
234
+ </tr>
235
+ <tr>
236
+ <td align="left">Science Poster</td>
237
+ <td align="center"><b>0.300</b></td>
238
+ <td align="center">0.000</td>
239
+ <td align="center">0.200</td>
240
+ <td align="center">0.000</td>
241
+ </tr>
242
+ <tr>
243
+ <td align="left">Slide</td>
244
+ <td align="center"><b>0.700</b></td>
245
+ <td align="center">0.500</td>
246
+ <td align="center">0.400</td>
247
+ <td align="center">0.500</td>
248
+ </tr>
249
+ </tbody>
250
+ </table>
251
+ </div>
252
 
253
+ > [!NOTE]
254
+ > **Evaluation Parameters:**
255
+ > * **GPT Models:** "High thinking" enabled, temperature set to `1.0`.
256
+ > * **Gemini Models:** "High thinking" enabled, temperature set to `1.0`.
257
 
258
+ > [!WARNING]
259
+ > **API Constraints:** Both model families were evaluated via their respective APIs. If a sample fails because the input files are too large, the result counts as a failure. For example, the file input limit for OpenAI models is 50MB, and several comics in this dataset surpass that threshold.
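Since oversized inputs are scored as failures, it can be worth flagging them before submission. A minimal sketch (a hypothetical helper; exact limits vary by provider and endpoint, so the 50MB constant is an assumption taken from the warning above):

```python
import os

# 50MB per-file limit cited above; treat this constant as an assumption,
# since exact limits vary by provider and endpoint.
API_FILE_LIMIT_BYTES = 50 * 1024 * 1024

def would_fail_upload(path: str) -> bool:
    # Samples whose files exceed the limit count as failures,
    # so flag them before sending anything to the API.
    return os.path.getsize(path) > API_FILE_LIMIT_BYTES
```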
260
 
261
+ ---
262
 
263
+ ## 📝 Submission Guidelines & Formatting Rules
264
 
265
+ To ensure fair and accurate evaluation across all participants, submissions are evaluated using automated metrics. Therefore, all model outputs must strictly adhere to the following formatting rules:
266
+
267
+ * **Source Adherence:** Only provide answers found directly within the document. If the question is unanswerable given the provided image, the response must be exactly: `"Unknown"`.
268
+ * **Multiple Answers:** List multiple answers in their order of appearance, separated by a comma and a single space. **Do not** use the word "and". *(Example: `Answer A, Answer B`)*
269
+ * **Numbers & Units:** Convert units to their standardized abbreviations (e.g., use `kg` instead of "kilograms", `m` instead of "meters"). Always place a single space between the number and the unit. *(Example: `50 kg`, `10 USD`)*
270
+ * **Percentages:** Attach the `%` symbol directly to the number with no space. *(Example: `50%`)*
271
+ * **Dates:** Convert all dates to the standardized `YYYY-MM-DD` format. *(Example: "Jan 1st 24" becomes `2024-01-01`)*
272
+ * **Decimals:** Use a single period (`.`) as a decimal separator, never a comma. *(Example: `3.14`)*
273
+ * **Thousands Separator:** Do not use commas to separate large numbers. *(Example: `1000`, not `1,000`)*
274
+ * **No Filler Text:** Output **only** the requested data. Do not frame your answer in full sentences (e.g., avoid "The answer is...").
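As a rough illustration, several of these rules can be applied mechanically. The helper below is a hypothetical sketch covering only the percentage, thousands-separator, and multiple-answer rules; it is not the official evaluator, which may normalize differently:

```python
import re

def normalize_answer(raw: str) -> str:
    """Hypothetical post-processor for a subset of the formatting rules."""
    ans = raw.strip()
    # Percentages: attach '%' directly to the number, no space.
    ans = re.sub(r"(\d)\s+%", r"\1%", ans)
    # Thousands separator: drop commas inside numbers (1,000 -> 1000).
    ans = re.sub(r"(?<=\d),(?=\d{3}\b)", "", ans)
    # Multiple answers: comma-separated, never joined with "and".
    ans = re.sub(r"\s+and\s+", ", ", ans)
    return ans
```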
275
 
276
+ **Final Output Format:** When generating the final extracted data, your system must prefix the response with the following exact phrasing:
277
 
278
+ ```text
279
+ FINAL ANSWER: [Your formatted answer]
280
+ ```
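A fixed prefix like this makes the answer easy to recover from free-form model output. A minimal parser sketch (hypothetical, not the official extraction logic):

```python
def extract_final_answer(model_output: str) -> str:
    """Pull the text after the last 'FINAL ANSWER:' marker.
    Falls back to 'Unknown' when the marker is missing."""
    marker = "FINAL ANSWER:"
    idx = model_output.rfind(marker)
    if idx == -1:
        return "Unknown"
    return model_output[idx + len(marker):].strip()
```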
 
281
 
282
+ ---
283
+ ## Evaluation Code & Baselines
284
 
285
+ To ensure consistency and fairness, all submissions are evaluated using our official automated evaluation pipeline. This pipeline handles the extraction of your model's answers and applies both strict formatting checks (for numbers, dates, and units) and relaxed text matching (ANLS) for text-based answers.
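For intuition, ANLS scores a prediction as one minus the normalized edit distance to the ground truth, zeroing scores that fall below a threshold (commonly 0.5). The sketch below is self-contained and may differ in details (casing, threshold, multi-reference handling) from the official pipeline:

```python
def levenshtein(a: str, b: str) -> int:
    # Row-by-row dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def anls(prediction: str, gold: str, tau: float = 0.5) -> float:
    # 1 - normalized edit distance, zeroed when similarity drops below tau.
    pred, gt = prediction.strip().lower(), gold.strip().lower()
    if not pred and not gt:
        return 1.0
    score = 1.0 - levenshtein(pred, gt) / max(len(pred), len(gt))
    return score if score >= tau else 0.0
```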
286
 
287
+ You can find the complete, ready-to-use evaluation script in our official GitHub repository:
288
+ 🖥️ **[VLR-CVC/DocVQA2026 GitHub Repository](https://github.com/VLR-CVC/DocVQA2026)**
 
289
 
290
+ ### What you will find in the repository:
291
 
292
+ * **The Evaluator Script:** The core logic used to parse your model's outputs and calculate the final scores. You can use this script to test and evaluate your predictions locally before making an official submission.
293
+ * **The Baseline Master Prompt:** We have included the exact prompt structure (`get_evaluation_prompt()`) used for our baseline experiments. This prompt is heavily engineered to enforce the competition's mandatory reasoning protocols and strict output formatting.
 
294
 
295
+ We highly recommend reviewing both the evaluation script and the Master Prompt. You are welcome to use the provided prompt out-of-the-box or adapt it to better guide your own custom models!
 
296
 
297
+ ## Dataset Structure
298
+
299
+ The dataset consists of:
300
+ 1. **Images:** High-resolution PNG renders of document pages located in the `images/` directory.
301
+ 2. **Annotations:** A Parquet file (`val.parquet`) containing the questions, answers, and references to the image paths.
302
 
303
+ ## Contact
304
 
305
  For questions, technical support, or inquiries regarding the DocVQA 2026 dataset and competition framework: **docvqa@cvc.uab.cat**
306