Improve dataset card: Add task category, language, tags, and full abstract

#2 opened by nielsr (HF Staff)

Files changed (1): README.md (+186 −1)
README.md CHANGED
---
license: mit
task_categories:
- image-text-to-text
language:
- vi
- en
tags:
- multimodal
- vietnamese
- exam
- education
- vlm
- benchmark
- question-answering
- low-resource
configs:
- config_name: default
  data_files:
 
## Abstract

Vision language models (VLMs) demonstrate remarkable capabilities on English multimodal tasks, but their performance on low-resource languages with genuinely multimodal educational content remains largely unexplored. In this work, we test how VLMs perform on Vietnamese educational assessments, investigating whether VLMs trained predominantly on English data can handle real-world cross-lingual multimodal reasoning. We present the first comprehensive evaluation of VLM capabilities on multimodal Vietnamese exams through ViExam, a benchmark containing 2,548 multimodal questions. We find that state-of-the-art VLMs achieve only 57.74% and open-source models only 27.70% mean accuracy across 7 academic domains: Mathematics, Physics, Chemistry, Biology, Geography, Driving Test, and IQ Test. Most VLMs underperform average human test-takers (66.54%), with only the thinking VLM o3 (74.07%) exceeding average human performance, yet still falling substantially short of the best human performance (99.60%). Cross-lingual prompting with English instructions while maintaining Vietnamese content fails to improve performance, decreasing accuracy by 1 percentage point for SOTA VLMs. Human-in-the-loop collaboration can partially improve VLM performance by 5 percentage points. Code and data are available at: this https URL.

## Dataset Overview
 
| Driving Test | 367 | Traffic scenarios, road signs, situational judgment |
| IQ Test | 240 | Pattern recognition, logical sequences, spatial reasoning |
---

## 👋 Trying out our questions on your model

Try these challenging Vietnamese multimodal exam questions, which most tested models fail to answer correctly. Each question requires understanding both Vietnamese text and visual elements such as diagrams, charts, and illustrations.

---
## 🧪 Dataset Overview

| Subject | #Questions |
|---|---|
| Mathematics | 456 |
| Physics | 361 |
| Chemistry | 302 |
| Biology | 341 |
| Geography | 481 |
| Driving Test | 367 |
| IQ Test | 240 |
| **Total** | **2,548** |

> Each question is an image containing both Vietnamese text and visuals. Most are 4-option multiple-choice questions. No screenshots of text-only questions are included — all questions are genuinely **multimodal**.
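
The subject counts in the table above can be cross-checked programmatically; a minimal sketch, with the counts transcribed directly from the table:

```python
# Per-subject question counts, transcribed from the table above.
SUBJECT_COUNTS = {
    "Mathematics": 456,
    "Physics": 361,
    "Chemistry": 302,
    "Biology": 341,
    "Geography": 481,
    "Driving Test": 367,
    "IQ Test": 240,
}

TOTAL = sum(SUBJECT_COUNTS.values())  # should match the stated total of 2,548

def share(subject: str) -> float:
    """Fraction of the benchmark contributed by one subject."""
    return SUBJECT_COUNTS[subject] / TOTAL

if __name__ == "__main__":
    print(TOTAL)  # 2548
    print(f"Geography share: {share('Geography'):.1%}")  # the largest subject
```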
## 🚀 Quick Start Guide

### Option 1: Use Pre-built Dataset (Recommended for evaluating your models)

**If you just want to evaluate VLMs on our Vietnamese exam questions:**

🔥 **Download the complete dataset from Hugging Face** with full images and annotations:
- Go to our [Hugging Face dataset](https://huggingface.co/datasets/anvo25/viexam)
- Download ready-to-use Vietnamese multimodal exam questions

This is the fastest way to get started with evaluation.
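
With the `datasets` library, loading could look like the sketch below. The repository id comes from the link above and the `default` config from the YAML header; the `subject` column name used in the helper is an assumption for illustration, not a documented schema:

```python
REPO_ID = "anvo25/viexam"  # Hugging Face dataset id from the link above

def filter_by_subject(rows, subject):
    """Keep only records whose (assumed) 'subject' field matches."""
    return [r for r in rows if r.get("subject") == subject]

if __name__ == "__main__":
    from datasets import load_dataset  # pip install datasets
    ds = load_dataset(REPO_ID)         # downloads images and annotations
    print(ds)
```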
### Option 2: Reproduce/Generate Dataset

**If you want to reproduce our data pipeline or create custom variations:**

Please follow the installation and generation steps below to run the complete pipeline locally.

---
## 💻 Getting Started

```bash
git clone https://github.com/TuongVy20522176/ViExam.git
cd ViExam
pip install -r requirements.txt
```
## 📊 Tasks

ViExam spans **7 distinct domains** representative of Vietnamese educational assessments:

### Academic Subjects (Tasks 1-5)
- **Mathematics**: Function analysis, calculus, geometry (456 questions)
- **Physics**: Mechanics, waves, thermodynamics (361 questions)
- **Chemistry**: Organic chemistry, electrochemistry (302 questions)
- **Biology**: Genetics, molecular biology (341 questions)
- **Geography**: Data visualization, economic geography (481 questions)

### Practical Assessments (Tasks 6-7)
- **[Driving Test](dataset/question_image/driving/)**: Traffic rules, road signs, safety scenarios (367 questions)
- **[IQ Test](dataset/question_image/iq/)**: Pattern recognition, logical reasoning (240 questions)

*All questions integrate Vietnamese text with visual elements (diagrams, charts, illustrations) at multiple resolutions.*

---
## 🚀 Evaluation Quickstart

### 1. Install requirements

```bash
pip install -r requirements.txt
```
### 2. Run evaluation on VLMs

```bash
# Prepare evaluation batches
python batch_api_code/main_batch_prepare.py \
    --model claude-sonnet-4-20250514 \
    --input-file dataset/metadata/full_vqa.json \
    --prompt_language vn

# Execute batch evaluation
python batch_api_code/main_batch_api.py
```

Or for individual models:

```bash
# Evaluate a single model
python api_code/main_api.py \
    --model o3-2025-04-16 \
    --prompt_language vn \
    --input-file dataset/metadata/cropped_random_subset_vqa_description.json

# Cross-lingual evaluation (English prompts, Vietnamese content)
python api_code/main_api.py \
    --model gpt-4.1-2025-04-14 \
    --prompt_language en \
    --input-file dataset/metadata/full_vqa.json
```
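
To sweep several models and prompt languages, the per-model invocations above can also be generated programmatically. A sketch under the assumption that the flags behave as shown above (this launcher is ours, not part of the repository):

```python
from itertools import product

MODELS = ["o3-2025-04-16", "gpt-4.1-2025-04-14"]
PROMPT_LANGS = ["vn", "en"]
INPUT_FILE = "dataset/metadata/full_vqa.json"

def build_command(model: str, lang: str) -> list:
    """Argument vector for one api_code/main_api.py run."""
    return [
        "python", "api_code/main_api.py",
        "--model", model,
        "--prompt_language", lang,
        "--input-file", INPUT_FILE,
    ]

if __name__ == "__main__":
    import subprocess
    for model, lang in product(MODELS, PROMPT_LANGS):
        subprocess.run(build_command(model, lang), check=True)
```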
### 3. Analyze results

```bash
python src/result.py
```
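
`src/result.py` reports accuracy over the saved predictions; the sketch below shows the core computation with an assumed record format (the `pred`/`answer` keys are illustrative, not the repository's actual schema):

```python
def accuracy(records):
    """Fraction of records whose predicted option letter matches the gold answer."""
    if not records:
        return 0.0
    correct = sum(1 for r in records if r["pred"] == r["answer"])
    return correct / len(records)

if __name__ == "__main__":
    toy = [
        {"pred": "A", "answer": "A"},
        {"pred": "B", "answer": "C"},
        {"pred": "D", "answer": "D"},
        {"pred": "B", "answer": "B"},
    ]
    print(f"{accuracy(toy):.2%}")  # 75.00%
```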

---
## ✏️ Human-in-the-loop Enhancement

We provide web-based tools for:

* **Question Selection**: `src/choose_question.html` - filter and select questions
* **OCR Verification**: `src/ocr_ground_truth.html` - edit OCR results and descriptions
* **Quality Control**: `src/check_question.html` - manual verification interface

---
## 🗂️ Repository Structure

```
viexam/
├── api_code/                    # Individual VLM evaluation
│   ├── api_handlers/            # API wrappers for VLMs
│   ├── main_api.py              # Main API call logic
│   └── main_api_qwen.py         # Qwen-specific evaluation
│
├── batch_api_code/              # Batch processing for large-scale evaluation
│   ├── main_batch_prepare.py    # Prepare evaluation batches
│   ├── main_batch_api.py        # Execute batch evaluation
│   └── handlers/                # Batch processing utilities
│
├── dataset/
│   ├── question_image/          # Individual exam questions by domain
│   ├── metadata/                # Question annotations and ground truth
│   └── images/                  # Dataset overview images
│
├── src/                         # Full pipeline for data extraction
│   ├── cut_question.py          # Question boundary detection
│   ├── convert_pdf_to_image.py  # PDF → PNG conversion
│   ├── check_question.html      # Manual verification interface
│   ├── choose_question.html     # Question selection tool
│   ├── ocr_ground_truth.html    # OCR verification tool
│   └── result.py                # Accuracy analysis
│
└── api_key/                     # API credentials (not tracked)
    ├── claude_key.txt
    ├── openai_key.txt
    └── ...
```
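
The `api_key/` layout suggests credentials are read from plain-text files named `<provider>_key.txt`. A minimal sketch of that pattern, assuming such a helper (the function name is ours, not necessarily what the repository uses):

```python
from pathlib import Path

def load_api_key(provider: str, key_dir: str = "api_key") -> str:
    """Read an API key from <key_dir>/<provider>_key.txt, stripping whitespace."""
    path = Path(key_dir) / f"{provider}_key.txt"
    return path.read_text(encoding="utf-8").strip()

if __name__ == "__main__":
    openai_key = load_api_key("openai")  # reads api_key/openai_key.txt
    print(f"loaded key of length {len(openai_key)}")
```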

---
## 📈 Key Findings

Our evaluation reveals several important insights:

1. **Strong OCR Performance:** VLMs achieve strong OCR performance on Vietnamese text (6% CER, 9% WER), confirming that poor accuracy stems from multimodal reasoning challenges rather than basic text-recognition failures.
2. **Performance Gap:** SOTA VLMs achieve only 57% mean accuracy across the 7 domains, with Geography the most accessible (72%) and Physics the most challenging (44%).
3. **Thinking Models Excel:** The thinking VLM o3 substantially outperforms non-thinking VLMs (74% vs. 48-59%).
4. **Option B Bias:** VLMs exhibit a significant bias toward option B (31%) in multiple-choice questions, suggesting failures are not purely due to reasoning limitations but may be partially attributable to training-data bias.
5. **Multimodal Challenge:** VLMs perform better on text-only questions (70%) than on multimodal questions (61%), confirming that multimodal integration poses fundamental challenges.
6. **Open-source Gap:** Open-source VLMs achieve substantially lower accuracy than closed-source SOTA VLMs (27.7% vs. 57%).
7. **Cross-lingual Mixed Results:** Cross-lingual prompting shows mixed results, improving open-source VLMs (+2.9%) while hurting SOTA VLMs (-1.0%).
8. **Human-AI Collaboration:** Human-in-the-loop collaboration provides modest gains with OCR help (+0.48%) but substantial improvement with full text and image editing (+5.71%).
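
The option-B bias in finding 4 can be measured by simply counting predicted letters; a sketch with toy predictions (not our actual model outputs):

```python
from collections import Counter

def option_distribution(predictions):
    """Fraction of predictions per option letter; a uniform model gives 25% each."""
    counts = Counter(predictions)
    total = len(predictions)
    return {opt: counts[opt] / total for opt in "ABCD"}

if __name__ == "__main__":
    toy_preds = list("BABCBDBBAC")  # hypothetical model outputs
    dist = option_distribution(toy_preds)
    print({opt: f"{frac:.0%}" for opt, frac in dist.items()})
```

Comparing the observed fractions against the uniform 25% baseline makes a skew like the reported 31% toward option B immediately visible.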