Datasets:
Improve ViExam dataset card: correct GitHub link, add badges and citation
#3
by nielsr HF Staff - opened
README.md CHANGED

@@ -1,10 +1,10 @@
 ---
-license: mit
-task_categories:
-- image-text-to-text
 language:
 - vi
 - en
+license: mit
+task_categories:
+- image-text-to-text
 tags:
 - multimodal
 - vietnamese
@@ -69,13 +69,13 @@ dataset_info:
 
 # ViExam: Are Vision Language Models Better than Humans on Vietnamese Multimodal Exam Questions?
 
-by <a href="https://www.linkedin.com/in/dang-thi-tuong-vy-00a357278/">Vy Tuong Dang</a><sup>*</sup>, <a href="https://anvo25.github.io/">An Vo</a><sup>*</sup>, <a href="https://www.linkedin.com/in/quang-tau-a708b4238/?originalSubdomain=kr">Quang Tau</a>, <a href="#">Duc Dm</a>, <a href="https://www.resl.kaist.ac.kr/members/director">Daeyoung Kim</a>
+by <a href="https://www.linkedin.com/in/dang-thi-tuong-vy-00a357278/">Vy Tuong Dang</a><sup>*</sup>, <a href="https://anvo25.github.io/">An Vo</a><sup>*</sup>, <a href="https://www.linkedin.com/in/quang-tau-a708b4238/?originalSubdomain=kr">Quang Tau</a>, <a href="https://www.resl.kaist.ac.kr/members/master-student#h.fiaa4al7sz8u">Duc Dm</a>, <a href="https://www.resl.kaist.ac.kr/members/director">Daeyoung Kim</a>
 
 <sup>*</sup>Equal contribution
 
 KAIST
 
-[](https://vi-exam.github.io) [](https://arxiv.org/abs/2508.13680) [](https://github.com/
+[](https://vi-exam.github.io) [](https://arxiv.org/abs/2508.13680) [](https://huggingface.co/datasets/anvo25/viexam) [](https://github.com/TuongVy20522176/ViExam) [](LICENSE)
 
 </div>
 
@@ -85,7 +85,7 @@ KAIST
 
 ## Abstract
 
-Vision language models (VLMs) demonstrate remarkable capabilities on English multimodal tasks, but their performance on low-resource languages with genuinely multimodal educational content remains largely unexplored. In this work, we test how VLMs perform on Vietnamese educational assessments, investigating whether VLMs trained predominantly on English data can handle real-world cross-lingual multimodal reasoning. Our work presents the first comprehensive evaluation of VLM capabilities on multimodal Vietnamese exams through proposing ViExam, a benchmark containing 2,548 multimodal questions. We find that state-of-the-art VLMs achieve only 57.74% while open-source models achieve 27.70% mean accuracy across 7 academic domains, including Mathematics, Physics, Chemistry, Biology, Geography, Driving Test, and IQ Test. Most VLMs underperform average human test-takers (66.54%), with only the thinking VLM o3 (74.07%) exceeding human average performance, yet still falling substantially short of human best performance (99.60%). Cross-lingual prompting with English instructions while maintaining Vietnamese content fails to improve performance, decreasing accuracy by 1 percentage point for SOTA VLMs. Human-in-the-loop collaboration can partially improve VLM performance by 5 percentage points. Code and data are available at:
+Vision language models (VLMs) demonstrate remarkable capabilities on English multimodal tasks, but their performance on low-resource languages with genuinely multimodal educational content remains largely unexplored. In this work, we test how VLMs perform on Vietnamese educational assessments, investigating whether VLMs trained predominantly on English data can handle real-world cross-lingual multimodal reasoning. Our work presents the first comprehensive evaluation of VLM capabilities on multimodal Vietnamese exams through proposing ViExam, a benchmark containing 2,548 multimodal questions. We find that state-of-the-art VLMs achieve only 57.74% while open-source models achieve 27.70% mean accuracy across 7 academic domains, including Mathematics, Physics, Chemistry, Biology, Geography, Driving Test, and IQ Test. Most VLMs underperform average human test-takers (66.54%), with only the thinking VLM o3 (74.07%) exceeding human average performance, yet still falling substantially short of human best performance (99.60%). Cross-lingual prompting with English instructions while maintaining Vietnamese content fails to improve performance, decreasing accuracy by 1 percentage point for SOTA VLMs. Human-in-the-loop collaboration can partially improve VLM performance by 5 percentage points. Code and data are available at: https://github.com/TuongVy20522176/ViExam.
 
 ## Dataset Overview
 
@@ -93,11 +93,11 @@ The ViExam dataset comprises 2,548 multimodal Vietnamese exam questions across *
 
 ### Key Features
 
-
-
-
-
-
+- **Language**: Vietnamese (low-resource language with 100+ million speakers)
+- **Modality**: True multimodal (visual + textual integration)
+- **Domains**: 7 comprehensive academic and practical domains
+- **Question Types**: Multiple-choice (88%), multiple-answer (1%), variable options (11%)
+- **Difficulty**: Real Vietnamese exam standards requiring complex reasoning
 
 ### Domain Distribution
 
@@ -167,15 +167,15 @@ pip install -r requirements.txt
 ViExam spans **7 distinct domains** representative of Vietnamese educational assessments:
 
 ### Academic Subjects (Tasks 1-5)
-
-
-
-
-
+- **Mathematics**: Function analysis, calculus, geometry (456 questions)
+- **Physics**: Mechanics, waves, thermodynamics (361 questions)
+- **Chemistry**: Organic chemistry, electrochemistry (302 questions)
+- **Biology**: Genetics, molecular biology (341 questions)
+- **Geography**: Data visualization, economic geography (481 questions)
 
 ### Practical Assessments (Tasks 6-7)
-
-
+- **[Driving Test](dataset/question_image/driving/)**: Traffic rules, road signs, safety scenarios (367 questions)
+- **[IQ Test](dataset/question_image/iq/)**: Pattern recognition, logical reasoning (240 questions)
 
 *All questions integrate Vietnamese text with visual elements (diagrams, charts, illustrations) at multiple resolutions.*
 
@@ -282,4 +282,17 @@ Our evaluation reveals several important insights:
 5. **Multimodal Challenge:** VLMs perform better on text-only questions (70%) versus multimodal questions (61%), confirming that multimodal integration poses fundamental challenges
 6. **Open-source Gap:** Open-source VLMs achieve substantially lower performance than closed-source/SOTA VLMs (27.7% vs. 57%)
 7. **Cross-lingual Mixed Results:** Cross-lingual prompting shows mixed results - improving open-source VLMs (+2.9%) while hurting SOTA VLMs (-1.0%)
-8. **Human-AI Collaboration:** Human-in-the-loop collaboration provides modest gains with OCR help (+0.48%) but substantial improvement with full text and image editing (+5.71%)
+8. **Human-AI Collaboration:** Human-in-the-loop collaboration provides modest gains with OCR help (+0.48%) but substantial improvement with full text and image editing (+5.71%)
+
+---
+
+## Citation
+If you find our dataset or model useful for your research and applications, please cite using this BibTeX:
+```bibtex
+@article{dang2025viexam,
+  title={ViExam: Are Vision Language Models Better than Humans on Vietnamese Multimodal Exam Questions?},
+  author={Dang, Vy Tuong and Vo, An and Tau, Quang and Dm, Duc and Kim, Daeyoung},
+  journal={arXiv preprint arXiv:2508.13680},
+  year={2025}
+}
+```