---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
language:
- zh
- en
---
SpecVQA: A Benchmark for Spectral Understanding and Visual Question Answering in Scientific Images
1. Introduction
Multimodal Large Language Models (MLLMs) have achieved notable progress in visual–language understanding and cross-modal reasoning, yet their capabilities remain limited when applied to the highly specialized task of spectral understanding. These limitations are further obscured by existing benchmarks, which either emphasize general object recognition or focus on simple chart-based data retrieval, lacking the scientific grounding needed to accurately assess or diagnose model performance in this domain.
To address this gap, we introduce SpecVQA, an expert-curated benchmark that targets the key failure modes of MLLMs in spectral interpretation. By focusing on seven essential spectrum types, it enables concise yet rigorous evaluation of scientific accuracy and domain-knowledge usage, providing clearer guidance for developing more domain-aware multimodal models. The seven spectrum types are:
- NMR (Nuclear Magnetic Resonance)
- IR (Infrared Absorption Spectroscopy)
- XRD (X-ray Diffraction)
- Raman (Raman Spectroscopy)
- MS (Mass Spectrometry)
- UV-Vis (Ultraviolet-Visible Spectrophotometry)
- XPS (X-ray Photoelectron Spectroscopy)
2. Details
To ensure the scientific validity and representativeness of the benchmark, a team of PhD-level domain experts manually curated a subset of 620 spectral figures from 20k candidates collected from peer-reviewed journals and open-access scientific databases. The selection process was guided by six key criteria: Spectrum Type, Image Structure, Text Completeness, Subplot Correlation, Sample Diversity, and Resolution.
For each figure, domain experts carefully designed five Question-Answer (QA) pairs. We expect the questions and answers to reflect issues of genuine interest in the experts' research and to pose real challenges to MLLMs, rather than trivial or nonsensical visual QA. These pairs were therefore refined through multiple rounds of rigorous review and revision to ensure both clarity and scientific accuracy.
The final 3,100 QA pairs are classified into two categories based on the required cognitive effort. We provide both a Chinese version and an English version of every pair to test the scientific performance of models across languages; a hypothetical record layout is sketched after the category lists below.
- Category 1 (L0): Descriptive Question
This category focuses on the ability to directly understand visual information, including:
- Information extraction: extracting titles, labels, legends and x/y-axis information.
- Value localization: locating maximum or minimum points (e.g., the strongest peak) and identifying peak positions or ranges.
- Pattern recognition: identifying entities that meet specific conditions.
- Layout understanding: analyzing multi-panel subplots.
- Classification: classifying features (e.g., peak shapes).
- Category 2 (L1): Reasoning Question
This category emphasizes the ability to analyze and reason based on image content, including:
- Comparison: comparing multiple entities and drawing conclusions.
- Counting: determining the number of elements that satisfy certain conditions.
- Calculation: performing computations on data presented in the figure.
- Trend analysis: predicting changes in peak shapes or trends.
- Causal analysis: analyzing and understanding the scientific problems reflected in the figure.
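To make this structure concrete, the snippet below sketches what a single benchmark item might look like. It is a minimal illustration only: the field names (`image`, `spectrum_type`, `category`, `language`, `question`, `answer`) and the example values are assumptions made for explanation, not the dataset's actual schema.

```python
# Hypothetical layout of one SpecVQA item; field names and values are
# illustrative assumptions, not the dataset's actual schema.
example_item = {
    "image": "figures/xrd_0042.png",   # one of the 620 curated spectral figures (path illustrative)
    "spectrum_type": "XRD",            # one of the seven supported spectrum types
    "category": "L1",                  # "L0" descriptive or "L1" reasoning
    "language": "en",                  # each QA pair is provided in both "en" and "zh"
    "question": "How many diffraction peaks exceed half of the maximum intensity?",
    "answer": "4",
}

# Sanity checks a loader for such records might apply.
assert example_item["category"] in {"L0", "L1"}
assert example_item["language"] in {"en", "zh"}
```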
3. Benchmark Evaluation
This benchmark evaluates model performance on Visual Question Answering (VQA) tasks. Following ChartVLM, GPT-o4-mini serves solely as a judge that scores each model prediction against the ground truth. For numeric answers, a relative error tolerance of 5% is applied: if the prediction falls within this range of the ground-truth value, the answer is considered correct; otherwise, it is marked incorrect. Accuracy is then computed from these judgments.
Score Prompt
"""Given multiple question-answer pairs and the corresponding predictions, evaluate the correctness of predictions. The output should be only 'True' or 'False'. Note that if the groundtruth answer is a numeric value with/without the unit, impose 5% error tolerance to the answer, e.g., the answer of 95 is marked as correct when groundtruth value is 100 million.
User: <question> What was the incremental increase in revenue from 2020 to 2021? <groundtruth answer> 5 million $ <answer> 20 </s>
A: False
User: <question> What percentage of government spending was allocated to infrastructure in 2020? <groundtruth answer> 10% <answer> 14-4=10 </s>
A: True
User: <question> What is the total production of Wind Energy in the four months from January to April 2021? <groundtruth answer> 2300 MW <answer> The total production of Wind Energy in the four months from January to April 2021 is 2450 MW.
A: True
User: <question> What is the total of manufactured goods for UK and Germany combined? <groundtruth answer> 5 <answer> Five
A: True
User: <question> {QUESTION} <groundtruth answer> {GROUND TRUTH} <answer> {PREDICTION} </s>
AI:
"""
4. Leaderboard
| Model | Think | Weight | API-Version | en L0 (Descriptive) | en L1 (Reasoning) | en Avg. | zh L0 (Descriptive) | zh L1 (Reasoning) | zh Avg. | Overall |
|---|---|---|---|---|---|---|---|---|---|---|
| Gemini-3-Flash-Preview | √ | Proprietary | 20251217 | 0.7782 | 0.7759 | 0.7771 | 0.8047 | 0.7900 | 0.7974 | 0.7872 |
| Gemini-3-Pro-Preview | √ | Proprietary | 20251119 | 0.7718 | 0.7759 | 0.7739 | 0.7939 | 0.7731 | 0.7835 | 0.7787 |
| Gemini-2.5-Pro | √ | Proprietary | 20250617 | 0.7645 | 0.7486 | 0.7565 | 0.7758 | 0.7571 | 0.7664 | 0.7615 |
| Gemini-2.5-Flash | √ | Proprietary | 20250617 | 0.7395 | 0.7043 | 0.7219 | 0.7473 | 0.7185 | 0.7329 | 0.7274 |
| GPT-5(high) | √ | Proprietary | 20250807 | 0.7017 | 0.6959 | 0.6988 | 0.7115 | 0.7147 | 0.7131 | 0.7059 |
| GPT-o4mini | √ | Proprietary | 20250416 | 0.7144 | 0.6836 | 0.6990 | 0.7237 | 0.6996 | 0.7117 | 0.7054 |
| GPT-5(medium) | √ | Proprietary | 20250807 | 0.6923 | 0.6977 | 0.6950 | 0.7154 | 0.7100 | 0.7127 | 0.7039 |
| GPT-o3 | √ | Proprietary | 20250416 | 0.6953 | 0.7100 | 0.7026 | 0.7027 | 0.7043 | 0.7035 | 0.7031 |
| GPT-5(low) | √ | Proprietary | 20250807 | 0.6825 | 0.6968 | 0.6897 | 0.7066 | 0.6987 | 0.7026 | 0.6961 |
| GPT-5.1 | × | Proprietary | 20251113 | 0.6776 | 0.6347 | 0.6561 | 0.6899 | 0.6290 | 0.6594 | 0.6578 |
| GPT-5.2 | × | Proprietary | 20251211 | 0.6776 | 0.6328 | 0.6552 | 0.6899 | 0.6158 | 0.6529 | 0.6540 |
| Claude-4.5-Sonnet | √ | Proprietary | 20250929 | 0.6148 | 0.5518 | 0.5833 | 0.6315 | 0.5782 | 0.6048 | 0.5941 |
| Doubao-seed-1-6-flash | √ | Proprietary | 2508280 | 0.6060 | 0.5687 | 0.5874 | 0.6148 | 0.5574 | 0.5861 | 0.5867 |
| Claude-4-Sonnet | √ | Proprietary | 20250514 | 0.5947 | 0.5282 | 0.5615 | 0.5957 | 0.5565 | 0.5761 | 0.5688 |
| Qwen3-VL-8B-Thinking | √ | Open | - | 0.5864 | 0.5348 | 0.5606 | 0.5805 | 0.5508 | 0.5657 | 0.5631 |
| Doubao-seed-1-6-250615 | √ | Proprietary | 250615 | 0.5721 | 0.5499 | 0.5610 | 0.5736 | 0.5452 | 0.5594 | 0.5602 |
| Qwen3-VL-8B-Instruct | × | Open | - | 0.5721 | 0.4256 | 0.4989 | 0.5927 | 0.4727 | 0.5327 | 0.5158 |
| Qwen3-Omni-30B-A3B-Thinking | √ | Open | - | 0.5442 | 0.4868 | 0.5155 | 0.5368 | 0.4765 | 0.5066 | 0.5111 |
| Doubao-seed-1-6 | × | Proprietary | 250615 | 0.5530 | 0.4435 | 0.4982 | 0.5697 | 0.4586 | 0.5141 | 0.5062 |
| DeepSeek-VL2 | × | Open | - | 0.4657 | 0.3183 | 0.3920 | 0.4092 | 0.3079 | 0.3586 | 0.3753 |
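As a reading aid, the Avg. and Overall columns appear consistent with simple unweighted means of the per-category and per-language scores; the check below uses values copied from the Gemini-3-Flash-Preview row and is an observation, not an official statement of the aggregation method.

```python
# Values copied from the Gemini-3-Flash-Preview row above.
en_l0, en_l1 = 0.7782, 0.7759
zh_l0, zh_l1 = 0.8047, 0.7900
en_avg = (en_l0 + en_l1) / 2      # 0.77705 -> reported as 0.7771
zh_avg = (zh_l0 + zh_l1) / 2      # 0.79735 -> reported as 0.7974
overall = (en_avg + zh_avg) / 2   # 0.78720 -> reported as 0.7872
```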
