---
task_categories:
language:
- zh
- en
size_categories:
- 1K<n<10K
---

# SpecVQA: A Benchmark for Spectral Understanding and Visual Question Answering in Scientific Images

## 1. Introduction
## 3. Benchmark Evaluation
This benchmark evaluates model performance on Visual Question Answering (VQA) tasks. Following `ChartVLM`, GPT-o4-mini serves solely as a judge that scores the model's predictions against the ground truth. A predefined error tolerance of 5 percentage points is applied: if the prediction's error falls within this range, the answer is counted as correct; otherwise it is marked incorrect. Accuracy is then computed from these judgments.
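The tolerance rule above can be sketched as a small numeric check. This is an illustrative helper only (the benchmark delegates the actual judgment to the LLM judge), and it assumes numeric answers with the tolerance interpreted as relative error against the ground truth:

```python
def within_tolerance(pred: float, truth: float, tol_pct: float = 5.0) -> bool:
    """Judge a numeric prediction correct if its relative error is within
    tol_pct percentage points of the ground truth (hypothetical helper;
    the benchmark itself uses an LLM judge for this decision)."""
    if truth == 0:
        # Assumption: fall back to an absolute check when the truth is zero.
        return abs(pred) <= tol_pct / 100.0
    return abs(pred - truth) / abs(truth) * 100.0 <= tol_pct


def accuracy(preds, truths, tol_pct: float = 5.0) -> float:
    """Fraction of predictions judged correct under the tolerance rule."""
    judged = [within_tolerance(p, t, tol_pct) for p, t in zip(preds, truths)]
    return sum(judged) / len(judged)
```

For example, a prediction of 104 against a ground truth of 100 (4% error) would be judged correct, while 106 (6% error) would not.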
**Score Prompt**
```python