---
language:
- ru
- en
license: other
license_name: mixed-research-use
size_categories:
- 1K<n<10K
task_categories:
- visual-question-answering
- question-answering
task_ids:
- visual-question-answering
pretty_name: RuChartQA
tags:
- chart-understanding
- visual-reasoning
- russian
- benchmark
- vlm-evaluation
configs:
- config_name: chartbasic
data_files: synthetic/chartbasic.jsonl
- config_name: chartreasoning
data_files: synthetic/chartreasoning.jsonl
- config_name: chartperception
data_files: synthetic/chartperception.jsonl
- config_name: chartreal
data_files: chartreal/data.jsonl
---

# RuChartQA
A Russian-language chart question answering benchmark for evaluating Vision-Language Models, with both synthetic and real-world evaluation sets.
## Dataset summary

| Split | Examples | Charts | Source |
|---|---|---|---|
| Synthetic ChartBasic | 360 | 90 (×4 variants) | Generated |
| Synthetic ChartReasoning | 480 | 120 (×4 variants) | Generated |
| Synthetic ChartPerception | 360 | 90 (×4 variants) | Generated |
| ChartReal | 242 | 96 | Rosstat, Bank of Russia (PDF) |
| **Total** | **1442** | **396 unique** | |
Each synthetic chart comes in 4 variants: `ru_image`, `en_image`, `ru_text` (a text description instead of an image), and `en_text`, enabling controlled language and modality ablations. ChartReal is `ru_image` only.
## Why this benchmark

Most chart-QA benchmarks (ChartQA, PlotQA, FigureQA) are English-only, and existing Russian-language chart evaluation has been limited to translated subsets. This benchmark addresses two gaps:

- Language coverage. Native Russian questions, Russian-language axis labels, captions, and currencies (₽).
- Real-world distribution shift. Synthetic-only benchmarks systematically overestimate VLM performance on real-world graphs from government statistics and central bank publications. Our analysis (see `results/leaderboard.csv` and the [accompanying paper]) shows gaps of +11 to +41 percentage points between synthetic and real-world splits across three modern VLMs.
## Loading

```python
from datasets import load_dataset

# Real-world split (the one with the bigger story)
chartreal = load_dataset("romath/RuChartQA", "chartreal")

# Synthetic splits
chartbasic = load_dataset("romath/RuChartQA", "chartbasic")
chartreasoning = load_dataset("romath/RuChartQA", "chartreasoning")
chartperception = load_dataset("romath/RuChartQA", "chartperception")
```
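The four-variant structure makes language/modality ablations a one-line filter. A minimal sketch in plain Python, using illustrative rows rather than the real data (with a loaded config you would call `.filter(...)` instead; the `example_id` values below are invented):

```python
# Illustrative rows following the dataset's variant naming; real rows
# come from load_dataset(...).
rows = [
    {"example_id": "cb_001_q1_ru_image", "variant": "ru_image"},
    {"example_id": "cb_001_q1_en_image", "variant": "en_image"},
    {"example_id": "cb_001_q1_ru_text",  "variant": "ru_text"},
    {"example_id": "cb_001_q1_en_text",  "variant": "en_text"},
]

def split_variant(variant: str):
    """A variant name encodes language and modality as '<lang>_<modality>'."""
    lang, modality = variant.split("_", 1)
    return lang, modality

# Keep only the Russian image variant, the one comparable to ChartReal
ru_image = [r for r in rows if r["variant"] == "ru_image"]
```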
## Schema

Each row has:

| Field | Type | Description |
|---|---|---|
| `example_id` | string | Unique identifier (e.g. `chartreal_007_q2_ru_image`) |
| `subdataset` | string | `ChartBasic`, `ChartReasoning`, `ChartPerception`, or `ChartReal` |
| `variant` | string | `ru_image`, `en_image`, `ru_text`, `en_text` |
| `language` | string | `ru` or `en` |
| `modality` | string | `image` or `text` |
| `chart_type` | string | `bar`, `line`, `mixed`, `pie` |
| `chart_id` | string | Chart identifier (multiple QA may share one chart) |
| `question_type` | string | `lookup`, `comparison`, `min`, `max`, `difference`, `conditional` |
| `question` | string | Natural-language question |
| `answer` | string | Gold answer |
| `answer_numeric` | float \| null | Numeric form if applicable (for tolerance scoring) |
| `answer_type` | string | `numeric` or `categorical` |
| `image_path` | string \| null | Relative path to PNG (for image variants) |
| `text_description` | string \| null | Text description of the chart (for text variants) |
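For concreteness, a hypothetical ChartReal row (all values invented for illustration; only the field names follow the schema):

```json
{
  "example_id": "chartreal_007_q2_ru_image",
  "subdataset": "ChartReal",
  "variant": "ru_image",
  "language": "ru",
  "modality": "image",
  "chart_type": "line",
  "chart_id": "chartreal_007",
  "question_type": "difference",
  "question": "На сколько процентных пунктов выросла ключевая ставка с января по декабрь?",
  "answer": "8.5",
  "answer_numeric": 8.5,
  "answer_type": "numeric",
  "image_path": "chartreal/images/chartreal_007.png",
  "text_description": null
}
```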
## Evaluation

We provide a normalizer (`eval/normalize.py`) that handles:

- Numeric tolerance (5%, the ChartQA standard), with a year-as-numeric exception requiring exact match (1900–2100)
- Bidirectional substring matching for categorical answers (gold ⊆ pred or pred ⊆ gold), disabled when the gold answer contains compound markers (`и`, `or`, `,`)
- Lowercasing, stripping, and punctuation normalization
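The rules above can be sketched as follows. This is an illustrative reimplementation, not the shipped `eval/normalize.py`; the exact normalization and number extraction there may differ:

```python
import re

YEAR_RANGE = range(1900, 2101)
# Gold answers containing these look compound, so substring matching is disabled
COMPOUND_MARKERS = (" и ", " or ", ",")

def _norm(s: str) -> str:
    """Lowercase, strip, and drop punctuation except separators used in numbers."""
    return re.sub(r"[^\w\s.,-]", "", s.lower().strip())

def _first_number(s: str):
    """Pull the first number out of a free-form answer string."""
    m = re.search(r"-?\d+(?:[.,]\d+)?", s.replace(" ", ""))
    return float(m.group().replace(",", ".")) if m else None

def is_correct(pred: str, gold: str, gold_numeric=None, tol=0.05) -> bool:
    """Illustrative scorer following the rules above (not the shipped normalizer)."""
    p, g = _norm(pred), _norm(gold)
    if gold_numeric is not None:
        p_num = _first_number(p)
        if p_num is None:
            return False
        # Years must match exactly; other numbers get 5% relative tolerance
        if gold_numeric == int(gold_numeric) and int(gold_numeric) in YEAR_RANGE:
            return p_num == gold_numeric
        if gold_numeric == 0:
            return p_num == 0
        return abs(p_num - gold_numeric) / abs(gold_numeric) <= tol
    # Categorical: bidirectional substring match unless the gold answer is compound
    if any(m in g for m in COMPOUND_MARKERS):
        return p == g
    return g in p or p in g
```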
Minimal example:

```bash
python3 eval/eval_example.py predictions.jsonl chartreal/data.jsonl
```

A prediction file is JSONL with one `{"example_id": ..., "prediction_raw": "..."}` object per line.
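Producing a prediction file takes a few lines; a sketch with invented example IDs and answers (`ensure_ascii=False` keeps Cyrillic readable in the output):

```python
import json

# Hypothetical model outputs keyed by example_id; prediction_raw is the
# model's raw answer string, before any normalization.
predictions = [
    {"example_id": "chartreal_007_q2_ru_image", "prediction_raw": "15,7 млрд ₽"},
    {"example_id": "chartreal_008_q1_ru_image", "prediction_raw": "2021"},
]

with open("predictions.jsonl", "w", encoding="utf-8") as f:
    for p in predictions:
        f.write(json.dumps(p, ensure_ascii=False) + "\n")
```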
## Baselines

Predictions on ChartReal from four systems are included in `baselines/`:
| System | ChartReal Accuracy | Synthetic ru_image |
|---|---|---|
| Qwen3-VL 32B Instruct | 75.2% | 86.3% |
| Gemini 2.5 Flash | 71.1% | 92.7% |
| Nemotron Nano 12B v2 VL | 45.9% | 86.7% |
| OCR + Llama 3.3 70B (text-only baseline) | 34.7% | n/a |
All pairwise gaps between systems on ChartReal are statistically significant (95% bootstrap CI) except Qwen vs Gemini (Δ = +4.1pp, CI [−1.2, +9.5], p = 0.16). See `results/leaderboard.csv`.
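A significance test of this kind can be run with a paired bootstrap over per-example correctness. A minimal sketch (the paper's exact procedure, resample count, and seed are assumptions here):

```python
import random

def bootstrap_diff_ci(correct_a, correct_b, n_boot=2000, seed=0):
    """95% bootstrap CI for the accuracy difference of two systems
    scored on the same examples (paired resampling of indices)."""
    assert len(correct_a) == len(correct_b)
    rng = random.Random(seed)
    n = len(correct_a)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample examples with replacement
        diffs.append(sum(correct_a[i] - correct_b[i] for i in idx) / n)
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]
```

If the interval excludes zero, the accuracy gap is significant at the 95% level.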
## Construction

### Synthetic

Generated from category templates (cities, products, demographics, etc.) with controlled distributions over question types; all synthetic charts are bar-type. Each chart was rendered in Russian and English, and for each language both an image and a text-description variant exist. This 4-way structure allows clean ablations of language and modality effects.
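The generation recipe can be illustrated with a toy version. The category pools, wording, and value ranges below are invented; the released data is the reference:

```python
import random

# Invented category pools; the real templates are not reproduced here.
CATEGORIES = {
    "cities": ["Москва", "Казань", "Омск", "Тверь"],
    "products": ["хлеб", "молоко", "сыр"],
}

def make_chart(seed: int):
    """Sample labels and values for one synthetic bar chart."""
    rng = random.Random(seed)
    topic = rng.choice(sorted(CATEGORIES))
    labels = CATEGORIES[topic]
    values = [rng.randint(10, 100) for _ in labels]
    return {"chart_type": "bar", "labels": labels, "values": values}

def max_qa(chart):
    """One of the six question types ('max'), with the gold answer derived from the data."""
    i = max(range(len(chart["values"])), key=chart["values"].__getitem__)
    return {
        "question_type": "max",
        "question": "Какая категория имеет наибольшее значение?",  # "Which category has the largest value?"
        "answer": chart["labels"][i],
        "answer_type": "categorical",
    }
```

Because the answer is computed from the sampled data, gold labels are correct by construction, which is what makes the synthetic splits cheap to scale.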
### ChartReal
Charts were extracted from public PDF reports of:
- Rosstat (Russian Federal State Statistics Service) — annual and monthly statistical bulletins
- Bank of Russia (CBR) — financial stability reports, monetary policy commentary
Each chart received 1–4 questions covering different reasoning types. Charts span four types (bar, line, mixed, pie) with realistic noise: small fonts, dense legends, multi-axis scales, and stylistic conventions specific to Russian government publications.
## Licenses

This dataset uses mixed licensing:

- Code (`eval/normalize.py`, `eval/eval_example.py`): Apache 2.0
- Synthetic QA + images (`synthetic/`): CC-BY 4.0 (author's original work)
- ChartReal QA annotations (`chartreal/data.jsonl`): CC-BY 4.0 (author's original annotations)
- ChartReal images (`chartreal/images/`): research use only, original copyright preserved. These are derivative works (PNG renderings of pages from publicly available government PDFs). The original publishers (Rosstat, Bank of Russia) retain copyright on the visual material; re-use beyond academic research may require their permission.
By using the `chartreal/images/` portion, you agree to:
- Use it only for academic / non-commercial research
- Cite both this dataset and the original publisher
- Not redistribute the images independently of the QA annotations
## Citation

```bibtex
@dataset{ruchartqa_2026,
  title  = {RuChartQA: A Russian-Language Chart Question Answering Benchmark with Synthetic and Real-World Splits},
  author = {Roman <last name>},
  year   = {2026},
  url    = {https://huggingface.co/datasets/romath/RuChartQA},
  note   = {HSE Bachelor's thesis}
}
```
## Limitations

- ChartReal is image-only. A `text_description` variant for real-world charts is not provided: automatically transcribing complex line/mixed charts into faithful text without losing information proved infeasible in practice.
- Bar bias in the synthetic splits. All synthetic charts are bar-type, so fair comparison across chart types should use the bar-only subset of ChartReal (n=67); see `results/leaderboard.csv`.
- Answer-normalizer judgement calls. A small number of answers (≤2pp of the total) are affected by language-drift conventions: yes/no in English vs Russian, Roman vs Cyrillic month numerals. We chose conservative scoring (a mismatch counts as wrong); reasonable alternatives exist.
## Contact

Questions, errata, or contributions: [your email or GitHub username].