---
license: apache-2.0
language:
  - zh
---

# QZhou-Flowchart-QA-Benchmark: Real-World Flowchart Understanding Benchmark

## Overview

While the open-source community offers a variety of chart and document benchmarks, there is no specialized evaluation set for flowchart understanding. QZhou-Flowchart-QA-Benchmark fills this gap with a dedicated benchmark for assessing the flowchart comprehension abilities of multimodal models.

## Dataset Composition

### Part 1: Web-Collected Real-World Flowcharts (Public)

Manually curated flowcharts from image search engines, covering actual deployment scenarios including:

- Government services and administrative processes
- Banking and financial operations
- Campus management systems
- Daily office workflows
- Financial processing procedures

**Quality diversity**: We deliberately control the distribution of image resolution and clarity, introducing varying degrees of blur and differing image sizes to better reflect real-world application environments.

**Annotation**: All questions and answers are carefully labeled and verified by human annotators.

### Part 2: Enterprise Office Flowcharts (Coming Soon)

Real flowcharts from production office environments, including:

- HR management processes
- Financial reimbursement workflows
- Internal approval procedures

**Note**: This portion is currently undergoing anonymization and will be released in a future update.

## Question Diversity

QZhou-Flowchart-QA-Benchmark ensures comprehensive query coverage, spanning a wide range of questioning angles:

- Upstream and downstream node queries
- Conditional branch reasoning
- Path analysis and node relationships
- Structural understanding
- Spatial reasoning along the X/Y axes

## Performance Leaderboard

State-of-the-art results on QZhou-Flowchart-QA-Benchmark:

| Model | Accuracy (%) |
|---|---|
| QZhou-Flowchart-VL-32B (Ours) | 87.83 |
| Qwen3-VL-Plus-Thinking (235B) | 86.09 |
| Gemini-2.5-Pro | 84.42 |
| doubao-seed-1-6 | 83.83 |
| GPT-5 | 79.29 |
| GLM-4.5V | 75.97 |
| Qwen2.5-VL-32B | 73.90 |

## Comparison with Base Model

| Model | MMMU | CMMU | MathVista | DocVQA | QZhou-Flowchart-QA-Benchmark |
|---|---|---|---|---|---|
| Qwen2.5-VL-32B | 66.67 | 76.38 | 74.20 | 93.96 | 73.90 |
| QZhou-Flowchart-VL-32B | 67.78 | 76.46 | 76.50 | 93.87 | 87.83 |

## Usage

```python
from datasets import load_dataset

# Load the benchmark (test split)
benchmark = load_dataset("Kingsoft-LLM/QZhou-Flowchart-QA-Benchmark", split="test")

# Evaluate your model, averaging per-sample scores
scores = []
for sample in benchmark:
    prediction = model.predict(sample['image'], sample['question'])
    scores.append(evaluate(prediction, sample['answer']))

accuracy = sum(scores) / len(scores)
print(f"Overall accuracy: {accuracy:.2%}")
```
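
Here `model.predict` and `evaluate` are placeholders for your own inference and scoring code; a minimal sketch of the scoring logic is given under Evaluation Protocol below.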

## Evaluation Protocol

- **Answer matching**: two evaluation methods, chosen by question type (see the sketch after this list)
  - **Exact match**: for multiple-choice questions, direct comparison with the ground truth
  - **Normalized edit distance**: for open-ended questions, scored as `1 - (edit_distance / max_length)`
- **Metrics**: overall accuracy, with breakdowns by question type, domain, and complexity level
- **Submission**: open a GitHub issue with your model predictions to be added to the leaderboard
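
The scoring rule above is straightforward to implement. Below is a minimal sketch of both matching modes, assuming Levenshtein distance over the raw answer strings and `max_length` equal to the longer of the two strings; the official scorer may additionally normalize case or whitespace.

```python
# Minimal sketch of the two answer-matching modes described above.
# Assumptions: Levenshtein edit distance over raw strings, and
# max_length = len of the longer of prediction and ground truth.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def score(prediction: str, answer: str, multiple_choice: bool) -> float:
    if multiple_choice:
        # Exact match for multiple-choice questions.
        return float(prediction.strip() == answer.strip())
    # Normalized edit distance for open-ended questions:
    # 1 - (edit_distance / max_length).
    max_len = max(len(prediction), len(answer), 1)
    return 1.0 - edit_distance(prediction, answer) / max_len
```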

## Key Features

- ✅ **Real-world scenarios**: flowcharts from actual deployments
- ✅ **Manual annotation**: human-verified questions and answers
- ✅ **Quality diversity**: various resolutions, clarity levels, and sizes
- ✅ **Comprehensive coverage**: 20+ question types across multiple domains
- ✅ **Rigorous evaluation**: standardized protocol for fair comparison