---
license: mit
task_categories:
  - question-answering
  - table-question-answering
language:
  - de
  - en
  - fr
  - it
tags:
  - agent
  - opendata
  - open-government-data
  - ogd
  - gis
  - gpkg
  - csv
  - rag
  - zurich
  - llm
  - geospatial
pretty_name: OGD4All Benchmark
size_categories:
  - n<1K
configs:
  - config_name: benchmark
    data_files:
      - split: test
        path: benchmarks/benchmark_german.jsonl
---

# OGD4All Benchmark

License: MIT

This is a 199-question benchmark used to evaluate the overall performance of OGD4All and its different configurations (LLM, orchestration, ...). OGD4All is an LLM-based prototype system that enables easy-to-use, transparent interaction with Geospatial Open Government Data through natural language. Answering each question requires GIS, SQL, and/or topological operations on zero, one, or multiple datasets in GPKG or CSV format.
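As a minimal loading sketch: the repository ID below is a placeholder (replace it with the actual Hugging Face ID of this dataset repository), and the config/split names follow the configs block above.

```python
import json

from datasets import load_dataset

# Placeholder repository ID; substitute the actual ID of this dataset repo.
bench = load_dataset("<user>/ogd4all-benchmark", "benchmark", split="test")
print(len(bench))  # 199 questions
print(bench[0])    # one benchmark entry, including the "outputs" dict

# Alternatively, read the JSONL file directly from a local checkout.
with open("benchmarks/benchmark_german.jsonl", encoding="utf-8") as f:
    entries = [json.loads(line) for line in f if line.strip()]
print(len(entries))
```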

## Tasks

The benchmark can be used to evaluate systems against two main tasks:

1. Dataset Retrieval: Given the actual metadata of 430 City of Zurich datasets and a question, identify the subset of $k$ relevant datasets needed to answer the question. Note that the case $k=0$ is included.

2. Dataset Analysis: Given a set of relevant datasets, their corresponding metadata, and a question, process these datasets appropriately (e.g., via generated Python code snippets) and produce a textual answer. Note that OGD4All can accompany this answer with an interactive map, plots, and/or tables, but only the textual answer is evaluated.

## Evaluation

### Metrics

| Metric | Description |
|---|---|
| Recall | Percentage of relevant datasets that were retrieved. |
| Precision | Percentage of retrieved datasets that are relevant. |
| Answerability | Accuracy of classifying whether a question can be answered with the available data or not. |
| Correctness | Whether the final answer matches the ground-truth answer. |
| Latency | Time elapsed from query submission to dataset output (retrieval) or from dataset submission to final answer (analysis). |
| Token Consumption | Total number of consumed tokens in the retrieval or analysis stage. Can be distinguished into input, output, and reasoning tokens. |
| API Cost | Total cost of the retrieval or analysis stage. |
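As a concrete reference for the first three metrics, here is a hedged per-question sketch; the conventions for empty sets (the $k=0$ case) are assumptions, since the exact handling is not spelled out here.

```python
# Hedged sketch of per-question metric computation. The empty-set
# conventions below are assumptions, not necessarily the ones used
# in the paper.

def recall(relevant: set[str], retrieved: set[str]) -> float:
    """Share of ground-truth relevant datasets that were retrieved."""
    if not relevant:
        return 1.0  # k = 0: nothing needs to be found (assumed convention)
    return len(relevant & retrieved) / len(relevant)

def precision(relevant: set[str], retrieved: set[str]) -> float:
    """Share of retrieved datasets that are actually relevant."""
    if not retrieved:
        return 1.0 if not relevant else 0.0  # assumed convention
    return len(relevant & retrieved) / len(retrieved)

def answerability_correct(gt_answerable: bool, predicted_answerable: bool) -> bool:
    """Answerability is the accuracy of this boolean over all questions."""
    return gt_answerable == predicted_answerable
```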

### Dataset Retrieval

To evaluate dataset retrieval, rely on the "relevant_datasets" list in the "outputs" dict, which gives the titles of the relevant datasets. You can map between dataset titles and metadata files using data/dataset_title_to_file.csv.
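A hedged sketch of pulling the ground-truth titles and mapping them to metadata files follows; the CSV column names ("title", "file") and the entry key "id" are assumptions, so adapt them to the actual headers and schema.

```python
import csv
import json

# Map dataset titles to metadata file names. The column names "title" and
# "file" are assumptions; check the actual header of the CSV first.
with open("data/dataset_title_to_file.csv", newline="", encoding="utf-8") as f:
    title_to_file = {row["title"]: row["file"] for row in csv.DictReader(f)}

with open("benchmarks/benchmark_german.jsonl", encoding="utf-8") as f:
    for entry in (json.loads(line) for line in f if line.strip()):
        gt_titles = set(entry["outputs"]["relevant_datasets"])
        gt_files = sorted(title_to_file[t] for t in gt_titles)
        # The key "id" is an assumption; retrieved titles would come from
        # the system under evaluation, after which recall/precision can be
        # computed as sketched in the Metrics section.
        print(entry.get("id"), gt_files)
```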

### Dataset Analysis

To evaluate dataset analysis, provide your architecture with the relevant datasets specified in the "outputs" dict and the question. Then either manually compare the generated answer with the ground-truth answer in the "outputs" dict, or use the LLM judge system prompt given in eval_prompts/LLM_JUDGE_SYSTEM_PROMPT.txt, supplying the question, reference answer, and predicted answer via a subsequent user message.
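A minimal sketch of the LLM-judge option is shown below. Only the system-prompt file and the idea of sending question, reference, and prediction in a user message come from this README; the OpenAI client, the judge model name, and the exact user-message layout are assumptions.

```python
# Hedged sketch of an LLM-judge call. Model name and message layout are
# assumptions; the system prompt is the one shipped with this benchmark.
from openai import OpenAI

client = OpenAI()

with open("eval_prompts/LLM_JUDGE_SYSTEM_PROMPT.txt", encoding="utf-8") as f:
    judge_system_prompt = f.read()

def judge(question: str, reference: str, predicted: str) -> str:
    user_msg = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Predicted answer: {predicted}"
    )
    resp = client.chat.completions.create(
        model="gpt-4.1",  # assumption; any capable judge model works
        messages=[
            {"role": "system", "content": judge_system_prompt},
            {"role": "user", "content": user_msg},
        ],
    )
    return resp.choices[0].message.content
```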

A few questions admit multiple valid sets of relevant datasets, as well as multiple valid answers. Your evaluation should therefore also consider the attributes alternative_relevant_datasets and alternative_answer when present.
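One way to account for these alternatives is to score against each admissible ground truth and keep the best result, reusing the recall and precision helpers sketched in the Metrics section. The placement of the alternative attributes inside entry["outputs"] is an assumption.

```python
# Hedged sketch: score retrieval against the primary and the alternative
# ground-truth sets and keep the better result (best recall, ties broken
# by precision).

def best_retrieval_scores(entry: dict, retrieved: set[str]) -> tuple[float, float]:
    outputs = entry["outputs"]
    candidates = [set(outputs["relevant_datasets"])]
    if outputs.get("alternative_relevant_datasets"):
        candidates.append(set(outputs["alternative_relevant_datasets"]))
    return max((recall(c, retrieved), precision(c, retrieved)) for c in candidates)
```

Analogously, an analysis answer can be counted as correct if the judge accepts it against either the primary ground-truth answer or alternative_answer.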

## Benchmark Notes

- benchmark_german.jsonl is the main benchmark, developed in German. All metadata and datasets are always in German.
- We further provide automatically translated versions of the questions (via the DeepL API) in benchmark_english.jsonl, benchmark_french.jsonl, and benchmark_italian.jsonl.
- benchmark_template.jsonl is the template used to generate the benchmarks mentioned above, with templated questions that can be instantiated with different arguments.
- benchmarks/gt_scripts contains hand-written Python files that generate the ground-truth answer for each question that has relevant datasets. The filename corresponds to the benchmark entry ID.
- The City of Zurich datasets are released under the CC0 license. Recent versions can be downloaded here, but for evaluation you should use the included datasets, as some answers might change otherwise.

## Citation

If you use this benchmark in your research, please cite our accompanying paper:

@article{siebenmann_ogd4all_2025,
  archivePrefix = {arXiv},
  arxivId = {2602.00012},
  author = {Siebenmann, Michael and S{\'a}nchez-Vaquerizo, Javier Argota and Arisona, Stefan and Samp, Krystian and Gisler, Luis and Helbing, Dirk},
  journal = {arXiv preprint arXiv:2602.00012},
  month = {nov},
  title = {{OGD4All: A Framework for Accessible Interaction with Geospatial Open Government Data Based on Large Language Models}},
  url = {https://arxiv.org/abs/2602.00012},
  year = {2025}
}

In addition to the benchmark, the paper (accepted at IEEE CAI 2026) introduces the OGD4All architecture, which achieves high recall and correctness scores even with "older" frontier models such as GPT-4.1. OGD4All's source code is publicly available at https://github.com/ethz-coss/ogd4all