Michael Siebenmann committed
Commit 1a0b226 · 1 Parent(s): 94ea162

update README

Files changed (1): README.md (+62 −1)
tags:
- agent
- opendata
- open-government-data
- ogd
- gis
- gpkg
[…]
data_files:
- split: test
  path: benchmarks/benchmark_german.jsonl
---

# OGD4All Benchmark [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

This 199-question benchmark was used to evaluate the overall performance of OGD4All under different configurations (LLM, orchestration, ...).
OGD4All is an LLM-based prototype system that enables easy-to-use, transparent interaction with Geospatial Open Government Data through natural language.
Answering each question requires GIS, SQL, and/or topological operations on zero, one, or multiple datasets in GPKG or CSV format.

## Tasks
The benchmark can be used to evaluate systems on two main tasks:

1. **Dataset Retrieval**: Given the actual metadata of 430 City of Zurich datasets and a question, identify the subset of $k$ relevant datasets needed to answer it. Note that the case $k = 0$ (no relevant dataset exists) is included.

2. **Dataset Analysis**: Given a set of *relevant* datasets, the corresponding metadata, and a question, appropriately process these datasets (e.g. via generated Python code snippets) and produce a textual answer. Note that OGD4All can accompany this answer with an interactive map, plots, and/or tables, but only the textual answer is evaluated.

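As a concrete illustration of the task inputs, here is a minimal sketch of reading one benchmark entry. The line below is a fabricated stand-in: only `outputs` and `relevant_datasets` are field names documented in this card, and `id`, `question`, and `answer` are assumptions about the schema.

```python
import json

# Fabricated example line in the spirit of benchmarks/benchmark_german.jsonl.
# Only "outputs" and "relevant_datasets" are documented field names; "id",
# "question", and "answer" are illustrative assumptions.
line = json.dumps({
    "id": "q_001",
    "question": "Wie viele Brunnen gibt es im Kreis 1?",
    "outputs": {
        "relevant_datasets": ["Brunnen der Wasserversorgung"],
        "answer": "...",
    },
})

entry = json.loads(line)
# May be an empty list for unanswerable questions (the k = 0 case).
relevant = entry["outputs"]["relevant_datasets"]
print(relevant)
```

In the real file, each of the 199 lines is one such JSON object, so parsing the file line by line with `json.loads` yields the full benchmark.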
## Evaluation
### Metrics
| Metric | Description |
|---|---|
| Recall | Percentage of relevant datasets that were retrieved. |
| Precision | Percentage of retrieved datasets that are relevant. |
| Answerability | Accuracy of classifying whether a question can be answered with the available data. |
| Correctness | Whether the final answer matches the ground-truth answer. |
| Latency | Time elapsed from query submission to dataset output (retrieval) or from dataset submission to final answer (analysis). |
| Token Consumption | Total number of tokens consumed in the retrieval or analysis stage, distinguishable into input, output, and reasoning tokens. |
| API Cost | Total cost of the retrieval or analysis stage. |

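Per-question recall and precision over dataset titles can be computed along these lines. This is a generic sketch; in particular, the convention chosen here for the $k = 0$ (no relevant datasets) case is one reasonable option, not necessarily the exact scoring used in the paper.

```python
def retrieval_scores(predicted, relevant):
    """Per-question recall and precision over dataset titles."""
    pred, rel = set(predicted), set(relevant)
    if not rel:
        # k = 0: no relevant datasets exist. Convention chosen here
        # (an assumption): perfect score iff nothing was retrieved.
        return (1.0, 1.0) if not pred else (0.0, 0.0)
    hits = len(pred & rel)
    recall = hits / len(rel)
    precision = hits / len(pred) if pred else 0.0
    return recall, precision

print(retrieval_scores(["A", "B"], ["A", "C"]))  # (0.5, 0.5)
```

Averaging these per-question scores over all 199 questions then gives the benchmark-level recall and precision.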
### Dataset Retrieval
To evaluate dataset retrieval, rely on the `"relevant_datasets"` list in the `"outputs"` dict, which gives the titles of the relevant datasets.
You can map between metadata files and titles using `data/dataset_title_to_file.csv`.

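A title-to-file lookup can be built from `data/dataset_title_to_file.csv` roughly as follows. The column names `title` and `file` in this sketch are assumptions about the CSV's header, and the inline sample merely stands in for the real file.

```python
import csv
import io

# Stand-in for data/dataset_title_to_file.csv; the real column names
# may differ. With the real file, open it with encoding="utf-8" instead.
sample_csv = io.StringIO(
    "title,file\n"
    "Brunnen der Wasserversorgung,brunnen.json\n"
)

title_to_file = {row["title"]: row["file"] for row in csv.DictReader(sample_csv)}
print(title_to_file["Brunnen der Wasserversorgung"])
```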
### Dataset Analysis
To evaluate dataset analysis, provide your architecture with the relevant datasets specified in the `"outputs"` dict and the question. Then either manually compare the generated answer with the ground-truth answer in the `"outputs"` dict, or use the LLM judge system prompt given in `eval_prompts/LLM_JUDGE_SYSTEM_PROMPT.txt`, with the question, reference answer, and predicted answer provided via a subsequent user message.

> [!NOTE]
> A few questions have multiple valid sets of relevant datasets, and also multiple valid answers. Your evaluation should therefore consider the attributes `alternative_relevant_datasets` and `alternative_answer` where present.

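The alternative-answer handling can be folded into a correctness check along these lines. The exact string comparison is only a placeholder for the manual or LLM-judge comparison described above, and both the key name `answer` and treating `alternative_answer` as either a single value or a list are assumptions about the schema.

```python
def is_correct(predicted: str, outputs: dict) -> bool:
    """Check a predicted answer against the ground truth plus alternatives.

    The string equality below is a placeholder for the manual or
    LLM-judge comparison; only the alternative handling is the point.
    The key names "answer" / "alternative_answer" are assumptions.
    """
    references = [outputs["answer"]]
    alt = outputs.get("alternative_answer")
    if alt is not None:
        references.extend(alt if isinstance(alt, list) else [alt])
    return predicted.strip() in {r.strip() for r in references}

outputs = {"answer": "42 Brunnen", "alternative_answer": "42"}
print(is_correct("42", outputs))  # True
```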
## Benchmark Notes
- **benchmark_german.jsonl** is the main benchmark, developed in German. All metadata/datasets are always in German.
- We further provide automatically translated versions of the questions (via the DeepL API) in **benchmark_english.jsonl**, **benchmark_french.jsonl**, and **benchmark_italian.jsonl**.
- **benchmark_template.jsonl** is the template used to generate the benchmarks above, with templated questions that can be instantiated with different arguments.
- `benchmarks/gt_scripts` contains hand-written Python files that generate the ground-truth answer for each question that has relevant datasets. Each filename corresponds to the benchmark entry ID.

## Citation
If you use this benchmark in your research, please cite our accompanying paper:
```
@article{siebenmann_ogd4all_2025,
  archivePrefix = {arXiv},
  arxivId = {2602.00012},
  author = {Siebenmann, Michael and S{\'a}nchez-Vaquerizo, Javier Argota and Arisona, Stefan and Samp, Krystian and Gisler, Luis and Helbing, Dirk},
  journal = {arXiv preprint arXiv:2602.00012},
  month = {nov},
  title = {{OGD4All: A Framework for Accessible Interaction with Geospatial Open Government Data Based on Large Language Models}},
  url = {https://arxiv.org/abs/2602.00012},
  year = {2025}
}
```

Alongside the benchmark, this paper (accepted at IEEE CAI 2026) introduces the OGD4All architecture, which achieves high recall and correctness scores even with "older" frontier models such as GPT-4.1.
OGD4All's source code is publicly available at https://github.com/ethz-coss/ogd4all.