---
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: GramQA
size_categories:
- 1K<n<10K
tags:
- linguistics
- agentic AI
- grammatical analysis
- universal dependencies
task_categories:
- question-answering
---
# Corpus-Grounded Evaluation Dataset for Grammatical Question Answering - GramQA
The Corpus-grounded evaluation dataset for grammatical question answering (GramQA) consists of 13 grammatical questions inspired by [WALS](https://wals.info/), the World Atlas of Language Structures, focusing on word order variation across different syntactic constructions (e.g., the typical order of subject, object, and verb in a language). For each question, the dataset provides ground truth values for 179 languages based on [Universal Dependencies](https://universaldependencies.org/) corpora, which can be used for cross-linguistic word order comparison and evaluation of model predictions against corpus evidence. The dataset was originally developed as an evaluation benchmark for an agentic LLM-based grammatical analysis system (i.e. the UD-Agent, described in a [separate paper](https://arxiv.org/abs/2512.00214)), but is released as a standalone resource for broader reuse.
For every question–language pair, the dataset includes (i) the dominant word order pattern (reported as the most frequent attested value in the corpus) and (ii) the full distribution of all attested word order patterns with their relative frequencies. The ground truth values were obtained automatically by applying a series of Python scripts developed by the authors, implementing rule-based extraction procedures over test portions of the UD treebanks (v2.16). The scripts can be accessed at a separate [GitHub repository](https://github.com/matejklemen/ud_llm/).
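The two reported quantities, the dominant pattern and the relative-frequency distribution, can be derived from raw clause-level pattern counts as in the minimal sketch below. This is an illustration of the general computation, not the authors' actual extraction scripts (those live in the linked GitHub repository); the pattern labels are toy examples.

```python
from collections import Counter

def summarize_word_order(patterns):
    """Given attested word-order patterns (one per extracted clause),
    return the dominant pattern and the relative-frequency distribution."""
    counts = Counter(patterns)
    total = sum(counts.values())
    # most_common() orders the distribution from most to least frequent
    distribution = {p: n / total for p, n in counts.most_common()}
    dominant = counts.most_common(1)[0][0]
    return dominant, distribution

# Toy clause-level extractions for a hypothetical SVO-dominant language
dominant, dist = summarize_word_order(["SVO", "SVO", "SOV", "SVO", "VSO"])
```

Here `dominant` would be `"SVO"` (3 of 5 clauses), and `dist` records each attested pattern's share of the total.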
### Files included:
- **udagent_eval_data.jsonl:** A JSON Lines file containing 1899 entries, one per feature–language pair; only the pairs for which the Python scripts returned at least one valid result are included. Each entry consists of the WALS feature ID, language information, and the corresponding ground truth value derived from UD data, covering both the most frequent value (dubbed the "short answer") and the distribution across all possible values for the associated feature.
- **udagent_eval_metadata.json:** A JSON file with information about the included languages, the UD treebanks used to obtain the ground truth values for each language, the particular question associated with each WALS feature, and the set of possible values for each feature.
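A sketch of how one might consume the JSONL file is shown below. The field names (`feature_id`, `language`, `short_answer`, `distribution`) and the sample values are hypothetical, chosen only to illustrate the entry shape described above; check the actual file for the real keys.

```python
import json

def load_entries(path):
    """Read one JSON object per non-empty line from a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Hypothetical entry illustrating the described structure; actual field
# names in udagent_eval_data.jsonl may differ.
sample_line = (
    '{"feature_id": "81A", "language": "Slovenian", '
    '"short_answer": "SVO", '
    '"distribution": {"SVO": 0.82, "SOV": 0.11, "OVS": 0.07}}'
)
entry = json.loads(sample_line)

# The "short answer" is the most frequent value in the distribution
dominant = max(entry["distribution"], key=entry["distribution"].get)
```

For this sample, `dominant` matches `entry["short_answer"]`, which is how a model prediction would typically be scored against the corpus-derived ground truth.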
## Additional Information
### Dataset Authors
Luka Terčon, Kaja Dobrovoljc, Matej Klemen, Tjaša Arčon, and Marko Robnik-Šikonja (See [http://hdl.handle.net/11356/2086](http://hdl.handle.net/11356/2086) for the full entry.)
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@misc{klemen2025corpusgroundedagenticllmsmultilingual,
  title={Towards Corpus-Grounded Agentic LLMs for Multilingual Grammatical Analysis},
  author={Matej Klemen and Tjaša Arčon and Luka Terčon and Marko Robnik-Šikonja and Kaja Dobrovoljc},
  year={2025},
  eprint={2512.00214},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.00214},
}
```