---
language:
- de
license: apache-2.0
tags:
- austrian-german
- benchmark
- evaluation
- austria
pretty_name: AT-Bench
task_categories:
- text-classification
- question-answering
extra_gated_prompt: >-
  This dataset is in validation phase. Access is granted to verified researchers
  and organizations. Please describe your intended use case.
extra_gated_fields:
  Full name: text
  Organization: text
  Intended use: text
  I agree to use this data for research purposes only:
    type: checkbox
---
# AT-Bench: Austrian German Benchmark
300 multiple-choice questions testing whether an LLM actually understands Austrian German — not just German.
## The problem
Every German benchmark treats German as a single language. But ask GPT what "Obers" means and half the time it guesses wrong. Ask it about the Bezirksgericht and it describes the German court system. Austrian German is an official language variety spoken by about 9 million people, and models consistently get it wrong.
## Tasks
| Task | Count | What it tests |
|---|---|---|
| vocabulary | 80 | Do you know Erdapfel = potato? |
| knowledge | 80 | Austrian geography, institutions, culture |
| register | 50 | Is this text AT, DE, or CH? |
| culture | 50 | Traditions, holidays, customs |
| legal_basics | 40 | ABGB, Bezirksgericht, basic legal system |
Difficulty split: 40% easy, 40% medium, 20% hard.
## Format
```json
{
  "task": "vocabulary",
  "question": "Welches Wort verwendet man in Österreich für 'Sahne'?",
  "choices": ["Rahm", "Schmand", "Obers", "Schmetten"],
  "correct": "Obers",
  "difficulty": "easy"
}
```
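Evaluation is exact string match against the `correct` field. A minimal scoring sketch; the `grade` helper and the guard against off-list answers are my additions, not part of the dataset:

```python
# Sketch of a scorer: exact match of the model's chosen string against
# the "correct" field. The grade() helper is illustrative, not shipped
# with the dataset.

def grade(sample: dict, model_answer: str) -> bool:
    """Return True if the model's answer matches the gold choice."""
    answer = model_answer.strip()
    # Reject answers that aren't one of the listed choices.
    if answer not in sample["choices"]:
        return False
    return answer == sample["correct"]

sample = {
    "task": "vocabulary",
    "question": "Welches Wort verwendet man in Österreich für 'Sahne'?",
    "choices": ["Rahm", "Schmand", "Obers", "Schmetten"],
    "correct": "Obers",
    "difficulty": "easy",
}

print(grade(sample, "Obers"))  # True
print(grade(sample, "Rahm"))   # False
```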
## Usage
```python
from datasets import load_dataset

ds = load_dataset("Laborator/austrian-german-benchmark")
```
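For a leaderboard you'll want per-task scores, not just an overall number. One way is to aggregate predictions by the `task` field; the sketch below uses stand-in (sample, prediction) pairs rather than real model output:

```python
from collections import defaultdict

# Sketch: accuracy per task from (sample, predicted_answer) pairs.
# The pairs below are illustrative stand-ins, not real model output.
def accuracy_by_task(pairs):
    hits = defaultdict(int)
    totals = defaultdict(int)
    for sample, pred in pairs:
        totals[sample["task"]] += 1
        if pred == sample["correct"]:
            hits[sample["task"]] += 1
    return {task: hits[task] / totals[task] for task in totals}

pairs = [
    ({"task": "vocabulary", "correct": "Obers"}, "Obers"),
    ({"task": "vocabulary", "correct": "Erdapfel"}, "Kartoffel"),
    ({"task": "legal_basics", "correct": "ABGB"}, "ABGB"),
]
print(accuracy_by_task(pairs))  # {'vocabulary': 0.5, 'legal_basics': 1.0}
```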
## What I want to do with this
Run the major open models through it and publish a leaderboard. My hypothesis is that even large models score below 70% on the Austrian-specific questions. If your model does better, open a discussion — I'd genuinely like to know.
## License
Apache 2.0