# AT-Bench: Austrian German Benchmark

300 multiple-choice questions testing whether an LLM actually understands Austrian German, not just German.
## The problem
Every German benchmark treats German as one language. But ask GPT what "Obers" means and half the time it guesses wrong. Ask it about the Bezirksgericht and it describes the German court system. Austrian German is an official language variety spoken by 9 million people, and models consistently get it wrong.
## Tasks
| Task | Count | What it tests |
|---|---|---|
| vocabulary | 80 | Do you know Erdapfel = potato? |
| knowledge | 80 | Austrian geography, institutions, culture |
| register | 50 | Is this text AT, DE, or CH? |
| culture | 50 | Traditions, holidays, customs |
| legal_basics | 40 | ABGB, Bezirksgericht, basic legal system |
Difficulty split: 40% easy, 40% medium, 20% hard.
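The task counts and the difficulty split imply fixed per-bucket totals; a quick sanity check (a sketch, assuming the 40/40/20 split applies across all 300 questions):

```python
# Task counts from the table above; they should sum to 300.
task_counts = {"vocabulary": 80, "knowledge": 80, "register": 50,
               "culture": 50, "legal_basics": 40}
total = sum(task_counts.values())

# The stated difficulty split implies these per-bucket totals.
split = {"easy": 0.40, "medium": 0.40, "hard": 0.20}
buckets = {d: round(total * p) for d, p in split.items()}

print(total)    # 300
print(buckets)  # {'easy': 120, 'medium': 120, 'hard': 60}
```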
## Format

```json
{
  "task": "vocabulary",
  "question": "Welches Wort verwendet man in Oesterreich fuer 'Sahne'?",
  "choices": ["Rahm", "Schmand", "Obers", "Schmetten"],
  "correct": "Obers",
  "difficulty": "easy"
}
```
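The sample question asks which word Austrians use for "Sahne" (cream). Scoring an item against the `correct` field can be sketched like this; the `format_prompt` and `score` helpers are hypothetical, not part of the dataset:

```python
# Hypothetical helpers: render one item as a lettered multiple-choice
# prompt, then check a model's reply against the "correct" field.
def format_prompt(item):
    letters = "ABCD"
    lines = [item["question"]]
    lines += [f"{l}) {c}" for l, c in zip(letters, item["choices"])]
    return "\n".join(lines)

def score(item, model_answer):
    # Accept either the answer letter or the full choice text.
    letters = "ABCD"
    idx = item["choices"].index(item["correct"])
    return model_answer.strip() in (letters[idx], item["correct"])

item = {
    "task": "vocabulary",
    "question": "Welches Wort verwendet man in Oesterreich fuer 'Sahne'?",
    "choices": ["Rahm", "Schmand", "Obers", "Schmetten"],
    "correct": "Obers",
    "difficulty": "easy",
}
print(score(item, "C"))     # True: "Obers" is choice C
print(score(item, "Rahm"))  # False: common in DE, not the AT term
```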
## Usage

```python
from datasets import load_dataset

ds = load_dataset("Laborator/austrian-german-benchmark")
```
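Once loaded, per-task accuracy for a leaderboard can be aggregated along these lines (a sketch; the `predict` callable and the tiny item list are placeholders for a real model and the real dataset):

```python
from collections import defaultdict

def per_task_accuracy(items, predict):
    # predict(item) should return the model's chosen answer string.
    hits, totals = defaultdict(int), defaultdict(int)
    for item in items:
        totals[item["task"]] += 1
        hits[item["task"]] += (predict(item) == item["correct"])
    return {t: hits[t] / totals[t] for t in totals}

# Placeholder items and a trivial "model" that always answers "Obers".
items = [
    {"task": "vocabulary", "correct": "Obers"},
    {"task": "vocabulary", "correct": "Erdapfel"},
    {"task": "knowledge", "correct": "Wien"},
]
acc = per_task_accuracy(items, lambda item: "Obers")
print(acc)  # {'vocabulary': 0.5, 'knowledge': 0.0}
```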
## What I want to do with this

Run the major open models through it and publish a leaderboard. My hypothesis is that even large models score below 70% on the Austrian-specific questions. If your model does better, open a discussion; I'd genuinely like to know.
## License
Apache 2.0