---
language:
- nl
license: apache-2.0
task_categories:
- question-answering
- text-generation
size_categories:
- n<1K
tags:
- dutch
- government
- benchmark
- legal
- evaluation
- netherlands
pretty_name: DutchGovBench
---

# DutchGovBench v0.1

Evaluation benchmark for Dutch government AI systems: 100 questions across 9 categories, testing knowledge of Dutch law and public administration.

## What is this?

DutchGovBench tests whether AI models can accurately answer questions about Dutch government topics: social support law (Wmo 2015), youth law (Jeugdwet), participation law (Participatiewet), administrative law (Awb), municipal policy, objection procedures, privacy/GDPR, administrative oversight, and subsidies.

Each question includes the expected legal references and a verified answer.

## Dataset structure

```json
{
  "id": "wmo_001",
  "category": "Wmo 2015",
  "question": "Wat is het doel van de Wmo 2015?",
  "expected_refs": ["art. 1.1.1 Wmo 2015"],
  "gold_answer": "De Wmo 2015 regelt...",
  "difficulty": "easy",
  "style": "factual",
  "verifiable": true
}
```

Fields:
- **id**: Unique question identifier
- **category**: One of 9 topic categories
- **question**: Question text in Dutch
- **expected_refs**: Expected legal article references
- **gold_answer**: Verified reference answer
- **difficulty**: `easy` / `medium` / `hard`
- **style**: `factual` / `applied` / `analytical`
- **verifiable**: Whether the answer can be checked against the source text

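Records in this schema can be checked locally before use. The sketch below validates one record against the field list above; the `validate` helper and the hard-coded example values are illustrative, not part of the dataset's tooling:

```python
import json

# One record in the schema shown above (illustrative values).
record_json = '''
{
  "id": "wmo_001",
  "category": "Wmo 2015",
  "question": "Wat is het doel van de Wmo 2015?",
  "expected_refs": ["art. 1.1.1 Wmo 2015"],
  "gold_answer": "De Wmo 2015 regelt...",
  "difficulty": "easy",
  "style": "factual",
  "verifiable": true
}
'''

REQUIRED_FIELDS = {"id", "category", "question", "expected_refs",
                   "gold_answer", "difficulty", "style", "verifiable"}

def validate(record: dict) -> dict:
    """Check that a record carries every field the benchmark defines."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"record {record.get('id')} is missing: {missing}")
    assert record["difficulty"] in {"easy", "medium", "hard"}
    assert record["style"] in {"factual", "applied", "analytical"}
    return record

record = validate(json.loads(record_json))
print(record["id"], record["difficulty"])  # wmo_001 easy
```
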
## Baseline results

| Model | Avg score (-2 to +2) | Hallucination rate |
|-------|----------------------|--------------------|
| EuroLLM-9B-Instruct | -0.82 | 32% |
| EuroLLM-22B-Instruct | -0.93 | 55% |

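Both columns aggregate per-question judgments: a score on the -2 to +2 scale and a flag for whether the answer hallucinated. A minimal sketch of that aggregation, assuming a simple judgment format (this is not the benchmark's official grading harness):

```python
# Aggregate per-question judgments into the two reported metrics.
# Illustrative judgment records; real runs would have 100 of these.
judgments = [
    {"id": "wmo_001", "score": 2, "hallucinated": False},
    {"id": "wmo_002", "score": -1, "hallucinated": True},
    {"id": "awb_001", "score": 0, "hallucinated": False},
    {"id": "jw_001", "score": -2, "hallucinated": True},
]

# Mean of the -2..+2 judge scores.
avg_score = sum(j["score"] for j in judgments) / len(judgments)
# Fraction of answers flagged as hallucinating.
halluc_rate = sum(j["hallucinated"] for j in judgments) / len(judgments)

print(f"avg score: {avg_score:+.2f}")            # avg score: -0.25
print(f"hallucination rate: {halluc_rate:.0%}")  # hallucination rate: 50%
```
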
## License

Apache 2.0

## Author

CiviQs B.V. (info@civiqs.nl)