---
language:
- da
pretty_name: DaLA - Danish Linguistic Acceptability Dataset
tags:
- linguistic-acceptability
- nlp
- danish
- benchmark
- text-classification
- minimal-pairs
task_categories:
- text-classification
license: cc-by-4.0
dataset_info:
  features:
  - name: text
    dtype: string
  - name: corruption_type
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 865253
    num_examples: 6124
  - name: val
    num_bytes: 52430
    num_examples: 384
  - name: test
    num_bytes: 161207
    num_examples: 1148
  download_size: 628466
  dataset_size: 1078890
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
size_categories:
- 1K<n<10K
---
# DaLA: Danish Linguistic Acceptability Evaluation Dataset
**DaLA** ([paper][1]) is a benchmark dataset for **linguistic acceptability judgment** in Danish, designed to evaluate how well NLP models, especially large language models (LLMs), judge the grammaticality of real-world Danish sentences. The dataset extends previous resources by introducing a broader, more realistic set of error types and by providing data splits suitable for few-shot evaluation or fine-tuning.
---
## 🔗 Links
- DaLA variants are linked and described below
- [Paper][1]
- [GitHub Repository](https://github.com/N-essuno/DaLA) (code, data generation scripts)
---
## 📖 Overview
In linguistic acceptability tasks, models must distinguish between **grammatically acceptable** and **unacceptable** sentences. The DaLA dataset was created by:
- Analyzing real-world Danish writing errors.
- Designing **14 distinct corruption functions** that reflect common Danish mistakes (e.g., pronoun confusion, suffix errors, interchange of determiners).
- Applying these corruptions to correct Danish sentences from the Universal Dependencies Danish corpus.
- Pairing each corrupted sentence with its correct counterpart.
The dataset includes:
- The original correct sentences (*acceptable*).
- The corrupted sentences (*unacceptable*).
- A binary acceptability label.
- A corruption type identifier.
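For illustration, each record follows the `text` / `corruption_type` / `label` schema declared in the card metadata. A minimal sketch of working with that schema is below; note that the concrete label and corruption-type strings here are **hypothetical placeholders**, not necessarily the exact values used in the released data:

```python
# Two records mirroring DaLA's schema: `text`, `corruption_type`, `label`.
# NOTE: the label/corruption_type values below are illustrative placeholders;
# inspect the actual dataset for the exact strings it uses.
records = [
    {"text": "Hun læser en bog.", "corruption_type": "none", "label": "correct"},
    {"text": "Hun læser et bog.", "corruption_type": "determiner_swap", "label": "incorrect"},
]

# Partition into acceptable and unacceptable sentences.
acceptable = [r for r in records if r["label"] == "correct"]
unacceptable = [r for r in records if r["label"] != "correct"]

print(len(acceptable), len(unacceptable))  # 1 1
```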
---
## 📦 Dataset Variants and Splits
There are three variants of the DaLA dataset, each with different sizes and proportions:
| Split Variant | Description | Size (approx.) | Link |
|------------------|-------------|----------------|----------------|
| `dala` | Standard benchmark with proportions comparable to prior Danish acceptability datasets | 3,328 samples | [DaLA Standard](https://huggingface.co/datasets/giannor/dala) |
| `dala_medium` | Expanded version using more available samples | ~6,056 samples | [DaLA Medium](https://huggingface.co/datasets/giannor/dala_medium) |
| `dala_large` | Largest version with the full expanded dataset | ~7,656 samples | [DaLA Large](https://huggingface.co/datasets/giannor/dala_large) |
Each variant includes train, validation, and test splits.
---
## 🧠 Tasks & Usage
DaLA is primarily intended for:
✔ **Model evaluation and benchmarking**: Assessing model competence in grammatical judgment
✔ **Minimal-pair evaluation**: Error type discrimination and fine-grained analysis
You can load the dataset using the Hugging Face `datasets` library as follows:
```python
from datasets import load_dataset
# Standard split
dataset = load_dataset("giannor/dala")
# Medium or large variants
dataset_medium = load_dataset("giannor/dala_medium")
dataset_large = load_dataset("giannor/dala_large")
```
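Because every record carries a `corruption_type`, model predictions can be broken down per error category for the fine-grained analysis mentioned above. A minimal sketch, assuming string labels; the example data and predictions are toy stand-ins, not real model output:

```python
from collections import defaultdict

def accuracy_by_corruption_type(examples, predictions):
    """Per-corruption-type accuracy.

    examples    -- dicts with 'corruption_type' and 'label' keys (DaLA's schema)
    predictions -- parallel list of predicted label strings
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex, pred in zip(examples, predictions):
        ctype = ex["corruption_type"]
        total[ctype] += 1
        if pred == ex["label"]:
            correct[ctype] += 1
    return {c: correct[c] / total[c] for c in total}

# Toy data with illustrative corruption-type and label strings.
examples = [
    {"corruption_type": "pronoun_confusion", "label": "incorrect"},
    {"corruption_type": "pronoun_confusion", "label": "incorrect"},
    {"corruption_type": "suffix_error", "label": "incorrect"},
]
predictions = ["incorrect", "correct", "incorrect"]

print(accuracy_by_corruption_type(examples, predictions))
# {'pronoun_confusion': 0.5, 'suffix_error': 1.0}
```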
## 📊 Baselines & Model Performance
In the corresponding paper, DaLA was used to benchmark a variety of open-source LLMs and model types. Across many models, performance on DaLA was **lower** than on previous Danish acceptability benchmarks, highlighting DaLA’s **greater difficulty and discriminatory power**. ([DaLA paper][1])
---
## 📄 Citation
If you use this dataset in your work, please cite the following paper:
```bibtex
@misc{barmina2025daladanishlinguisticacceptability,
  title={DaLA: Danish Linguistic Acceptability Evaluation Guided by Real World Errors},
  author={Gianluca Barmina and Nathalie Carmen Hau Norman and Peter Schneider-Kamp and Lukas Galke},
  year={2025},
  eprint={2512.04799},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.04799},
}
```
---
## ⚖️ License
This dataset is shared under the **CC BY 4.0** license.
[1]: https://arxiv.org/abs/2512.04799 "DaLA: Danish Linguistic Acceptability Evaluation Guided by Real World Errors"