---
language:
  - da
pretty_name: DaLA - Danish Linguistic Acceptability Dataset
tags:
  - linguistic-acceptability
  - nlp
  - danish
  - benchmark
  - text-classification
  - minimal-pairs
task_categories:
  - text-classification
license: cc-by-4.0
dataset_info:
  features:
    - name: text
      dtype: string
    - name: corruption_type
      dtype: string
    - name: label_da
      dtype: string
    - name: label
      dtype: int64
  splits:
    - name: train
      num_bytes: 677973
      num_examples: 4592
    - name: validation
      num_bytes: 55377
      num_examples: 386
    - name: test
      num_bytes: 398975
      num_examples: 2678
    - name: full_train
      num_bytes: 796707
      num_examples: 5352
  download_size: 1048229
  dataset_size: 1929032
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
      - split: full_train
        path: data/full_train-*
size_categories:
  - 1K<n<10K
---

# DaLA: Danish Linguistic Acceptability Evaluation Dataset

NOTE: This is a variant of DaLA Standard with the labels given in Danish rather than English (as in the original dataset); the data itself is identical. The information below is the same as in the original repository.


DaLA ([paper](https://arxiv.org/abs/2512.04799)) is a benchmark dataset for linguistic acceptability judgment in Danish, designed to evaluate how well NLP models, especially large language models (LLMs), understand grammaticality in real-world Danish sentences. The dataset extends previous resources by introducing a broader and more realistic set of error types and by providing data splits suitable for few-shot evaluation or finetuning.


## 🔗 Links

- Paper: <https://arxiv.org/abs/2512.04799>


## 📖 Overview

In linguistic acceptability tasks, models must distinguish between grammatically acceptable and unacceptable sentences. The DaLA dataset was created by:

- Analyzing real-world Danish writing errors.
- Designing 14 distinct corruption functions that reflect common Danish mistakes (e.g., pronoun confusion, suffix errors, interchange of determiners).
- Applying a single corruption to each correct Danish sentence to create an incorrect counterpart, resulting in minimal pairs of sentences that differ by only one error (a toy illustration follows this list).
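
To make the minimal-pair construction concrete, here is a purely illustrative toy corruption, not one of the paper's 14 corruption functions: it swaps the Danish indefinite articles *en* and *et* to simulate a determiner error.

```python
import random

def corrupt_determiner(sentence: str, rng: random.Random) -> str:
    """Toy corruption (illustrative only, not the paper's implementation):
    swap the indefinite articles "en" and "et" to mimic a determiner error."""
    tokens = sentence.split()
    candidates = [i for i, tok in enumerate(tokens) if tok.lower() in ("en", "et")]
    if not candidates:
        return sentence  # nothing to corrupt; such sentences would simply be skipped
    i = rng.choice(candidates)
    swapped = "et" if tokens[i].lower() == "en" else "en"
    tokens[i] = swapped.capitalize() if tokens[i][0].isupper() else swapped
    return " ".join(tokens)

# The correct sentence and its corrupted counterpart form a minimal pair:
print(corrupt_determiner("Hun købte en bil i går.", random.Random(0)))
# -> "Hun købte et bil i går."  ("bil" takes "en", so the corrupted version is ungrammatical)
```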

The dataset includes (see the field-access sketch after this list):

- The original correct sentences (acceptable).
- The corrupted sentences (unacceptable).
- A binary acceptability label.
- A corruption type identifier.
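
Concretely, these correspond to the `text`, `label`, `label_da`, and `corruption_type` fields listed in the metadata above. A minimal sketch of inspecting them, assuming this repository can be loaded under the id `giannor/dala_label_da` (inferred from the repo name); check the actual label encoding against the data:

```python
from datasets import load_dataset

# Repository id assumed from this repo's name.
dataset = load_dataset("giannor/dala_label_da")

sample = dataset["test"][0]
print(sample["text"])             # the Danish sentence
print(sample["label"])            # binary acceptability label (integer)
print(sample["label_da"])         # the same label expressed in Danish
print(sample["corruption_type"])  # the corruption type behind the sentence, if any
```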

## 📦 Dataset Variants and Splits

There are three variants of the DaLA dataset, each with different sizes and proportions:

| Variant | Description | Size (approx.) | Link |
|---|---|---|---|
| dala | Standard benchmark with proportions comparable to prior Danish acceptability datasets | 3,328 samples | DaLA Standard |
| dala_medium | Expanded version using more available samples | ~6,056 samples | DaLA Medium |
| dala_large | Largest version with the full expanded dataset | ~7,656 samples | DaLA Large |

Each variant includes train, validation, and test splits.


## 🧠 Tasks & Usage

DaLA is primarily intended for:

- Model evaluation and benchmarking: assessing model competence in grammatical judgment
- Minimal-pair evaluation: error-type discrimination and fine-grained analysis

You can load the dataset using the Hugging Face datasets library as follows:

```python
from datasets import load_dataset

# Standard variant
dataset = load_dataset("giannor/dala")

# Medium and large variants
dataset_medium = load_dataset("giannor/dala_medium")
dataset_large = load_dataset("giannor/dala_large")
```
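
This repository's own metadata additionally lists a `full_train` split alongside `train`, `validation`, and `test`. A small sketch of iterating over all four, again assuming the `giannor/dala_label_da` repository id:

```python
from datasets import load_dataset

dataset = load_dataset("giannor/dala_label_da")

# Splits listed in this repository's metadata.
for split in ("train", "validation", "test", "full_train"):
    print(split, len(dataset[split]))
```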

## 📊 Baselines & Model Performance

In the corresponding paper, DaLA was used to benchmark a variety of open-source LLMs and model types. Across many models, performance on DaLA was lower than on previous Danish acceptability benchmarks, highlighting DaLA's greater difficulty and discriminatory power (see the [DaLA paper](https://arxiv.org/abs/2512.04799)).
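
The `corruption_type` field also makes it easy to break accuracy down by error type for fine-grained analysis. A minimal sketch, assuming the `giannor/dala_label_da` repository id and a hypothetical `predict(text) -> int` function provided by whatever model you are evaluating:

```python
from collections import defaultdict
from datasets import load_dataset

dataset = load_dataset("giannor/dala_label_da")

def predict(text: str) -> int:
    # Hypothetical placeholder: replace with your model's prediction (same encoding as `label`).
    return 1

correct = defaultdict(int)
total = defaultdict(int)
for sample in dataset["test"]:
    ctype = sample["corruption_type"]
    total[ctype] += 1
    correct[ctype] += int(predict(sample["text"]) == sample["label"])

for ctype in sorted(total, key=str):
    print(f"{ctype}: {correct[ctype] / total[ctype]:.2%} accuracy over {total[ctype]} examples")
```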


## 📄 Citation

If you use this dataset in your work, please cite the following paper:



```bibtex
@misc{barmina2025daladanishlinguisticacceptability,
  title={DaLA: Danish Linguistic Acceptability Evaluation Guided by Real World Errors},
  author={Gianluca Barmina and Nathalie Carmen Hau Norman and Peter Schneider-Kamp and Lukas Galke},
  year={2025},
  eprint={2512.04799},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.04799},
}
```

## ⚖️ License

This dataset is shared under the CC BY 4.0 license.