|
|
--- |
|
|
language: |
|
|
- es |
|
|
- it |
|
|
- pt |
|
|
- ru |
|
|
pretty_name: "Syntactic Agreement Test Suites" |
|
|
tags: |
|
|
- syntax |
|
|
- agreement |
|
|
- linguistics |
|
|
- targeted-syntactic-evaluation |
|
|
license: "apache-2.0" |
|
|
task_categories: |
|
|
- other |
|
|
size_categories: |
|
|
- 5K<n<10K |
|
|
language_creators: |
|
|
- expert-generated |
|
|
--- |
|
|
|
|
|
|
|
|
# SyntacticAgreement |
|
|
|
|
|
This dataset provides **manually curated syntactic agreement test suites** for four morphologically rich languages: **Italian, Spanish, Portuguese, and Russian**. |
|
|
It is designed to evaluate the ability of neural language models to capture **hierarchical syntactic dependencies**, with a focus on **agreement phenomena** that go beyond English subject–verb agreement. |
|
|
|
|
|
The dataset is intended for targeted syntactic evaluation, which does not correspond to any standard supervised NLP task; for this reason, we use the "other" task category.
|
|
|
|
|
--- |
|
|
|
|
|
## Motivation |
|
|
|
|
|
Agreement is a key linguistic phenomenon for testing whether models capture **hierarchical structure** rather than relying on surface-level patterns (Linzen et al., 2016; Goldberg, 2019). |
|
|
|
|
|
Unlike English, agreement in Romance and Slavic languages is morphologically richer and involves more diverse features. |
|
|
Our dataset evaluates state-of-the-art models on these features, providing **different agreement tests per language**, organized into **test suites**, some of which also have an **adversarial version**.
|
|
|
|
|
The test suites were **manually created by linguists** to ensure **grammaticality, semantic plausibility, and lexical diversity**, contrasting with previous approaches relying on automatically generated stimuli. |
|
|
|
|
|
--- |
|
|
|
|
|
## Sample test sentences |
|
|
|
|
|
The following examples from one of our Spanish test suites (Subject–Predicative Complement agreement) illustrate a regular test sentence and its adversarial counterpart:
|
|
|
|
|
- **Standard example:** |
|
|
Grammatical vs. ungrammatical sentence (gender mismatch) |
|
|
`Las voluntarias cayeron enfermas.` |
|
|
`*Las voluntarias cayeron enfermos.` |
|
|
'The volunteers fell ill.'
|
|
|
|
|
- **Adversarial:** |
|
|
A relative clause (in brackets) increases the distance and introduces an **agreement attractor**
|
|
`Las voluntarias [que ayudaron a los refugiados] cayeron enfermas.` |
|
|
`*Las voluntarias [que ayudaron a los refugiados] cayeron enfermos.` |
|
|
'The volunteers [who helped the refugees] fell ill.'
|
|
|
|
|
--- |
|
|
|
|
|
## Dataset structure |
|
|
|
|
|
Each language is distributed as a `.zip` file containing JSON test suites. |
|
|
|
|
|
A test suite JSON has the following structure: |
|
|
|
|
|
```json |
|
|
{ |
|
|
"meta": { |
|
|
"name": "attribute_agreement", |
|
|
"metric": "sum", |
|
|
"author": "Alba Táboas García", |
|
|
"reference": "", |
|
|
"language": "Italian", |
|
|
"comment": "Basic suite for testing nominal agreement (number and gender) between subject and attribute in copulative constructions" |
|
|
}, |
|
|
"region_meta": { |
|
|
"1": "Subject", |
|
|
"2": "Copula", |
|
|
"3": "Attribute" |
|
|
}, |
|
|
"predictions": [ |
|
|
{ |
|
|
"type": "formula", |
|
|
"formula": "(3;%match%) < (3;%mismatch_num%)", |
|
|
"comment": "Disagreement in number is more surprising than full agreement" |
|
|
}, |
|
|
{ |
|
|
"type": "formula", |
|
|
"formula": "(3;%match%) < (3;%mismatch_gend%)", |
|
|
"comment": "Disagreement in gender is more surprising than full agreement" |
|
|
}, |
|
|
{ |
|
|
"type": "formula", |
|
|
"formula": "(3;%match%) < (3;%mismatch_num_gend%)", |
|
|
"comment": "Disagreement in gender and number is more surprising than full agreement" |
|
|
} |
|
|
], |
|
|
"items": [ |
|
|
{ |
|
|
"item_number": 1, |
|
|
"conditions": [ |
|
|
{ |
|
|
"condition_name": "match", |
|
|
"regions": [ |
|
|
{ |
|
|
"region_number": 1, |
|
|
"content": "La storia" |
|
|
}, |
|
|
{ |
|
|
"region_number": 2, |
|
|
"content": "era" |
|
|
}, |
|
|
{ |
|
|
"region_number": 3, |
|
|
"content": "lunga." |
|
|
} |
|
|
] |
|
|
}, |
|
|
{ |
|
|
"condition_name": "mismatch_num", |
|
|
"regions": [ |
|
|
{ |
|
|
"region_number": 1, |
|
|
"content": "La storia" |
|
|
}, |
|
|
{ |
|
|
"region_number": 2, |
|
|
"content": "era" |
|
|
}, |
|
|
{ |
|
|
"region_number": 3, |
|
|
"content": "lunghe." |
|
|
} |
|
|
] |
|
|
}, |
|
|
{ |
|
|
"condition_name": "mismatch_gend", |
|
|
"regions": [ |
|
|
{ |
|
|
"region_number": 1, |
|
|
"content": "La storia" |
|
|
}, |
|
|
{ |
|
|
"region_number": 2, |
|
|
"content": "era" |
|
|
}, |
|
|
{ |
|
|
"region_number": 3, |
|
|
"content": "lungo." |
|
|
} |
|
|
] |
|
|
}, |
|
|
{ |
|
|
"condition_name": "mismatch_num_gend", |
|
|
"regions": [ |
|
|
{ |
|
|
"region_number": 1, |
|
|
"content": "La storia" |
|
|
}, |
|
|
{ |
|
|
"region_number": 2, |
|
|
"content": "era" |
|
|
}, |
|
|
{ |
|
|
"region_number": 3, |
|
|
"content": "lunghi." |
|
|
} |
|
|
] |
|
|
} |
|
|
] |
|
|
} |
|
|
|
|
|
|
|
|
] |
|
|
} |
|
|
|
|
|
``` |
|
|
|
|
|
- **meta**: suite-level metadata (name, author, language, description). |
|
|
- **region_meta**: mapping of region indices to linguistic roles. |
|
|
- **predictions**: formulas defining the expected surprisal relations across conditions. |
|
|
- **items**: each test item contains a set of conditions (grammatical vs. systematically ungrammatical variants). |
|
|
|
|
|
This format follows the structure of the [SyntaxGym](https://aclanthology.org/2020.acl-demos.10.pdf) test suites introduced by [Hu et al. (2020)](https://aclanthology.org/2020.acl-main.158.pdf) and extended to Spanish by [Pérez-Mayos et al. (2021)](https://aclanthology.org/2021.findings-acl.333.pdf).
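
As a sketch, a suite in this format can be parsed with the standard library, joining each condition's regions (in region order) into a full sentence. The fragment below is an abridged, hand-written version of the Italian example above, not a real suite file:

```python
import json

# Abridged suite fragment mirroring the structure above (illustrative only).
suite_json = """
{
  "meta": {"name": "attribute_agreement", "language": "Italian"},
  "region_meta": {"1": "Subject", "2": "Copula", "3": "Attribute"},
  "items": [
    {
      "item_number": 1,
      "conditions": [
        {"condition_name": "match",
         "regions": [{"region_number": 1, "content": "La storia"},
                     {"region_number": 2, "content": "era"},
                     {"region_number": 3, "content": "lunga."}]},
        {"condition_name": "mismatch_num",
         "regions": [{"region_number": 1, "content": "La storia"},
                     {"region_number": 2, "content": "era"},
                     {"region_number": 3, "content": "lunghe."}]}
      ]
    }
  ]
}
"""

suite = json.loads(suite_json)

def condition_sentence(condition):
    """Join a condition's regions, sorted by region_number, into a full sentence."""
    regions = sorted(condition["regions"], key=lambda r: r["region_number"])
    return " ".join(r["content"] for r in regions)

for item in suite["items"]:
    for cond in item["conditions"]:
        print(item["item_number"], cond["condition_name"], condition_sentence(cond))
```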
|
|
|
|
|
--- |
|
|
|
|
|
## Loading the dataset |
|
|
|
|
|
The dataset can be loaded directly from the Hugging Face Hub: |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
# Load the Spanish test suites |
|
|
ds = load_dataset("albalbalba/SyntacticAgreement", name="spanish", split='train', trust_remote_code=True) |
|
|
|
|
|
# List all the available test suites for the selected language: |
|
|
print(set(ds[:]['suite_name'])) |
|
|
|
|
|
# Select one test suite in particular: attribute agreement |
|
|
attribute_suite = ds.filter(lambda example: example['suite_name'] == 'attribute_agreement')
|
|
``` |
|
|
|
|
|
Each example has the following schema: |
|
|
|
|
|
- **suite_name** (`string`) |
|
|
- **item_number** (`int32`) |
|
|
- **conditions** (`list`) |
|
|
- **condition_name** (`string`) |
|
|
- **content** (`string`) |
|
|
- **regions** (list of `{region_number, content}`) |
|
|
- **predictions** (`list[string]`) |
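
For evaluation, each example can be unpacked into pairs of a grammatical target and an ungrammatical alternative. A minimal sketch, assuming each item has exactly one grammatical `match` condition (as in the suites above) and using a hand-written dict in place of a loaded row:

```python
# Hypothetical example mirroring the schema above; values echo the Italian suite.
example = {
    "suite_name": "attribute_agreement",
    "item_number": 1,
    "conditions": [
        {"condition_name": "match", "content": "La storia era lunga."},
        {"condition_name": "mismatch_num", "content": "La storia era lunghe."},
        {"condition_name": "mismatch_gend", "content": "La storia era lungo."},
        {"condition_name": "mismatch_num_gend", "content": "La storia era lunghi."},
    ],
}

def grammatical_pairs(example):
    """Pair the grammatical 'match' condition with each ungrammatical variant."""
    by_name = {c["condition_name"]: c["content"] for c in example["conditions"]}
    target = by_name.pop("match")
    return [(target, alternative) for alternative in by_name.values()]

pairs = grammatical_pairs(example)  # three (grammatical, ungrammatical) pairs
```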
|
|
|
|
|
--- |
|
|
|
|
|
## Evaluation methodology |
|
|
|
|
|
We recommend evaluating models with: |
|
|
|
|
|
- **[minicons](https://github.com/kanishkamisra/minicons)** (Misra, 2022) — for surprisal and probability computations. |
|
|
- **Bidirectional models**: use the modified scoring technique by |
|
|
[Kauf & Ivanova (2023)](https://aclanthology.org/2023.acl-short.80.pdf) (masking rightward tokens within |
|
|
the same word). |
|
|
- **Causal models**: apply the correction of |
|
|
[Pimentel & Meister (2024)](https://aclanthology.org/2024.emnlp-main.1020.pdf) to handle tokenization effects. |
|
|
|
|
|
### Recommended scoring metric |
|
|
|
|
|
Instead of binary accuracy, we recommend the **mean probability ratio**: |
|
|
|
|
|
$$ |
|
|
\text{Score(item)} = \frac{1}{n} \sum_{x_i \in I} \frac{p(x_t | c)}{p(x_t | c) + p(x_i | c)} |
|
|
$$ |
|
|
|
|
|
- $x_t$: grammatical target |
|
|
- $x_i$: ungrammatical alternative |
|
|
- $c$: context (left for causal models, both left and right for bidirectional ones) |
|
|
- $I$: set of $n$ ungrammatical alternatives in the item
|
|
|
|
|
Values $> 0.5$ indicate the model prefers the grammatical form, with higher values meaning stronger preference. |
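
The metric follows directly from the formula above. In the sketch below, the probabilities are made-up placeholders for values one would obtain from a model (e.g. via minicons):

```python
def item_score(p_target, p_alternatives):
    """Mean probability ratio: p(x_t|c) / (p(x_t|c) + p(x_i|c)),
    averaged over the n ungrammatical alternatives x_i."""
    return sum(p_target / (p_target + p_alt) for p_alt in p_alternatives) / len(p_alternatives)

# Illustrative (made-up) sentence probabilities under some model:
score = item_score(0.02, [0.005, 0.004, 0.001])  # ≈ 0.86 → grammatical form preferred
```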
|
|
|
|
|
--- |
|
|
|
|
|
### Minimal evaluation pipeline example |
|
|
|
|
|
Coming soon... |
|
|
|
|
|
--- |
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use this dataset, please cite: |
|
|
[Assessing the Agreement Competence of Large Language Models](https://aclanthology.org/2025.depling-1.4/) |
|
|
(Táboas García & Wanner, DepLing-SyntaxFest 2025) |