---
language: 
- es  # Spanish
- it  # Italian
- pt  # Portuguese
- ru  # Russian
pretty_name: "Syntactic Agreement Test Suites"
tags:
- syntax
- agreement
- linguistics
- targeted-syntactic-evaluation
license: "apache-2.0"
task_categories:
- other
size_categories:
- 5K<n<10K
language_creators:
- expert-generated
---


# SyntacticAgreement

This dataset provides **manually curated syntactic agreement test suites** for four morphologically rich languages: **Italian, Spanish, Portuguese, and Russian**.  
It is designed to evaluate the ability of neural language models to capture **hierarchical syntactic dependencies**, with a focus on **agreement phenomena** that go beyond English subject–verb agreement.

Because targeted syntactic evaluation does not fit standard supervised NLP tasks, we use the "other" task category.

---

## Motivation

Agreement is a key linguistic phenomenon for testing whether models capture **hierarchical structure** rather than relying on surface-level patterns (Linzen et al., 2016; Goldberg, 2019).  

Unlike English, agreement in Romance and Slavic languages is morphologically richer and involves more diverse features.  
Our dataset aims to evaluate state-of-the-art models on these features, providing **language-specific agreement tests** organized into **test suites**, some of which also come in an **adversarial version**.  

The test suites were **manually created by linguists** to ensure **grammaticality, semantic plausibility, and lexical diversity**, contrasting with previous approaches relying on automatically generated stimuli.

---

## Sample test sentences

The following examples from one of our Spanish test suites (Subject - Predicative Complement agreement) illustrate a 
regular test sentence and its adversarial counterpart:  

- **Standard example:**  
Grammatical vs. ungrammatical sentence (gender mismatch)  
  `Las voluntarias cayeron enfermas.`  
  `*Las voluntarias cayeron enfermos.`  
'The volunteers fell ill.'

- **Adversarial:**  
A relative clause (between brackets) increases distance and introduces an **agreement attractor**  
  `Las voluntarias [que ayudaron a los refugiados] cayeron enfermas.`    
  `*Las voluntarias [que ayudaron a los refugiados] cayeron enfermos.`    
'The volunteers [who helped the refugees] fell ill.'

---

## Dataset structure

Each language is distributed as a `.zip` file containing JSON test suites.  

A test suite JSON has the following structure:

```json
{
  "meta": {
    "name": "attribute_agreement",
    "metric": "sum",
    "author": "Alba Táboas García",
    "reference": "",
    "language": "Italian",
    "comment": "Basic suite for testing nominal agreement (number and gender) between subject and attribute in copulative constructions"
  },
  "region_meta": {
    "1": "Subject",
    "2": "Copula",
    "3": "Attribute"
  },
  "predictions": [
    {
      "type": "formula",
      "formula": "(3;%match%) < (3;%mismatch_num%)",
      "comment": "Disagreement in number is more surprising than full agreement"
    },
    {
      "type": "formula",
      "formula": "(3;%match%) < (3;%mismatch_gend%)",
      "comment": "Disagreement in gender is more surprising than full agreement"
    },
    {
      "type": "formula",
      "formula": "(3;%match%) < (3;%mismatch_num_gend%)",
      "comment": "Disagreement in gender and number is more surprising than full agreement"
    }
  ],
  "items": [
    {
      "item_number": 1,
      "conditions": [
        {
          "condition_name": "match",
          "regions": [
            {
              "region_number": 1,
              "content": "La storia"
            },
            {
              "region_number": 2,
              "content": "era"
            },
            {
              "region_number": 3,
              "content": "lunga."
            }
          ]
        },
        {
          "condition_name": "mismatch_num",
          "regions": [
            {
              "region_number": 1,
              "content": "La storia"
            },
            {
              "region_number": 2,
              "content": "era"
            },
            {
              "region_number": 3,
              "content": "lunghe."
            }
          ]
        },
        {
          "condition_name": "mismatch_gend",
          "regions": [
            {
              "region_number": 1,
              "content": "La storia"
            },
            {
              "region_number": 2,
              "content": "era"
            },
            {
              "region_number": 3,
              "content": "lungo."
            }
          ]
        },
        {
          "condition_name": "mismatch_num_gend",
          "regions": [
            {
              "region_number": 1,
              "content": "La storia"
            },
            {
              "region_number": 2,
              "content": "era"
            },
            {
              "region_number": 3,
              "content": "lunghi."
            }
          ]
        }
      ]
    }
  ]
}

```

- **meta**: suite-level metadata (name, author, language, description).  
- **region_meta**: mapping of region indices to linguistic roles.  
- **predictions**: formulas defining the expected surprisal relations across conditions.  
- **items**: each test item contains a set of conditions (grammatical vs. systematically ungrammatical variants).  

This structure follows that of the [SyntaxGym](https://aclanthology.org/2020.acl-demos.10.pdf) test 
suites introduced by [Hu et al. (2020)](https://aclanthology.org/2020.acl-main.158.pdf) and extended to Spanish by 
[Pérez-Mayos et al. (2021)](https://aclanthology.org/2021.findings-acl.333.pdf).
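
The prediction formulas compare summed region surprisals across conditions. As a minimal illustration (not part of the dataset or of SyntaxGym itself), the binary `(region;%condition%) < (region;%condition%)` pattern used above can be checked with a short helper; the function name and the surprisal values below are hypothetical:

```python
import re

def eval_prediction(formula, surprisals):
    """Evaluate a simple '(region;%cond%) < (region;%cond%)' formula.

    `surprisals` maps (condition_name, region_number) -> summed surprisal
    for that region. Only the binary '<' pattern used above is handled.
    """
    pattern = r"\((\d+);%(\w+)%\)\s*<\s*\((\d+);%(\w+)%\)"
    m = re.fullmatch(pattern, formula.strip())
    if m is None:
        raise ValueError(f"unsupported formula: {formula}")
    left_region, left_cond, right_region, right_cond = m.groups()
    left = surprisals[(left_cond, int(left_region))]
    right = surprisals[(right_cond, int(right_region))]
    return left < right

# Hypothetical per-region surprisals for one item (in nats):
surprisals = {("match", 3): 4.2, ("mismatch_num", 3): 7.9}
print(eval_prediction("(3;%match%) < (3;%mismatch_num%)", surprisals))  # True
```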

---

## Loading the dataset

The dataset can be loaded directly from the Hugging Face Hub:  

```python
from datasets import load_dataset

# Load the Spanish test suites
ds = load_dataset("albalbalba/SyntacticAgreement", name="spanish", split='train', trust_remote_code=True)

# List all the available test suites for the selected language:
print(set(ds[:]['suite_name']))

# Select one test suite in particular: attribute agreement
attribute_suite = ds.filter(lambda example: example['suite_name'] == 'attribute_agreement')
```

Each example has the following schema:

- **suite_name** (`string`)  
- **item_number** (`int32`)  
- **conditions** (`list`)  
  - **condition_name** (`string`)  
  - **content** (`string`)  
  - **regions** (list of `{region_number, content}`)  
- **predictions** (`list[string]`)  
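
As a sketch of how the schema fits together, the ordered regions of each condition can be joined back into a full sentence. The snippet below runs on a trimmed inline fragment of the suite format shown earlier (the helper name is illustrative):

```python
import json

# A trimmed test-suite fragment in the format shown above.
suite = json.loads("""
{
  "items": [
    {
      "item_number": 1,
      "conditions": [
        {"condition_name": "match",
         "regions": [{"region_number": 1, "content": "La storia"},
                     {"region_number": 2, "content": "era"},
                     {"region_number": 3, "content": "lunga."}]},
        {"condition_name": "mismatch_num",
         "regions": [{"region_number": 1, "content": "La storia"},
                     {"region_number": 2, "content": "era"},
                     {"region_number": 3, "content": "lunghe."}]}
      ]
    }
  ]
}
""")

def condition_sentences(suite):
    """Join the ordered regions of every condition into a full sentence."""
    out = {}
    for item in suite["items"]:
        for cond in item["conditions"]:
            regions = sorted(cond["regions"], key=lambda r: r["region_number"])
            out[(item["item_number"], cond["condition_name"])] = \
                " ".join(r["content"] for r in regions)
    return out

print(condition_sentences(suite)[(1, "match")])  # La storia era lunga.
```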

---

## Evaluation methodology

We recommend evaluating models with:  

- **[minicons](https://github.com/kanishkamisra/minicons)** (Misra, 2022) — for surprisal and probability computations.  
- **Bidirectional models**: use the modified scoring technique by 
[Kauf & Ivanova (2023)](https://aclanthology.org/2023.acl-short.80.pdf) (masking rightward tokens within
the same word).  
- **Causal models**: apply the correction of 
[Pimentel & Meister (2024)](https://aclanthology.org/2024.emnlp-main.1020.pdf) to handle tokenization effects.  

### Recommended scoring metric

Instead of binary accuracy, we recommend the **mean probability ratio**:

$$
\text{Score(item)} = \frac{1}{n} \sum_{x_i \in I} \frac{p(x_t | c)}{p(x_t | c) + p(x_i | c)}
$$

- $x_t$: grammatical target  
- $x_i$: ungrammatical alternative  
- $c$: context (left for causal models, both left and right for bidirectional ones) 
- $I$: the set of $n$ incorrect alternatives included in the item  

Values $> 0.5$ indicate the model prefers the grammatical form, with higher values meaning stronger preference.  
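
As an illustrative sketch (the probability values are hypothetical), the metric can be computed directly from a model's word probabilities:

```python
def mean_probability_ratio(p_target, p_incorrect):
    """Mean probability ratio for one item.

    p_target: probability of the grammatical target word given its context.
    p_incorrect: probabilities of the n ungrammatical alternatives.
    """
    ratios = [p_target / (p_target + p_i) for p_i in p_incorrect]
    return sum(ratios) / len(ratios)

# Hypothetical probabilities for the grammatical form 'enfermas'
# vs. the gender-mismatched alternative 'enfermos':
score = mean_probability_ratio(0.06, [0.02])
print(round(score, 3))  # 0.75 -> the model prefers the grammatical form
```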

---

### Minimal evaluation pipeline example

Coming soon...

---

## Citation

If you use this dataset, please cite:
[Assessing the Agreement Competence of Large Language Models](https://aclanthology.org/2025.depling-1.4/) 
(Táboas García & Wanner, DepLing-SyntaxFest 2025)