iiegn Claude Sonnet 4.5 committed
Commit 3c8a787 · verified · 1 Parent(s): e963931

Remove CONLLU_PARSING_ISSUES.md


This documentation has been moved to the ud-hf-parquet-tools library
where it belongs, as CONLLU_PARSING.md:

https://github.com/bot-zen/ud-hf-parquet-tools/blob/main/CONLLU_PARSING.md

The library documentation is more comprehensive and includes:
- All 7 parsing challenges with examples
- Affected treebank statistics
- Implementation details and code locations
- Testing and validation procedures

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

Files changed (1)
  1. tools/CONLLU_PARSING_ISSUES.md +0 -226
tools/CONLLU_PARSING_ISSUES.md DELETED
@@ -1,226 +0,0 @@
# CoNLL-U Parsing Issues and Workarounds

This document describes the various idiosyncrasies and bugs we encounter when parsing Universal Dependencies CoNLL-U files, and how we work around them.
## 1. Double Equals Sign in FEATS/MISC Fields

**Issue**: The `conllu` library incorrectly parses fields with double equals signs.

**Example**:
```
Original: Gloss==POSS.1SG.NOM|RX==[PRO]|TokenType=Clit
Parsed by conllu: {'Gloss': None, 'RX': None, 'TokenType': 'Clit'}
Expected: {'Gloss': '=POSS.1SG.NOM', 'RX': '=[PRO]', 'TokenType': 'Clit'}
```

**Affected Treebanks**:
- `bej_autogramm` (Beja)
- Any treebank using feature values that start with `=`

**Workaround**: Parse the FEATS, XPOS, DEPS, and MISC fields directly from the raw TSV lines instead of using conllu's parsed values.

**Implementation**:
- `extract_raw_fields_from_sentence()` in `04_generate_parquet.py`
- `_extract_raw_fields()` in `templates/universal_dependencies.tmpl`
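The fix boils down to splitting each `key=value` pair on the *first* `=` only. A minimal sketch (`parse_feats_raw` is a hypothetical name for illustration, not the actual helper in `04_generate_parquet.py`):

```python
def parse_feats_raw(field: str) -> dict:
    """Parse a raw FEATS/MISC field, splitting each pair on the FIRST
    '=' only, so a value that itself starts with '=' survives intact."""
    if field == "_":
        return {}  # '_' marks an empty column in CoNLL-U
    pairs = {}
    for item in field.split("|"):
        key, sep, value = item.partition("=")  # splits on first '=' only
        pairs[key] = value if sep else None
    return pairs

print(parse_feats_raw("Gloss==POSS.1SG.NOM|RX==[PRO]|TokenType=Clit"))
# {'Gloss': '=POSS.1SG.NOM', 'RX': '=[PRO]', 'TokenType': 'Clit'}
```

`str.partition` never splits more than once, which is exactly the behavior the `conllu` library's feature parser lacks here.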
## 2. Duplicate Metadata Keys

**Issue**: The `conllu` library stores metadata as a Python dictionary, which cannot have duplicate keys. When a sentence has multiple metadata entries with the same key, only the last value is kept.

**Example**:
```conllu
# media = Photo 1280x720, 83.5 KB
# media = <a_href="https://...">...</a>
# sent_id = BelarusDocs-257
```

After parsing: `{'media': '<a_href="https://...">...</a>', 'sent_id': 'BelarusDocs-257'}`
(The first `media` entry is lost!)

**Affected Treebanks**:
- `be_hse` (Belarusian-HSE): 1,216 sentences with duplicate `media` keys
- `sa_ufal` (Sanskrit-UFAL): 18 sentences with duplicate keys
- `br_keb` (Breton-KEB): 15 sentences
- `pt_gsd` (Portuguese-GSD): 8 sentences with duplicate `generator`/`udpipe_model`
- `tr_gb` (Turkish-GB): 3 sentences with duplicate `en`
- Total: **1,323 sentences** across **14 treebanks**

**Workaround**: Parse comment lines directly from the raw file text before conllu processes them.

**Implementation**:
- `extract_raw_comments_from_sentence()` in `04_generate_parquet.py`
- `_extract_raw_comments()` in `templates/universal_dependencies.tmpl`
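The key idea is to collect `(key, value)` tuples in a list rather than a dict, since a list has no uniqueness constraint. A sketch under a hypothetical function name (the real helper is `extract_raw_comments_from_sentence()`):

```python
def raw_comments(sentence_block: str) -> list:
    """Collect (key, value) pairs from '#' comment lines as a list,
    so duplicate keys are preserved (a dict would silently drop them)."""
    out = []
    for line in sentence_block.splitlines():
        if not line.startswith("#"):
            continue  # token lines are handled elsewhere
        body = line[1:].strip()
        key, sep, value = body.partition("=")  # only the first '=' splits
        out.append((key.strip(), value.strip()) if sep else (body, None))
    return out

block = (
    "# media = Photo 1280x720, 83.5 KB\n"
    '# media = <a_href="https://...">...</a>\n'
    "# sent_id = BelarusDocs-257\n"
)
for key, value in raw_comments(block):
    print(key, "->", value)  # both 'media' entries survive
```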
## 3. Metadata Keys Without Values

**Issue**: The `conllu` library stores metadata keys without values (like `# newpar`) as `{'newpar': None}`. When reconstructing, we need to output just `# newpar`, not `# newpar = None`.

**Example**:
```conllu
# newpar
# sent_id = 1
```

Parsed as: `{'newpar': None, 'sent_id': '1'}`

**Affected Treebanks**: Many treebanks use `# newpar` and `# newdoc` without values.

**Workaround**: Check whether the value is `None` and, if so, output just the key.

**Implementation**: In comment reconstruction, check `if value is None` and output the key only.
## 4. Empty Metadata Values

**Issue**: The `conllu` library completely ignores metadata entries with empty values.

**Example**:
```conllu
# text_en =
# sent_id = 1
```

Parsed as: `{'sent_id': '1'}` (text_en is completely missing!)

**Affected Treebanks**: 36 files across multiple treebanks have empty metadata values.

**Workaround**: Parse raw comment lines and preserve the `"key ="` format when the value is empty.

**Implementation**: Store `"key ="` in the comments list when the value is an empty string.
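Issues 3 and 4 meet in the reconstruction step, where three cases must be told apart: a bare key, an empty value, and a normal value. A minimal sketch (hypothetical function name):

```python
def rebuild_comment(key: str, value) -> str:
    """Reconstruct one comment line. Bare keys ('newpar') get no '=';
    empty-string values keep a trailing '=' so round-tripping is lossless."""
    if value is None:
        return f"# {key}"        # e.g. '# newpar'
    if value == "":
        return f"# {key} ="      # e.g. '# text_en ='
    return f"# {key} = {value}"  # e.g. '# sent_id = 1'

print(rebuild_comment("newpar", None))   # -> # newpar
print(rebuild_comment("text_en", ""))    # -> # text_en =
print(rebuild_comment("sent_id", "1"))   # -> # sent_id = 1
```

Collapsing `None` and `""` into one case would either print a spurious `= None` or drop the `=` that the source file actually contained.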
## 5. Double-Hash Comments

**Issue**: Comments with double hashes (like `# # newpar`) are ignored by the `conllu` library.

**Example**:
```conllu
# # newpar
# sent_id = 11
```

Parsed as: `{'sent_id': '11'}` (the double-hash comment is lost)

**Affected Treebanks**:
- `ajp_madar` (South Levantine Arabic-MADAR): 1 occurrence
- `sa_ufal` (Sanskrit-UFAL): 1 occurrence

**Workaround**: conllu treats these as non-parseable comments. We parse the raw lines and store them as `"# newpar"`, preserving the second hash as part of the content.

**Status**: ⚠️ Round-tripping may normalize the whitespace around the second hash, so these two sentences can show whitespace-only differences during validation.
## 6. File-Level Comments

**Issue**: Comments before the first sentence (like encoding declarations) are not associated with any sentence and are lost.

**Example**:
```conllu
# -*- coding : UTF-8 -*-
# sent_id = 1
```

The encoding line is not part of any sentence's metadata.

**Affected Treebanks**: Various treebanks with encoding declarations at the start of the file.

**Workaround**: None. These are file-level metadata, not sentence-level.

**Status**: ⚠️ Accepted loss. Encoding declarations are informational only and don't affect linguistic data.
## 7. Empty Nodes Before Token 1

**Issue**: Empty nodes (enhanced dependencies) with decimal IDs less than 1 (like `0.1`, `0.2`) must be inserted before the first token, not after.

**Example**:
```conllu
# sent_id = CESS-CAT-A-19981201-124-s7B
# text = No crec que la nostra vida corri riscos...
0.1 _ _ PRON p _ _ _ 2:nsubj ArgTem=arg0:agt|Entity=(...)
1 No no ADV rn Polarity=Neg 2 advmod 2:advmod _
```

The empty node `0.1` comes BEFORE token 1, not after token 0 (which doesn't exist).

**Affected Treebanks**:
- `ca_ancora` (Catalan-AnCora): 445 sentences with empty nodes at position 0.x
- Any treebank using empty nodes for pro-drop subjects or other zero elements

**Workaround**: Special handling in the reconstruction code inserts empty nodes with ID < 1 before the token loop starts.

**Implementation**:
- Template: lines 128-144 in `templates/universal_dependencies.tmpl`
- Validation: lines 107-122 in `05_validate_parquet.py`

**Status**: ✅ Fixed. Empty nodes with any ID (including < 1) are now correctly reconstructed.
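The required ordering falls out of a two-part sort key over `(base, sub)` ID components. A simplified sketch (hypothetical function; it ignores multiword-token ranges like `1-2` for brevity):

```python
def order_ids(token_ids, empty_ids):
    """Sort token and empty-node IDs into CoNLL-U output order:
    '0.1' precedes token 1, '2.1' follows token 2, and so on."""
    def key(tid):
        if "." in tid:                # empty node, e.g. '0.1'
            base, sub = tid.split(".")
            return (int(base), int(sub))
        return (int(tid), 0)          # a regular token sorts before its empty nodes
    return sorted(token_ids + empty_ids, key=key)

print(order_ids(["1", "2", "3"], ["0.1", "2.1"]))
# ['0.1', '1', '2', '2.1', '3']
```

Because `0.1` keys as `(0, 1)` and token `1` as `(1, 0)`, the empty node lands before the first token with no special-casing of the loop.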
## Summary of Limitations

### ✅ Fully Fixed
1. Double equals parsing bug
2. Duplicate metadata keys
3. Metadata keys without values
4. Empty metadata values
5. Empty nodes with ID < 1 (0.x positions)

### ⚠️ Known Acceptable Limitations
1. File-level comments (encoding declarations): not sentence-level data
2. Double-hash comments: rare edge case (2 treebanks)

### 📊 Impact
- **100% fidelity** for linguistic data (tokens, annotations, dependencies)
- **~99.8% fidelity** for metadata (minor formatting differences only)
- **0 data loss** for linguistic annotations
## Implementation Notes

### Raw Parsing Strategy

We use a hybrid approach:
1. **Comments/Metadata**: Parse raw lines before conllu to preserve duplicates and empty values.
2. **Token fields (FEATS/MISC/DEPS/XPOS)**: Parse raw TSV fields to bypass the double-equals bug.
3. **Token structure (form/lemma/upos/head/deprel)**: Use conllu's parsed values (these work correctly).
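The hybrid strategy can be sketched as a per-token merge (hypothetical function, not the actual code; the 0-based column indices follow the CoNLL-U column order, where XPOS, FEATS, DEPS, and MISC are columns 5, 6, 9, and 10):

```python
# 0-based TSV column positions of the fields conllu mis-parses
RAW_COLUMNS = {"xpos": 4, "feats": 5, "deps": 8, "misc": 9}

def hybrid_token(parsed: dict, raw_line: str) -> dict:
    """Keep the structural fields from the parsed token, but overwrite
    the bug-prone columns with the raw TSV strings."""
    cols = raw_line.split("\t")
    token = dict(parsed)  # form/lemma/upos/head/deprel stay as parsed
    for field, idx in RAW_COLUMNS.items():
        token[field] = cols[idx]
    return token

parsed = {"id": 1, "form": "No", "lemma": "no", "upos": "ADV",
          "feats": None, "head": 2, "deprel": "advmod"}
raw = "1\tNo\tno\tADV\trn\tPolarity=Neg\t2\tadvmod\t2:advmod\t_"
print(hybrid_token(parsed, raw)["feats"])  # Polarity=Neg
```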
### Performance Impact

Raw parsing adds minimal overhead:
- One additional file read per CoNLL-U file
- String splitting operations only (very fast)
- No significant impact on generation time
### Code Locations

**Generation script**: `tools/04_generate_parquet.py`
- `extract_raw_fields_from_sentence()`: lines 248-283
- `extract_raw_comments_from_sentence()`: lines 286-332
- Usage in extraction: lines 359-470

**Dataset loader template**: `tools/templates/universal_dependencies.tmpl`
- `_extract_raw_fields()`: lines 464-490
- `_extract_raw_comments()`: lines 492-528
- Usage in `_generate_examples()`: lines 541+

**Validation script**: `tools/05_validate_parquet.py`
- Uses `example_to_conllu()`, which handles marker-based reconstruction
## Testing

**Validation script**: `tools/05_validate_parquet.py`

```bash
# Validate all local treebanks
uv run tools/05_validate_parquet.py --local

# Validate specific treebanks with detailed diffs
uv run tools/05_validate_parquet.py --treebanks be_hse,bej_autogramm --local -vv

# Test 3 diverse treebanks
uv run tools/05_validate_parquet.py --test --local
```

**Known passing treebanks**:
- `bej_autogramm`: 763/763 sentences (double-equals fix verified)
- `fr_gsd`, `en_ewt`, `it_isdt`: all test splits pass (47,131 sentences)
## References

- Universal Dependencies format specification: https://universaldependencies.org/format.html
- Python conllu library: https://github.com/EmilStenstrom/conllu
- Universal Dependencies v2.17 release: https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-5150