iiegn Claude Sonnet 4.5 commited on
Commit
33f6a97
·
verified ·
1 Parent(s): a6dc41f

Upgrade to loader v2.0: Parquet format, MWT support, bug fixes

Browse files

Major version upgrade from implicit v1.0 to explicit v2.0 with breaking changes.

## Breaking Changes

- **Requires datasets>=4.0.0** for Parquet format support
- Python dataset scripts deprecated in HuggingFace datasets 4.0.0
- No more trust_remote_code required (security improvement)

- **Token sequences now correctly exclude MWT surface forms**
- v1.0 incorrectly included Multi-Word Token (MWT) surface forms
- Affects ~50+ treebanks with contractions (French, Italian, Portuguese, etc.)
- Token counts now match UD stats.xml 'words' count, not 'tokens' count
- Example: French 'des' (1-2) now correctly yields only ['de', 'les']

## Added

- **Parquet format distribution** (tools/04_generate_parquet.py)
- 5-10x faster loading compared to on-the-fly CoNLL-U parsing
- Compatible with HuggingFace datasets >=4.0.0
- Reduced memory footprint and better compression

- **Multi-Word Token (MWT) support**
- New 'mwt' field in schema: [{id, form, misc}]
- Enables research on contractions, clitics, word segmentation
- MWT statistics now collected from stats.xml (num_fused)

- **Separate semantic versioning**
- Loader version (2.0.0) in pyproject.toml
- UD data version (2.17) tracked separately
- Clear distinction between tooling and data versions

- **Comprehensive documentation**
- CHANGELOG.md: Complete release notes
- MIGRATION.md: v1.x to v2.0 migration guide
- ADDING_NEW_UD_VERSION.md: Guide for future UD releases
- Updated README.md with v2.0 features and examples

## Fixed

- **Critical bug: MWT lines in token sequences**
- tools/templates/universal_dependencies.tmpl:178
- Now filters with: sent.filter(id=lambda x: type(x) is int)
- Excludes MWT lines (tuple IDs) and empty nodes (decimal IDs)
- All token-level fields now correctly aligned

## Changed

- pyproject.toml: version 0.1.0 → 2.0.0
- pyproject.toml: added pyarrow>=14.0.0, datasets>=4.0.0, conllu>=5.0.0
- tools/02_traverse_ud_repos.py: now collects 'fused' counts from stats.xml
- tools/templates/universal_dependencies.tmpl: extended schema, MWT extraction
- README.md: added v2.0 highlights, MWT examples, updated code samples

Repository: commul/universal_dependencies
UD Data Version: 2.17
Loader Version: 2.0.0

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

ADDING_NEW_UD_VERSION.md ADDED
@@ -0,0 +1,383 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Adding a New Universal Dependencies Version
2
+
3
+ This guide explains how to add a new Universal Dependencies release (e.g., UD 2.18, 2.19, etc.) to the `commul/universal_dependencies` HuggingFace dataset.
4
+
5
+ ## Prerequisites
6
+
7
+ - Git repository cloned and up to date
8
+ - Python environment with dependencies installed:
9
+ ```bash
10
+ pip install -r requirements.txt # or use uv
11
+ ```
12
+ - Access to push to `commul/universal_dependencies` on HuggingFace Hub
13
+ - `huggingface-cli` installed and authenticated:
14
+ ```bash
15
+ pip install huggingface_hub
16
+ huggingface-cli login
17
+ ```
18
+
19
+ ## Overview
20
+
21
+ Each UD version (2.7, 2.8, ..., 2.17, 2.18, ...) has its own git branch. The loader version (v2.0) is consistent across branches. When a new UD release is published, you create a new branch and run the pipeline to generate dataset files.
22
+
23
+ ## Step-by-Step Guide
24
+
25
+ ### 1. Check for New UD Release
26
+
27
+ Visit [Universal Dependencies releases](https://universaldependencies.org/) to check for new versions.
28
+
29
+ For this example, we'll add **UD 2.18** (replace with actual version).
30
+
31
+ ### 2. Create New Branch
32
+
33
+ ```bash
34
+ # Ensure you're on the latest main/template branch
35
+ git checkout main
36
+ git pull origin main
37
+
38
+ # Create new branch for UD version
39
+ git checkout -b 2.18
40
+
41
+ # Alternatively, branch from latest UD version if main is stale
42
+ git checkout 2.17
43
+ git pull origin 2.17
44
+ git checkout -b 2.18
45
+ ```
46
+
47
+ ### 3. Update Environment Configuration
48
+
49
+ ```bash
50
+ cd tools
51
+
52
+ # Create or update .env file
53
+ echo "UD_VER=2.18" > .env
54
+
55
+ # Verify
56
+ cat .env
57
+ # Output: UD_VER=2.18
58
+ ```
59
+
60
+ ### 4. Fetch Metadata for New Version
61
+
62
+ ```bash
63
+ # Fetch citation and description from LINDAT/CLARIN
64
+ python 00_fetch_ud_clarin-dspace_metadata.py
65
+
66
+ # This creates:
67
+ # - etc/citation-2.18
68
+ # - etc/description-2.18
69
+ ```
70
+
71
+ **Note:** You may need to update the LINDAT handle ID in the script if the UD project uses a new handle for this release. Check the [UD release page](https://universaldependencies.org/) for the correct handle.
72
+
73
+ ### 5. Fetch Language Codes and Flags
74
+
75
+ ```bash
76
+ # Fetch codes_and_flags.yaml for new version
77
+ ./00_fetch_ud_codes_and_flags.sh
78
+
79
+ # This creates:
80
+ # - etc/codes_and_flags-2.18.yaml
81
+ # - etc/codes_and_flags-latest.yaml (symlink)
82
+ ```
83
+
84
+ **Note:** You may need to update the git commit hash mapping in the script if a new UD version is released.
85
+
86
+ ### 6. Fetch UD Repositories
87
+
88
+ ```bash
89
+ # This discovers all UD repositories on GitHub
90
+ ./01_fetch_ud_repos.sh
91
+
92
+ # This creates:
93
+ # - .UD_submodules_add.commands
94
+
95
+ # The script will tell you to run commands in UD_repos/
96
+ # Follow those instructions:
97
+ cd UD_repos
98
+
99
+ # Initialize git if not already done
100
+ git init
101
+
102
+ # Add submodules (this may take a while - 289 repositories)
103
+ bash ../.UD_submodules_add.commands
104
+
105
+ # Checkout the new release tag (e.g., r2.18)
106
+ git submodule foreach 'git fetch --tags && git checkout r2.18 && touch .tag-r2.18'
107
+
108
+ cd ..
109
+ ```
110
+
111
+ **Expected time:** 30-60 minutes depending on network speed.
112
+
113
+ ### 7. Traverse Repositories and Extract Metadata
114
+
115
+ ```bash
116
+ # Extract metadata from all treebanks
117
+ python 02_traverse_ud_repos.py
118
+
119
+ # This creates:
120
+ # - metadata-2.18.json (contains info for all 339+ treebanks)
121
+
122
+ # Verify the output
123
+ ls -lh metadata-2.18.json
124
+ # Should be ~200-300 KB
125
+
126
+ # Quick check: count treebanks
127
+ python -c "import json; print(len(json.load(open('metadata-2.18.json'))))"
128
+ # Should be 339+ (may increase with new treebanks)
129
+ ```
130
+
131
+ ### 8. Generate Dataset Loader Script
132
+
133
+ ```bash
134
+ # Generate universal_dependencies-2.18 from template
135
+ python 03_fill_universal_dependencies_tamplate.py
136
+
137
+ # This creates:
138
+ # - universal_dependencies-2.18 (Python loader script)
139
+ # - README-2.18 (dataset card)
140
+
141
+ # Verify files were created
142
+ ls -lh universal_dependencies-2.18 README-2.18
143
+ ```
144
+
145
+ ### 9. Generate Parquet Files
146
+
147
+ ```bash
148
+ # Test with a few treebanks first
149
+ python 04_generate_parquet.py --test
150
+
151
+ # If successful, generate for all treebanks (takes 2-4 hours)
152
+ python 04_generate_parquet.py
153
+
154
+ # This creates:
155
+ # - parquet/{treebank_name}/{split}.parquet for all 339+ treebanks
156
+
157
+ # Verify output
158
+ du -sh ../parquet/
159
+ # Should be ~50-80 GB total
160
+ ```
161
+
162
+ **Optional:** Run on a subset first to verify correctness:
163
+ ```bash
164
+ python 04_generate_parquet.py --treebanks "en_ewt,fr_gsd,de_gsd"
165
+ ```
166
+
167
+ ### 10. Copy Files to Repository Root
168
+
169
+ ```bash
170
+ # Copy generated files to root
171
+ cd .. # Back to repository root
172
+
173
+ cp tools/universal_dependencies-2.18 universal_dependencies.py
174
+ cp tools/README-2.18 README.md
175
+ cp tools/metadata-2.18.json metadata.json
176
+
177
+ # Verify files are in place
178
+ ls -lh universal_dependencies.py README.md metadata.json
179
+ ```
180
+
181
+ ### 11. Test the Dataset Loader
182
+
183
+ ```bash
184
+ # Test loading with Python script (for backwards compatibility testing)
185
+ python -c "
186
+ from datasets import load_dataset
187
+ import sys
188
+ sys.path.insert(0, '.')
189
+
190
+ # Test a small treebank
191
+ ds = load_dataset('./universal_dependencies.py', 'en_pronouns', split='test')
192
+ print(f'Loaded {len(ds)} examples')
193
+ print(f'Features: {list(ds.features.keys())}')
194
+ print(f'MWT field present: {\"mwt\" in ds.features}')
195
+ "
196
+
197
+ # Test loading from Parquet
198
+ python -c "
199
+ from datasets import load_dataset
200
+
201
+ # Test Parquet loading
202
+ ds = load_dataset('parquet', data_files='parquet/en_ewt/train.parquet')
203
+ print(f'Loaded {len(ds[\"train\"])} examples from Parquet')
204
+ "
205
+ ```
206
+
207
+ ### 12. Commit Changes to Git
208
+
209
+ ```bash
210
+ # Add generated files
211
+ git add universal_dependencies.py
212
+ git add README.md
213
+ git add metadata.json
214
+ git add tools/metadata-2.18.json
215
+ git add tools/universal_dependencies-2.18
216
+ git add tools/README-2.18
217
+ git add tools/etc/citation-2.18
218
+ git add tools/etc/description-2.18
219
+ git add tools/etc/codes_and_flags-2.18.yaml
220
+ git add tools/.env
221
+
222
+ # Commit with descriptive message
223
+ git commit -m "Add UD 2.18 data with loader v2.0
224
+
225
+ - Generated from Universal Dependencies 2.18 release
226
+ - 339+ treebanks across 186+ languages
227
+ - Includes Parquet files for efficient loading
228
+ - Loader version: 2.0.0
229
+ - MWT support and bug fixes included
230
+
231
+ Generated files:
232
+ - universal_dependencies.py (loader script)
233
+ - README.md (dataset card)
234
+ - metadata.json (treebank metadata)
235
+ - Parquet files in parquet/ directory"
236
+
237
+ # Tag the commit
238
+ git tag -a ud2.18-loader-v2.0 -m "UD 2.18 with Loader v2.0"
239
+
240
+ # Push branch and tags
241
+ git push origin 2.18
242
+ git push origin --tags
243
+ ```
244
+
245
+ ### 13. Upload to HuggingFace Hub
246
+
247
+ ```bash
248
+ # Upload Parquet files to HuggingFace (this may take several hours)
249
+ huggingface-cli upload commul/universal_dependencies ./parquet --repo-type dataset --revision 2.18
250
+
251
+ # Upload main files
252
+ huggingface-cli upload commul/universal_dependencies ./universal_dependencies.py --repo-type dataset --revision 2.18
253
+ huggingface-cli upload commul/universal_dependencies ./README.md --repo-type dataset --revision 2.18
254
+ huggingface-cli upload commul/universal_dependencies ./metadata.json --repo-type dataset --revision 2.18
255
+
256
+ # Alternatively, push everything at once (if you have the repo cloned with git-lfs)
257
+ # git push hf 2.18
258
+ ```
259
+
260
+ **Expected upload time:** 2-6 hours depending on network speed and HuggingFace server load.
261
+
262
+ ### 14. Verify on HuggingFace Hub
263
+
264
+ Visit: https://huggingface.co/datasets/commul/universal_dependencies
265
+
266
+ 1. Check that branch `2.18` exists in the "Branches" dropdown
267
+ 2. Verify files are present:
268
+ - `universal_dependencies.py`
269
+ - `README.md`
270
+ - `metadata.json`
271
+ - `parquet/` directory with subdirectories
272
+
273
+ 3. Test loading:
274
+ ```python
275
+ from datasets import load_dataset
276
+
277
+ # Load from new version
278
+ ds = load_dataset("commul/universal_dependencies", "en_ewt", revision="2.18")
279
+ print(ds)
280
+ ```
281
+
282
+ ### 15. Update Dataset Card (Optional)
283
+
284
+ If this is now the latest version, you may want to update the dataset card to mention it:
285
+
286
+ 1. Edit README.md to add "Latest version: 2.18" at the top
287
+ 2. Update version badges if any
288
+ 3. Commit and push
289
+
290
+ ## Troubleshooting
291
+
292
+ ### Issue: Submodule checkout fails
293
+
294
+ **Problem:** Some repositories don't have the `r2.18` tag.
295
+
296
+ **Solution:**
297
+ ```bash
298
+ cd tools/UD_repos
299
+ git submodule foreach 'git fetch --tags && (git checkout r2.18 || git checkout main) && touch .tag-r2.18'
300
+ ```
301
+
302
+ ### Issue: Metadata extraction fails for a treebank
303
+
304
+ **Problem:** A treebank is malformed or missing expected files.
305
+
306
+ **Solution:**
307
+ - Check the specific treebank in `UD_repos/UD_{Language}-{Treebank}/`
308
+ - Verify it has `.conllu` files and `stats.xml`
309
+ - Skip problematic treebanks by editing `02_traverse_ud_repos.py` if necessary
310
+ - Report issues to the UD project
311
+
312
+ ### Issue: Parquet generation fails for a treebank
313
+
314
+ **Problem:** CoNLL-U parsing error or schema mismatch.
315
+
316
+ **Solution:**
317
+ ```bash
318
+ # Generate Parquet in batches to isolate the problem
319
+ python 04_generate_parquet.py --treebanks "en_ewt" # Test one at a time
320
+
321
+ # Check logs for specific error
322
+ # Fix the problematic CoNLL-U file or skip it temporarily
323
+ ```
324
+
325
+ ### Issue: HuggingFace upload is very slow
326
+
327
+ **Problem:** Large Parquet files + network latency.
328
+
329
+ **Solution:**
330
+ - Use a machine with better network connection
331
+ - Upload during off-peak hours
332
+ - Use `--num-workers` flag if available:
333
+ ```bash
334
+ huggingface-cli upload commul/universal_dependencies ./parquet --repo-type dataset --revision 2.18 --num-workers 4
335
+ ```
336
+
337
+ ## Checklist
338
+
339
+ Before marking the release as complete:
340
+
341
+ - [ ] All metadata files generated (`metadata-2.18.json`, `citation-2.18`, `description-2.18`)
342
+ - [ ] Universal dependencies script generated (`universal_dependencies-2.18`)
343
+ - [ ] README generated (`README-2.18`)
344
+ - [ ] Parquet files generated for all treebanks
345
+ - [ ] Files copied to repository root
346
+ - [ ] Tested loading from Python script
347
+ - [ ] Tested loading from Parquet
348
+ - [ ] Committed to git with proper message
349
+ - [ ] Tagged with `ud2.18-loader-v2.0`
350
+ - [ ] Pushed to origin
351
+ - [ ] Uploaded to HuggingFace Hub
352
+ - [ ] Verified on HuggingFace Hub
353
+ - [ ] Tested loading from HuggingFace Hub
354
+
355
+ ## Timeline Estimate
356
+
357
+ | Step | Time | Can be Parallelized? |
358
+ |------|------|---------------------|
359
+ | 1-5: Setup & metadata | 5-10 min | No |
360
+ | 6: Fetch repositories | 30-60 min | No |
361
+ | 7: Extract metadata | 10-20 min | No |
362
+ | 8: Generate script | 1-2 min | No |
363
+ | 9: Generate Parquet | 2-4 hours | Yes (by treebank) |
364
+ | 10-12: Commit to git | 5-10 min | No |
365
+ | 13: Upload to HF Hub | 2-6 hours | Partially |
366
+ | 14-15: Verify & update | 10-20 min | No |
367
+ | **Total** | **~5-11 hours** | |
368
+
369
+ **Recommendation:** Start the process in the morning so uploads can complete during the day.
370
+
371
+ ## Notes
372
+
373
+ - The loader version (v2.0) is already in the templates, so new UD versions automatically get the latest loader features.
374
+ - Each UD version branch is independent - you can maintain multiple versions simultaneously.
375
+ - Old branches (2.7-2.17) can be upgraded to v2.0 loader using the same process (just checkout the old branch and regenerate files).
376
+ - The `main` branch can serve as a template or point to the latest version.
377
+
378
+ ## Support
379
+
380
+ For issues:
381
+ - Universal Dependencies data issues → [UD GitHub Issues](https://github.com/UniversalDependencies/docs/issues)
382
+ - Loader/tooling issues → Your repository issues
383
+ - HuggingFace Hub issues → [HuggingFace Community Forums](https://discuss.huggingface.co/)
CHANGELOG.md ADDED
@@ -0,0 +1,108 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Changelog
2
+
3
+ All notable changes to the Universal Dependencies HuggingFace dataset loader will be documented in this file.
4
+
5
+ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
6
+ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
7
+
8
+ ## [2.0.0] - 2026-01-09
9
+
10
+ ### Breaking Changes
11
+
12
+ - **Requires datasets>=4.0.0** for Parquet format support
13
+ - Python dataset scripts are deprecated in datasets 4.0.0 (July 2025)
14
+ - Parquet format is now the primary distribution method
15
+
16
+ - **Token sequences now correctly exclude MWT forms**
17
+ - Previous versions incorrectly included Multi-Word Token (MWT) surface forms in token sequences
18
+ - Token counts now match `num_words` from stats.xml, not `num_tokens`
19
+ - Affects ~50+ treebanks with contractions (French, Italian, Portuguese, Spanish, Arabic, etc.)
20
+ - **Example:** French "des" (MWT 18-19) → "de" + "les" (syntactic words 18, 19)
21
+ - **v1.x (buggy):** tokens = [..., "des", "de", "les", ...]
22
+ - **v2.0 (correct):** tokens = [..., "de", "les", ...], mwt = [{"id": "18-19", "form": "des", "misc": ""}]
23
+
24
+ ### Added
25
+
26
+ - **Multi-Word Token (MWT) Support**
27
+ - New `mwt` field in dataset schema containing structured MWT information
28
+ - Schema: `[{"id": "1-2", "form": "surface_form", "misc": "metadata"}]`
29
+ - Enables research on contractions, clitics, and word segmentation across languages
30
+ - MWT statistics now collected from stats.xml (`num_fused` field)
31
+
32
+ - **Parquet Format Distribution**
33
+ - All 339 treebanks available as Parquet files for efficient loading
34
+ - 5-10x faster loading compared to on-the-fly CoNLL-U parsing
35
+ - Reduced memory footprint and better compression
36
+ - Compatible with HuggingFace datasets >=4.0.0
37
+
38
+ - **Separate Semantic Versioning**
39
+ - Loader version (2.0.0) now tracked independently from UD data version (2.17)
40
+ - Version in pyproject.toml reflects codebase changes
41
+ - UD data version remains in dataset configuration
42
+ - Enables clearer communication of breaking changes in tooling
43
+
44
+ - **Parquet Generation Pipeline**
45
+ - New `tools/04_generate_parquet.py` script for CoNLL-U → Parquet conversion
46
+ - Supports batch processing and validation
47
+ - Test mode for quick iteration (`--test` flag)
48
+ - Progress tracking and error reporting
49
+
50
+ ### Fixed
51
+
52
+ - **Critical Bug: MWT lines incorrectly included in token sequences**
53
+ - Fixed filtering in `_generate_examples()` to exclude MWT lines (tuple IDs) and empty nodes
54
+ - Now uses `sent.filter(id=lambda x: type(x) is int)` to extract only syntactic words
55
+ - Impact: Token counts were inflated by ~0.1-2% in affected treebanks
56
+ - All token-level annotations (lemmas, UPOS, XPOS, features, heads, deprels) now correctly aligned
57
+
58
+ ### Changed
59
+
60
+ - **Dataset schema extended** (non-breaking addition)
61
+ - Added `mwt` field to all treebanks (empty list for treebanks without MWTs)
62
+
63
+ - **Dependencies updated**
64
+ - Added `pyarrow>=14.0.0` for Parquet support
65
+ - Added `datasets>=4.0.0` for compatibility
66
+ - Added `conllu>=5.0.0` for reliable CoNLL-U parsing
67
+
68
+ - **Metadata collection enhanced**
69
+ - `tools/02_traverse_ud_repos.py` now extracts `<fused>` counts from stats.xml
70
+ - Enables validation of MWT extraction accuracy
71
+
72
+ ## [1.0.0] - 2025-12-01 (Retroactive)
73
+
74
+ ### Initial Release
75
+
76
+ - Python dataset loader using `GeneratorBasedBuilder`
77
+ - On-the-fly download and parsing of CoNLL-U files from GitHub
78
+ - Support for 339 treebanks across 186 languages
79
+ - UD data version: 2.15-2.17
80
+ - Compatible with HuggingFace datasets <4.0.0 using `trust_remote_code=True`
81
+
82
+ ### Features
83
+
84
+ - Dynamic loading from Universal Dependencies GitHub repositories
85
+ - Automatic train/dev/test split detection
86
+ - CoNLL-U format parsing with conllu library
87
+ - Dataset features: tokens, lemmas, UPOS, XPOS, features, heads, dependency relations
88
+ - Metadata: language codes, families, licenses, genres
89
+
90
+ ### Known Issues (Fixed in 2.0.0)
91
+
92
+ - MWT lines incorrectly included in token sequences
93
+ - No explicit MWT information available to users
94
+ - Requires `trust_remote_code=True` (security concern)
95
+ - Incompatible with datasets >=4.0.0
96
+
97
+ ---
98
+
99
+ ## Migration Guide
100
+
101
+ See [MIGRATION.md](MIGRATION.md) for detailed migration instructions from v1.x to v2.0.
102
+
103
+ ## Version Numbering
104
+
105
+ - **Loader version** (e.g., 2.0.0): Reflects changes to the loading infrastructure and tooling
106
+ - **UD data version** (e.g., 2.17): Reflects the Universal Dependencies release version
107
+
108
+ Both versions are tracked separately to provide clarity on what changed.
MIGRATION.md ADDED
@@ -0,0 +1,372 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Migration Guide: v1.x → v2.0
2
+
3
+ This guide helps you migrate from Universal Dependencies dataset loader v1.x to v2.0.
4
+
5
+ ## Quick Start
6
+
7
+ ### For Users with datasets >=4.0.0
8
+
9
+ No code changes needed! Parquet files load automatically:
10
+
11
+ ```python
12
+ from datasets import load_dataset
13
+
14
+ # v2.0: Works seamlessly with datasets >=4.0.0
15
+ dataset = load_dataset("commul/universal_dependencies", "fr_gsd")
16
+ # Automatically uses Parquet format (fast, secure)
17
+ ```
18
+
19
+ ### For Users with datasets <4.0.0
20
+
21
+ **Option 1: Upgrade datasets (Recommended)**
22
+
23
+ ```bash
24
+ pip install --upgrade "datasets>=4.0.0"
25
+ ```
26
+
27
+ **Option 2: Continue using v1.x**
28
+
29
+ ```python
30
+ # v1.x: Requires trust_remote_code
31
+ dataset = load_dataset("commul/universal_dependencies", "fr_gsd", trust_remote_code=True, revision="v1.0")
32
+ ```
33
+
34
+ ## Breaking Changes
35
+
36
+ ### 1. Token Sequences Now Exclude MWT Forms
37
+
38
+ **Impact:** Token counts and sequences have changed for treebanks with Multi-Word Tokens (MWTs).
39
+
40
+ **What Changed:**
41
+ - v1.x incorrectly included MWT surface forms in token sequences
42
+ - v2.0 correctly excludes them, matching UD guidelines
43
+
44
+ **Example (French "des" → "de" + "les"):**
45
+
46
+ ```python
47
+ # v1.x (BUGGY):
48
+ {
49
+ "tokens": ["Elle", "des", "de", "les", "pommes", "."], # WRONG: "des" included
50
+ "lemmas": ["elle", "_", "de", "le", "pomme", "."],
51
+ "upos": ["PRON", "_", "ADP", "DET", "NOUN", "PUNCT"],
52
+ }
53
+
54
+ # v2.0 (CORRECT):
55
+ {
56
+ "tokens": ["Elle", "de", "les", "pommes", "."], # CORRECT: only syntactic words
57
+ "lemmas": ["elle", "de", "le", "pomme", "."],
58
+ "upos": ["PRON", "ADP", "DET", "NOUN", "PUNCT"],
59
+ "mwt": [{"id": "2-3", "form": "des", "misc": ""}], # MWT info preserved
60
+ }
61
+ ```
62
+
63
+ **Affected Treebanks (50+):**
64
+
65
+ Languages with common MWTs include:
66
+ - **French** (fr_*): du, au, des, aux (~2-5% of tokens)
67
+ - **Italian** (it_*): del, della, nel, alla (~1-3%)
68
+ - **Portuguese** (pt_*): do, da, no, pelo (~2-4%)
69
+ - **Spanish** (es_*): del, al (~0.5-1%)
70
+ - **Arabic** (ar_*): various clitics (~1-2%)
71
+ - **German** (de_*): zum, vom, am (~0.1-0.5%)
72
+ - **Catalan** (ca_*): del, al, pels (~1-2%)
73
+ - **Indonesian** (id_*): reduplications (~0.1%)
74
+
75
+ **Action Required:**
76
+
77
+ If your code assumes specific token counts or positions:
78
+
79
+ ```python
80
+ # v1.x code that might break:
81
+ def get_third_token(example):
82
+ return example["tokens"][2] # May return different token in v2.0
83
+
84
+ # Migration fix:
85
+ def get_third_syntactic_word(example):
86
+ # v2.0: This is now correct - gets the 3rd syntactic word
87
+ return example["tokens"][2]
88
+
89
+ def get_third_surface_token(example):
90
+ # v2.0: If you need surface forms, reconstruct from MWTs
91
+ tokens = example["tokens"][:]
92
+ mwts = example["mwt"]
93
+
94
+ # Insert MWT forms at appropriate positions
95
+ for mwt in reversed(mwts): # Process in reverse to maintain indices
96
+ start, end = map(int, mwt["id"].split("-"))
97
+ tokens[start-1:end] = [mwt["form"]]
98
+
99
+ return tokens[2]
100
+ ```
101
+
102
+ ### 2. New Schema Field: `mwt`
103
+
104
+ **Impact:** Dataset schema now includes an `mwt` field.
105
+
106
+ **What Changed:**
107
+ - Added: `mwt` field containing structured MWT information
108
+ - Schema: `[{"id": "1-2", "form": "surface_form", "misc": "metadata"}]`
109
+ - Empty list for treebanks without MWTs
110
+
111
+ **Example Usage:**
112
+
113
+ ```python
114
+ from datasets import load_dataset
115
+
116
+ dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")
117
+
118
+ # Access MWT information
119
+ for example in dataset:
120
+ if example["mwt"]: # Has MWTs
121
+ for mwt in example["mwt"]:
122
+ print(f"MWT {mwt['id']}: {mwt['form']}")
123
+ # Extract range
124
+ start, end = map(int, mwt["id"].split("-"))
125
+ syntactic_words = example["tokens"][start-1:end]
126
+ print(f" → {' + '.join(syntactic_words)}")
127
+
128
+ # Output example:
129
+ # MWT 2-3: des
130
+ # → de + les
131
+ ```
132
+
133
+ **Research Use Cases:**
134
+
135
+ ```python
136
+ # Count MWTs per treebank
137
+ def count_mwts(dataset):
138
+ return sum(len(ex["mwt"]) for ex in dataset)
139
+
140
+ # Analyze MWT patterns
141
+ def analyze_mwt_patterns(dataset):
142
+ patterns = {}
143
+ for ex in dataset:
144
+ for mwt in ex["mwt"]:
145
+ form = mwt["form"]
146
+ patterns[form] = patterns.get(form, 0) + 1
147
+ return patterns
148
+
149
+ fr_gsd = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")
150
+ print(analyze_mwt_patterns(fr_gsd))
151
+ # Output: {'des': 1234, 'du': 987, 'au': 654, 'aux': 321, ...}
152
+ ```
153
+
154
+ ### 3. Requires datasets >=4.0.0
155
+
156
+ **Impact:** The Python script loader is deprecated (datasets >=4.0.0 policy).
157
+
158
+ **What Changed:**
159
+ - v1.x: Uses Python script with `trust_remote_code=True`
160
+ - v2.0: Uses Parquet format (no remote code execution)
161
+
162
+ **Security Benefit:**
163
+ - No arbitrary code execution from dataset loading
164
+ - Parquet files are data-only, sandboxed
165
+ - Aligns with HuggingFace security policies
166
+
167
+ **Migration:**
168
+
169
+ ```bash
170
+ # Check your datasets version
171
+ python -c "import datasets; print(datasets.__version__)"
172
+
173
+ # Upgrade if needed
174
+ pip install --upgrade "datasets>=4.0.0"
175
+ ```
176
+
177
+ If you cannot upgrade datasets:
178
+
179
+ ```python
180
+ # Use v1.x with revision pinning
181
+ dataset = load_dataset(
182
+ "commul/universal_dependencies",
183
+ "fr_gsd",
184
+ trust_remote_code=True,
185
+ revision="v1.0" # Pin to v1.x
186
+ )
187
+ ```
188
+
189
+ ## New Features in v2.0
190
+
191
+ ### 1. Parquet Format (5-10x Faster Loading)
192
+
193
+ ```python
194
+ # v1.x: Downloads CoNLL-U, parses on-the-fly (~10-30 seconds)
195
+ dataset = load_dataset("commul/universal_dependencies", "fr_gsd", trust_remote_code=True)
196
+
197
+ # v2.0: Loads pre-processed Parquet (~1-3 seconds)
198
+ dataset = load_dataset("commul/universal_dependencies", "fr_gsd")
199
+ ```
200
+
201
+ **Benefits:**
202
+ - Much faster loading (especially for large treebanks)
203
+ - Lower memory usage
204
+ - Better compression
205
+ - Native support in datasets >=4.0.0
206
+
207
+ ### 2. Multi-Word Token (MWT) Information
208
+
209
+ ```python
210
+ dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")
211
+
212
+ # Find sentences with MWTs
213
+ sentences_with_mwts = [ex for ex in dataset if ex["mwt"]]
214
+ print(f"Sentences with MWTs: {len(sentences_with_mwts)}/{len(dataset)}")
215
+
216
+ # Analyze MWT complexity
217
+ complex_mwts = [ex for ex in dataset if any(
218
+ int(mwt["id"].split("-")[1]) - int(mwt["id"].split("-")[0]) > 2
219
+ for mwt in ex["mwt"]
220
+ )]
221
+ print(f"Sentences with 3+ word MWTs: {len(complex_mwts)}")
222
+ ```
223
+
224
+ ### 3. Enhanced Metadata
225
+
226
+ ```python
227
+ # Load dataset info
228
+ from datasets import load_dataset_builder
229
+
230
+ builder = load_dataset_builder("commul/universal_dependencies", "fr_gsd")
231
+ info = builder.info
232
+
233
+ # Now includes MWT statistics
234
+ print(info.description) # Contains num_fused counts
235
+ ```
236
+
237
+ ## Verification Steps
238
+
239
+ ### 1. Verify Token Counts Match UD Stats
240
+
241
+ ```python
242
+ from datasets import load_dataset
243
+ import json
244
+
245
+ # Load dataset and metadata
246
+ dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")
247
+
248
+ # Count syntactic words
249
+ word_count = sum(len(ex["tokens"]) for ex in dataset)
250
+
251
+ # Load metadata (if available)
252
+ with open("metadata.json") as f:
253
+ metadata = json.load(f)
254
+
255
+ expected_words = int(metadata["fr_gsd"]["splits"]["train"]["num_words"])
256
+ print(f"Dataset words: {word_count}")
257
+ print(f"Expected words: {expected_words}")
258
+ print(f"Match: {word_count == expected_words}")
259
+
260
+ # This should be True in v2.0 (was False in v1.x for MWT treebanks)
261
+ ```
262
+
263
+ ### 2. Verify MWT Extraction
264
+
265
+ ```python
266
+ # Count MWTs
267
+ mwt_count = sum(len(ex["mwt"]) for ex in dataset)
268
+
269
+ expected_mwts = int(metadata["fr_gsd"]["splits"]["train"]["num_fused"])
270
+ print(f"Dataset MWTs: {mwt_count}")
271
+ print(f"Expected MWTs: {expected_mwts}")
272
+ print(f"Match: {mwt_count == expected_mwts}")
273
+ ```
274
+
275
+ ### 3. Compare v1.x vs v2.0 Output
276
+
277
+ ```python
278
+ # Load both versions (if v1.x still available)
279
+ v1 = load_dataset("commul/universal_dependencies", "en_ewt", split="test[:10]", revision="v1.0", trust_remote_code=True)
280
+ v2 = load_dataset("commul/universal_dependencies", "en_ewt", split="test[:10]")
281
+
282
+ # English-EWT has no MWTs, so should be identical except for new field
283
+ for i in range(10):
284
+ assert v1[i]["tokens"] == v2[i]["tokens"], f"Example {i} differs"
285
+ assert v2[i]["mwt"] == [], f"Example {i} has unexpected MWTs"
286
+
287
+ print("✓ English-EWT unchanged (no MWTs)")
288
+
289
+ # French-GSD has MWTs, so v2.0 will differ
290
+ v1_fr = load_dataset("commul/universal_dependencies", "fr_gsd", split="test[:10]", revision="v1.0", trust_remote_code=True)
291
+ v2_fr = load_dataset("commul/universal_dependencies", "fr_gsd", split="test[:10]")
292
+
293
+ # v1.x token count includes MWTs (WRONG)
294
+ v1_token_count = sum(len(ex["tokens"]) for ex in v1_fr)
295
+
296
+ # v2.0 token count excludes MWTs (CORRECT)
297
+ v2_token_count = sum(len(ex["tokens"]) for ex in v2_fr)
298
+
299
+ print(f"v1.x French token count: {v1_token_count} (includes MWT forms)")
300
+ print(f"v2.0 French token count: {v2_token_count} (syntactic words only)")
301
+ print(f"Difference: {v1_token_count - v2_token_count} MWT forms removed")
302
+ ```
303
+
304
+ ## Common Issues
305
+
306
+ ### Issue 1: "Dataset script not supported" Error
307
+
308
+ **Error:**
309
+ ```
310
+ RuntimeError: Dataset scripts are no longer supported
311
+ ```
312
+
313
+ **Cause:** Using datasets >=4.0.0 with v1.x loader
314
+
315
+ **Solution:**
316
+ ```bash
317
+ pip install --upgrade "datasets>=4.0.0"
318
+ # Then use v2.0 (Parquet-based)
319
+ ```
320
+
321
+ ### Issue 2: Token Count Mismatch
322
+
323
+ **Issue:** Your code expects specific token counts that changed in v2.0
324
+
325
+ **Solution:** Update your code to use `num_words` from metadata instead of `num_tokens`
326
+
327
+ ```python
328
+ # v1.x: Used num_tokens (WRONG for MWT treebanks)
329
+ expected_count = metadata["splits"]["train"]["num_tokens"]
330
+
331
+ # v2.0: Use num_words (CORRECT)
332
+ expected_count = metadata["splits"]["train"]["num_words"]
333
+ ```
334
+
335
+ ### Issue 3: MWT Field Not Found (v1.x Code)
336
+
337
+ **Issue:** Old code doesn't handle the new `mwt` field
338
+
339
+ **Solution:** Gracefully handle the field or upgrade
340
+
341
+ ```python
342
+ # Backwards compatible code
343
+ tokens = example["tokens"]
344
+ mwts = example.get("mwt", []) # Empty list if not present
345
+ ```
346
+
347
+ ## Support
348
+
349
+ If you encounter issues during migration:
350
+
351
+ 1. Check the [CHANGELOG.md](CHANGELOG.md) for detailed changes
352
+ 2. Review the [README.md](README.md) for updated examples
353
+ 3. Report issues at: https://github.com/UniversalDependencies/UD_HuggingFace/issues
354
+
355
+ ## Summary
356
+
357
+ **Key Takeaways:**
358
+
359
+ ✅ **v2.0 is more correct:** Fixes critical MWT bug
360
+ ✅ **v2.0 is faster:** Parquet format is 5-10x quicker
361
+ ✅ **v2.0 is more secure:** No remote code execution
362
+ ✅ **v2.0 adds features:** MWT information now available
363
+
364
+ **Migration Checklist:**
365
+
366
+ - [ ] Upgrade to datasets >=4.0.0
367
+ - [ ] Test your code with v2.0 data
368
+ - [ ] Update token count expectations (if using MWT treebanks)
369
+ - [ ] Utilize new MWT field for research (optional)
370
+ - [ ] Update any hard-coded token indices (if applicable)
371
+
372
+ **Estimated Migration Time:** 15-30 minutes for most users
README.md CHANGED
@@ -12749,6 +12749,24 @@ config_names:
12749
 
12750
  # Dataset Card for Universal Dependencies Treebank
12751
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
12752
  ## Table of Contents
12753
  - [Dataset Description](#dataset-description)
12754
  - [Dataset Summary](#dataset-summary)
@@ -12805,28 +12823,96 @@ break, including an LF character at the end of file).
12805
 
12806
  ### Data Instances
12807
 
12808
- This dataset has 235 configurations.
 
12809
  ```python
12810
- from datasets import get_dataset_config_names
12811
 
12812
- # Get the revision specific configurations
12813
- get_dataset_config_names("commul/universal_dependencies", revision="2.17", trust_remote_code=True) # 179
12814
- ['af_afribooms',
12815
- 'akk_pisandub',
12816
- 'aqz_tudet',
12817
- 'sq_tsa',
12818
- 'gsw_uzh',
12819
- 'am_att',
12820
- ...
12821
- ]
12822
 
12823
- # Get the latest configurations
12824
- get_dataset_config_names("commul/universal_dependencies", trust_remote_code=True)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
12825
  ```
12826
 
12827
  ### Data Fields
12828
 
12829
- [More Information Needed]
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
12830
 
12831
  ### Data Splits
12832
 
 
12749
 
12750
  # Dataset Card for Universal Dependencies Treebank
12751
 
12752
+ ## What's New in v2.0 🎉
12753
+
12754
+ **Version 2.0.0** introduces significant improvements and breaking changes:
12755
+
12756
+ - **🚀 Parquet Format:** 5-10x faster loading with HuggingFace datasets >=4.0.0
12757
+ - **🔧 Critical Bug Fix:** Multi-Word Tokens (MWTs) now correctly handled
12758
+ - **✨ MWT Support:** New `mwt` field provides structured multi-word token information
12759
+ - **🔒 Enhanced Security:** No more `trust_remote_code=True` required
12760
+ - **📊 Separate Versioning:** Loader version (2.0.0) distinct from UD data version (2.17)
12761
+
12762
+ **Breaking Changes:**
12763
+ - Token sequences now exclude MWT surface forms (matches UD guidelines)
12764
+ - Requires `datasets>=4.0.0` for Parquet support
12765
+ - Token counts changed for ~50+ treebanks with contractions (French, Italian, Portuguese, etc.)
12766
+
12767
+ 📖 **Migration Guide:** See [MIGRATION.md](MIGRATION.md) for detailed upgrade instructions
12768
+ 📋 **Changelog:** See [CHANGELOG.md](CHANGELOG.md) for complete release notes
12769
+
12770
  ## Table of Contents
12771
  - [Dataset Description](#dataset-description)
12772
  - [Dataset Summary](#dataset-summary)
 
12823
 
12824
  ### Data Instances
12825
 
12826
+ This dataset has 339 configurations (treebanks).
12827
+
12828
  ```python
12829
+ from datasets import get_dataset_config_names, load_dataset
12830
 
12831
+ # Get all available treebank configurations
12832
+ configs = get_dataset_config_names("commul/universal_dependencies")
12833
+ print(f"Available treebanks: {len(configs)}")
 
 
 
 
 
 
 
12834
 
12835
+ # Example configurations:
12836
+ # ['abq_atb', 'af_afribooms', 'akk_pisandub', 'aqz_tudet', 'sq_tsa', 'gsw_uzh', 'am_att', ...]
12837
+
12838
+ # Load a specific treebank
12839
+ dataset = load_dataset("commul/universal_dependencies", "en_ewt")
12840
+ print(dataset)
12841
+
12842
+ # Output:
12843
+ # DatasetDict({
12844
+ # train: Dataset({
12845
+ # features: ['idx', 'text', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt'],
12846
+ # num_rows: 12543
12847
+ # })
12848
+ # validation: Dataset({
12849
+ # features: ['idx', 'text', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt'],
12850
+ # num_rows: 2001
12851
+ # })
12852
+ # test: Dataset({
12853
+ # features: ['idx', 'text', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt'],
12854
+ # num_rows: 2077
12855
+ # })
12856
+ # })
12857
  ```
12858
 
12859
  ### Data Fields
12860
 
12861
+ Each example in the dataset contains the following fields:
12862
+
12863
+ - **idx** (string): Sentence ID from the CoNLL-U file metadata
12864
+ - **text** (string): Full sentence text (surface form)
12865
+ - **tokens** (list of strings): Syntactic word forms (MWT surface forms excluded)
12866
+ - **lemmas** (list of strings): Lemmas for each syntactic word
12867
+ - **upos** (list of strings): Universal POS tags
12868
+ - **xpos** (list of strings): Language-specific POS tags
12869
+ - **feats** (list of strings): Morphological features in UD format
12870
+ - **head** (list of strings): Head indices for dependency relations
12871
+ - **deprel** (list of strings): Dependency relation labels
12872
+ - **deps** (list of strings): Enhanced dependency graph
12873
+ - **misc** (list of strings): Miscellaneous annotations
12874
+ - **mwt** (list of dicts): Multi-Word Token information (NEW in v2.0)
12875
+ - **id** (string): Token range (e.g., "1-2")
12876
+ - **form** (string): Surface form (e.g., "don't")
12877
+ - **misc** (string): MWT-specific metadata
12878
+
12879
+ **Example:**
12880
+
12881
+ ```python
12882
+ from datasets import load_dataset
12883
+
12884
+ dataset = load_dataset("commul/universal_dependencies", "en_ewt", split="train")
12885
+ print(dataset[0])
12886
+
12887
+ # Output:
12888
+ {
12889
+ 'idx': 'weblog-blogspot.com_nominations_20041117172713_ENG_20041117_172713-0001',
12890
+ 'text': 'From the AP comes this story:',
12891
+ 'tokens': ['From', 'the', 'AP', 'comes', 'this', 'story', ':'],
12892
+ 'lemmas': ['from', 'the', 'AP', 'come', 'this', 'story', ':'],
12893
+ 'upos': ['ADP', 'DET', 'PROPN', 'VERB', 'DET', 'NOUN', 'PUNCT'],
12894
+ 'xpos': ['IN', 'DT', 'NNP', 'VBZ', 'DT', 'NN', ':'],
12895
+ 'feats': ['_', 'Definite=Def|PronType=Art', 'Number=Sing', 'Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin', 'Number=Sing|PronType=Dem', 'Number=Sing', '_'],
12896
+ 'head': ['4', '3', '4', '0', '6', '4', '4'],
12897
+ 'deprel': ['case', 'det', 'obl', 'root', 'det', 'nsubj', 'punct'],
12898
+ 'deps': ['_', '_', '_', '_', '_', '_', '_'],
12899
+ 'misc': ['_', '_', '_', '_', '_', '_', 'SpaceAfter=No'],
12900
+ 'mwt': [] # No MWTs in this sentence
12901
+ }
12902
+ ```
12903
+
12904
+ **MWT Example (French):**
12905
+
12906
+ ```python
12907
+ dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")
12908
+ # Find sentence with MWT
12909
+ example = [ex for ex in dataset if ex['mwt']][0]
12910
+ print(example['mwt'])
12911
+
12912
+ # Output:
12913
+ [{'id': '2-3', 'form': 'des', 'misc': ''}]
12914
+ # This means tokens[1:3] = ['de', 'les'] are combined as MWT surface form "des"
12915
+ ```
12916
 
12917
  ### Data Splits
12918
 
pyproject.toml ADDED
@@ -0,0 +1,53 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [project]
2
+ name = "universal-dependencies"
3
+ version = "2.0.0"
4
+ description = "UD Dependencies Data Set"
5
+ readme = "README.md"
6
+ requires-python = ">=3.12"
7
+ dependencies = [
8
+ "pyyaml>=6.0.2",
9
+ "pyarrow>=14.0.0",
10
+ "datasets>=4.0.0",
11
+ "conllu>=5.0.0",
12
+ ]
13
+
14
+ [build-system]
15
+ requires = ["setuptools>=61.0"]
16
+ build-backend = "setuptools.build_meta"
17
+
18
+ [tool.black]
19
+ line-length = 120
20
+ target-version = ['py312']
21
+ exclude = '''
22
+ (
23
+ /(
24
+ \.eggs
25
+ | \.git
26
+ | \.pytest_cache
27
+ | build
28
+ | dist
29
+ | venv
30
+ )/
31
+ )
32
+ '''
33
+
34
+ [tool.pytest.ini_options]
35
+ addopts = "--black --mypy --ruff"
36
+
37
+ [tool.mypy]
38
+ exclude = [".git/", ".venv/", "__pycache__", "build", "venv"]
39
+
40
+ [tool.ruff]
41
+ line-length = 120
42
+ target-version = "py312"
43
+
44
+ [tool.ruff.lint.isort]
45
+ length-sort = true
46
+ lines-after-imports = 2
47
+ no-lines-before = ["standard-library", "local-folder"]
48
+
49
+ [tool.ruff.format]
50
+ quote-style = "double"
51
+ indent-style = "space"
52
+ skip-magic-trailing-comma = false
53
+ line-ending = "auto"
tools/02_traverse_ud_repos.py CHANGED
@@ -187,8 +187,11 @@ def traverse_directory(directory):
187
  if child_node is None:
188
  continue
189
 
190
- for child_child_node_name in ["sentences", "tokens", "words"]:
191
- value = child_node.find(child_child_node_name).text
 
 
 
192
  # print(f"Item:{item} {child_node_name}-{child_child_node_name}: {value}")
193
 
194
  if value and int(value) > 0:
 
187
  if child_node is None:
188
  continue
189
 
190
+ for child_child_node_name in ["sentences", "tokens", "words", "fused"]:
191
+ value_node = child_node.find(child_child_node_name)
192
+ if value_node is None:
193
+ continue
194
+ value = value_node.text
195
  # print(f"Item:{item} {child_node_name}-{child_child_node_name}: {value}")
196
 
197
  if value and int(value) > 0:
tools/04_generate_parquet.py ADDED
@@ -0,0 +1,324 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env python3
2
+ """
3
+ Generate Parquet files from Universal Dependencies CoNLL-U data.
4
+
5
+ This script converts CoNLL-U files from UD repositories into Parquet format
6
+ for efficient loading with HuggingFace datasets >=4.0.0.
7
+
8
+ Repository: commul/universal_dependencies
9
+
10
+ Usage:
11
+ python 04_generate_parquet.py [--test] [--treebanks NAMES]
12
+
13
+ --test: Only process 3 test treebanks (fr_gsd, en_ewt, it_isdt)
14
+ --treebanks: Comma-separated list of treebank names to process
15
+ """
16
+
17
+ import argparse
18
+ import json
19
+ import os
20
+ import sys
21
+ from pathlib import Path
22
+ from typing import Any, Dict, List
23
+
24
+ import conllu
25
+ import datasets
26
+ from dotenv import load_dotenv
27
+
28
+
29
+ # Load environment variables
30
+ load_dotenv()
31
+ UD_VER = os.getenv("UD_VER", "2.17")
32
+
33
+ # Base paths
34
+ SCRIPT_DIR = Path(__file__).parent
35
+ REPO_ROOT = SCRIPT_DIR.parent
36
+ UD_REPOS_DIR = SCRIPT_DIR / "UD_repos"
37
+ PARQUET_OUTPUT_DIR = REPO_ROOT / "parquet"
38
+ METADATA_FILE = SCRIPT_DIR / f"metadata-{UD_VER}.json"
39
+
40
+
41
+ def extract_examples_from_conllu(filepath: str) -> List[Dict[str, Any]]:
42
+ """
43
+ Extract examples from a CoNLL-U file with MWT support.
44
+
45
+ Args:
46
+ filepath: Path to the CoNLL-U file
47
+
48
+ Returns:
49
+ List of example dictionaries matching the dataset schema
50
+ """
51
+ examples = []
52
+
53
+ with open(filepath, "r", encoding="utf-8") as data_file:
54
+ tokenlist = list(conllu.parse_incr(data_file))
55
+
56
+ for idx, sent in enumerate(tokenlist):
57
+ # Get sentence ID from metadata or use index
58
+ if "sent_id" in sent.metadata:
59
+ sent_idx = sent.metadata["sent_id"]
60
+ else:
61
+ sent_idx = str(idx)
62
+
63
+ # Extract Multi-Word Tokens (MWTs) - tokens with tuple IDs like (1, '-', 2)
64
+ mwts = []
65
+ for token in sent:
66
+ if isinstance(token["id"], tuple): # MWT line (e.g., "1-2")
67
+ mwts.append({
68
+ "id": f"{token['id'][0]}-{token['id'][2]}",
69
+ "form": token["form"],
70
+ "misc": str(token["misc"]) if token["misc"] else ""
71
+ })
72
+
73
+ # Filter to syntactic words only (exclude MWTs and empty nodes)
74
+ sent_filtered = sent.filter(id=lambda x: type(x) is int)
75
+
76
+ # Extract token fields from syntactic words
77
+ tokens = [token["form"] for token in sent_filtered]
78
+
79
+ # Get text from metadata or reconstruct from tokens
80
+ if "text" in sent.metadata:
81
+ text = sent.metadata["text"]
82
+ else:
83
+ text = " ".join(tokens)
84
+
85
+ # Create example
86
+ example = {
87
+ "idx": str(sent_idx),
88
+ "text": text,
89
+ "tokens": tokens,
90
+ "lemmas": [token["lemma"] for token in sent_filtered],
91
+ "upos": [token["upos"] for token in sent_filtered],
92
+ "xpos": [token["xpos"] for token in sent_filtered],
93
+ "feats": [str(token["feats"]) for token in sent_filtered],
94
+ "head": [str(token["head"]) for token in sent_filtered],
95
+ "deprel": [str(token["deprel"]) for token in sent_filtered],
96
+ "deps": [str(token["deps"]) for token in sent_filtered],
97
+ "misc": [str(token["misc"]) for token in sent_filtered],
98
+ "mwt": mwts,
99
+ }
100
+
101
+ examples.append(example)
102
+
103
+ return examples
104
+
105
+
106
+ def generate_parquet_for_treebank(
107
+ name: str,
108
+ metadata: Dict[str, Any],
109
+ output_dir: Path,
110
+ verbose: bool = True
111
+ ) -> bool:
112
+ """
113
+ Generate Parquet files for a single treebank.
114
+
115
+ Args:
116
+ name: Treebank name (e.g., "fr_gsd")
117
+ metadata: Treebank metadata including splits and file paths
118
+ output_dir: Output directory for Parquet files
119
+ verbose: Print progress messages
120
+
121
+ Returns:
122
+ True if successful, False otherwise
123
+ """
124
+ if verbose:
125
+ print(f"Processing {name}...")
126
+
127
+ # Create output directory for this treebank
128
+ treebank_output_dir = output_dir / name
129
+ treebank_output_dir.mkdir(parents=True, exist_ok=True)
130
+
131
+ # Process each split
132
+ dataset_dict = {}
133
+
134
+ for split_name, split_data in metadata.get("splits", {}).items():
135
+ files = split_data.get("files", [])
136
+ if not files:
137
+ continue
138
+
139
+ if verbose:
140
+ print(f" - {split_name}: {len(files)} file(s)")
141
+
142
+ # Extract examples from all files in this split
143
+ all_examples = []
144
+ for file_path in files:
145
+ # Construct full path: UD_repos/dirname/filename
146
+ full_path = UD_REPOS_DIR / metadata["dirname"] / file_path
147
+
148
+ if not full_path.exists():
149
+ print(f" Warning: File not found: {full_path}", file=sys.stderr)
150
+ continue
151
+
152
+ try:
153
+ examples = extract_examples_from_conllu(str(full_path))
154
+ all_examples.extend(examples)
155
+ except Exception as e:
156
+ print(f" Error processing {full_path}: {e}", file=sys.stderr)
157
+ return False
158
+
159
+ if not all_examples:
160
+ print(f" Warning: No examples extracted for {split_name}", file=sys.stderr)
161
+ continue
162
+
163
+ # Define features
164
+ features = datasets.Features({
165
+ "idx": datasets.Value("string"),
166
+ "text": datasets.Value("string"),
167
+ "tokens": datasets.Sequence(datasets.Value("string")),
168
+ "lemmas": datasets.Sequence(datasets.Value("string")),
169
+ "upos": datasets.Sequence(
170
+ datasets.features.ClassLabel(
171
+ names=[
172
+ "NOUN", "PUNCT", "ADP", "NUM", "SYM", "SCONJ",
173
+ "ADJ", "PART", "DET", "CCONJ", "PROPN", "PRON",
174
+ "X", "_", "ADV", "INTJ", "VERB", "AUX",
175
+ ]
176
+ )
177
+ ),
178
+ "xpos": datasets.Sequence(datasets.Value("string")),
179
+ "feats": datasets.Sequence(datasets.Value("string")),
180
+ "head": datasets.Sequence(datasets.Value("string")),
181
+ "deprel": datasets.Sequence(datasets.Value("string")),
182
+ "deps": datasets.Sequence(datasets.Value("string")),
183
+ "misc": datasets.Sequence(datasets.Value("string")),
184
+ "mwt": datasets.Sequence({
185
+ "id": datasets.Value("string"),
186
+ "form": datasets.Value("string"),
187
+ "misc": datasets.Value("string"),
188
+ }),
189
+ })
190
+
191
+ # Create dataset from examples
192
+ dataset = datasets.Dataset.from_list(all_examples, features=features)
193
+ dataset_dict[split_name] = dataset
194
+
195
+ if verbose:
196
+ print(f" Created dataset with {len(dataset)} examples")
197
+
198
+ if not dataset_dict:
199
+ print(f" Warning: No splits processed for {name}", file=sys.stderr)
200
+ return False
201
+
202
+ # Create DatasetDict and save to Parquet
203
+ dataset_dict_obj = datasets.DatasetDict(dataset_dict)
204
+
205
+ try:
206
+ # Save as Parquet files
207
+ for split_name, dataset in dataset_dict_obj.items():
208
+ parquet_path = treebank_output_dir / f"{split_name}.parquet"
209
+ dataset.to_parquet(parquet_path)
210
+ if verbose:
211
+ print(f" Saved {split_name}.parquet ({parquet_path.stat().st_size / 1024 / 1024:.2f} MB)")
212
+
213
+ return True
214
+
215
+ except Exception as e:
216
+ print(f" Error saving Parquet files: {e}", file=sys.stderr)
217
+ return False
218
+
219
+
220
+ def main():
221
+ """Main entry point for Parquet generation."""
222
+ parser = argparse.ArgumentParser(
223
+ description="Generate Parquet files from UD CoNLL-U data"
224
+ )
225
+ parser.add_argument(
226
+ "--test",
227
+ action="store_true",
228
+ help="Only process 3 test treebanks (fr_gsd, en_ewt, it_isdt)"
229
+ )
230
+ parser.add_argument(
231
+ "--treebanks",
232
+ type=str,
233
+ help="Comma-separated list of treebank names to process"
234
+ )
235
+ parser.add_argument(
236
+ "-v", "--verbose",
237
+ action="store_true",
238
+ default=True,
239
+ help="Print progress messages (default: True)"
240
+ )
241
+ parser.add_argument(
242
+ "-q", "--quiet",
243
+ action="store_true",
244
+ help="Suppress progress messages"
245
+ )
246
+
247
+ args = parser.parse_args()
248
+ verbose = args.verbose and not args.quiet
249
+
250
+ # Load metadata
251
+ if not METADATA_FILE.exists():
252
+ print(f"Error: Metadata file not found: {METADATA_FILE}", file=sys.stderr)
253
+ print(f"Run 02_traverse_ud_repos.py first to generate metadata.", file=sys.stderr)
254
+ return 1
255
+
256
+ with open(METADATA_FILE, "r", encoding="utf-8") as f:
257
+ metadata = json.load(f)
258
+
259
+ if verbose:
260
+ print(f"Loaded metadata for {len(metadata)} treebanks")
261
+ print(f"Output directory: {PARQUET_OUTPUT_DIR}")
262
+ print()
263
+
264
+ # Determine which treebanks to process
265
+ if args.test:
266
+ # Test mode: process 3 diverse treebanks
267
+ treebanks_to_process = ["fr_gsd", "en_ewt", "it_isdt"]
268
+ treebanks_to_process = [t for t in treebanks_to_process if t in metadata]
269
+ if verbose:
270
+ print(f"TEST MODE: Processing {len(treebanks_to_process)} treebanks")
271
+ elif args.treebanks:
272
+ # User-specified treebanks
273
+ treebanks_to_process = [t.strip() for t in args.treebanks.split(",")]
274
+ treebanks_to_process = [t for t in treebanks_to_process if t in metadata]
275
+ if verbose:
276
+ print(f"Processing {len(treebanks_to_process)} specified treebanks")
277
+ else:
278
+ # All treebanks
279
+ treebanks_to_process = sorted(metadata.keys())
280
+ if verbose:
281
+ print(f"Processing all {len(treebanks_to_process)} treebanks")
282
+
283
+ if verbose:
284
+ print()
285
+
286
+ # Process treebanks
287
+ success_count = 0
288
+ fail_count = 0
289
+
290
+ for i, name in enumerate(treebanks_to_process, 1):
291
+ if verbose:
292
+ print(f"[{i}/{len(treebanks_to_process)}] {name}")
293
+
294
+ try:
295
+ success = generate_parquet_for_treebank(
296
+ name,
297
+ metadata[name],
298
+ PARQUET_OUTPUT_DIR,
299
+ verbose=verbose
300
+ )
301
+
302
+ if success:
303
+ success_count += 1
304
+ else:
305
+ fail_count += 1
306
+
307
+ except Exception as e:
308
+ print(f" Error: {e}", file=sys.stderr)
309
+ fail_count += 1
310
+
311
+ if verbose:
312
+ print()
313
+
314
+ # Summary
315
+ if verbose:
316
+ print("=" * 60)
317
+ print(f"Completed: {success_count} successful, {fail_count} failed")
318
+ print(f"Total output size: {sum(f.stat().st_size for f in PARQUET_OUTPUT_DIR.rglob('*.parquet')) / 1024 / 1024:.2f} MB")
319
+
320
+ return 0 if fail_count == 0 else 1
321
+
322
+
323
+ if __name__ == "__main__":
324
+ sys.exit(main())
tools/templates/universal_dependencies.tmpl CHANGED
@@ -106,6 +106,13 @@ class UniversalDependencies(datasets.GeneratorBasedBuilder):
106
  "deprel": datasets.Sequence(datasets.Value("string")),
107
  "deps": datasets.Sequence(datasets.Value("string")),
108
  "misc": datasets.Sequence(datasets.Value("string")),
 
 
 
 
 
 
 
109
  }
110
  ),
111
  supervised_keys=None,
@@ -157,7 +164,20 @@ class UniversalDependencies(datasets.GeneratorBasedBuilder):
157
  else:
158
  idx = id
159
 
160
- tokens = [token["form"] for token in sent]
 
 
 
 
 
 
 
 
 
 
 
 
 
161
 
162
  if "text" in sent.metadata:
163
  txt = sent.metadata["text"]
@@ -167,14 +187,15 @@ class UniversalDependencies(datasets.GeneratorBasedBuilder):
167
  yield id, {
168
  "idx": str(idx),
169
  "text": txt,
170
- "tokens": [token["form"] for token in sent],
171
- "lemmas": [token["lemma"] for token in sent],
172
- "upos": [token["upos"] for token in sent],
173
- "xpos": [token["xpos"] for token in sent],
174
- "feats": [str(token["feats"]) for token in sent],
175
- "head": [str(token["head"]) for token in sent],
176
- "deprel": [str(token["deprel"]) for token in sent],
177
- "deps": [str(token["deps"]) for token in sent],
178
- "misc": [str(token["misc"]) for token in sent],
 
179
  }
180
  id += 1
 
106
  "deprel": datasets.Sequence(datasets.Value("string")),
107
  "deps": datasets.Sequence(datasets.Value("string")),
108
  "misc": datasets.Sequence(datasets.Value("string")),
109
+ "mwt": datasets.Sequence(
110
+ {
111
+ "id": datasets.Value("string"),
112
+ "form": datasets.Value("string"),
113
+ "misc": datasets.Value("string"),
114
+ }
115
+ ),
116
  }
117
  ),
118
  supervised_keys=None,
 
164
  else:
165
  idx = id
166
 
167
+ # Extract Multi-Word Tokens (MWTs) - tokens with tuple IDs like (1, '-', 2)
168
+ mwts = []
169
+ for token in sent:
170
+ if isinstance(token["id"], tuple): # MWT line (e.g., "1-2")
171
+ mwts.append({
172
+ "id": f"{token['id'][0]}-{token['id'][2]}",
173
+ "form": token["form"],
174
+ "misc": str(token["misc"]) if token["misc"] else ""
175
+ })
176
+
177
+ # Filter to syntactic words only (exclude MWTs and empty nodes)
178
+ sent_filtered = sent.filter(id=lambda x: type(x) is int)
179
+
180
+ tokens = [token["form"] for token in sent_filtered]
181
 
182
  if "text" in sent.metadata:
183
  txt = sent.metadata["text"]
 
187
  yield id, {
188
  "idx": str(idx),
189
  "text": txt,
190
+ "tokens": tokens,
191
+ "lemmas": [token["lemma"] for token in sent_filtered],
192
+ "upos": [token["upos"] for token in sent_filtered],
193
+ "xpos": [token["xpos"] for token in sent_filtered],
194
+ "feats": [str(token["feats"]) for token in sent_filtered],
195
+ "head": [str(token["head"]) for token in sent_filtered],
196
+ "deprel": [str(token["deprel"]) for token in sent_filtered],
197
+ "deps": [str(token["deps"]) for token in sent_filtered],
198
+ "misc": [str(token["misc"]) for token in sent_filtered],
199
+ "mwt": mwts,
200
  }
201
  id += 1