# Migration Guide: v1.x → v2.0
This guide helps you migrate from Universal Dependencies dataset loader v1.x to v2.0.
## What's New in v2.0
**Architecture Changes:**
- **Parquet format**: Native support with datasets >=4.0.0 (5-10x faster loading)
- **No Python script**: Dataset no longer requires `trust_remote_code=True`
- **External helper library**: CoNLL-U processing utilities moved to [`ud-hf-parquet-tools`](https://github.com/bot-zen/ud-hf-parquet-tools)
**Data Changes:**
- **MWT bug fix**: Token sequences now correctly exclude Multi-Word Token surface forms
- **MWT field added**: New structured `mwt` field with Multi-Word Token information
- **Enhanced metadata**: Includes `num_fused` (MWT counts) in statistics
## Quick Start
### For Users with datasets >=4.0.0
No code changes needed! Parquet files load automatically:
```python
from datasets import load_dataset
# v2.0: Works seamlessly with datasets >=4.0.0
dataset = load_dataset("commul/universal_dependencies", "fr_gsd")
# Automatically uses Parquet format (fast, secure)
```
### For Users with datasets <4.0.0
**Option 1: Upgrade datasets (Recommended)**
```bash
pip install --upgrade "datasets>=4.0.0"
```
**Option 2: Continue using v1.x**
```python
# v1.x: Requires trust_remote_code
dataset = load_dataset("commul/universal_dependencies", "fr_gsd", trust_remote_code=True, revision="v1.0")
```
## Breaking Changes
### 1. Token Sequences Now Exclude MWT Forms
**Impact:** Token counts and sequences have changed for treebanks with Multi-Word Tokens (MWTs).
**What Changed:**
- v1.x incorrectly included MWT surface forms in token sequences
- v2.0 correctly excludes them, matching UD guidelines
**Example (French "des" → "de" + "les"):**
```python
# v1.x (BUGGY):
{
"tokens": ["Elle", "des", "de", "les", "pommes", "."], # WRONG: "des" included
"lemmas": ["elle", "_", "de", "le", "pomme", "."],
"upos": ["PRON", "_", "ADP", "DET", "NOUN", "PUNCT"],
}
# v2.0 (CORRECT):
{
"tokens": ["Elle", "de", "les", "pommes", "."], # CORRECT: only syntactic words
"lemmas": ["elle", "de", "le", "pomme", "."],
"upos": ["PRON", "ADP", "DET", "NOUN", "PUNCT"],
"mwt": [{"id": "2-3", "form": "des", "misc": ""}], # MWT info preserved
}
```
**Affected Treebanks (50+):**
Languages with common MWTs include:
- **French** (fr_*): du, au, des, aux (~2-5% of tokens)
- **Italian** (it_*): del, della, nel, alla (~1-3%)
- **Portuguese** (pt_*): do, da, no, pelo (~2-4%)
- **Spanish** (es_*): del, al (~0.5-1%)
- **Arabic** (ar_*): various clitics (~1-2%)
- **German** (de_*): zum, vom, am (~0.1-0.5%)
- **Catalan** (ca_*): del, al, pels (~1-2%)
- **Indonesian** (id_*): reduplications (~0.1%)
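To gauge how strongly a particular config is affected, you can measure MWT coverage directly on the v2.0 data. A minimal sketch (`mwt_coverage` is a hypothetical helper, not part of the dataset; the percentages above are approximate and vary by split):
```python
from datasets import load_dataset

def mwt_coverage(config: str, split: str = "train") -> float:
    """Fraction of syntactic words that fall inside an MWT range."""
    ds = load_dataset("commul/universal_dependencies", config, split=split)
    covered = sum(
        int(mwt["id"].split("-")[1]) - int(mwt["id"].split("-")[0]) + 1
        for ex in ds
        for mwt in ex["mwt"]
    )
    total = sum(len(ex["tokens"]) for ex in ds)
    return covered / total if total else 0.0

print(f"fr_gsd: {mwt_coverage('fr_gsd'):.2%} of words inside MWTs")
```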
**Action Required:**
If your code assumes specific token counts or positions:
```python
# v1.x code that might break:
def get_third_token(example):
    return example["tokens"][2]  # May return a different token in v2.0

# Migration fix:
def get_third_syntactic_word(example):
    # v2.0: This is now correct - gets the 3rd syntactic word
    return example["tokens"][2]

def get_third_surface_token(example):
    # v2.0: If you need surface forms, reconstruct from MWTs
    tokens = example["tokens"][:]
    mwts = example["mwt"]
    # Insert MWT forms at the appropriate positions
    for mwt in reversed(mwts):  # Process in reverse to keep earlier indices valid
        start, end = map(int, mwt["id"].split("-"))
        tokens[start - 1:end] = [mwt["form"]]
    return tokens[2]
```
### 2. New Schema Field: `mwt`
**Impact:** Dataset schema now includes an `mwt` field.
**What Changed:**
- Added: `mwt` field containing structured MWT information
- Schema: `[{"id": "1-2", "form": "surface_form", "misc": "metadata"}]`
- Empty list for treebanks without MWTs
**Example Usage:**
```python
from datasets import load_dataset

dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")

# Access MWT information
for example in dataset:
    if example["mwt"]:  # Has MWTs
        for mwt in example["mwt"]:
            print(f"MWT {mwt['id']}: {mwt['form']}")
            # Extract the covered range of syntactic words
            start, end = map(int, mwt["id"].split("-"))
            syntactic_words = example["tokens"][start - 1:end]
            print(f"  → {' + '.join(syntactic_words)}")

# Output example:
# MWT 2-3: des
#   → de + les
```
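The `mwt` field combines with the per-token `misc` strings (which use the CoNLL-U `SpaceAfter=No` convention, shown later in this guide) to rebuild a sentence's raw surface text. A minimal sketch (`surface_text` is a hypothetical helper, not part of the dataset or the helper library):
```python
def surface_text(example) -> str:
    """Rebuild raw text: emit MWT surface forms over their ranges,
    plain tokens elsewhere, honoring SpaceAfter=No."""
    # Map each MWT's 1-based start index to its entry
    mwt_at = {int(m["id"].split("-")[0]): m for m in example["mwt"]}
    parts = []
    i = 1
    while i <= len(example["tokens"]):
        if i in mwt_at:
            m = mwt_at[i]
            form, misc = m["form"], m["misc"]
            i = int(m["id"].split("-")[1]) + 1  # skip the fused syntactic words
        else:
            form, misc = example["tokens"][i - 1], example["misc"][i - 1]
            i += 1
        parts.append(form)
        if "SpaceAfter=No" not in (misc or ""):
            parts.append(" ")
    return "".join(parts).rstrip()

print(surface_text(dataset[0]))
```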
**Research Use Cases:**
```python
# Count MWTs per treebank
def count_mwts(dataset):
    return sum(len(ex["mwt"]) for ex in dataset)

# Analyze MWT surface-form frequencies
def analyze_mwt_patterns(dataset):
    patterns = {}
    for ex in dataset:
        for mwt in ex["mwt"]:
            form = mwt["form"]
            patterns[form] = patterns.get(form, 0) + 1
    return patterns

fr_gsd = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")
print(analyze_mwt_patterns(fr_gsd))
# Output: {'des': 1234, 'du': 987, 'au': 654, 'aux': 321, ...}
```
### 3. Requires datasets >=4.0.0
**Impact:** datasets >=4.0.0 no longer supports script-based dataset loaders, so the v1.x loader cannot run there.
**What Changed:**
- v1.x: Uses Python script with `trust_remote_code=True`
- v2.0: Uses Parquet format (no remote code execution)
**Security Benefit:**
- No arbitrary code execution from dataset loading
- Parquet files contain only data, no executable code
- Aligns with HuggingFace security policies
**Migration:**
```bash
# Check your datasets version
python -c "import datasets; print(datasets.__version__)"
# Upgrade if needed
pip install --upgrade "datasets>=4.0.0"
```
If you cannot upgrade datasets:
```python
# Use v1.x with revision pinning
dataset = load_dataset(
    "commul/universal_dependencies",
    "fr_gsd",
    trust_remote_code=True,
    revision="v1.0",  # Pin to v1.x
)
```
## Helper Functions Moved to External Library
**Important:** Helper functions for CoNLL-U processing are now in a separate package.
### What Moved
The following functions are **no longer part of the dataset**:
- `parse_feats()`, `parse_misc()`, `parse_deps()` - Parse CoNLL-U field strings
- `write_conllu()`, `example_to_conllu()` - Export data to CoNLL-U format
- Various internal conversion utilities
### How to Access Helper Functions
If you need CoNLL-U processing utilities, install the external library:
```bash
pip install ud-hf-parquet-tools
```
Then import from the package:
```python
from datasets import load_dataset
from ud_hf_parquet_tools import parse_feats, parse_misc, write_conllu

# Load dataset
ds = load_dataset("commul/universal_dependencies", "en_ewt", split="train")

# Parse optional fields
sentence = ds[0]
for i, token in enumerate(sentence["tokens"]):
    feats = parse_feats(sentence["feats"][i])  # Returns dict or {}
    misc = parse_misc(sentence["misc"][i])     # Returns dict or {}
    print(f"{token}: UPOS={sentence['upos'][i]}, feats={feats}, misc={misc}")

# Export back to CoNLL-U format
write_conllu(ds, "output.conllu")
```
**Library Documentation:** https://github.com/bot-zen/ud-hf-parquet-tools
### If You Don't Need Helper Functions
Most users only need the dataset itself and can work directly with the fields:
```python
from datasets import load_dataset

ds = load_dataset("commul/universal_dependencies", "en_ewt", split="train")

# Access data directly
sentence = ds[0]
print(f"Tokens: {sentence['tokens']}")
print(f"POS tags: {sentence['upos']}")
print(f"Dependencies: {sentence['deprel']}")

# FEATS and MISC are strings in CoNLL-U format
print(f"Features (raw): {sentence['feats'][0]}")  # e.g., "Case=Nom|Number=Sing"
print(f"Misc (raw): {sentence['misc'][0]}")       # e.g., "SpaceAfter=No"

# Parse manually if needed (simple cases; "_" marks an empty field in CoNLL-U)
feats_str = sentence["feats"][0]
if feats_str and feats_str != "_":
    feats_dict = dict(kv.split("=", 1) for kv in feats_str.split("|"))
    print(f"Features (parsed): {feats_dict}")
```
## New Features in v2.0
### 1. Parquet Format (5-10x Faster Loading)
```python
# v1.x: Downloads CoNLL-U, parses on-the-fly (~10-30 seconds)
dataset = load_dataset("commul/universal_dependencies", "fr_gsd", trust_remote_code=True)
# v2.0: Loads pre-processed Parquet (~1-3 seconds)
dataset = load_dataset("commul/universal_dependencies", "fr_gsd")
```
**Benefits:**
- Much faster loading (especially for large treebanks)
- Lower memory usage
- Better compression
- Native support in datasets >=4.0.0
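To sanity-check the speedup on your own machine, a rough timing sketch (the first call includes download time, so run it twice and compare cached loads; the numbers are indicative only):
```python
import time
from datasets import load_dataset

start = time.perf_counter()
ds = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")
elapsed = time.perf_counter() - start
print(f"Loaded {len(ds)} sentences in {elapsed:.1f}s")
```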
### 2. Multi-Word Token (MWT) Information
```python
dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")

# Find sentences with MWTs
sentences_with_mwts = [ex for ex in dataset if ex["mwt"]]
print(f"Sentences with MWTs: {len(sentences_with_mwts)}/{len(dataset)}")

# Analyze MWT complexity (an MWT spanning 3+ words has end - start >= 2)
complex_mwts = [
    ex for ex in dataset
    if any(
        int(mwt["id"].split("-")[1]) - int(mwt["id"].split("-")[0]) >= 2
        for mwt in ex["mwt"]
    )
]
print(f"Sentences with 3+ word MWTs: {len(complex_mwts)}")
```
### 3. Enhanced Metadata
```python
# Load dataset info
from datasets import load_dataset_builder
builder = load_dataset_builder("commul/universal_dependencies", "fr_gsd")
info = builder.info
# Now includes MWT statistics
print(info.description) # Contains num_fused counts
```
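Split sizes are also available programmatically through the standard `datasets` builder API (assuming split metadata is populated for the config, as it normally is for Parquet datasets on the Hub):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("commul/universal_dependencies", "fr_gsd")
# builder.info.splits maps split names to their metadata, if available
for name, split_info in (builder.info.splits or {}).items():
    print(f"{name}: {split_info.num_examples} sentences")
```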
## Verification Steps
### 1. Verify Token Counts Match UD Stats
```python
from datasets import load_dataset
import json
# Load the dataset
dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")

# Count syntactic words
word_count = sum(len(ex["tokens"]) for ex in dataset)

# Load metadata (if available)
with open("metadata.json") as f:
    metadata = json.load(f)
expected_words = int(metadata["fr_gsd"]["splits"]["train"]["num_words"])

print(f"Dataset words: {word_count}")
print(f"Expected words: {expected_words}")
print(f"Match: {word_count == expected_words}")
# This should be True in v2.0 (it was False in v1.x for MWT treebanks)
```
### 2. Verify MWT Extraction
```python
# Count MWTs
mwt_count = sum(len(ex["mwt"]) for ex in dataset)
expected_mwts = int(metadata["fr_gsd"]["splits"]["train"]["num_fused"])
print(f"Dataset MWTs: {mwt_count}")
print(f"Expected MWTs: {expected_mwts}")
print(f"Match: {mwt_count == expected_mwts}")
```
### 3. Compare v1.x vs v2.0 Output
```python
# Load both versions (if v1.x is still available)
v1 = load_dataset(
    "commul/universal_dependencies", "en_ewt",
    split="test[:10]", revision="v1.0", trust_remote_code=True,
)
v2 = load_dataset("commul/universal_dependencies", "en_ewt", split="test[:10]")

# English-EWT has no MWTs, so examples should be identical apart from the new field
for i in range(10):
    assert v1[i]["tokens"] == v2[i]["tokens"], f"Example {i} differs"
    assert v2[i]["mwt"] == [], f"Example {i} has unexpected MWTs"
print("✓ English-EWT unchanged (no MWTs)")

# French-GSD has MWTs, so v2.0 will differ
v1_fr = load_dataset(
    "commul/universal_dependencies", "fr_gsd",
    split="test[:10]", revision="v1.0", trust_remote_code=True,
)
v2_fr = load_dataset("commul/universal_dependencies", "fr_gsd", split="test[:10]")

# v1.x token count includes MWT surface forms (WRONG)
v1_token_count = sum(len(ex["tokens"]) for ex in v1_fr)
# v2.0 token count excludes them (CORRECT)
v2_token_count = sum(len(ex["tokens"]) for ex in v2_fr)

print(f"v1.x French token count: {v1_token_count} (includes MWT forms)")
print(f"v2.0 French token count: {v2_token_count} (syntactic words only)")
print(f"Difference: {v1_token_count - v2_token_count} MWT forms removed")
```
## Common Issues
### Issue 1: "Dataset script not supported" Error
**Error:**
```
RuntimeError: Dataset scripts are no longer supported
```
**Cause:** Requesting the script-based v1.x loader (e.g., `trust_remote_code=True` with `revision="v1.0"`) under datasets >=4.0.0, which no longer runs dataset scripts
**Solution:** Load v2.0 (the default revision) instead; drop `trust_remote_code=True` and the `revision` pin:
```python
dataset = load_dataset("commul/universal_dependencies", "fr_gsd")  # v2.0, Parquet-based
```
### Issue 2: Token Count Mismatch
**Issue:** Your code expects specific token counts that changed in v2.0
**Solution:** Update your code to use `num_words` from metadata instead of `num_tokens`
```python
# v1.x: Used num_tokens (WRONG for MWT treebanks)
expected_count = metadata["splits"]["train"]["num_tokens"]
# v2.0: Use num_words (CORRECT)
expected_count = metadata["splits"]["train"]["num_words"]
```
### Issue 3: MWT Field Not Found (v1.x Code)
**Issue:** Old code doesn't handle the new `mwt` field
**Solution:** Gracefully handle the field or upgrade
```python
# Backwards compatible code
tokens = example["tokens"]
mwts = example.get("mwt", []) # Empty list if not present
```
### Issue 4: Helper Function Import Errors
**Error:**
```python
from universal_dependencies import parse_feats
# ImportError: No module named 'universal_dependencies'
```
**Cause:** Helper functions moved to separate library
**Solution:**
```bash
# Install the helper library
pip install ud-hf-parquet-tools
```
Then update your imports:
```python
from ud_hf_parquet_tools import parse_feats, parse_misc, write_conllu
```
Or work with raw strings directly (see "If You Don't Need Helper Functions" section above).
## Support
If you encounter issues during migration:
1. Check the [CHANGELOG.md](CHANGELOG.md) for detailed changes
2. Review the [README.md](README.md) for updated examples
3. Helper library documentation: https://github.com/bot-zen/ud-hf-parquet-tools
4. Report issues at: https://huggingface.co/datasets/commul/universal_dependencies/discussions
## Summary
**Key Takeaways:**
✅ **v2.0 is more correct:** Fixes critical MWT bug
✅ **v2.0 is faster:** Parquet format is 5-10x quicker
✅ **v2.0 is more secure:** No remote code execution
✅ **v2.0 adds features:** MWT information now available
✅ **v2.0 is modular:** Helper functions available as separate library
**Migration Checklist:**
- [ ] Upgrade to datasets >=4.0.0
- [ ] Test your code with v2.0 data
- [ ] Update token count expectations (if using MWT treebanks)
- [ ] Update any hard-coded token indices (if applicable)
- [ ] If using helper functions: Install `ud-hf-parquet-tools` and update imports
- [ ] If exporting to CoNLL-U: Use `write_conllu()` from `ud-hf-parquet-tools`
- [ ] Utilize new MWT field for research (optional)
**Estimated Migration Time:**
- Basic usage: 15-30 minutes
- With helper functions: +10 minutes (install library, update imports)
**Resources:**
- Dataset repository: https://huggingface.co/datasets/commul/universal_dependencies
- Helper library: https://github.com/bot-zen/ud-hf-parquet-tools
- CHANGELOG: [CHANGELOG.md](CHANGELOG.md)