|
|
# Migration Guide: v1.x → v2.0 |
|
|
|
|
|
This guide helps you migrate from Universal Dependencies dataset loader v1.x to v2.0. |
|
|
|
|
|
## What's New in v2.0 |
|
|
|
|
|
**Architecture Changes:** |
|
|
- **Parquet format**: Native support with datasets >=4.0.0 (5-10x faster loading) |
|
|
- **No Python script**: Dataset no longer requires `trust_remote_code=True` |
|
|
- **External helper library**: CoNLL-U processing utilities moved to [`ud-hf-parquet-tools`](https://github.com/bot-zen/ud-hf-parquet-tools) |
|
|
|
|
|
**Data Changes:** |
|
|
- **MWT bug fix**: Token sequences now correctly exclude Multi-Word Token surface forms |
|
|
- **MWT field added**: New structured `mwt` field with Multi-Word Token information |
|
|
- **Enhanced metadata**: Includes `num_fused` (MWT counts) in statistics |
|
|
|
|
|
## Quick Start |
|
|
|
|
|
### For Users with datasets >=4.0.0 |
|
|
|
|
|
No code changes needed! Parquet files load automatically: |
|
|
|
|
|
```python
from datasets import load_dataset

# v2.0: Works seamlessly with datasets >=4.0.0
dataset = load_dataset("commul/universal_dependencies", "fr_gsd")
# Automatically uses Parquet format (fast, secure)
```
|
|
|
|
|
### For Users with datasets <4.0.0 |
|
|
|
|
|
**Option 1: Upgrade datasets (Recommended)** |
|
|
|
|
|
```bash
pip install --upgrade "datasets>=4.0.0"
```
|
|
|
|
|
**Option 2: Continue using v1.x** |
|
|
|
|
|
```python
# v1.x: Requires trust_remote_code
dataset = load_dataset("commul/universal_dependencies", "fr_gsd", trust_remote_code=True, revision="v1.0")
```
|
|
|
|
|
## Breaking Changes |
|
|
|
|
|
### 1. Token Sequences Now Exclude MWT Forms |
|
|
|
|
|
**Impact:** Token counts and sequences have changed for treebanks with Multi-Word Tokens (MWTs). |
|
|
|
|
|
**What Changed:** |
|
|
- v1.x incorrectly included MWT surface forms in token sequences |
|
|
- v2.0 correctly excludes them, matching UD guidelines |
|
|
|
|
|
**Example (French "Elle mange des pommes.", where "des" → "de" + "les"):**
|
|
|
|
|
```python
# v1.x (BUGGY):
{
    "tokens": ["Elle", "mange", "des", "de", "les", "pommes", "."],  # WRONG: "des" included
    "lemmas": ["elle", "manger", "_", "de", "le", "pomme", "."],
    "upos": ["PRON", "VERB", "_", "ADP", "DET", "NOUN", "PUNCT"],
}

# v2.0 (CORRECT):
{
    "tokens": ["Elle", "mange", "de", "les", "pommes", "."],  # CORRECT: only syntactic words
    "lemmas": ["elle", "manger", "de", "le", "pomme", "."],
    "upos": ["PRON", "VERB", "ADP", "DET", "NOUN", "PUNCT"],
    "mwt": [{"id": "3-4", "form": "des", "misc": ""}],  # MWT info preserved
}
```
|
|
|
|
|
**Affected Treebanks (50+):** |
|
|
|
|
|
Languages with common MWTs include (a quick way to check any config is sketched after this list):

- **French** (fr_*): du, au, des, aux (~2-5% of tokens)
- **Italian** (it_*): del, della, nel, alla (~1-3%)
- **Portuguese** (pt_*): do, da, no, pelo (~2-4%)
- **Spanish** (es_*): del, al (~0.5-1%)
- **Arabic** (ar_*): various clitics (~1-2%)
- **German** (de_*): zum, vom, am (~0.1-0.5%)
- **Catalan** (ca_*): del, al, pels (~1-2%)
- **Indonesian** (id_*): reduplications (~0.1%)
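
If you are unsure whether a specific config you rely on is affected, you can simply scan a split for non-empty `mwt` fields. This is a minimal sketch against the v2.0 schema (the config names below are just examples; it makes a full pass over the split):

```python
from datasets import load_dataset

def treebank_has_mwts(config: str, split: str = "train") -> bool:
    """Return True if any sentence in this config/split carries MWT entries."""
    ds = load_dataset("commul/universal_dependencies", config, split=split)
    return any(ex["mwt"] for ex in ds)

print(treebank_has_mwts("fr_gsd"))  # True: French contractions (du, des, ...)
print(treebank_has_mwts("en_ewt"))  # False: English-EWT has no MWTs
```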
|
|
|
|
|
**Action Required:** |
|
|
|
|
|
If your code assumes specific token counts or positions: |
|
|
|
|
|
```python
# v1.x code that might break:
def get_third_token(example):
    return example["tokens"][2]  # May return a different token in v2.0

# Migration fix:
def get_third_syntactic_word(example):
    # v2.0: This is now correct - gets the 3rd syntactic word
    return example["tokens"][2]

def get_third_surface_token(example):
    # v2.0: If you need surface forms, reconstruct from MWTs
    tokens = example["tokens"][:]
    mwts = example["mwt"]

    # Splice each MWT surface form over the syntactic words it covers;
    # word ids are 1-based, so id "3-4" maps to tokens[2:4]
    for mwt in reversed(mwts):  # Process in reverse to keep earlier indices valid
        start, end = map(int, mwt["id"].split("-"))
        tokens[start - 1:end] = [mwt["form"]]

    return tokens[2]
```
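
As a sanity check, the reconstruction can be run on the French example from earlier (the dict below just restates that v2.0 snippet):

```python
example = {
    "tokens": ["Elle", "mange", "de", "les", "pommes", "."],
    "mwt": [{"id": "3-4", "form": "des", "misc": ""}],
}

print(get_third_surface_token(example))  # "des" - the surface form spanning words 3-4
```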
|
|
|
|
|
### 2. New Schema Field: `mwt` |
|
|
|
|
|
**Impact:** Dataset schema now includes an `mwt` field. |
|
|
|
|
|
**What Changed:** |
|
|
- Added: `mwt` field containing structured MWT information |
|
|
- Schema: `[{"id": "1-2", "form": "surface_form", "misc": "metadata"}]` |
|
|
- Empty list for treebanks without MWTs |
|
|
|
|
|
**Example Usage:** |
|
|
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")

# Access MWT information
for example in dataset:
    if example["mwt"]:  # Has MWTs
        for mwt in example["mwt"]:
            print(f"MWT {mwt['id']}: {mwt['form']}")
            # Extract range
            start, end = map(int, mwt["id"].split("-"))
            syntactic_words = example["tokens"][start - 1:end]
            print(f" → {' + '.join(syntactic_words)}")

# Output example:
# MWT 2-3: des
# → de + les
```
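
The `id` range string is parsed the same way in several snippets in this guide, so a tiny helper may be worth keeping around (a hypothetical convenience, not part of the dataset or `ud-hf-parquet-tools`):

```python
def mwt_range(mwt: dict) -> tuple[int, int]:
    """Parse an MWT id like "2-3" into inclusive, 1-based (start, end) word ids."""
    start, end = map(int, mwt["id"].split("-"))
    return start, end
```

With it, `start, end = mwt_range(mwt)` replaces the repeated `map(int, mwt["id"].split("-"))` calls.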
|
|
|
|
|
**Research Use Cases:** |
|
|
|
|
|
```python
from datasets import load_dataset

# Count MWTs per treebank
def count_mwts(dataset):
    return sum(len(ex["mwt"]) for ex in dataset)

# Analyze MWT patterns
def analyze_mwt_patterns(dataset):
    patterns = {}
    for ex in dataset:
        for mwt in ex["mwt"]:
            form = mwt["form"]
            patterns[form] = patterns.get(form, 0) + 1
    return patterns

fr_gsd = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")
print(analyze_mwt_patterns(fr_gsd))
# Output: {'des': 1234, 'du': 987, 'au': 654, 'aux': 321, ...}
```
|
|
|
|
|
### 3. Requires datasets >=4.0.0 |
|
|
|
|
|
**Impact:** Python script loaders are no longer supported in datasets >=4.0.0.
|
|
|
|
|
**What Changed:** |
|
|
- v1.x: Uses Python script with `trust_remote_code=True` |
|
|
- v2.0: Uses Parquet format (no remote code execution) |
|
|
|
|
|
**Security Benefit:** |
|
|
- No arbitrary code execution from dataset loading |
|
|
- Parquet files are data-only, sandboxed |
|
|
- Aligns with HuggingFace security policies |
|
|
|
|
|
**Migration:** |
|
|
|
|
|
```bash
# Check your datasets version
python -c "import datasets; print(datasets.__version__)"

# Upgrade if needed
pip install --upgrade "datasets>=4.0.0"
```
|
|
|
|
|
If you cannot upgrade datasets: |
|
|
|
|
|
```python
# Use v1.x with revision pinning
dataset = load_dataset(
    "commul/universal_dependencies",
    "fr_gsd",
    trust_remote_code=True,
    revision="v1.0",  # Pin to v1.x
)
```
|
|
|
|
|
## Helper Functions Moved to External Library |
|
|
|
|
|
**Important:** Helper functions for CoNLL-U processing are now in a separate package. |
|
|
|
|
|
### What Moved |
|
|
|
|
|
The following functions are **no longer part of the dataset**: |
|
|
- `parse_feats()`, `parse_misc()`, `parse_deps()` - Parse CoNLL-U field strings |
|
|
- `write_conllu()`, `example_to_conllu()` - Export data to CoNLL-U format |
|
|
- Various internal conversion utilities |
|
|
|
|
|
### How to Access Helper Functions |
|
|
|
|
|
If you need CoNLL-U processing utilities, install the external library: |
|
|
|
|
|
```bash
pip install ud-hf-parquet-tools
```
|
|
|
|
|
Then import from the package: |
|
|
|
|
|
```python
from datasets import load_dataset
from ud_hf_parquet_tools import parse_feats, parse_misc, write_conllu

# Load dataset
ds = load_dataset("commul/universal_dependencies", "en_ewt", split="train")

# Parse optional fields
sentence = ds[0]
for i, token in enumerate(sentence['tokens']):
    feats = parse_feats(sentence['feats'][i])  # Returns a dict ({} if the field is empty)
    misc = parse_misc(sentence['misc'][i])  # Returns a dict ({} if the field is empty)
    print(f"{token}: UPOS={sentence['upos'][i]}, feats={feats}, misc={misc}")

# Export back to CoNLL-U format
write_conllu(ds, "output.conllu")
```
|
|
|
|
|
**Library Documentation:** https://github.com/bot-zen/ud-hf-parquet-tools |
|
|
|
|
|
### If You Don't Need Helper Functions |
|
|
|
|
|
Most users only need the dataset itself and can work directly with the fields: |
|
|
|
|
|
```python
from datasets import load_dataset

ds = load_dataset("commul/universal_dependencies", "en_ewt", split="train")

# Access data directly
sentence = ds[0]
print(f"Tokens: {sentence['tokens']}")
print(f"POS tags: {sentence['upos']}")
print(f"Dependencies: {sentence['deprel']}")

# FEATS and MISC are strings in CoNLL-U format
print(f"Features (raw): {sentence['feats'][0]}")  # e.g., "Case=Nom|Number=Sing"
print(f"Misc (raw): {sentence['misc'][0]}")  # e.g., "SpaceAfter=No"

# Parse manually if needed (simple cases)
feats_str = sentence['feats'][0]
if feats_str and feats_str != "_":  # "_" is the CoNLL-U marker for an empty field
    feats_dict = dict(kv.split('=') for kv in feats_str.split('|'))
    print(f"Features (parsed): {feats_dict}")
```
|
|
|
|
|
## New Features in v2.0 |
|
|
|
|
|
### 1. Parquet Format (5-10x Faster Loading) |
|
|
|
|
|
```python
# v1.x: Downloads CoNLL-U, parses on-the-fly (~10-30 seconds)
dataset = load_dataset("commul/universal_dependencies", "fr_gsd", trust_remote_code=True)

# v2.0: Loads pre-processed Parquet (~1-3 seconds)
dataset = load_dataset("commul/universal_dependencies", "fr_gsd")
```
|
|
|
|
|
**Benefits:** |
|
|
- Much faster loading (especially for large treebanks) |
|
|
- Lower memory usage |
|
|
- Better compression |
|
|
- Native support in datasets >=4.0.0 |
|
|
|
|
|
### 2. Multi-Word Token (MWT) Information |
|
|
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")

# Find sentences with MWTs
sentences_with_mwts = [ex for ex in dataset if ex["mwt"]]
print(f"Sentences with MWTs: {len(sentences_with_mwts)}/{len(dataset)}")

# Analyze MWT complexity: an MWT with id "a-b" spans (b - a + 1) words,
# so 3+ words means end - start >= 2
complex_mwts = [ex for ex in dataset if any(
    int(mwt["id"].split("-")[1]) - int(mwt["id"].split("-")[0]) >= 2
    for mwt in ex["mwt"]
)]
print(f"Sentences with 3+ word MWTs: {len(complex_mwts)}")
```
|
|
|
|
|
### 3. Enhanced Metadata |
|
|
|
|
|
```python
# Load dataset info
from datasets import load_dataset_builder

builder = load_dataset_builder("commul/universal_dependencies", "fr_gsd")
info = builder.info

# Now includes MWT statistics
print(info.description)  # Contains num_fused counts
```
|
|
|
|
|
## Verification Steps |
|
|
|
|
|
### 1. Verify Token Counts Match UD Stats |
|
|
|
|
|
```python
from datasets import load_dataset
import json

# Load dataset and metadata
dataset = load_dataset("commul/universal_dependencies", "fr_gsd", split="train")

# Count syntactic words
word_count = sum(len(ex["tokens"]) for ex in dataset)

# Load metadata (if available)
with open("metadata.json") as f:
    metadata = json.load(f)

expected_words = int(metadata["fr_gsd"]["splits"]["train"]["num_words"])
print(f"Dataset words: {word_count}")
print(f"Expected words: {expected_words}")
print(f"Match: {word_count == expected_words}")

# This should be True in v2.0 (was False in v1.x for MWT treebanks)
```
|
|
|
|
|
### 2. Verify MWT Extraction |
|
|
|
|
|
```python
# Continues step 1: `dataset` and `metadata` are already loaded

# Count MWTs
mwt_count = sum(len(ex["mwt"]) for ex in dataset)

expected_mwts = int(metadata["fr_gsd"]["splits"]["train"]["num_fused"])
print(f"Dataset MWTs: {mwt_count}")
print(f"Expected MWTs: {expected_mwts}")
print(f"Match: {mwt_count == expected_mwts}")
```
|
|
|
|
|
### 3. Compare v1.x vs v2.0 Output |
|
|
|
|
|
```python
from datasets import load_dataset

# Load both versions (if v1.x is still available to you)
v1 = load_dataset("commul/universal_dependencies", "en_ewt", split="test[:10]", revision="v1.0", trust_remote_code=True)
v2 = load_dataset("commul/universal_dependencies", "en_ewt", split="test[:10]")

# English-EWT has no MWTs, so the two should be identical except for the new field
for i in range(10):
    assert v1[i]["tokens"] == v2[i]["tokens"], f"Example {i} differs"
    assert v2[i]["mwt"] == [], f"Example {i} has unexpected MWTs"

print("✓ English-EWT unchanged (no MWTs)")

# French-GSD has MWTs, so v2.0 will differ
v1_fr = load_dataset("commul/universal_dependencies", "fr_gsd", split="test[:10]", revision="v1.0", trust_remote_code=True)
v2_fr = load_dataset("commul/universal_dependencies", "fr_gsd", split="test[:10]")

# v1.x token count includes MWT surface forms (WRONG)
v1_token_count = sum(len(ex["tokens"]) for ex in v1_fr)

# v2.0 token count excludes them (CORRECT)
v2_token_count = sum(len(ex["tokens"]) for ex in v2_fr)

print(f"v1.x French token count: {v1_token_count} (includes MWT forms)")
print(f"v2.0 French token count: {v2_token_count} (syntactic words only)")
print(f"Difference: {v1_token_count - v2_token_count} MWT forms removed")
```
|
|
|
|
|
## Common Issues |
|
|
|
|
|
### Issue 1: "Dataset script not supported" Error |
|
|
|
|
|
**Error:** |
|
|
```
RuntimeError: Dataset scripts are no longer supported
```
|
|
|
|
|
**Cause:** Using datasets >=4.0.0 with v1.x loader |
|
|
|
|
|
**Solution:** |
|
|
```bash
pip install --upgrade "datasets>=4.0.0"
# Then use v2.0 (Parquet-based)
```
|
|
|
|
|
### Issue 2: Token Count Mismatch |
|
|
|
|
|
**Issue:** Your code expects specific token counts that changed in v2.0 |
|
|
|
|
|
**Solution:** Update your code to use `num_words` from metadata instead of `num_tokens` |
|
|
|
|
|
```python
# v1.x: Used num_tokens (WRONG for MWT treebanks)
expected_count = metadata["splits"]["train"]["num_tokens"]

# v2.0: Use num_words (CORRECT)
expected_count = metadata["splits"]["train"]["num_words"]
```
|
|
|
|
|
### Issue 3: MWT Field Not Found (v1.x Code) |
|
|
|
|
|
**Issue:** Old code doesn't handle the new `mwt` field |
|
|
|
|
|
**Solution:** Gracefully handle the field or upgrade |
|
|
|
|
|
```python
# Backwards-compatible code
tokens = example["tokens"]
mwts = example.get("mwt", [])  # Empty list if not present
```
|
|
|
|
|
### Issue 4: Helper Function Import Errors |
|
|
|
|
|
**Error:** |
|
|
```python
from universal_dependencies import parse_feats
# ImportError: No module named 'universal_dependencies'
```
|
|
|
|
|
**Cause:** Helper functions moved to separate library |
|
|
|
|
|
**Solution:** |
|
|
```bash
# Install the helper library
pip install ud-hf-parquet-tools
```

Then update your imports:

```python
from ud_hf_parquet_tools import parse_feats, parse_misc, write_conllu
```
|
|
|
|
|
Or work with raw strings directly (see "If You Don't Need Helper Functions" section above). |
|
|
|
|
|
## Support |
|
|
|
|
|
If you encounter issues during migration: |
|
|
|
|
|
1. Check the [CHANGELOG.md](CHANGELOG.md) for detailed changes |
|
|
2. Review the [README.md](README.md) for updated examples |
|
|
3. Helper library documentation: https://github.com/bot-zen/ud-hf-parquet-tools |
|
|
4. Report issues at: https://huggingface.co/datasets/commul/universal_dependencies/discussions |
|
|
|
|
|
## Summary |
|
|
|
|
|
**Key Takeaways:** |
|
|
|
|
|
✅ **v2.0 is more correct:** Fixes critical MWT bug |
|
|
✅ **v2.0 is faster:** Parquet format is 5-10x quicker |
|
|
✅ **v2.0 is more secure:** No remote code execution |
|
|
✅ **v2.0 adds features:** MWT information now available |
|
|
✅ **v2.0 is modular:** Helper functions available as separate library |
|
|
|
|
|
**Migration Checklist:** |
|
|
|
|
|
- [ ] Upgrade to datasets >=4.0.0 |
|
|
- [ ] Test your code with v2.0 data |
|
|
- [ ] Update token count expectations (if using MWT treebanks) |
|
|
- [ ] Update any hard-coded token indices (if applicable) |
|
|
- [ ] If using helper functions: Install `ud-hf-parquet-tools` and update imports |
|
|
- [ ] If exporting to CoNLL-U: Use `write_conllu()` from `ud-hf-parquet-tools` |
|
|
- [ ] Use the new `mwt` field for research (optional)
|
|
|
|
|
**Estimated Migration Time:** |
|
|
- Basic usage: 15-30 minutes |
|
|
- With helper functions: +10 minutes (install library, update imports) |
|
|
|
|
|
**Resources:** |
|
|
- Dataset repository: https://huggingface.co/datasets/commul/universal_dependencies |
|
|
- Helper library: https://github.com/bot-zen/ud-hf-parquet-tools |
|
|
- CHANGELOG: [CHANGELOG.md](CHANGELOG.md) |
|
|
|