# Adding a New Universal Dependencies Version

This guide explains how to add a new Universal Dependencies release (e.g., UD 2.18, 2.19, etc.) to the `commul/universal_dependencies` HuggingFace dataset.

**Quick reference:** See [tools/README.md](tools/README.md) for concise commands and script documentation.

## Prerequisites

- Git repository cloned and up to date
- Python 3.12+ with `uv` installed
- Dependencies installed:

  ```bash
  pip install ud-hf-parquet-tools pyyaml python-dotenv jinja2
  ```

- Access to push to `commul/universal_dependencies` on HuggingFace Hub
- `huggingface-cli` installed and authenticated:

  ```bash
  pip install huggingface_hub
  huggingface-cli login
  ```

## Overview

Each UD version (2.7, 2.8, ..., 2.17, 2.18, ...) has its own git branch. The dataset uses the Parquet format (v2.0 architecture) for all versions. When a new UD release is published, you create a new branch and run the generation pipeline.

**Architecture:**

- **No Python script loader**: the dataset uses Parquet files only (datasets >=4.0.0)
- **External tools**: helper functions live in the separate `ud-hf-parquet-tools` library
- **Blocked treebanks**: some treebanks are excluded due to license restrictions

## Step-by-Step Guide

### 1. Check for New UD Release

Visit [Universal Dependencies releases](https://universaldependencies.org/) to check for new versions. For this example, we'll add **UD 2.18** (replace with the actual version).

### 2. Create New Branch

```bash
# Ensure you're on the latest main branch
git checkout main
git pull origin main

# Create new branch for UD version
git checkout -b 2.18

# Alternatively, branch from latest UD version if main is stale
git checkout 2.17
git pull origin 2.17
git checkout -b 2.18
```

**Why branching?** Each UD version is maintained independently, allowing users to load specific versions via `revision="2.18"`.
### 3. Update Environment Configuration

```bash
cd tools

# Set the version number
export NEW_VER=2.18
echo "UD_VER=${NEW_VER}" > .env

# Verify
cat .env
# Output: UD_VER=2.18
```

**What is .env?** An environment file that all scripts read to determine which UD version to process.

### 4. Fetch Metadata from LINDAT/CLARIN

```bash
# Fetch citation and description
./00_fetch_ud_clarin-dspace_metadata.py -o

# This creates:
# - etc/citation-2.18
# - etc/description-2.18
```

**Before running**, update the script to add the new version's handle ID:

1. Open `00_fetch_ud_clarin-dspace_metadata.py`
2. Find the `url_postfixes` dictionary
3. Add an entry for the new version:

   ```python
   "2.18": "11234/1-XXXX",  # Check UD website for correct handle
   ```

**Where to find the handle?** Visit the [UD release page](https://universaldependencies.org/) and check the LINDAT citation link.

### 5. Fetch Language Codes and Flags

```bash
# Fetch language metadata
./00_fetch_ud_codes_and_flags.sh -o

# This creates:
# - etc/codes_and_flags-2.18.yaml
# - etc/codes_and_flags-latest.yaml (updated symlink)
```

**Before running**, update the script with the docs-automation commit hash:

1. Open `00_fetch_ud_codes_and_flags.sh`
2. Find the `VER_MAPPING` associative array
3. Add an entry:

   ```bash
   VER_MAPPING["2.18"]=""
   ```

**How to find the hash?** Check the [UD docs-automation releases](https://github.com/UniversalDependencies/docs/releases) for the commit tagged with the version.

### 6. Discover UD Repositories

```bash
# Generate list of all UD repositories
./01_fetch_ud_repos.sh

# This creates:
# - .UD_submodules_add.commands (list of git submodule add commands)
```

**What does this do?** Queries the GitHub API for all repositories in the UniversalDependencies organization and generates commands to add them as submodules.
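All of the scripts above key off the `UD_VER` value written to `.env` in step 3. As a minimal stdlib sketch of that lookup (the helper name `read_ud_ver` is hypothetical; the real scripts use python-dotenv for this):

```python
import os
from pathlib import Path

def read_ud_ver(env_path: str = ".env") -> str:
    """Resolve UD_VER from a .env file, falling back to the environment.

    Hypothetical helper illustrating what the tools/ scripts effectively do;
    the real scripts rely on python-dotenv.
    """
    path = Path(env_path)
    if path.is_file():
        for line in path.read_text().splitlines():
            line = line.strip()
            if line.startswith("UD_VER="):
                return line.split("=", 1)[1]
    # Fall back to a regular environment variable
    return os.environ.get("UD_VER", "")

Path("/tmp/ud-demo.env").write_text("UD_VER=2.18\n")
print(read_ud_ver("/tmp/ud-demo.env"))  # -> 2.18
```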
### 7. Fetch UD Repositories as Submodules

```bash
cd UD_repos

# Initialize git repository (if first time)
git init

# Add all UD repositories as submodules
bash ../.UD_submodules_add.commands

# Checkout the new release tag in all submodules
git submodule foreach "git fetch --tags && git checkout r${NEW_VER} && touch .tag-r${NEW_VER}"

# Create branch and commit
git checkout -b ${NEW_VER}
git add -A
git commit -m "Add UD ${NEW_VER} repositories"
cd ..
```

**Expected time:** 30-60 minutes depending on network speed.

**What are .tag-r{VER} files?** Marker files that `02_generate_metadata.py` checks to ensure a repository has the correct version tag.

**Troubleshooting:** If some repositories don't have the tag:

```bash
git submodule foreach "git fetch --tags && (git checkout r${NEW_VER} || git checkout main) && touch .tag-r${NEW_VER}"
```

### 8. Extract Metadata from Treebanks

```bash
# Generate metadata from all treebank directories
./02_generate_metadata.py -o

# This creates:
# - metadata-2.18.json (contains info for all treebanks)

# Verify the output
ls -lh metadata-${NEW_VER}.json
# Should be ~200-300 KB

# Quick check: count treebanks
python -c "import json; print(len(json.load(open('metadata-${NEW_VER}.json'))))"
# Should be 339+ (number increases with new treebanks)
```

**What does this script do?**

- Reads README files from each treebank
- Extracts summaries, licenses, genres
- Collects statistics from stats.xml
- Identifies available splits (train/dev/test)
- Checks `blocked_treebanks.yaml` for license restrictions
- Adds a "blocked" property to the metadata

**Expected time:** 5-10 minutes

### 9. Generate Dataset Card (README)

```bash
# Generate HuggingFace dataset card
./03_generate_README.py -o

# This creates:
# - README-2.18 (dataset card for HuggingFace)

# Verify file was created
ls -lh README-${NEW_VER}
```

**What does this do?** Renders `templates/README.tmpl` with metadata, citation, and description to create the HuggingFace dataset card.
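At its core, the README parsing in step 8 is pulling `Key: value` lines out of each treebank's machine-readable README section. A toy stdlib sketch under that assumption (`extract_readme_fields` is a hypothetical name; the real script also reads stats.xml and the blocked list):

```python
import re

def extract_readme_fields(readme_text: str) -> dict[str, str]:
    """Pull `Key: value` metadata lines out of a UD treebank README.

    Toy sketch of part of what 02_generate_metadata.py does; illustrative only.
    """
    fields = {}
    for line in readme_text.splitlines():
        # Match lines like "License: CC BY-SA 4.0" or "Includes text: yes"
        m = re.match(r"^([A-Za-z][A-Za-z ]*):\s*(.+)$", line)
        if m:
            fields[m.group(1).strip()] = m.group(2).strip()
    return fields

sample = """\
License: CC BY-SA 4.0
Genre: news nonfiction
Includes text: yes
"""
meta = extract_readme_fields(sample)
print(meta["License"])  # -> CC BY-SA 4.0
```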
### 10. Review Blocked Treebanks

Before generating Parquet files, review the blocked treebanks:

```bash
# Check blocked treebanks list
cat blocked_treebanks.yaml

# Example entry:
# pt_cintil:
#   reason: "Restrictive license prohibits redistribution in derived formats"
#   license: "CC BY-NC-SA 4.0"
```

**Why block treebanks?** Some treebanks have licenses (e.g., CC BY-NC-SA) that prohibit redistribution in modified formats like Parquet.

**See also:** [tools/BLOCKED_TREEBANKS.md](tools/BLOCKED_TREEBANKS.md)

### 11. Generate Parquet Files

```bash
# Test with 3 treebanks first
uv run ./04_generate_parquet.py --test

# If successful, generate all treebanks (takes 2-4 hours)
uv run ./04_generate_parquet.py

# This creates:
# - ../parquet/{treebank_name}/{split}.parquet for all treebanks

# Verify output
du -sh ../parquet/
# Should be ~50-80 GB total
```

**What does this do?** A wrapper script that calls the `ud-hf-parquet-tools` library to convert CoNLL-U files to Parquet format.

**Options:**

- `--test`: Generate only 3 treebanks (quick test)
- `--overwrite`: Regenerate existing files
- `--blocked-treebanks`: Path to YAML file with blocked treebanks

**Expected time:** 2-4 hours for all treebanks

### 12. Validate Parquet Files

```bash
# Test validation on 3 treebanks
uv run ./05_validate_parquet.py --local --test

# Full validation (optional, takes ~30-60 minutes)
uv run ./05_validate_parquet.py --local --mode text -vv > /tmp/parquet-check.log

# Check for errors (excluding metadata comments)
grep -E " [+-]" /tmp/parquet-check.log | grep -vE " [+-]#"
```

**What does this do?** Compares the Parquet output to the original CoNLL-U to verify 100% data fidelity.

**Expected output:** No differences except in comment metadata (which may vary slightly).
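The blocked-treebank handling in steps 10-11 amounts to filtering the treebank list against `blocked_treebanks.yaml`. A stdlib-only sketch, with the parsed YAML inlined as a dict (the shape `yaml.safe_load` would return for the example entry above; `filter_blocked` is a hypothetical helper):

```python
# Parsed form of a blocked_treebanks.yaml entry (illustrative).
blocked = {
    "pt_cintil": {
        "reason": "Restrictive license prohibits redistribution in derived formats",
        "license": "CC BY-NC-SA 4.0",
    },
}

def filter_blocked(treebanks: list[str], blocked: dict) -> list[str]:
    """Drop treebanks listed in blocked_treebanks.yaml (hypothetical helper)."""
    return [tb for tb in treebanks if tb not in blocked]

print(filter_blocked(["en_ewt", "pt_cintil", "cs_pdt"], blocked))
# -> ['en_ewt', 'cs_pdt']
```

In the real pipeline the YAML file is passed via `--blocked-treebanks` and loaded by the scripts; this only shows the filtering logic.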
### 13. Copy Files to Repository Root

```bash
cd ..  # back to repository root

# Copy generated files
cp tools/README-${NEW_VER} README.md
cp tools/metadata-${NEW_VER}.json metadata.json

# Verify files are in place
ls -lh README.md metadata.json parquet/
```

**Why copy to root?** HuggingFace Hub expects these files at the repository root for the dataset to work.

### 14. Test Dataset Loading

Test that the dataset loads correctly:

```bash
# Test loading from Parquet
python -c "
from datasets import load_dataset

# Test a small treebank
ds = load_dataset('parquet', data_files='parquet/en_pronouns/test.parquet')
print(f'Loaded {len(ds[\"train\"])} examples')
print(f'Features: {list(ds[\"train\"].features.keys())}')
print(f'MWT field present: {\"mwt\" in ds[\"train\"].features}')
"
```

**Expected output:**

```
Loaded X examples
Features: ['sent_id', 'text', 'comments', 'tokens', 'lemmas', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc', 'mwt', 'empty_nodes']
MWT field present: True
```

### 15. Commit Changes to Git

```bash
# Add generated files
git add README.md metadata.json parquet/
git add tools/metadata-${NEW_VER}.json
git add tools/README-${NEW_VER}
git add tools/etc/citation-${NEW_VER}
git add tools/etc/description-${NEW_VER}
git add tools/etc/codes_and_flags-${NEW_VER}.yaml
git add tools/.env

# Commit with descriptive message
git commit -m "Add UD ${NEW_VER} data with Parquet format

- Generated from Universal Dependencies ${NEW_VER} release
- 339+ treebanks across 186+ languages
- Parquet format for efficient loading (datasets >=4.0.0)
- Blocked treebanks excluded per license restrictions
- Helper functions available in ud-hf-parquet-tools library

Generated files:
- README.md (dataset card)
- metadata.json (treebank metadata)
- parquet/ directory with all treebank splits"

# Tag the commit
git tag -a ud${NEW_VER} -m "Universal Dependencies ${NEW_VER} release"

# Push branch and tags
git push origin ${NEW_VER}
git push origin --tags
```
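Before pushing, it can be worth cross-checking that every non-blocked treebank in `metadata.json` actually has a directory under `parquet/`. A hypothetical sanity check, assuming `metadata.json` maps treebank names to dicts with a boolean `blocked` field (the real schema may differ):

```python
import json
from pathlib import Path

def missing_parquet_dirs(metadata_path: str, parquet_root: str) -> list[str]:
    """List non-blocked treebanks from metadata.json lacking a parquet/ dir.

    Hypothetical pre-upload check; the assumed metadata.json schema
    (name -> {"blocked": bool, ...}) may differ from the real one.
    """
    meta = json.loads(Path(metadata_path).read_text())
    root = Path(parquet_root)
    return [name for name, info in meta.items()
            if not info.get("blocked") and not (root / name).is_dir()]
```

Run it from the repository root (e.g., `missing_parquet_dirs("metadata.json", "parquet")`); an empty list means every expected treebank directory is present.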
### 16. Upload to HuggingFace Hub

**Option A: Using git-lfs (Recommended)**

If you've cloned the HuggingFace repository with git-lfs:

```bash
# Add HF Hub as remote (if not already)
git remote add hf https://huggingface.co/datasets/commul/universal_dependencies

# Push to HuggingFace
git push hf ${NEW_VER}
git push hf --tags
```

**Option B: Using huggingface-cli**

```bash
# Upload entire directory
huggingface-cli upload commul/universal_dependencies . --repo-type dataset --revision ${NEW_VER}
```

**Expected upload time:** 2-6 hours depending on network speed and HuggingFace server load.

**Tip:** Run uploads during off-peak hours for better performance.

### 17. Verify on HuggingFace Hub

Visit: https://huggingface.co/datasets/commul/universal_dependencies

**Checklist:**

1. ✅ Branch `2.18` exists in the "Branches" dropdown
2. ✅ Files are present:
   - `README.md` (dataset card)
   - `metadata.json`
   - `parquet/` directory with subdirectories
3. ✅ Dataset card displays correctly
4. ✅ Files section shows parquet files

**Test loading:**

```python
from datasets import load_dataset

# Load from new version
ds = load_dataset("commul/universal_dependencies", "en_ewt", revision="2.18")
print(ds)
```

**Expected output:**

```
DatasetDict({
    train: Dataset({
        features: ['sent_id', 'text', 'comments', 'tokens', 'lemmas', ...],
        num_rows: 12544
    })
    dev: Dataset({...})
    test: Dataset({...})
})
```

### 18. Update Main Branch (Optional)

If this is now the latest version:

```bash
git checkout main
git merge ${NEW_VER}
git push origin main
```

This makes the new version the default when users don't specify a revision.

## Troubleshooting

### Issue: Submodule checkout fails

**Problem:** Some repositories don't have the `r2.18` tag.

**Solution:**

```bash
cd tools/UD_repos
git submodule foreach 'git fetch --tags && (git checkout r2.18 || git checkout main) && touch .tag-r2.18'
```

This falls back to the `main` branch for repositories without the tag.
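Several of the parsing issues below come down to the CoNLL-U format itself: every token line must have exactly 10 tab-separated columns. A quick stdlib check for spotting malformed lines (illustrative only; the real validation is done by `05_validate_parquet.py` and ud-hf-parquet-tools):

```python
def check_conllu_lines(text: str) -> list[int]:
    """Return 1-based line numbers of token lines without exactly 10 columns.

    Comment lines (#) and blank sentence separators are skipped.
    """
    bad = []
    for i, line in enumerate(text.splitlines(), start=1):
        if not line or line.startswith("#"):
            continue
        if len(line.split("\t")) != 10:
            bad.append(i)
    return bad

sample = "# sent_id = 1\n1\tHello\thello\tINTJ\tUH\t_\t0\troot\t_\t_\n2\tbroken line\n"
print(check_conllu_lines(sample))  # -> [3]
```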
### Issue: Metadata extraction fails for a treebank

**Problem:** A treebank is malformed or missing expected files.

**Symptoms:**

```
ITEM DELETED - no summary: UD_Language-Treebank
ITEM DELETED - no files  : UD_Language-Treebank
ITEM DELETED - no license: UD_Language-Treebank
```

**Solution:**

1. Check the specific treebank in `tools/UD_repos/UD_{Language}-{Treebank}/`
2. Verify it has `.conllu` files and `stats.xml`
3. Check whether the README has the required metadata
4. If persistently broken, report it to the UD project
5. The treebank will be automatically excluded from the dataset

### Issue: Parquet generation fails for a treebank

**Problem:** CoNLL-U parsing error or schema mismatch.

**Solution:**

```bash
# Isolate the problem by generating one treebank at a time
uv run ./04_generate_parquet.py --treebanks "en_ewt"

# Check the error message for details
# Common issues:
# - Malformed CoNLL-U syntax
# - Encoding problems
# - Invalid characters in fields
```

**Report issues:** See [CONLLU_PARSING.md](https://github.com/bot-zen/ud-hf-parquet-tools/blob/main/CONLLU_PARSING.md) in ud-hf-parquet-tools for known parsing edge cases.

### Issue: HuggingFace upload is very slow

**Problem:** Large Parquet files + network latency.

**Solution:**

- Use a machine with a better network connection
- Upload during off-peak hours (e.g., nighttime UTC)
- Consider parallel uploads if using huggingface-cli

### Issue: Out of disk space

**Problem:** Parquet files take ~50-80 GB.

**Solution:**

- Ensure you have at least 100 GB of free space
- Generate Parquet files on a machine with a larger disk
- Clean up old UD versions: `rm -rf tools/UD_repos/` after uploading

### Issue: Script dependencies not found

**Problem:** ImportError or ModuleNotFoundError.
**Solution:**

```bash
# Install required packages
pip install ud-hf-parquet-tools pyyaml python-dotenv jinja2

# Or use uv to manage dependencies automatically
uv run --script ./script.py
```

## Checklist

Before marking the release as complete:

- [ ] `.env` file updated with the new version
- [ ] Metadata files generated (`citation-{VER}`, `description-{VER}`, `codes_and_flags-{VER}.yaml`)
- [ ] All UD repositories fetched and checked out to the correct tag
- [ ] `metadata-{VER}.json` generated with blocked treebank info
- [ ] `README-{VER}` generated
- [ ] Parquet files generated for all non-blocked treebanks
- [ ] Parquet files validated (spot check)
- [ ] Files copied to repository root (`README.md`, `metadata.json`, `parquet/`)
- [ ] Tested loading from Parquet files
- [ ] Committed to git with a descriptive message
- [ ] Tagged with `ud{VER}`
- [ ] Pushed to origin
- [ ] Uploaded to HuggingFace Hub
- [ ] Verified the dataset loads from HF Hub
- [ ] (Optional) Updated main branch if latest version

## Timeline Estimate

| Step | Time | Notes |
|------|------|-------|
| 1-5: Setup & metadata | 10-15 min | Manual edits required |
| 6-7: Fetch repositories | 30-60 min | Network-dependent |
| 8-9: Generate metadata/README | 5-10 min | Fast |
| 10-12: Generate & validate Parquet | 2-4 hours | CPU-intensive |
| 13-15: Commit to git | 10-15 min | Fast |
| 16: Upload to HF Hub | 2-6 hours | Network-dependent |
| 17-18: Verify & update | 10-20 min | Fast |
| **Total** | **~5-11 hours** | Can parallelize some steps |

**Recommendation:** Start the process in the morning. Long-running steps (repository fetch, Parquet generation, upload) can run unattended.
## Notes

- **No Python script loader:** the v2.0 architecture uses Parquet files only (no `universal_dependencies.py`)
- **Helper functions external:** CoNLL-U utilities are available in the `ud-hf-parquet-tools` library
- **Blocked treebanks:** some treebanks are excluded due to license restrictions (see `blocked_treebanks.yaml`)
- **Branch independence:** each UD version branch is self-contained
- **Version pinning:** users can load specific versions via `revision="2.18"`

## Reference Documentation

- **Quick reference:** [tools/README.md](tools/README.md) - script documentation and common operations
- **Blocked treebanks:** [tools/BLOCKED_TREEBANKS.md](tools/BLOCKED_TREEBANKS.md) - license restrictions
- **Migration guide:** [MIGRATION.md](MIGRATION.md) - v1.x to v2.0 migration
- **Parquet tools:** https://github.com/bot-zen/ud-hf-parquet-tools - external library for CoNLL-U processing
- **CoNLL-U parsing:** https://github.com/bot-zen/ud-hf-parquet-tools/blob/main/CONLLU_PARSING.md - parsing edge cases

## Support

For issues:

- **UD data issues** → [UD GitHub Issues](https://github.com/UniversalDependencies/docs/issues)
- **Tooling issues** → your repository issues or [ud-hf-parquet-tools issues](https://github.com/bot-zen/ud-hf-parquet-tools/issues)
- **HuggingFace Hub issues** → [HuggingFace Community Forums](https://discuss.huggingface.co/)