# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview
UDD-1 (Universal Dependency Dataset for Vietnamese) is a Vietnamese Universal Dependencies treebank with 40,000 sentences from 5 domains. The repo contains both the dataset files (CoNLL-U format) and the tooling pipeline for creating/validating them.
## Domain Breakdown
| Category | Source Dataset | Sentences | Sent ID Prefix |
|---|---|---|---|
| Wikipedia | undertheseanlp/UVW-2026 | 8,000 | `uvw-` |
| News | undertheseanlp/UVN-1 | 8,000 | `uvn-` |
| Legal | undertheseanlp/UTS_VLC | 8,000 | `vlc-` |
| Fiction | undertheseanlp/UVB-v0.1 | 8,000 | `uvb-f-` |
| Non-fiction | undertheseanlp/UVB-v0.1 | 8,000 | `uvb-n-` |
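The domain of any sentence can be recovered from its `sent_id` prefix. A minimal sketch of that lookup, using only the prefixes from the table above (the helper name `domain_of` is illustrative, not part of the repo):

```python
# Hypothetical helper: map a sent_id to its domain using the
# prefixes documented in the table above.
PREFIX_TO_DOMAIN = {
    "uvw-": "wikipedia",
    "uvn-": "news",
    "vlc-": "legal",
    "uvb-f-": "fiction",
    "uvb-n-": "non-fiction",
}

def domain_of(sent_id: str) -> str:
    # Check longer prefixes first so "uvb-f-" wins over any shorter match.
    for prefix in sorted(PREFIX_TO_DOMAIN, key=len, reverse=True):
        if sent_id.startswith(prefix):
            return PREFIX_TO_DOMAIN[prefix]
    raise ValueError(f"unknown sent_id prefix: {sent_id}")
```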
## Repository Structure

- Root `.conllu` files: The published dataset splits (`vi_udd-ud-{train,dev,test}.conllu`)
- `data/`: Parquet files for HuggingFace dataset hosting
- `src/`: Pipeline scripts for data fetching, conversion, validation, and upload
- `tools/udtools/`: Vendored copy of the Universal Dependencies tools package (validator + scorer)
## Key Pipeline Scripts

| Script | Purpose |
|---|---|
| `src/fetch_data.py` | Fetch 8,000 sentences from undertheseanlp/UTS_VLC (legal domain) → `sentences_vlc.txt` |
| `src/fetch_uvn_data.py` | Fetch 8,000 sentences from undertheseanlp/UVN-1 (news domain) → `sentences_uvn.txt` |
| `src/fetch_uvw_data.py` | Fetch 8,000 sentences from undertheseanlp/UVW-2026 (Wikipedia, quality_score >= 5) → `sentences_uvw.txt` |
| `src/fetch_uvb_data.py` | Fetch 8,000 fiction + 8,000 non-fiction sentences from undertheseanlp/UVB-v0.1 → `sentences_uvb.txt` |
| `src/build_dataset.py` | Combine all sentence files, assign sent_id prefixes, create stratified train/dev/test splits → `sentences_{train,dev,test}.txt` |
| `src/convert_to_ud.py` | Convert raw sentences to UD format using the underthesea NLP toolkit (dependency parsing + POS tagging). Outputs JSONL and CoNLL-U |
| `src/statistics.py` | Compute dataset statistics from CoNLL-U files |
| `src/upload_to_hf.py` | Upload dataset splits to HuggingFace Hub with a domain field (requires HF_TOKEN env var) |
| `src/run_conversion.sh` | Wrapper that runs conversion with GPU monitoring and timestamped results |
| `src/run_on_runpod.py` | Manage RunPod GPU instances for conversion (requires RUNPOD_API_KEY) |
| `src/fetch_ws_sentences.py` | Fetch 100K sentences (20K × 5 domains) for the word segmentation dataset → `ws_sentences_*.txt` |
| `src/build_ws_dataset.py` | Convert sentences to BIO format via word_tokenize + regex_tokenize → `ws_{train,dev,test}.txt` |
| `src/fix_ws_errors.py` | Fix known WS errors (cross-boundary merges, always-split compounds) in BIO files → regenerate CoNLL-U |
| `src/check_ws_errors.py` | Rule-based WS error checker (7 rules: inconsistency, dictionary, long tokens, punctuation, numbers, anomalies) → WS_CHECK_REPORT.md |
## Pipeline Commands

1. Fetch sentences from all sources:

   ```bash
   python src/fetch_data.py      # Legal → sentences_vlc.txt
   python src/fetch_uvn_data.py  # News → sentences_uvn.txt
   python src/fetch_uvw_data.py  # Wikipedia → sentences_uvw.txt
   python src/fetch_uvb_data.py  # Books → sentences_uvb.txt
   ```

2. Build the combined dataset with splits:

   ```bash
   python src/build_dataset.py   # → sentences_{train,dev,test}.txt
   ```

3. Run UD conversion (GPU-optimized):

   ```bash
   python src/convert_to_ud.py -i sentences_train.txt -o output/ -p train -b 64
   python src/convert_to_ud.py -i sentences_dev.txt -o output/ -p dev -b 64
   python src/convert_to_ud.py -i sentences_test.txt -o output/ -p test -b 64
   # Or use the shell wrapper:
   ./src/run_conversion.sh <input_file> [batch_size]
   ```

4. Validate CoNLL-U files:

   ```bash
   cd tools/udtools
   pip install -e .
   python validate.py --lang vi vi_udd-ud-train.conllu
   ```

5. Run udtools tests:

   ```bash
   cd tools/udtools
   python -m pytest tests/
   ```

6. Compute dataset statistics:

   ```bash
   python src/statistics.py
   ```

7. Upload to HuggingFace:

   ```bash
   export HF_TOKEN=<token>
   python src/upload_to_hf.py
   ```
## Word Segmentation Dataset Pipeline

Separate pipeline producing a 100K-sentence BIO-tagged dataset (VLSP 2013 compatible) for CRF word segmentation training in tree-1.

```bash
# Step 1: Fetch 20K sentences per domain (100K total)
uv run src/fetch_ws_sentences.py
# → ws_sentences_vlc.txt, ws_sentences_uvn.txt, ws_sentences_uvw.txt,
#   ws_sentences_uvb_f.txt, ws_sentences_uvb_n.txt

# Step 2: Convert to BIO format with stratified 80/10/10 split
uv run src/build_ws_dataset.py
# → udd-ws-v1.1-train.txt (~80K), udd-ws-v1.1-dev.txt (~10K), udd-ws-v1.1-test.txt (~10K)

# Step 3: Fix known segmentation errors (cross-boundary merges, always-split compounds)
uv run src/fix_ws_errors.py
# → fixes BIO files in-place, regenerates CoNLL-U, writes WS_FIX_REPORT.md

# Step 4: Run diagnostic checker (7 rules, no fixes applied)
uv run src/check_ws_errors.py            # All rules (needs underthesea for dict rules)
uv run src/check_ws_errors.py --no-dict  # Skip dictionary rules (2, 3)
# → WS_CHECK_REPORT.md
```
The output BIO format (`syllable\tB-W` / `syllable\tI-W`, blank line between sentences) is compatible with tree-1's `load_data_vlsp2013()`, which maps B-W→B and I-W→I.
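A minimal sketch of reading that BIO format, including the B-W→B / I-W→I tag mapping (the function name `load_bio` and return shape are illustrative, not the actual tree-1 API):

```python
# Parse the tab-separated BIO format described above: one syllable per
# line, blank line between sentences, tags remapped B-W→B and I-W→I.
def load_bio(text: str) -> list[list[tuple[str, str]]]:
    tag_map = {"B-W": "B", "I-W": "I"}
    sentences, current = [], []
    for line in text.splitlines():
        if not line.strip():          # blank line closes a sentence
            if current:
                sentences.append(current)
                current = []
            continue
        syllable, tag = line.split("\t")
        current.append((syllable, tag_map[tag]))
    if current:                       # flush a trailing sentence
        sentences.append(current)
    return sentences
```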
## Architecture Notes

### Conversion Pipeline (convert_to_ud.py)

The core conversion flow: raw Vietnamese text → `underthesea.dependency_parse()` + `underthesea.pos_tag()` → Vietnamese POS mapped to Universal POS via `UPOS_MAP` → syntax error post-processing via `fix_syntax_errors()` → CoNLL-U output.
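The UPOS mapping step can be sketched as a plain dictionary lookup. The entries below are assumptions based on a common VLSP-style Vietnamese tagset, not the actual contents of `UPOS_MAP` in `src/convert_to_ud.py`:

```python
# Illustrative entries only; the real mapping lives in src/convert_to_ud.py.
UPOS_MAP = {
    "N": "NOUN",
    "V": "VERB",
    "A": "ADJ",
    "P": "PRON",
    "R": "ADV",
    "M": "NUM",
    "E": "ADP",
    "CH": "PUNCT",
}

def to_upos(xpos: str) -> str:
    # Fall back to the UD catch-all tag X for anything unmapped.
    return UPOS_MAP.get(xpos, "X")
```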
`fix_syntax_errors()` is a critical multi-pass function that corrects UD validation issues:

- Redirects children of leaf-only relations (aux, case, punct, det, etc.)
- Maps invalid deprels via `DEPREL_MAP`
- Enforces UPOS/deprel consistency (e.g., `det` must be DET/PRON, `advmod` must be ADV)
- Handles Vietnamese-specific auxiliary verbs (`AUX_WORDS`) and the copula (`là`)
- Fixes directional constraints (flat/conj/appos must be left-to-right)
- Resolves multiple subjects/objects per predicate
- Fixes non-projective punctuation attachment
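The first of those passes (re-attaching children of leaf-only relations) can be sketched as follows. This is assumed logic illustrating the idea, not the repo's actual implementation:

```python
# Tokens whose deprel is leaf-only (aux, case, punct, det) must not have
# dependents; any child they do have is re-attached to the grandparent.
LEAF_ONLY = {"aux", "case", "punct", "det"}

def redirect_leaf_children(heads: list[int], deprels: list[str]) -> list[int]:
    """heads[i] is the 1-based head of token i+1 (0 = root)."""
    fixed = list(heads)
    for i, head in enumerate(heads):
        if head > 0 and deprels[head - 1] in LEAF_ONLY:
            # Re-attach to the leaf token's own head (the grandparent).
            fixed[i] = heads[head - 1]
    return fixed
```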
### udtools (Vendored UD Validator)

The `tools/udtools/` directory is a vendored copy of the official UD tools. The validator class hierarchy is `Validator → Level6 → Level5 → ... → Level1`; each level adds progressively stricter UD compliance checks. The `Validator` class in `validator.py` is the main entry point.
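The layered-inheritance pattern looks roughly like the following sketch, collapsed to two levels for brevity (the method name `run_checks` and the check labels are illustrative assumptions, not the real udtools API):

```python
# Each level inherits the checks below it and appends stricter ones.
class Level1:
    def run_checks(self) -> list[str]:
        return ["basic CoNLL-U format"]

class Level2(Level1):
    def run_checks(self) -> list[str]:
        return super().run_checks() + ["UD tag inventory"]

class Validator(Level2):
    """Entry point, mirroring Validator in validator.py."""
```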
## Data Format

- CoNLL-U: Standard 10-column UD format (ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC)
- JSONL: HuggingFace-compatible format with fields: `sent_id`, `text`, `tokens`, `lemmas`, `upos`, `xpos`, `feats`, `head`, `deprel`, `deps`, `misc`, `domain`
- Sentence ID prefixes: `vlc-` = legal, `uvn-` = news, `uvw-` = wikipedia, `uvb-f-` = fiction, `uvb-n-` = non-fiction
- Split ratios: Train (91.4%) / Dev (4.3%) / Test (4.3%), stratified by domain
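A minimal sketch of reading one CoNLL-U sentence into the ten standard columns listed above (the helper name and return shape are illustrative):

```python
# The 10 standard CoNLL-U columns, in order.
COLUMNS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS",
           "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

def parse_conllu_sentence(block: str) -> list[dict[str, str]]:
    tokens = []
    for line in block.splitlines():
        if not line or line.startswith("#"):  # skip comments like sent_id
            continue
        fields = line.split("\t")
        tokens.append(dict(zip(COLUMNS, fields)))
    return tokens
```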
## Dependencies

- `underthesea`: Vietnamese NLP toolkit (tokenization, POS tagging, dependency parsing)
- `torch`: Required by underthesea models (GPU-accelerated)
- `datasets`, `huggingface_hub`: For HuggingFace dataset operations
- udtools dependencies: `udapi>=0.5.0`, `regex>=2020.09.27` (see `tools/udtools/pyproject.toml`)