Commit 8d2d3e2 (parent: 7902dff), committed by Alvin

Fix LFS tracking + add phylogenetic relationship metadata

- Fix .gitattributes: add *.tsv and *.csv LFS rules (missing since the
  Mar 13 re-init). All TSV/CSV files are now properly LFS-tracked.
- Add phylo_pairs.tsv: 386K language pair classifications from
  Glottolog CLDF v5.x (99.4% ISO coverage, 45K validation tests pass)
- Add 4 phylo enrichment scripts + PRD
- Update DATABASE_REFERENCE.md with phylo_pairs documentation

Files changed:
- .gitattributes (+1, -2)
- data/training/metadata/phylo_pairs.tsv (+3, -0)
- docs/DATABASE_REFERENCE.md (+38, -2)
- docs/prd/PRD_PHYLO_ENRICHMENT.md (+179, -0)
- scripts/build_glottolog_tree.py (+333, -0)
- scripts/build_phylo_pairs.py (+309, -0)
- scripts/ingest_glottolog.py (+91, -0)
- scripts/validate_phylo_pairs.py (+287, -0)
.gitattributes  CHANGED

@@ -58,8 +58,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
-
-data/training/cognate_pairs/cognate_pairs_similarity.tsv filter=lfs diff=lfs merge=lfs -text
+# Dataset files
 *.tsv filter=lfs diff=lfs merge=lfs -text
 *.csv filter=lfs diff=lfs merge=lfs -text
 *.pdf filter=lfs diff=lfs merge=lfs -text
data/training/metadata/phylo_pairs.tsv  ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82d0f3cbe45ec023a5d7a3c75e27bf587d5252fcfdfdc4cccb25d9d193ad077d
+size 24669722
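The file above is a Git LFS pointer, not the data itself: three plain-text `key value` lines per the git-lfs spec. A minimal sketch (not repo code) for reading such a pointer, e.g. to check the expected size before downloading:

```python
def parse_lfs_pointer(text: str) -> dict[str, str]:
    """Parse a git-lfs pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields
```

For the pointer committed here, `parse_lfs_pointer` would yield `version`, `oid`, and `size` keys, with `size` giving the byte count of the real TSV.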
docs/DATABASE_REFERENCE.md  CHANGED

@@ -44,7 +44,7 @@ This document is the single source of truth for understanding, modifying, and ex
 - `scripts/` — All extraction and processing scripts
 - `cognate_pipeline/` — Python package for phonetic processing
 - `docs/` — PRDs, audit reports, this reference doc
-- `data/training/metadata/` — `languages.tsv`, `source_stats.tsv` (small summary files)
+- `data/training/metadata/` — `languages.tsv`, `source_stats.tsv`, `phylo_pairs.tsv` (small summary/lookup files)
 - `data/training/validation/` — Validation sets (via Git LFS)
 - `data/training/lexicons/*.tsv` — Ancient language TSVs (force-added despite gitignore)

@@ -108,7 +108,7 @@ python scripts/assemble_lexicons.py # Generate metadata
 ancient-scripts-datasets/
   data/training/
     lexicons/          # 1,136 TSV files (one per language) [GITIGNORED]
-    metadata/          # languages.tsv, source_stats.tsv,
+    metadata/          # languages.tsv, source_stats.tsv, phylo_pairs.tsv [TRACKED]
     cognate_pairs/     # inherited, similarity, borrowing pairs [GITIGNORED]
     validation/        # stratified ML training/test sets [GIT LFS]
     language_profiles/ # per-language markdown profiles

@@ -153,6 +153,42 @@ Lang_A Word_A IPA_A Lang_B Word_B IPA_B Concept_ID Relationship Score S
 
 **Adversarial audit status (2026-03-14):** All 3 output files PASS final audit. Zero cross-file contamination, zero self-pairs, zero isolate/constructed language leakage, all Source_Record_IDs traceable to source databases.
 
+### Phylogenetic Relationship Metadata
+
+**File:** `data/training/metadata/phylo_pairs.tsv` (386,101 unique language pairs)
+
+A lookup table mapping every unique `(Lang_A, Lang_B)` pair in the cognate dataset to its phylogenetic relationship, based on Glottolog CLDF v5.x (Hammarström et al.). Not stored inline in the 23M-row cognate files to avoid redundancy.
+
+**Schema (9 columns, tab-separated):**
+
+| Column | Type | Description |
+|--------|------|-------------|
+| `Lang_A` | str | ISO 639-3 code (alphabetically first) |
+| `Lang_B` | str | ISO 639-3 code (alphabetically second) |
+| `Phylo_Relation` | enum | `near_ancestral`, `close_sister`, `distant_sister`, `cross_family`, `unclassified` |
+| `Tree_Distance` | int | Edge count through MRCA (99 = unclassified/cross-family) |
+| `MRCA_Clade` | str | Glottocode of MRCA node |
+| `MRCA_Depth` | int | Depth of MRCA in tree (0 = root) |
+| `Ancestor_Lang` | str | For `near_ancestral`: ISO of the ancestor; `-` otherwise |
+| `Family_A` | str | Top-level Glottolog family of Lang_A |
+| `Family_B` | str | Top-level Glottolog family of Lang_B |
+
+**Distribution:**
+
+| Relation | Count | Percentage |
+|----------|-------|------------|
+| `distant_sister` | 249,392 | 64.6% |
+| `close_sister` | 87,078 | 22.6% |
+| `cross_family` | 45,267 | 11.7% |
+| `unclassified` | 4,302 | 1.1% |
+| `near_ancestral` | 62 | <0.1% |
+
+**Usage:** Join at query time using `pair_key = (min(a,b), max(a,b))`. The classification is orthogonal to the cognate data and can be updated independently when Glottolog releases new versions.
+
+**Scripts:** `scripts/ingest_glottolog.py` (download), `scripts/build_glottolog_tree.py` (parse), `scripts/build_phylo_pairs.py` (classify), `scripts/validate_phylo_pairs.py` (validate). See `docs/prd/PRD_PHYLO_ENRICHMENT.md` for the full specification.
+
+**Validation (2026-03-14):** 45,363/45,363 tests passed: 13 known-answer checks, 62 near-ancestral integrity checks, 45,267 cross-family integrity checks, 99.4% ISO coverage, 20/20 random audit.
+
 ---
 
 ## 2. TSV Schema & Format
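The query-time join on the canonical pair key can be sketched as follows. This is an illustration, not repo code: the helper names and the choice to index the whole row are assumptions.

```python
import csv

def load_phylo_pairs(path: str) -> dict:
    """Index phylo_pairs.tsv by the canonically ordered (Lang_A, Lang_B) key."""
    table = {}
    with open(path, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            # Canonical key: alphabetically smaller code first
            key = (min(row["Lang_A"], row["Lang_B"]),
                   max(row["Lang_A"], row["Lang_B"]))
            table[key] = row
    return table

def phylo_relation(table: dict, lang_a: str, lang_b: str) -> str:
    """O(1) dict lookup; pairs absent from the table count as unclassified."""
    row = table.get((min(lang_a, lang_b), max(lang_a, lang_b)))
    return row["Phylo_Relation"] if row else "unclassified"
```

Because the key is canonicalized on both sides, lookups work regardless of the order in which a cognate row lists the two languages.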
docs/prd/PRD_PHYLO_ENRICHMENT.md  ADDED

@@ -0,0 +1,179 @@
# PRD: Phylogenetic Relationship Metadata for Cognate Pairs

**Status**: In Progress
**Date**: 2026-03-14
**Author**: Alvin (assisted by Claude)

---

## Problem

The cognate pair dataset (23.4M pairs) currently has NO metadata about the phylogenetic relationship between language pairs — no distinction between mother-daughter (Latin→Spanish), close sisters (Spanish~Italian), or distant sisters (Spanish~Hindi). Every pair should indicate its degree of cognacy.

## Solution

A post-processing enrichment pipeline that cross-references cognate pairs against an authoritative phylogenetic tree (Glottolog CLDF v5.x) to classify each unique language pair.

---

## Data Source

**Primary**: Glottolog CLDF v5.x (Hammarström, Forkel, Haspelmath & Bank)
- Repository: `https://github.com/glottolog/glottolog-cldf`
- Archive: Zenodo DOI: 10.5281/zenodo.15640174
- License: CC BY 4.0
- Key files: `cldf/languages.csv` (27,177 languoids with ISO mapping), `cldf/classification.nex` (NEXUS with Newick trees per family)
- 8,184 languoids have ISO 639-3 codes
- Trees are topological (no branch lengths)

**Download method**: `scripts/ingest_glottolog.py` — downloads the CLDF CSV and NEXUS files from GitHub raw content into `data/training/raw/glottolog_cldf/`. Follows the same pattern as `ingest_acd.py`.

**Supplementary** (Phase 2, not in scope): Phlorest Bayesian phylogenies for calibrated branch lengths (years of divergence).

---

## Output

**File**: `data/training/metadata/phylo_pairs.tsv`

A lookup table keyed on canonically-ordered `(Lang_A, Lang_B)` pairs. NOT inline columns in the 23M-row cognate pair files — that would add massive redundancy, since the same language pair always has the same phylo classification.

### Schema (9 columns)

| Column | Type | Description |
|--------|------|-------------|
| `Lang_A` | str | ISO 639-3 code (alphabetically first) |
| `Lang_B` | str | ISO 639-3 code (alphabetically second) |
| `Phylo_Relation` | enum | `near_ancestral`, `close_sister`, `distant_sister`, `cross_family`, `unclassified` |
| `Tree_Distance` | int | Edge count through MRCA (0 = same leaf group, 99 = unclassified/cross-family) |
| `MRCA_Clade` | str | Glottocode or name of MRCA node (e.g., `roma1334`, `germ1287`) |
| `MRCA_Depth` | int | Depth of MRCA in tree (0 = root, higher = more specific) |
| `Ancestor_Lang` | str | For `near_ancestral`: ISO of the ancestor; `-` otherwise |
| `Family_A` | str | Top-level Glottolog family of Lang_A |
| `Family_B` | str | Top-level Glottolog family of Lang_B |

### Classification Taxonomy

| Relation | Definition | Example |
|----------|-----------|---------|
| `near_ancestral` | One language is an attested ancestor of the other's clade (from the curated NEAR_ANCESTOR_MAP) | Latin↔Spanish, Old English↔English, Sanskrit↔Hindi |
| `close_sister` | MRCA depth ≥ 3 (share a specific sub-branch) | Spanish↔Italian (both under Romance), Swedish↔Danish (both under North Germanic) |
| `distant_sister` | MRCA depth 1 or 2 (share only the family or a major branch) | English↔Hindi (both IE, but Germanic vs Indo-Iranian) |
| `cross_family` | Different top-level families | English↔Japanese |
| `unclassified` | One or both languages not in the Glottolog tree | Undeciphered/isolate languages without a Glottocode |

**Depth thresholds**: The boundary between close and distant is at MRCA depth 3+. This means:
- Depth 0: root (should not occur for same-family pairs)
- Depth 1: top-level family (e.g., `indo1319` = Indo-European) → `distant_sister`
- Depth 2: major branch (e.g., `germ1287` = Germanic) → `distant_sister`
- Depth 3+: sub-branch (e.g., `west2793` = West Germanic) → `close_sister`

### Near-Ancestral Detection

A curated `NEAR_ANCESTOR_MAP` lists ~25 attested ancient/medieval languages and the Glottolog clades they are historically ancestral to:

```python
NEAR_ANCESTOR_MAP = {
    "lat": ["roma1334"],             # Latin → Romance
    "grc": ["mode1248", "medi1251"], # Ancient Greek → Modern Greek clades
    "san": ["indo1321"],             # Sanskrit → Indic
    "ang": ["angl1265"],             # Old English → Anglian/English
    "enm": ["angl1265"],             # Middle English → English
    "fro": ["oilf1242"],             # Old French → Oïl French
    "osp": ["cast1243"],             # Old Spanish → Castilian
    "non": ["nort3160"],             # Old Norse → North Germanic modern
    "goh": ["high1289"],             # Old High German → High German
    "dum": ["mode1258"],             # Middle Dutch → Modern Dutch
    "sga": ["goid1240"],             # Old Irish → Goidelic modern
    "mga": ["goid1240"],             # Middle Irish → Goidelic modern
    "wlm": ["bryt1239"],             # Middle Welsh → Brythonic modern
    "chu": ["sout3147"],             # Old Church Slavonic → South Slavic
    "orv": ["east1426"],             # Old East Slavic → East Slavic modern
    "och": ["sini1245"],             # Old Chinese → Sinitic modern
    "ota": ["oghu1243"],             # Ottoman Turkish → Oghuz modern
    "okm": ["kore1284"],             # Middle Korean → Korean modern
}
```

Logic:
1. If either language is in `NEAR_ANCESTOR_MAP`,
2. AND the other language's ancestry path in Glottolog passes through one of the listed descendant clades,
3. AND the other language is NOT itself an ancient/medieval language (to avoid classifying Latin↔Oscan as near_ancestral),
4. THEN classify as `near_ancestral` with `Ancestor_Lang` = the ancient language's ISO code.

**Gothic (`got`) edge case**: Gothic is ancient but has NO modern descendants (East Germanic is extinct). Gothic↔Swedish should be `distant_sister`, not `near_ancestral`. The map handles this correctly by only listing clades with living descendants.

---

## Scripts

### Script 1: `scripts/ingest_glottolog.py`
Downloads Glottolog CLDF data from GitHub.

### Script 2: `scripts/build_glottolog_tree.py`
Parses the Glottolog NEXUS file into a usable Python tree structure.

### Script 3: `scripts/build_phylo_pairs.py`
The main enrichment script that generates the lookup table.

### Script 4: `scripts/validate_phylo_pairs.py`
Validation script with known-answer checks.

---

## Execution Order

1. Write PRD → push to `docs/prd/PRD_PHYLO_ENRICHMENT.md`
2. Write `scripts/ingest_glottolog.py` → download Glottolog CLDF
3. Write `scripts/build_glottolog_tree.py` → parse NEXUS, build tree index
4. Adversarial audit: verify the tree covers ≥95% of ISO codes in the cognate pairs
5. Write `scripts/build_phylo_pairs.py` → generate the lookup table
6. Adversarial audit: verify 20 random pairs trace back to the Glottolog tree
7. Write `scripts/validate_phylo_pairs.py` → automated known-answer tests
8. Update `docs/DATABASE_REFERENCE.md` with phylo_pairs documentation
9. Commit + push to GitHub + HuggingFace

---

## Critical Design Decisions

### Why a separate lookup table (not inline columns)?
- The 22.9M inherited pairs reference ~385K unique language pairs. Adding 7 columns to 22.9M rows = 160M+ redundant cells.
- Downstream consumers join at query time: `pair_key = (min(a,b), max(a,b))`; lookup is O(1) with a dict.
- The phylo classification is orthogonal to the cognate data — it can be updated independently when Glottolog releases new versions.

### Why Glottolog (not the existing phylo_tree.json)?
- `phylo_tree.json` has 755 Austronesian languages in ONE flat list — zero sub-classification for 90% of the dataset.
- Glottolog has deep sub-classification for ALL families, including Austronesian.
- Glottolog is the authoritative academic reference (Hammarström et al.).

### Why a curated near-ancestor map (not purely algorithmic)?
- Glottolog classifies ALL attested languages as leaf nodes (siblings), never as parent nodes.
- Even Latin is a sibling of Romance under "Imperial Latin" in Glottolog — not a parent.
- Algorithmic detection from tree topology alone would classify ALL pairs as sister-sister.
- The curated map (~25 entries) is linguistically defensible and small enough to verify exhaustively.

### Why not branch-length / time-calibrated distances?
- Phase 1 focuses on topological classification.
- Branch-length data requires Phlorest Bayesian phylogenies (separate download per family, inconsistent coverage).
- Branch lengths can be added in Phase 2 as a `Divergence_Years` column.

---

## Honest Limitations

1. **"Near-ancestral" is approximate**: Latin is not literally the ancestor of French — Vulgar Latin (unattested) is. We use "near-ancestral" to mean "the attested language is historically ancestral to the clade containing the other language."
2. **Topological distance ≠ temporal distance**: Two languages with tree_distance=4 in Austronesian may have diverged 1,000 years ago, while two with tree_distance=4 in Indo-European may have diverged 5,000 years ago.
3. **Glottolog is a single hypothesis**: Disputed affiliations are not represented. The tree reflects Glottolog's conservative consensus classification.
4. **The similarity file (465K pairs) may contain cross-family pairs** that are correctly labeled `cross_family` — these are algorithmic similarity matches, not genetic relationships.
5. **Proto-language codes (ine-pro, gem-pro, etc.) are NOT in Glottolog** — any cognate pairs involving proto-languages will be `unclassified`. (Currently zero such pairs exist in the dataset.)

---

## Verification

1. `python scripts/validate_phylo_pairs.py` — all known-answer checks PASS
2. Coverage: ≥95% of ISO codes in cognate pairs have a Glottolog classification
3. Distribution: `close_sister` should be the majority (most pairs are intra-family from ABVD/ACD)
4. Adversarial audit: 20 random pairs traced back to the Glottolog NEXUS tree
5. No `unclassified` for any language that has a Glottocode in `languages.tsv`
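The taxonomy and depth thresholds in the PRD above condense into one small function. This is a sketch against root-to-leaf ancestry paths, not the repo's implementation: the real `build_phylo_pairs.py` also applies the curated near-ancestor map and records `Tree_Distance` and `MRCA_Clade`.

```python
def classify_pair(path_a: list[str], path_b: list[str]) -> str:
    """Classify two languages from their Glottolog ancestry paths.

    Paths run root -> leaf, e.g. ["indo1319", "germ1287", "west2793", ...].
    Near-ancestral detection (curated map) is intentionally omitted here.
    """
    if not path_a or not path_b:
        return "unclassified"
    if path_a[0] != path_b[0]:
        # Different top-level families
        return "cross_family"
    # MRCA depth = length of the shared prefix of the two paths
    # (1 = only the family is shared, matching the PRD's depth numbering)
    mrca_depth = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        mrca_depth += 1
    return "close_sister" if mrca_depth >= 3 else "distant_sister"
```

With the PRD's examples: English and Hindi share only `indo1319` (depth 1, `distant_sister`), while Spanish and Italian share a path down through Romance (depth 3+, `close_sister`). The glottocode paths in the test below are illustrative, not taken from Glottolog.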
scripts/build_glottolog_tree.py  ADDED

@@ -0,0 +1,333 @@
| 1 |
+
#!/usr/bin/env python3
|
| 2 |
+
"""Parse Glottolog NEXUS classification into a JSON tree index.
|
| 3 |
+
|
| 4 |
+
Reads the Glottolog CLDF languages.csv and classification.nex files,
|
| 5 |
+
builds ancestry paths for every ISO-bearing languoid, and writes
|
| 6 |
+
a JSON index mapping ISO codes to their tree positions.
|
| 7 |
+
|
| 8 |
+
Usage:
|
| 9 |
+
python scripts/build_glottolog_tree.py
|
| 10 |
+
"""
|
| 11 |
+
|
| 12 |
+
from __future__ import annotations
|
| 13 |
+
|
| 14 |
+
import csv
|
| 15 |
+
import io
|
| 16 |
+
import json
|
| 17 |
+
import logging
|
| 18 |
+
import re
|
| 19 |
+
import sys
|
| 20 |
+
from pathlib import Path
|
| 21 |
+
|
| 22 |
+
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")
|
| 23 |
+
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding="utf-8")
|
| 24 |
+
|
| 25 |
+
ROOT = Path(__file__).resolve().parent.parent
|
| 26 |
+
logger = logging.getLogger(__name__)
|
| 27 |
+
|
| 28 |
+
GLOTTOLOG_DIR = ROOT / "data" / "training" / "raw" / "glottolog_cldf"
|
| 29 |
+
OUTPUT_FILE = GLOTTOLOG_DIR / "glottolog_tree.json"
|
| 30 |
+
|
| 31 |
+
|
| 32 |
+
def parse_newick(newick: str) -> dict:
|
| 33 |
+
"""Parse a Newick tree string into a parent->children dict.
|
| 34 |
+
|
| 35 |
+
Returns a dict mapping each node label to its list of child labels,
|
| 36 |
+
plus a special key '_root' with the root label.
|
| 37 |
+
|
| 38 |
+
The Glottolog NEXUS uses format: (child1:1,child2:1)parent:1
|
| 39 |
+
"""
|
| 40 |
+
# Remove trailing semicolon
|
| 41 |
+
newick = newick.strip().rstrip(";")
|
| 42 |
+
|
| 43 |
+
children_of: dict[str, list[str]] = {}
|
| 44 |
+
parent_of: dict[str, str] = {}
|
| 45 |
+
|
| 46 |
+
# Parse by walking the string character by character
|
| 47 |
+
stack: list[list[str]] = [] # stack of child-lists being built
|
| 48 |
+
i = 0
|
| 49 |
+
n = len(newick)
|
| 50 |
+
|
| 51 |
+
while i < n:
|
| 52 |
+
c = newick[i]
|
| 53 |
+
if c == "(":
|
| 54 |
+
stack.append([])
|
| 55 |
+
i += 1
|
| 56 |
+
elif c == ")":
|
| 57 |
+
# Read the label after the closing paren
|
| 58 |
+
i += 1
|
| 59 |
+
label, i = _read_label(newick, i)
|
| 60 |
+
child_list = stack.pop()
|
| 61 |
+
children_of[label] = child_list
|
| 62 |
+
for child in child_list:
|
| 63 |
+
parent_of[child] = label
|
| 64 |
+
# Add this node to parent's child list (if any)
|
| 65 |
+
if stack:
|
| 66 |
+
stack[-1].append(label)
|
| 67 |
+
else:
|
| 68 |
+
# This is the root
|
| 69 |
+
children_of["_root"] = label
|
| 70 |
+
elif c == ",":
|
| 71 |
+
i += 1
|
| 72 |
+
else:
|
| 73 |
+
# Leaf node
|
| 74 |
+
label, i = _read_label(newick, i)
|
| 75 |
+
if stack:
|
| 76 |
+
stack[-1].append(label)
|
| 77 |
+
else:
|
| 78 |
+
children_of["_root"] = label
|
| 79 |
+
|
| 80 |
+
return children_of
|
| 81 |
+
|
| 82 |
+
|
| 83 |
+
def _read_label(s: str, i: int) -> tuple[str, int]:
|
| 84 |
+
"""Read a node label (glottocode) and skip any :branchlength suffix."""
|
| 85 |
+
start = i
|
| 86 |
+
while i < len(s) and s[i] not in "(),;":
|
| 87 |
+
i += 1
|
| 88 |
+
raw = s[start:i]
|
| 89 |
+
# Strip branch length (e.g., "abkh1242:1" -> "abkh1242")
|
| 90 |
+
label = raw.split(":")[0]
|
| 91 |
+
return label, i
|
| 92 |
+
|
| 93 |
+
|
| 94 |
+
def build_ancestry_paths(children_of: dict) -> dict[str, list[str]]:
|
| 95 |
+
"""Build leaf-to-root ancestry paths from a parsed tree.
|
| 96 |
+
|
| 97 |
+
Returns dict mapping each leaf label to its full path [root, ..., leaf].
|
| 98 |
+
"""
|
| 99 |
+
root = children_of["_root"]
|
| 100 |
+
|
| 101 |
+
# Build parent lookup
|
| 102 |
+
parent_of: dict[str, str] = {}
|
| 103 |
+
for node, kids in children_of.items():
|
| 104 |
+
if node == "_root":
|
| 105 |
+
continue
|
| 106 |
+
if isinstance(kids, list):
|
| 107 |
+
for kid in kids:
|
| 108 |
+
parent_of[kid] = node
|
| 109 |
+
|
| 110 |
+
# Find all leaves (nodes not in children_of, or with no children)
|
| 111 |
+
all_nodes = set()
|
| 112 |
+
all_nodes.add(root)
|
| 113 |
+
for node, kids in children_of.items():
|
| 114 |
+
if node == "_root":
|
| 115 |
+
continue
|
| 116 |
+
all_nodes.add(node)
|
| 117 |
+
if isinstance(kids, list):
|
| 118 |
+
for kid in kids:
|
| 119 |
+
all_nodes.add(kid)
|
| 120 |
+
|
| 121 |
+
internal = {n for n in children_of if n != "_root" and isinstance(children_of.get(n), list)}
|
| 122 |
+
leaves = all_nodes - internal
|
| 123 |
+
|
| 124 |
+
# Build paths
|
| 125 |
+
paths: dict[str, list[str]] = {}
|
| 126 |
+
for leaf in leaves:
|
| 127 |
+
path = [leaf]
|
| 128 |
+
node = leaf
|
| 129 |
+
while node in parent_of:
|
| 130 |
+
node = parent_of[node]
|
| 131 |
+
path.append(node)
|
| 132 |
+
path.reverse() # root -> ... -> leaf
|
| 133 |
+
paths[leaf] = path
|
| 134 |
+
|
| 135 |
+
# Also store paths for internal nodes (needed for languoids that are families)
|
| 136 |
+
for node in internal:
|
| 137 |
+
path = [node]
|
| 138 |
+
n = node
|
| 139 |
+
while n in parent_of:
|
| 140 |
+
n = parent_of[n]
|
| 141 |
+
path.append(n)
|
| 142 |
+
path.reverse()
|
| 143 |
+
paths[node] = path
|
| 144 |
+
|
| 145 |
+
return paths
|
| 146 |
+
|
| 147 |
+
|
| 148 |
+
def load_iso_mapping() -> dict[str, tuple[str, str]]:
|
| 149 |
+
"""Load ISO 639-3 -> (glottocode, family_id) mapping from languages.csv.
|
| 150 |
+
|
| 151 |
+
Returns dict mapping ISO code to (glottocode, family_id).
|
| 152 |
+
"""
|
| 153 |
+
lang_file = GLOTTOLOG_DIR / "languages.csv"
|
| 154 |
+
mapping: dict[str, tuple[str, str]] = {}
|
| 155 |
+
with open(lang_file, "r", encoding="utf-8") as f:
|
| 156 |
+
reader = csv.DictReader(f)
|
| 157 |
+
for row in reader:
|
| 158 |
+
iso = row.get("ISO639P3code", "").strip()
|
| 159 |
+
if iso:
|
| 160 |
+
gc = row["Glottocode"]
|
| 161 |
+
fam = row.get("Family_ID", "").strip()
|
| 162 |
+
mapping[iso] = (gc, fam)
|
| 163 |
+
return mapping
|
| 164 |
+
|
| 165 |
+
|
| 166 |
+
def load_family_names() -> dict[str, str]:
|
| 167 |
+
"""Load glottocode -> name mapping for family-level nodes."""
|
| 168 |
+
lang_file = GLOTTOLOG_DIR / "languages.csv"
|
| 169 |
+
names: dict[str, str] = {}
|
| 170 |
+
with open(lang_file, "r", encoding="utf-8") as f:
|
| 171 |
+
reader = csv.DictReader(f)
|
| 172 |
+
for row in reader:
|
| 173 |
+
gc = row["Glottocode"]
|
| 174 |
+
names[gc] = row["Name"]
|
| 175 |
+
return names
|
| 176 |
+
|
| 177 |
+
|
| 178 |
+
def parse_nexus_file() -> dict[str, dict]:
|
| 179 |
+
"""Parse the full NEXUS file, return {family_glottocode: children_of_dict}."""
|
| 180 |
+
nex_file = GLOTTOLOG_DIR / "classification.nex"
|
| 181 |
+
trees: dict[str, dict] = {}
|
| 182 |
+
|
| 183 |
+
with open(nex_file, "r", encoding="utf-8") as f:
|
| 184 |
+
for line in f:
|
| 185 |
+
line = line.strip()
|
| 186 |
+
if not line.startswith("tree "):
|
| 187 |
+
continue
|
| 188 |
+
# Parse: tree <name> = [&R] <newick>;
|
| 189 |
+
m = re.match(r"tree\s+(\S+)\s*=\s*(?:\[&R\]\s*)?(.+)", line)
|
| 190 |
+
if not m:
|
| 191 |
+
continue
|
| 192 |
+
tree_name = m.group(1)
|
| 193 |
+
newick = m.group(2)
|
| 194 |
+
try:
|
| 195 |
+
children_of = parse_newick(newick)
|
| 196 |
+
trees[tree_name] = children_of
|
| 197 |
+
except Exception as e:
|
| 198 |
+
logger.warning("Failed to parse tree %s: %s", tree_name, e)
|
| 199 |
+
|
| 200 |
+
return trees
|
| 201 |
+
|
| 202 |
+
|
| 203 |
+
def build_tree_index() -> dict:
|
| 204 |
+
"""Build the complete tree index: ISO -> tree position info.
|
| 205 |
+
|
| 206 |
+
Returns a dict with:
|
| 207 |
+
- "languages": {iso: {glottocode, path, family, depth}}
|
| 208 |
+
- "family_names": {glottocode: name}
|
| 209 |
+
"""
|
| 210 |
+
logger.info("Loading ISO mapping...")
|
| 211 |
+
iso_map = load_iso_mapping()
|
| 212 |
+
logger.info("Loaded %d ISO codes", len(iso_map))
|
| 213 |
+
|
| 214 |
+
logger.info("Loading family names...")
|
| 215 |
+
gc_names = load_family_names()
|
| 216 |
+
|
| 217 |
+
logger.info("Parsing NEXUS trees...")
|
| 218 |
+
trees = parse_nexus_file()
|
| 219 |
+
logger.info("Parsed %d family trees", len(trees))
|
| 220 |
+
|
| 221 |
+
# Build ancestry paths for all trees
|
| 222 |
+
all_paths: dict[str, list[str]] = {} # glottocode -> path
|
    for family_gc, children_of in trees.items():
        paths = build_ancestry_paths(children_of)
        all_paths.update(paths)

    logger.info("Built %d ancestry paths", len(all_paths))

    # Now map ISO codes to their tree positions
    index: dict[str, dict] = {}
    found = 0
    missing = 0
    isolate = 0

    for iso, (gc, family_id) in iso_map.items():
        if gc in all_paths:
            path = all_paths[gc]
            family = path[0]  # root of the tree = top-level family
            depth = len(path) - 1  # distance from root
            index[iso] = {
                "glottocode": gc,
                "path": path,
                "family": family,
                "family_name": gc_names.get(family, family),
                "depth": depth,
            }
            found += 1
        elif family_id:
            # Language has a family but isn't in any tree
            # This can happen for isolates or unclassified languages
            index[iso] = {
                "glottocode": gc,
                "path": [family_id, gc] if family_id != gc else [gc],
                "family": family_id,
                "family_name": gc_names.get(family_id, family_id),
                "depth": 1 if family_id != gc else 0,
            }
            found += 1
            isolate += 1
        else:
            # True isolate or unclassified - no family
            # Check if the glottocode IS a top-level family (isolate families)
            if gc in trees:
                index[iso] = {
                    "glottocode": gc,
                    "path": [gc],
                    "family": gc,
                    "family_name": gc_names.get(gc, gc),
                    "depth": 0,
                }
                found += 1
                isolate += 1
            else:
                missing += 1

    logger.info(
        "ISO mapping: %d found (%d isolate/fallback), %d missing",
        found, isolate, missing,
    )

    return {
        "languages": index,
        "family_names": {gc: gc_names.get(gc, gc) for gc in set(
            info["family"] for info in index.values()
        )},
        "stats": {
            "total_iso_codes": len(iso_map),
            "mapped": found,
            "isolate_fallback": isolate,
            "unmapped": missing,
            "families_in_nexus": len(trees),
        },
    }


def main():
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    # Check input files exist
    for fname in ("languages.csv", "classification.nex"):
        fpath = GLOTTOLOG_DIR / fname
        if not fpath.exists():
            logger.error("Missing: %s — run ingest_glottolog.py first", fpath)
            sys.exit(1)

    tree_index = build_tree_index()

    # Write output
    OUTPUT_FILE.parent.mkdir(parents=True, exist_ok=True)
    with open(OUTPUT_FILE, "w", encoding="utf-8") as f:
        json.dump(tree_index, f, ensure_ascii=False, indent=1)

    logger.info("Wrote tree index to %s", OUTPUT_FILE)
    logger.info("Stats: %s", json.dumps(tree_index["stats"]))

    # Print some sample entries for verification
    samples = ["eng", "deu", "fra", "spa", "lat", "grc", "hin", "san", "jpn", "tgl"]
    langs = tree_index["languages"]
    for iso in samples:
        if iso in langs:
            info = langs[iso]
            path_str = " > ".join(info["path"][-4:])  # last 4 nodes
            logger.info(
                " %s: family=%s, depth=%d, path=...%s",
                iso, info["family_name"], info["depth"], path_str,
            )


if __name__ == "__main__":
    main()
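The index entries above encode each language's position as a root-first ancestry path, with `depth = len(path) - 1` edges from the family root. A minimal sketch of that bookkeeping, using made-up placeholder glottocodes rather than real Glottolog data:

```python
def entry_for(path):
    """Build a minimal tree-index entry: root node is the family,
    depth is the number of edges from the root."""
    return {"family": path[0], "depth": len(path) - 1}

# Hypothetical ancestry path: family > branch > language
toy = entry_for(["fam00000", "bran1111", "lang2222"])
print(toy["family"], toy["depth"])  # fam00000 2
```

A single-node path (an isolate treated as its own family) gets depth 0, matching the isolate fallback above.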
scripts/build_phylo_pairs.py
ADDED
|
@@ -0,0 +1,309 @@
#!/usr/bin/env python3
"""Build phylogenetic relationship metadata for cognate pairs.

Cross-references cognate pair language pairs against the Glottolog
tree index to classify each unique (Lang_A, Lang_B) pair by its
phylogenetic relationship.

Usage:
    python scripts/build_phylo_pairs.py
"""

from __future__ import annotations

import csv
import io
import json
import logging
import sys
from pathlib import Path

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding="utf-8")

ROOT = Path(__file__).resolve().parent.parent
logger = logging.getLogger(__name__)

GLOTTOLOG_DIR = ROOT / "data" / "training" / "raw" / "glottolog_cldf"
COGNATE_DIR = ROOT / "data" / "training" / "cognate_pairs"
METADATA_DIR = ROOT / "data" / "training" / "metadata"
OUTPUT_FILE = METADATA_DIR / "phylo_pairs.tsv"

COGNATE_FILES = [
    "cognate_pairs_inherited.tsv",
    "cognate_pairs_borrowing.tsv",
    "cognate_pairs_similarity.tsv",
]

# Curated map of attested ancient/medieval languages to the Glottolog clades
# they are historically ancestral to. Glottocodes verified against Glottolog
# CLDF v5.x (2026-03-14).
#
# Logic: If language A is in this map AND language B's ancestry path passes
# through one of the listed clades AND B is not itself an ancient language
# in this map, then the pair is "near_ancestral".
#
# Limitation: "Near-ancestral" is approximate — Latin is not literally the
# ancestor of French (Vulgar Latin, unattested, is). We use it to mean
# "the attested language is historically ancestral to the clade."
NEAR_ANCESTOR_MAP: dict[str, list[str]] = {
    # Latin → Romance
    "lat": ["roma1334"],
    # Ancient Greek → Koineic Greek (descendants of Koine)
    "grc": ["koin1234"],
    # Sanskrit → Indo-Aryan
    "san": ["indo1321"],
    # Old English → Anglic
    "ang": ["angl1265"],
    # Middle English → Anglic (more recent ancestor)
    "enm": ["angl1265"],
    # Old French → Oil (includes modern French, Picard, etc.)
    "fro": ["oila1234"],
    # Old Spanish → Castilic
    "osp": ["cast1243"],
    # Old Norse → North Germanic
    "non": ["nort3160"],
    # Old High German → High German
    "goh": ["high1289"],
    # Middle Dutch → Modern Dutch group
    "dum": ["mode1257"],
    # Old Irish → Goidelic
    "sga": ["goid1240"],
    # Middle Irish → Goidelic
    "mga": ["goid1240"],
    # Old Church Slavonic → South Slavic
    "chu": ["sout3147"],
    # Old East Slavic → East Slavic
    "orv": ["east1426"],
    # Old Chinese → Classical Chinese (modern Sinitic descends from this)
    "och": ["clas1255"],
    # Ottoman Turkish → Oghuz
    "ota": ["oghu1243"],
}

# Threshold: MRCA depth >= this value → close_sister, else distant_sister
CLOSE_SISTER_DEPTH_THRESHOLD = 3


def load_tree_index() -> dict:
    """Load the Glottolog tree index JSON."""
    tree_file = GLOTTOLOG_DIR / "glottolog_tree.json"
    with open(tree_file, "r", encoding="utf-8") as f:
        return json.load(f)


def extract_unique_pairs() -> set[tuple[str, str]]:
    """Extract unique canonically-ordered (Lang_A, Lang_B) pairs from cognate files."""
    pairs: set[tuple[str, str]] = set()
    for fname in COGNATE_FILES:
        fpath = COGNATE_DIR / fname
        if not fpath.exists():
            logger.warning("Missing cognate file: %s", fpath)
            continue
        with open(fpath, "r", encoding="utf-8") as f:
            reader = csv.DictReader(f, delimiter="\t")
            for row in reader:
                a, b = row["Lang_A"], row["Lang_B"]
                key = (min(a, b), max(a, b))
                pairs.add(key)
        logger.info("After %s: %d unique pairs", fname, len(pairs))
    return pairs


def find_mrca(path_a: list[str], path_b: list[str]) -> tuple[str, int, int]:
    """Find Most Recent Common Ancestor of two ancestry paths.

    Returns (mrca_glottocode, mrca_depth, tree_distance).
    Tree distance = edges from A to MRCA + edges from MRCA to B.
    """
    # Find longest common prefix
    mrca_idx = -1
    min_len = min(len(path_a), len(path_b))
    for i in range(min_len):
        if path_a[i] == path_b[i]:
            mrca_idx = i
        else:
            break

    if mrca_idx < 0:
        # No common ancestor (different top-level families)
        return ("", 0, 99)

    mrca = path_a[mrca_idx]
    mrca_depth = mrca_idx  # depth from root (root = 0)
    dist_a = len(path_a) - 1 - mrca_idx
    dist_b = len(path_b) - 1 - mrca_idx
    tree_distance = dist_a + dist_b

    return (mrca, mrca_depth, tree_distance)


def check_near_ancestral(
    iso_a: str,
    iso_b: str,
    path_a: list[str],
    path_b: list[str],
) -> tuple[bool, str]:
    """Check if either language is a near-ancestor of the other.

    Returns (is_near_ancestral, ancestor_iso).
    """
    # Check if A is in the near-ancestor map
    if iso_a in NEAR_ANCESTOR_MAP:
        target_clades = NEAR_ANCESTOR_MAP[iso_a]
        # Check if B's path passes through any target clade
        if any(clade in path_b for clade in target_clades):
            # B must NOT be an ancient language itself
            if iso_b not in NEAR_ANCESTOR_MAP:
                return (True, iso_a)

    # Check if B is in the near-ancestor map
    if iso_b in NEAR_ANCESTOR_MAP:
        target_clades = NEAR_ANCESTOR_MAP[iso_b]
        if any(clade in path_a for clade in target_clades):
            if iso_a not in NEAR_ANCESTOR_MAP:
                return (True, iso_b)

    return (False, "-")


def classify_pair(
    iso_a: str,
    iso_b: str,
    langs: dict,
    family_names: dict,
) -> dict:
    """Classify a single language pair.

    Returns a dict with the TSV row fields.
    """
    info_a = langs.get(iso_a)
    info_b = langs.get(iso_b)

    # Default values
    result = {
        "Lang_A": iso_a,
        "Lang_B": iso_b,
        "Phylo_Relation": "unclassified",
        "Tree_Distance": 99,
        "MRCA_Clade": "-",
        "MRCA_Depth": 0,
        "Ancestor_Lang": "-",
        "Family_A": "-",
        "Family_B": "-",
    }

    if not info_a or not info_b:
        # One or both not in tree
        if info_a:
            result["Family_A"] = info_a.get("family_name", "-")
        if info_b:
            result["Family_B"] = info_b.get("family_name", "-")
        return result

    family_a = info_a["family"]
    family_b = info_b["family"]
    result["Family_A"] = info_a.get("family_name", family_a)
    result["Family_B"] = info_b.get("family_name", family_b)

    # Cross-family check
    if family_a != family_b:
        result["Phylo_Relation"] = "cross_family"
        return result

    # Same family — find MRCA
    path_a = info_a["path"]
    path_b = info_b["path"]

    mrca, mrca_depth, tree_distance = find_mrca(path_a, path_b)
    result["Tree_Distance"] = tree_distance
    result["MRCA_Clade"] = mrca
    result["MRCA_Depth"] = mrca_depth

    # Check near-ancestral
    is_near_anc, ancestor = check_near_ancestral(iso_a, iso_b, path_a, path_b)
    if is_near_anc:
        result["Phylo_Relation"] = "near_ancestral"
        result["Ancestor_Lang"] = ancestor
        return result

    # Sister classification based on MRCA depth
    if mrca_depth >= CLOSE_SISTER_DEPTH_THRESHOLD:
        result["Phylo_Relation"] = "close_sister"
    else:
        result["Phylo_Relation"] = "distant_sister"

    return result


def main():
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    # Check prerequisites
    tree_file = GLOTTOLOG_DIR / "glottolog_tree.json"
    if not tree_file.exists():
        logger.error("Missing: %s — run build_glottolog_tree.py first", tree_file)
        sys.exit(1)

    logger.info("Loading tree index...")
    tree_index = load_tree_index()
    langs = tree_index["languages"]
    family_names = tree_index.get("family_names", {})
    logger.info("Loaded %d languages from tree index", len(langs))

    logger.info("Extracting unique pairs from cognate files...")
    pairs = extract_unique_pairs()
    logger.info("Total unique pairs: %d", len(pairs))

    logger.info("Classifying pairs...")
    results = []
    for i, (a, b) in enumerate(sorted(pairs)):
        row = classify_pair(a, b, langs, family_names)
        results.append(row)
        if (i + 1) % 100000 == 0:
            logger.info(" Classified %d/%d pairs...", i + 1, len(pairs))

    logger.info("Classification complete. Writing output...")

    # Write TSV
    METADATA_DIR.mkdir(parents=True, exist_ok=True)
    columns = [
        "Lang_A", "Lang_B", "Phylo_Relation", "Tree_Distance",
        "MRCA_Clade", "MRCA_Depth", "Ancestor_Lang", "Family_A", "Family_B",
    ]
    with open(OUTPUT_FILE, "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=columns, delimiter="\t")
        writer.writeheader()
        for row in results:
            writer.writerow(row)

    logger.info("Wrote %d rows to %s", len(results), OUTPUT_FILE)

    # Print statistics
    from collections import Counter
    relation_counts = Counter(r["Phylo_Relation"] for r in results)
    logger.info("=== Distribution by Phylo_Relation ===")
    for relation, count in sorted(relation_counts.items(), key=lambda x: -x[1]):
        pct = 100 * count / len(results)
        logger.info(" %-18s %7d (%5.1f%%)", relation, count, pct)

    # Count unique families
    families_a = set(r["Family_A"] for r in results if r["Family_A"] != "-")
    families_b = set(r["Family_B"] for r in results if r["Family_B"] != "-")
    all_families = families_a | families_b
    logger.info("Unique families: %d", len(all_families))

    # Near-ancestral breakdown
    near_anc = [r for r in results if r["Phylo_Relation"] == "near_ancestral"]
    if near_anc:
        anc_counts = Counter(r["Ancestor_Lang"] for r in near_anc)
        logger.info("=== Near-Ancestral by Ancestor Language ===")
        for anc, count in sorted(anc_counts.items(), key=lambda x: -x[1]):
            logger.info(" %s: %d pairs", anc, count)


if __name__ == "__main__":
    main()
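The longest-common-prefix MRCA logic in `find_mrca` can be exercised in isolation. The sketch below restates the same function and runs it on made-up toy glottocodes (not real Glottolog IDs):

```python
def find_mrca(path_a, path_b):
    """Longest common prefix of two root-first ancestry paths.
    Returns (mrca_node, mrca_depth, tree_distance)."""
    mrca_idx = -1
    for i in range(min(len(path_a), len(path_b))):
        if path_a[i] == path_b[i]:
            mrca_idx = i
        else:
            break
    if mrca_idx < 0:
        # Different top-level families: no common ancestor
        return ("", 0, 99)
    dist = (len(path_a) - 1 - mrca_idx) + (len(path_b) - 1 - mrca_idx)
    return (path_a[mrca_idx], mrca_idx, dist)

# Toy paths: both under root0000, diverging below bran0001
a = ["root0000", "bran0001", "leaf0002"]
b = ["root0000", "bran0001", "leaf0003"]
print(find_mrca(a, b))  # ('bran0001', 1, 2)
```

The tree distance of 2 is one edge up from each leaf to the shared branch node; paths with no shared root fall back to the sentinel distance 99 used throughout the pipeline.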
scripts/ingest_glottolog.py
ADDED
|
@@ -0,0 +1,91 @@
#!/usr/bin/env python3
"""Download Glottolog CLDF data for phylogenetic enrichment.

Source: Glottolog CLDF v5.x (Hammarström, Forkel, Haspelmath & Bank)
URL: https://github.com/glottolog/glottolog-cldf
License: CC BY 4.0
Citation: Hammarström et al., DOI: 10.5281/zenodo.15640174

Downloads languages.csv and classification.nex from the CLDF dataset.
Iron Rule: Data comes from downloaded files. No hardcoded word lists.

Usage:
    python scripts/ingest_glottolog.py
"""

from __future__ import annotations

import hashlib
import logging
import sys
import urllib.request
from pathlib import Path

ROOT = Path(__file__).resolve().parent.parent

logger = logging.getLogger(__name__)

GLOTTOLOG_DIR = ROOT / "data" / "training" / "raw" / "glottolog_cldf"
BASE_URL = "https://raw.githubusercontent.com/glottolog/glottolog-cldf/master/cldf/"

FILES = [
    "languages.csv",
    "classification.nex",
]


def download_if_needed():
    """Download Glottolog CLDF files if not cached."""
    GLOTTOLOG_DIR.mkdir(parents=True, exist_ok=True)
    for fname in FILES:
        local = GLOTTOLOG_DIR / fname
        if local.exists():
            logger.info("Cached: %s (%d bytes)", fname, local.stat().st_size)
            continue
        url = BASE_URL + fname
        logger.info("Downloading %s ...", url)
        req = urllib.request.Request(url, headers={
            "User-Agent": "ancient-scripts-datasets/1.0 (phylo-enrichment)"
        })
        with urllib.request.urlopen(req, timeout=120) as resp:
            data = resp.read()
        with open(local, "wb") as f:
            f.write(data)
        md5 = hashlib.md5(data).hexdigest()
        logger.info("Downloaded %s (%d bytes, md5=%s)", fname, len(data), md5)


def verify_files():
    """Verify downloaded files exist and have reasonable sizes."""
    ok = True
    for fname in FILES:
        local = GLOTTOLOG_DIR / fname
        if not local.exists():
            logger.error("MISSING: %s", local)
            ok = False
            continue
        size = local.stat().st_size
        if size < 1000:
            logger.error("TOO SMALL: %s (%d bytes)", fname, size)
            ok = False
        else:
            md5 = hashlib.md5(local.read_bytes()).hexdigest()
            logger.info("OK: %s (%d bytes, md5=%s)", fname, size, md5)
    return ok


def main():
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )
    download_if_needed()
    if verify_files():
        logger.info("All Glottolog CLDF files downloaded successfully.")
    else:
        logger.error("Some files missing or corrupted.")
        sys.exit(1)


if __name__ == "__main__":
    main()
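The md5 fingerprints logged above are plain `hashlib.md5(...).hexdigest()` calls over the raw bytes, useful for spot-checking that a cached file matches a logged download. As a fixed reference point, the digest of the empty byte string is a well-known constant:

```python
import hashlib

# md5 over raw bytes, hex-encoded — same call used for the download logs
digest = hashlib.md5(b"").hexdigest()
print(digest)  # d41d8cd98f00b204e9800998ecf8427e
```

Note md5 here is a cache-consistency fingerprint, not a security measure.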
scripts/validate_phylo_pairs.py
ADDED
|
@@ -0,0 +1,287 @@
#!/usr/bin/env python3
"""Validate phylo_pairs.tsv with known-answer checks and coverage tests.

Usage:
    python scripts/validate_phylo_pairs.py
"""

from __future__ import annotations

import csv
import io
import json
import logging
import random
import sys
from collections import Counter
from pathlib import Path

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding="utf-8")

ROOT = Path(__file__).resolve().parent.parent
logger = logging.getLogger(__name__)

PHYLO_FILE = ROOT / "data" / "training" / "metadata" / "phylo_pairs.tsv"
TREE_FILE = ROOT / "data" / "training" / "raw" / "glottolog_cldf" / "glottolog_tree.json"
COGNATE_DIR = ROOT / "data" / "training" / "cognate_pairs"

# Known-answer test cases: (Lang_A, Lang_B, expected_relation, description)
# Lang_A, Lang_B are alphabetically ordered.
KNOWN_ANSWER_TESTS = [
    ("deu", "eng", "close_sister", "both West Germanic"),
    ("eng", "fra", "distant_sister", "Germanic vs Italic under IE"),
    ("fra", "lat", "near_ancestral", "Latin is ancestor of French via Romance"),
    ("lat", "spa", "near_ancestral", "Latin is ancestor of Spanish via Romance"),
    # Latin and Oscan are in different branches of Italic:
    # Latin in Latino-Faliscan, Oscan in Sabellic. MRCA=ital1284 at depth 2.
    ("lat", "osc", "distant_sister", "Latino-Faliscan vs Sabellic under Italic"),
    ("ang", "eng", "near_ancestral", "Old English is ancestor of English"),
    ("got", "swe", "distant_sister", "East Germanic vs North Germanic"),
    ("hin", "san", "near_ancestral", "Sanskrit is ancestor of Hindi via Indo-Aryan"),
    ("eng", "jpn", "cross_family", "Indo-European vs Japonic"),
    ("ceb", "tgl", "close_sister", "both Central Philippine under Austronesian"),
    ("dan", "swe", "close_sister", "both North Germanic"),
    ("ita", "spa", "close_sister", "both Italo-Western Romance"),
    ("rus", "ukr", "close_sister", "both East Slavic"),
]


def load_phylo_pairs() -> dict[tuple[str, str], dict]:
    """Load phylo_pairs.tsv into a lookup dict."""
    lookup = {}
    with open(PHYLO_FILE, "r", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")
        for row in reader:
            key = (row["Lang_A"], row["Lang_B"])
            lookup[key] = row
    return lookup


def test_known_answers(lookup: dict) -> tuple[int, int]:
    """Run known-answer tests. Returns (passed, total)."""
    passed = 0
    total = len(KNOWN_ANSWER_TESTS)

    for lang_a, lang_b, expected, description in KNOWN_ANSWER_TESTS:
        key = (min(lang_a, lang_b), max(lang_a, lang_b))
        if key not in lookup:
            logger.error(
                "FAIL %s-%s: pair not found in phylo_pairs.tsv (%s)",
                lang_a, lang_b, description,
            )
            continue

        row = lookup[key]
        actual = row["Phylo_Relation"]
        if actual == expected:
            logger.info(
                "PASS %s-%s: %s (%s)",
                lang_a, lang_b, actual, description,
            )
            passed += 1
        else:
            logger.error(
                "FAIL %s-%s: expected %s, got %s (%s)",
                lang_a, lang_b, expected, actual, description,
            )

    return passed, total


def test_near_ancestral_integrity(lookup: dict) -> tuple[int, int]:
    """Verify near_ancestral pairs have valid Ancestor_Lang values."""
    passed = 0
    total = 0
    for key, row in lookup.items():
        if row["Phylo_Relation"] == "near_ancestral":
            total += 1
            anc = row["Ancestor_Lang"]
            if anc != "-" and anc in (row["Lang_A"], row["Lang_B"]):
                passed += 1
            else:
                logger.error(
                    "FAIL near_ancestral %s-%s: Ancestor_Lang=%s not in pair",
                    row["Lang_A"], row["Lang_B"], anc,
                )
    return passed, total


def test_cross_family_integrity(lookup: dict) -> tuple[int, int]:
    """Verify cross_family pairs have different families."""
    passed = 0
    total = 0
    for key, row in lookup.items():
        if row["Phylo_Relation"] == "cross_family":
            total += 1
            if row["Family_A"] != row["Family_B"]:
                passed += 1
            else:
                logger.error(
                    "FAIL cross_family %s-%s: same family %s",
                    row["Lang_A"], row["Lang_B"], row["Family_A"],
                )
    return passed, total


def test_coverage(lookup: dict) -> tuple[int, int]:
    """Verify coverage of cognate pair ISO codes in tree."""
    tree_file = TREE_FILE
    with open(tree_file, "r", encoding="utf-8") as f:
        tree = json.load(f)
    langs = tree["languages"]

    # Get all ISO codes from cognate pairs
    cognate_isos = set()
    for fname in ["cognate_pairs_inherited.tsv", "cognate_pairs_borrowing.tsv",
                  "cognate_pairs_similarity.tsv"]:
        fpath = COGNATE_DIR / fname
        if not fpath.exists():
            continue
        with open(fpath, "r", encoding="utf-8") as f:
            reader = csv.DictReader(f, delimiter="\t")
            for row in reader:
                cognate_isos.add(row["Lang_A"])
                cognate_isos.add(row["Lang_B"])

    in_tree = sum(1 for iso in cognate_isos if iso in langs)
    pct = 100 * in_tree / len(cognate_isos)
    logger.info("Coverage: %d/%d ISO codes (%.1f%%)", in_tree, len(cognate_isos), pct)

    if pct >= 95.0:
        return (1, 1)
    else:
        logger.error("Coverage below 95%%: %.1f%%", pct)
        return (0, 1)


def test_random_audit(lookup: dict, n: int = 20) -> tuple[int, int]:
    """Audit N random pairs by tracing back to tree index."""
    with open(TREE_FILE, "r", encoding="utf-8") as f:
        tree = json.load(f)
    langs = tree["languages"]

    # Select random pairs that are not unclassified
    classifiable = [
        (k, v) for k, v in lookup.items()
        if v["Phylo_Relation"] != "unclassified"
    ]
    random.seed(42)  # reproducible
    samples = random.sample(classifiable, min(n, len(classifiable)))

    passed = 0
    for (a, b), row in samples:
        info_a = langs.get(a)
        info_b = langs.get(b)

        if not info_a or not info_b:
            logger.error("AUDIT FAIL %s-%s: missing from tree index", a, b)
            continue

        # Verify families match
        fam_a_match = info_a.get("family_name", "-") == row["Family_A"]
        fam_b_match = info_b.get("family_name", "-") == row["Family_B"]

        # Verify same/different family classification
        same_family = info_a["family"] == info_b["family"]
        relation = row["Phylo_Relation"]
        family_consistent = (
            (relation == "cross_family" and not same_family) or
            (relation in ("close_sister", "distant_sister", "near_ancestral") and same_family)
        )

        if fam_a_match and fam_b_match and family_consistent:
            passed += 1
        else:
            logger.error(
                "AUDIT FAIL %s-%s: fam_a=%s/%s fam_b=%s/%s family_consistent=%s",
                a, b,
                info_a.get("family_name"), row["Family_A"],
                info_b.get("family_name"), row["Family_B"],
                family_consistent,
            )

    return passed, n


def main():
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    if not PHYLO_FILE.exists():
        logger.error("Missing: %s — run build_phylo_pairs.py first", PHYLO_FILE)
        sys.exit(1)

    logger.info("Loading phylo_pairs.tsv...")
    lookup = load_phylo_pairs()
    logger.info("Loaded %d pairs", len(lookup))

    all_passed = 0
    all_total = 0

    # Test 1: Known-answer checks
    logger.info("")
    logger.info("=== Test 1: Known-Answer Checks ===")
    p, t = test_known_answers(lookup)
    all_passed += p
    all_total += t
    logger.info("Known answers: %d/%d passed", p, t)

    # Test 2: Near-ancestral integrity
    logger.info("")
    logger.info("=== Test 2: Near-Ancestral Integrity ===")
    p, t = test_near_ancestral_integrity(lookup)
    all_passed += p
    all_total += t
    logger.info("Near-ancestral integrity: %d/%d passed", p, t)

    # Test 3: Cross-family integrity
    logger.info("")
    logger.info("=== Test 3: Cross-Family Integrity ===")
    p, t = test_cross_family_integrity(lookup)
    all_passed += p
    all_total += t
    logger.info("Cross-family integrity: %d/%d passed", p, t)

    # Test 4: Coverage
    logger.info("")
    logger.info("=== Test 4: Coverage Check ===")
    p, t = test_coverage(lookup)
    all_passed += p
    all_total += t

    # Test 5: Random audit
    logger.info("")
    logger.info("=== Test 5: Random Audit (20 pairs) ===")
    p, t = test_random_audit(lookup)
    all_passed += p
    all_total += t
    logger.info("Random audit: %d/%d passed", p, t)

    # Summary
    logger.info("")
    logger.info("=" * 50)
    logger.info("TOTAL: %d/%d tests passed", all_passed, all_total)

    # Distribution summary
    relation_counts = Counter(v["Phylo_Relation"] for v in lookup.values())
    logger.info("")
    logger.info("=== Distribution ===")
    for relation, count in sorted(relation_counts.items(), key=lambda x: -x[1]):
        pct = 100 * count / len(lookup)
        logger.info(" %-18s %7d (%5.1f%%)", relation, count, pct)

    if all_passed == all_total:
        logger.info("")
        logger.info("ALL TESTS PASSED")
        sys.exit(0)
    else:
        logger.info("")
        logger.info("SOME TESTS FAILED")
        sys.exit(1)


if __name__ == "__main__":
    main()
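Both the builder and the validator key language pairs by the alphabetically ordered tuple `(min(a, b), max(a, b))`, so a pair can be looked up regardless of which language comes first in a cognate row. A minimal check of that convention:

```python
def canonical(a, b):
    """Canonical alphabetical ordering for a language pair key,
    matching the (min, max) keys used in the scripts above."""
    return (min(a, b), max(a, b))

print(canonical("eng", "deu"))  # ('deu', 'eng')
print(canonical("deu", "eng") == canonical("eng", "deu"))  # True
```

This is why `KNOWN_ANSWER_TESTS` entries must list Lang_A and Lang_B alphabetically: the lookup dict only contains the canonical orientation of each pair.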