---
language:
- mul
license: cc-by-sa-4.0
task_categories:
- text-classification
- token-classification
tags:
- linguistics
- historical-linguistics
- cognate-detection
- phylogenetics
- ancient-languages
- IPA
- phonetics
size_categories:
- 10M<n<100M
configs:
- config_name: cognate_pairs_inherited
data_files:
- split: train
path: data/training/cognate_pairs/cognate_pairs_inherited.parquet
- config_name: cognate_pairs_borrowing
data_files:
- split: train
path: data/training/cognate_pairs/cognate_pairs_borrowing.parquet
- config_name: cognate_pairs_similarity
data_files:
- split: train
path: data/training/cognate_pairs/cognate_pairs_similarity.parquet
- config_name: cognate_pairs_phono_filtered
data_files:
- split: train
path: data/training/cognate_pairs/cognate_pairs_phono_filtered.parquet
- config_name: linear_a_phonotactics_validation
data_files:
- split: train
path: data/training/cognate_pairs/linear_a_phonotactics_validation.parquet
- config_name: phylo_pairs
data_files:
- split: train
path: data/training/metadata/phylo_pairs.parquet
- config_name: languages
data_files:
- split: train
path: data/training/metadata/languages.parquet
---
# Ancient Scripts Decipherment Datasets
Collated datasets for the paper:
> **Deciphering Undersegmented Ancient Scripts Using Phonetic Prior**
> Jiaming Luo, Frederik Hartmann, Enrico Santus, Regina Barzilay, Yuan Cao
> *Transactions of the Association for Computational Linguistics*, 2021
> [arXiv:2010.11054](https://arxiv.org/abs/2010.11054)
This repository gathers the training datasets used in the paper — both those hosted in the authors' GitHub repos and the external cited sources.
---
## Repository Structure
```
data/
├── gothic/ # Gothic language data
│ ├── got.pretrained.pth # Pretrained phonological embeddings (PyTorch)
│ ├── segments.pkl # Phonetic segment data (Python pickle)
│ ├── gotica.txt # Gothic Bible plain text (Wulfila project)
│ └── gotica.xml.zip # Gothic Bible TEI XML (Wulfila project)
│
├── ugaritic/ # Ugaritic-Hebrew cognate data
│ ├── uga-heb.no_spe.cog # Full cognate pairs (TSV, ~7,353 tokens)
│ └── uga-heb.small.no_spe.cog # Small training subset (~10% of full)
│
├── iberian/ # Iberian inscription data
│ └── iberian.csv # Cleaned Hesperia epigraphy (3,466 chunks)
│
├── religious_terms/ # ** CURATED SUBSET: Religious vocabulary **
│ ├── README.md # Methodology and category definitions
│ ├── ugaritic_hebrew_religious.tsv # ~170 Ug-Heb cognate pairs (deity, ritual, sacred)
│ ├── gothic_religious.tsv # ~65 Gothic Bible religious terms
│ └── iberian_religious.tsv # ~40 Iberian votive/religious elements
│
├── linear_b/ # Linear B (Mycenaean Greek) dataset
│ ├── README.md # Sources, methodology, limitations
│ ├── linear_b_signs.tsv # 211 signs (88 syllabograms + 123 ideograms)
│ ├── sign_to_ipa.json # 74 syllabogram → IPA mappings
│ └── linear_b_words.tsv # 2,484 words with IPA, glosses, sources
│
├── validation/ # Phylogenetic validation dataset (9 branches)
│ ├── README.md # Format, sources, concept list
│ ├── concepts.tsv # 40 shared concept IDs
│ ├── germanic.tsv # got, ang, non, goh (~160 entries)
│ ├── celtic.tsv # sga, cym, bre (~120 entries)
│ ├── balto_slavic.tsv # lit, chu, rus (~120 entries)
│ ├── indo_iranian.tsv # san, ave, fas (~120 entries)
│ ├── italic.tsv # lat, osc, xum (~120 entries)
│ ├── hellenic.tsv # grc, gmy (~80 entries)
│ ├── semitic.tsv # heb, arb, amh (~120 entries)
│ ├── turkic.tsv # otk, tur, aze (~120 entries)
│ └── uralic.tsv # fin, hun, est (~120 entries)
│
└── cited_sources/ # External datasets cited in the paper
├── genesis/
│ ├── Hebrew.xml # Hebrew Bible (Christodouloupoulos & Steedman 2015)
│ └── Latin.xml # Latin Bible (same corpus)
├── basque/
│ ├── Basque-NT.xml # Basque New Testament (same corpus)
│ └── Trask_Etymological_Dictionary_Basque.pdf # Trask's Basque etymological dictionary
└── iberian_names/
└── RodriguezRamos2014.pdf # Iberian onomastic index (personal names)
```
---
## Dataset Details
### Gothic (`data/gothic/`)
| File | Source | Description |
|---|---|---|
| `got.pretrained.pth` | [DecipherUnsegmented](https://github.com/j-luo93/DecipherUnsegmented) | Pretrained phonological embeddings trained on Gothic IPA data |
| `segments.pkl` | [DecipherUnsegmented](https://github.com/j-luo93/DecipherUnsegmented) | Serialized phonetic segment inventory |
| `gotica.txt` | [Wulfila Project](https://www.wulfila.be/gothic/download/) | Plain text of the Gothic Bible (4th century CE translation by Bishop Wulfila) |
| `gotica.xml.zip` | [Wulfila Project](https://www.wulfila.be/gothic/download/) | TEI P5 XML encoding with linguistic annotations |
The Gothic Bible is the primary source of Gothic text. The paper uses unsegmented Gothic inscriptions dating from the 3rd to the 10th century AD.
### Ugaritic (`data/ugaritic/`)
| File | Source | Description |
|---|---|---|
| `uga-heb.no_spe.cog` | [NeuroDecipher](https://github.com/j-luo93/NeuroDecipher) | Full Ugaritic-Hebrew cognate pairs |
| `uga-heb.small.no_spe.cog` | [NeuroDecipher](https://github.com/j-luo93/NeuroDecipher) | ~10% training subset |
**Format:** Tab-separated values. Each row is a cognate pair. Column 1 = Ugaritic transliteration, Column 2 = Hebrew transliteration. `|` separates multiple cognates; `_` marks missing entries. Originally from Snyder et al. (2010), covering 7,353 segmented tokens from the 14th-12th century BC.
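The format above can be parsed with a few lines of standard-library Python. This is a sketch of the conventions described (tabs, `|`, `_`), and the sample forms in the comments are illustrative, not drawn from the data:

```python
# Parse one line of the .cog format: two tab-separated columns
# (Ugaritic, Hebrew), "|" separating alternative cognates,
# "_" marking a missing entry.
def parse_cog_line(line):
    uga_cell, heb_cell = line.rstrip("\n").split("\t")

    def expand(cell):
        # "_" means no attested cognate in this column
        return [] if cell == "_" else cell.split("|")

    return expand(uga_cell), expand(heb_cell)
```

For example, `parse_cog_line("grgr\tgargar|gerger")` returns `(["grgr"], ["gargar", "gerger"])`, and a `_` cell yields an empty list.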
### Iberian (`data/iberian/`)
| File | Source | Description |
|---|---|---|
| `iberian.csv` | [DecipherUnsegmented](https://github.com/j-luo93/DecipherUnsegmented) | Cleaned epigraphic inscriptions |
**Format:** CSV with columns `REF. HESPERIA` (inscription reference code) and `cleaned` (transcribed text). Contains 3,466 undersegmented character chunks from the 6th-1st century BC. Sourced from the [Hesperia database](http://hesperia.ucm.es/en/proyecto_hesperia.php) and cleaned via the authors' Jupyter notebook.
### Linear B / Mycenaean Greek (`data/linear_b/`)
| File | Source | Description |
|---|---|---|
| `linear_b_signs.tsv` | [Unicode UCD](https://www.unicode.org/Public/UCD/latest/) | 211 signs: 88 syllabograms + 123 ideograms with Bennett numbers and IPA |
| `sign_to_ipa.json` | Ventris & Chadwick (1973) | 74 syllabogram transliteration → IPA mappings |
| `linear_b_words.tsv` | Multiple (see below) | 2,484 words with IPA, glosses, and source attribution |
**Format:** Tab-separated values. The word list contains columns: `Word` (transliteration), `IPA`, `SCA` (sound class), `Source`, `Concept_ID`, `Cognate_Set_ID`, `Gloss`, `Word_Type`, `IPA_Source`. Words come from three CC-BY-SA compatible sources: [jhnwnstd/shannon](https://github.com/jhnwnstd/shannon) Linear B Lexicon (MIT, 2,272 entries), [Wiktionary](https://en.wiktionary.org/wiki/Category:Mycenaean_Greek_lemmas) Mycenaean Greek lemmas (CC-BY-SA, 170 entries with 46 expert IPA), and IE-CoR cognate pairs (42 entries). The sign inventory is from the Unicode Character Database.
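A minimal way to load the word list, using only the standard library; the default path assumes the repository layout shown above:

```python
import csv

def load_linear_b_words(path="data/linear_b/linear_b_words.tsv"):
    """Read the word list into a list of dicts keyed by the column names."""
    with open(path, encoding="utf-8") as f:
        return list(csv.DictReader(f, delimiter="\t"))
```

Each row is then addressable by column name, e.g. `row["Word"]` or `row["IPA"]`.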
### Cited Sources (`data/cited_sources/`)
These are external datasets referenced in the paper for known-language vocabularies and comparison:
| File | Citation | Usage in Paper |
|---|---|---|
| `genesis/Hebrew.xml` | Christodouloupoulos & Steedman (2015) | Hebrew vocabulary for Ugaritic comparison |
| `genesis/Latin.xml` | Christodouloupoulos & Steedman (2015) | Latin vocabulary for cross-linguistic comparison |
| `basque/Basque-NT.xml` | Christodouloupoulos & Steedman (2015) | Basque vocabulary for Iberian comparison |
| `basque/Trask_Etymological_Dictionary_Basque.pdf` | Trask (2008) | Basque etymological data |
| `iberian_names/RodriguezRamos2014.pdf` | Rodriguez Ramos (2014) | Iberian personal name lists with Latin correspondences |
The Bible texts are from the [Massively Parallel Bible Corpus](https://github.com/christos-c/bible-corpus) (CC0 licensed).
---
## Additional Data Sources (Not Included)
The following sources were cited in the paper but are not machine-readable or freely downloadable:
- **Wiktionary descendant trees** for Proto-Germanic, Old Norse, and Old English vocabularies — extracted by the authors from Wiktionary's structured data
- **Original Hesperia epigraphy** (`hesperia_epigraphy.csv`) — referenced in the DecipherUnsegmented README but not present in the repository
---
## Cognate Detection Pipeline
The `cognate_pipeline/` directory contains a full Python package for cross-linguistic cognate detection, built on the datasets in this repository. It provides:
- **Ingestion** of CSV/TSV/COG, CLDF, Wiktionary JSONL, and generic JSON sources
- **Phonetic normalisation** with transcription type tracking (IPA, transliteration, orthographic)
- **SCA sound class encoding** (List 2012) for phonological comparison
- **Family-aware cognate candidate generation** (tags `cognate_inherited` vs `similarity_only`)
- **Weighted Levenshtein scoring** with SCA-class-aware substitution costs
- **Clustering** via connected components or UPGMA
- **PostgreSQL/PostGIS database** with 8 normalised tables and Alembic migrations
- **Export** to CLDF Wordlist and JSON-LD formats
- **Full provenance tracking** through every pipeline stage
Supports 36 languages across 9 phylogenetic branches (Germanic, Celtic, Balto-Slavic, Indo-Iranian, Italic, Hellenic, Semitic, Turkic, Uralic) plus isolates, with Glottocode resolution and IPA transcriptions.
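The SCA-weighted scoring step above can be sketched as follows. This is an illustrative toy: the class map covers only a handful of segments and the costs (0.3 within-class vs 1.0 across-class) are placeholder values, not the pipeline's actual tables:

```python
# Weighted Levenshtein where substituting two segments in the same
# SCA sound class (List 2012) is cheaper than substituting across
# classes. Toy class map for illustration only.
SCA_CLASS = {"p": "P", "b": "P", "t": "T", "d": "T", "a": "A", "e": "E", "i": "I"}

def sub_cost(x, y, same_class=0.3, diff_class=1.0):
    if x == y:
        return 0.0
    return same_class if SCA_CLASS.get(x) == SCA_CLASS.get(y) else diff_class

def weighted_levenshtein(s, t, indel=1.0):
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel
    for j in range(1, n + 1):
        d[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + indel,                          # deletion
                d[i][j - 1] + indel,                          # insertion
                d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]),  # substitution
            )
    return d[m][n]
```

With these placeholder costs, `weighted_levenshtein("pat", "bat")` gives 0.3 (p and b share class P), while an across-class substitution costs 1.0, so phonologically plausible correspondences score as closer.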
See `data/validation/README.md` for the phylogenetic validation dataset.
See [`cognate_pipeline/README.md`](cognate_pipeline/README.md) for installation and usage.
---
## Original Repositories
- [j-luo93/DecipherUnsegmented](https://github.com/j-luo93/DecipherUnsegmented) — main code for the paper
- [j-luo93/NeuroDecipher](https://github.com/j-luo93/NeuroDecipher) — predecessor (Ugaritic/Linear B decipherment)
- [j-luo93/xib](https://github.com/j-luo93/xib) — earlier Iberian codebase
## Programmatic Access
### Via HuggingFace `datasets`
```python
from datasets import load_dataset
# Load cognate pairs (Parquet, fast)
ds = load_dataset("Nacryos/ancient-scripts-datasets", "cognate_pairs_inherited")
# Available configs: cognate_pairs_inherited, cognate_pairs_borrowing,
# cognate_pairs_similarity, cognate_pairs_phono_filtered,
# linear_a_phonotactics_validation, phylo_pairs, languages
```
### Via Python SDK
For typed access with phylogenetic filtering, IPA parsing, and validation sets:
```bash
pip install git+https://github.com/Project-Phaistos/ancient-scripts-datasets-NEW.git
```
```python
from ancient_scripts_data import AncientScriptsDataset
ds = AncientScriptsDataset() # auto-downloads from HF
pairs = ds.cognate_pairs("inherited", phylo_filter="close_sister", limit=1000)
lex = ds.lexicon("lat")
rel = ds.phylo_relation("lat", "spa") # near_ancestral
```
See the [SDK documentation](https://github.com/Project-Phaistos/ancient-scripts-datasets-NEW) for full API reference.
---
## Paper Citation
```bibtex
@article{luo2021deciphering,
title={Deciphering Undersegmented Ancient Scripts Using Phonetic Prior},
author={Luo, Jiaming and Hartmann, Frederik and Santus, Enrico and Barzilay, Regina and Cao, Yuan},
journal={Transactions of the Association for Computational Linguistics},
volume={9},
pages={69--81},
year={2021},
doi={10.1162/tacl_a_00354}
}
```