---
license: cc-by-sa-4.0
task_categories:
- text-retrieval
language:
- en
tags:
- wiktionary
- dictionary
- english
- linguistics
- morphology
- semantics
- normalized
size_categories:
- 1M<n<10M
---

# English Wiktionary - Normalized SQLite Database

This is a normalized SQLite database of the English Wiktionary, capturing every field from the `cstr/en-wiktionary-extracted` dataset.
Note that it does **not** include everything wiktextract can extract with `--all`, such as translations, examples, and pronunciations.

## 🎯 Key Features

- **Rich fields captured**, including:
  - πŸ”— **Wikilinks** in definitions (semantic connections)
  - πŸ“ **Qualifiers** (e.g., "archaic", "US", "informal")
  - 🏷️ **Sense IDs** (unique identifiers)
  - 🌐 **Wikidata IDs** (for semantic web linking)
  - πŸ“š **Attestations** (historical citations)
  - 🎭 **Head templates** (morphological data)
  - πŸ“– **Info templates** (structured metadata)
- **⚑ Fast Queries**: Fully indexed schema for sub-20ms queries
- **πŸ”— Semantic Web**: relations preserved with sense-level granularity
- **πŸ“± Mobile-ready**: Optimized for sq(f)lite (Flutter) and local DB use cases

## πŸ“Š Database Statistics

- **Entries**: 1,243,200
- **Word Senses**: 1,361,968
- **Definitions (Glosses)**: 1,381,486
- **Wikilinks**: 2,585,821
- **Sense IDs**: 1,361,968
- **Qualifiers**: Embedded in senses
- **Translations**: 0
- **Word Forms**: 700,191
- **Head Templates**: 1,237,679
- **Pronunciations**: 0
- **Examples**: 0
- **Attestations**: 4,295
- **Wikidata IDs**: 2,309
- **Synonyms**: 214,838
- **Antonyms**: 11,816
- **Hypernyms**: 9,818
- **Hyponyms**: 22,649
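
The headline counts above can be reproduced with simple `COUNT(*)` queries once the database is downloaded. A minimal sketch, using a tiny in-memory stand-in for the `entries` table so the snippet runs on its own; point the connection at the real file in practice:

```python
import sqlite3

# In-memory stand-in (assumption: the real `entries` table has at least these columns).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entries (id INTEGER PRIMARY KEY, word TEXT);
    INSERT INTO entries (word) VALUES ('dog'), ('cat'), ('run');
""")

# Count rows per table; extend the tuple with 'senses', 'glosses', ... on the real DB.
for table in ("entries",):
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    print(f"{table}: {count:,}")
```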
## πŸ—οΈ Database Schema (40+ Tables)

### New Tables (vs Previous Versions)
- **head_templates**: Morphological templates
- **entry_wikipedia**: Wikipedia cross-references
- **sense_links**: Wikilinks in definitions
- **sense_raw_tags**: Unstructured tags
- **sense_wikidata**: Wikidata identifiers
- **sense_wikipedia**: Wikipedia at sense level
- **attestations**: Historical citations
- **info_templates**: Structured metadata

### Core Tables
- **entries**: Core word data with etymology
- **senses**: Definitions with qualifier, senseid, head_nr
- **translations**: Multi-language translations
- **examples**: Usage examples
- **Semantic relations**: synonyms, antonyms, hypernyms, hyponyms, meronyms, holonyms, coordinate_terms
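
With 40+ tables, it helps to discover the schema directly from SQLite's `sqlite_master` catalog and `PRAGMA table_info`. A minimal sketch, shown against a two-table in-memory stand-in so it runs on its own; run the same queries against the downloaded database to see the full table list:

```python
import sqlite3

# Stand-in database with two of the tables named above (columns are assumptions).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entries (id INTEGER PRIMARY KEY, word TEXT, lang TEXT);
    CREATE TABLE senses  (id INTEGER PRIMARY KEY, entry_id INTEGER, qualifier TEXT);
""")

# List every table in the database.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(tables)

# Inspect the columns of one table (row[1] of PRAGMA table_info is the column name).
columns = [row[1] for row in conn.execute("PRAGMA table_info(entries)")]
print(columns)
```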

## πŸ“– Usage

### Download
```python
from huggingface_hub import hf_hub_download
import sqlite3
import gzip
import shutil

# Download compressed database
db_gz_path = hf_hub_download(
    repo_id="cstr/en-wiktionary-sqlite-full",
    filename="en_wiktionary_normalized_full.db.gz",
    repo_type="dataset"
)

# Decompress
db_path = db_gz_path.replace('.gz', '')
with gzip.open(db_gz_path, 'rb') as f_in:
    with open(db_path, 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)

# Connect and create a cursor (used by the example queries below)
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
```

### Example Queries

```python
# Get definition with wikilinks for "dog"
cursor.execute('''
    SELECT g.gloss_text, GROUP_CONCAT(l.link_text, ', ') as links
    FROM entries e
    JOIN senses s ON e.id = s.entry_id
    JOIN glosses g ON s.id = g.sense_id
    LEFT JOIN sense_links l ON s.id = l.sense_id
    WHERE e.word = ? AND e.lang = 'English'
    GROUP BY g.id
''', ('dog',))

# Get words with specific qualifier (e.g., "archaic")
cursor.execute('''
    SELECT e.word, s.qualifier, g.gloss_text
    FROM entries e
    JOIN senses s ON e.id = s.entry_id
    JOIN glosses g ON s.id = g.sense_id
    WHERE s.qualifier LIKE '%archaic%'
    LIMIT 10
''')

# Find Wikidata ID for a sense
cursor.execute('''
    SELECT e.word, w.wikidata_id
    FROM entries e
    JOIN senses s ON e.id = s.entry_id
    JOIN sense_wikidata w ON s.id = w.sense_id
    WHERE e.word = ?
''', ('cat',))
```
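
The semantic-relation tables can be queried the same way. A sketch for synonyms, using a small in-memory fixture that mirrors the assumed shape of the tables (column names such as `synonym` are assumptions; verify them with `PRAGMA table_info` against the real database):

```python
import sqlite3

# Minimal fixture mirroring the assumed entries -> senses -> synonyms chain.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entries  (id INTEGER PRIMARY KEY, word TEXT, lang TEXT);
    CREATE TABLE senses   (id INTEGER PRIMARY KEY, entry_id INTEGER);
    CREATE TABLE synonyms (sense_id INTEGER, synonym TEXT);
    INSERT INTO entries  VALUES (1, 'dog', 'English');
    INSERT INTO senses   VALUES (10, 1);
    INSERT INTO synonyms VALUES (10, 'hound'), (10, 'canine');
""")

# Fetch synonyms for a word via entry -> sense -> synonym.
cursor = conn.cursor()
cursor.execute('''
    SELECT syn.synonym
    FROM entries e
    JOIN senses s ON e.id = s.entry_id
    JOIN synonyms syn ON s.id = syn.sense_id
    WHERE e.word = ? AND e.lang = 'English'
    ORDER BY syn.synonym
''', ('dog',))
synonyms_found = [row[0] for row in cursor.fetchall()]
print(synonyms_found)  # with this fixture: ['canine', 'hound']
```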

## πŸ“œ License

CC BY-SA 4.0 (same as the source dataset)

## πŸ”„ Version

This version is **lossless** with respect to the source dataset: all 40+ fields are captured.