---
license: cc-by-sa-4.0
pretty_name: Tigre Wikipedia Corpus
tags:
  - wikipedia
  - tigre language
  - tig
  - corpus
  - low-resource
language:
  - tig
---

# Tigre Wikipedia Corpus (tigwiki)

### Overview

This repository houses the **Tigre Wikipedia Corpus**, a foundational linguistic resource containing all main-namespace articles from **https://tig.wikipedia.org**.

**Tigre** is an under-resourced South Semitic language within the Afro-Asiatic family. 
This dataset serves as a critical component for bridging the digital divide, facilitating the development of Natural Language Processing (NLP) models—including Language Models (LMs), Machine Translation (MT) systems, and text generation tools—specifically tailored for the Tigre community.

### Background & Scope

The Tigre language Wikipedia was officially **approved and launched in December 2024**, marking a significant milestone for the language's digital presence. 
This corpus represents the collective effort of the **Tigre diaspora community**, with contributions from over twenty dedicated volunteers 
who actively build and maintain this comprehensive digital knowledge base.

---

## Included Data & Coverage

### **Data Modalities**

This repository contains **Monolingual Text** extracted directly from Wikipedia articles.

### **Domain Coverage**

The corpus encompasses a diverse range of topics, reflecting the broad scope of the encyclopedia. Key sections include:

- **Culture & Heritage:** Art, Culture, Food, and Drinks
- **Geography & Travel:** "Let's explore our country," Tourism
- **STEM:** Science, Technology, Health
- **Humanities:** History, Politics, Biography, Literature (Books)
- **General Interest:** Sport, Entertainment ("Fun"), Miscellaneous

---

## Dataset Structure

The corpus is provided as a single, compressed JSON Lines file (`tigre_wikipedia.jsonl.gz`). This format is efficient for streaming and compatibility with standard NLP libraries.

```
tigre-data-wikipedia/
├── README.md
├── build_corpus.py             # Script that rebuilds the corpus from a Wikimedia dump
└── tigre_wikipedia.jsonl.gz    # The corpus (gzip-compressed JSON Lines)
```

### **Data Fields**

| Field | Type | Description |
|-------|--------|-------------|
| id | string | Unique ID of the article |
| title | string | Article title |
| text | string | Cleaned plain-text content |
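
Because the corpus is plain gzip-compressed JSON Lines, it can also be read without any NLP framework. The sketch below uses only the Python standard library; the field names follow the table above, the filename matches the repository layout shown earlier, and `iter_articles` is a helper name chosen here for illustration.

```python
import gzip
import json

def iter_articles(path="tigre_wikipedia.jsonl.gz"):
    """Yield one article dict ({'id', 'title', 'text'}) per line of the corpus."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip any blank lines
                yield json.loads(line)
```

For example, once the file has been downloaded, `next(iter_articles())` returns the first record as a plain dict.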

---

## Data Provenance & Methodology

### **Data Generation Pipeline**

The data is generated from official Wikimedia XML dumps using a custom SAX-based streaming parser. Steps include:

1. **Source:** Downloads raw XML dumps.  
2. **Extraction:** Processes compressed files efficiently.  
3. **Filtering:** Removes non-main namespaces (User:, Talk:, Template:).  
4. **Output:** Serializes clean text into JSONL format.
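
To illustrate steps 2–3, the namespace filter can be sketched with Python's standard-library `xml.sax`. Note that `build_corpus.py` in this repository is the authoritative implementation; the `PageFilter` class below is only a minimal, hypothetical sketch assuming the standard MediaWiki export element names (`page`, `ns`, `title`, `text`).

```python
import xml.sax

class PageFilter(xml.sax.ContentHandler):
    """Collect main-namespace (<ns>0</ns>) pages from a MediaWiki XML dump."""
    def __init__(self):
        super().__init__()
        self.pages = []   # (title, text) pairs for kept pages
        self._tag = None  # element currently being read
        self._buf = {}    # fields of the page being parsed

    def startElement(self, name, attrs):
        if name == "page":
            self._buf = {"ns": "", "title": "", "text": ""}
        self._tag = name

    def characters(self, content):
        # SAX may deliver text in several chunks, so always concatenate.
        if self._tag in ("ns", "title", "text"):
            self._buf[self._tag] = self._buf.get(self._tag, "") + content

    def endElement(self, name):
        self._tag = None
        # Keep the page only if it belongs to the main namespace (ns == 0).
        if name == "page" and self._buf.get("ns", "").strip() == "0":
            self.pages.append((self._buf["title"], self._buf["text"]))

handler = PageFilter()
xml.sax.parseString(b"""<mediawiki>
  <page><title>Asmara</title><ns>0</ns>
    <revision><text>Sample article text.</text></revision></page>
  <page><title>Talk:Asmara</title><ns>1</ns>
    <revision><text>Discussion.</text></revision></page>
</mediawiki>""", handler)
print(handler.pages)  # only the ns == 0 page survives
```

Because SAX streams the document element by element, the same pattern scales to full dumps without loading the XML into memory.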

---

## Bias, Risks & Known Limitations

- **Community Bias:** Overrepresentation of diaspora-relevant topics.  
- **Domain Bias:** Overemphasis on formal encyclopedic style.  
- **Size Limitations:** As a newer Wikipedia (Dec 2024), content volume is still growing.

---

## How to Use

### **Loading via Hugging Face**

```python
from datasets import load_dataset

dataset = load_dataset("BeitTigreAI/tigre-data-wikipedia", split="train")

print(dataset[0]["title"])
print(dataset[0]["text"][:200])
```

---

## Reproducibility: Build Latest Version

### 1. Install Requirements

```bash
pip install requests beautifulsoup4
```

### 2. Run Script

```bash
python build_corpus.py
```

---

## Licensing

Licensed under **CC BY-SA 4.0**.

---

## Citation

```bibtex
@misc{tigre-wikipedia-corpus,
  author = {BeitTigreAI},
  title = {Tigre Wikipedia (tigwiki) Corpus},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {https://huggingface.co/datasets/BeitTigreAI/tigre-data-wikipedia}
}
```