# KenTrans: Kenyan Languages to Swahili Translation Dataset

## Dataset Structure

**KenTrans** is a parallel corpus between **Swahili** and three Kenyan languages (with multiple Luhya dialects). The dataset contains **11,795** sentence pairs translated **into Swahili**:

- **Dholuo → Swahili:** 4,222 pairs
- **Luhya → Swahili (total):** 7,573 pairs across three dialects
  - **Lumarachi (lch):** 2,475 pairs
  - **Lulogooli (llg):** 3,692 pairs
  - **Lubukusu (lbk):** 1,406 pairs

The dataset is provided in **Parquet format**, compatible with the Hugging Face `datasets` library version 4.0.0 and above.

Each example contains parallel text with the following fields:

- **source**: Original sentence in the source language
- **target**: Translation in Swahili

> Example
>
> ```python
> {
>     'source': 'OSIEPE MA KENDE',
>     'target': 'MARAFIKI WA DHATI'
> }
> ```
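As a quick sanity check, the per-language counts above can be verified after loading; a minimal sketch, assuming the four config names (`dho`, `lch`, `llg`, `lbk`) used in the Usage section below:

```python
from datasets import load_dataset

# Expected pair counts per language config, as listed above.
expected = {"dho": 4222, "lch": 2475, "llg": 3692, "lbk": 1406}

for cfg, n in expected.items():
    ds = load_dataset("Kencorpus/KenTrans", cfg)
    print(f"{cfg}: {len(ds['train'])} pairs (expected {n})")
```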
|
### Languages & Codes

---

## Dataset Format

The dataset is distributed as **Parquet files** for performance and compatibility (see the direct-read sketch after this list):

- **Format**: Apache Parquet (columnar storage)
- **Encoding**: UTF-8
- **File naming**: `{language}-train.parquet` (e.g., `dho-train.parquet`)
- **Structure**: Each row is one parallel sentence pair; all metadata is included in the Parquet schema
- **Compatibility**: Works with `datasets` 4.0.0+ without custom loading scripts
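For workflows outside 🤗 Datasets, the Parquet files can also be read directly; a minimal sketch with pandas, assuming a local copy of `dho-train.parquet` named per the convention above:

```python
import pandas as pd

# Read one language's Parquet file directly; adjust the path to your local copy.
df = pd.read_parquet("dho-train.parquet")

print(df.columns.tolist())  # e.g. ['id', 'source', 'target', ...]
print(df.loc[0, "source"], "->", df.loc[0, "target"])
```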
---

## Usage

### Loading with 🤗 Datasets

**Compatible with `datasets` 4.0.0+** (no `trust_remote_code` needed!)

```python
from datasets import load_dataset

# Load Dholuo → Swahili
dho = load_dataset("Kencorpus/KenTrans", "dho")

# Load Lubukusu → Swahili
lbk = load_dataset("Kencorpus/KenTrans", "lbk")

# Load Lumarachi → Swahili
lch = load_dataset("Kencorpus/KenTrans", "lch")

# Load Lulogooli → Swahili
llg = load_dataset("Kencorpus/KenTrans", "llg")

# Access the data
print(dho['train'][0])
# Output: {'id': 'dho_dho_combined.txt_0', 'source': '6AM DALA FM NEWS...', 'target': 'VIDOKEZI VYA HABARI...', ...}
```
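Once loaded, each config exposes a standard `train` split; a minimal sketch for collecting the parallel pairs (field names as in the record shown above), e.g. to feed an MT training pipeline:

```python
# Collect (source, target) pairs from the Dholuo config's train split.
pairs = [(ex["source"], ex["target"]) for ex in dho["train"]]

print(f"{len(pairs)} Dholuo→Swahili pairs")  # expected: 4,222
print(pairs[0])
```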
## Sources