---
language:
- sw
license: cc-by-4.0
task_categories:
- question-answering
tags:
- swahili
- kiswahili
- low-resource-languages
- african-languages
- extractive-qa
- reading-comprehension
pretty_name: KenSwQuAD
size_categories:
- 1K<n<10K
---
# KenSwQuAD: A Question Answering Dataset for Swahili
## Dataset Description
**KenSwQuAD** (Kenyan Swahili Question Answering Dataset) is a reading comprehension and question answering dataset for **Swahili**, a low-resource African language. The dataset contains **7,506 question-answer pairs** derived from **1,441 unique Swahili contexts** covering diverse topics including agriculture, education, technology, governance, and daily life in Kenya.
This dataset is designed for training and evaluating extractive question answering models on Swahili text.
## Dataset Statistics
| Metric | Count |
|--------|-------|
| Total QA Pairs | 7,506 |
| Unique Contexts | 1,441 |
| Avg QA Pairs per Context | 5.21 |
| Avg Question Length | 41 characters |
| Avg Answer Length | 14 characters |
| Avg Context Length | 2,702 characters |
## Dataset Format
The dataset is distributed as **Parquet files** for optimal performance and compatibility:
- **Format**: Apache Parquet (columnar storage)
- **Encoding**: UTF-8
- **Compatibility**: Works with `datasets` 4.0.0+ without custom loading scripts
---
## Data Fields
Each record in the dataset contains:
- **id**: `string` - Unique identifier for the QA pair (format: `{story_id}_{qa_index}`)
- **story_id**: `string` - Identifier for the source context/story (e.g., `3830_swa`)
- **context**: `string` - The passage/story from which questions are derived
- **question**: `string` - The question in Swahili
- **answer**: `string` - The answer text
- **paragraph_id**: `string` - Optional paragraph/position indicator
### Example Record
```python
{
    'id': '3830_swa_0',
    'story_id': '3830_swa',
    'context': 'MANUFAA YA KILIMO KATIKA UIMARISHAJI WA UCHUMI WA KENYA Kilimo katika nchi yetu ya Kenya ni muhimu...',
    'question': 'Ni katika nchi ipi kilimo ni muhimu',
    'answer': 'Kenya',
    'paragraph_id': 'x'
}
```
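Since the `id` field concatenates the story id and the QA index with an underscore, the two parts can be recovered with a right split. A small sketch (the helper name is illustrative, not part of the dataset tooling):

```python
def split_qa_id(qa_id: str) -> tuple:
    """Split a KenSwQuAD id like '3830_swa_0' into (story_id, qa_index).

    A plain split('_') would break story ids such as '3830_swa', so we
    split from the right exactly once.
    """
    story_id, qa_index = qa_id.rsplit('_', 1)
    return story_id, int(qa_index)

print(split_qa_id('3830_swa_0'))  # ('3830_swa', 0)
```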
---
## Usage
### Loading with 🤗 Datasets
**Compatible with datasets 4.0.0+** (No `trust_remote_code` needed!)
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("Kencorpus/KenSwQuAD")
# Access the training split
train = dataset['train']
# View first example
print(train[0])
```
### Example: Training a QA Model
Note that for extractive QA, `Trainer` needs `start_positions` and `end_positions` labels, which are not part of the raw records. The sketch below derives them from the answer string via the tokenizer's offset mapping (a common approach, not the dataset authors' own training setup):

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForQuestionAnswering,
                          TrainingArguments, Trainer)

# Load dataset
dataset = load_dataset("Kencorpus/KenSwQuAD")

# Load a multilingual model (supports Swahili)
model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

def tokenize_function(examples):
    # Tokenize question/context pairs; offsets let us map the answer
    # string back to token positions.
    tokenized = tokenizer(
        examples['question'],
        examples['context'],
        truncation='only_second',  # never truncate the question
        padding='max_length',
        max_length=384,
        return_offsets_mapping=True,
    )
    start_positions, end_positions = [], []
    for i, (context, answer) in enumerate(zip(examples['context'],
                                              examples['answer'])):
        # Use the first occurrence of the answer string in the context
        char_start = context.find(answer)
        char_end = char_start + len(answer)
        sequence_ids = tokenized.sequence_ids(i)
        start_tok = end_tok = 0  # default if the span was lost to truncation
        if char_start != -1:
            for idx, (s, e) in enumerate(tokenized['offset_mapping'][i]):
                if sequence_ids[idx] != 1:  # skip question and special tokens
                    continue
                if s <= char_start < e:
                    start_tok = idx
                if s < char_end <= e:
                    end_tok = idx
        start_positions.append(start_tok)
        end_positions.append(end_tok)
    tokenized['start_positions'] = start_positions
    tokenized['end_positions'] = end_positions
    tokenized.pop('offset_mapping')  # not a model input
    return tokenized

# Tokenize dataset, dropping the raw string columns
tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=dataset['train'].column_names,
)

# Train model (the Hub repo exposes only a train split, so no eval here)
training_args = TrainingArguments(
    output_dir="./kenswquad-model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset['train'],
)
trainer.train()
```
### Example: Exploring the Data
```python
from datasets import load_dataset
import pandas as pd
# Load dataset
dataset = load_dataset("Kencorpus/KenSwQuAD")
df = pd.DataFrame(dataset['train'])
# Count QA pairs per story
qa_per_story = df.groupby('story_id').size().describe()
print("QA pairs per story distribution:")
print(qa_per_story)
# View sample context
sample = df[df['story_id'] == '3830_swa'].iloc[0]
print(f"\nContext: {sample['context'][:200]}...")
print(f"\nQuestion: {sample['question']}")
print(f"Answer: {sample['answer']}")
```
---
## Dataset Topics
The contexts cover a wide variety of topics relevant to Kenyan society:
- 🌾 **Agriculture & Farming** - Crop cultivation, livestock, economic impact
- 🏫 **Education** - Schools, technology in education, student life
- 💻 **Technology** - Digital tools, internet, communication
- 🏛️ **Governance & Politics** - Leadership, government policies, elections
- 💰 **Economy & Business** - Trade, employment, economic development
- 🏥 **Health** - COVID-19, medical services, public health
- 🌍 **Society & Culture** - Daily life, traditions, social issues
---
## Data Collection
The dataset was created by:
1. Collecting Swahili texts from various sources (articles, social media, essays)
2. Annotating question-answer pairs manually, with native Swahili speakers as annotators
3. Applying quality control and validation
**Source Contexts:**
- 2,585 texts from general sources (`collected_data_text_swa_final_2585_out_of_2585`)
- 324 texts from Twitter/social media (`collected_data_text_swa_tweets_324_out_of_324`)
---
## Intended Uses
### Primary Uses
- Training extractive question answering models for Swahili
- Evaluating reading comprehension capabilities
- Transfer learning for low-resource African languages
- Multilingual model evaluation
### Out-of-Scope Uses
- Generative question answering (dataset is designed for extractive QA)
- Tasks requiring answers not present in the context
- Languages other than Swahili
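For the evaluation use case, extractive QA is conventionally scored with exact match (EM) and token-level F1 against the gold answer. A minimal sketch of both metrics (whitespace tokenization and lowercasing only; SQuAD-style scorers additionally normalize punctuation and articles):

```python
from collections import Counter

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if prediction and gold are identical after trimming/lowercasing."""
    return float(prediction.strip().lower() == gold.strip().lower())

def token_f1(prediction: str, gold: str) -> float:
    """Harmonic mean of token precision and recall between prediction and gold."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match('Kenya', 'Kenya'))       # 1.0
print(token_f1('nchi ya Kenya', 'Kenya'))  # 0.5
```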
---
## Limitations
- **Extractive nature**: Answers are expected to be spans within the context
- **Domain coverage**: The contexts are diverse but may not cover all domains of Swahili text
- **Answer length**: Most answers are short (avg. 14 characters)
- **Regional variation**: The text is primarily Kenyan Swahili and may not represent all Swahili dialects
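Because answers are expected to be spans of the context, a quick sanity check over any split is to verify that each answer string occurs literally in its context. A sketch using the example record from this card (on the real data, apply the same check to every row of the loaded split):

```python
def answer_in_context(example: dict) -> bool:
    """True if the answer string is a literal substring of the context."""
    return example['answer'] in example['context']

record = {
    'context': 'MANUFAA YA KILIMO KATIKA UIMARISHAJI WA UCHUMI WA KENYA '
               'Kilimo katika nchi yetu ya Kenya ni muhimu...',
    'question': 'Ni katika nchi ipi kilimo ni muhimu',
    'answer': 'Kenya',
}
print(answer_in_context(record))  # True
```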
---
## Dataset Curators
- **Barack Wanjawa** (University of Nairobi)
- **Lilian D.A. Wanzare** (Maseno University)
- **Florence Indede** (Maseno University)
- **Owen McOnyango** (Maseno University)
- **Lawrence Muchemi** (University of Nairobi)
- **Edward Ombui** (Africa Nazarene University)
---
## Citation
If you use this dataset in your research, please cite:
```bibtex
@article{wanjawa2022kencorpus,
title={Kencorpus: A Kenyan Language Corpus of Swahili, Dholuo and Luhya for Natural Language Processing Tasks},
author={Wanjawa, Barack W. and Wanzare, Lilian D. and Indede, Florence and McOnyango, Owen and Ombui, Edward and Muchemi, Lawrence},
journal={arXiv preprint arXiv:2208.12081},
year={2022}
}
```
---
## Links
- **Research Paper**: https://arxiv.org/abs/2208.12081
- **Dataverse**: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/OTL0LM
- **ResearchGate**: https://www.researchgate.net/publication/371767223
- **Semantic Scholar**: https://www.semanticscholar.org/paper/8cf70c5cd8b195ed7a399ea2cdc0b0e8f08c61ce
---
## License
This dataset is licensed under **CC-BY-4.0**.
---
## Acknowledgments
This dataset is part of the **Kencorpus** project, which aims to create NLP resources for low-resource Kenyan languages. We thank all annotators and contributors who made this dataset possible.