|
|
--- |
|
|
viewer: false |
|
|
pretty_name: "Tigrinya-SQuAD: Machine-Translated Training Dataset" |
|
|
language: |
|
|
- ti |
|
|
multilinguality: |
|
|
- monolingual |
|
|
task_categories: |
|
|
- question-answering |
|
|
size_categories: |
|
|
- 10K<n<100K |
|
|
dataset_size: ~10MB |
|
|
download_size: ~6MB |
|
|
license: cc-by-sa-4.0 |
|
|
tags: |
|
|
- tigrinya |
|
|
- question-answering |
|
|
- mrc |
|
|
- reading-comprehension |
|
|
- low-resource |
|
|
- african-languages |
|
|
- machine-translation |
|
|
- silver-standard |
|
|
splits: |
|
|
- name: train |
|
|
num_examples: 46737 |
|
|
configs: |
|
|
- config_name: default |
|
|
data_files: |
|
|
- split: train |
|
|
path: "train.parquet" |
|
|
--- |
|
|
|
|
|
# Tigrinya-SQuAD: Machine-Translated Training Dataset |
|
|
|
|
|
Tigrinya-SQuAD is a machine-translated and filtered version of the English SQuAD 1.1 training set, automatically converted to Tigrinya to support training question-answering models in low-resource settings.
|
|
|
|
|
This silver dataset serves as training data for Tigrinya question-answering systems. **For evaluation and benchmarking, please use the gold-standard [TiQuAD](https://huggingface.co/datasets/fgaim/tiquad) dataset, which contains human-annotated validation and test sets.** |
|
|
|
|
|
**Published with the paper:** [Question-Answering in a Low-resourced Language: Benchmark Dataset and Models for Tigrinya](https://aclanthology.org/2023.acl-long.661/) (ACL 2023) |
|
|
|
|
|
**Related repositories:** |
|
|
|
|
|
- [TiQuAD (gold dataset)](https://huggingface.co/datasets/fgaim/tiquad) |
|
|
- The paper's [GitHub repository](https://github.com/fgaim/TiQuAD) |
|
|
|
|
|
## Dataset Overview |
|
|
|
|
|
Tigrinya-SQuAD is designed as training data for extractive question answering in Tigrinya, a low-resource Semitic language primarily spoken in Eritrea and Ethiopia. The dataset features: |
|
|
|
|
|
- **Source data**: The English SQuAD 1.1 training set, which is based on Wikipedia articles
|
|
- **Machine-translated**: Automatically translated from English SQuAD 1.1 using neural machine translation |
|
|
- **Filtered**: Post-processed with heuristic filtering to discard low-quality samples
|
|
- **Training-only**: Contains only a training split; use TiQuAD for validation and testing
|
|
- **SQuAD format**: Maintains compatibility with standard QA frameworks |
|
|
- **Not human-verified**: Suitable for training, but not for final evaluation
|
|
|
|
|
| **Split** | **Articles** | **Paragraphs** | **Questions** | **Answers** | |
|
|
|-----------|--------------|----------------|---------------|-------------| |
|
|
| Train | 442 | 17,391 | 46,737 | 46,737 | |
|
|
|
|
|
## How to Load Tigrinya-SQuAD |
|
|
|
|
|
Install the `datasets` library by running `pip install -U datasets` in a terminal.
|
|
|
|
|
> Make sure the latest `datasets` library is installed, as older versions may not load the data properly.
|
|
|
|
|
Then download and load the dataset in Python, as follows:
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
# Load the dataset |
|
|
tigrinya_squad = load_dataset("fgaim/tigrinya-squad") |
|
|
print(tigrinya_squad) |
|
|
``` |
|
|
|
|
|
That will print the dataset features: |
|
|
|
|
|
```python |
|
|
DatasetDict({ |
|
|
train: Dataset({ |
|
|
features: ['id', 'question', 'context', 'answers', 'article_title', 'context_id'], |
|
|
num_rows: 46737 |
|
|
}) |
|
|
}) |
|
|
``` |
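
Individual examples can then be accessed by split and index:

```python
# Access the first training example and print its fields
sample = tigrinya_squad["train"][0]
print(sample["question"])
print(sample["answers"])
```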
|
|
|
|
|
### Data Fields |
|
|
|
|
|
- **`id`**: Unique identifier for each question |
|
|
- **`question`**: The question to be answered (in Tigrinya) |
|
|
- **`context`**: The paragraph containing the answer (in Tigrinya) |
|
|
- **`answers`**: The candidate answers for the question, each entry containing:


  - `text`: The answer string (the training data provides one answer per question)


  - `answer_start`: The character offset of the answer string within the context
|
|
- **`article_title`**: Title of the source article |
|
|
- **`context_id`**: Unique identifier of the context in the data split |
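
Since `answer_start` is a character offset into `context`, answer spans can be verified programmatically. The sketch below assumes the standard Hugging Face SQuAD-style schema, where `answers` is materialized as parallel `text` and `answer_start` lists; if your loaded copy exposes a list of dictionaries instead, iterate over the entries directly:

```python
from datasets import load_dataset

ds = load_dataset("fgaim/tigrinya-squad", split="train")

example = ds[0]
context = example["context"]
answers = example["answers"]

# Each answer string should occur in the context at its stated offset.
# Schema assumption: `answers` is a dict of parallel lists, as in the
# standard SQuAD schema on the Hub; adjust the indexing if it differs.
for text, start in zip(answers["text"], answers["answer_start"]):
    span = context[start : start + len(text)]
    assert span == text, f"Offset mismatch: {span!r} != {text!r}"
```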
|
|
|
|
|
## Evaluation and Benchmarking |
|
|
|
|
|
This dataset contains only training data. For proper evaluation of Tigrinya question-answering models, use [TiQuAD](https://huggingface.co/datasets/fgaim/tiquad), which provides multi-reference, human-annotated validation and test splits. As reported in the paper, the two datasets can be combined during training for the best results.
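
A minimal sketch of that recipe follows. The TiQuAD split name used here is an assumption, and `concatenate_datasets` requires both datasets to share identical features, so consult the TiQuAD dataset card and align the schemas if they differ:

```python
from datasets import load_dataset, concatenate_datasets

# Silver-standard training data (this dataset)
silver = load_dataset("fgaim/tigrinya-squad", split="train")

# Gold-standard TiQuAD training data; the split name and schema
# compatibility are assumptions -- check the TiQuAD dataset card.
gold = load_dataset("fgaim/tiquad", split="train")

# Combine silver and gold training data, as explored in the paper.
# If the features differ, drop or rename columns before this call.
combined = concatenate_datasets([silver, gold]).shuffle(seed=42)
print(f"Combined training examples: {len(combined)}")
```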
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use this dataset in your work, please cite the original TiQuAD paper: |
|
|
|
|
|
```bibtex |
|
|
@inproceedings{gaim-etal-2023-tiquad, |
|
|
title = "Question-Answering in a Low-resourced Language: Benchmark Dataset and Models for {T}igrinya", |
|
|
author = "Gaim, Fitsum and Yang, Wonsuk and Park, Hancheol and Park, Jong C.",
|
|
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", |
|
|
month = jul, |
|
|
year = "2023", |
|
|
address = "Toronto, Canada", |
|
|
publisher = "Association for Computational Linguistics", |
|
|
url = "https://aclanthology.org/2023.acl-long.661", |
|
|
pages = "11857--11870", |
|
|
} |
|
|
``` |
|
|
|
|
|
## Data Quality and Limitations |
|
|
|
|
|
As a machine-translated dataset, Tigrinya-SQuAD has inherent limitations: |
|
|
|
|
|
- **Translation errors**: Some questions/answers may have translation artifacts |
|
|
- **No cultural adaptation**: Contexts originate from English Wikipedia and may not align with Tigrinya cultural references
|
|
- **Training-only use**: Not suitable for model evaluation or human performance comparison
|
|
|
|
|
If you identify any issues with the dataset, please contact us at <fitsum.gaim@kaist.ac.kr>. |
|
|
|
|
|
## Acknowledgments |
|
|
|
|
|
This dataset builds upon the foundational work of the Stanford Question Answering Dataset (SQuAD) and the human-annotated TiQuAD dataset. We thank the original SQuAD creators for making their data freely available. |
|
|
|
|
|
## License |
|
|
|
|
|
This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). |
|
|
|
|
|
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://licensebuttons.net/l/by-sa/4.0/88x31.png" /></a> |
|
|
|