pretty_name: IndicSQuAD
size_categories:
- 10K<n<100K
---

# IndicSQuAD Dataset

## Dataset Description

IndicSQuAD is a comprehensive **multilingual extractive Question Answering (QA) dataset** covering ten major Indic languages: **Hindi, Bengali, Tamil, Telugu, Marathi, Gujarati, Urdu, Kannada, Oriya, and Malayalam**. It is systematically derived from the popular English SQuAD (Stanford Question Answering Dataset).

The rapid progress in QA systems has predominantly benefited high-resource languages, leaving Indic languages significantly underrepresented. IndicSQuAD aims to bridge this gap by providing a robust foundation for model development in these languages.

The dataset was created by adapting and extending translation techniques, building upon previous work with MahaSQuAD for Marathi. The methodology focuses on maintaining high linguistic fidelity and accurate answer-span alignment across diverse languages.

IndicSQuAD comprises extensive training, validation, and test sets for each language, mirroring the structure of the original SQuAD dataset. Named entities and numerical values are transliterated into their respective scripts to maintain consistency.

More details about the dataset can be found in the [paper](https://arxiv.org/abs/2505.03688).

## Languages

The dataset covers the following 10 Indic languages:

* Hindi (`hi`)
* Bengali (`bn`)
* Tamil (`ta`)
* Telugu (`te`)
* Marathi (`mr`)
* Gujarati (`gu`)
* Punjabi (`pa`)
* Kannada (`kn`)
* Oriya (`or`)
* Malayalam (`ml`)
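
For programmatic use, the language names and codes above can be collected in a small mapping — a sketch only; the exact per-language subset or config names on the Hub may differ:

```python
# Mapping of IndicSQuAD languages to their ISO 639-1 codes, as listed above.
# Useful for iterating over per-language subsets (config names are assumed).
INDICSQUAD_LANGUAGES = {
    "Hindi": "hi",
    "Bengali": "bn",
    "Tamil": "ta",
    "Telugu": "te",
    "Marathi": "mr",
    "Gujarati": "gu",
    "Punjabi": "pa",
    "Kannada": "kn",
    "Oriya": "or",
    "Malayalam": "ml",
}

for name, code in INDICSQUAD_LANGUAGES.items():
    print(f"{code}: {name}")
```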

## Dataset Structure

The dataset structure is similar to the original SQuAD dataset, consisting of contexts, questions, and corresponding answer spans. Each example includes:

* `id`: Unique identifier for the question-answer pair.
* `title`: The title of the Wikipedia article from which the context is extracted.
* `context`: The passage of text containing the answer.
* `question`: The question asked about the context.
* `answers`: A dictionary containing:
  * `text`: A list of possible answer spans from the context.
  * `answer_start`: A list of starting character indices for each answer span within the context.
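
The fields above can be illustrated with a toy record (hypothetical values, not drawn from the dataset). In SQuAD-format data, each `answer_start` index locates its answer span directly in the context by character offset:

```python
# A toy SQuAD-format record (hypothetical values, for illustration only).
example = {
    "id": "0001",
    "title": "Sample_Article",
    "context": "IndicSQuAD covers ten Indic languages and was released in 2025.",
    "question": "How many Indic languages does IndicSQuAD cover?",
    "answers": {
        "text": ["ten"],
        "answer_start": [18],
    },
}

# Verify that each answer_start index points at its answer span in the context.
for text, start in zip(example["answers"]["text"],
                       example["answers"]["answer_start"]):
    assert example["context"][start:start + len(text)] == text
```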

## Citing

If you use the IndicSQuAD dataset, please cite the following paper:

```
@article{endait2025indicsquad,
  title={IndicSQuAD: A Comprehensive Multilingual Question Answering Dataset for Indic Languages},
  author={Endait, Sharvi and Ghatage, Ruturaj and Kulkarni, Aditya and Patil, Rajlaxmi and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2505.03688},
  year={2025}
}
```