---
license: cc-by-4.0
task_categories:
  - question-answering
language:
  - bn
  - gu
  - hi
  - kn
  - ml
  - mr
  - or
  - pa
  - ta
  - te
pretty_name: IndicSQuAD
size_categories:
  - 10K<n<100K
---

# IndicSQuAD Dataset

## Dataset Description

IndicSQuAD is a comprehensive multilingual extractive question answering (QA) dataset covering ten major Indic languages: Hindi, Bengali, Tamil, Telugu, Marathi, Gujarati, Punjabi, Kannada, Oriya, and Malayalam. It is systematically derived from the popular English SQuAD (Stanford Question Answering Dataset).

The rapid progress in QA systems has predominantly benefited high-resource languages, leaving Indic languages significantly underrepresented. IndicSQuAD aims to bridge this gap by providing a robust foundation for model development in these languages.

The dataset was created by adapting and extending translation techniques, building upon previous work with MahaSQuAD for Marathi. The methodology focuses on maintaining high linguistic fidelity and accurate answer-span alignment across diverse languages.

IndicSQuAD comprises extensive training, validation, and test sets for each language, mirroring the structure of the original SQuAD dataset. Named entities and numerical values are transliterated into their respective scripts to maintain consistency.

More details about the dataset can be found in the [IndicSQuAD paper](https://arxiv.org/abs/2505.03688). The exact data curation approach is outlined in the [MahaSQuAD paper](https://arxiv.org/abs/2404.13364).

## Languages

The dataset covers the following 10 Indic languages:

- Hindi (hi)
- Bengali (bn)
- Tamil (ta)
- Telugu (te)
- Marathi (mr)
- Gujarati (gu)
- Punjabi (pa)
- Kannada (kn)
- Oriya (or)
- Malayalam (ml)

## Dataset Structure

The dataset follows the structure of the original SQuAD dataset, consisting of contexts, questions, and corresponding answer spans. Each example includes:

- `id`: Unique identifier for the question-answer pair.
- `title`: The title of the Wikipedia article from which the context is extracted.
- `context`: The passage of text containing the answer.
- `question`: The question asked about the context.
- `answers`: A dictionary containing:
  - `text`: A list of possible answer spans from the context.
  - `answer_start`: A list of starting character indices for each answer span within the context.
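The schema above can be illustrated with a small sketch. The record below is made up for demonstration (its text is not taken from IndicSQuAD), but it shows how `answer_start` relates each answer span to the context:

```python
# Illustrative record in the SQuAD-style schema described above.
# The field values are invented for demonstration, not taken from IndicSQuAD.
record = {
    "id": "example-0001",
    "title": "Sample Article",
    "context": "The Godavari is the second longest river in India.",
    "question": "Which river is the second longest in India?",
    "answers": {
        "text": ["The Godavari"],
        "answer_start": [0],
    },
}

# Each answer span can be recovered from the context using its start index:
# context[start : start + len(text)] equals the answer text.
for text, start in zip(record["answers"]["text"], record["answers"]["answer_start"]):
    span = record["context"][start : start + len(text)]
    assert span == text

# Loading the dataset itself is done with the Hugging Face `datasets`
# library (configuration and split names may differ; check the dataset card):
# from datasets import load_dataset
# ds = load_dataset("l3cube-pune/indic-squad")
```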

## Citing

If you use the IndicSQuAD dataset, please cite the following paper:

```bibtex
@article{endait2025indicsquad,
  title={IndicSQuAD: A Comprehensive Multilingual Question Answering Dataset for Indic Languages},
  author={Endait, Sharvi and Ghatage, Ruturaj and Kulkarni, Aditya and Patil, Rajlaxmi and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2505.03688},
  year={2025}
}

@article{ghatage2024mahasquad,
  title={MahaSQuAD: Bridging Linguistic Divides in Marathi Question-Answering},
  author={Ghatage, Ruturaj and Kulkarni, Aditya and Patil, Rajlaxmi and Endait, Sharvi and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2404.13364},
  year={2024}
}
```

## IndicSQuAD BERT models