teami12 committed on
Commit bbc6e80 · verified · 1 Parent(s): 8926358

Update README.md

Files changed (1):
  1. README.md +164 -6

README.md CHANGED
@@ -1,10 +1,168 @@
  ---
- license: mit
- task_categories:
- - question-answering
  language:
  - sr
- pretty_name: SQuaD Serbian
  size_categories:
- - 10K<n<100K
- ---
---
language:
- sr
license: cc-by-sa-4.0
pretty_name: "Serbian SQuAD"
size_categories:
- 100K<n<1M
source_datasets:
- rajpurkar/squad
task_categories:
- question-answering
task_ids:
- extractive-qa
---

# Dataset Card for Serbian SQuAD

## Dataset Description

- **Repository:** [Hugging Face Dataset](https://huggingface.co/datasets/smartcat/squad_sr)
- **Point of Contact:** [SmartCat]
- **Original Data:** https://huggingface.co/datasets/rajpurkar/squad
- **Original data in Serbian:** https://www.kaggle.com/datasets/aleksacvetanovic/squad-sr

### Dataset Summary

This dataset is an automatic Serbian translation of the Stanford Question Answering Dataset (SQuAD) 1.1. The original SQuAD, developed at Stanford, is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles; the answer to every question is a segment of text (a span) from the corresponding reading passage. With more than 87K samples in both Cyrillic and Latin script, this is the largest Serbian QA dataset.

The translation was performed automatically using the GPT-3.5-turbo-0125 model, preserving the structure and format of the original dataset while providing the content in Serbian.

### Supported Tasks and Leaderboards

- **Extractive Question Answering**: The dataset can be used to train and evaluate models for extractive question answering in Serbian.

### Languages

The dataset is in Serbian (sr). The original dataset was in English (en).

## Dataset Structure

### Data Instances

Each instance in the dataset contains:
- A passage of text from a Wikipedia article (context)
- A question about that passage
- The answer to the question, which is a span of text from the passage

### Data Fields

- `id`: A string identifier for the example
- `title`: The title of the Wikipedia article
- `context`: The passage of text (a paragraph from Wikipedia)
- `question`: The question posed about the context
- `answers`: The answer to the question, given as the answer text and its character start offset in the context

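Concretely, an instance in this schema looks roughly like the following. This is a hypothetical record with invented values, shown only to illustrate the extractive-QA format; it is not taken from the dataset:

```python
# Illustrative (hypothetical) record in the SQuAD-style schema.
example = {
    "id": "0001",
    "title": "Nikola Tesla",
    "context": "Nikola Tesla je bio srpski pronalazač.",
    "question": "Ko je bio Nikola Tesla?",
    "answers": {"text": ["srpski pronalazač"], "answer_start": [20]},
}

# In extractive QA the answer is a literal span of the context, so the
# character offset in answer_start must locate the answer text exactly.
start = example["answers"]["answer_start"][0]
text = example["answers"]["text"][0]
assert example["context"][start:start + len(text)] == text
```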
## Dataset Creation

### Curation Rationale

This dataset was created to provide a resource for Serbian question answering tasks, leveraging the high-quality structure and content of the original SQuAD dataset.

### Source Data

#### Initial Data Collection and Normalization

The source data is the SQuAD 1.1 dataset, which contains 100,000+ question-answer pairs on 500+ articles. The original data was retrieved from: https://huggingface.co/datasets/rajpurkar/squad

#### Who are the source language producers?

The original questions were posed by crowdworkers on English Wikipedia articles, and the answers were extracted from those articles.

### Annotations

#### Annotation process

The original English dataset was automatically translated to Serbian using the GPT-3.5-turbo-0125 model.

#### Who are the annotators?

The translation was performed automatically by an AI model, without human intervention. The original annotations were created by crowdworkers.

### Personal and Sensitive Information

The dataset is based on Wikipedia articles and should not contain personal or sensitive information. However, as it is an automatic translation of web content, users should be aware of potential inaccuracies or unintended content.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset contributes to the development of question answering systems for the Serbian language, potentially improving access to information and language technologies for Serbian speakers.

### Discussion of Biases

The dataset may inherit biases present in the original SQuAD dataset, including biases in Wikipedia content and in the questions posed by crowdworkers. Additionally, the automatic translation process may introduce its own biases or errors.

### Other Known Limitations

- The Serbian translations have not been manually verified and may contain errors.
- The automatic translation process may not capture nuances or context-specific meanings from the original text.
- The dataset maintains the same limitations as the original SQuAD 1.1, such as the potential for artificial questions that might not reflect real-world information-seeking behavior.

## Additional Information

### Dataset Curators

[Your Name or Organization]

### Licensing Information

This dataset is licensed under CC-BY-SA-4.0, following the licensing of the original SQuAD dataset.

### Citation Information

If you use this dataset, please cite both the original SQuAD dataset and this Serbian translation:

```
@article{rajpurkar2016squad,
  title={SQuAD: 100,000+ Questions for Machine Comprehension of Text},
  author={Rajpurkar, Pranav and Zhang, Jian and Lopyrev, Konstantin and Liang, Percy},
  journal={arXiv preprint arXiv:1606.05250},
  year={2016}
}

@misc{cvetanović2024syntheticdatasetcreationfinetuning,
  title={Synthetic Dataset Creation and Fine-Tuning of Transformer Models for Question Answering in Serbian},
  author={Aleksa Cvetanović and Predrag Tadić},
  year={2024},
  eprint={2404.08617},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2404.08617}
}

@misc{serbian-squad,
  title={Serbian SQuAD Dataset},
  author={[Your Name]},
  year={2024},
  howpublished={\url{https://huggingface.co/datasets/your-username/serbian-squad}}
}
```

### Contributions

Thanks to Stanford for creating the original SQuAD dataset, and to Aleksa Cvetanović and Predrag Tadić for translating SQuAD to Serbian and uploading the dataset to Kaggle.

## Loading the Dataset

Here is a Python example that loads the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Download and cache the dataset from the Hugging Face Hub
dataset = load_dataset("smartcat/squad_sr")

# Show the available splits and their sizes
print(dataset)
```
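Since the samples come in both Cyrillic and Latin script, it can be useful to normalize everything to one script before training or evaluation. Below is a minimal transliteration sketch; the mapping covers the standard Serbian Cyrillic alphabet, and the helper function is ours, not part of the dataset or the `datasets` library:

```python
# Minimal Serbian Cyrillic -> Latin transliteration (illustrative helper).
CYR_TO_LAT = {
    "А": "A", "Б": "B", "В": "V", "Г": "G", "Д": "D", "Ђ": "Đ", "Е": "E",
    "Ж": "Ž", "З": "Z", "И": "I", "Ј": "J", "К": "K", "Л": "L", "Љ": "Lj",
    "М": "M", "Н": "N", "Њ": "Nj", "О": "O", "П": "P", "Р": "R", "С": "S",
    "Т": "T", "Ћ": "Ć", "У": "U", "Ф": "F", "Х": "H", "Ц": "C", "Ч": "Č",
    "Џ": "Dž", "Ш": "Š",
}
# Derive the lowercase pairs from the uppercase mapping
CYR_TO_LAT.update({c.lower(): l.lower() for c, l in CYR_TO_LAT.items()})

def cyrillic_to_latin(text: str) -> str:
    """Transliterate Serbian Cyrillic to Latin, leaving other characters as-is."""
    return "".join(CYR_TO_LAT.get(ch, ch) for ch in text)

print(cyrillic_to_latin("Никола Тесла"))  # -> Nikola Tesla
```

A transliterated copy of `context`, `question`, and `answers` can then be produced with `dataset.map(...)`; note that transliteration changes character offsets (digraphs like `Љ` → `Lj` expand to two characters), so `answer_start` values would need to be recomputed afterwards.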