The **SQuAD-sr-md** dataset contains around 7k examples and represents a manually refined subset of the original SQuAD-sr dataset. While SQuAD-sr is a significant resource for Serbian QA research, it was created through a translation-based approach using an adapted Translate–Align–Retrieve method, which introduced various linguistic inconsistencies. The manual revision process, although time-consuming, substantially improved the grammatical correctness, terminological consistency, and overall reliability of the dataset, making it more suitable for training extractive QA models. Evaluation conducted on several Serbian base encoder models highlighted the TeslaXLM base model as a particularly promising candidate for future extractive QA training.
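For reference, extractive QA examples in the SQuAD format (which SQuAD-sr inherits from the original SQuAD) pair a context passage with a question and a character-indexed answer span. The field names below follow the original SQuAD schema; the Serbian text and offsets are an invented illustration, not an actual record from the dataset:

```python
# Hypothetical illustration of a SQuAD-format example; the text and values
# are invented for demonstration, not taken from the dataset itself.
example = {
    "context": "Nikola Tesla je rođen 1856. godine u Smiljanu.",
    "question": "Koje godine je rođen Nikola Tesla?",
    "answers": {
        "text": ["1856"],
        "answer_start": [22],  # character offset of the answer in the context
    },
}

# An extractive QA model predicts a span of the context; the gold span
# can be recovered from the character offset:
start = example["answers"]["answer_start"][0]
answer = example["answers"]["text"][0]
assert example["context"][start : start + len(answer)] == answer
```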

### SQuAD-sr subset

| Base model   | EM (%) | F1 (%)    |
|--------------|--------|-----------|
| BERTić       | 71     | 81.29     |
| Jerteh-81    | 48     | 57.33     |
| Jerteh-355   | 64     | 71.64     |
| XLM-r-BERTić | 79     | 87.94     |
| XLMali       | 68     | 78.74     |
| TeslaXLM     | **80** | **89.10** |
| **Average**  | 68.83  | 77.75     |

### SQuAD-sr-md

| Base model   | EM (%)      | F1 (%)      |
|--------------|-------------|-------------|
| BERTić       | 71          | 80.61 ↓     |
| Jerteh-81    | 52 ↑        | 61.55 ↑     |
| Jerteh-355   | 60 ↓        | 73.01 ↑     |
| XLM-r-BERTić | 77 ↓        | 88.49 ↑     |
| XLMali       | 75 ↑        | 83.00 ↑     |
| TeslaXLM     | **83 ↑**    | **92.23 ↑** |
| **Average**  | **69.67 ↑** | **79.81 ↑** |

↑ indicates improvement compared to the original *SQuAD-sr* subset, while ↓ indicates a decrease.

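The EM and F1 figures above follow the usual extractive-QA evaluation convention: EM checks whether the normalized predicted span matches a gold answer exactly, while F1 measures token overlap between the two. A minimal sketch of how such scores are typically computed is shown below (the article-stripping step comes from the English SQuAD evaluation script and is included only for completeness; a Serbian evaluation would adapt or drop it, since Serbian has no articles):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and English articles, collapse
    whitespace (the standard SQuAD answer normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)  # English-specific step
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer span."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A partially correct span scores 0 on EM but non-zero on F1:
print(exact_match("Nikola Tesla", "nikola tesla"))        # 1.0
print(f1_score("inventor Nikola Tesla", "Nikola Tesla"))  # 0.8
```

In practice the score for each question is taken as the maximum over all gold answers, and the table values are averages over the evaluation set.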
The **SerbianQA-Gen** dataset consists of approximately 74k automatically generated QA pairs produced with the assistance of large language models (LLMs). The dataset covers a wide range of topics and sources and is currently undergoing additional manual revision to improve grammatical accuracy, fluency, and naturalness.
Together, these datasets contribute to addressing the lack of large-scale QA resources for the Serbian language. They enable the training and evaluation of language models capable of deeper semantic understanding of Serbian, a language characterized by rich morphology and complex syntax. By making these datasets publicly available, we aim to support further research in Serbian NLP and encourage the development of more robust QA and conversational AI systems for the language.