---
license: mit
---

# RAGU_Benchmarks

## MultiQ Dataset

### Dataset Description

#### Dataset Summary

**MultiQ** is a small but rich dataset designed for question answering (QA) and multi-document information retrieval tasks. It contains 169 Russian-language questions, each accompanied by a correct answer and a set of relevant Wikipedia articles serving as context for locating the answer. The dataset is suitable for evaluating a model's ability to identify precise answers across multiple potentially relevant documents.

Source: https://mera.a-ai.ru/ru/text/tasks/5

The context texts of the original benchmark were replaced with the full Wikipedia articles relevant to each question.

#### Dataset Structure

Each example includes the following fields:

- `index` - a unique numeric identifier for the example.
- `question` - a Russian-language question requiring a concise, exact answer.
- `answer` - the ground-truth answer (a string) corresponding to the question.
- `articles` - a dictionary whose keys are Wikipedia article titles and whose values are the full texts of those articles. The articles contain sufficient information to deduce the correct answer.

Example entry:

```json
{
  "index": 0,
  "question": "Где родился человек, который был братом Тиберия?",
  "answer": "Рим",
  "articles": {
    "Тиберий Клавдий Нерон": "Тибе́рий Кла́вдий Не́рон — римский политический деятель..."
  }
}
```
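Given this structure, answer quality can be scored with simple exact-match accuracy over the `answer` field. A minimal sketch, not part of the benchmark itself: the `normalize`/`evaluate` helpers are illustrative, and `predict` stands in for any QA model that takes a question and its articles.

```python
def normalize(text: str) -> str:
    """Lowercase and strip surrounding whitespace and punctuation for lenient matching."""
    return text.strip().strip(".?!«»\"'").lower()

def exact_match(prediction: str, gold: str) -> bool:
    """True if the normalized prediction equals the normalized gold answer."""
    return normalize(prediction) == normalize(gold)

def evaluate(examples, predict):
    """Exact-match accuracy of predict(question, articles) over MultiQ-style examples."""
    correct = sum(
        exact_match(predict(ex["question"], ex["articles"]), ex["answer"])
        for ex in examples
    )
    return correct / len(examples)
```

Stricter or more lenient normalization (e.g., handling inflected Russian forms) may be appropriate depending on the evaluation protocol.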
#### Dataset Statistics

Total dataset size: **169 examples**

- All examples belong to a single split (it can be used as a test or validation set).
- Questions span diverse topics: geography, history, politics, biographies.
- Answers consist of short phrases or proper names.
- Each question is supported by at least one Wikipedia article; often, only one highly relevant article is provided.

#### Dataset Creation

##### Data Sources

All articles are sourced from the Russian-language Wikipedia. Questions and answers were manually crafted to evaluate a system's capability to extract factual information from the given context.

##### Annotations

Annotation includes:

- Formulating questions that require specific factual responses.
- Selecting a ground-truth answer verifiable within the article text.
- Curating relevant article(s) containing the information necessary to derive the answer.

Some questions may require logical inference or cross-referencing (e.g., "Tiberius's brother → who is he? → where was he born?").

##### Language

The entire dataset, including both questions and contexts, is in **Russian**.

#### Licensing Information

Article texts are derived from **Wikipedia** and distributed under the **Creative Commons Attribution-ShareAlike (CC BY-SA)** license.
The dataset itself (its structure, questions, and answers) may be freely used for research and educational purposes. Attribution of the source is recommended when publishing results.

## NSU QA Dataset

### Dataset Description

**NSU QA Dataset** is a specialized question answering (QA) and multi-document retrieval benchmark based on official materials, websites, and publications of Novosibirsk State University (NSU). The dataset contains 207 Russian-language questions, each paired with a precise answer and a list of relevant document IDs (pages) containing sufficient context to derive the answer. It is suitable for evaluating a model's ability to extract factual information from structured and semi-structured university sources.

#### Dataset Structure

Each example includes the following fields:

- `instruction` - the question prompt (always starts with "Instruction" for standardization).
- `inputs` - a dictionary with the key `'text'` containing the actual question; sometimes includes an optional `'topic'`.
- `outputs` - a string with the correct, concise answer.
- `meta` - metadata: unique `id`, `author`, `tour_name` (e.g., "Quest").
- `related_pages` - a list of document IDs from the document collection that are relevant for answering the question.

Example entry:

```json
{
  "instruction": "Instruction",
  "inputs": {
    "text": "Что такое направление подготовки «информатика и вычислительная техника»?"
  },
  "outputs": "Направление подготовки «Информатика и вычислительная техника» готовит специалистов в области разработки программного обеспечения, системного анализа и управления IT-проектами.",
  "meta": {
    "id": 3,
    "author": "НГУ",
    "tour_name": "Quest"
  },
  "related_pages": [230, 204, 1026, 948, 543]
}
```

#### Document Collection (doc)

A separate table contains 449 documents, each with a unique `id`, textual content (`page_content`), and `metadata` (including `title`). Each document also carries `qa_references` - a list of question IDs answerable using that document.

Example document:

```json
{
  "id": 21,
  "page_content": "Механико-математический факультет НГУ\n\nНаправления подготовки: математика, механика, прикладная математика...",
  "metadata": {
    "title": "Образование и карьера на Механико-математическом факультете"
  },
  "qa_references": [116, 127]
}
```
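The two tables are linked in both directions: `related_pages` points from questions to documents, and `qa_references` points back. A minimal sketch of resolving and sanity-checking those links, assuming both tables have been loaded as lists of dicts (the helper names are illustrative, not part of the dataset):

```python
def build_index(documents):
    """Map document id -> document for O(1) lookup."""
    return {doc["id"]: doc for doc in documents}

def context_for(question, doc_index):
    """Gather the texts of the documents linked to a question via related_pages."""
    return [doc_index[page_id]["page_content"] for page_id in question["related_pages"]]

def check_cross_references(questions, documents):
    """Verify that qa_references on documents mirrors related_pages on questions."""
    doc_index = build_index(documents)
    for q in questions:
        for page_id in q["related_pages"]:
            assert q["meta"]["id"] in doc_index[page_id]["qa_references"], (
                f"document {page_id} does not list question {q['meta']['id']}"
            )
```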
#### Dataset Statistics

- Total questions: **207**
- All questions are in **Russian**.
- Topics: education, university structure, admissions, faculties, programs, events, personnel, infrastructure.
- Answers are short and factual; some are marked as "no information available".
- Each question is linked to exactly 5 relevant documents (page IDs).
- Documents cover a broad range, from applicant guides to research centers and contact pages.

#### Dataset Creation

##### Data Sources

All texts are derived from real NSU materials: the official website, brochures, news, faculty descriptions, and program pages. No external sources (e.g., Wikipedia) are used.

##### Annotations

- Questions are designed to require specific, verifiable answers.
- Answers are extracted directly from document texts or synthesized based on them.
- For each question, the 5 most relevant documents are manually selected.
- Some questions intentionally have no answer in the provided fragments, to test a model's ability to handle "unknown" cases correctly.

##### Language

The entire dataset - questions, answers, and contexts - is exclusively in **Russian**.

#### Licensing

Texts are based on open NSU materials available on the official university website and are distributed under **CC BY 4.0**.
The dataset itself (structure, annotations, markup) may be freely used for research and educational purposes with proper attribution.

## CheGeKa Dataset ("What? Where? When?")

### Dataset Description

#### Dataset Summary

**CheGeKa** is a benchmark dataset designed to evaluate a model's ability to answer complex intellectual questions from the popular Russian TV quiz show "What? Where? When?". The dataset contains 104 questions, each paired with a precise answer, metadata about the author and tournament, and references to related Wikipedia documents. It is suitable for testing fact retrieval, logical inference, and multi-document reasoning skills.

All examples and contexts are in **Russian**, including question formulations and document content.

Source: [https://mera.a-ai.ru/ru/text/tasks/8](https://mera.a-ai.ru/ru/text/tasks/8)

#### Dataset Structure

The dataset consists of two tables.

##### Table `qa` - Questions and Answers

- `instruction` - a prompt simulating participation in the game.
- `inputs.text` - the actual question text.
- `outputs` - the ground-truth answer.
- `meta` - metadata: `id`, `author`, `tour_name`, etc.
- `related_pages` - a list of document IDs from the `doc` table that provide relevant context.

Example entry:

```json
{
  "instruction": "You are participating in the quiz 'What? Where? When?'. Answer the question.",
  "inputs": {
    "text": "Автором текста гимна Норвегии является лауреат Нобелевской премии по литературе. Назовите его."
  },
  "outputs": "Бьёрнстьерне Бьёрнсон (лауреат Нобелевской премии по литературе 1903 года)",
  "meta": {
    "id": 0,
    "author": "Орест Петросянц",
    "tour_name": "Кубок Москвы 2005"
  },
  "related_pages": [0, 1]
}
```

##### Table `documents` - Documents (Contexts)

- `id` - unique document ID.
- `page_content` - full text of the Wikipedia article.
- `metadata.title` - article title.
- `qa_references` - a list of question IDs from `qa` that this document supports.

Example entry:

```json
{
  "id": 0,
  "page_content": "Бьёрнстьерне Мартиниус Бьёрнсон (норв. Bjørnstjerne Martinus Bjørnson) - норвежский писатель, лауреат Нобелевской премии по литературе 1903 года...",
  "metadata": {
    "title": "Бьёрнстьерне Бьёрнсон",
    "source": ""
  },
  "qa_references": [0]
}
```
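Because `related_pages` provides gold retrieval labels, the retrieval stage of a RAG pipeline can be scored with recall@k over document IDs. A minimal sketch (`retrieve` stands in for any retriever returning ranked document IDs; the helper names are illustrative):

```python
def recall_at_k(retrieved_ids, gold_ids, k):
    """Fraction of gold document IDs found among the top-k retrieved IDs."""
    if not gold_ids:
        return 0.0
    return len(set(retrieved_ids[:k]) & set(gold_ids)) / len(gold_ids)

def mean_recall_at_k(questions, retrieve, k=5):
    """Average recall@k of retrieve(question_text) over all questions."""
    scores = [
        recall_at_k(retrieve(q["inputs"]["text"]), q["related_pages"], k)
        for q in questions
    ]
    return sum(scores) / len(scores)
```

The same metric applies to the NSU QA document collection, where each question is linked to exactly 5 pages.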
#### Dataset Statistics

Total size: **104 questions**

- All questions are in Russian.
- Answers are short phrases, proper names, or titles, sometimes with clarifying notes in parentheses.
- Each question is linked to 1-5 supporting Wikipedia articles.
- Question authors include well-known CheGeKa writers (Orest Petrosyants, Evgeny Lyapin, Alexey Bogoslovsky, etc.).

#### Dataset Creation

##### Data Sources

All documents are sourced from the **Russian Wikipedia**. Questions and answers were collected from real tournaments of the "What? Where? When?" club held between 2000 and 2010.

##### Annotations

Annotation includes:

- Formulating questions in the distinctive CheGeKa style - often metaphorical, culturally nuanced, or hint-based.
- Selecting verifiable ground-truth answers supported by the referenced documents.
- Recording author and tournament metadata.
- Linking each question to the relevant Wikipedia articles.

Some questions require not direct extraction but cultural knowledge or logical deduction.

##### Language

The entire dataset - including questions, answers, and contexts - is in **Russian**.

#### Licensing Information

Wikipedia article texts are distributed under the **Creative Commons Attribution-ShareAlike (CC BY-SA)** license.
The dataset structure, questions, and metadata may be freely used for research and educational purposes. Attribution is recommended when publishing results.