# RAGU_Benchmarks

## MultiQ Dataset

### Dataset Description

#### Dataset Summary
MultiQ is a small but rich dataset designed for question answering (QA) and multi-document information retrieval tasks. It contains 169 Russian-language questions, each accompanied by a correct answer and a set of relevant Wikipedia articles serving as context for locating the answer. This dataset is suitable for evaluating models’ ability to identify precise answers based on multiple potentially relevant documents.
Source: https://mera.a-ai.ru/ru/text/tasks/5
The texts of the original benchmark were replaced with all Wikipedia articles relevant to the given question.
### Dataset Structure
Each example includes the following fields:
- `index` - a unique numeric identifier for the example.
- `question` - a Russian-language question requiring a concise, exact answer.
- `answer` - the ground-truth answer (a string) corresponding to the question.
- `articles` - a dictionary whose keys are Wikipedia article titles and whose values are the full texts of those articles. The articles contain sufficient information to deduce the correct answer.
Example entry:
```json
{
  "index": 0,
  "question": "Где родился человек, который был братом Тиберия?",
  "answer": "Рим",
  "articles": {
    "Тиберий Клавдий Нерон": "Тибе́рий Кла́вдий Не́рон — римский политический деятель..."
  }
}
```
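Since each example bundles its evidence in the `articles` dictionary, assembling a retrieval context is a one-liner over that field. A minimal sketch, with the sample entry above hard-coded as a plain Python dict (in practice you would load the dataset files yourself; `build_context` is an illustrative helper, not part of the dataset):

```python
# Minimal sketch of consuming a MultiQ example as a plain Python dict.
# The entry is hard-coded from the sample above; only the documented
# field names (index, question, answer, articles) are assumed.

example = {
    "index": 0,
    "question": "Где родился человек, который был братом Тиберия?",
    "answer": "Рим",
    "articles": {
        "Тиберий Клавдий Нерон": "Тибе́рий Кла́вдий Не́рон — римский политический деятель...",
    },
}

def build_context(example: dict) -> str:
    """Concatenate all relevant articles into one retrieval context."""
    return "\n\n".join(
        f"== {title} ==\n{text}" for title, text in example["articles"].items()
    )

context = build_context(example)
print(context.splitlines()[0])  # prints the title header of the first article
```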
### Dataset Statistics

Total dataset size: 169 examples
- All examples belong to a single split (can be used as a test or validation set).
- Questions span diverse topics: geography, history, politics, biographies.
- Answers consist of short phrases or proper names.
- Each question is supported by at least one Wikipedia article; often, only one highly relevant article is provided.
### Dataset Creation

#### Data Sources
All articles are sourced from the Russian-language Wikipedia. Questions and answers were manually crafted to evaluate systems’ capability to extract factual information from the given context.
#### Annotations
Annotation includes:
- Formulating questions that require specific factual responses.
- Selecting a ground-truth answer verifiable within the article text.
- Curating relevant article(s) containing the necessary information to derive the answer.
Some questions may require logical inference or cross-referencing (e.g., “Tiberius’s brother → who is he? → where was he born?”).
#### Language
The entire dataset - including both questions and contexts - is in Russian.
### Licensing Information
Article texts are derived from Wikipedia, distributed under the Creative Commons Attribution-ShareAlike (CC BY-SA) license.
The dataset itself (its structure, questions, and answers) may be freely used for research and educational purposes. Attribution of the source is recommended when publishing results.
## NSU QA Dataset

### Dataset Description

NSU QA Dataset is a specialized question answering (QA) and multi-document retrieval benchmark based on official materials, websites, and publications of Novosibirsk State University (NSU). The dataset contains 207 Russian-language questions, each paired with a precise answer and a list of relevant document IDs (pages) containing sufficient context to derive the answer. It is suitable for evaluating models’ ability to extract factual information from structured and semi-structured university sources.
### Dataset Structure
Each example includes the following fields:
- `instruction` - the question prompt (always starts with “Instruction” for standardization).
- `inputs` - a dictionary with key `'text'` containing the actual question; sometimes includes an optional `'topic'`.
- `outputs` - a string with the correct, concise answer.
- `meta` - metadata: a unique `id`, `author`, and `tour_name` (e.g., “Quest”).
- `related_pages` - a list of document IDs from the doc collection relevant for answering this question.
Example entry:
```json
{
  "instruction": "Instruction",
  "inputs": {
    "text": "Что такое направление подготовки «информатика и вычислительная техника»?"
  },
  "outputs": "Направление подготовки «Информатика и вычислительная техника» готовит специалистов в области разработки программного обеспечения, системного анализа и управления IT-проектами.",
  "meta": {
    "id": 3,
    "author": "НГУ",
    "tour_name": "Quest"
  },
  "related_pages": [230, 204, 1026, 948, 543]
}
```
### Document Collection (doc)

A separate table contains 449 documents, each with a unique `id`, textual content (`page_content`), and metadata (including a `title`). Each document also provides `qa_references` - the list of question IDs answerable using this document.
Example document:
```json
{
  "id": 21,
  "page_content": "Механико-математический факультет НГУ\n\nНаправления подготовки: математика, механика, прикладная математика...",
  "metadata": {
    "title": "Образование и карьера на Механико-математическом факультете"
  },
  "qa_references": [116, 127]
}
```
### Dataset Statistics
- Total questions: 207
- All questions are in Russian
- Topics: education, university structure, admissions, faculties, programs, events, personnel, infrastructure
- Answers: short, factual, sometimes marked as “no information available”
- Each question linked to exactly 5 relevant documents (page IDs)
- Documents cover a broad range: from applicant guides to research centers and contact pages
### Dataset Creation

#### Data Sources
All texts are derived from real NSU materials: official website, brochures, news, faculty descriptions, and program pages. No external sources (e.g., Wikipedia) are used.
#### Annotations
- Questions are designed to require specific, verifiable answers.
- Answers are extracted directly from document texts or synthesized based on them.
- For each question, 5 most relevant documents are manually selected.
- Some questions intentionally have no answer in the provided fragments - to test a model’s ability to handle “unknown” cases correctly.
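Scoring such answers requires treating the “no information available” marker as its own class: a model should be rewarded for abstaining on unanswerable questions and penalized for abstaining on answerable ones. A minimal sketch; the `NO_ANSWER` marker string is a hypothetical placeholder, as the dataset’s exact wording is not specified here:

```python
# Sketch: exact-match scoring that treats "no answer" as its own class.
# NO_ANSWER is a hypothetical marker; the dataset's actual wording for
# unanswerable questions may differ.

NO_ANSWER = "нет информации"

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace before comparison."""
    return " ".join(text.lower().split())

def score(prediction: str, gold: str) -> bool:
    """True if the prediction matches the gold answer after normalization.
    Unanswerable questions count as correct only if the model abstains."""
    if normalize(gold) == normalize(NO_ANSWER):
        return normalize(prediction) == normalize(NO_ANSWER)
    return normalize(prediction) == normalize(gold)

print(score("Рим", "рим"))      # True: case-insensitive match
print(score("Рим", NO_ANSWER))  # False: the model should have abstained
```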
#### Language
The entire dataset - questions, answers, and contexts - is exclusively in Russian.
### Licensing
Texts are based on open NSU materials available on the official university website. Distributed under CC BY 4.0. The dataset itself (structure, annotations, markup) may be freely used for research and educational purposes with proper attribution.
## CheGeKa Dataset (“What? Where? When?”)

### Dataset Description

#### Dataset Summary
CheGeKa is a benchmark dataset designed to evaluate models’ ability to answer complex intellectual questions from the popular Russian TV quiz show “What? Where? When?”. The dataset contains 104 questions, each paired with a precise answer, metadata about the author and tournament, and references to related Wikipedia documents. It is suitable for testing fact retrieval, logical inference, and multi-document reasoning skills.
All examples and contexts are in Russian, including question formulations and document content.
Source: https://mera.a-ai.ru/ru/text/tasks/8
### Dataset Structure
The dataset consists of two tables:
Table `qa` - Questions and Answers:
- `instruction` - a prompt simulating participation in the game.
- `inputs.text` - the actual question text.
- `outputs` - the ground-truth answer.
- `meta` - metadata: `id`, `author`, `tour_name`, etc.
- `related_pages` - a list of document IDs from the `documents` table that provide relevant context.
Example entry:
```json
{
  "instruction": "You are participating in the quiz 'What? Where? When?'. Answer the question.",
  "inputs": {
    "text": "Автором текста гимна Норвегии является лауреат Нобелевской премии по литературе. Назовите его."
  },
  "outputs": "Бьёрнстьерне Бьёрнсон (лауреат Нобелевской премии по литературе 1903 года)",
  "meta": {
    "id": 0,
    "author": "Орест Петросянц",
    "tour_name": "Кубок Москвы 2005"
  },
  "related_pages": [0, 1]
}
```
Table `documents` - Documents (Contexts):
- `id` - unique document ID.
- `page_content` - full text of the Wikipedia article.
- `metadata.title` - article title.
- `qa_references` - list of question IDs from `qa` that this document supports.
Example entry:
```json
{
  "id": 0,
  "page_content": "Бьёрнстьерне Мартиниус Бьёрнсон (норв. Bjørnstjerne Martinus Bjørnson) - норвежский писатель, лауреат Нобелевской премии по литературе 1903 года...",
  "metadata": {
    "title": "Бьёрнстьерне Бьёрнсон",
    "source": ""
  },
  "qa_references": [0]
}
```
### Dataset Statistics
Total size: 104 questions
- All questions are in Russian.
- Answers are short phrases, proper names, or titles, sometimes with clarifying notes in parentheses.
- Each question is linked to 1-5 supporting Wikipedia articles.
- Question authors include well-known CheGeKa writers (Orest Petrosyants, Evgeny Lyapin, Alexey Bogoslovsky, etc.).
### Dataset Creation

#### Data Sources
All documents are sourced from Russian Wikipedia. Questions and answers were collected from real tournaments of the “What? Where? When?” club held between 2000 and 2010.
#### Annotations
Annotation includes:
- Formulating questions in the distinctive CheGeKa style - often metaphorical, culturally nuanced, or hint-based.
- Selecting verifiable ground-truth answers supported by referenced documents.
- Recording author and tournament metadata.
- Linking each question to relevant Wikipedia articles.
Some questions require cultural knowledge or logical deduction rather than direct extraction.
#### Language
The entire dataset - including questions, answers, and contexts - is in Russian.
### Licensing Information
Wikipedia article texts are distributed under the Creative Commons Attribution-ShareAlike (CC BY-SA) license.
The dataset structure, questions, and metadata may be freely used for research and educational purposes. Attribution is recommended when publishing results.