---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: title
    dtype: string
  - name: documents
    sequence: string
  - name: type
    dtype: string
  - name: qid
    dtype: int64
  - name: documents_title
    sequence: string
  - name: output
    dtype: string
  splits:
  - name: train_v1
    num_bytes: 23686198
    num_examples: 10431
  - name: train_v2
    num_bytes: 16155141
    num_examples: 7156
  - name: eval
    num_bytes: 1127377
    num_examples: 512
  - name: test
    num_bytes: 1064670
    num_examples: 512
  - name: test_top1
    num_bytes: 849720
    num_examples: 512
  download_size: 10586256
  dataset_size: 42883106
configs:
- config_name: default
  data_files:
  - split: train_v1
    path: data/train_v1-*
  - split: train_v2
    path: data/train_v2-*
  - split: eval
    path: data/eval-*
  - split: test
    path: data/test-*
  - split: test_top1
    path: data/test_top1-*
tags:
- question
---
# CQuAE: A New French Question-Answering Corpus for Teaching Assistants
CQuAE (Contextualised Question-Answering for Education) is a French question-answering dataset in the domain of secondary education.
It has been designed to facilitate the development of virtual teaching assistants,
with a particular focus on creating and answering complex questions that go beyond simple fact extraction.
CQuAE includes questions, answers, and corresponding source documents (excerpts from textbooks or Wikipedia articles).
By providing both straightforward and deeper, multi-sentence, or interpretative queries, the dataset supports diverse QA tasks, including factual, definitional, explanatory, and synthesis question types.
This dataset was described in:
“CQuAE : Un nouveau corpus de question-réponse pour l’enseignement”
by Thomas Gerald, Louis Tamames, Sofiane Ettayeb, Patrick Paroubek, Anne Vilnat.
----------------------------------------------------------------------------------------------------
## Table of Contents
1. [Dataset Summary](#dataset-summary)
2. [Supported Tasks](#supported-tasks)
3. [Dataset Structure](#dataset-structure)
4. [Data Fields](#data-fields)
5. [Versions Summary](#versions-summary)
6. [Source Data and Construction](#source-data-and-construction)
7. [Annotation Process and Types of Questions](#annotation-process-and-types-of-questions)
8. [Applications and Examples](#applications-and-examples)
9. [Citation](#citation)
----------------------------------------------------------------------------------------------------
## Dataset Summary
CQuAE is designed to train and evaluate QA systems capable of handling a range of question types in French.
Questions are grounded in educational material from various subject areas—mainly history, geography, and sciences—at the late middle-school and early high-school levels.
Each entry comprises:
• A manually written question (French).
• The corresponding source document excerpt(s).
• A manually written answer (in French).
• The question’s type (factual, definition, course-level explanatory, or synthesis).
• Metadata such as a question identifier and document title(s).
One of the key goals behind CQuAE is to collect and evaluate questions that require varying levels of reasoning complexity.
While many QA datasets in French emphasize short factual or named-entity answers, CQuAE includes longer, more elaborate responses that often span multiple elements of a text.
----------------------------------------------------------------------------------------------------
## Supported Tasks
• **Question Answering (QA)**: Given a question and a relevant document, generate or extract an answer.
• **Complex QA**: Some questions require multi-sentence answers, synthesis, or deeper interpretation.
• **Document Retrieval (RAG)**: Identify the relevant passages in the larger corpus to answer a question.
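As an illustration of the generative QA setup, the fields of a record can be assembled into a generation prompt. This is a minimal sketch: the field names follow the schema described below, but the prompt template itself is a hypothetical choice, not something CQuAE prescribes.

```python
def build_qa_prompt(record: dict) -> str:
    """Format a CQuAE record into a generation prompt.

    The French template below is a hypothetical example; the dataset
    does not mandate any particular prompt format.
    """
    # Concatenate the supporting passages into a single context block.
    context = "\n\n".join(record["documents"])
    return (
        f"Contexte :\n{context}\n\n"
        f"Question : {record['question']}\n"
        "Réponse :"
    )

# Toy record for illustration only (not taken from the dataset).
record = {
    "question": "Quand la Première Guerre mondiale a-t-elle commencé ?",
    "documents": ["La Première Guerre mondiale débute en 1914."],
}
prompt = build_qa_prompt(record)
```

The model's generated continuation would then be compared against the `output` field.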
----------------------------------------------------------------------------------------------------
## Dataset Structure
The dataset is organized as follows (feature schema applies to all splits):
• **train_v1**: 10,431 examples.
- First version of the training data.
• **train_v2**: 7,156 examples.
- A partially “human-filtered” or corrected version of the training data (some problematic instances from v1 were filtered or improved).
• **eval**: 512 examples.
- Evaluation split for model development.
• **test**: 512 examples.
- Standard test set.
• **test_top1**: 512 examples.
- Same underlying question set as “test,” except that the single document provided here was retrieved automatically from the full collection via a retrieval-augmented generation (RAG) approach. In other words, it may differ from the original reference document used by annotators.
----------------------------------------------------------------------------------------------------
## Data Fields
Each split contains the following fields:
• **question** (string): The question in French.
• **title** (string): Source title (chapter of the textbook or Wikipedia article).
• **documents** (list): The list of text excerpts used by the annotator to create the question and its answer.
• **type** (string): The type of question. Possible values include:
- “factuelle” (factual)
- “définition” (definition)
- “cours” (explanatory course-level)
- “synthèse” (synthesis-based)
• **qid** (int): A unique question identifier.
• **documents_title** (list): Title(s) or metadata for the document(s).
• **output** (string): The annotated answer in French.
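A lightweight sanity check for this schema can be written in plain Python. The field list and question types come from this card; the validation helper and the sample record are illustrative, not an official loader:

```python
# Expected Python type for each CQuAE field, per the schema above.
SCHEMA = {
    "question": str,
    "title": str,
    "documents": list,
    "type": str,
    "qid": int,
    "documents_title": list,
    "output": str,
}

QUESTION_TYPES = {"factuelle", "définition", "cours", "synthèse"}

def validate_record(record: dict) -> bool:
    """Check that a record matches the declared field types
    and uses one of the four question-type labels."""
    return (
        all(isinstance(record.get(k), t) for k, t in SCHEMA.items())
        and record["type"] in QUESTION_TYPES
    )

# Toy record for illustration only (not taken from the dataset).
example = {
    "question": "Qu'est-ce que la photosynthèse ?",
    "title": "Sciences de la Vie et de la Terre",
    "documents": ["La photosynthèse est le processus par lequel..."],
    "type": "définition",
    "qid": 42,
    "documents_title": ["SVT - Chapitre 3"],
    "output": "La photosynthèse est le processus par lequel...",
}
ok = validate_record(example)
```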
----------------------------------------------------------------------------------------------------
## Versions Summary
• **train_v1**: Original stage of the dataset with over 10k QA pairs.
• **train_v2**: A refined set of ~7k QA pairs produced after a thorough human review and correction phase (e.g., addressing syntax, relevance, completeness).
• **eval**, **test**: Held-out sets of 512 QA items each, created from the corrected dataset (v2).
• **test_top1**: Mirrors “test,” but includes automatically retrieved passages (via RAG) as opposed to the original documents used during annotation.
----------------------------------------------------------------------------------------------------
## Source Data and Construction
CQuAE is composed of short extracts from textbooks (e.g., “lelivrescolaire.fr”) and filtered Wikipedia articles chosen to match middle- and high-school curricula in fields like:
• History
• Geography
• Sciences de la Vie et de la Terre (Biology/Earth Sciences)
• Éducation Civique
Wikipedia articles were split into smaller parts (up to three paragraphs) for manageability.
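The chunking described above can be sketched as splitting on blank lines and grouping consecutive paragraphs. The authors' exact segmentation rules are not detailed in this card, so this is only an approximation:

```python
def chunk_article(text: str, max_paragraphs: int = 3) -> list[str]:
    """Split an article into chunks of at most `max_paragraphs`
    paragraphs, approximating the Wikipedia preprocessing step."""
    # Paragraphs are assumed to be separated by blank lines.
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return [
        "\n\n".join(paragraphs[i:i + max_paragraphs])
        for i in range(0, len(paragraphs), max_paragraphs)
    ]

article = "Para 1.\n\nPara 2.\n\nPara 3.\n\nPara 4."
chunks = chunk_article(article)
```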
In total, thousands of texts were collected, though not all were annotated. Two groups of annotators contributed:
• **Group A**: ~20 annotators (non-teachers).
• **Group B**: 6 annotators with teaching experience.
Each annotator was asked to produce:
1. A question grounded in the document.
2. The type of the question (factual, definition, course, synthesis).
3. The document snippet justifying the question.
4. Evidence for the answer (the relevant phrases in the text).
5. A written answer in French.
----------------------------------------------------------------------------------------------------
## Annotation Process and Types of Questions
Questions were created to vary in difficulty:
1. **Factuelle (Factual)**: Straightforward facts (e.g., event, date, person, location).
2. **Définition (Definition)**: Explaining a term or concept.
3. **Cours (Course-level)**: More detailed or explanatory answers derived from the text.
4. **Synthèse (Synthesis)**: Answers that require reasoned aggregation or interpretation of multiple text elements.
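When working with these labels, a quick way to check how factual and complex questions are balanced in a split is to compute the type distribution. The records below are illustrative, not real data:

```python
from collections import Counter

def type_distribution(records: list[dict]) -> dict[str, float]:
    """Fraction of each question type in a list of records."""
    counts = Counter(r["type"] for r in records)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

# Toy sample for illustration only.
sample = [
    {"type": "factuelle"},
    {"type": "factuelle"},
    {"type": "synthèse"},
    {"type": "cours"},
]
dist = type_distribution(sample)
```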
A manual correction phase was then carried out to improve the quality of the initial annotations.
Approximately 8,000–10,000 items were rechecked to address issues like syntax, missing context, or irrelevance.
As a result, train_v2 is slightly smaller but generally of higher quality.
----------------------------------------------------------------------------------------------------
## Applications and Examples
CQuAE can be employed for:
• **Training QA Systems**: Evaluate model performance on fact-based vs. complex (explanatory, synthesis) queries.
• **Retrieval-Augmented Generation (RAG)**: The test_top1 split specifically tests how well a system can retrieve relevant passages from a large corpus.
• **Multilingual or Cross-lingual Adaptation**: Although the dataset is in French, it can serve as a testbed for domain adaptation in educational contexts.
• **Automatic Question and Answer Generation**: Evaluate how models produce realistic and pedagogically viable Q&A pairs.
----------------------------------------------------------------------------------------------------
## License
Creative Commons Attribution-NonCommercial 4.0 International
## Citation
[CQuAE : Un nouveau corpus de question-réponse pour l’enseignement](https://aclanthology.org/2024.jeptalnrecital-taln.4/) (Gerald et al., JEP/TALN/RECITAL 2024)
If you use or reference CQuAE, please cite:
@inproceedings{gerald-etal-2024-cquae,
title = "{CQ}u{AE} : Un nouveau corpus de question-r{\'e}ponse pour l'enseignement",
author = "Gerald, Thomas and
Tamames, Louis and
Ettayeb, Sofiane and
Paroubek, Patrick and
Vilnat, Anne",
year = "2024",
publisher = "ATALA and AFPC",
url = "https://aclanthology.org/2024.jeptalnrecital-taln.4/",
language = "fra",
}