tags:
- question
---

# CQuAE: A New French Question-Answering Corpus for Teaching Assistants

CQuAE (Corpus of QUestions for Assisting Education) is a French question-answering dataset in the domain of secondary education.
It was designed to facilitate the development of virtual teaching assistants,
with a particular focus on creating and answering complex questions that go beyond simple fact extraction.
CQuAE includes questions, answers, and the corresponding source documents (excerpts from textbooks or Wikipedia articles).
By providing both straightforward questions and deeper, multi-sentence, or interpretative ones, the dataset supports diverse QA tasks, including factual, definitional, explanatory, and synthetic question types.

This dataset is described in:
“CQuAE : Un nouveau corpus de question-réponse pour l’enseignement”
by Thomas Gerald, Louis Tamames, Sofiane Ettayeb, Patrick Paroubek, and Anne Vilnat.

----------------------------------------------------------------------------------------------------

## Table of Contents

1. [Dataset Summary](#dataset-summary)
2. [Supported Tasks](#supported-tasks)
3. [Dataset Structure](#dataset-structure)
4. [Data Fields](#data-fields)
5. [Versions Summary](#versions-summary)
6. [Source Data and Construction](#source-data-and-construction)
7. [Annotation Process and Types of Questions](#annotation-process-and-types-of-questions)
8. [Applications and Examples](#applications-and-examples)
9. [Citation](#citation)

----------------------------------------------------------------------------------------------------

## Dataset Summary

CQuAE is designed to train and evaluate QA systems capable of handling a range of question types in French.
Questions are grounded in educational material from various subject areas (mainly history, geography, and the sciences) at the late middle-school and early high-school levels.
Each entry comprises:

• A manually written question (in French).
• The corresponding source document excerpt(s).
• A manually written answer (in French).
• The question’s type (factual, definition, course-level explanatory, or synthetic).
• Metadata such as a question identifier and document title(s).

One of the key goals behind CQuAE is to collect and evaluate questions that require varying levels of reasoning complexity.
While many French QA datasets emphasize short factual or named-entity answers, CQuAE includes longer, more elaborate responses that often span multiple elements of a text.

----------------------------------------------------------------------------------------------------

## Supported Tasks

• **Question Answering (QA)**: Given a question and a relevant document, generate or extract an answer (a minimal prompt sketch follows this list).
• **Complex QA**: Some questions require multi-sentence answers, synthesis, or deeper interpretation.
• **Document Retrieval (RAG)**: Identify the relevant passages in the larger corpus to answer a question.

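For the generative QA setup, a record's question and source excerpts could be assembled into a prompt along these lines (the prompt wording is illustrative, not taken from the paper):

```python
# Minimal sketch: assemble a French QA prompt from one CQuAE record.
# The template below is an assumption, not the authors' prompt.
def build_prompt(question: str, documents: list[str]) -> str:
    context = "\n\n".join(documents)
    return (
        f"Documents :\n{context}\n\n"
        f"Question : {question}\n"
        "Réponse :"
    )
```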

----------------------------------------------------------------------------------------------------

## Dataset Structure

The dataset is organized as follows (the feature schema applies to all splits):

• **train_v1**: 10,431 examples.
  - First version of the training data.

• **train_v2**: 7,156 examples.
  - A partially human-filtered and corrected version of the training data (some problematic instances from v1 were filtered or improved).

• **eval**: 512 examples.
  - Evaluation split for model development.

• **test**: 512 examples.
  - Standard test set.

• **test_top1**: 512 examples.
  - Same underlying question set as “test”, except that the single document provided here was retrieved automatically from the full collection via a retrieval-augmented generation (RAG) approach. In other words, it may differ from the original reference document used by annotators.

As a high-level illustration, the splits can be loaded and inspected with the Hugging Face `datasets` library. The repository id in the sketch below is an assumed placeholder, since this card does not state the exact Hub path:

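```python
from datasets import load_dataset

# "LsTam/CQuAE" is an assumed repository id; substitute the actual Hub path.
ds = load_dataset("LsTam/CQuAE")

print(ds)  # expected splits: train_v1, train_v2, eval, test, test_top1
sample = ds["train_v2"][0]
print(sample["question"], sample["type"], sample["output"], sep="\n")
```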

----------------------------------------------------------------------------------------------------

## Data Fields

Each split contains the following fields (a schema sketch follows the list):

• **question** (string): The question in French.
• **title** (string): Source title (chapter of a textbook or title of a Wikipedia article).
• **documents** (list): The list of text excerpts used by the annotator to create the question and its answer.
• **type** (string): The type of question. Possible values include:
  - “factuelle” (factual)
  - “définition” (definition)
  - “cours” (explanatory course-level)
  - “synthèse” (synthesis-based)
• **qid** (int): A unique question identifier.
• **documents_title** (string): Title(s) or metadata for the document(s).
• **output** (string): The annotated answer in French.

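As a rough sketch, the schema could be written down with `datasets.Features`; the types below are inferred from the field list above and may differ from the actual column types on the Hub:

```python
from datasets import Features, Sequence, Value

# Hypothetical schema inferred from the field descriptions in this card.
features = Features({
    "question": Value("string"),
    "title": Value("string"),
    "documents": Sequence(Value("string")),
    "type": Value("string"),  # "factuelle" | "définition" | "cours" | "synthèse"
    "qid": Value("int64"),
    "documents_title": Value("string"),
    "output": Value("string"),
})
```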

----------------------------------------------------------------------------------------------------

## Versions Summary

• **train_v1**: Original stage of the dataset, with over 10k QA pairs.
• **train_v2**: A refined set of ~7k QA pairs produced after a thorough human review and correction phase (e.g., addressing syntax, relevance, completeness).
• **eval**, **test**: Held-out sets of 512 QA items each, created from the corrected dataset (v2).
• **test_top1**: Mirrors “test”, but includes automatically retrieved passages (via RAG) rather than the original documents used during annotation.

----------------------------------------------------------------------------------------------------

## Source Data and Construction

CQuAE is composed of short extracts from textbooks (e.g., from “lelivrescolaire.fr”) and filtered Wikipedia articles chosen to match middle- and high-school curricula in fields such as:

• History
• Geography
• Sciences de la Vie et de la Terre (Biology/Earth Sciences)
• Éducation Civique (Civic Education)

Wikipedia articles were split into smaller parts (up to three paragraphs) for manageability.
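A chunking scheme along these lines might look like the following sketch (the actual preprocessing code is not published with this card, and the real pipeline may differ):

```python
# Illustrative sketch of paragraph-level chunking as described above;
# this is not the authors' code.
def chunk_article(text: str, max_paragraphs: int = 3) -> list[str]:
    """Split an article into chunks of at most `max_paragraphs` paragraphs."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return [
        "\n\n".join(paragraphs[i:i + max_paragraphs])
        for i in range(0, len(paragraphs), max_paragraphs)
    ]
```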

In total, thousands of texts were collected, though not all were annotated. Two groups of annotators contributed:

• **Group A**: ~20 annotators (non-teachers).
• **Group B**: 6 annotators with teaching experience.

Each annotator was asked to produce:

1. A question grounded in the document.
2. The type of the question (factual, definition, course, synthesis).
3. The document snippet justifying the question.
4. Evidence for the answer (the relevant phrases in the text).
5. A written answer in French.

----------------------------------------------------------------------------------------------------

## Annotation Process and Types of Questions

Questions were created to vary in difficulty:

1. **Factuelle (Factual)**: Straightforward facts (e.g., event, date, person, location).
2. **Définition (Definition)**: Explaining a term or concept.
3. **Cours (Course-level)**: More detailed or explanatory answers derived from the text.
4. **Synthèse (Synthesis)**: Answers that require reasoned aggregation or interpretation of multiple text elements.

A manual correction phase was then carried out to improve the quality of the initial annotations.
Approximately 8,000–10,000 items were rechecked to address issues such as syntax, missing context, or irrelevance.
As a result, train_v2 is slightly smaller but generally of higher quality.

----------------------------------------------------------------------------------------------------

## Applications and Examples

CQuAE can be employed for:

• **Training QA Systems**: Evaluate model performance on fact-based vs. complex (explanatory, synthesis) queries.
• **Retrieval-Augmented Generation (RAG)**: The test_top1 split specifically tests how well a system can retrieve relevant passages from a large corpus (see the retrieval sketch after this list).
• **Multilingual or Cross-lingual Adaptation**: Although the dataset is in French, it can serve as a testbed for domain adaptation in educational contexts.
• **Automatic Question and Answer Generation**: Evaluate how models produce realistic and pedagogically viable Q&A pairs.

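For the RAG use case, a minimal top-1 retrieval setup could look like the sketch below, using BM25 as a stand-in retriever (the card does not name the retriever actually used to build test_top1):

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Stand-in retriever; how test_top1 was actually built is not specified here.
corpus = ["..."]  # all document excerpts in the collection
bm25 = BM25Okapi([doc.split() for doc in corpus])

def retrieve_top1(question: str) -> str:
    """Return the single highest-scoring excerpt for a question."""
    return bm25.get_top_n(question.split(), corpus, n=1)[0]
```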

----------------------------------------------------------------------------------------------------

## Citation

[CQuAE : Un nouveau corpus de question-réponse pour l’enseignement](https://aclanthology.org/2024.jeptalnrecital-taln.4/) (Gerald et al., JEP/TALN/RECITAL 2024)

If you use or reference CQuAE, please cite:

```bibtex
@inproceedings{gerald-etal-2024-cquae,
    title = "{CQ}u{AE} : Un nouveau corpus de question-r{\'e}ponse pour l'enseignement",
    author = "Gerald, Thomas and
      Tamames, Louis and
      Ettayeb, Sofiane and
      Paroubek, Patrick and
      Vilnat, Anne",
    year = "2024",
    publisher = "ATALA and AFPC",
    url = "https://aclanthology.org/2024.jeptalnrecital-taln.4/",
    language = "fra",
}
```