- **[DACCORD](https://aclanthology.org/2024.lrec-main.1065/)**
  Determine if a French sentence makes sense semantically (binary label).

- **[FQuAD](https://aclanthology.org/2020.findings-emnlp.107/)**
  FQuAD is a question-answering dataset built on high-quality Wikipedia articles. In this task, the model must predict whether the answer to the question can actually be found in the provided passage.

- **[FraCaS](https://arxiv.org/abs/2309.10604)**
  FraCaS is a natural language inference (NLI) task where the model must classify the relationship between a premise and a hypothesis (entailment, contradiction, or neutral) based on complex linguistic phenomena such as quantifiers, plurality, anaphora, and ellipsis.

- **[Fr-BoolQ](https://huggingface.co/datasets/manu/french_boolq)**
  Boolean question answering in French: answer true/false based on context.

- **[GQNLI-fr](https://aclanthology.org/2024.lrec-main.1065/)**
  The dataset consists of carefully constructed premise-hypothesis pairs that involve quantifier logic (e.g. "most", "at least", "more than half"). The goal is to evaluate the model's ability to reason about these expressions and determine whether the hypothesis logically follows from the premise, contradicts it, or is neutral.

- **[MMS-fr](https://arxiv.org/abs/2306.07902)**
  MMS-fr is a sentiment analysis task where the model classifies a French text as positive (2), neutral (1), or negative (0), assessing its ability to detect sentiment across diverse domains and sources.

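As a minimal sketch of how the three-way MMS-fr label scheme above might be consumed when scoring a model (the helper name and the sample predictions are illustrative, not part of the benchmark):

```python
# Map the integer labels described above to readable sentiment names.
MMS_FR_LABELS = {0: "negative", 1: "neutral", 2: "positive"}

def accuracy(predictions, references):
    """Fraction of examples where the predicted label matches the reference."""
    assert len(predictions) == len(references)
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Invented toy predictions/references, not real MMS-fr data.
preds = [2, 0, 1, 2]
refs = [2, 0, 0, 2]
print(accuracy(preds, refs))                 # 0.75
print([MMS_FR_LABELS[p] for p in preds])     # ['positive', 'negative', 'neutral', 'positive']
```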
- **[MNLI-nineeleven-Fr-MT](https://aclanthology.org/N18-1101/)**
  French machine-translated version of MNLI using 9/11 context, for entailment classification.

- **[MultiBLiMP-Fr](https://arxiv.org/abs/2504.02768)**
  MultiBLiMP-Fr is a grammatical judgment task where the model must identify the grammatically correct sentence from a minimal pair differing by a single targeted feature, thereby assessing its knowledge of French syntax, morphology, and agreement.

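Minimal-pair judgment of this kind is commonly scored by comparing a model's probability of the two sentences and picking the higher-scoring one. A toy sketch of that selection step, assuming a placeholder scorer (a real evaluation would use an actual language model's log-probabilities):

```python
# Placeholder for a language model's log-probability; purely illustrative.
# Here we simply pretend longer strings are less probable.
def toy_log_prob(sentence):
    return -len(sentence)

def pick_grammatical(pair, log_prob=toy_log_prob):
    """Minimal-pair evaluation: return the sentence the scorer prefers."""
    return max(pair, key=log_prob)

# Invented minimal pair; the second sentence has a corrupted agreement ending.
pair = ("Les filles sont parties.", "Les filles sont partiesss.")
print(pick_grammatical(pair))  # Les filles sont parties.
```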
- **[PAWS-X](https://arxiv.org/abs/1908.11828)**
  Paraphrase identification: given two sentences, predict whether they are equivalent in meaning.

- **[PIAF](https://aclanthology.org/2020.lrec-1.673/)**
  Pairs of questions and text passages, annotated with where in the passage the truly relevant information is found.

- **[QFrBLiMP](https://arxiv.org/abs/2401.67890)**

- **[QFrCoRT](https://vivreenfrancais.mcgill.ca/capsules-linguistiques/expressions-quebecoises/)**
  QFrCoRT is a definition matching task where the model selects the correct standard French definition for a Quebec French term from a list of candidates.

- **[RTE3-Fr](https://aclanthology.org/2024.lrec-main.1065/)**
  French version of RTE3 for textual entailment recognition.

- **[SICK-fr](https://huggingface.co/datasets/Lajavaness/SICK-fr)**
  Sentence pairs annotated along two dimensions: relatedness, scored on a scale from 1 to 5, and entailment, a three-way choice between entails, contradicts, and neutral.

- **[STS22](https://competitions.codalab.org/competitions/33835)**
  This task evaluates whether pairs of news articles, written in different languages, cover the same story. It focuses on document-level similarity, where systems rate article pairs on a 4-point scale from most to least similar.

- **[Wino-X-LM](https://aclanthology.org/2021.emnlp-main.670/)**
  Pronoun resolution task: choose between two referents in a sentence with an ambiguous pronoun.

- **[Wino-X-MT](https://aclanthology.org/2021.emnlp-main.670/)**
  Translation-based pronoun resolution: choose which of two French translations uses the correct gendered pronoun.

- **[WSD-Fr](https://aclanthology.org/W19-0422/)**
  WSD-Fr is a word sense disambiguation task where the model must identify the correct meaning of an ambiguous verb in context, as part of the FLUE benchmark.

- **[XNLI-fr](https://aclanthology.org/D18-1269)**
  Sentence pairs where the goal is to determine the relation between the two sentences: entailment, neutral, or contradiction.

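Several of the tasks above (XNLI-fr, FraCaS, GQNLI-fr, and SICK-fr's entailment dimension) share a three-way entailment label space. A minimal, illustrative sketch of scoring such a task — the label names follow the descriptions above, and the sample data is invented:

```python
NLI_LABELS = ("entailment", "neutral", "contradiction")

def nli_accuracy(predictions, references):
    """Three-way NLI accuracy; inputs are label strings from NLI_LABELS."""
    for label in list(predictions) + list(references):
        assert label in NLI_LABELS, f"unknown label: {label}"
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Invented toy examples, not drawn from any of the datasets above.
preds = ["entailment", "neutral", "contradiction", "entailment"]
refs = ["entailment", "contradiction", "contradiction", "entailment"]
print(nli_accuracy(preds, refs))  # 0.75
```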
### Language