Each task is capped at **8,000 examples** to ensure scalability while retaining task diversity.
All tasks are converted to multiple-choice format with controlled answer distributions to avoid label bias.
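
The conversion script is not included in this README; the sketch below shows roughly what capping a task at 8,000 examples and controlling the answer distribution could look like. The `options`/`answer` schema is assumed purely for illustration and is not taken from the dataset.

```python
import random

def cap_and_balance(examples, max_n=8000, seed=0):
    """Cap a task at max_n examples and spread the gold answer evenly
    across option positions to avoid positional label bias.

    Assumes each example is a dict with "options" (list of candidate
    answers) and "answer" (index of the gold option); these field names
    are illustrative, not the dataset's actual schema.
    """
    rng = random.Random(seed)
    if len(examples) > max_n:
        examples = rng.sample(examples, max_n)  # cap task size

    balanced = []
    for i, ex in enumerate(examples):
        gold = ex["options"][ex["answer"]]
        distractors = [o for j, o in enumerate(ex["options"]) if j != ex["answer"]]
        rng.shuffle(distractors)
        pos = i % len(ex["options"])  # cycle the gold position deterministically
        balanced.append({**ex,
                         "options": distractors[:pos] + [gold] + distractors[pos:],
                         "answer": pos})
    return balanced
```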

The genres and their associated tasks are summarized in the table below.

| Genre | Tasks |
|--------------------|------------------------------------------------------------------------|
| Linguistic         | POS, CHUNK, NER, GED                                                     |
| Content classification | IMDB, Amazon, Agnews                                                 |
| Natural language inference | MNLI, PAWS, SWAG                                                |
| Factuality         | FEVER, MyriadLAMA, CSQA, TempLAMA                                        |
| Self-reflection | HaluEval, Toxic, Stereoset |
| Multilinguality | LTI, M-POS, M-Amazon, mLAMA, XNLI |

### Linguistic

- **POS**: Part-of-speech tagging using Universal Dependencies. Given a sentence with a highlighted word, the model predicts its POS tag.
- **CHUNK**: Phrase chunking from CoNLL-2000. The task is to determine the syntactic chunk type (e.g., NP, VP) of a given word.
- **NER**: Named entity recognition from CoNLL-2003. Predicts the entity type (e.g., PERSON, ORG) for a marked word.
- **GED**: Grammatical error detection from the cLang-8 dataset. Each query asks whether a sentence contains a grammatical error.

### Content Classification

- **IMDB**: Sentiment classification using IMDB reviews. The model predicts whether a review is “positive” or “negative”.
- **Amazon**: Review rating classification (1–5 stars) using Amazon reviews.
- **Agnews**: Topic classification into four news categories: World, Sports, Business, Sci/Tech.

### Natural Language Inference

- **MNLI**: Multi-genre natural language inference. Given a premise and a hypothesis, predict whether the relation is entailment, contradiction, or neutral.
- **PAWS**: Paraphrase identification. Given two similar sentences, determine if they are paraphrases (yes/no).
- **SWAG**: Commonsense inference. Choose the most plausible continuation from four candidate endings for a given context.
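
For concreteness, this is roughly how an MNLI pair might be rendered as a multiple-choice query with a single-token answer. The actual prompt template is not specified in this README, so the wording below is illustrative only.

```python
# Illustrative rendering only; the dataset's actual template may differ.
def render_mnli(premise: str, hypothesis: str) -> str:
    return (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "What is the relation between the premise and the hypothesis?\n"
        "A. entailment\n"
        "B. contradiction\n"
        "C. neutral\n"
        "Answer:"
    )

# The expected completion is a single token such as "A".
print(render_mnli("The cat sat on the mat.", "An animal is on the mat."))
```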

### Factuality

- **FEVER**: Fact verification. Classify claims into “SUPPORTED”, “REFUTED”, or “NOT ENOUGH INFO”.
- **MyriadLAMA**: Factual knowledge probing across diverse relation types. Predict the correct object of a subject-relation pair.
- **CSQA**: Commonsense QA (CommonsenseQA). Answer multiple-choice questions requiring general commonsense.
- **TempLAMA**: Temporal knowledge probing. Given a temporal relation (e.g., “born in”), predict the correct year or time entity.

### Self-Reflection

- **HaluEval**: Hallucination detection. Given a generated sentence, determine if it contains hallucinated content.
- **Toxic**: Toxic comment classification. Binary task to predict whether a comment is toxic.
- **Stereoset**: Stereotype detection. Determine whether a given sentence is stereotypical, anti-stereotypical, or unrelated.

### Multilinguality

- **LTI**: Language identification from a multilingual set of short text snippets.
- **M-POS**: POS tagging using Universal Dependencies treebanks in multiple languages.
- **M-Amazon**: Sentiment classification in different languages using multilingual Amazon reviews.
- **mLAMA**: Multilingual factual knowledge probing, using the mLAMA dataset.
- **XNLI**: Cross-lingual natural language inference across multiple languages, adapted to multiple-choice format.
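
If the benchmark is consumed through the 🤗 `datasets` library, loading and inspecting a task could look like the sketch below; the repository id and configuration name are placeholders, not the dataset's actual identifiers.

```python
from datasets import load_dataset

# "user/benchmark" and "mnli" are placeholders for the actual
# repository id and task/configuration name.
ds = load_dataset("user/benchmark", "mnli", split="test")
print(ds[0])  # one multiple-choice example
```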

## 📄 Format

Each example includes: