---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: dataset
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 2268293
    num_examples: 10687
  download_size: 1254741
  dataset_size: 2268293
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
This dataset was constructed as part of the EPFL Modern NLP (MNLP) course project to train and evaluate large language models on multiple-choice question answering (MCQA) tasks focused on scientific reasoning.
It aggregates and reformats 10,687 unique examples from five high-quality academic and biomedical QA datasets, applying consistent structure, question normalization, and cross-source deduplication.
## 📊 Dataset Composition
| Source Dataset | Link | Questions Used | Description |
|---|---|---|---|
| ARC-Challenge | ai2_arc | 1,119 | Harder science exam questions requiring multi-step reasoning |
| ARC-Easy | ai2_arc | 2,251 | Simpler science questions at the elementary/middle school level |
| QASC | qasc | 3,000 (subset) | A filtered and deduplicated subset of the QASC dataset, which was originally larger (~8,000+ examples). Only 3,000 unique and diverse questions were selected for balance |
| OpenBookQA | openbookqa | 3,317 | 4-option science questions, filtered to keep `humanScore ≥ 1` |
| PubMedQA | pubmed_qa | 1,000 | Biomedical questions with Yes/No/Maybe answers based on PubMed abstracts |
## 🧪 Preprocessing Pipeline
- Normalization: All questions were lowercased and stripped of whitespace for consistency.
- Deduplication: Each question was hashed (`md5(lowercase question)`) to detect and eliminate duplicates across datasets.
- Filtering:
  - OpenBookQA was filtered to retain only questions with `humanScore ≥ 1`.
  - PubMedQA was filtered to retain only labeled questions with answers in {yes, no, maybe}.
  - QASC was sampled and capped at 3,000 unique questions to ensure dataset balance.
- Unified formatting: All entries follow the same JSON schema across sources.
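The normalization and deduplication steps above can be sketched as follows. This is a minimal illustration using Python's standard `hashlib`; the function names and example records are hypothetical, not taken from the actual project code:

```python
import hashlib

def normalize(question: str) -> str:
    # Lowercase and strip surrounding whitespace, as described above
    return question.strip().lower()

def dedup(examples: list[dict]) -> list[dict]:
    # Hash the normalized question text and keep only the first
    # occurrence of each hash across all source datasets
    seen = set()
    unique = []
    for ex in examples:
        h = hashlib.md5(normalize(ex["question"]).encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            unique.append(ex)
    return unique

examples = [
    {"question": "What do bees use to make honey?"},
    {"question": "  WHAT do bees use to make honey? "},  # duplicate after normalization
]
print(len(dedup(examples)))  # 1
```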
## 📦 Format
Each sample follows this structure:
```json
{
  "id": "qasc_481",
  "dataset": "qasc",
  "question": "What do bees use to make honey?",
  "options": ["nectar", "pollen", "water", "leaves"],
  "answer": "A"
}
```
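Assuming the conventional letter-to-index mapping ("A" → index 0, "B" → 1, and so on, as the example above suggests), the answer text can be recovered by indexing into `options`. A small illustrative snippet, not part of the dataset tooling:

```python
sample = {
    "id": "qasc_481",
    "dataset": "qasc",
    "question": "What do bees use to make honey?",
    "options": ["nectar", "pollen", "water", "leaves"],
    "answer": "A",
}

# Convert the answer letter to a zero-based index into the options list
answer_index = ord(sample["answer"]) - ord("A")
answer_text = sample["options"][answer_index]
print(answer_text)  # nectar
```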