---
configs:
  - config_name: memwrap
    data_files:
      - split: test
        path: memwrap/qasper.jsonl
  - config_name: plain
    data_files:
      - split: test
        path: plain/qasper.jsonl
---

# QASPER Benchmark

Question Answering on Scientific Papers: a benchmark for comprehension of NLP research papers.

## Overview

| Metric | Value |
|---|---|
| Papers | 416 (test set) |
| Questions | 1,370 (answerable) |
| Answer Types | Free-form, extractive, yes/no |
| Context | Full paper (title, abstract, sections) |

## Source

Based on the QASPER dataset by AllenAI.

Paper: *A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers*

## Variants

- **memwrap**: paper content wrapped with `<|memory_start|>` / `<|memory_end|>` tags
- **plain**: raw paper content without memory tags
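The memwrap tagging convention can be reproduced with a small helper. This is an illustrative sketch, not the dataset's actual build script; the function names are hypothetical.

```python
def wrap_memory(paper_text: str) -> str:
    """Wrap paper content in the memory tags used by the memwrap variant."""
    return f"<|memory_start|>{paper_text}<|memory_end|>"


def strip_memory(wrapped: str) -> str:
    """Recover plain paper content from a memwrap-style string.

    Returns the input unchanged if the tags are not present.
    """
    prefix, suffix = "<|memory_start|>", "<|memory_end|>"
    if wrapped.startswith(prefix) and wrapped.endswith(suffix):
        return wrapped[len(prefix):-len(suffix)]
    return wrapped


plain = "Title. Abstract. Section 1 ..."
assert strip_memory(wrap_memory(plain)) == plain
```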

## Usage

```python
from datasets import load_dataset

# Load memwrap variant
ds = load_dataset("tonychenxyz/qasper", "memwrap", split="test")

# Load plain variant
ds = load_dataset("tonychenxyz/qasper", "plain", split="test")
```

## Scoring

Uses the `qasper_log_perplexity` scoring function:

- Evaluates model performance via the log perplexity of the generated answer tokens
- Lower log perplexity indicates better performance
- Matches the perplexity-based evaluation used in the cartridge baselines

Target answers are stored in `extra_info.ground_truth.answer`.
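Log perplexity here is the mean negative log-likelihood of the answer tokens under the model. A minimal, model-free sketch of that reduction (the `token_logprobs` input is assumed to come from a model's scoring pass over the answer tokens):

```python
import math


def log_perplexity(token_logprobs: list[float]) -> float:
    """Mean negative log-likelihood of the answer tokens; lower is better."""
    if not token_logprobs:
        raise ValueError("need at least one token log-probability")
    return -sum(token_logprobs) / len(token_logprobs)


# Example: three tokens, each assigned probability 0.5 by the model,
# give a log perplexity of ln(2) ≈ 0.6931.
lps = [math.log(0.5)] * 3
print(round(log_perplexity(lps), 4))  # 0.6931
```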

## Citation

```bibtex
@inproceedings{dasigi2021qasper,
  title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},
  author={Dasigi, Pradeep and Lo, Kyle and Beltagy, Iz and Cohan, Arman and Smith, Noah A. and Gardner, Matt},
  booktitle={NAACL},
  year={2021}
}
```