---
license: cc-by-nc-4.0
task_categories:
  - question-answering
  - text-classification
  - zero-shot-classification
  - multiple-choice
tags:
  - multi-choice
  - question-answering
pretty_name: sata-bench-basic
size_categories:
  - 1K<n<10K
---

## Cite

    @misc{xu2025satabenchselectapplybenchmark,
      title={SATA-BENCH: Select All That Apply Benchmark for Multiple Choice Questions},
      author={Weijie Xu and Shixian Cui and Xi Fang and Chi Xue and Stephanie Eckman and Chandan Reddy},
      year={2025},
      eprint={2506.00643},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.00643},
    }

## Select-All-That-Apply Benchmark (SATA-bench) Dataset Description

SATA-Bench-raw is a multi-domain benchmark designed for "select-all-that-apply" questions. This dataset contains:

- SATA questions from several subjects, including reading comprehension, news, law, and biomedicine;
- ~8k questions with varying difficulty levels, multiple correct answers, and complex distractor options;
- one or more correct answers plus multiple distractors per question.

This dataset was designed to uncover selection bias of LLMs in multi-choice, multi-answer settings.
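
The data can be loaded with the Hugging Face `datasets` library. The repository id and split name below are assumptions; check this dataset page for the exact identifiers and column names.

```python
from datasets import load_dataset

# Load the raw benchmark (repository id and split are assumptions; adjust as needed).
ds = load_dataset("sata-bench/sata-bench-raw", split="train")

print(ds)     # number of rows and column names
print(ds[0])  # inspect one select-all-that-apply question
```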

## SATA-BENCH Dataset Overview
SATA-BENCH covers diverse topics with a balance between readability and confusion score. Domains: d1: Reading Comprehension, d2: Toxicity, d3: News, d4: Biomedicine, d5: Laws, and d6: Events.
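
Because each question can have several correct options, scoring differs from single-answer multiple choice. Below is a minimal sketch of exact-set-match scoring; the option sets shown are illustrative only and do not reflect the dataset's actual schema or the paper's metrics.

```python
def exact_set_match(pred_options: set[str], gold_options: set[str]) -> bool:
    # A prediction counts as correct only if the selected options
    # match the gold answer set exactly (no missing or extra picks).
    return pred_options == gold_options

# Example: picking {"A", "C"} when the gold answers are {"A", "C", "D"} is scored incorrect.
print(exact_set_match({"A", "C"}, {"A", "C", "D"}))       # False
print(exact_set_match({"A", "C", "D"}, {"A", "C", "D"}))  # True
```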

## Warnings

This dataset is not labeled by humans. Questions in this dataset may be unclear, and the dataset may contain incorrect answers. Please refer to sata-bench/sata_bench for a small subset of human-labeled questions.