---
license: cc-by-nc-4.0
task_categories:
- question-answering
- text-classification
- zero-shot-classification
- multiple-choice
tags:
- multi-choice
- question-answering
pretty_name: sata-bench-basic
size_categories:
- 1K<n<10K
---
# Cite
```bibtex
@misc{xu2025satabenchselectapplybenchmark,
      title={SATA-BENCH: Select All That Apply Benchmark for Multiple Choice Questions},
      author={Weijie Xu and Shixian Cui and Xi Fang and Chi Xue and Stephanie Eckman and Chandan Reddy},
      year={2025},
      eprint={2506.00643},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.00643},
}
```
# Select-All-That-Apply Benchmark (SATA-bench) Dataset Description
SATA-Bench is a multi-domain benchmark designed for "select all that apply" questions.
This dataset contains:
- SATA questions from several subjects, including reading, news, law, and biomedicine.
- 1.5K+ questions with varying difficulty levels and complex distractor options.
- Each question has at least one correct answer and multiple distractors.

This dataset was designed to uncover selection bias in LLMs in multi-choice, multi-answer settings.
A comprehensive evaluation of state-of-the-art LLMs on SATA-Bench has been performed.
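For intuition, a minimal sketch of how select-all-that-apply predictions can be scored against a gold answer set. The two metrics below (exact set match and Jaccard overlap) are common choices for multi-answer evaluation, shown here for illustration only; see the paper for the exact protocol used on SATA-Bench.

```python
# Illustrative scoring sketch for select-all-that-apply predictions.
# These metrics are assumptions for intuition, not necessarily the
# paper's exact evaluation protocol.

def exact_match(pred: set[str], gold: set[str]) -> float:
    """1.0 only if the predicted option set equals the gold set."""
    return float(pred == gold)

def jaccard(pred: set[str], gold: set[str]) -> float:
    """Partial credit: intersection over union of the two option sets."""
    if not pred and not gold:
        return 1.0
    return len(pred & gold) / len(pred | gold)

# A question whose gold answers are {A, C}; the model selected only A.
print(exact_match({"A"}, {"A", "C"}))  # 0.0
print(jaccard({"A"}, {"A", "C"}))      # 0.5
```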
<figure>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65837d95692e41e9ed027b35/Ti4a5xvR7hZunG_-iJIb1.png"
       alt="SATA-BENCH Dataset Overview">
  <figcaption>SATA-BENCH is diverse in topics with a balance between readability and
  confusion score. d1: Reading Comprehension, d2: Toxicity, d3: News, d4: Biomedicine, d5: Laws, and d6: Events.</figcaption>
</figure>
Please refer to sata-bench/sata_bench for a small subset of human-labeled questions with multiple correct answers.
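A minimal loading sketch using the Hugging Face `datasets` library. The repo id follows the path referenced above; split and column names are assumptions, so inspect the loaded object (or the Hub dataset viewer) to confirm.

```python
from datasets import load_dataset

# Load the human-labeled subset referenced above (repo path as given in
# this card; split names are an assumption -- inspect `ds` to confirm).
ds = load_dataset("sata-bench/sata_bench")
print(ds)                        # available splits and column names

split = next(iter(ds.values()))  # first available split
print(split[0])                  # one select-all-that-apply record
```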