---
language:
- en
task_categories:
- question-answering
- table-question-answering
tags:
- scientific-reasoning
- tabular-data
- complex-reasoning
- algorithmic-reasoning
- math
pretty_name: SciTaRC
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: paper
dtype: string
- name: relevant_tables
list:
list: string
- name: tables
list:
list: string
- name: fulltext
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: plan
dtype: string
splits:
- name: test
num_bytes: 48991529
num_examples: 371
download_size: 13748575
dataset_size: 48991529
---
# Dataset Card for SciTaRC
## Dataset Description
- **Paper:** SciTaRC: Benchmarking QA on Scientific Tabular Data that Requires Language Reasoning and Complex Computation
### Dataset Summary
**SciTaRC** (Scientific Table Reasoning and Computation) is an expert-authored benchmark designed to evaluate Large Language Models (LLMs) on complex question-answering tasks over real-world scientific tables.
Unlike existing benchmarks that center on simple table-text integration or single-step operations, SciTaRC targets **composite reasoning**, requiring models to execute interdependent operations such as descriptive analysis, complex arithmetic, and ranking across detailed scientific tables. To enable granular diagnosis of model failures, every instance includes an expert-annotated **pseudo-code plan** that explicitly outlines the algorithmic reasoning steps required to reach the correct answer; a hypothetical example of such a plan is sketched below.
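For illustration only, a plan for a question such as "Which method achieves the highest average score across the two tables?" might be structured as follows. This is a hypothetical example written for this card, not an actual dataset instance; it only reuses the operation names (`SELECT`, `LOOP`, `COMPUTE`) mentioned above.

```text
# hypothetical plan, not taken from the dataset
SELECT the score column for each method FROM Table 1 and Table 2
LOOP over methods:
    COMPUTE the average of the method's scores across both tables
COMPUTE the ranking of methods by average score
RETURN the top-ranked method
```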
## Dataset Structure
The dataset is provided as a single `test` split containing 371 expert-annotated instances.
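As a minimal loading sketch with the 🤗 `datasets` library (the repo id below is a placeholder; substitute the actual Hub path of this dataset):

```python
from datasets import load_dataset

# NOTE: "user/SciTaRC" is a placeholder repo id, not the real Hub path.
ds = load_dataset("user/SciTaRC", split="test")

print(len(ds))          # 371 instances
print(ds.column_names)  # ['paper', 'relevant_tables', 'tables', 'fulltext', 'question', 'answer', 'plan']
```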
### Data Instances
A typical instance contains the question, the ground-truth answer, the expert-authored pseudo-code plan, the LaTeX representations of the relevant tables, and the full text of the source paper.
### Data Fields
Each JSON object in the dataset contains the following fields:
- `paper` *(string)*: The arXiv ID of the source scientific paper (e.g., `"2401.06769"`).
- `question` *(string)*: The complex, multi-step question asked about the tabular data.
- `answer` *(string)*: The ground-truth answer.
- `plan` *(string)*: The expert-authored pseudo-code blueprint. It explicitly structures the logical and mathematical operations required to solve the question (e.g., `SELECT`, `LOOP`, `COMPUTE`).
- `relevant_tables` *(list of lists of strings)*: The exact LaTeX source code for the specific table(s) required to answer the question.
- `tables` *(list of lists of strings)*: The LaTeX source code for all tables and figures extracted from the paper.
- `fulltext` *(string)*: The complete LaTeX source text of the original scientific paper, providing full context.
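Continuing the loading sketch above, a minimal way to inspect one instance (the index is arbitrary):

```python
ex = ds[0]

print(ex["paper"])     # arXiv ID of the source paper, e.g. "2401.06769"
print(ex["question"])  # the multi-step question
print(ex["answer"])    # ground-truth answer
print(ex["plan"])      # expert-authored pseudo-code plan

# `relevant_tables` is a list of lists of LaTeX strings; iterate over
# every table needed to answer the question.
for group in ex["relevant_tables"]:
    for table_tex in group:
        print(table_tex[:200])  # preview the LaTeX source
```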
## Citation
If you use this dataset, please cite the original paper:
```bibtex
@misc{scitarc2026,
title={SciTaRC: Benchmarking QA on Scientific Tabular Data that Requires Language Reasoning and Complex Computation},
author={Wang, Hexuan and Ren, Yaxuan and Bommireddypalli, Srikar and Chen, Shuxian and Prabhudesai, Adarsh and Baral, Elina and Zhou, Rongkun and Koehn, Philipp},
year={2026},
url={[Insert ArXiv URL here]}
}
```