---
language:
  - en
task_categories:
  - question-answering
  - table-question-answering
tags:
  - scientific-reasoning
  - tabular-data
  - complex-reasoning
  - algorithmic-reasoning
  - math
pretty_name: SciTaRC
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: paper
      dtype: string
    - name: relevant_tables
      list:
        list: string
    - name: tables
      list:
        list: string
    - name: fulltext
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: plan
      dtype: string
  splits:
    - name: test
      num_bytes: 48991529
      num_examples: 371
  download_size: 13748575
  dataset_size: 48991529
---

# Dataset Card for SciTaRC

## Dataset Description

- **Paper:** SciTaRC: Benchmarking QA on Scientific Tabular Data that Requires Language Reasoning and Complex Computation

## Dataset Summary

SciTaRC (Scientific Table Reasoning and Computation) is an expert-authored benchmark designed to evaluate Large Language Models (LLMs) on complex question-answering tasks over real-world scientific tables.

Unlike existing benchmarks that focus on simple table-text integration or single-step operations, SciTaRC targets composite reasoning: models must execute interdependent operations such as descriptive analysis, complex arithmetic, and ranking across detailed scientific tables. To enable granular diagnosis of model failures, every instance includes an expert-annotated pseudo-code plan that explicitly lays out the algorithmic reasoning steps required to reach the correct answer.
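As an illustration, a pseudo-code plan for a ranking-style question might take a shape like the following (this is an invented example for exposition, not an actual annotation from the dataset):

```
PLAN:
  SELECT rows from Table 2 where metric == "F1"
  LOOP over each model m in the selected rows:
    COMPUTE mean(m.scores across test sets)
  RANK models by computed mean, descending
  RETURN top-ranked model
```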

## Dataset Structure

The dataset is provided as a single test split containing 371 expert-annotated instances.

### Data Instances

A typical instance contains the question, the ground truth answer, the expert-authored pseudo-code plan, the LaTeX representations of the relevant tables, and the full text of the source paper.

### Data Fields

Each JSON object in the dataset contains the following fields:

- `paper` (string): The arXiv ID of the source scientific paper (e.g., "2401.06769").
- `question` (string): The complex, multi-step question asked about the tabular data.
- `answer` (string): The ground-truth answer.
- `plan` (string): The expert-authored pseudo-code blueprint. It explicitly structures the logical and mathematical operations required to solve the question (e.g., SELECT, LOOP, COMPUTE).
- `relevant_tables` (list of lists of strings): The exact LaTeX source code of the specific table(s) required to answer the question.
- `tables` (list of lists of strings): The LaTeX source code of all tables and figures extracted from the paper.
- `fulltext` (string): The complete LaTeX source of the original scientific paper, providing full context.
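The schema above can be sketched in plain Python. The record below is a toy placeholder that mirrors the documented field types; its values are invented for illustration and are not drawn from the dataset (if the dataset is hosted on the Hugging Face Hub, the real test split can typically be loaded with `datasets.load_dataset(..., split="test")`):

```python
# A toy instance mirroring the documented SciTaRC schema.
# All values below are invented placeholders, not real dataset content.
example = {
    "paper": "2401.06769",  # arXiv ID of the source paper
    "question": "Which system scores highest on average in Table 2?",
    "answer": "System B",
    "plan": "SELECT rows ...; LOOP over systems ...; COMPUTE mean ...;",
    "relevant_tables": [[r"\begin{tabular}{lcc} ... \end{tabular}"]],
    "tables": [[r"\begin{tabular}{lcc} ... \end{tabular}"]],
    "fulltext": r"\documentclass{article} ...",
}

def summarize(instance: dict) -> dict:
    """Return lightweight statistics about a single instance."""
    return {
        "paper": instance["paper"],
        # Both table fields are lists of lists of LaTeX strings.
        "num_relevant_tables": sum(len(g) for g in instance["relevant_tables"]),
        "num_tables": sum(len(g) for g in instance["tables"]),
        # Split the pseudo-code plan into its individual steps.
        "plan_steps": [s.strip() for s in instance["plan"].split(";") if s.strip()],
    }

stats = summarize(example)
print(stats["paper"], stats["num_relevant_tables"], len(stats["plan_steps"]))
```

The nested-list shape of `relevant_tables` and `tables` is why the sketch sums over inner groups rather than taking a single `len`.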

## Citation

If you use this dataset, please cite the original paper:

```bibtex
@misc{scitarc2026,
  title={SciTaRC: Benchmarking QA on Scientific Tabular Data that Requires Language Reasoning and Complex Computation},
  author={Wang, Hexuan and Ren, Yaxuan and Bommireddypalli, Srikar and Chen, Shuxian and Prabhudesai, Adarsh and Baral, Elina and Zhou, Rongkun and Koehn, Philipp},
  year={2026},
  url={[Insert ArXiv URL here]}
}
```