---
license: cc-by-nc-4.0
task_categories:
  - question-answering
  - table-question-answering
language:
  - en
tags:
  - table-qa
  - text-table-qa
  - multi-hop-reasoning
  - sql
  - benchmark
pretty_name: SPARTA
size_categories:
  - 1K<n<10K
configs:
  - config_name: workload_movie
    default: true
    data_files:
      - split: test
        path: workload_movie/test-*
      - split: validation
        path: workload_movie/validation-*
  - config_name: workload_nba
    data_files:
      - split: test
        path: workload_nba/test-*
      - split: validation
        path: workload_nba/validation-*
  - config_name: workload_medical
    data_files:
      - split: test
        path: workload_medical/test-*
      - split: validation
        path: workload_medical/validation-*
---

# SPARTA Benchmark

Paper | Code | Project Page | Leaderboard

SPARTA is a benchmark for tree-structured multi-hop question answering (QA) over text and tables. It targets shortcomings of existing datasets such as HybridQA and OTT-QA, which suffer from shallow reasoning, annotation errors, and limited scale. SPARTA constructs a unified reference fact database that merges source tables with grounding tables derived from unstructured passages; an end-to-end framework then automatically generates thousands of high-fidelity QA pairs at roughly a quarter of the usual annotation effort, covering operations such as aggregation, grouping, and deeply nested predicates. Provenance-based refinement and realistic-structure enforcement keep generated queries executable and semantically sound while reflecting real-world complexity across three domains: NBA, movies, and medicine. On SPARTA, state-of-the-art models drop by more than 30 F1 points, exposing gaps in cross-modal reasoning.

## Dataset Structure

The dataset provides three configs, one per domain (Movie, NBA, Medical), each with `test` and `validation` splits.

### Domains

- **Movie**: Movie database with films, genres, ratings, and biographical text
- **NBA**: Basketball statistics with player info, teams, awards, and game narratives
- **Medical**: Healthcare data with appointments, billing, treatments, and patient records

### Configs

| Config | Splits | Description |
|---|---|---|
| `workload_movie` | test, validation | 565 + 565 Movie domain queries |
| `workload_nba` | test, validation | 565 + 565 NBA domain queries |
| `workload_medical` | test, validation | 565 + 565 Medical domain queries |

### Splits

- **test**: `question_id`, `question`, `table` (3 fields only; answers withheld)
- **validation**: additionally includes the SQL query, answer, and query metadata (13 fields)

### Validation Split Fields

| Field | Type | Description |
|---|---|---|
| `question_id` | string | Unique query ID (`{domain}:{idx}`) |
| `question` | string | Natural language question |
| `table` | list[string] | Relevant table names |
| `sql_query` | string | SQL query |
| `answer` | list[string] | Ground truth answers |
| `is_nested` | bool | Whether the query is nested |
| `is_aggregated` | bool | Whether the query uses aggregation |
| `height` | int | Query nesting depth |
| `breadth` | dict | Breadth per nesting level |
| `max_breadth` | int | Maximum breadth |
| `type` | string | Query type description |
| `clause_usage` | dict | SQL clause usage (WHERE, GROUP BY, etc.) |
| `aggregate_usage` | dict | Aggregate function usage (SUM, COUNT, etc.) |
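
The metadata fields above make it easy to slice the validation split by query difficulty. As an illustration only, the sketch below works on small hypothetical in-memory records that mirror the documented field names (real records come from `load_dataset`): it keeps queries that are both nested and aggregated, and parses the `{domain}:{idx}` ID convention.

```python
# Hypothetical validation records mirroring the documented schema
# (real records come from load_dataset("pshlego/SPARTA", ..., split="validation")).
sample = [
    {"question_id": "nba:12", "is_nested": True, "is_aggregated": True, "height": 2},
    {"question_id": "nba:47", "is_nested": False, "is_aggregated": True, "height": 1},
    {"question_id": "movie:3", "is_nested": True, "is_aggregated": False, "height": 3},
]

# Keep only queries that are both nested and aggregated.
hard = [r for r in sample if r["is_nested"] and r["is_aggregated"]]

# question_id follows the "{domain}:{idx}" convention.
for r in hard:
    domain, idx = r["question_id"].split(":")
    print(domain, int(idx), r["height"])
```

The same filter expression can be passed to `Dataset.filter` when working with the loaded dataset object.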

## Usage

```python
from datasets import load_dataset

# Load test split (questions only)
test_ds = load_dataset("pshlego/SPARTA", "workload_nba", split="test")

# Load validation split (with answers and metadata)
val_ds = load_dataset("pshlego/SPARTA", "workload_nba", split="validation")
```
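
Because the test split withholds answers, scoring is done against the validation split's `answer` lists. The card does not specify the official metric, so the helper below is only a minimal sketch: a set-based, case-insensitive exact match over answer lists (the function name and normalization are assumptions, not the benchmark's evaluation protocol).

```python
def exact_match(predicted, gold):
    """Set-based exact match over answer lists (illustrative, not the official metric)."""
    norm = lambda xs: {str(x).strip().lower() for x in xs}
    return norm(predicted) == norm(gold)

# Hypothetical prediction compared against a gold `answer` list.
print(exact_match(["LeBron James"], ["lebron james"]))  # → True
```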

## License

This dataset is released under the CC BY-NC 4.0 license.