---
license: cc-by-nc-4.0
task_categories:
  - question-answering
  - table-question-answering
language:
  - en
tags:
  - table-qa
  - text-table-qa
  - multi-hop-reasoning
  - sql
  - benchmark
pretty_name: SPARTA
size_categories:
  - 1K<n<10K
configs:
  - config_name: workload_movie
    default: true
    data_files:
      - split: test
        path: workload_movie/test-*
      - split: validation
        path: workload_movie/validation-*
  - config_name: workload_nba
    data_files:
      - split: test
        path: workload_nba/test-*
      - split: validation
        path: workload_nba/validation-*
  - config_name: workload_medical
    data_files:
      - split: test
        path: workload_medical/test-*
      - split: validation
        path: workload_medical/validation-*
---

# SPARTA Benchmark

[Paper](https://openreview.net/pdf?id=8KE9qvKhM4) | [Code](https://github.com/pshlego/SPARTA) | [Project Page](https://sparta-projectpage.github.io/) | [Leaderboard](https://sparta.postech.ac.kr/)

SPARTA is a benchmark for tree-structured multi-hop question answering (QA) over text and tables. It addresses key shortcomings of existing datasets such as HybridQA and OTT-QA: shallow reasoning, annotation errors, and limited scale. SPARTA builds a unified reference fact database that merges source tables with grounding tables derived from unstructured passages, and an end-to-end framework then automatically generates thousands of high-fidelity QA pairs with roughly a quarter of the usual annotation effort.

The generated queries cover advanced operations such as aggregation, grouping, and deeply nested predicates, while provenance-based refinement and realistic-structure enforcement keep them executable and semantically sound. The benchmark spans three domains: NBA, movies, and medicine. On SPARTA, state-of-the-art models drop by over 30 F1 points, exposing gaps in cross-modal reasoning.

## Dataset Structure

The dataset provides **3 configs**, one per domain (Movie, NBA, Medical), each with test and validation splits.

### Domains
- **Movie**: Movie database with films, genres, ratings, and biographical text
- **NBA**: Basketball statistics with player info, teams, awards, and game narratives
- **Medical**: Healthcare data with appointments, billing, treatments, and patient records

### Configs

| Config | Splits | Description |
|--------|--------|-------------|
| `workload_movie` | test, validation | 565 test + 565 validation Movie-domain queries |
| `workload_nba` | test, validation | 565 test + 565 NBA-domain queries |
| `workload_medical` | test, validation | 565 test + 565 Medical-domain queries |

### Splits

- **test**: `question_id`, `question`, `table` (3 fields only, answers withheld)
- **validation**: includes SQL query, answer, query metadata (13 fields)

### Validation Split Fields
| Field | Type | Description |
|-------|------|-------------|
| `question_id` | string | Unique query ID (`{domain}:{idx}`) |
| `question` | string | Natural language question |
| `table` | list[string] | Relevant table names |
| `sql_query` | string | SQL query |
| `answer` | list[string] | Ground truth answers |
| `is_nested` | bool | Whether query is nested |
| `is_aggregated` | bool | Whether query uses aggregation |
| `height` | int | Query nesting depth |
| `breadth` | dict | Breadth per level |
| `max_breadth` | int | Maximum breadth |
| `type` | string | Query type description |
| `clause_usage` | dict | SQL clause usage (WHERE, GROUP BY, etc.) |
| `aggregate_usage` | dict | Aggregate function usage (SUM, COUNT, etc.) |
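As a sketch of how these fields might be used, the snippet below builds one hypothetical validation record (field names follow the schema above; all values are invented for illustration), parses the `{domain}:{idx}` ID format, and filters by nesting depth:

```python
# Hypothetical validation record; field names match the schema above,
# values are invented for illustration only.
record = {
    "question_id": "nba:42",
    "question": "Which team had the most award-winning players?",
    "table": ["players", "teams"],
    "sql_query": "SELECT ...",
    "answer": ["Boston Celtics"],
    "is_nested": True,
    "is_aggregated": True,
    "height": 2,
    "breadth": {"0": 1, "1": 2},
    "max_breadth": 2,
    "type": "nested-aggregate",
    "clause_usage": {"WHERE": 2, "GROUP BY": 1},
    "aggregate_usage": {"COUNT": 1},
}

def parse_question_id(question_id: str) -> tuple[str, int]:
    """Split a `{domain}:{idx}` ID into its domain and integer index."""
    domain, idx = question_id.split(":")
    return domain, int(idx)

def is_deep_query(rec: dict, min_height: int = 2) -> bool:
    """True when the query's nesting depth reaches `min_height`."""
    return rec["height"] >= min_height

domain, idx = parse_question_id(record["question_id"])
print(domain, idx)            # nba 42
print(is_deep_query(record))  # True
```

The same predicates can be passed to `datasets.Dataset.filter` to select, say, only nested aggregate queries from the validation split.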

## Usage

```python
from datasets import load_dataset

# Load test split (questions only)
test_ds = load_dataset("pshlego/SPARTA", "workload_nba", split="test")

# Load validation split (with answers and metadata)
val_ds = load_dataset("pshlego/SPARTA", "workload_nba", split="validation")
```

## License

CC BY-NC 4.0