---
license: mit
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: metadata
    dtype: string
  - name: task
    dtype: string
  - name: dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 403096431
    num_examples: 28136
  download_size: 124690622
  dataset_size: 403096431
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for MMTU

## Dataset Summary

|[**🛠️GitHub**](https://github.com/MMTU-Benchmark/MMTU/tree/main) |[**🏆Leaderboard**](#leaderboard)|[**📖 Paper**](https://arxiv.org/abs/2506.05587) |

MMTU: A Massive Multi-Task Table Understanding and Reasoning Benchmark (NeurIPS'25) 
by [Junjie Xing](https://www.microsoft.com/en-us/research/people/junjiexing/), 
[Yeye He](https://www.microsoft.com/en-us/research/people/yeyehe/), 
[Mengyu Zhou](https://www.microsoft.com/en-us/research/people/mezho/), 
[Haoyu Dong](https://www.microsoft.com/en-us/research/people/hadong/), 
[Shi Han](https://www.microsoft.com/en-us/research/people/shihan/), 
[Lingjiao Chen](https://www.microsoft.com/en-us/research/people/lingjiaochen/), 
[Dongmei Zhang](https://www.microsoft.com/en-us/research/people/dongmeiz/), 
[Surajit Chaudhuri](https://www.microsoft.com/en-us/research/people/surajitc/), and [H. V. Jagadish](https://web.eecs.umich.edu/~jag/).

Tables and table-based use cases play a crucial role in many real-world applications, such as spreadsheets, databases, and computational notebooks, which traditionally require expert-level users like data engineers, analysts, and database administrators to operate. Although LLMs have shown remarkable progress in working with tables, comprehensive benchmarking of such capabilities remains limited, often narrowly focusing on tasks like NL-to-SQL and Table-QA, while overlooking the broader spectrum of real-world tasks that professional users face today. 

We introduce **MMTU**, a large-scale benchmark with around **28K questions** across **25 real-world table tasks**, designed to comprehensively evaluate models' ability to understand, reason over, and manipulate real tables at an expert level. These tasks are drawn from decades' worth of computer science research on tabular data, with a focus on complex table tasks faced by professional users. We show that MMTU requires a combination of skills -- including table understanding, reasoning, and coding -- that remains challenging for today's frontier models, where even frontier reasoning models like OpenAI GPT-5 and DeepSeek-R1 score only around 69% and 57% respectively, suggesting significant room for improvement.

<img width="839" alt="mmtu" src="https://github.com/user-attachments/assets/95dd2a05-755e-40cf-a6cb-9d2953394241" />

## Dataset Creation
MMTU was developed through the meticulous curation of 52 datasets across 25 task categories, each carefully labeled by computer science researchers, drawn from decades' worth of research on tabular data in communities such as data management (SIGMOD/VLDB), programming languages (PLDI/POPL), and web data (WWW/WSDM). The benchmark emphasizes the real-world, complex table tasks encountered by professional users, tasks that demand advanced skills in table understanding, coding, and reasoning. Please see the table below for key statistics of the benchmark.

<div align="center">
  <img src="https://github.com/user-attachments/assets/f6410469-6a7a-44d9-843e-c6acf19278bc" width="400"/>
</div>

A complete list of tasks: `table-transform-by-relationalization`, `table-transform-by-output-schema`, `table-transform-by-output-table`, `Entity matching`, `Schema matching`, `Head value matching`, `data-imputation`, `error-detection`, `list-to-table`, `semantic-join`, `equi-join-detect`, `program-transform-by-example`, `formula-by-context`, `semantic-transform-by-example`, `arithmetic-relationship`, `functional-relationship`, `string-relationship`, `Needle-in-a-haystack-table`, `Needle-in-a-haystack-index`, `NL-2-SQL`, `Table Question Answering`, `Fact Verification`, `Column type annotation`, `Column property annotation`, `Cell entity annotation`.

## Update • October 2025

To improve the quality of **MMTU**, we performed an LLM-based filtering process. After filtering, the number of questions in **MMTU** has been reduced to **28,136**.

## Leaderboard

Below is a portion of our evaluation results. For the complete leaderboard and additional details, please visit the **[MMTU Leaderboard](https://huggingface.co/datasets/MMTU-benchmark/MMTU)**.


| **Model Type** | **Model**           | **MMTU Score**     | 
|----------------|---------------------|----------------------|
| Reasoning      | GPT-5               | **0.696 ± 0.01**     |
| Reasoning      | o3                  | 0.691 ± 0.01         |
| Reasoning      | GPT-5-mini          | 0.667 ± 0.01         |
| Reasoning      | Gemini-2.5-Pro      | 0.665 ± 0.01         |
| Reasoning      | o4-mini (2024-11-20)| 0.660 ± 0.01         |
| Reasoning      | Deepseek-R1         | 0.597 ± 0.01         |
| Chat           | GPT-5-Chat          | 0.577 ± 0.01         |
| Chat           | Deepseek-V3         | 0.555 ± 0.01         |
| Chat           | GPT-4o (2024-11-20) | 0.507 ± 0.01         |
| Chat           | Llama-3.3-70B       | 0.454 ± 0.01         |
| Chat           | Mistral-Large-2411  | 0.446 ± 0.01         |
| Chat           | Mistral-Small-2503  | 0.417 ± 0.01         |
| Chat           | GPT-4o-mini (2024-07-18)| 0.400 ± 0.01         |

## Language

English

## Data Structure

### Data Fields

- `prompt`: The full prompt presented to the model for this MMTU instance.
- `metadata`: Supplementary information associated with the MMTU instance, typically used for evaluation purposes.
- `task`: The specific subtask category within the MMTU framework to which the instance belongs.
- `dataset`: The original source dataset from which the MMTU instance is derived.
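As a sketch of how these four fields might be used, the snippet below builds a tiny in-memory frame that mimics the MMTU schema and groups instances by `task`. The sample rows, answer strings, and the parquet path in the comment are illustrative placeholders, not values taken from the actual dataset:

```python
import pandas as pd

# Placeholder rows mimicking the MMTU schema (prompt/metadata/task/dataset).
# In practice, rows would come from the released parquet shards, e.g.:
#   df = pd.read_parquet("data/train-00000-of-....parquet")  # hypothetical shard name
df = pd.DataFrame(
    {
        "prompt": ["<table + question text>", "<table + question text>"],
        "metadata": ['{"answer": "..."}', '{"answer": "..."}'],
        "task": ["NL-2-SQL", "data-imputation"],
        "dataset": ["<source-dataset-name>", "<source-dataset-name>"],
    }
)

# Count how many questions each subtask contributes.
counts = df.groupby("task").size()
print(counts.to_dict())  # {'NL-2-SQL': 1, 'data-imputation': 1}

# Select only the NL-2-SQL instances for a task-specific evaluation run.
nl2sql = df[df["task"] == "NL-2-SQL"]
```

Because `task` is a plain string column, any of the 25 task names listed above can be used as a filter key in the same way.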