---
license: mit
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: metadata
    dtype: string
  - name: task
    dtype: string
  - name: dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 403096431
    num_examples: 28136
  download_size: 124690622
  dataset_size: 403096431
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for MMTU

## Dataset Summary

| 🛠️ GitHub | 🏆 Leaderboard | 📖 Paper |
MMTU: A Massive Multi-Task Table Understanding and Reasoning Benchmark (NeurIPS'25) by Junjie Xing, Yeye He, Mengyu Zhou, Haoyu Dong, Shi Han, Lingjiao Chen, Dongmei Zhang, Surajit Chaudhuri, and H. V. Jagadish.
Tables and table-based use cases play a crucial role in many real-world applications, such as spreadsheets, databases, and computational notebooks, which traditionally require expert users like data engineers, analysts, and database administrators to operate. Although LLMs have shown remarkable progress in working with tables, comprehensive benchmarking of such capabilities remains limited, often focusing narrowly on tasks like NL-to-SQL and Table-QA while overlooking the broader spectrum of real-world tasks that professional users face today.
We introduce MMTU, a large-scale benchmark with around 28K questions across 25 real-world table tasks, designed to comprehensively evaluate models' ability to understand, reason over, and manipulate real tables at an expert level. These tasks are drawn from decades' worth of computer science research on tabular data, with a focus on complex table tasks faced by professional users. We show that MMTU requires a combination of skills -- including table understanding, reasoning, and coding -- that remains challenging for today's frontier models: even frontier reasoning models like OpenAI GPT-5 and DeepSeek-R1 score only around 69% and 57%, respectively, suggesting significant room for improvement.
## Dataset Creation
MMTU was developed through the meticulous curation of 52 datasets across 25 task categories, each carefully labeled by computer science researchers, drawing on decades' worth of research on tabular data from communities such as data management (SIGMOD/VLDB), programming languages (PLDI/POPL), and web data (WWW/WSDM). The benchmark emphasizes real-world, complex table tasks encountered by professional users: tasks that demand advanced skills in table understanding, coding, and reasoning.
A complete list of tasks: `table-transform-by-relationalization`, `table-transform-by-output-schema`, `table-transform-by-output-table`, `Entity matching`, `Schema matching`, `Head value matching`, `data-imputation`, `error-detection`, `list-to-table`, `semantic-join`, `equi-join-detect`, `program-transform-by-example`, `formula-by-context`, `semantic-transform-by-example`, `arithmetic-relationship`, `functional-relationship`, `string-relationship`, `Needle-in-a-haystack-table`, `Needle-in-a-haystack-index`, `NL-2-SQL`, `Table Question Answering`, `Fact Verification`, `Column type annotation`, `Column property annotation`, `Cell entity annotation`.
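These names match the values of the `task` field, so they can be used to select a single subtask for evaluation. Below is a minimal sketch with the `datasets` library; the repository id `MMTU-benchmark/MMTU` is an assumption here, so verify it on the Hub before running.

```python
from datasets import load_dataset

# Load the train split (the repo id below is an assumption; verify it
# on the Hugging Face Hub before running).
ds = load_dataset("MMTU-benchmark/MMTU", split="train")

# Keep only the NL-2-SQL questions by matching the `task` field.
nl2sql = ds.filter(lambda example: example["task"] == "NL-2-SQL")
print(len(nl2sql), "NL-2-SQL questions")
```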
## Update • October 2025
To improve the quality of MMTU, we performed an LLM-based filtering process. After filtering, the number of questions in MMTU has been reduced to 28,136.
## Leaderboard
Below is a portion of our evaluation results. For the complete leaderboard and additional details, please visit the MMTU Leaderboard.
| Model Type | Model | MMTU Score |
|---|---|---|
| Reasoning | GPT-5 | 0.696 ± 0.01 |
| Reasoning | o3 | 0.691 ± 0.01 |
| Reasoning | GPT-5-mini | 0.667 ± 0.01 |
| Reasoning | Gemini-2.5-Pro | 0.665 ± 0.01 |
| Reasoning | o4-mini | 0.660 ± 0.01 |
| Reasoning | DeepSeek-R1 | 0.597 ± 0.01 |
| Chat | GPT-5-Chat | 0.577 ± 0.01 |
| Chat | DeepSeek-V3 | 0.555 ± 0.01 |
| Chat | GPT-4o (2024-11-20) | 0.507 ± 0.01 |
| Chat | Llama-3.3-70B | 0.454 ± 0.01 |
| Chat | Mistral-Large-2411 | 0.446 ± 0.01 |
| Chat | Mistral-Small-2503 | 0.417 ± 0.01 |
| Chat | GPT-4o-mini (2024-07-18) | 0.400 ± 0.01 |
## Language
English
## Data Structure

### Data Fields
- `prompt`: The prompt presented to the model for this MMTU instance.
- `metadata`: Supplementary information associated with the instance, typically used for evaluation.
- `task`: The subtask category within MMTU to which the instance belongs.
- `dataset`: The original source dataset from which the instance is derived.
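As a quick illustration of these fields, the sketch below loads one example and inspects it. The repo id is again an assumption, and since `metadata` is typed as a plain string, whether it is JSON-encoded is also an assumption, so the sketch parses it defensively.

```python
import json
from datasets import load_dataset

# Repo id is an assumption; see the note above.
ds = load_dataset("MMTU-benchmark/MMTU", split="train")

example = ds[0]
print(example["task"], "from", example["dataset"])
print(example["prompt"][:200])  # first 200 characters of the prompt

# `metadata` is a string field; it may be JSON-encoded, but the schema
# does not guarantee that, so fall back to the raw string on failure.
try:
    meta = json.loads(example["metadata"])
except (json.JSONDecodeError, TypeError):
    meta = example["metadata"]
print(meta)
```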