---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: metadata
    dtype: string
  - name: task
    dtype: string
  - name: dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 432885480
    num_examples: 30647
  download_size: 132698519
  dataset_size: 432885480
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- table-question-answering
---

# Dataset Card for MMTU

## Dataset Summary

MMTU: A Massive Multi-Task Table Understanding and Reasoning Benchmark 
by Junjie Xing, [Yeye He](https://www.microsoft.com/en-us/research/people/yeyehe/), Mengyu Zhou, Haoyu Dong, Shi Han, Lingjiao Chen, Dongmei Zhang, [Surajit Chaudhuri](https://www.microsoft.com/en-us/research/people/surajitc/), and [H. V. Jagadish](https://web.eecs.umich.edu/~jag/).

[Paper](https://huggingface.co/papers/2506.05587) | [Code](https://github.com/MMTU-Benchmark/MMTU)

MMTU is a large-scale benchmark designed to evaluate the table reasoning capabilities of large language models (LLMs). It comprises 30,647 questions spanning 25 real-world table tasks, focusing on deep understanding, reasoning, and manipulation of tabular data.

These tasks are curated from decades of computer science research and represent challenges encountered by expert users in real applications, making MMTU a rigorous test for LLMs aspiring to professional-level table understanding.

A complete list of the 25 tasks:

- `table-transform-by-relationalization`
- `table-transform-by-output-schema`
- `table-transform-by-output-table`
- `Entity matching`
- `Schema matching`
- `Head value matching`
- `data-imputation`
- `error-detection`
- `list-to-table`
- `semantic-join`
- `equi-join-detect`
- `program-transform-by-example`
- `formula-by-context`
- `semantic-transform-by-example`
- `arithmetic-relationship`
- `functional-relationship`
- `string-relationship`
- `Needle-in-a-haystack-table`
- `Needle-in-a-haystack-index`
- `NL-2-SQL`
- `Table Question Answering`
- `Fact Verification`
- `Column type annotation`
- `Column property annotation`
- `Cell entity annotation`
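Since every instance carries its subtask name in the `task` field, evaluating on a single subtask is a simple filter. A minimal sketch using pandas with a few illustrative in-memory rows (the prompts, metadata values, and source-dataset names below are made up, not real MMTU data):

```python
import pandas as pd

# Illustrative rows mimicking MMTU's four string fields
# (prompt, metadata, task, dataset); the real benchmark has 30,647 rows.
rows = [
    {"prompt": "Translate the question into SQL: ...", "metadata": "{}",
     "task": "NL-2-SQL", "dataset": "source-a"},
    {"prompt": "Verify the claim against the table: ...", "metadata": "{}",
     "task": "Fact Verification", "dataset": "source-b"},
    {"prompt": "Translate the question into SQL: ...", "metadata": "{}",
     "task": "NL-2-SQL", "dataset": "source-a"},
]
df = pd.DataFrame(rows)

# Select a single subtask for evaluation
nl2sql = df[df["task"] == "NL-2-SQL"]
print(len(nl2sql))  # 2
```

With the real data, the same filter applies after loading the `train` split (e.g. via `datasets.load_dataset` or `pandas.read_parquet` on the `data/train-*` parquet files).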

## Leaderboards

| **Model Type** | **Model**           | **MMTU Score**     | 
|----------------|---------------------|----------------------|
| Reasoning      | o4-mini             | **0.637 ± 0.01**     |
| Reasoning      | Deepseek-R1         | 0.557 ± 0.01         |
| Chat           | Deepseek-V3         | 0.517 ± 0.01         |
| Chat           | GPT-4o              | 0.490 ± 0.01         |
| Chat           | Llama-3.3-70B       | 0.438 ± 0.01         |
| Chat           | Mistral-Large       | 0.430 ± 0.01         |
| Chat           | Mistral-Small       | 0.402 ± 0.01         |
| Chat           | GPT-4o-mini         | 0.386 ± 0.01         |
| Chat           | Llama-3.1-8B        | 0.259 ± 0.01         |
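The MMTU Score above aggregates per-task results into a single number. A plausible sketch, assuming a simple macro-average over tasks so that each task weighs equally regardless of its size (see the paper for the exact aggregation protocol and the ± interval computation; the per-task accuracies below are illustrative, not reported numbers):

```python
from statistics import mean

# Hypothetical per-task accuracies for one model (illustrative numbers).
per_task = {
    "NL-2-SQL": 0.62,
    "Fact Verification": 0.71,
    "data-imputation": 0.48,
}

# Macro-average: every task contributes equally to the overall score.
mmtu_score = mean(per_task.values())
print(round(mmtu_score, 3))  # 0.603
```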

## Language

English

## Data Structure

### Data Fields

- `prompt`: The prompt presented in the MMTU instance.
- `metadata`: Supplementary information associated with the MMTU instance, typically used for evaluation purposes.
- `task`: The specific subtask category within the MMTU framework to which the instance belongs.
- `dataset`: The original source dataset from which the MMTU instance is derived.
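An instance is a flat record of four strings. An evaluation loop over such records might look like the following sketch; note that it assumes, purely for illustration, that `metadata` encodes a JSON object with an `"answer"` key — the real contents of `metadata` vary by task, and the model call here is a stub:

```python
import json
from typing import Callable

# One illustrative MMTU-shaped record (all four fields are strings).
record = {
    "prompt": "Answer the question based on the table below: ...",
    "metadata": '{"answer": "42"}',  # supplementary info used for scoring
    "task": "Table Question Answering",
    "dataset": "example-source",
}

def evaluate(record: dict, model: Callable[[str], str]) -> bool:
    """Stub evaluator: send the prompt to a model and compare the response
    to the expected answer stored (here, assumed JSON) in metadata."""
    response = model(record["prompt"])
    expected = json.loads(record["metadata"]).get("answer")
    return response.strip() == expected

# Toy model that always answers "42"
print(evaluate(record, lambda p: "42"))  # True
```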

## Dataset Creation

Please refer to Section 3.2 in the paper.