Commit 434e04a (verified) by gavinxing · 1 parent: a8077f8

Update README.md

Files changed (1): README.md (+28 −17)
## Dataset Summary

|[**🛠️GitHub**](https://github.com/MMTU-Benchmark/MMTU/tree/main) | [**🏆Leaderboard**](#leaderboard) | [**📖 Paper**](https://arxiv.org/abs/2506.05587) |

MMTU: A Massive Multi-Task Table Understanding and Reasoning Benchmark
by [Junjie Xing](https://www.microsoft.com/en-us/research/people/junjiexing/),
[Yeye He](https://www.microsoft.com/en-us/research/people/yeyehe/),
[Mengyu Zhou](https://www.microsoft.com/en-us/research/people/mezho/),
[Haoyu Dong](https://www.microsoft.com/en-us/research/people/hadong/),
[Shi Han](https://www.microsoft.com/en-us/research/people/shihan/),
[Lingjiao Chen](https://www.microsoft.com/en-us/research/people/lingjiaochen/),
[Dongmei Zhang](https://www.microsoft.com/en-us/research/people/dongmeiz/),
[Surajit Chaudhuri](https://www.microsoft.com/en-us/research/people/surajitc/), and [H. V. Jagadish](https://web.eecs.umich.edu/~jag/).
Tables and table-based use cases play a crucial role in many real-world applications, such as spreadsheets, databases, and computational notebooks, which have traditionally required expert users like data engineers, analysts, and database administrators to operate. Although LLMs have shown remarkable progress in working with tables, comprehensive benchmarking of these capabilities remains limited, often narrowly focused on tasks like NL-to-SQL and Table-QA, while overlooking the broader spectrum of real-world tasks that professional users face today.

We introduce **MMTU**, a large-scale benchmark with over **30K questions** across **25 real-world table tasks**, designed to comprehensively evaluate models' ability to understand, reason over, and manipulate real tables at the expert level. These tasks are drawn from decades' worth of computer science research on tabular data, with a focus on complex table tasks faced by professional users. We show that MMTU requires a combination of skills, including table understanding, reasoning, and coding, that remain challenging for today's frontier models: even frontier reasoning models like OpenAI o4-mini and DeepSeek-R1 score only around 60%, suggesting significant room for improvement. Our evaluation code is available on [GitHub](https://github.com/MMTU-Benchmark/MMTU/tree/main).
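Scores like the roughly 60% cited above are averages over graded answers. Purely as an illustration (the actual graders are task-specific and live in the linked GitHub repo, not shown here), a naive exact-match scorer for a batch of questions could look like:

```python
import re

def exact_match_score(prediction: str, gold: str) -> float:
    """Toy grader: whitespace- and case-insensitive exact match (illustrative only)."""
    norm = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    return 1.0 if norm(prediction) == norm(gold) else 0.0

def average_score(predictions, golds) -> float:
    """Mean per-question score, a stand-in for a per-task MMTU metric."""
    pairs = list(zip(predictions, golds))
    return sum(exact_match_score(p, g) for p, g in pairs) / len(pairs)
```

Real MMTU tasks need far richer checks (e.g. executing SQL or comparing output tables), which is why the evaluation harness is a separate repository.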

<img width="839" alt="mmtu" src="https://github.com/user-attachments/assets/95dd2a05-755e-40cf-a6cb-9d2953394241" />

## Dataset Creation

MMTU was developed through the meticulous curation of 52 datasets across 25 task categories, each carefully labeled by computer science researchers, drawn from decades' worth of research on tabular data in communities such as data management (SIGMOD/VLDB), programming languages (PLDI/POPL), and web data (WWW/WSDM). The benchmark emphasizes real-world, complex table tasks encountered by professional users, tasks that demand advanced skills in table understanding, coding, and reasoning. Please see the table below for key statistics of the benchmark.

<div align="center">
<img src="https://github.com/user-attachments/assets/fc59e5fc-964b-4716-8e31-657edbdd7edb" width="400"/>
</div>

A complete list of tasks: 'table-transform-by-relationalization', 'table-transform-by-output-schema', 'table-transform-by-output-table', 'Entity matching', 'Schema matching', 'Head value matching', 'data-imputation', 'error-detection', 'list-to-table', 'semantic-join', 'equi-join-detect', 'program-transform-by-example', 'formula-by-context', 'semantic-transform-by-example', 'arithmetic-relationship', 'functional-relationship', 'string-relationship', 'Needle-in-a-haystack-table', 'Needle-in-a-haystack-index', 'NL-2-SQL', 'Table Question Answering', 'Fact Verification', 'Column type annotation', 'Column property annotation', 'Cell entity annotation'.

## Leaderboard

| **Model Type** | **Model**                | **MMTU Score**   |
|----------------|--------------------------|------------------|
| Reasoning      | o4-mini                  | **0.639 ± 0.01** |
| Reasoning      | DeepSeek-R1              | 0.596 ± 0.01     |
| Chat           | DeepSeek-V3              | 0.517 ± 0.01     |
| Chat           | GPT-4o (2024-11-20)      | 0.491 ± 0.01     |
| Chat           | Llama-3.3-70B            | 0.438 ± 0.01     |
| Chat           | Mistral-Large-2411       | 0.430 ± 0.01     |
| Chat           | Mistral-Small-2503       | 0.402 ± 0.01     |
| Chat           | GPT-4o-mini (2024-07-18) | 0.386 ± 0.01     |
| Chat           | Llama-3.1-8B             | 0.259 ± 0.01     |
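The card does not state how per-task results are combined into the single MMTU Score or how the ± margin is computed. Under the common assumptions of a macro-average over tasks and a standard-error margin (both assumptions, not the paper's stated method), the aggregation would look like:

```python
import math

def macro_average(per_task_scores: dict) -> float:
    """Mean of per-task means, so each task counts equally regardless of
    its number of questions (an assumed aggregation scheme)."""
    task_means = [sum(s) / len(s) for s in per_task_scores.values()]
    return sum(task_means) / len(task_means)

def standard_error(scores) -> float:
    """Standard error of the mean, one plausible reading of the ± margin."""
    m = sum(scores) / len(scores)
    var = sum((x - m) ** 2 for x in scores) / (len(scores) - 1)
    return math.sqrt(var / len(scores))
```

See the paper for the benchmark's actual scoring protocol.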

## Language

English
- metadata: Supplementary information associated with the MMTU instance, typically used for evaluation purposes.
- task: The specific subtask category within the MMTU framework to which the instance belongs.
- dataset: The original source dataset from which the MMTU instance is derived.
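
Given these fields, a quick way to slice the benchmark by task is a small pandas sketch. The records below are invented placeholders; only the column names follow the field list above, and real instances would be read from the dataset's parquet files (e.g. via `pandas.read_parquet` or the `datasets` library):

```python
import pandas as pd

# Invented toy records mirroring the documented fields (task, dataset, metadata).
rows = [
    {"task": "NL-2-SQL",        "dataset": "src-a", "metadata": {"split": "test"}},
    {"task": "NL-2-SQL",        "dataset": "src-a", "metadata": {"split": "test"}},
    {"task": "error-detection", "dataset": "src-b", "metadata": {"split": "test"}},
]
df = pd.DataFrame(rows)

# Instance counts per task, a typical first sanity check on the benchmark.
per_task = df.groupby("task").size().sort_values(ascending=False)
```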