  - split: train
    path: data/train-*
---

# Dataset Card for MMTU
## Dataset Summary
<!-- add link -->
MMTU: A Massive Multi-Task Table Understanding and Reasoning Benchmark
by Junjie Xing, [Yeye He](https://www.microsoft.com/en-us/research/people/yeyehe/), Mengyu Zhou, Haoyu Dong, Shi Han, Lingjiao Chen, Dongmei Zhang, [Surajit Chaudhuri](https://www.microsoft.com/en-us/research/people/surajitc/), and [H. V. Jagadish](https://web.eecs.umich.edu/~jag/).
This is a large-scale benchmark designed to evaluate the table reasoning capabilities of large language models (LLMs). It consists of over 30,000 questions across 25 real-world table tasks, focusing on deep understanding, reasoning, and manipulation of tabular data.
These tasks are curated from decades of computer science research and represent challenges encountered by expert users in real applications, making MMTU a rigorous test for LLMs aspiring to professional-level table understanding.
A complete list of tasks: 'table-transform-by-relationalization', 'table-transform-by-output-schema', 'table-transform-by-output-table', 'Entity matching', 'Schema matching', 'Head value matching', 'data-imputation', 'error-detection', 'list-to-table', 'semantic-join', 'equi-join-detect', 'program-transform-by-example', 'formula-by-context', 'semantic-transform-by-example', 'arithmetic-relationship', 'functional-relationship', 'string-relationship', 'Needle-in-a-haystack-table', 'Needle-in-a-haystack-index', 'NL-2-SQL', 'Table Question Answering', 'Fact Verification', 'Column type annotation', 'Column property annotation', 'Cell entity annotation'.
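For programmatic filtering, the 25 task names above can be kept as a plain list. This is only a convenience sketch: the strings are copied verbatim from the list above, but the `MMTU_TASKS` name is ours, not part of the dataset.

```python
# All 25 MMTU task names, copied verbatim from the list above.
# The MMTU_TASKS name itself is illustrative, not part of the dataset.
MMTU_TASKS = [
    "table-transform-by-relationalization",
    "table-transform-by-output-schema",
    "table-transform-by-output-table",
    "Entity matching",
    "Schema matching",
    "Head value matching",
    "data-imputation",
    "error-detection",
    "list-to-table",
    "semantic-join",
    "equi-join-detect",
    "program-transform-by-example",
    "formula-by-context",
    "semantic-transform-by-example",
    "arithmetic-relationship",
    "functional-relationship",
    "string-relationship",
    "Needle-in-a-haystack-table",
    "Needle-in-a-haystack-index",
    "NL-2-SQL",
    "Table Question Answering",
    "Fact Verification",
    "Column type annotation",
    "Column property annotation",
    "Cell entity annotation",
]

# Sanity check against the "25 real-world table tasks" stated above.
assert len(MMTU_TASKS) == 25
```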
## Leaderboards

| **Model Type** | **Model**     | **MMTU Score**   |
|----------------|---------------|------------------|
| Reasoning      | o4-mini       | **0.637 ± 0.01** |
| Reasoning      | Deepseek-R1   | 0.557 ± 0.01     |
| Chat           | Deepseek-V3   | 0.517 ± 0.01     |
| Chat           | GPT-4o        | 0.490 ± 0.01     |
| Chat           | Llama-3.3-70B | 0.438 ± 0.01     |
| Chat           | Mistral-Large | 0.430 ± 0.01     |
| Chat           | Mistral-Small | 0.402 ± 0.01     |
| Chat           | GPT-4o-mini   | 0.386 ± 0.01     |
| Chat           | Llama-3.1-8B  | 0.259 ± 0.01     |

## Language
English
## Data Structure
### Data Fields
- prompt: The prompt presented in the MMTU instance.
- metadata: Supplementary information associated with the MMTU instance, typically used for evaluation purposes.
- task: The specific subtask category within the MMTU framework to which the instance belongs.
- dataset: The original source dataset from which the MMTU instance is derived.
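A single instance can thus be pictured as a flat record. The sketch below is illustrative: only the four field names (`prompt`, `metadata`, `task`, `dataset`) come from this card, while all values are invented. It shows one plausible way to group instances by `task`, e.g. for per-task scoring.

```python
from collections import defaultdict

# Illustrative MMTU-style records. Only the field names (prompt, metadata,
# task, dataset) are taken from the card; every value here is made up.
instances = [
    {
        "prompt": "Given the table below, answer the question: ...",
        "metadata": {"answer": "42"},
        "task": "Table Question Answering",
        "dataset": "example-source-dataset",
    },
    {
        "prompt": "Verify the claim against the table: ...",
        "metadata": {"label": "REFUTED"},
        "task": "Fact Verification",
        "dataset": "another-source-dataset",
    },
]

# Group instances by subtask, e.g. to compute per-task scores.
by_task = defaultdict(list)
for ex in instances:
    by_task[ex["task"]].append(ex)

for task, exs in sorted(by_task.items()):
    print(f"{task}: {len(exs)} instance(s)")
```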
## Dataset Creation
Please refer to Section 3.2 in the paper. <!-- add link -->