---
dataset_info:
  features:
    - name: prompt
      dtype: string
    - name: metadata
      dtype: string
    - name: task
      dtype: string
    - name: dataset
      dtype: string
  splits:
    - name: train
      num_bytes: 432885480
      num_examples: 30647
  download_size: 132698519
  dataset_size: 432885480
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - table-question-answering
---

# Dataset Card for MMTU

## Dataset Summary

MMTU: A Massive Multi-Task Table Understanding and Reasoning Benchmark by Junjie Xing, Yeye He, Mengyu Zhou, Haoyu Dong, Shi Han, Lingjiao Chen, Dongmei Zhang, Surajit Chaudhuri, and H. V. Jagadish.

Paper | Code

MMTU is a large-scale benchmark designed to evaluate the table reasoning capabilities of large language models (LLMs). It consists of over 30,000 questions across 25 real-world table tasks, focusing on deep understanding, reasoning, and manipulation of tabular data.

These tasks are curated from decades of computer science research and represent challenges encountered by expert users in real applications, making MMTU a rigorous test for LLMs aspiring to professional-level table understanding.

The complete list of tasks: `table-transform-by-relationalization`, `table-transform-by-output-schema`, `table-transform-by-output-table`, `Entity matching`, `Schema matching`, `Head value matching`, `data-imputation`, `error-detection`, `list-to-table`, `semantic-join`, `equi-join-detect`, `program-transform-by-example`, `formula-by-context`, `semantic-transform-by-example`, `arithmetic-relationship`, `functional-relationship`, `string-relationship`, `Needle-in-a-haystack-table`, `Needle-in-a-haystack-index`, `NL-2-SQL`, `Table Question Answering`, `Fact Verification`, `Column type annotation`, `Column property annotation`, `Cell entity annotation`.
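Because every instance carries its subtask label in the `task` field, evaluating a single task reduces to a simple filter. A minimal pandas sketch, using a tiny in-memory stand-in for the real data (the prompts, metadata values, and source names below are invented for illustration):

```python
import pandas as pd

# Tiny in-memory stand-in mirroring the MMTU schema (prompt/metadata/task/dataset).
# The real benchmark ships as parquet, so the same filter works on the actual
# train split once it is read into a DataFrame.
df = pd.DataFrame({
    "prompt":   ["Verify the claim ...", "Translate to SQL ...", "Fill the missing cell ..."],
    "metadata": ['{"answer": "yes"}', '{"sql": "SELECT 1"}', '{"value": "42"}'],
    "task":     ["Fact Verification", "NL-2-SQL", "data-imputation"],
    "dataset":  ["source-a", "source-b", "source-c"],
})

# Restrict evaluation to a single subtask by filtering on the `task` column.
nl2sql = df[df["task"] == "NL-2-SQL"]
print(len(nl2sql))                          # 1
print(df["task"].value_counts().to_dict())  # one instance per task in this toy frame
```

The same pattern extends to grouping by `task` to compute per-task metrics before aggregating.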

## Leaderboards

| Model Type | Model         | MMTU Score   |
|------------|---------------|--------------|
| Reasoning  | o4-mini       | 0.637 ± 0.01 |
| Reasoning  | Deepseek-R1   | 0.557 ± 0.01 |
| Chat       | Deepseek-V3   | 0.517 ± 0.01 |
| Chat       | GPT-4o        | 0.490 ± 0.01 |
| Chat       | Llama-3.3-70B | 0.438 ± 0.01 |
| Chat       | Mistral-Large | 0.430 ± 0.01 |
| Chat       | Mistral-Small | 0.402 ± 0.01 |
| Chat       | GPT-4o-mini   | 0.386 ± 0.01 |
| Chat       | Llama-3.1-8B  | 0.259 ± 0.01 |
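The leaderboard reports a single aggregate per model with an uncertainty term. A minimal sketch of one plausible aggregation, the mean of per-task scores with its standard error (the per-task numbers below are invented, and the paper's exact aggregation may differ):

```python
import statistics

# Hypothetical per-task scores for one model; the real benchmark has 25 tasks.
task_scores = {
    "Fact Verification": 0.61,
    "NL-2-SQL": 0.48,
    "data-imputation": 0.55,
    "error-detection": 0.52,
}

scores = list(task_scores.values())
mean = statistics.mean(scores)
# Standard error of the mean: sample stdev divided by sqrt(number of tasks).
stderr = statistics.stdev(scores) / len(scores) ** 0.5
print(f"{mean:.3f} ± {stderr:.2f}")
```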

## Language

English

## Data Structure

### Data Fields

- `prompt`: The prompt presented to the model for this MMTU instance.
- `metadata`: Supplementary information associated with the instance, typically used for evaluation.
- `task`: The MMTU subtask category to which the instance belongs.
- `dataset`: The original source dataset from which the instance is derived.
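Note that `metadata` is stored as a plain string even when it encodes structured evaluation information. A defensive JSON parse is therefore a reasonable first step; whether every instance is valid JSON is an assumption, hence the fallback (the example record below is invented):

```python
import json

def parse_metadata(raw: str):
    """Best-effort decode of an MMTU `metadata` string.

    Returns the decoded object when the string is valid JSON, otherwise
    the raw string unchanged (not every instance is guaranteed to be JSON).
    """
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return raw

# Hypothetical record illustrating the four fields of an MMTU instance.
record = {
    "prompt": "Given the table below, verify the claim ...",
    "metadata": '{"label": "entailed"}',
    "task": "Fact Verification",
    "dataset": "example-source",
}
print(parse_metadata(record["metadata"]))  # {'label': 'entailed'}
print(parse_metadata("free-text note"))    # falls back to the raw string
```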

## Dataset Creation

Please refer to Section 3.2 of the paper for details on how the benchmark was constructed.