dataset_info:
  features:
    - name: example_id
      dtype: string
    - name: task
      dtype:
        class_label:
          names:
            '0': col_type
            '1': ent_link
            '2': fetaqa
            '3': hitab
            '4': hybridqa
            '5': infotabs
            '6': merged_cell_detection
            '7': rel_extraction
            '8': row_column_extraction
            '9': struct_aware_parse
            '10': tabfact
            '11': table_cell_extraction
            '12': table_cell_location
            '13': table_instruction
            '14': table_recognition
            '15': table_size_detection
            '16': tabmwp
            '17': tat-qa
            '18': totto
            '19': wikibio
            '20': wikitq
    - name: src_example_ids
      dtype: string
    - name: table_id
      dtype: string
    - name: table_seed_id
      dtype: string
    - name: table_seed_dataset
      dtype: string
    - name: table_page_title
      dtype: string
    - name: table_section_title
      dtype: string
    - name: table_variant
      dtype: string
    - name: img_source
      dtype:
        class_label:
          names:
            '0': PubTabNet
            '1': TABMWP
            '2': seed_render
            '3': wikipedia
    - name: input
      dtype: large_string
    - name: output
      dtype: string
    - name: split
      dtype:
        class_label:
          names:
            '0': dev
            '1': test
            '2': train
    - name: metadata
      dtype: string
    - name: table_wiki_page_id
      dtype: string
    - name: table_wiki_old_id
      dtype: string
    - name: table_html
      dtype: large_string
    - name: table_img
      dtype: image
  splits:
    - name: wikibio_train
      num_bytes: 11073920562
      num_examples: 140000
    - name: tatqa_train
      num_bytes: 114858299
      num_examples: 2201
    - name: hybridqa_train
      num_bytes: 14932215002
      num_examples: 62670
    - name: table_instruction_train
      num_bytes: 12180312819
      num_examples: 136944
    - name: tabmwp_train
      num_bytes: 326184930
      num_examples: 23059
    - name: hitab_train
      num_bytes: 1026591527
      num_examples: 7417
    - name: table_recognition_train
      num_bytes: 788489258
      num_examples: 6927
    - name: table_cell_extraction_train
      num_bytes: 2102295132
      num_examples: 7727
    - name: table_size_detection_train
      num_bytes: 1695496703
      num_examples: 7800
    - name: merged_cell_detection_train
      num_bytes: 993975274
      num_examples: 7500
    - name: table_cell_location_train
      num_bytes: 1227016660
      num_examples: 7708
    - name: tabfact_train
      num_bytes: 10356555069
      num_examples: 87717
    - name: totto_train
      num_bytes: 36076773336
      num_examples: 110934
    - name: row_column_extraction_train
      num_bytes: 2261383645
      num_examples: 7721
    - name: infotabs_train
      num_bytes: 1486818637
      num_examples: 16538
    - name: wikitq_train
      num_bytes: 3661437235
      num_examples: 14152
    - name: struct_aware_parse_train
      num_bytes: 2711066295
      num_examples: 126581
    - name: fetaqa_train
      num_bytes: 366072076
      num_examples: 3006
  download_size: 97565644868
  dataset_size: 103381462459
configs:
  - config_name: default
    data_files:
      - split: wikibio_train
        path: data/wikibio_train-*
      - split: tatqa_train
        path: data/tatqa_train-*
      - split: hybridqa_train
        path: data/hybridqa_train-*
      - split: table_instruction_train
        path: data/table_instruction_train-*
      - split: tabmwp_train
        path: data/tabmwp_train-*
      - split: hitab_train
        path: data/hitab_train-*
      - split: fetaqa_train
        path: data/fetaqa_train-*
      - split: table_recognition_train
        path: data/table_recognition_train-*
      - split: table_cell_extraction_train
        path: data/table_cell_extraction_train-*
      - split: table_size_detection_train
        path: data/table_size_detection_train-*
      - split: merged_cell_detection_train
        path: data/merged_cell_detection_train-*
      - split: table_cell_location_train
        path: data/table_cell_location_train-*
      - split: tabfact_train
        path: data/tabfact_train-*
      - split: totto_train
        path: data/totto_train-*
      - split: row_column_extraction_train
        path: data/row_column_extraction_train-*
      - split: infotabs_train
        path: data/infotabs_train-*
      - split: wikitq_train
        path: data/wikitq_train-*
      - split: struct_aware_parse_train
        path: data/struct_aware_parse_train-*

# TABLET-Small

This is the small-sized train set of the TABLET dataset. It contains the train examples for 14 TABLET tasks.
Each task is capped at 140,000 examples, for a total of 776,602 training examples across 14 tasks.
The dataset is self-contained: each example includes a table image, its HTML representation, and the associated task data.
However, if you're interested in downloading just the TABLET tables, check out TABLET-tables.

All TABLET Subsets:

- (train) TABLET-Small: The smallest TABLET subset, including 776,602 examples across 14 tasks.
- (train) TABLET-Medium: Includes all examples from TABLET-Small, plus the Column Type Annotation, Entity Linking, and Relation Extraction tasks. Each task is capped at 140,000 examples, resulting in a total of 1,117,217 training examples across 17 tasks.
- (train) TABLET-Large: Includes all examples from TABLET-Medium with no cap on task size, resulting in a total of 3,505,311 training examples across 17 tasks.
- (dev) TABLET-dev: The development set of TABLET.
- (test) TABLET-test: The test set of TABLET.

For more information, see our paper, website, and GitHub repository.

## Using the Dataset

Given its size, we recommend streaming the dataset instead of downloading it entirely to disk:

```python
from datasets import load_dataset

dataset = load_dataset('alonsoapp/TABLET-Small', split='fetaqa_train', streaming=True)
print(next(iter(dataset)))
```

## Data Fields

Each sample within the dataset is structured with the following fields:

- `example_id`: Unique identifier for the example.
- `task`: The name of the task this example belongs to.
- `src_example_ids`: IDs of the original examples from the source dataset, formatted as {"Dataset name": "id"}. Use the get_original_example helper function from our published code to easily retrieve the source example.
- `table_id`: Unique identifier for the table.
- `table_seed_id`: ID referencing the table in its original (seed) dataset.
- `table_seed_dataset`: Name of the dataset where the table originated, typically matching the source dataset of the example.
- `table_page_title`: For tables sourced from Wikipedia, the corresponding page title.
- `table_section_title`: For Wikipedia tables, the title of the section where the table appears.
- `table_variant`: Either "raw" or "highlighted". Some examples visually highlight specific cells; this field indicates whether the table is unmodified (raw) or includes highlights (highlighted).
- `img_source`: Source of the table image: a Wikipedia visualization (wikipedia), a synthetic rendering of the data in the source dataset (seed_render), or a direct copy of the table's original visualization in the source dataset (PubTabNet, TabMWP).
- `input`: The instructified input used for training and evaluation (see paper). The input can be rephrased using the information in metadata.
- `output`: The expected model output for the given input.
- `split`: Dataset split: train, dev, or test.
- `metadata`: Atomic data for the example, enabling reconstruction or rephrasing of the instruction. Each key names a data element; its value is the substring of either the 'input' or the 'output' string delimited by the character indexes in 'idx'. Use the get_metadata helper function from our published code to retrieve these values.
- `table_wiki_page_id`: For Wikipedia tables, the page ID of the article containing the table (useful for Wikipedia API queries).
- `table_wiki_old_id`: For Wikipedia tables, the "old ID" identifying the article version at crawl time.
- `table_html`: HTML representation of the table. Use the render_table helper function from our code to render it in its original style. In highlighted variants, highlighted cells carry the CSS class demeter_highlighted_cell; remove any decorators for this class from the CSS to render the table identically to the raw version.
- `table_img`: The image representation of the table.
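The get_metadata helper in our published code is the canonical way to read the metadata field. As a rough illustration of the 'idx' mechanism only, here is a minimal sketch that assumes each metadata entry stores a source field name plus [start, end) character offsets; the entry names and exact schema below are hypothetical, not the real TABLET format:

```python
import json

# Hypothetical sketch: assumes each metadata entry looks like
# {"field": "input" | "output", "idx": [start, end]}. The actual schema is
# defined by the get_metadata helper in the TABLET repository.
def get_metadata_value(example: dict, key: str) -> str:
    entry = json.loads(example["metadata"])[key]
    start, end = entry["idx"]
    return example[entry["field"]][start:end]

# Toy example with made-up content and offsets:
example = {
    "input": "What is the total revenue in 2021?",
    "output": "1,234",
    "metadata": json.dumps({"year": {"field": "input", "idx": [29, 33]}}),
}
print(get_metadata_value(example, "year"))  # -> 2021
```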

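To render a highlighted table_html the same way as its raw counterpart, the decorators for the demeter_highlighted_cell class need to be removed. A minimal sketch of that cleanup (the regexes assume inline `<style>` rules; real tables may structure their CSS differently):

```python
import re

def strip_highlights(table_html: str) -> str:
    # Drop any inline CSS rule targeting the highlight class first, so the
    # selector is still intact when we look for it.
    html = re.sub(r"\.demeter_highlighted_cell\s*\{[^}]*\}", "", table_html)
    # Then remove the class token itself from cell attributes.
    return re.sub(r"\bdemeter_highlighted_cell\b", "", html)

# Toy highlighted fragment:
sample = (
    '<style>.demeter_highlighted_cell { background: #ffef9e; }</style>'
    '<table><tr><td class="demeter_highlighted_cell">5</td></tr></table>'
)
print(strip_highlights(sample))
```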
## Citation

If you find TABLET useful in your research, please consider citing it with the following BibTeX entry:

```bibtex
@misc{alonso2025tabletlargescaledatasetrobust,
      title={TABLET: A Large-Scale Dataset for Robust Visual Table Understanding},
      author={Iñigo Alonso and Imanol Miranda and Eneko Agirre and Mirella Lapata},
      year={2025},
      eprint={2509.21205},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.21205},
}
```