---
dataset_info:
  features:
  - name: example_id
    dtype: string
  - name: task
    dtype:
      class_label:
        names:
          '0': col_type
          '1': ent_link
          '2': fetaqa
          '3': hitab
          '4': hybridqa
          '5': infotabs
          '6': merged_cell_detection
          '7': rel_extraction
          '8': row_column_extraction
          '9': struct_aware_parse
          '10': tabfact
          '11': table_cell_extraction
          '12': table_cell_location
          '13': table_instruction
          '14': table_recognition
          '15': table_size_detection
          '16': tabmwp
          '17': tat-qa
          '18': totto
          '19': wikibio
          '20': wikitq
  - name: src_example_ids
    dtype: string
  - name: table_id
    dtype: string
  - name: table_seed_id
    dtype: string
  - name: table_seed_dataset
    dtype: string
  - name: table_page_title
    dtype: string
  - name: table_section_title
    dtype: string
  - name: table_variant
    dtype: string
  - name: img_source
    dtype:
      class_label:
        names:
          '0': PubTabNet
          '1': TABMWP
          '2': seed_render
          '3': wikipedia
  - name: input
    dtype: large_string
  - name: output
    dtype: string
  - name: split
    dtype:
      class_label:
        names:
          '0': dev
          '1': test
          '2': train
  - name: metadata
    dtype: string
  - name: table_wiki_page_id
    dtype: string
  - name: table_wiki_old_id
    dtype: string
  - name: table_html
    dtype: large_string
  - name: table_img
    dtype: image
  splits:
  - name: wikibio_train
    num_bytes: 11073920562
    num_examples: 140000
  - name: tatqa_train
    num_bytes: 114858299
    num_examples: 2201
  - name: hybridqa_train
    num_bytes: 14932215002
    num_examples: 62670
  - name: table_instruction_train
    num_bytes: 12180312819
    num_examples: 136944
  - name: tabmwp_train
    num_bytes: 326184930
    num_examples: 23059
  - name: hitab_train
    num_bytes: 1026591527
    num_examples: 7417
  - name: table_recognition_train
    num_bytes: 788489258
    num_examples: 6927
  - name: table_cell_extraction_train
    num_bytes: 2102295132
    num_examples: 7727
  - name: table_size_detection_train
    num_bytes: 1695496703
    num_examples: 7800
  - name: merged_cell_detection_train
    num_bytes: 993975274
    num_examples: 7500
  - name: table_cell_location_train
    num_bytes: 1227016660
    num_examples: 7708
  - name: tabfact_train
    num_bytes: 10356555069
    num_examples: 87717
  - name: totto_train
    num_bytes: 36076773336
    num_examples: 110934
  - name: row_column_extraction_train
    num_bytes: 2261383645
    num_examples: 7721
  - name: infotabs_train
    num_bytes: 1486818637
    num_examples: 16538
  - name: wikitq_train
    num_bytes: 3661437235
    num_examples: 14152
  - name: struct_aware_parse_train
    num_bytes: 2711066295
    num_examples: 126581
  - name: fetaqa_train
    num_bytes: 366072076
    num_examples: 3006
  download_size: 97565644868
  dataset_size: 103381462459
configs:
- config_name: default
  data_files:
  - split: wikibio_train
    path: data/wikibio_train-*
  - split: tatqa_train
    path: data/tatqa_train-*
  - split: hybridqa_train
    path: data/hybridqa_train-*
  - split: table_instruction_train
    path: data/table_instruction_train-*
  - split: tabmwp_train
    path: data/tabmwp_train-*
  - split: hitab_train
    path: data/hitab_train-*
  - split: fetaqa_train
    path: data/fetaqa_train-*
  - split: table_recognition_train
    path: data/table_recognition_train-*
  - split: table_cell_extraction_train
    path: data/table_cell_extraction_train-*
  - split: table_size_detection_train
    path: data/table_size_detection_train-*
  - split: merged_cell_detection_train
    path: data/merged_cell_detection_train-*
  - split: table_cell_location_train
    path: data/table_cell_location_train-*
  - split: tabfact_train
    path: data/tabfact_train-*
  - split: totto_train
    path: data/totto_train-*
  - split: row_column_extraction_train
    path: data/row_column_extraction_train-*
  - split: infotabs_train
    path: data/infotabs_train-*
  - split: wikitq_train
    path: data/wikitq_train-*
  - split: struct_aware_parse_train
    path: data/struct_aware_parse_train-*
---
### TABLET-Small
This is the _Small_-sized **train set** of the **TABLET** dataset, containing the training examples for 14 **TABLET** tasks.
Each task is capped at **140,000 examples**, for a total of **776,602 training examples** across **14 tasks**.
The dataset is self-contained: each example includes a table image, its HTML representation, and the associated task data.
However, if you're interested in downloading just the TABLET tables, check out [TABLET-tables](https://huggingface.co/datasets/alonsoapp/TABLET-tables).
All TABLET Subsets:
- _(train)_ [**TABLET-Small**](https://huggingface.co/datasets/alonsoapp/TABLET-Small): The smallest TABLET subset, including **776,602 examples** across **14 tasks**.
- _(train)_ [**TABLET-Medium**](https://huggingface.co/datasets/alonsoapp/TABLET-Medium): Includes all examples from _TABLET-Small_, plus **Column Type Annotation**, **Entity Linking**, and **Relation Extraction** tasks. Each task is capped at **140,000 examples**, resulting in a total of **1,117,217 training examples** across **17 tasks**.
- _(train)_ [**TABLET-Large**](https://huggingface.co/datasets/alonsoapp/TABLET-Large): Includes all examples from _TABLET-Medium_ with **no cap** on task size, resulting in a total of **3,505,311 training examples** across **17 tasks**.
- _(dev)_ [**TABLET-dev**](https://huggingface.co/datasets/alonsoapp/TABLET-dev): The **development** set of TABLET.
- _(test)_ [**TABLET-test**](https://huggingface.co/datasets/alonsoapp/TABLET-test): The **test** set of TABLET.
For more information, see our [paper](https://arxiv.org/pdf/2509.21205), [website](https://precious-panda-5ce815.netlify.app/tablet/), and [GitHub repository](https://github.com/AlonsoApp/TABLET).
#### Using the Dataset
Given its size, we recommend [streaming](https://huggingface.co/docs/datasets/stream) the dataset instead of downloading it entirely to disk:
```python
from datasets import load_dataset
dataset = load_dataset('alonsoapp/TABLET-Small', split='fetaqa_train', streaming=True)
print(next(iter(dataset)))
```
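Streamed splits are plain Python iterables, so you can preview a handful of examples without consuming the whole split. A minimal sketch using `itertools.islice` (the toy list below stands in for the real streamed dataset, whose records carry the fields described under *Data Fields*):

```python
from itertools import islice

# Stand-in for the streamed dataset; with the real data you would instead use:
#   dataset = load_dataset('alonsoapp/TABLET-Small', split='fetaqa_train', streaming=True)
dataset = [{"example_id": f"ex_{i}", "task": "fetaqa"} for i in range(1000)]

# Take only the first 3 examples; nothing past them is ever consumed.
preview = list(islice(dataset, 3))
for ex in preview:
    print(ex["example_id"], ex["task"])
```

The same pattern works on the real `IterableDataset`, which avoids downloading the full ~100 GB of data just to inspect a few rows.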
#### Data Fields
Each sample within the dataset is structured with the following fields:
* **`example_id`**: Unique identifier for the example.
* **`task`**: The name of the task this example belongs to.
* **`src_example_ids`**: IDs of the original examples from the source dataset, formatted as `{"Dataset name": "id"}`. Use the `get_original_example` helper function from [our published code](https://github.com/AlonsoApp/TABLET) to easily retrieve the source example.
* **`table_id`**: Unique identifier for the table.
* **`table_seed_id`**: ID referencing the table in its original (seed) dataset.
* **`table_seed_dataset`**: Name of the dataset where the table originated, typically matching the source dataset of the example.
* **`table_page_title`**: For tables sourced from Wikipedia, the corresponding page title.
* **`table_section_title`**: For Wikipedia tables, the title of the section where the table appears.
* **`table_variant`**: Either `raw` or `highlighted`. Some examples visually highlight specific cells; this field indicates whether the table is unmodified (`raw`) or includes highlights (`highlighted`).
* **`img_source`**: Source of the table image: the Wikipedia visualization (`wikipedia`), a synthetic rendering of the data in the source dataset (`seed_render`), or a direct copy of the table's original visualization in the source dataset (`PubTabNet`, `TABMWP`).
* **`input`**: The _instructified_ input used for training and evaluation (see [paper](https://arxiv.org/pdf/2509.21205)). The input can be rephrased using information in `metadata`.
* **`output`**: The expected model output for the given `input`.
* **`split`**: Dataset split: `train`, `dev`, or `test`.
* **`metadata`**: Atomic data for the example, enabling reconstruction or rephrasing of the instruction. Each key names a data element; its value is the substring of either the `input` or the `output` string delimited by the character indexes in `idx`. Use the `get_metadata` helper function from [our published code](https://github.com/AlonsoApp/TABLET) to retrieve these values.
* **`table_wiki_page_id`**: For Wikipedia tables, the page ID corresponding to the article containing the table (useful for Wikipedia API queries).
* **`table_wiki_old_id`**: For Wikipedia tables, the "old ID" (revision ID) identifying the article version at crawl time.
* **`table_html`**: HTML representation of the table. Use the `render_table` helper function from [our code](https://github.com/AlonsoApp/TABLET) to render it in its original style. In highlighted variants, highlighted cells carry the CSS class `demeter_highlighted_cell`; remove the style rules for this class to render the table identically to the raw version.
* **`table_img`**: The image representation of the table.
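As a rough illustration of how these fields fit together, the sketch below parses `src_example_ids` and recovers one `metadata` value by slicing the referenced string with its character indexes. The field contents and the inner JSON shape of `metadata` (a `{"src": ..., "idx": [start, end]}` entry per key) are assumptions made for illustration only; for real records, prefer the `get_metadata` and `get_original_example` helpers from the published code.

```python
import json

# A toy example record; all contents are illustrative, not real TABLET data.
example = {
    "input": "Answer the question using the table. Question: Who won in 1998?",
    "output": "The winner in 1998 was Jane Doe.",
    # Documented format: {"Dataset name": "id"}
    "src_example_ids": '{"FeTaQA": "12345"}',
    # Assumed metadata shape: each key maps to the source string ("input" or
    # "output") and the [start, end) character indexes of its value.
    "metadata": '{"question": {"src": "input", "idx": [47, 63]}}',
}

# Identify the source dataset and the original example ID.
src_ids = json.loads(example["src_example_ids"])

# Extract a metadata value by slicing the referenced string with its indexes.
meta = json.loads(example["metadata"])
entry = meta["question"]
start, end = entry["idx"]
question = example[entry["src"]][start:end]
```

Here `question` comes back as the substring of `input` between the two indexes; the helper functions in the repository wrap exactly this kind of lookup with the dataset's actual metadata schema.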
#### Citation
If you find **TABLET** useful in your research, please consider citing it with the following BibTeX entry:
```bibtex
@misc{alonso2025tabletlargescaledatasetrobust,
title={TABLET: A Large-Scale Dataset for Robust Visual Table Understanding},
author={Iñigo Alonso and Imanol Miranda and Eneko Agirre and Mirella Lapata},
year={2025},
eprint={2509.21205},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.21205},
}
``` |