---
license: other
license_name: other
license_link: >-
  https://huggingface.co/datasets/approximatelabs/tablib-v1-sample/blob/main/README.md
task_categories:
- tabular-classification
- tabular-regression
language:
- en
pretty_name: T4 (The Tremendous TabLib Trawl)
size_categories:
- 1B<n<10B
---

The Tremendous TabLib Trawl (T4) is a dataset for training tabular foundation models.
The dataset is described in detail in our paper, ["Large Scale Transfer Learning for Tabular Data via Language Modeling."](https://arxiv.org/abs/2406.12031)
The paper also includes a datasheet for this dataset.

T4 consists of a set of Parquet files (described below). For examples and infrastructure showing how to train a language model
on T4, see our open-source Python library, [rtfm](https://github.com/mlfoundations/rtfm), which was used to train TabuLa-8B on T4.

# Files and Directory Structure

The T4 dataset contains approximately 3.1M tables. Each table is a separate Parquet file, named according to the `content_hash` of the dataset in TabLib.
The dataset is stored in "chunk" subdirectories, which represent batches of tables from the preprocessing phase.
Each chunk directory (e.g. `chunk-0000`) is stored as a single .zip file; unzip these files to access the underlying Parquet files.

The dataset occupies a total of 219GB compressed (1.34TB uncompressed) on disk.

# License and Acceptable Use

We release this dataset under the same license as the original corpus from which it was derived, TabLib.

**By using this dataset, you are acknowledging that you have permission to access the TabLib dataset,
and you agree to abide by the terms of use and license of TabLib.**

TabLib can be accessed on [HF Datasets](https://huggingface.co/datasets/approximatelabs/tablib-v1-full), and you can read more about TabLib in the associated [paper](https://arxiv.org/abs/2310.07875) and [blog post](https://www.approximatelabs.com/blog/tablib).

We claim no affiliation with the original creators of TabLib, and this dataset release is not associated with Approximate Labs
(but we are grateful to the original TabLib authors for their contributions to the research community and for releasing TabLib).