Retrieve-and-Verify: Column Annotation Datasets and Resources (CTA / CPA)
This dataset repository accompanies the paper:
Retrieve-and-Verify: A Table Context Selection Framework for Accurate Column Annotations
Zhihao Ding, Yongkang Sun, Jieming Shi. Proc. ACM Manag. Data (Dec 2025).
The work targets two column annotation tasks:
- Column Type Annotation (CTA): assign a semantic type to a target column.
- Column Property Annotation (CPA) (a.k.a. column relation/property annotation): assign a semantic relation/property between a target column and another column.
Citation
If you use this dataset, please cite:
@article{10.1145/3769823,
  author = {Ding, Zhihao and Sun, Yongkang and Shi, Jieming},
  title = {Retrieve-and-Verify: A Table Context Selection Framework for Accurate Column Annotations},
  year = {2025},
  issue_date = {December 2025},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {3},
  number = {6},
  url = {https://doi.org/10.1145/3769823},
  doi = {10.1145/3769823},
  abstract = {Tables are a prevalent format for structured data, yet their metadata, such as semantic types and column relationships, is often incomplete or ambiguous. Column annotation tasks, including Column Type Annotation (CTA) and Column Property Annotation (CPA), address this by leveraging table context, which are critical for data management. Existing methods typically serialize all columns in a table into pretrained language models to incorporate context, but this coarse-grained approach often degrades performance in wide tables with many irrelevant or misleading columns. To address this, we propose a novel retrieve-and-verify context selection framework for accurate column annotation, introducing two methods: REVEAL and REVEAL+. In REVEAL, we design an efficient unsupervised retrieval technique to select compact, informative column contexts by balancing semantic relevance and diversity, and develop context-aware encoding techniques with role embeddings and target-context pair training to effectively differentiate target and context columns. To further improve performance, in REVEAL+, we design a verification model that refines the selected context by directly estimating its quality for specific annotation tasks. To achieve this, we formulate a novel column context verification problem as a classification task and then develop the verification model. Moreover, in REVEAL+, we develop a top-down verification inference technique to ensure efficiency by reducing the search space for high-quality context subsets from exponential to quadratic. Extensive experiments on six benchmark datasets demonstrate that our methods consistently outperform state-of-the-art baselines.},
  journal = {Proc. ACM Manag. Data},
  month = dec,
  articleno = {358},
  numpages = {27},
  keywords = {column annotation, context selection, embeddings, table understanding}
}
Dataset summary
This repository provides dataset artifacts for running and reproducing experiments in the paper above.
Benchmarks used in the paper
| Benchmark | # Tables | # Types | Total # Cols | # Labeled Cols | Min/Max/Avg Cols per Table |
|---|---|---|---|---|---|
| GitTablesDB | 3,737 | 101 | 45,304 | 5,433 | 1 / 193 / 12.1 |
| GitTablesSC | 2,853 | 53 | 34,148 | 3,863 | 1 / 150 / 12.0 |
| SOTAB-CTA | 24,275 | 91 | 195,543 | 64,884 | 3 / 30 / 8.1 |
| SOTAB-CPA | 20,686 | 176 | 196,831 | 74,216 | 3 / 31 / 9.5 |
| WikiTable-CTA | 406,706 | 255 | 2,393,027 | 654,670 | 1 / 99 / 5.9 |
| WikiTable-CPA | 55,970 | 121 | 306,265 | 62,954 | 2 / 38 / 5.5 |
Note: Some benchmarks (e.g., SOTAB / WikiTable variants) may have their own original licenses/terms. Please follow the upstream license and redistribution requirements if you are re-hosting any derived artifacts.
What is included in this Hugging Face dataset repository?
- GitTablesDB raw CSV data: data/gt-semtab22-dbpedia-all/ (as used by our codebase)
- GitTablesSC raw CSV data: data/gt-semtab22-schema-property-all/
- (Optional) Processed/standardized splits for training and evaluation used by REVEAL / REVEAL+
  - TODO: specify exact files and split definitions if you include them (train/valid/test, seeds, etc.)
- (Optional) Label vocabularies / mappings
  - TODO: list where type/property ID-to-name mappings live (e.g., label_map.json)
If you do not redistribute SOTAB or WikiTable-derived artifacts here, you can still provide:
- scripts to download/convert them, and
- pointers to official sources.
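For quick inspection of the raw CSV artifacts above, a minimal loader like the following can be used. This is a sketch, not part of the REVEAL codebase: it assumes you have downloaded this repository locally so that a directory such as data/gt-semtab22-dbpedia-all/ contains the CSV files, and the function name `load_raw_tables` is our own choice for illustration.

```python
import csv
from pathlib import Path

def load_raw_tables(data_dir, limit=None):
    """Read each *.csv file in `data_dir` into a list of rows.

    Returns a dict mapping file stem (table name) -> list of rows,
    with the header row included as the first element.
    """
    tables = {}
    for i, path in enumerate(sorted(Path(data_dir).glob("*.csv"))):
        if limit is not None and i >= limit:
            break
        with open(path, newline="", encoding="utf-8") as f:
            tables[path.stem] = list(csv.reader(f))
    return tables

# e.g. tables = load_raw_tables("data/gt-semtab22-dbpedia-all", limit=10)
```

The `limit` parameter is convenient for sampling a few tables before committing to a full pass over tens of thousands of files.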
Dataset structure
Task name mapping (paper → codebase)
| Paper Name | Codebase Task Name |
|---|---|
| GitTablesDB | gt-semtab22-dbpedia-all |
| GitTablesSC | gt-semtab22-schema-property-all |
| SOTAB-CTA | sotab |
| SOTAB-CPA | sotab-re |
| WikiTable-CTA | turl |
| WikiTable-CPA | turl-re |
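The mapping above can be expressed as a small lookup when wiring paper benchmark names into scripts that expect codebase task names. This is an illustrative helper, not part of the official codebase; the names come directly from the table above.

```python
# Paper benchmark name -> codebase task name (from the mapping table above).
PAPER_TO_TASK = {
    "GitTablesDB": "gt-semtab22-dbpedia-all",
    "GitTablesSC": "gt-semtab22-schema-property-all",
    "SOTAB-CTA": "sotab",
    "SOTAB-CPA": "sotab-re",
    "WikiTable-CTA": "turl",
    "WikiTable-CPA": "turl-re",
}

def task_name(paper_name):
    """Resolve a paper benchmark name to its codebase task name."""
    return PAPER_TO_TASK[paper_name]
```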