---
pretty_name: Retrieve-and-Verify (REVEAL / REVEAL+) Column Annotation Datasets
tags:
  - table-understanding
  - column-annotation
  - semantic-typing
  - relation-extraction
  - column-type-annotation
  - column-property-annotation
  - data-management
  - CTA
  - CPA
task_categories:
  - text-classification
  - token-classification
  - other
license: cc-by-4.0
annotations_creators:
  - machine-generated
  - expert-generated
---

# Retrieve-and-Verify: Column Annotation Datasets and Resources (CTA / CPA)

This dataset repository accompanies the paper:

> **Retrieve-and-Verify: A Table Context Selection Framework for Accurate Column Annotations**
> Zhihao Ding, Yongkang Sun, Jieming Shi. Proc. ACM Manag. Data (Dec 2025).

The work targets two column annotation tasks:

- **Column Type Annotation (CTA):** assign a semantic type to a target column.
- **Column Property Annotation (CPA)** (a.k.a. column relation/property annotation): assign a semantic relation/property between a target column and another column.

## Citation

If you use this dataset, please cite:

```bibtex
@article{10.1145/3769823,
  author = {Ding, Zhihao and Sun, Yongkang and Shi, Jieming},
  title = {Retrieve-and-Verify: A Table Context Selection Framework for Accurate Column Annotations},
  year = {2025},
  issue_date = {December 2025},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {3},
  number = {6},
  url = {https://doi.org/10.1145/3769823},
  doi = {10.1145/3769823},
  abstract = {Tables are a prevalent format for structured data, yet their metadata, such as semantic types and column relationships, is often incomplete or ambiguous. Column annotation tasks, including Column Type Annotation (CTA) and Column Property Annotation (CPA), address this by leveraging table context, which are critical for data management. Existing methods typically serialize all columns in a table into pretrained language models to incorporate context, but this coarse-grained approach often degrades performance in wide tables with many irrelevant or misleading columns. To address this, we propose a novel retrieve-and-verify context selection framework for accurate column annotation, introducing two methods: REVEAL and REVEAL+. In REVEAL, we design an efficient unsupervised retrieval technique to select compact, informative column contexts by balancing semantic relevance and diversity, and develop context-aware encoding techniques with role embeddings and target-context pair training to effectively differentiate target and context columns. To further improve performance, in REVEAL+, we design a verification model that refines the selected context by directly estimating its quality for specific annotation tasks. To achieve this, we formulate a novel column context verification problem as a classification task and then develop the verification model. Moreover, in REVEAL+, we develop a top-down verification inference technique to ensure efficiency by reducing the search space for high-quality context subsets from exponential to quadratic. Extensive experiments on six benchmark datasets demonstrate that our methods consistently outperform state-of-the-art baselines.},
  journal = {Proc. ACM Manag. Data},
  month = dec,
  articleno = {358},
  numpages = {27},
  keywords = {column annotation, context selection, embeddings, table understanding}
}
```

## Dataset summary

This repository provides dataset artifacts for running and reproducing experiments in the paper above.

### Benchmarks used in the paper

| Benchmark | # Tables | # Types | Total # Cols | # Labeled Cols | Min/Max/Avg Cols per Table |
|---|---|---|---|---|---|
| GitTablesDB | 3,737 | 101 | 45,304 | 5,433 | 1 / 193 / 12.1 |
| GitTablesSC | 2,853 | 53 | 34,148 | 3,863 | 1 / 150 / 12.0 |
| SOTAB-CTA | 24,275 | 91 | 195,543 | 64,884 | 3 / 30 / 8.1 |
| SOTAB-CPA | 20,686 | 176 | 196,831 | 74,216 | 3 / 31 / 9.5 |
| WikiTables-CTA | 406,706 | 255 | 2,393,027 | 654,670 | 1 / 99 / 5.9 |
| WikiTables-CPA | 55,970 | 121 | 306,265 | 62,954 | 2 / 38 / 5.5 |

### What is included in this Hugging Face dataset repository?

- **GitTablesDB** (`gt-semtab22-dbpedia-all`): raw CSV tables with 5-fold splits.
- **GitTablesSC** (`gt-semtab22-schema-property-all`): raw CSV tables with 5-fold splits.
- **SOTAB-CTA / SOTAB-CPA / WikiTables-CTA / WikiTables-CPA**: official train/validation/test splits.

For each dataset, `type_vocab.txt` provides the mapping between type IDs and raw type names.

## Dataset structure

### Task name mapping (paper ↔ codebase)

| Paper Name | Codebase Task Name |
|---|---|
| GitTablesDB | `gt-semtab22-dbpedia-all` |
| GitTablesSC | `gt-semtab22-schema-property-all` |
| SOTAB-CTA | `sotab` |
| SOTAB-CPA | `sotab-re` |
| WikiTables-CTA | `turl` |
| WikiTables-CPA | `turl-re` |
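In scripts that bridge the paper's terminology and the codebase's task names, the mapping above can be kept as a plain dictionary. The names below are copied directly from the table; nothing else is assumed:

```python
# Paper benchmark name -> codebase task name (from the mapping table above).
PAPER_TO_TASK = {
    "GitTablesDB": "gt-semtab22-dbpedia-all",
    "GitTablesSC": "gt-semtab22-schema-property-all",
    "SOTAB-CTA": "sotab",
    "SOTAB-CPA": "sotab-re",
    "WikiTables-CTA": "turl",
    "WikiTables-CPA": "turl-re",
}

# Inverse mapping, e.g. to label experiment outputs with paper names.
TASK_TO_PAPER = {v: k for k, v in PAPER_TO_TASK.items()}
```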

### Data schema (columns)

| Column | Type | Description |
|---|---|---|
| `table_id` | string | Identifier of the source table. |
| `column_id` | int | 0-based index of the target column within the table. |
| `label` | string | Ground-truth type ID. For multi-class tasks, a single type ID (`-1` indicates unlabeled). For multi-label tasks, a binary list encoded as a string (e.g., `[1,0,0,...]`). |
| `data` | string | Cell values of the target column in original order, serialized as a single string. |
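Since `label` is stored as a string in both cases, it needs decoding before use. The sketch below assumes the multi-label encoding is a JSON-style list such as `"[1,0,0]"`; if your copy of the data uses a different separator, adjust the parsing accordingly.

```python
import json


def parse_label(label, multi_label):
    """Decode a row's `label` field.

    Assumption (hedged): multi-label strings are JSON-style binary lists
    like "[1,0,0]". Multi-class labels are a single integer type ID,
    where -1 means the column is unlabeled (returned as None).
    """
    if multi_label:
        return json.loads(label)  # list of 0/1 indicators, one per type
    type_id = int(label)
    return None if type_id == -1 else type_id
```

The resulting integer or indicator list can then be mapped back to raw type names via `type_vocab.txt`.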