---
pretty_name: Retrieve-and-Verify (REVEAL / REVEAL+) Column Annotation Datasets
tags:
- table-understanding
- column-annotation
- semantic-typing
- relation-extraction
- column-type-annotation
- column-property-annotation
- data-management
- CTA
- CPA
task_categories:
- text-classification
- token-classification
- other
license: cc-by-4.0
annotations_creators:
- machine-generated
- expert-generated
---

# Retrieve-and-Verify: Column Annotation Datasets and Resources (CTA / CPA)

This dataset repository accompanies the paper:

**Retrieve-and-Verify: A Table Context Selection Framework for Accurate Column Annotations**
Zhihao Ding, Yongkang Sun, Jieming Shi. *Proc. ACM Manag. Data* (Dec 2025).

The work targets two column annotation tasks:

- **Column Type Annotation (CTA)**: assign a semantic type to a target column.
- **Column Property Annotation (CPA)** (a.k.a. column relation annotation): assign a semantic relation/property between a target column and another column.
## Citation

If you use this dataset, please cite:

```bibtex
@article{10.1145/3769823,
  author = {Ding, Zhihao and Sun, Yongkang and Shi, Jieming},
  title = {Retrieve-and-Verify: A Table Context Selection Framework for Accurate Column Annotations},
  year = {2025},
  issue_date = {December 2025},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {3},
  number = {6},
  url = {https://doi.org/10.1145/3769823},
  doi = {10.1145/3769823},
  abstract = {Tables are a prevalent format for structured data, yet their metadata, such as semantic types and column relationships, is often incomplete or ambiguous. Column annotation tasks, including Column Type Annotation (CTA) and Column Property Annotation (CPA), address this by leveraging table context, which are critical for data management. Existing methods typically serialize all columns in a table into pretrained language models to incorporate context, but this coarse-grained approach often degrades performance in wide tables with many irrelevant or misleading columns. To address this, we propose a novel retrieve-and-verify context selection framework for accurate column annotation, introducing two methods: REVEAL and REVEAL+. In REVEAL, we design an efficient unsupervised retrieval technique to select compact, informative column contexts by balancing semantic relevance and diversity, and develop context-aware encoding techniques with role embeddings and target-context pair training to effectively differentiate target and context columns. To further improve performance, in REVEAL+, we design a verification model that refines the selected context by directly estimating its quality for specific annotation tasks. To achieve this, we formulate a novel column context verification problem as a classification task and then develop the verification model. Moreover, in REVEAL+, we develop a top-down verification inference technique to ensure efficiency by reducing the search space for high-quality context subsets from exponential to quadratic. Extensive experiments on six benchmark datasets demonstrate that our methods consistently outperform state-of-the-art baselines.},
  journal = {Proc. ACM Manag. Data},
  month = dec,
  articleno = {358},
  numpages = {27},
  keywords = {column annotation, context selection, embeddings, table understanding}
}
```
## Dataset summary

This repository provides dataset artifacts for running and reproducing experiments in the paper above.

### Benchmarks used in the paper

| Benchmark | # Tables | # Types | Total # Cols | # Labeled Cols | Min/Max/Avg Cols per Table |
|---|---:|---:|---:|---:|---:|
| GitTablesDB | 3,737 | 101 | 45,304 | 5,433 | 1 / 193 / 12.1 |
| GitTablesSC | 2,853 | 53 | 34,148 | 3,863 | 1 / 150 / 12.0 |
| SOTAB-CTA | 24,275 | 91 | 195,543 | 64,884 | 3 / 30 / 8.1 |
| SOTAB-CPA | 20,686 | 176 | 196,831 | 74,216 | 3 / 31 / 9.5 |
| WikiTables-CTA | 406,706 | 255 | 2,393,027 | 654,670 | 1 / 99 / 5.9 |
| WikiTables-CPA | 55,970 | 121 | 306,265 | 62,954 | 2 / 38 / 5.5 |
## What is included in this Hugging Face dataset repository?

- **GitTablesDB (`gt-semtab22-dbpedia-all`)**: raw CSV tables with **5-fold** splits.
- **GitTablesSC (`gt-semtab22-schema-property-all`)**: raw CSV tables with **5-fold** splits.
- **SOTAB-CTA / SOTAB-CPA / WikiTables-CTA / WikiTables-CPA**: official **train/validation/test** splits.

For each dataset, `type_vocab.txt` provides the mapping between **type IDs** and **raw type names**.
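The exact line format of `type_vocab.txt` is not documented here, so a loading sketch has to hedge: the snippet below assumes each line is either a tab-separated `id<TAB>name` pair or a bare type name whose ID is its line index. Adjust the parsing to the actual file contents.

```python
from pathlib import Path

def load_type_vocab(path):
    """Parse a type vocabulary file into {type_id: type_name}.

    Assumed format (not specified in this card): one entry per line,
    either "id<TAB>name" or just "name" (ID = line index).
    """
    id_to_name = {}
    for i, line in enumerate(Path(path).read_text(encoding="utf-8").splitlines()):
        if not line.strip():
            continue  # skip blank lines
        parts = line.split("\t", 1)
        if len(parts) == 2 and parts[0].strip().lstrip("-").isdigit():
            id_to_name[int(parts[0])] = parts[1].strip()
        else:
            id_to_name[i] = line.strip()
    return id_to_name
```

The resulting dictionary lets you turn the numeric type IDs used in the label files back into human-readable type names.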
## Dataset structure

### Task name mapping (paper ↔ codebase)

| Paper Name | Codebase Task Name |
|---|---|
| GitTablesDB | `gt-semtab22-dbpedia-all` |
| GitTablesSC | `gt-semtab22-schema-property-all` |
| SOTAB-CTA | `sotab` |
| SOTAB-CPA | `sotab-re` |
| WikiTables-CTA | `turl` |
| WikiTables-CPA | `turl-re` |
### Data schema (columns)

| Column | Type | Description |
|---|---|---|
| `table_id` | `string` | Identifier of the source table. |
| `column_id` | `int` | 0-based index of the target column within the table. |
| `label` | `string` | Ground-truth type ID. For multi-class tasks, a single type ID (`-1` indicates unlabeled). For multi-label tasks, a binary list encoded as a string (e.g., `[1,0,0,...]`). |
| `data` | `string` | Cell values of the target column in original order, serialized as a single string. |
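Since `label` stores two different encodings as strings, a small decoder is handy. The helper below is illustrative, not part of any official tooling: it maps a multi-class label to a single type ID (or `None` when unlabeled) and a multi-label binary list to the list of active type IDs.

```python
import json

def decode_label(label):
    """Decode a `label` string from the schema above.

    Multi-class: "42" -> 42, "-1" -> None (unlabeled).
    Multi-label: "[1,0,1]" -> [0, 2] (indices of active types).
    """
    label = label.strip()
    if label.startswith("["):
        binary = json.loads(label)  # e.g. [1, 0, 1]
        return [i for i, bit in enumerate(binary) if bit == 1]
    type_id = int(label)
    return None if type_id == -1 else type_id
```

The returned IDs can then be resolved to names via the dataset's `type_vocab.txt` mapping.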