---
license: cc-by-4.0
datasets:
- Voice49/dber
pretty_name: "DB-ER: Dataset for Database Entity Recognition"
language:
- en
tags:
- db-er
- schema-linking
- text-to-sql
- ner
- token-classification
task_categories:
- token-classification
task_ids:
- named-entity-recognition
size_categories:
- 10K<n<100K
---

# DB-ER — Dataset for Database Entity Recognition

## Dataset Summary
**DB-ER** is a token-level dataset for **Database Entity Recognition (DB-ER)** in **natural-language queries (NLQs)** paired with SQL. The task is to tag each token as one of **Table**, **Column**, **Value**, or **O** (non-entity).  
Each example includes: the NLQ, database identifier, a canonical dataset id, the paired SQL query, a tokenized question, a compact **entity→token** reverse index, an explicit **entities** table (typed schema/value items), and CoNLL-style **DB‑ER tags**.

---

## Fields
- `question_id` *(int)* — Example id
- `db_id` *(str)* — Database identifier
- `dber_id` *(str)* — Canonical id linking back to the source file/split (BIRD, SPIDER)
- `question` *(str)* — NLQ text
- `SQL` *(str)* — Paired SQL query
- `tokens` *(List[str])* — Tokenized NLQ
- `entities` *(List[Object])* — Typed DB items referenced in the SQL; each item has:
  - `id` *(int)* — Local entity id (unique within the example)
  - `type` *("table"|"column"|"value")*
  - `value` *(str)* — Surface form from the DB schema or literal value
- `entity_to_token` *(List[Object])* — Reverse index:
  - `entity_id` *(int)* — Refers to an `entities[*].id`
  - `token_idxs` *(List[int])* — Token indices composing that entity in `tokens`
- `dber_tags` *(List[str])* — **CoNLL-style IOB2** tags over `tokens`
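
For concreteness, the IOB2 tags can be reconstructed from the reverse index. A minimal sketch using the field names above (illustrative only; the released `dber_tags` remain the authoritative labels):

```python
def build_iob2_tags(tokens, entities, entity_to_token):
    """Rebuild CoNLL-style IOB2 tags from the entity->token reverse index.

    Illustrative sketch based on the field layout described above; the
    shipped `dber_tags` are the ground truth.
    """
    etype = {e["id"]: e["type"].upper() for e in entities}  # "TABLE"/"COLUMN"/"VALUE"
    tags = ["O"] * len(tokens)
    for link in entity_to_token:
        label = etype[link["entity_id"]]
        for i, tok_idx in enumerate(sorted(link["token_idxs"])):
            tags[tok_idx] = ("B-" if i == 0 else "I-") + label
    return tags
```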

---

## Splits

**Entity token prevalence is consistent across splits: ~29% entity vs. ~71% `O`.**

| Split             | # Examples |
|-------------------|-----------:|
| `human_train`     | **500**    |
| `human_test`      | **500**    |
| `synthetic_train` | **15,026** |

`synthetic_train` is produced via our **auto-annotation pipeline**, which aligns SQL-referenced entities to NLQ spans using string-similarity candidates (Jaccard 3-gram / Levenshtein) and a **non-overlapping ILP** selection objective. See **Annotation** below.
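
As a rough illustration of the first candidate scorer, a character 3-gram Jaccard similarity might look like the sketch below (casefolding and the short-string fallback are assumptions, not the pipeline's exact preprocessing):

```python
def char_ngrams(s, n=3):
    """Set of character n-grams; falls back to the whole string when shorter than n."""
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)} or {s}

def jaccard3(a, b):
    """Jaccard similarity over character 3-grams, one of the string-similarity
    scorers mentioned above (Levenshtein being the other)."""
    ga, gb = char_ngrams(a), char_ngrams(b)
    return len(ga & gb) / len(ga | gb)
```

For example, a schema column like `client_id` scores moderately against the NLQ token `clients`, which is how entity spans become candidates without exact string matches.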

---

## Example
```json
{
  "question_id": 13692,
  "db_id": "retail_complains",
  "dber_id": "bird:train.json:282",
  "question": "Among the clients born between 1980 and 2000, list the name of male clients who complained through referral.",
  "SQL": "SELECT T1.first, T1.middle, T1.last FROM client AS T1 INNER JOIN events AS T2 ON T1.client_id = T2.Client_ID WHERE T1.year BETWEEN 1980 AND 2000 AND T1.sex = 'Male' AND T2.`Submitted via` = 'Referral'",
  "tokens": ["Among","the","clients","born","between","1980","and","2000",",","list","the","name","of","male","clients","who","complained","through","referral","."],
  "entities": [
    {"id": 0, "type": "column", "value": "first"},
    {"id": 1, "type": "column", "value": "middle"},
    {"id": 2, "type": "column", "value": "last"},
    {"id": 3, "type": "table", "value": "client"},
    {"id": 4, "type": "table", "value": "events"},
    {"id": 5, "type": "column", "value": "client_id"},
    {"id": 6, "type": "column", "value": "year"},
    {"id": 7, "type": "value", "value": "1980"},
    {"id": 8, "type": "value", "value": "2000"},
    {"id": 9, "type": "column", "value": "sex"},
    {"id": 10, "type": "value", "value": "Male"},
    {"id": 11, "type": "column", "value": "Submitted via"},
    {"id": 12, "type": "value", "value": "Referral"}
  ],
  "entity_to_token": [
    ...,
    {"entity_id":3,"token_idxs":[2]},
    {"entity_id":5,"token_idxs":[14]},
    {"entity_id":7,"token_idxs":[5]},
    {"entity_id":8,"token_idxs":[7]},
    {"entity_id":10,"token_idxs":[13]},
    {"entity_id":12,"token_idxs":[18]},
    ...
  ],
  "dber_tags": ["O","O","B-TABLE","O","O","B-VALUE","O","B-VALUE","O","O","O","O","O","B-VALUE","B-COLUMN","O","O","O","B-VALUE","O"]
}
```

<!-- ---

## Usage

### Load from Hub
```python
from datasets import load_dataset
ds = load_dataset("Voice49/dber")
```

### Load JSONL files
```python
from datasets import load_dataset

data_files = {
    "human_train": "https://huggingface.co/datasets/Voice49/dber/resolve/main/human_train.jsonl",
    "human_test": "https://huggingface.co/datasets/Voice49/dber/resolve/main/human_test.jsonl",
    "synthetic_train": "https://huggingface.co/datasets/Voice49/dber/resolve/main/synthetic_train.jsonl",
}
ds = load_dataset("json", data_files=data_files)
print(ds)
``` -->

---

## Annotation
- **Human**: collaborative web UI with schema and SQL visible during labeling.
- **Synthetic**: for each NLQ–SQL pair, generate candidate spans with Jaccard/Levenshtein, then solve a **non-overlapping ILP** to select spans maximizing similarity. Hyperparameters are validated on human data.
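
When the only constraint is that selected spans must not overlap, the selection problem reduces to weighted interval scheduling, which a dynamic program solves exactly. The sketch below stands in for the actual ILP (whose precise objective and constraints may differ):

```python
import bisect

def select_spans(cands):
    """Pick non-overlapping spans maximizing total similarity score.

    `cands` is a list of (start, end, score) token spans, end exclusive.
    Weighted-interval-scheduling sketch standing in for the paper's ILP.
    """
    cands = sorted(cands, key=lambda c: c[1])        # sort by end index
    ends = [c[1] for c in cands]
    best = [0.0] * (len(cands) + 1)                  # best[i]: optimum over first i spans
    pick = [None] * (len(cands) + 1)
    for i, (s, e, score) in enumerate(cands, 1):
        j = bisect.bisect_right(ends, s, 0, i - 1)   # spans ending at or before this start
        take = best[j] + score
        if take > best[i - 1]:
            best[i], pick[i] = take, (i - 1, j)
        else:
            best[i], pick[i] = best[i - 1], None
    chosen, i = [], len(cands)                       # backtrack the selected spans
    while i > 0:
        if pick[i] is None:
            i -= 1
        else:
            idx, j = pick[i]
            chosen.append(cands[idx])
            i = j
    return best[-1], sorted(chosen)
```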

---

## Data provenance
- **Sources:** text-to-SQL benchmarks BIRD (https://bird-bench.github.io/) and Spider (https://yale-lily.github.io/spider).
- **Transform:** NLQ–SQL pairs → DB-ER annotations via the synthetic pipeline; human annotations provide gold labels and validation.

<!-- ---

---

## Citation
If you use **DB-ER**, please cite:

```bibtex
@inproceedings{fu2025dber,
  title     = {Database Entity Recognition with Data Augmentation and Deep Learning},
  author    = {Zikun Fu and Chen Yang and Kourosh Davoudi and Ken Q. Pu},
  booktitle = {Proc. IEEE International Conference on Information Reuse and Integration for Data Science (IRI)},
  address   = {San Jose, CA, USA},
  year      = {2025}
}
``` -->

---

## Release notes  
- **v1.1 (2025-08-26):** HF Data Viewer compatibility update
- **v1.0:** Initial public release