---
license: cc-by-4.0
datasets:
- Voice49/dber
pretty_name: DB-ER — Dataset for Database Entity Recognition
language:
- en
tags:
- db-er
- schema-linking
- text-to-sql
- ner
- token-classification
task_categories:
- token-classification
task_ids:
- named-entity-recognition
size_categories:
- 10K<n<100K
---

# DB-ER — Dataset for Database Entity Recognition

## Dataset Summary
**DB-ER** is a token-level dataset for **Database Entity Recognition (DB-ER)** in **natural-language queries (NLQs)** paired with SQL. The task is to tag each token as one of **Table**, **Column**, **Value**, or **O** (non-entity).
Each example includes the NLQ, a database identifier, a canonical dataset id, the paired SQL query, a tokenized question, a compact **entity→token** reverse index, an explicit **entities** table (typed schema/value items), and CoNLL-style **DB-ER tags**.

---

## What’s inside

### Labels
- **4-class:** `Table`, `Column`, `Value`, `O`

### Fields per example
- `question_id` *(int)* — Example id
- `db_id` *(str)* — Database identifier
- `dber_id` *(str)* — Canonical id linking back to the source file/split (BIRD, Spider)
- `question` *(str)* — NLQ text
- `SQL` *(str)* — Paired SQL query
- `tokens` *(List[str])* — Tokenized NLQ
- `entities` *(List[Object])* — Typed DB items referenced in the SQL; each item has:
  - `id` *(int)* — Local entity id (unique within the example)
  - `type` *("table"|"column"|"value")*
  - `value` *(str)* — Surface form from the DB schema or literal value
- `entity_to_token` *(Dict[str, List[int]])* — Maps each entity `id` (serialized as a string key) to the indices of its aligned tokens in `tokens`
- `dber_tags` *(List[str])* — **CoNLL-style IOB2** tags over `tokens`
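
By construction, `dber_tags` should be consistent with `entities` and `entity_to_token`, which makes the format easy to sanity-check. As an illustrative sketch (the helper name is hypothetical; the field names are as documented above), the IOB2 tags can be rebuilt from the reverse index:

```python
def tags_from_entities(tokens, entities, entity_to_token):
    """Rebuild CoNLL-style IOB2 tags over `tokens` from the reverse index."""
    tags = ["O"] * len(tokens)
    type_by_id = {e["id"]: e["type"].upper() for e in entities}
    for ent_id, positions in entity_to_token.items():
        label = type_by_id[int(ent_id)]  # keys are stringified entity ids
        prev = None
        for pos in sorted(positions):
            # "B" opens a span; "I" continues a contiguous run of tokens
            tags[pos] = ("I-" if prev == pos - 1 else "B-") + label
            prev = pos
    return tags
```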

---

## Splits
**Entity token prevalence is consistent across splits: ~29% entity vs. ~71% `O`.**

| Split             | # Examples |
|-------------------|-----------:|
| `human_train`     | **500**    |
| `human_test`      | **500**    |
| `synthetic_train` | **15,026** |

`synthetic_train` is produced via our **auto-annotation pipeline**, which aligns SQL-referenced entities to NLQ spans using string-similarity candidates (Jaccard 3-gram / Levenshtein) and a **non-overlapping ILP** selection objective. See **Annotation** below.

---

## Example instances
```json
{
  "question_id": 13692,
  "db_id": "retail_complains",
  "dber_id": "bird:train.json:282",
  "question": "Among the clients born between 1980 and 2000, list the name of male clients who complained through referral.",
  "SQL": "SELECT T1.first, T1.middle, T1.last FROM client AS T1 INNER JOIN events AS T2 ON T1.client_id = T2.Client_ID WHERE T1.year BETWEEN 1980 AND 2000 AND T1.sex = 'Male' AND T2.`Submitted via` = 'Referral'",
  "tokens": ["Among","the","clients","born","between","1980","and","2000",",","list","the","name","of","male","clients","who","complained","through","referral","."],
  "entities": [
    {"id":0,"type":"column","value":"first"},
    {"id":1,"type":"column","value":"middle"},
    {"id":2,"type":"column","value":"last"},
    {"id":3,"type":"table","value":"client"},
    {"id":4,"type":"table","value":"events"},
    {"id":5,"type":"column","value":"client_id"},
    {"id":6,"type":"column","value":"year"},
    {"id":7,"type":"value","value":"1980"},
    {"id":8,"type":"value","value":"2000"},
    {"id":9,"type":"column","value":"sex"},
    {"id":10,"type":"value","value":"Male"},
    {"id":11,"type":"column","value":"Submitted via"},
    {"id":12,"type":"value","value":"Referral"}
  ],
  "entity_to_token": {"3":[2],"7":[5],"8":[7],"10":[13],"5":[14],"12":[18]},
  "dber_tags": ["O","O","B-TABLE","O","O","B-VALUE","O","B-VALUE","O","O","O","O","O","B-VALUE","B-COLUMN","O","O","O","B-VALUE","O"]
}
```

---

## How to load

### Load JSONL files
```python
from datasets import load_dataset

data_files = {
    "human_train": "https://huggingface.co/datasets/Voice49/dber/resolve/main/human_train.jsonl",
    "human_test": "https://huggingface.co/datasets/Voice49/dber/resolve/main/human_test.jsonl",
    "synthetic_train": "https://huggingface.co/datasets/Voice49/dber/resolve/main/synthetic_train.jsonl",
}
ds = load_dataset("json", data_files=data_files)
print(ds)
print(ds["human_train"][0])
```

### Load from the Hub
```python
from datasets import load_dataset

ds = load_dataset("Voice49/dber")
```
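
For token-classification training, the string tags need integer ids. A minimal sketch, assuming the seven IOB2 tags implied by the 4-class scheme (the label order and helper name are my own convention, not part of the dataset):

```python
# Hypothetical label vocabulary: the seven IOB2 tags implied by the
# 4-class scheme (Table / Column / Value / O). The ordering is an assumption.
LABELS = ["O", "B-TABLE", "I-TABLE", "B-COLUMN", "I-COLUMN", "B-VALUE", "I-VALUE"]
LABEL2ID = {label: i for i, label in enumerate(LABELS)}

def encode_example(example):
    """Map an example's `dber_tags` to integer label ids for training."""
    example["labels"] = [LABEL2ID[t] for t in example["dber_tags"]]
    return example
```

With the `datasets` objects loaded above, this could be applied via `ds.map(encode_example)`.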

---

## Annotation (human + synthetic)
- **Human**: collaborative web UI with the schema and SQL visible during labeling.
- **Synthetic**: for each NLQ–SQL pair, generate candidate spans with Jaccard/Levenshtein similarity, then solve a **non-overlapping ILP** to select the spans maximizing total similarity. Hyperparameters are validated on the human-annotated data.
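
The pipeline itself solves an ILP; as a simplified illustration of the same idea, here is a greedy non-overlapping variant using Jaccard similarity over character 3-grams (all names, the span limit, and the threshold are hypothetical placeholders, not the validated hyperparameters):

```python
def char_ngrams(s, n=3):
    s = s.lower()
    return {s[i:i + n] for i in range(max(1, len(s) - n + 1))}

def jaccard(a, b):
    A, B = char_ngrams(a), char_ngrams(b)
    return len(A & B) / len(A | B) if A | B else 0.0

def align_entities(tokens, entity_values, max_span=3, threshold=0.3):
    """Score candidate token spans against entity surface forms, then pick a
    non-overlapping set greedily by score (a stand-in for the ILP objective)."""
    candidates = []
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_span, len(tokens)) + 1):
            span = " ".join(tokens[start:end])
            for ent_id, surface in entity_values.items():
                score = jaccard(span, surface)
                if score >= threshold:
                    candidates.append((score, start, end, ent_id))
    chosen, used = {}, set()
    for score, start, end, ent_id in sorted(candidates, reverse=True):
        if ent_id not in chosen and not any(i in used for i in range(start, end)):
            chosen[ent_id] = list(range(start, end))
            used.update(range(start, end))
    return chosen  # entity id -> token indices, shaped like `entity_to_token`
```

Unlike the greedy pass above, the ILP selects the globally optimal non-overlapping set, but the candidate generation step is the same.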

---

## Data provenance
- **Sources:** the text-to-SQL benchmarks BIRD (https://bird-bench.github.io/) and Spider (https://yale-lily.github.io/spider).
- **Transform:** NLQ–SQL pairs → DB-ER annotations via the synthetic pipeline; human annotations provide gold labels and validation.

<!-- ---

## Licensing
- **License:** `other` (see the repository `LICENSE` for terms). Research use only unless otherwise permitted.

---

## Citation
If you use **DB-ER**, please cite:

```bibtex
@inproceedings{fu2025dber,
  title     = {Database Entity Recognition with Data Augmentation and Deep Learning},
  author    = {Zikun Fu and Chen Yang and Kourosh Davoudi and Ken Q. Pu},
  booktitle = {Proc. IEEE International Conference on Information Reuse and Integration for Data Science (IRI)},
  address   = {San Jose, CA, USA},
  year      = {2025}
}
``` -->

---

## Release notes
- **v1.0:** Initial public release