---
license: cc-by-4.0
datasets:
- Voice49/dber
pretty_name: DB-ER — Dataset for Database Entity Recognition
language:
- en
tags:
- db-er
- schema-linking
- text-to-sql
- ner
- token-classification
task_categories:
- token-classification
task_ids:
- named-entity-recognition
size_categories:
- 10K<n<100K
---

# DB-ER — Dataset for Database Entity Recognition
## Dataset Summary
DB-ER is a token-level dataset for Database Entity Recognition (DB-ER) in natural-language queries (NLQs) paired with SQL. The task is to tag each token as one of Table, Column, Value, or O (non-entity).
Each example includes: the NLQ, database identifier, a canonical dataset id, the paired SQL query, a tokenized question, a compact entity→token reverse index, an explicit entities table (typed schema/value items), and CoNLL-style DB-ER tags.
## Fields

- `question_id` (int) — Example id
- `db_id` (str) — Database identifier
- `dber_id` (str) — Canonical id linking back to the source file/split (BIRD, Spider)
- `question` (str) — NLQ text
- `SQL` (str) — Paired SQL query
- `tokens` (List[str]) — Tokenized NLQ
- `entities` (List[Object]) — Typed DB items referenced in the SQL; each item has:
  - `id` (int) — Local entity id (unique within the example)
  - `type` (`"table"` | `"column"` | `"value"`)
  - `value` (str) — Surface form from the DB schema, or a literal value
- `entity_to_token` (List[Object]) — Reverse index; each item has:
  - `entity_id` (int) — Refers to an `entities[*].id`
  - `token_idxs` (List[int]) — Token indices composing that entity in `tokens`
- `dber_tags` (List[str]) — CoNLL-style IOB2 tags over `tokens`
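As a quick illustration of how these fields fit together, the reverse index recovers each entity's surface form in the question. A minimal sketch (the `entity_surfaces` helper and the toy record are illustrative, not part of the dataset tooling):

```python
# Illustrative helper: map each entity id to its surface form in the
# question by joining the tokens listed in `entity_to_token`.
# (Assumes space-joining is adequate; real use may prefer char offsets.)
def entity_surfaces(record):
    tokens = record["tokens"]
    return {
        link["entity_id"]: " ".join(tokens[i] for i in link["token_idxs"])
        for link in record["entity_to_token"]
    }

# Toy record shaped like a DB-ER example (values are hypothetical).
record = {
    "tokens": ["list", "male", "clients", "by", "year"],
    "entity_to_token": [
        {"entity_id": 3, "token_idxs": [2]},   # table "client" -> "clients"
        {"entity_id": 10, "token_idxs": [1]},  # value "Male"   -> "male"
    ],
}
print(entity_surfaces(record))  # {3: 'clients', 10: 'male'}
```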
## Splits

Entity-token prevalence is consistent across splits: ~29% entity tokens vs. ~71% O.
| Split | # Examples |
|---|---|
| human_train | 500 |
| human_test | 500 |
| synthetic_train | 15,026 |
`synthetic_train` is produced by our auto-annotation pipeline, which aligns SQL-referenced entities to NLQ spans using string-similarity candidates (3-gram Jaccard / Levenshtein) and a non-overlapping ILP selection objective. See Annotation below.
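For intuition, the 3-gram Jaccard score between a question span and a SQL-referenced entity can be sketched as follows (the `jaccard3` helper and its lowercasing are illustrative assumptions; the pipeline's exact normalization may differ):

```python
# Sketch of character 3-gram Jaccard similarity for candidate scoring.
def char_ngrams(s, n=3):
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def jaccard3(a, b):
    A, B = char_ngrams(a), char_ngrams(b)
    if not A or not B:       # strings shorter than 3 chars yield no grams
        return 0.0
    return len(A & B) / len(A | B)

print(round(jaccard3("clients", "client"), 2))  # → 0.8
```

High scores like this are what let the pipeline match the NLQ span "clients" to the schema table `client` without an exact string match.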
## Example

```json
{
  "question_id": 13692,
  "db_id": "retail_complains",
  "dber_id": "bird:train.json:282",
  "question": "Among the clients born between 1980 and 2000, list the name of male clients who complained through referral.",
  "SQL": "SELECT T1.first, T1.middle, T1.last FROM client AS T1 INNER JOIN events AS T2 ON T1.client_id = T2.Client_ID WHERE T1.year BETWEEN 1980 AND 2000 AND T1.sex = 'Male' AND T2.`Submitted via` = 'Referral'",
  "tokens": ["Among","the","clients","born","between","1980","and","2000",",","list","the","name","of","male","clients","who","complained","through","referral","."],
  "entities": [
    {"id": 0, "type": "column", "value": "first"},
    {"id": 1, "type": "column", "value": "middle"},
    {"id": 2, "type": "column", "value": "last"},
    {"id": 3, "type": "table", "value": "client"},
    {"id": 4, "type": "table", "value": "events"},
    {"id": 5, "type": "column", "value": "client_id"},
    {"id": 6, "type": "column", "value": "year"},
    {"id": 7, "type": "value", "value": "1980"},
    {"id": 8, "type": "value", "value": "2000"},
    {"id": 9, "type": "column", "value": "sex"},
    {"id": 10, "type": "value", "value": "Male"},
    {"id": 11, "type": "column", "value": "Submitted via"},
    {"id": 12, "type": "value", "value": "Referral"}
  ],
  "entity_to_token": [
    ...,
    {"entity_id": 3, "token_idxs": [2]},
    {"entity_id": 5, "token_idxs": [14]},
    {"entity_id": 7, "token_idxs": [5]},
    {"entity_id": 8, "token_idxs": [7]},
    {"entity_id": 10, "token_idxs": [13]},
    {"entity_id": 12, "token_idxs": [18]},
    ...
  ],
  "dber_tags": ["O","O","B-TABLE","O","O","B-VALUE","O","B-VALUE","O","O","O","O","O","B-VALUE","B-COLUMN","O","O","O","B-VALUE","O"]
}
```
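Since `dber_tags` is IOB2, tagged spans can be decoded deterministically. A minimal sketch (the `iob2_spans` helper is illustrative, not part of the dataset tooling):

```python
# Decode well-formed IOB2 tags into (entity_type, start, end) spans,
# where `end` is exclusive over token indices.
def iob2_spans(tags):
    spans, start = [], None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes a trailing span
        if start is not None and not tag.startswith("I-"):
            spans.append((tags[start][2:], start, i))  # strip "B-" prefix
            start = None
        if tag.startswith("B-"):
            start = i
    return spans

print(iob2_spans(["O", "B-TABLE", "I-TABLE", "O", "B-VALUE"]))
# → [('TABLE', 1, 3), ('VALUE', 4, 5)]
```

Applied to the example above, this yields spans such as `("TABLE", 2, 3)` for the token "clients" and `("VALUE", 5, 6)` for "1980".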
## Annotation
- Human: collaborative web UI with schema and SQL visible during labeling.
- Synthetic: for each NLQ–SQL pair, generate candidate spans with Jaccard/Levenshtein, then solve a non-overlapping ILP to select spans maximizing similarity. Hyperparameters are validated on human data.
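The released pipeline formulates span selection as an ILP; for illustration only, when the sole constraint is that selected spans must not overlap, a weighted-interval-scheduling DP finds the same optimum. A sketch under that simplifying assumption (the `select_spans` helper is hypothetical, not the released solver):

```python
# Pick a non-overlapping subset of candidate spans with maximum total
# similarity score. Candidates are (start, end, score) with `end` exclusive.
def select_spans(candidates):
    cands = sorted(candidates, key=lambda c: c[1])  # sort by end index
    # best[i] = (score, chosen spans) using only the first i candidates
    best = [(0.0, [])]
    for i, (s, e, w) in enumerate(cands):
        # number of earlier candidates ending at or before this start
        # (a prefix, since candidates are sorted by end)
        j = sum(1 for k in range(i) if cands[k][1] <= s)
        take = (best[j][0] + w, best[j][1] + [(s, e, w)])
        best.append(max(best[i], take, key=lambda t: t[0]))
    return best[-1]

score, chosen = select_spans([(0, 2, 0.5), (1, 3, 0.9), (3, 5, 0.4)])
print(score, chosen)  # 1.3 [(1, 3, 0.9), (3, 5, 0.4)]
```

Here the overlapping candidate `(0, 2, 0.5)` is dropped in favor of the higher-scoring `(1, 3, 0.9)`, mirroring how the ILP resolves competing alignments.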
## Data provenance
- Sources: text-to-SQL benchmarks BIRD (https://bird-bench.github.io/) and Spider (https://yale-lily.github.io/spider).
- Transform: NLQ–SQL pairs → DB-ER annotations via the synthetic pipeline; human annotations provide gold labels and validation.
## Release notes
- v1.1 (2025-08-26): HF Data Viewer compatibility update
- v1.0: Initial public release