Dataset metadata:
- Tasks: Token Classification
- Modalities: Text
- Formats: json
- Sub-tasks: named-entity-recognition
- Languages: English
- Size: 10K - 100K
- License:
Change entity_to_token to list-of-objects schema; update README accordingly

Files changed:
- README.md (+1, -1)
- human_test.jsonl
- human_train.jsonl
- synthetic_train.jsonl (+2, -2)
README.md (CHANGED)

```diff
@@ -43,7 +43,7 @@ Each example includes: the NLQ, database identifier, a canonical dataset id, the
 - `id` *(int)* — Local entity id (unique within the example)
 - `type` *("table"|"column"|"value")*
 - `value` *(str)* — Surface form from the DB schema or literal value
-- `entity_to_token` *(
+- `entity_to_token` *(List[Object])* — Each item has: `entity_id` *(int)*, `token_idxs` *(List[int])*
 - `dber_tags` *(List[str])* — **CoNLL-style IOB2** tags over `tokens`

 ---
```
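To illustrate the new list-of-objects schema, here is a minimal sketch of what a record might look like and how the `entity_to_token` alignment relates to the IOB2 `dber_tags`. The field names come from the README diff; every concrete value (tokens, entity ids, indexes) is invented for illustration, and the tag-derivation helper is a hypothetical reading of how the alignment could be consumed, not code from this repository.

```python
# Hypothetical example record using the new list-of-objects schema for
# `entity_to_token`: each item links one entity id to the token indexes
# it covers. All concrete values below are invented for illustration.
example = {
    "tokens": ["show", "salaries", "of", "employees"],
    "entities": [
        {"id": 0, "type": "column", "value": "salary"},
        {"id": 1, "type": "table", "value": "employee"},
    ],
    "entity_to_token": [
        {"entity_id": 0, "token_idxs": [1]},
        {"entity_id": 1, "token_idxs": [3]},
    ],
}

def iob2_tags(example):
    """Derive CoNLL-style IOB2 tags over `tokens` from the alignment."""
    entity_type = {e["id"]: e["type"] for e in example["entities"]}
    tags = ["O"] * len(example["tokens"])
    for link in example["entity_to_token"]:
        t = entity_type[link["entity_id"]]
        for pos, idx in enumerate(sorted(link["token_idxs"])):
            # First token of a span gets B-, continuations get I-.
            tags[idx] = ("B-" if pos == 0 else "I-") + t
    return tags

print(iob2_tags(example))  # ['O', 'B-column', 'O', 'B-table']
```

Compared with a dict keyed by entity id, the list-of-objects form round-trips cleanly through JSON Lines and keeps `entity_id` as an integer rather than a string key.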
|
human_test.jsonl (CHANGED)

The diff for this file is too large to render. See raw diff.

human_train.jsonl (CHANGED)

The diff for this file is too large to render. See raw diff.
synthetic_train.jsonl (CHANGED)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:80207b3be9163d39b992d5bc21edbe2b737b14b6a00d1fd541db84f1f25be0a3
+size 25162599
```
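Since the JSONL files were rewritten for the schema change, a quick structural check can confirm each record uses the new list-of-objects form rather than the old mapping. This is a sketch under the assumption that the top-level field is named `entity_to_token` as in the README diff; the sample line is invented.

```python
import json

def uses_new_schema(line: str) -> bool:
    """Return True if the record's entity_to_token is a list of objects,
    each carrying an integer `entity_id` and a `token_idxs` list."""
    record = json.loads(line)
    e2t = record.get("entity_to_token")
    return isinstance(e2t, list) and all(
        isinstance(item, dict)
        and isinstance(item.get("entity_id"), int)
        and isinstance(item.get("token_idxs"), list)
        for item in e2t
    )

new_style = '{"entity_to_token": [{"entity_id": 0, "token_idxs": [1, 2]}]}'
old_style = '{"entity_to_token": {"0": [1, 2]}}'
print(uses_new_schema(new_style))  # True
print(uses_new_schema(old_style))  # False
```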