Datasets:
- Tasks: Token Classification
- Sub-tasks: named-entity-recognition
- Modalities: Text
- Formats: json
- Languages: English
- Size: 10K - 100K
- License:
Normalize entity_to_token schema (0..19) and document split groups; update human_train, human_test, synthetic_train

Files changed:
- README.md (+9 -0)
- human_test.jsonl (+0 -0)
- human_train.jsonl (+0 -0)
- synthetic_train.jsonl (+2 -2)
README.md CHANGED

@@ -49,6 +49,15 @@ Each example includes: the NLQ, database identifier, a canonical dataset id, the
 ---
 
 ## Splits
+
+### Split groups
+
+- Human
+  - Human_train (`human_train`)
+  - Human_test (`human_test`)
+- Synthetic
+  - Synthetic_train (`synthetic_train`)
+
 **Entity token prevalence is consistent across splits: ~29% entity vs. ~71% `O`.**
 
 | Split | # Examples |
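The prevalence figure in the README (~29% entity vs. ~71% `O`) can be spot-checked per split. Below is a minimal sketch, assuming each JSONL record carries a per-token tag list under the `entity_to_token` field named in the commit message, with tag `0` meaning `O` and `1..19` meaning entity tags; the exact field layout and label convention are assumptions, not confirmed by this diff.

```python
import json

def tag_prevalence(path):
    """Return (entity fraction, O fraction) over all tokens in a JSONL split.

    Assumes each line is a JSON object with an "entity_to_token" list of
    integer tags (0..19), where 0 is the O tag -- an assumed schema.
    """
    entity = total = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            tags = record["entity_to_token"]  # one int tag per token (assumed)
            entity += sum(1 for t in tags if t != 0)
            total += len(tags)
    return entity / total, (total - entity) / total
```

Run over `human_train.jsonl`, `human_test.jsonl`, and `synthetic_train.jsonl`; each split should land near 0.29 / 0.71 if the README's consistency claim holds.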
|
human_test.jsonl CHANGED

The diff for this file is too large to render. See raw diff.

human_train.jsonl CHANGED

The diff for this file is too large to render. See raw diff.
synthetic_train.jsonl CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1e06a98bc289b05fb179d7bb5857cc28746e0fa9755e5c9464b7cd266f5450a3
+size 17048559
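The `synthetic_train.jsonl` change above rewrites only a Git LFS pointer file, not inline data: the actual JSONL blob lives in LFS storage, addressed by the sha256 oid. As a sketch, the three-field pointer format (per the spec URL on the `version` line) can be parsed and sanity-checked with the standard library alone:

```python
import re

def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into (version URL, sha256 digest, byte size)."""
    # Each pointer line is "key value"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    oid = fields["oid"]
    # The oid must be a sha256 digest: "sha256:" followed by 64 hex chars.
    assert oid.startswith("sha256:") and re.fullmatch(r"[0-9a-f]{64}", oid[7:])
    return fields["version"], oid[7:], int(fields["size"])

# The new pointer content from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1e06a98bc289b05fb179d7bb5857cc28746e0fa9755e5c9464b7cd266f5450a3
size 17048559
"""
version, digest, size = parse_lfs_pointer(pointer)
```

After fetching the real file, hashing its bytes with sha256 and comparing against `digest` (and its length against `size`, 17048559 bytes here) verifies the download matches the commit.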