Tasks: Token Classification
Modalities: Text
Formats: parquet
Languages: Thai
Size: 100K - 1M
Tags: word-tokenization

Commit: fix task_ids
README.md (changed)
```diff
@@ -15,10 +15,11 @@ source_datasets:
 - original
 task_categories:
 - token-classification
-task_ids:
-- token-classification-other-word-tokenization
+task_ids: []
 paperswithcode_id: null
 pretty_name: best2009
+tags:
+- word-tokenization
 ---
 
 # Dataset Card for `best2009`
@@ -187,4 +188,4 @@ Character type features:
 
 ### Contributions
 
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
```
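With this commit applied, the relevant portion of the card's YAML front matter should read roughly as follows (a sketch reconstructed from the new side of the diff, not the full header):

```yaml
source_datasets:
- original
task_categories:
- token-classification
task_ids: []          # the free-form ID below was removed
# was: - token-classification-other-word-tokenization
paperswithcode_id: null
pretty_name: best2009
tags:
- word-tokenization   # the old task ID's intent, now expressed as a tag
```

The net effect is that the non-standard `token-classification-other-word-tokenization` task ID is dropped, `task_ids` becomes an empty list, and `word-tokenization` is carried over as an entry under `tags`.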