Tasks: Token Classification
Modalities: Text
Formats: parquet
Languages: Chinese
Size: 100K - 1M
Tags: lexical semantics, word-sense disambiguation, chinese, traditional chinese, chinese wordnet, academia sinica balanced corpus
Update README.md

README.md (changed):

```diff
@@ -100,7 +100,7 @@ A typical instance in the dataset:
 
 ### Data Splits
 
-
+The dataset has the following splits:
 
 | Split Name | Number of Rows |
 | :--------- | :------------- |
```