Datasets · Tasks: Text Generation · Modalities: Text · Formats: json · Languages: Chinese · Size: 100K–1M · Tags: text-correction · License: Apache 2.0
Commit 40f6a4b · Update README.md
Parent(s): 031baba

README.md CHANGED
@@ -10,7 +10,7 @@ task_categories:
---

# Dataset Card for CSC
-
+Chinese spelling correction dataset. (中文拼写纠错数据集)
- **Repository:** https://github.com/shibing624/pycorrector

## Dataset Description

@@ -19,13 +19,13 @@ Chinese Spelling Correction (CSC) is a task to detect and correct misspelled cha

CSC is challenging since many Chinese characters are visually or phonologically similar but with quite different semantic meanings.

+This dataset contains about 270k samples in total, built by merging the original SIGHAN 13/14/15 datasets with the Wang271k dataset. It is stored in JSON format and records the position of each erroneous character.

### Original Dataset Summary

-- test.json and dev.json are the **SIGHAN datasets** (SIGHAN 13/14/15), from the [official csc.html](http://nlp.ee.ncu.edu.tw/resource/csc.html); file size:
-- train.json is the **
+- test.json and dev.json are the **SIGHAN datasets** (SIGHAN 13/14/15), from the [official csc.html](http://nlp.ee.ncu.edu.tw/resource/csc.html); file size: 339 KB, about 4k samples.
+- train.json is the **Wang271k dataset**, from [Automatic-Corpus-Generation (provided by dimmywang)](https://github.com/wdimmy/Automatic-Corpus-Generation/blob/master/corpus/train.sgml); file size: 93 MB, about 270k samples.

If you only want the SIGHAN data, you can select it like this:
```python

@@ -39,6 +39,7 @@ print(test_ds[0])
```

### Supported Tasks and Leaderboards
+Chinese spelling correction.

The dataset is designed for training pretrained language models on the CSC task.

@@ -80,53 +81,6 @@ An example of "train" looks as follows:
|---------------|------:|--:|--:|
| CSC | 251835 | 27981 | 1100 |

-## Dataset Creation
-
-### Curation Rationale
-
-[More Information Needed]
-
-### Source Data
-
-#### Initial Data Collection and Normalization
-
-[More Information Needed]
-
-#### Who are the source language producers?
-
-[More Information Needed]
-
-### Annotations
-
-#### Annotation process
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-[More Information Needed]
-
-### Personal and Sensitive Information
-
-[More Information Needed]
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-## Additional Information
-
-### Dataset Curators
-
-[More Information Needed]
-
### Licensing Information

The dataset is available under the Apache 2.0 license.
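The updated card says each JSON record carries the position of the erroneous characters. As a minimal sketch of what that means (the field names `original_text`, `correct_text`, and `wrong_ids` are assumptions about the record schema, not confirmed by this page), the positions can be derived by comparing the original and corrected strings, which in CSC are the same length:

```python
# Sketch: derive error-character positions from an (original, corrected) pair.
# The field names below are hypothetical; check the actual dataset schema.

def wrong_ids(original: str, corrected: str) -> list[int]:
    """Return 0-based indices where the two equal-length strings differ."""
    assert len(original) == len(corrected), "CSC pairs have equal length"
    return [i for i, (a, b) in enumerate(zip(original, corrected)) if a != b]

record = {
    "original_text": "真麻烦你了。希望你们好好的跳无",
    "correct_text":  "真麻烦你了。希望你们好好的跳舞",
}
record["wrong_ids"] = wrong_ids(record["original_text"], record["correct_text"])
# record["wrong_ids"] -> [14]: only the final character differs (无 vs 舞)
```

Since SIGHAN-style corrections substitute characters in place rather than inserting or deleting them, a simple position-wise comparison is enough to recover the annotation.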