# CL²GEC: A Multi-Discipline Benchmark for Continual Learning in Chinese Literature Grammatical Error Correction
**CL²GEC** is a benchmark for **Chinese grammatical error correction (GEC)** in **scholarly writing** with a **continual-learning** protocol. The corpus covers **10 first-level disciplines** (Law, Management, Education, Economics, Natural Sciences, History, Agricultural Sciences, Literature, Arts, Philosophy). Each sample contains an errorful sentence (`source`) and one or more corrected references (`references`). Standard **train / validation / test** splits are provided and may be used **per-discipline** to study sequential/continual learning behavior such as forgetting and transfer.
---
## Supported Tasks and Leaderboards
**Grammatical Error Correction (GEC)** / **Text-to-Text Generation**
- **Input**: a Chinese sentence containing grammatical/usage errors.
- **Output**: a semantically equivalent, grammatically correct sentence.
**Recommended Metrics**
- GEC metrics: **Precision / Recall / F0.5** (e.g., via ChERRANT).
- Continual-learning (optional): **Average Performance** and **Backward Transfer (BWT)** computed over task sequences defined by the ordered disciplines.
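As a minimal sketch of how these quantities relate, the snippet below computes span-level F0.5 from edit counts and Average Performance / BWT from a lower-triangular task-score matrix (row *i* holds scores on all tasks seen so far after training through task *i*). The helper names are illustrative, not part of any released tooling:

```python
def f_beta(tp, fp, fn, beta=0.5):
    # Precision/recall over edit spans; beta=0.5 weights precision twice as much as recall.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    if p + r == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r)

def avg_performance(R):
    # Mean score over all tasks after training on the final task.
    T = len(R)
    return sum(R[T - 1]) / T

def backward_transfer(R):
    # BWT: mean of (final score on task i) - (score on task i right after learning it).
    # Negative values indicate forgetting.
    T = len(R)
    return sum(R[T - 1][i] - R[i][i] for i in range(T - 1)) / (T - 1)
```

For actual GEC scoring the edit counts should come from a character-level edit extractor such as ChERRANT rather than being tallied by hand.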
---
## Dataset Structure
### Data Instances
Below is a recommended public JSON schema:
```json
{
  "id": "0",
  "source": "总体上看,仍有许多案件以不适用调解制度。",
  "references": [
    "总体上看,依然有许多案件不适宜使用调解制度来解决。"
  ],
  "category": "法学",
  "edits": [
    {
      "src_interval": [7, 9],
      "tgt_interval": [7, 9],
      "src_content": ["不", "适", "用"],
      "tgt_content": ["不", "适", "宜"]
    }
  ]
}
```
### Data Fields
- **id** *(string)*: unique sample identifier.
- **source** *(string)*: original sentence with errors.
- **references** *(list[string])*: one or more corrected sentences.
- **category** *(string)*: first-level discipline.
- **edits** *(list[object], optional)*: token/character-level edits (if provided).
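Assuming each split ships as a JSONL file with the fields above (one JSON object per line; adjust `load_split` if the release uses a single JSON array instead), loading and bucketing samples by discipline for per-task experiments might look like this:

```python
import json
from collections import defaultdict

def load_split(path):
    # Read one sample per line; skips blank lines.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def group_by_discipline(samples):
    # Bucket samples by their first-level discipline label ("category")
    # so each bucket can serve as one task in a continual-learning sequence.
    buckets = defaultdict(list)
    for s in samples:
        buckets[s["category"]].append(s)
    return buckets
```

File names and layout are assumptions; consult the actual release for the concrete paths.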
### Data Splits
| Split | #Samples | Notes |
| ---------- | -------: | ------------------- |
| train | 7,000 | training data |
| validation | 1,000 | development set |
| test | 2,000 | held-out evaluation |
---
## Categories (Disciplines)
Below are the 10 discipline labels (Chinese) with suggested English names:
| Chinese (label in data) | English |
| ----------------------- | ---------- |
| 法学 | Law |
| 管理 | Management |
| 教育 | Education |
| 经济学 | Economics |
| 理学 | Natural Sciences |
| 历史学 | History |
| 农学 | Agricultural Sciences |
| 文学 | Literature |
| 哲学 | Philosophy |
| 艺术学 | Arts |
---
## Collection and Annotation
- **Sources**: extracted from academic PDFs on CNKI, covering 10 first-level and 100 second-level disciplines. Only abstracts and main text are retained; non-linguistic content (references, acknowledgments, formulas, tables, figure captions) is removed. Sentences are segmented with LTP, and the text is anonymized.
- **Annotation**:
  1. Multi-model consistency error detection to screen candidate sentences (e.g., GECToR, Chinese BART);
  2. LLM pre-rewrites used as weak references;
  3. Dual independent annotation by senior annotators with matching subject backgrounds, followed by adjudication to unify style and merge revisions;
  4. 100% review by domain experts to ensure publication-level quality, adding multiple references where necessary.
---
## Intended Uses
- Research on **Chinese GEC** for scholarly prose.
- Cross-domain robustness and **discipline-aware** modeling.
- **Continual learning** studies focusing on forgetting/transfer across disciplines.
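The discipline-ordered protocol described above can be sketched as a sequential fine-tuning loop that records the score matrix used for Average Performance and BWT; `train_fn` and `eval_fn` are placeholders for your own fine-tuning and scoring routines:

```python
def continual_protocol(disciplines, train_fn, eval_fn):
    # Train on disciplines in a fixed order; after each stage, evaluate on
    # every discipline seen so far. Returns a lower-triangular score matrix
    # R where R[i][j] is the score on task j after training through task i.
    R = []
    for d in disciplines:
        train_fn(d)  # fine-tune the current model on discipline d (in place)
        R.append([eval_fn(seen) for seen in disciplines[: len(R) + 1]])
    return R
```

Different discipline orderings yield different forgetting/transfer profiles, so the ordering itself is an experimental variable worth reporting.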
---
## Ethical Considerations & Privacy
- Texts are anonymized and cleaned to remove sensitive information.
- Sentences are drawn from academic texts and contain discipline-specific terminology; models trained on this data and released for public use should state their intended scope and risks to discourage misuse.
- Ensure that upstream content complies with platform/journal usage policies and your chosen **license** clearly states permitted uses.
---
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{qin2025cl2gec,
title = {CL$^2$GEC: A Multi-Discipline Benchmark for Continual Learning in Chinese Literature Grammatical Error Correction},
author = {Shang Qin and Jingheng Ye and Yinghui Li and Hai-Tao Zheng and Qi Li and Jinxiao Shan and Zhixing Li and Hong-Gee Kim},
year = {2025},
eprint = {2509.13672},
archivePrefix = {arXiv},
primaryClass = {cs.CL},
url = {https://arxiv.org/abs/2509.13672}
}
```
---
## Changelog
- **v1.0.0**: initial public release; includes train/validation/test splits, field schema, usage examples, and evaluation guidance.