minhleduc committed
Commit a64af84 · verified · 1 Parent(s): 9b14727

Update README.md

Files changed (1):
  1. README.md +141 -0

README.md CHANGED
@@ -28,4 +28,145 @@ configs:
    path: data/test-*
  - split: validation
    path: data/validation-*
+ license: mit
+ task_categories:
+ - fill-mask
+ - text-generation
+ - token-classification
+ language:
+ - vi
  ---

## Dataset Card for Vietnamese Text Correction Dataset

## Dataset Description

This dataset contains Vietnamese text pairs for training and evaluating text correction models. Each example consists of an erroneous text and its corrected version, making it ideal for:

- **Grammar correction**
- **Spelling correction**
- **Text normalization**
- **Language model fine-tuning**

### Dataset Summary

- **Language**: Vietnamese (vi)
- **Format**: Text correction pairs
- **Size**: ~4.0M examples across train/validation/test splits
- **Domain**: General Vietnamese text from various sources
- **Collection Method**: Automated collection and human verification

## Dataset Structure

### Data Instances

A typical data point looks like this:

```json
{
  "correct_text": "Đây là một câu tiếng Việt chuẩn xác.",
  "error_text": "Đây là một câu tieengs Việt chuẩn xác.",
  "__index_level_0__": 42
}
```

### Data Fields

- **correct_text**: The corrected Vietnamese text (string)
- **error_text**: The original text with errors (string)
- **__index_level_0__**: An integer index column carried over from the original dataframe export (visible in the example above); it can safely be ignored or dropped

### Data Splits

| Split | Examples | Size |
|-------|----------|------|
| **train** | 3,175,684 | 895 MB |
| **test** | 396,961 | 112 MB |
| **validation** | 396,961 | 112 MB |
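
These splits and fields can be verified directly after loading; the snippet below uses only standard 🤗 Datasets attributes and is not specific to this dataset.

```python
from datasets import load_dataset

# Load the full DatasetDict (train/test/validation)
dataset = load_dataset("8Opt/vn-text-correction-0001")

# Inspect the schema and the number of rows per split
print(dataset["train"].features)
print({split: ds.num_rows for split, ds in dataset.items()})
```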

## Dataset Creation

### Source Data

The dataset was created by collecting Vietnamese text from multiple sources including:

- Web crawling of Vietnamese websites
- Social media posts
- News articles
- User-generated content

### Processing Steps

1. **Text Collection**: Gathered raw Vietnamese text from diverse sources
2. **Error Injection**: Applied various error types (spelling, grammar, typos); a sketch of what such injection can look like follows this list
3. **Human Annotation**: Native speakers corrected the erroneous texts
4. **Quality Filtering**: Removed low-quality or nonsensical examples
5. **Deduplication**: Ensured no duplicate text pairs
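
The actual injection rules used for this dataset are not published here; the following is only a minimal, illustrative sketch of how spelling-style errors (e.g. raw keystroke errors like "tieengs" for "tiếng", as in the example above) might be generated from correct text. The mapping and probabilities are assumptions, not the authors' pipeline.

```python
import random

# Illustrative Telex-style substitutions (assumed, not the dataset's real rules)
TELEX_MAP = {"ế": "ees", "ệ": "eej", "ơ": "ow", "ư": "uw", "đ": "dd"}

def inject_errors(text: str, p: float = 0.15) -> str:
    """Randomly corrupt a correct sentence to produce a synthetic error_text."""
    out = []
    for ch in text:
        if ch in TELEX_MAP and random.random() < p:
            out.append(TELEX_MAP[ch])   # spell the diacritic out as raw keystrokes
        elif ch.isalpha() and random.random() < p / 3:
            out.append(ch + ch)         # simple doubled-letter typo
        else:
            out.append(ch)
    return "".join(out)

print(inject_errors("Đây là một câu tiếng Việt chuẩn xác."))
```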

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("8Opt/vn-text-correction-0001")

# Access different splits
train_data = dataset['train']
test_data = dataset['test']
val_data = dataset['validation']

# Example usage
example = train_data[0]
print(f"Original: {example['error_text']}")
print(f"Corrected: {example['correct_text']}")
```
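
Because the train split alone is close to 900 MB, it may be convenient to stream the data instead of downloading it all up front. This uses the standard `streaming=True` option of 🤗 Datasets and is a usage suggestion, not part of the original card:

```python
from datasets import load_dataset

# Iterate over examples without materializing the full split on disk
streamed = load_dataset("8Opt/vn-text-correction-0001", split="train", streaming=True)

for i, example in enumerate(streamed):
    print(example["error_text"], "->", example["correct_text"])
    if i >= 2:  # just peek at the first few rows
        break
```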

### For Training

```python
# Prepare for training a correction model
def preprocess_function(examples):
    inputs = [f"Correct this text: {err}" for err in examples['error_text']]
    targets = examples['correct_text']
    return {'input_text': inputs, 'target_text': targets}

# Apply preprocessing (this only builds the text pairs; tokenization happens afterwards)
processed_datasets = dataset.map(preprocess_function, batched=True)
```
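
To turn these text pairs into model inputs, they still need to be tokenized for a sequence-to-sequence model. The sketch below uses `google/mt5-small` purely as an example checkpoint; the model choice and maximum lengths are assumptions, not recommendations from the dataset authors:

```python
from transformers import AutoTokenizer

# Any seq2seq tokenizer with Vietnamese coverage works; mT5 is just an illustration
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

def tokenize_function(examples):
    model_inputs = tokenizer(examples["input_text"], max_length=256, truncation=True)
    labels = tokenizer(text_target=examples["target_text"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized_datasets = processed_datasets.map(tokenize_function, batched=True)
```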

## Evaluation

### Metrics

Common evaluation metrics for this dataset include (a minimal scoring example follows this list):

- **BLEU Score**: Measures n-gram overlap with reference corrections
- **ROUGE Score**: Measures recall-oriented n-gram overlap with reference corrections
- **Character/Word Error Rate (CER/WER)**: Traditional correction metrics
- **Human Evaluation**: Manual assessment of correction quality
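
As a hedged illustration (the card does not prescribe a particular toolkit), several of these metrics can be computed with the 🤗 `evaluate` library; the predictions below are placeholders:

```python
import evaluate

# Placeholder model outputs and their references
predictions = ["Đây là một câu tiếng Việt chuẩn xác."]
references = ["Đây là một câu tiếng Việt chuẩn xác."]

cer = evaluate.load("cer")          # character error rate
bleu = evaluate.load("sacrebleu")   # corpus BLEU

print("CER:", cer.compute(predictions=predictions, references=references))
print("BLEU:", bleu.compute(predictions=predictions, references=[[r] for r in references])["score"])
```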

## Limitations and Bias

- **Domain Coverage**: The dataset may not cover all Vietnamese domains equally
- **Error Types**: Focuses on common errors; rare error types may be underrepresented
- **Regional Variations**: Differences between Northern, Central, and Southern Vietnamese may not be evenly represented
- **Contemporary Usage**: May not capture very recent slang or terminology

## Ethical Considerations

- All text data is publicly available Vietnamese content
- No personally identifiable information is included
- The dataset is intended for research and educational purposes only

## Contributing

We welcome contributions! If you find issues or want to improve the dataset:

1. Open an issue on the dataset repository: https://huggingface.co/datasets/8Opt/vn-text-correction-0001
2. Submit a pull request with improvements
3. Report any data quality issues or annotation errors

## Contact

For questions or feedback about this dataset, please contact: minh.leduc.0210@gmail.com