---
dataset_info:
  features:
  - name: correct_text
    dtype: string
  - name: error_text
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 895314409
    num_examples: 3175684
  - name: test
    num_bytes: 111815675
    num_examples: 396961
  - name: validation
    num_bytes: 112054245
    num_examples: 396961
  download_size: 742034799
  dataset_size: 1119184329
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
license: mit
task_categories:
- fill-mask
- text-generation
- token-classification
language:
- vi
---
# Dataset Card for Vietnamese Text Correction Dataset
## Dataset Description
This dataset contains Vietnamese text pairs for training and evaluating text correction models. Each example consists of an erroneous text and its corrected version, making it ideal for:
- **Grammar correction**
- **Spelling correction**
- **Text normalization**
- **Language model fine-tuning**
### Dataset Summary
- **Language**: Vietnamese (vi)
- **Format**: Text correction pairs
- **Size**: ~4.0M examples across train/validation/test splits
- **Domain**: General Vietnamese text from various sources
- **Collection Method**: Automated collection and human verification
## Dataset Structure
### Data Instances
A typical data point looks like this:
```json
{
  "correct_text": "Đây là một câu tiếng Việt chuẩn xác.",
  "error_text": "Đây là một câu tieengs Việt chuẩn xác.",
  "__index_level_0__": 42
}
```
### Data Fields
- **correct_text**: The corrected Vietnamese text (string)
- **error_text**: The original text with errors (string)
- **__index_level_0__**: A residual integer index carried over from dataset construction (int64); it is not needed for training and can be safely dropped
### Data Splits
| Split | Examples | Size |
|-------|----------|------|
| **train** | 3,175,684 | 895 MB |
| **test** | 396,961 | 112 MB |
| **validation** | 396,961 | 112 MB |
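
The counts in the table above work out to roughly an 80/10/10 split, which can be verified with a few lines of arithmetic:

```python
# Split sizes taken from the table above
splits = {"train": 3_175_684, "test": 396_961, "validation": 396_961}
total = sum(splits.values())  # 3,969,606 examples overall

for name, n in splits.items():
    print(f"{name}: {n / total:.1%}")
```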
## Dataset Creation
### Source Data
The dataset was created by collecting Vietnamese text from multiple sources including:
- Web crawling of Vietnamese websites
- Social media posts
- News articles
- User-generated content
### Processing Steps
1. **Text Collection**: Gathered raw Vietnamese text from diverse sources
2. **Error Injection**: Applied various error types (spelling, grammar, typos)
3. **Human Annotation**: Native speakers corrected the erroneous texts
4. **Quality Filtering**: Removed low-quality or nonsensical examples
5. **Deduplication**: Ensured no duplicate text pairs
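
To illustrate step 2, the sketch below applies a single random character-level typo (swap, drop, or duplicate). This is a hypothetical illustration only; the actual error-injection pipeline and error distribution used for this dataset are not published here.

```python
import random

def inject_typo(text: str, rng: random.Random) -> str:
    """Apply one random character-level typo: swap, drop, or duplicate.

    Hypothetical sketch of error injection, not the dataset's real pipeline.
    """
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    op = rng.choice(["swap", "drop", "dup"])
    if op == "swap":
        # Transpose two adjacent characters
        return text[:i] + text[i + 1] + text[i] + text[i + 2:]
    if op == "drop":
        # Delete one character
        return text[:i] + text[i + 1:]
    # Duplicate one character
    return text[:i] + text[i] + text[i:]

rng = random.Random(0)
correct = "Đây là một câu tiếng Việt chuẩn xác."
print(inject_typo(correct, rng))
```

A production pipeline would also model Vietnamese-specific errors such as diacritic loss and Telex-style typing mistakes (e.g. "tiếng" → "tieengs", as in the example above).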
## Usage
### Loading the Dataset
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("8Opt/vn-text-correction-0001")

# Access different splits
train_data = dataset['train']
test_data = dataset['test']
val_data = dataset['validation']

# Example usage
example = train_data[0]
print(f"Original: {example['error_text']}")
print(f"Corrected: {example['correct_text']}")
```
### For Training
```python
# Prepare for training a correction model
def preprocess_function(examples):
    inputs = [f"Correct this text: {err}" for err in examples['error_text']]
    targets = examples['correct_text']
    return {'input_text': inputs, 'target_text': targets}

# Apply preprocessing (tokenization would follow in a separate step)
processed_datasets = dataset.map(preprocess_function, batched=True)
```
## Evaluation
### Metrics
Common evaluation metrics for this dataset include:
- **BLEU Score**: Measures n-gram overlap with reference corrections
- **ROUGE Score**: Measures overlap between the model's corrections and the references
- **Character/Word Error Rate**: Traditional correction metrics
- **Human Evaluation**: Manual assessment of correction quality
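
Character Error Rate is straightforward to compute from scratch; the sketch below implements it via Levenshtein distance (edits divided by reference length):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cer(hypothesis: str, reference: str) -> float:
    """Character Error Rate: edit distance normalized by reference length."""
    return edit_distance(hypothesis, reference) / max(len(reference), 1)

print(cer("Đây là một câu tieengs Việt.", "Đây là một câu tiếng Việt."))
```

Word Error Rate is the same computation applied to whitespace-split token lists instead of characters.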
## Limitations and Bias
- **Domain Coverage**: Dataset may not cover all Vietnamese domains equally
- **Error Types**: Focuses on common errors; rare error types may be underrepresented
- **Regional Variations**: Northern/Central/Southern Vietnamese differences may exist
- **Contemporary Usage**: May not capture very recent slang or terminology
## Ethical Considerations
- All text data is publicly available Vietnamese content
- No personally identifiable information included
- Dataset intended for research and educational purposes only
## Contributing
We welcome contributions! If you find issues or want to improve the dataset:
1. Open an issue on the [dataset repository](https://huggingface.co/datasets/8Opt/vn-text-correction-0001)
2. Submit a pull request with improvements
3. Report any data quality issues or annotation errors
## Contact
For questions or feedback about this dataset, please contact: minh.leduc.0210@gmail.com