---
dataset_info:
  features:
  - name: correct_text
    dtype: string
  - name: error_text
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 895314409
    num_examples: 3175684
  - name: test
    num_bytes: 111815675
    num_examples: 396961
  - name: validation
    num_bytes: 112054245
    num_examples: 396961
  download_size: 742034799
  dataset_size: 1119184329
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
license: mit
task_categories:
- fill-mask
- text-generation
- token-classification
language:
- vi
---
# Dataset Card for Vietnamese Text Correction Dataset

## Dataset Description
This dataset contains Vietnamese text pairs for training and evaluating text correction models. Each example consists of an erroneous text and its corrected version, making it ideal for:
- Grammar correction
- Spelling correction
- Text normalization
- Language model fine-tuning
## Dataset Summary
- Language: Vietnamese (vi)
- Format: Text correction pairs
- Size: ~4.0M examples across train/validation/test splits
- Domain: General Vietnamese text from various sources
- Collection Method: Automated collection and human verification
## Dataset Structure

### Data Instances
A typical data point looks like this:

```json
{
  "correct_text": "Đây là một câu tiếng Việt chuẩn xác.",
  "error_text": "Đây là một câu tieengs Việt chuẩn xác.",
  "__index_level_0__": 42
}
```
### Data Fields

- `correct_text`: The corrected Vietnamese text (string)
- `error_text`: The original text with errors (string)
- `__index_level_0__`: An integer index column carried over from the original pandas DataFrame; it can safely be ignored or dropped
### Data Splits
| Split | Examples | Size |
|---|---|---|
| train | 3,175,684 | 895 MB |
| test | 396,961 | 112 MB |
| validation | 396,961 | 112 MB |
## Dataset Creation

### Source Data
The dataset was created by collecting Vietnamese text from multiple sources including:
- Web crawling of Vietnamese websites
- Social media posts
- News articles
- User-generated content
### Processing Steps

1. Text Collection: Gathered raw Vietnamese text from diverse sources
2. Error Injection: Applied various error types (spelling, grammar, typos)
3. Human Annotation: Native speakers corrected the erroneous texts
4. Quality Filtering: Removed low-quality or nonsensical examples
5. Deduplication: Ensured no duplicate text pairs
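The error-injection step can be sketched in miniature. The card's own example pair ("tiếng" → "tieengs") suggests keyboard/Telex-style noise; the sketch below implements only one common error type, diacritic stripping, and its function names are illustrative, not the dataset's actual pipeline:

```python
import random
import unicodedata


def strip_diacritics(text: str) -> str:
    """Drop Vietnamese tone and vowel marks (the 'typed without accents' error)."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")


def inject_errors(text: str, rate: float = 0.2, seed: int = 0) -> str:
    """Strip diacritics from a random subset of words to create an error_text."""
    rng = random.Random(seed)
    return " ".join(
        strip_diacritics(word) if rng.random() < rate else word
        for word in text.split()
    )
```

Note that `strip_diacritics` only removes combining marks, so precomposed letters without a decomposition (e.g. "đ") pass through unchanged; a real pipeline would cover more error types.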
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("8Opt/vn-text-correction-0001")

# Access the individual splits
train_data = dataset["train"]
test_data = dataset["test"]
val_data = dataset["validation"]

# Inspect one example
example = train_data[0]
print(f"Original:  {example['error_text']}")
print(f"Corrected: {example['correct_text']}")
```
### For Training

```python
# Prepare input/target pairs for a correction model
def preprocess_function(examples):
    inputs = [f"Correct this text: {err}" for err in examples["error_text"]]
    targets = examples["correct_text"]
    return {"input_text": inputs, "target_text": targets}


# Apply preprocessing to every split (still plain text; tokenize afterwards)
processed_datasets = dataset.map(preprocess_function, batched=True)
```
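The preprocessing step can be sanity-checked on a toy batch (a plain dict of column lists, the same shape `Dataset.map(..., batched=True)` passes in) without downloading the full dataset:

```python
# A toy batch shaped like what datasets passes to a batched map function.
batch = {
    "error_text": ["Đây là một câu tieengs Việt."],
    "correct_text": ["Đây là một câu tiếng Việt."],
}


def preprocess_function(examples):
    """Turn error/correct columns into input/target text pairs."""
    inputs = [f"Correct this text: {err}" for err in examples["error_text"]]
    return {"input_text": inputs, "target_text": examples["correct_text"]}


out = preprocess_function(batch)
print(out["input_text"][0])
print(out["target_text"][0])
```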
## Evaluation

### Metrics
Common evaluation metrics for this dataset include:
- BLEU Score: Measures n-gram overlap with reference corrections
- ROUGE Score: Measures recall-oriented overlap with the reference correction
- Character/Word Error Rate: Traditional correction metrics
- Human Evaluation: Manual assessment of correction quality
## Limitations and Bias
- Domain Coverage: Dataset may not cover all Vietnamese domains equally
- Error Types: Focuses on common errors; rare error types may be underrepresented
- Regional Variations: Northern, Central, and Southern Vietnamese usage may be unevenly represented
- Contemporary Usage: May not capture very recent slang or terminology
## Ethical Considerations
- All text data is publicly available Vietnamese content
- No personally identifiable information included
- Dataset intended for research and educational purposes only
## Contributing
We welcome contributions! If you find issues or want to improve the dataset:
- Open an issue on the [dataset repository](https://huggingface.co/datasets/8Opt/vn-text-correction-0001)
- Submit a pull request with improvements
- Report any data quality issues or annotation errors
## Contact

For questions or feedback about this dataset, please contact: minh.leduc.0210@gmail.com