---
language: en
license: mit
datasets:
- jfleg
tags:
- grammar-correction
- t5
- english
pipeline_tag: text2text-generation
widget:
- text: 'correct grammar: She dont like to eat vegetables but she like fruits.'
- text: 'correct grammar: They goes to the store every day.'
- text: 'correct grammar: He have been working here for five years.'
---

# Grammar Correction Model
This model is fine-tuned to correct grammatical errors in English text. It's based on T5 and specifically trained on essay correction data.
## Model Description
- Model Type: T5
- Task: Grammar Correction
- Training Data: Custom essay dataset with grammatical errors and corrections
- Output: Grammatically corrected text
## Usage

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load model and tokenizer
model = T5ForConditionalGeneration.from_pretrained("Tegence/grammar-correction-model")
tokenizer = T5Tokenizer.from_pretrained("Tegence/grammar-correction-model")

# Prepare input (add the prefix "correct grammar: ")
incorrect_text = "She dont like to eat vegetables but she like fruits."
input_text = f"correct grammar: {incorrect_text}"

# Tokenize and generate (max_new_tokens avoids truncation by the short default limit)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=64)
corrected_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(corrected_text)
# Expected output: "She doesn't like to eat vegetables but she likes fruits."
```
## Limitations
- Works best with English text
- Performance may vary for technical or domain-specific content
- Very long or complex sentences may be challenging to correct
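For long passages, one common workaround is to split the text into sentences and correct each one independently. A minimal sketch using only the standard library (the `correct_long_text` helper and its naive regex splitter are illustrative, not part of this model's API; pass in any per-sentence corrector, such as a wrapper around the `generate` call from the Usage section):

```python
import re

def split_sentences(text):
    """Naive sentence splitter: break on ., !, or ? followed by whitespace."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

def correct_long_text(text, correct_fn):
    """Correct a long passage sentence by sentence.

    `correct_fn` is any callable that corrects a single sentence.
    """
    return " ".join(correct_fn(s) for s in split_sentences(text))

# Demo with an identity "corrector" so the sketch runs without the model
print(correct_long_text("She dont like vegetables. They goes to the store.", lambda s: s))
```

Note that this splitter is intentionally simple; it will mis-handle abbreviations like "e.g." and is only meant to show the chunking pattern.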
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{grammar-correction-model,
  author       = {AdmitEase},
  title        = {Grammar Correction Model},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Tegence/grammar-correction-model}}
}
```