---
language: en
license: mit
datasets:
  - jfleg
tags:
  - grammar-correction
  - t5
  - english
pipeline_tag: text2text-generation
widget:
  - text: "correct grammar: She dont like to eat vegetables but she like fruits."
  - text: "correct grammar: They goes to the store every day."
  - text: "correct grammar: He have been working here for five years."
---

# Grammar Correction Model

This model corrects grammatical errors in English text. It is a fine-tuned T5 model trained on essay-correction data.

## Model Description

- **Model Type:** T5
- **Task:** Grammar Correction
- **Training Data:** Custom essay dataset with grammatical errors and corrections
- **Output:** Grammatically corrected text

## Usage

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load model and tokenizer
model = T5ForConditionalGeneration.from_pretrained("Tegence/grammar-correction-model")
tokenizer = T5Tokenizer.from_pretrained("Tegence/grammar-correction-model")

# Prepare input (add the prefix "correct grammar: ")
incorrect_text = "She dont like to eat vegetables but she like fruits."
input_text = f"correct grammar: {incorrect_text}"

# Tokenize and generate (raise the output length cap so corrections are not truncated)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=64)
corrected_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(corrected_text)
# Expected output: "She doesn't like to eat vegetables but she likes fruits."
```

## Limitations

- Works best with English text
- Performance may vary for technical or domain-specific content
- Very long or complex sentences may be challenging to correct
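Since very long inputs are a known weak spot, one practical workaround is to split a passage into sentences and correct each one independently. A minimal sketch of the input-preparation step (the naive regex splitter and the helper names are illustrative assumptions, not part of the model's API):

```python
import re

def split_sentences(text):
    # Naive splitter: break after ., !, or ? followed by whitespace
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def build_inputs(text):
    # Prepend the "correct grammar: " task prefix this model expects
    return [f"correct grammar: {s}" for s in split_sentences(text)]

inputs = build_inputs("She dont like vegetables. They goes to the store every day.")
for line in inputs:
    print(line)
# correct grammar: She dont like vegetables.
# correct grammar: They goes to the store every day.
```

Each prefixed string can then be run through the tokenizer/`generate` loop from the Usage section; a naive splitter will mishandle abbreviations like "e.g.", so a proper sentence tokenizer is preferable for real documents.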

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{grammar-correction-model,
  author = {AdmitEase},
  title = {Grammar Correction Model},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Tegence/grammar-correction-model}}
}
```