Instructions for using grammarly/coedit-large with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
- Notebooks
  - Google Colab
  - Kaggle

How to use grammarly/coedit-large with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-large")
model = AutoModelForSeq2SeqLM.from_pretrained("grammarly/coedit-large")
```
Commit a4ecfba (parent: 2f1c947): Update README.md

README.md CHANGED
```diff
@@ -70,7 +70,7 @@ from transformers import AutoTokenizer, T5ForConditionalGeneration
 tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-large")
 model = T5ForConditionalGeneration.from_pretrained("grammarly/coedit-large")
-input_text = 'Fix grammatical errors in this sentence:
+input_text = 'Fix grammatical errors in this sentence: When I grow up, I start to understand what he said is quite right.'
 input_ids = tokenizer(input_text, return_tensors="pt").input_ids
 outputs = model.generate(input_ids, max_length=256)
 edited_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
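With the change above, the README snippet forms a complete inference example. A minimal end-to-end sketch of that flow, assuming the `transformers` library is installed and the checkpoint can be downloaded (the output text is model-dependent, so none is shown here):

```python
# End-to-end grammar correction with grammarly/coedit-large.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-large")
model = AutoModelForSeq2SeqLM.from_pretrained("grammarly/coedit-large")

# The task instruction is given as a natural-language prefix before the sentence.
input_text = "Fix grammatical errors in this sentence: When I grow up, I start to understand what he said is quite right."

# Tokenize, generate, and decode the edited sentence.
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=256)
edited_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(edited_text)
```

Note that `AutoModelForSeq2SeqLM` resolves to `T5ForConditionalGeneration` for this checkpoint, so either class works interchangeably here.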