## Use with the Transformers library
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("waleko/codereviewer-finetuned-msg")
model = AutoModelForSeq2SeqLM.from_pretrained("waleko/codereviewer-finetuned-msg")
```
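Once the checkpoint is loaded, comment generation follows the standard seq2seq pipeline: tokenize a code diff, call `generate`, and decode the result. A minimal sketch, assuming the model accepts raw diff text as input (the diff below is a made-up example, and the decoding parameters are illustrative, not the ones used in the paper):

```python
# Sketch: generate a review comment for a code change.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("waleko/codereviewer-finetuned-msg")
model = AutoModelForSeq2SeqLM.from_pretrained("waleko/codereviewer-finetuned-msg")

# Hypothetical diff hunk used as the model input.
diff = "-    return a / b\n+    return a / b if b else 0"

inputs = tokenizer(diff, return_tensors="pt", truncation=True, max_length=512)
# Beam search usually yields more fluent messages than greedy decoding.
outputs = model.generate(**inputs, max_length=128, num_beams=4)
comment = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(comment)
```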
## Quick Links

- CodeReviewer

## Fine-tuning Process

We fine-tuned the original CodeReviewer model on the Comment Generation dataset using a single A100 GPU, 8 CPU cores, and 20 GB of RAM. Fine-tuning ran for 12 hours.

For further information, please see the original CodeReviewer paper.
