Instructions for using microsoft/codereviewer with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers

    How to use microsoft/codereviewer with Transformers:

    ```python
    # Load model directly
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("microsoft/codereviewer")
    model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/codereviewer")
    ```
- Notebooks
  - Google Colab
  - Kaggle
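CodeReviewer is a Seq2Seq model pre-trained on code diffs, and the CodeReviewer paper describes marking each line of a diff hunk with special tokens such as `<add>`, `<del>`, and `<keep>` before feeding it to the model. As a hedged sketch of that preprocessing step, the helper below (`format_diff_hunk` is a hypothetical name, not part of the Transformers API, and the exact marker tokens are an assumption to verify against the model's tokenizer vocabulary) converts a unified-diff hunk into a marker-prefixed string:

```python
def format_diff_hunk(hunk: str) -> str:
    """Prefix each diff line with a CodeReviewer-style marker token.

    Lines beginning with '+' become '<add>', lines beginning with '-'
    become '<del>', and context lines become '<keep>'. The tokens mirror
    the input format described in the CodeReviewer paper; confirm them
    against the tokenizer's special-token vocabulary before relying on this.
    """
    out = []
    for line in hunk.splitlines():
        if line.startswith("+"):
            out.append("<add>" + line[1:])
        elif line.startswith("-"):
            out.append("<del>" + line[1:])
        else:
            # Context line: strip the leading diff space, keep the code.
            out.append("<keep>" + line.lstrip(" "))
    return "".join(out)

hunk = "-    return a+b\n+    return a + b"
print(format_diff_hunk(hunk))
# The resulting string would then be tokenized with the tokenizer
# loaded above and passed to model.generate() to draft a review comment.
```

This is only the input-preparation side; decoding a review comment is the usual `tokenizer(...)` followed by `model.generate(...)` pattern shown in the Transformers docs.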
Is it possible to quantise this model using GGUF?
#8
by shrikrishnaholla - opened
I am interested in using this model with Ollama. However, GGUF My Repo doesn't seem to be able to detect or convert it. Has anyone tried to quantise this model? How would one approach it?