How to use kmeanskaran/gemma-code-instruct-finetune-test with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

# This is a causal language model, so the correct pipeline task is
# "text-generation" (not "question-answering").
pipe = pipeline("text-generation", model="kmeanskaran/gemma-code-instruct-finetune-test")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kmeanskaran/gemma-code-instruct-finetune-test")
model = AutoModelForCausalLM.from_pretrained("kmeanskaran/gemma-code-instruct-finetune-test")
```
This model is fine-tuned from Gemma-2B to assist with programming tasks.
Chat template
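If the tokenizer ships a chat template, `tokenizer.apply_chat_template` is the preferred way to build prompts. As a rough sketch of what that template produces, the following assumes the base Gemma turn format (`<start_of_turn>`/`<end_of_turn>` markers); this hand-rolled helper is illustrative, not part of the model repo:

```python
# Sketch of Gemma-style chat formatting. Assumption: this fine-tune keeps the
# base Gemma turn markers; prefer tokenizer.apply_chat_template in practice.
def format_gemma_prompt(messages):
    """Render a list of {"role", "content"} dicts into Gemma's turn format."""
    out = []
    for m in messages:
        # Gemma uses the role name "model" for assistant turns.
        role = "model" if m["role"] == "assistant" else "user"
        out.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    out.append("<start_of_turn>model\n")  # open a turn for the model to fill
    return "".join(out)

prompt = format_gemma_prompt(
    [{"role": "user", "content": "Write a Python function that reverses a string."}]
)
```

The resulting string can be tokenized and passed to `model.generate` directly.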