How to use this model with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="reedmayhew/gemma3-4B-cvwreply",
	filename="gemma3-4B-cvwreply.Q8_0.gguf",
)
llm.create_chat_completion(
	# No official input example is defined for this model;
	# the message below is an illustrative placeholder.
	messages = [
		{"role": "user", "content": "Hey, are we still on for lunch tomorrow?"}
	]
)
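create_chat_completion returns an OpenAI-compatible completion dict, so the generated reply can be read out of choices[0] like this (the prompt is again just a placeholder):

response = llm.create_chat_completion(
	messages = [{"role": "user", "content": "Hey, are we still on for lunch tomorrow?"}]
)
# The response mirrors the OpenAI chat-completion shape.
print(response["choices"][0]["message"]["content"])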

Uploaded fine-tuned model

  • Developed by: reedmayhew
  • License: apache-2.0
  • Fine-tuned from model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit

This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
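The training script isn't part of this card, but a minimal sketch of the standard Unsloth + TRL SFT workflow the note above refers to could look like the following. The dataset file, LoRA rank, and all hyperparameters are illustrative assumptions, not the settings actually used for this fine-tune.

from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the 4-bit base model named on this card.
model, tokenizer = FastLanguageModel.from_pretrained(
	model_name = "unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
	max_seq_length = 2048,  # assumed; not stated on the card
	load_in_4bit = True,
)

# Attach LoRA adapters; rank and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
	model,
	r = 16,
	target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"],
)

trainer = SFTTrainer(
	model = model,
	tokenizer = tokenizer,
	# Hypothetical dataset of reply examples, one "text" field per row.
	train_dataset = load_dataset("json", data_files = "replies.jsonl", split = "train"),
	args = SFTConfig(
		dataset_text_field = "text",
		per_device_train_batch_size = 2,
		max_steps = 60,
		output_dir = "outputs",
	),
)
trainer.train()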

  • Format: GGUF (8-bit quantization, Q8_0)
  • Model size: 4B params
  • Architecture: gemma3

