How to use from vLLM
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "cassioblaz/gemma3"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "cassioblaz/gemma3",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
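You can also call the OpenAI-compatible endpoint from Python with the openai client. A minimal sketch, assuming the default serve address and that no --api-key flag was passed (vLLM then accepts any placeholder key):
# Query the local vLLM server via its OpenAI-compatible API:
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # default address used by vllm serve
    api_key="EMPTY",                      # placeholder; only checked if --api-key was set
)

response = client.chat.completions.create(
    model="cassioblaz/gemma3",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)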
Use Docker
docker model run hf.co/cassioblaz/gemma3:Q8_0
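The Docker command pulls the 8-bit (Q8_0) GGUF build from Hugging Face via Docker Model Runner. Since the repo ships GGUF weights, you can also load them directly with llama-cpp-python and skip Docker entirely. A minimal sketch, assuming a Q8_0 .gguf file exists in the repo (the exact filename is a guess; check the repo's file list):
# Load the GGUF weights straight from the Hugging Face repo:
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="cassioblaz/gemma3",
    filename="*Q8_0.gguf",  # glob; assumes a Q8_0 file is present, as the Docker tag above suggests
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(out["choices"][0]["message"]["content"])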

Uploaded fine-tuned model

  • Developed by: cassioblaz
  • License: apache-2.0
  • Fine-tuned from model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit

This gemma3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Model details

  • Format: GGUF
  • Model size: 12B params
  • Architecture: gemma3
  • Quantization: 8-bit (Q8_0)
