---
tags:
  - gguf
  - llama.cpp
  - unsloth
---

# gemma-270m-thinking-0126: GGUF

This model was fine-tuned and converted to GGUF format using Unsloth.

## Example usage

- For text-only LLMs: `./llama.cpp/llama-cli -hf Ma7ee7/gemma-270m-thinking-0126 --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf Ma7ee7/gemma-270m-thinking-0126 --jinja`

## Available model files

- `gemma-3-270m-it.Q8_0.gguf`

## Ollama

An Ollama Modelfile is included for easy deployment.
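For reference, a minimal Modelfile for a single-quant GGUF repo typically looks like the sketch below. The `FROM` path assumes the quant file listed above; the Modelfile actually shipped with this repo may differ (e.g. it may also set a chat `TEMPLATE` or parameters).

```
# Point Ollama at the local GGUF weights (path is an assumption based on
# the file listed in this repo; adjust to wherever you downloaded it)
FROM ./gemma-3-270m-it.Q8_0.gguf
```

With the Modelfile in the current directory, the model can be registered and run with `ollama create gemma-270m-thinking -f Modelfile` followed by `ollama run gemma-270m-thinking` (the model name here is arbitrary).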

## Note

The model's BOS token behavior was adjusted for GGUF compatibility. Training ran 2x faster with Unsloth.