---
language:
- en
license: apache-2.0
tags:
- text-generation
- gguf
- gemma
- summarization
base_model: google/gemma-3-270m-it
model_type: gemma
---
# summarizer

Fine-tuned Gemma-3-270M for task summarization and git branch naming.

## Model Details

- **Base Model**: google/gemma-3-270m-it
- **Format**: GGUF (quantized for efficient inference)
- **Quantization**: Q4_K_M
- **Use Case**: Generating concise task titles and git branch names
## Training

- **Training Run**: [https://wandb.ai/vanpelt/summarizer/runs/0t4lcgpb](https://wandb.ai/vanpelt/summarizer/runs/0t4lcgpb)
## Usage

### With Ollama

```bash
ollama pull hf.co/vanpelt/summarizer
ollama run hf.co/vanpelt/summarizer
```
### With llama.cpp

```bash
# Download the GGUF file
huggingface-cli download vanpelt/summarizer gemma3-270m-summarizer-Q4_K_M.gguf

# Run with llama.cpp (the CLI binary is `llama-cli` in recent builds; older builds call it `main`)
./llama-cli -m gemma3-270m-summarizer-Q4_K_M.gguf -p "Your prompt here"
```
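The GGUF file can also be loaded from Python via `llama-cpp-python`. The sketch below assumes the library is installed (`pip install llama-cpp-python`) and the GGUF file has already been downloaded to the working directory; the prompt wording in `build_prompt` is illustrative, since the exact prompt format used during fine-tuning is not documented in this card.

```python
# Minimal sketch of running the quantized model with llama-cpp-python.
# Assumptions (not from this card): llama-cpp-python is installed, the GGUF
# file sits at MODEL_PATH, and the prompt wording below is illustrative.
import os

MODEL_PATH = "gemma3-270m-summarizer-Q4_K_M.gguf"

def build_prompt(task_description: str) -> str:
    """Wrap a raw task description in an illustrative summarization prompt."""
    return (
        "Summarize the following task as a concise title "
        "and a git branch name:\n\n" + task_description
    )

try:
    from llama_cpp import Llama  # pip install llama-cpp-python
except ImportError:
    Llama = None

# Only run inference when the library and model file are actually available.
if Llama is not None and os.path.exists(MODEL_PATH):
    llm = Llama(model_path=MODEL_PATH, n_ctx=2048, verbose=False)
    out = llm(build_prompt("Fix the flaky login test in CI"), max_tokens=64)
    print(out["choices"][0]["text"])
```

The guarded import and file check let the script degrade gracefully when the model has not been downloaded yet.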
## Files

- `tokenizer.json` (31.8 MB)
- `tokenizer_config.json` (1.1 MB)
- `added_tokens.json` (<0.1 MB)
- `chat_template.jinja` (<0.1 MB)
- `Modelfile` (<0.1 MB)
- `template` (<0.1 MB)
- `system` (<0.1 MB)
- `model.safetensors` (511.4 MB)
- `gemma3-270m-summarizer-Q4_K_M.gguf` (241.4 MB)
- `special_tokens_map.json` (<0.1 MB)
- `config.json` (<0.1 MB)
- `params` (<0.1 MB)
- `tokenizer.model` (4.5 MB)