---
base_model: swiss-ai/Apertus-8B-Instruct-2509
tags:
- gguf
- apertus
- coding
- sft
---

# apertus-8b-coding-gguf

GGUF conversion of [Colby/apertus-8b-coding](https://huggingface.co/Colby/apertus-8b-coding), a LoRA fine-tune of [swiss-ai/Apertus-8B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509) for coding assistance.

## Quantizations

| File | Format | Size |
|------|--------|------|
| apertus-8b-coding-f16.gguf | FP16 | ~16 GB |
| apertus-8b-coding-q8_0.gguf | Q8_0 | ~8 GB |
| apertus-8b-coding-q5_k_m.gguf | Q5_K_M | ~5 GB |
| apertus-8b-coding-q4_k_m.gguf | Q4_K_M | ~4 GB |

## Ollama usage

```bash
hf download Colby/apertus-8b-coding-gguf apertus-8b-coding-q4_k_m.gguf
ollama create apertus-coding:8b -f Modelfile  # Modelfile: FROM ./apertus-8b-coding-q4_k_m.gguf
ollama run apertus-coding:8b
```
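
The `ollama create` step above assumes a `Modelfile` exists in the working directory. A minimal sketch of what that file might contain (the `TEMPLATE` and `PARAMETER` lines are illustrative assumptions, not values shipped with this model — only the `FROM` line is required):

```dockerfile
# Point Ollama at the downloaded quantized weights (required)
FROM ./apertus-8b-coding-q4_k_m.gguf

# Optional, illustrative sampling settings for coding tasks
PARAMETER temperature 0.2
PARAMETER num_ctx 8192
```

If you downloaded a different quantization (e.g. `apertus-8b-coding-q8_0.gguf`), change the `FROM` path to match before running `ollama create`.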