---
license: apache-2.0
library_name: llama.cpp
pipeline_tag: text-generation
tags:
- gguf
- quantized
language:
- en
base_model: fdtn-ai/Foundation-Sec-8B-Instruct
base_model_relation: quantized
---
# Foundation-Sec-8B-Instruct — GGUF (Q4_K_M)

Public GGUF quantization (`Q4_K_M`) of `fdtn-ai/Foundation-Sec-8B-Instruct` for local inference with llama.cpp, LM Studio, or Ollama.

- File: `Foundation-Sec-8B-Instruct-Q4_K_M.gguf`
- Pull via Git LFS.
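A minimal sketch of pulling and running the quantized file locally. The repository URL below is a placeholder (this card does not state its own repo id), so substitute the actual Hugging Face path before running:

```shell
# Git LFS is required to fetch the large .gguf file.
git lfs install

# Placeholder repo path — replace <org>/<repo> with this model card's actual location.
git clone https://huggingface.co/<org>/<repo>
cd <repo>

# Run with llama.cpp's CLI: -m selects the model file,
# -p supplies a prompt, -n caps the generated token count.
llama-cli -m Foundation-Sec-8B-Instruct-Q4_K_M.gguf \
  -p "Summarize common TLS misconfigurations." -n 128
```

The same `.gguf` file can be loaded directly in LM Studio, or imported into Ollama via a `Modelfile` whose `FROM` line points at the file.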