---
license: mit
language:
- en
tags:
- gguf
- llama.cpp
- mistral
- instruct
- lora
- horror
- roleplay
pipeline_tag: text-generation
---
<!--
GHOSTAI • HORROR GGUF (7B)
Professional Hugging Face README for a quantized-only GGUF release.
-->
<p align="center">
<img src="https://capsule-render.vercel.app/api?type=waving&color=0:0b0b0f,50:2b0a2a,100:0b0b0f&height=160§ion=header&text=GHOSTAI%20—%20HORROR%20GGUF%20(7B)&fontSize=44&fontColor=EAEAEA&animation=twinkling" />
</p>
<p align="center">
<img alt="Format" src="https://img.shields.io/badge/Format-GGUF-8A2BE2?style=for-the-badge">
<img alt="Runtime" src="https://img.shields.io/badge/Runtime-llama.cpp-5B2C83?style=for-the-badge">
<img alt="Model Size" src="https://img.shields.io/badge/Model-7B-3A0CA3?style=for-the-badge">
<img alt="Theme" src="https://img.shields.io/badge/Theme-Horror-8B0000?style=for-the-badge">
<a href="https://www.linkedin.com/in/ccengineering/">
<img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-ccengineering-0A66C2?style=for-the-badge&logo=linkedin&logoColor=white">
</a>
</p>
<p align="center">
<strong>GHOSTAI — HORROR GGUF (7B)</strong><br/>
A focused, horror-themed 7B model released exclusively in quantized GGUF format for the <strong>llama.cpp</strong> ecosystem.<br/>
<sub>Quantized-only release. No FP16 weights included.</sub>
</p>
---
## Overview
**GHOSTAI** is a compact, atmosphere-driven horror model designed for narrative generation, roleplay, and dark storytelling.
It prioritizes tone, pacing, and vivid imagery over generic assistant behavior.
This repository provides **multiple GGUF quantizations**, allowing you to choose the best balance of quality, speed, and memory usage for your hardware.
The model runs:
- Fully on **CPU**
- With optional **GPU offload** (CUDA / Metal / Vulkan builds of llama.cpp)
Quantization choice is independent of whether you use CPU or GPU.
---
## Files
| File | Quant | Approx size | Rough RAM needed (4k ctx) |
|---|---:|---:|---:|
| `ghostai-horror-7b.Q8_0.gguf` | Q8_0 | ~7.2 GB | ~10–11 GB |
| `ghostai-horror-7b.Q6_K.gguf` | Q6_K | ~5.5 GB | ~8–9 GB |
| `ghostai-horror-7b.Q5_K_M.gguf` | Q5_K_M | ~4.8 GB | ~7–8 GB |
| `ghostai-horror-7b.Q5_K_S.gguf` | Q5_K_S | ~4.7 GB | ~7–8 GB |
| `ghostai-horror-7b.Q4_K_M.gguf` | Q4_K_M | ~4.1 GB | ~6–7 GB |
| `ghostai-horror-7b.Q4_K_S.gguf` | Q4_K_S | ~3.9 GB | ~6–7 GB |
| `ghostai-horror-7b.Q3_K_M.gguf` | Q3_K_M | ~3.3 GB | ~5–6 GB |
| `ghostai-horror-7b.Q3_K_S.gguf` | Q3_K_S | ~3.0 GB | ~5–6 GB |
| `ghostai-horror-7b.Q2_K.gguf` | Q2_K | ~2.5 GB | ~4–5 GB |
| `ghostai-horror-7b.TQ1_0.gguf` | TQ1_0 | ~1.6 GB | ~3–4 GB |
Notes:
- “Rough RAM needed” assumes **~4k context** and typical llama.cpp overhead.
- For **8k context**, plan **+1–2 GB** extra.
- GPU offload can shift some load to VRAM, but you still need system RAM.
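The context-size note above can be sanity-checked with a quick back-of-the-envelope calculation. A minimal sketch, assuming typical 7B Mistral-style dimensions (32 layers, 8 KV heads, head dim 128, fp16 KV cache at 2 bytes per element — these are assumptions, not values read from this model's GGUF metadata):

```shell
# Rough KV-cache footprint: 2 (K and V) * layers * kv_heads * head_dim * ctx * bytes
N_LAYERS=32; N_KV_HEADS=8; HEAD_DIM=128; BYTES=2

for CTX in 4096 8192; do
  KV_BYTES=$(( 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * CTX * BYTES ))
  echo "KV cache at ${CTX} ctx: $(( KV_BYTES / 1024 / 1024 )) MiB"
done
```

Under these assumptions the KV cache alone grows from ~512 MiB at 4k context to ~1 GiB at 8k, which is where most of the "+1–2 GB" headroom goes (the rest is compute buffers).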
---
## Recommended Downloads
- Best default: **`Q4_K_M`**
- More quality (more RAM): **`Q5_K_M`**, **`Q6_K`**, **`Q8_0`**
- Low RAM: **`Q3_K_S`**, **`Q2_K`**
- Ultra-small / experimental: **`TQ1_0`** (expect noticeable quality loss)
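To grab only the quantization you need rather than cloning the whole repo, `huggingface-cli` can download a single file. A sketch — the repo id `ghostai/horror-7b-gguf` is a placeholder; substitute this repository's actual id:

```shell
# Download one quant file into the current directory
huggingface-cli download ghostai/horror-7b-gguf \
  ghostai-horror-7b.Q4_K_M.gguf \
  --local-dir .
```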
---
## Quickstart (llama.cpp)
### 1) Run on CPU
```bash
./llama-cli \
-m ghostai-horror-7b.Q4_K_M.gguf \
-c 4096 \
-t 8 \
-p "You are GHOSTAI. Speak like a calm horror narrator. Keep it tight and vivid."
```
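### 2) Run with GPU offload

With a CUDA, Metal, or Vulkan build of llama.cpp, `-ngl` moves transformer layers onto the GPU. A sketch mirroring the CPU example above (`-ngl 99` is a common shorthand for "offload every layer"; lower the number if VRAM is tight):

```shell
./llama-cli \
  -m ghostai-horror-7b.Q4_K_M.gguf \
  -c 4096 \
  -ngl 99 \
  -p "You are GHOSTAI. Speak like a calm horror narrator. Keep it tight and vivid."
```

As noted in the overview, the quantization you pick is independent of offload: the same `.gguf` file works for CPU-only and GPU-offloaded runs.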