
GHOSTAI β€” Christmas Spirit GGUF (7B)
A holiday-forward 7B model tuned for cozy storytelling, cheerful roleplay, and warm seasonal vibes β€” shipped in GGUF for the llama.cpp ecosystem.
Quantized builds are provided for most users, plus an optional F16 GGUF for maximum fidelity.


Overview

GHOSTAI: Christmas Spirit is designed to produce cozy, wholesome, and festive outputs: winter scenes, gift-giving stories, cheerful dialog, holiday recipes, and family-friendly roleplay.

This repository provides multiple GGUF variants, so you can choose the best balance of quality, speed, and memory usage for your hardware.

You can run it:

  • CPU-only
  • With GPU offload (CUDA / Metal / Vulkan builds of llama.cpp)

Quant choice is independent of CPU vs GPU; GPU offload is controlled by runtime flags (example: -ngl).


Files (this release)

Sizes below reflect the exported files in this repo.

| File | Quant | Approx size | Rough RAM needed (4k ctx) |
|------|-------|-------------|---------------------------|
| christmas_mistral_v1.f16.gguf | f16 | ~13.5 GB | ~16–18 GB |
| christmas_mistral_v1.Q8_0.gguf | Q8_0 | ~7.2 GB | ~10–11 GB |
| christmas_mistral_v1.Q6_K.gguf | Q6_K | ~5.5 GB | ~8–9 GB |
| christmas_mistral_v1.Q5_K_M.gguf | Q5_K_M | ~4.8 GB | ~7–8 GB |
| christmas_mistral_v1.Q5_K_S.gguf | Q5_K_S | ~4.7 GB | ~7–8 GB |
| christmas_mistral_v1.Q4_K_M.gguf | Q4_K_M | ~4.1 GB | ~6–7 GB |
| christmas_mistral_v1.Q4_K_S.gguf | Q4_K_S | ~3.9 GB | ~6–7 GB |
| christmas_mistral_v1.Q3_K_M.gguf | Q3_K_M | ~3.3 GB | ~5–6 GB |
| christmas_mistral_v1.Q3_K_S.gguf | Q3_K_S | ~3.0 GB | ~5–6 GB |
| christmas_mistral_v1.Q2_K.gguf | Q2_K | ~2.5 GB | ~4–5 GB |
| christmas_mistral_v1.TQ1_0.gguf | TQ1_0 | ~1.6 GB | ~3–4 GB |

RAM notes (rough):

  • Assumes ~4k context and typical llama.cpp overhead.
  • For 8k context, plan on an extra 1–2 GB (or more, depending on runner and settings).
  • GPU offload can shift some load to VRAM; you still need system RAM.
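As a rough sanity check on the figures above: total memory is approximately the GGUF file size, plus the KV cache, plus runtime overhead. Here is a minimal sketch, assuming a Mistral-7B-like configuration (32 layers, 8 KV heads via GQA, head dim 128, f16 KV cache) — these constants are assumptions for illustration, not values read from the files:

```python
def kv_cache_bytes(n_ctx, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_elt=2):
    """K and V caches: 2 tensors per layer, each n_ctx * n_kv_heads * head_dim elements."""
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elt

def rough_ram_gb(file_size_gb, n_ctx=4096, overhead_gb=1.0):
    """Model weights (fully loaded) + KV cache + a fixed allowance for buffers/scratch."""
    return file_size_gb + kv_cache_bytes(n_ctx) / 1e9 + overhead_gb

# Q4_K_M (~4.1 GB file) at 4k context:
print(round(rough_ram_gb(4.1), 1))  # ~5.6 GB
```

Under these assumptions the KV cache at 4k context is about 0.5 GB, so the estimate lands near the low end of the table's ranges; real usage varies with the runner and settings.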

Recommended downloads

  • Best default: christmas_mistral_v1.Q4_K_M.gguf
  • Higher quality: Q5_K_M, Q6_K, Q8_0
  • Low RAM: Q3_K_S, Q2_K
  • Ultra-small / experimental: TQ1_0 (expect noticeable quality loss)
  • Maximum fidelity: f16 (largest)
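The recommendations above boil down to "pick the largest quant that fits your RAM budget." A small illustrative helper, using the table's rough 4k-context figures (the file names and numbers are copied from this card; the function itself is just a sketch):

```python
# (file, rough RAM needed in GB at 4k context), best quality first
QUANTS = [
    ("christmas_mistral_v1.f16.gguf", 18),
    ("christmas_mistral_v1.Q8_0.gguf", 11),
    ("christmas_mistral_v1.Q6_K.gguf", 9),
    ("christmas_mistral_v1.Q5_K_M.gguf", 8),
    ("christmas_mistral_v1.Q4_K_M.gguf", 7),
    ("christmas_mistral_v1.Q3_K_S.gguf", 6),
    ("christmas_mistral_v1.Q2_K.gguf", 5),
    ("christmas_mistral_v1.TQ1_0.gguf", 4),
]

def pick_quant(ram_gb):
    """Return the highest-quality file whose rough RAM need fits the budget."""
    for name, need in QUANTS:
        if need <= ram_gb:
            return name
    return None  # nothing fits; free up RAM or reduce context

print(pick_quant(8))  # christmas_mistral_v1.Q5_K_M.gguf
```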

Quickstart (llama.cpp)

CPU-only (simple + portable)

```shell
./llama-cli \
  -m christmas_mistral_v1.Q4_K_M.gguf \
  -ngl 0 \
  -c 4096 \
  -p "You are GHOSTAI Christmas Spirit. Write a cozy winter story set in a small town, with a warm ending."
```
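GPU offload (CUDA / Metal / Vulkan builds)

The same command works with GPU offload: pass a nonzero -ngl (number of layers to offload). A value larger than the layer count (e.g. 99) offloads everything; the layer count and prompt below are illustrative, so lower -ngl if you run out of VRAM:

```shell
./llama-cli \
  -m christmas_mistral_v1.Q4_K_M.gguf \
  -ngl 99 \
  -c 4096 \
  -p "Write a cheerful holiday dialog between two friends decorating a tree."
```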
Model details

  • Format: GGUF
  • Model size: 7B params
  • Architecture: llama