Historical Egyptology 7B (GGUF)

License: MIT | Format: GGUF | Runtime: llama.cpp | Base: Mistral-7B-Instruct

Historical Egyptology 7B is a Mistral-7B-Instruct fine-tune infused with the grandeur, mystery, and wisdom of ancient Egypt.
Perfect for immersive historical roleplay, myth retellings, educational narratives, pharaonic decrees, scribe chronicles, and Nile-soaked storytelling.



✨ Overview

This model was fine-tuned (LoRA r=64, alpha=64, 3 epochs, 860 cleaned examples) on historical Egyptology texts using Unsloth.
It excels at evocative, period-flavored prose, invoking gods (Ra, Anubis, Isis, Thoth), pharaohs, dynasties, mummification rites, pyramid construction, Nile mythology, and the atmosphere of temples and tombs.

Best uses:

  • Deep historical fiction & alternate-history tales
  • Educational explainers (mummification, Book of the Dead, Old/Middle/New Kingdom)
  • Roleplay as pharaoh, high priest, scribe, tomb robber, explorer, or deity
  • Creative local assistants with ancient-Egypt personality

Not optimized for:

  • Strict modern factual accuracy (flavor > precision)
  • Advanced math, coding, or technical reasoning
  • Extremely long contexts without RoPE scaling or similar extensions

Training snapshot (2026-02-05 run):

  • Base: Mistral-7B-Instruct (~7.4B params)
  • LoRA: 167,772,160 trainable params (2.26%)
  • Dataset: 860 examples, max seq len 3072
  • Epochs: 3 | Steps: 162 | Final loss: 1.7763
  • Runtime: ~11.5 hours (1Γ— GPU)
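
The trainable-parameter figure above is consistent with rank-64 LoRA adapters applied to all seven linear projections in each of Mistral-7B's 32 transformer blocks. A back-of-envelope check (the layer dimensions below are standard published Mistral-7B architecture values, not taken from this card):

```python
# Sanity-check the LoRA trainable-parameter count reported above.
HIDDEN = 4096   # model hidden size
KV_DIM = 1024   # 8 KV heads x 128 head dim (grouped-query attention)
FFN = 14336     # feed-forward intermediate size
LAYERS = 32
R = 64          # LoRA rank, per the training snapshot

# (in_features, out_features) of each linear layer LoRA targets
projections = [
    (HIDDEN, HIDDEN),   # q_proj
    (HIDDEN, KV_DIM),   # k_proj
    (HIDDEN, KV_DIM),   # v_proj
    (HIDDEN, HIDDEN),   # o_proj
    (HIDDEN, FFN),      # gate_proj
    (HIDDEN, FFN),      # up_proj
    (FFN, HIDDEN),      # down_proj
]

# Each adapter adds r * (d_in + d_out) parameters (its A and B matrices)
per_layer = sum(R * (d_in + d_out) for d_in, d_out in projections)
total = per_layer * LAYERS
print(total)  # 167772160 - matches the reported figure
```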

📜 Model Details

  • Model name: Historical Egyptology 7B
  • Base model: Mistral-7B-Instruct
  • Fine-tuning: LoRA (merged via Unsloth)
  • Parameters: ~7.4 billion
  • Context length: 3072 native (tested up to 8192+ in llama.cpp)
  • Language: English + ancient Egyptian stylistic terms
  • License: MIT (subject to base model license)
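
Because the fine-tune's native context is 3072 tokens, pushing llama.cpp past that works best with RoPE scaling enabled. A hedged example (linear scaling to 8192; 0.375 = 3072 / 8192 — output quality beyond the native length is not guaranteed for this fine-tune):

```shell
./llama-cli \
  -m egypt-7b-v1.Q4_K_M.gguf \
  -c 8192 \
  --rope-scaling linear \
  --rope-freq-scale 0.375
```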

🗿 Quantized Files

All files are from the same merged fine-tune checkpoint; only the quantization level changes.

| File | Quant | Bits | Approx. size | VRAM est. (4k ctx) | Recommendation |
|------|-------|------|--------------|--------------------|----------------|
| egypt-7b-v1.TQ1_0.gguf | TQ1_0 | ~1 | ~1.5 GB | < 2 GB | Ultra-low memory (experimental) |
| egypt-7b-v1.Q2_K.gguf | Q2_K | ~2.5 | ~2.6 GB | ~3 GB | Very low RAM |
| egypt-7b-v1.Q3_K_S.gguf | Q3_K_S | ~3.5 | ~3.0 GB | ~3.5 GB | Low-memory sweet spot |
| egypt-7b-v1.Q3_K_M.gguf | Q3_K_M | ~3.8 | ~3.3 GB | ~4 GB | Balanced low-RAM |
| egypt-7b-v1.Q4_K_S.gguf | Q4_K_S | ~4.5 | ~3.8 GB | ~4.5 GB | Good quality / low VRAM |
| egypt-7b-v1.Q4_K_M.gguf | Q4_K_M | ~4.8 | ~4.1 GB | ~5 GB | Default – best overall balance |
| egypt-7b-v1.Q5_K_S.gguf | Q5_K_S | ~5.5 | ~4.6 GB | ~5.5 GB | Higher quality |
| egypt-7b-v1.Q5_K_M.gguf | Q5_K_M | ~5.7 | ~4.8 GB | ~6 GB | Recommended for best quality |
| egypt-7b-v1.Q6_K.gguf | Q6_K | ~6.6 | ~5.4 GB | ~6.5 GB | Very good detail |
| egypt-7b-v1.Q8_0.gguf | Q8_0 | 8 | ~7.1 GB | ~8 GB | Near-lossless reference |

Quick picks:

  • Most users → Q4_K_M or Q5_K_M (great balance on 6–8 GB cards)
  • Tight hardware → Q3_K_M or Q4_K_S
  • Maximum fidelity → Q8_0
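
The sizes in the table roughly follow file size ≈ parameter count × bits-per-weight / 8, plus a small overhead for embeddings and metadata. A quick sanity check (the small gap to the listed ~4.1 GB comes from rounding and GB-vs-GiB reporting):

```python
def approx_gguf_size_gb(params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in decimal GB: params * bits / 8 bytes."""
    return params * bits_per_weight / 8 / 1e9

# ~7.4B parameters at Q4_K_M's ~4.8 effective bits per weight
size = approx_gguf_size_gb(7.4e9, 4.8)
print(f"{size:.1f} GB")  # 4.4 GB, in line with the table above
```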

🚀 Usage Examples (llama.cpp)

CLI – basic generation

# -ngl sets the number of layers offloaded to the GPU (0 = CPU only)
./llama-cli \
  -m egypt-7b-v1.Q4_K_M.gguf \
  -ngl 35 \
  -c 4096 \
  --color \
  -p "You are a high priest of Amun-Ra in Karnak during the reign of Thutmose III. A young acolyte asks you to explain the sacred meaning of the benben stone and its connection to creation. Speak in solemn, evocative language." \
  -n 1024
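
llama-cli applies a chat template for you in conversation mode, but if you drive the model through a raw completion API, the prompt should be wrapped in the Mistral-Instruct [INST] format the base model was trained on. A minimal single-turn helper (the function name is illustrative):

```python
def mistral_prompt(user_message: str) -> str:
    """Wrap a single-turn user message in the Mistral-Instruct template."""
    return f"<s>[INST] {user_message} [/INST]"

prompt = mistral_prompt("Describe the weighing of the heart before Osiris.")
print(prompt)
# <s>[INST] Describe the weighing of the heart before Osiris. [/INST]
```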