# GemmaBible
A fine-tuned Gemma 4 E4B model specialized in biblical scholarship, theology, and Bible study. Grounded in the Berean Standard Bible (BSB).
## What it does
- Quotes Scripture precisely from the BSB with proper citations
- Provides Greek and Hebrew word studies with transliterations and Strong's numbers
- Presents Protestant, Catholic, and Orthodox perspectives on debated topics
- Analyzes passages with scholarly hermeneutics and cited sources
- Detects and corrects common misquotations
- Stays within theological boundaries — declines non-theological requests
## Training
- Base model: google/gemma-4-E4B-it (4.5B effective parameters)
- Method: QLoRA (rank 64, alpha 64) with Unsloth
- Data: ~8,000 instruction examples across 11 specialized generators covering comparative theology, Greek/Hebrew exegesis, systematic theology, creedal analysis, and more
- Hardware: NVIDIA RTX PRO 6000 (96GB)
- Epochs: 3 | Final loss: 0.40
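For a sense of what rank-64 LoRA adds on top of the frozen base weights, the sketch below computes the adapter parameter count for a single weight matrix. The dimensions are hypothetical illustrations, not taken from this model's actual configuration:

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    """A rank-r LoRA adapter factors the weight update as B @ A,
    where A is (rank x d_in) and B is (d_out x rank)."""
    return rank * d_in + d_out * rank

# Hypothetical 4096x4096 projection matrix adapted at rank 64:
# ~0.5M trainable adapter weights vs ~16.8M frozen base weights.
params = lora_param_count(4096, 4096, 64)
print(params)  # 524288
```

Because only these small adapter matrices are trained (and the base weights are quantized under QLoRA), the fine-tune fits comfortably on a single GPU.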
## Usage
### With Ollama (GGUF)
Download `merged.Q5_K_M.gguf` and create a `Modelfile`:

```
FROM ./merged.Q5_K_M.gguf
PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER num_ctx 4096
```

Then build and run:

```shell
ollama create gemmabible -f Modelfile
ollama run gemmabible "What does John 3:16 mean?"
```
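The low temperature (0.3) is a sensible default for a model expected to quote Scripture verbatim: it sharpens the token distribution toward the most likely continuation. As a generic illustration (not code from this project), here is how the temperature parameter rescales logits before the softmax:

```python
import math

def softmax_with_temperature(logits, t):
    """Divide logits by t before softmax; t < 1 sharpens the
    distribution, t > 1 flattens it."""
    scaled = [x / t for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
p_sharp = softmax_with_temperature(logits, 0.3)  # near-greedy sampling
p_plain = softmax_with_temperature(logits, 1.0)  # standard softmax
```

At `t = 0.3` the top token's probability mass grows at the expense of the alternatives, which reduces paraphrase drift when reproducing verse text.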
### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("rhemabible/GemmaBible", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("rhemabible/GemmaBible")

# Build a chat-formatted prompt and generate a response
inputs = tokenizer.apply_chat_template([{"role": "user", "content": "What does John 3:16 mean?"}], add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```
## Files
| File | Format | Size | Use |
|---|---|---|---|
| `model.safetensors` | SafeTensors | ~15 GB | Full-precision weights |
| `merged.Q5_K_M.gguf` | GGUF | ~5.5 GB | Ollama / LM Studio / llama.cpp |
## Limitations
- Trained on BSB text; may be less accurate with other Bible translations
- 4.5B parameters — less capacity for nuanced multi-turn theological debate compared to larger models
- Not a substitute for pastoral counsel or formal theological education