Logos Theological 9B (GGUF)
Gemma 2 9B fine-tuned for structural biblical analysis. Identifies patterns such as kenosis, authority, and inversion without claiming theological authority. Engages mystery without resolving it.
Usage
Requires Ollama.
# Create model
ollama create logos-bible -f Modelfile
# Chat
ollama run logos-bible
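The Modelfile referenced by `ollama create` is not included in this card. A minimal sketch, assuming the downloaded GGUF file sits in the working directory under the filename shown below (adjust the path to match your download):

```
# Point Ollama at the local GGUF weights (path is an assumption)
FROM ./logos-theological-9b.gguf
```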
Model Details
- Base: Google Gemma 2 9B
- Format: GGUF (quantized)
- Size: ~6.2 GB
- RAM required: 16 GB minimum
- Research: The Instrument Trap
Access
This is a gated model. Request access and it will be manually approved.
License
Research use. Contact Rafael Rodriguez for commercial licensing.
Alternatively, load the GGUF directly with llama-cpp-python:

# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LumenSyntax/logos-theological-9b-gguf",
    filename="logos-theological-9b.gguf",
)
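Once the model is loaded, chat requests go through `create_chat_completion`, which accepts OpenAI-style message dicts. A minimal sketch; the question below is an illustrative assumption, not a published input example for this model:

```python
# OpenAI-style chat messages accepted by llama-cpp-python.
messages = [
    {
        "role": "user",
        "content": "What structural pattern does Philippians 2:5-11 exhibit?",
    },
]

# Uncomment after loading the model with Llama.from_pretrained (see above);
# running it requires the ~6.2 GB weights and 16 GB of RAM:
# response = llm.create_chat_completion(messages=messages)
# print(response["choices"][0]["message"]["content"])
```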