Tags: text-generation, GGUF, English, epistemological-safety, ai-safety, truth-verification, instrument-trap, logos, quantized, conversational
Logos 10v2 — Gemma 3 1B Q4_K_M (Edge/Demo)
Quantized version of the Logos 10v2 epistemological classifier for edge deployment and demonstration purposes.
IMPORTANT: Edge-Only Model
This quantized model has known quality degradation. In testing, Q4_K_M falsely approved dangerous claims that the F16 version correctly rejected.
Do NOT use this model as a primary verifier. For production use, deploy the F16 version.
Benchmark Results (F16 version)
| Metric | Score |
|---|---|
| Epistemological safety | 97.7% |
| Hallucination | 0.00% |
| Dangerous failures | 1.9% |
Note: These are F16 results. Q4_K_M quantization degrades quality — expect lower accuracy, especially on borderline cases.
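One way to quantify the degradation described above is to run the same claim set through both the F16 and Q4_K_M checkpoints and measure how often their verdicts agree. A minimal sketch, where the claim verdicts and the "approve"/"reject" labels are hypothetical, not the model's documented output format:

```python
def agreement_rate(verdicts_f16, verdicts_q4):
    """Fraction of claims on which both checkpoints give the same verdict."""
    if len(verdicts_f16) != len(verdicts_q4):
        raise ValueError("verdict lists must be the same length")
    matches = sum(a == b for a, b in zip(verdicts_f16, verdicts_q4))
    return matches / len(verdicts_f16)

# Illustrative only: the quantized model flips one of four borderline verdicts.
f16 = ["reject", "reject", "approve", "reject"]
q4 = ["reject", "approve", "approve", "reject"]
print(agreement_rate(f16, q4))  # 0.75
```

A disagreement where Q4_K_M approves a claim that F16 rejects is exactly the failure mode flagged in the warning above, so flips in that direction deserve separate counting in any real evaluation.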
Access
This model requires approved access. Request access using the form above and describe your intended use case.
Connection to Research
This model is part of the evidence for "The Instrument Trap" (DOI: 10.5281/zenodo.18716474).
License
Gemma Terms of Use (inherited from base model google/gemma-3-1b-it).
Usage

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LumenSyntax/logos10v2-gemma3-1b-Q4_K_M",
    filename="logos10v2-gemma3-1b-Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
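When using the model as a claim checker, the free-form chat response still has to be mapped to a verdict. A minimal, hypothetical parser sketch follows; the keyword list is an assumption, not the model's documented label vocabulary:

```python
def parse_verdict(text: str) -> str:
    """Map free-form model output to a coarse verdict.

    The keywords below are assumptions for illustration, not the
    documented output labels of Logos 10v2.
    """
    lowered = text.lower()
    # Check rejection cues first so "unsafe" is not misread as "safe".
    if any(cue in lowered for cue in ("reject", "false", "unsafe")):
        return "reject"
    if any(cue in lowered for cue in ("approve", "true", "safe")):
        return "approve"
    return "uncertain"

print(parse_verdict("The claim is FALSE."))        # reject
print(parse_verdict("Approved: well supported."))  # approve
print(parse_verdict("Insufficient evidence."))     # uncertain
```

Treating anything unparseable as "uncertain" rather than "approve" is the safer default here, given the quantized model's tendency to falsely approve dangerous claims.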