# Logic Reasoner v2
Logic Reasoner v2 is a verdict-style reasoning and verification model packaged for Ollama and distributed as GGUF. It is designed for operational, infrastructure, and automation workflows that require deterministic, machine-readable output, not conversational text.
## Why this model exists
Most large language models are optimized for human conversation, not for systems that must act on model output.
In operational environments this causes recurring issues:
- Non-deterministic phrasing that breaks parsers
- Excess verbosity that hides the actual decision
- Missing information that is not explicitly surfaced
- Explanations instead of decisions
Logic Reasoner v2 exists to address this gap.
It enforces a strict reasoning interface on top of a general-purpose language model by:
- Requiring a clear verdict: `true`, `false`, or `uncertain`
- Separating reason, evidence, assumptions, and next actions
- Guaranteeing exactly one JSON object
- Explicitly stopping generation after the structured response
This makes the model suitable as a decision and verification component inside automated systems, not just as a chat assistant.
## What this model is for
Use this model when you need:
- A clear verdict instead of a narrative
- Structured reasoning that can be logged or audited
- Predictable output suitable for automation
- A bridge between LLM reasoning and operational workflows
Typical use cases:
- Kubernetes and GPU stack troubleshooting (GPU Operator, DCGM, drivers)
- Verification of technical or operational claims
- Incident triage and post-mortem workflows
- JSON-driven automation pipelines
## What this model is not for
This model is not intended for:
- Academic benchmark leaderboards (e.g. MATH500, GSM)
- Strict symbolic math grading
- Creative or open-ended generation
- Long conversational interactions
## Output contract
When used with the provided Modelfile, the model outputs exactly one JSON object and then stops.
### Schema

```json
{
  "verdict": "true | false | uncertain",
  "reason": "string",
  "confidence": 0.0,
  "evidence": ["string"],
  "assumptions": ["string"],
  "next_actions": ["string"]
}
```
### Rules

- `confidence` is a heuristic value between 0.0 and 1.0
- If information is missing, the verdict must be `uncertain`
- No text outside the JSON object is expected when the wrapper is used
- Stop behavior is enforced by the Modelfile
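Downstream systems should check the contract before acting on a response. The sketch below is not shipped with the model; it is a minimal, illustrative validator whose field names and rules follow the schema above:

```python
import json

# Verdicts allowed by the output contract.
ALLOWED_VERDICTS = {"true", "false", "uncertain"}


def validate_verdict(raw: str) -> dict:
    """Parse a model response and enforce the output contract."""
    obj = json.loads(raw)  # raises ValueError if the text is not valid JSON
    if obj.get("verdict") not in ALLOWED_VERDICTS:
        raise ValueError(f"invalid verdict: {obj.get('verdict')!r}")
    confidence = obj.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence!r}")
    for key in ("evidence", "assumptions", "next_actions"):
        if not isinstance(obj.get(key), list):
            raise ValueError(f"{key} must be a list")
    return obj
```

Rejecting malformed responses at this boundary keeps parser failures out of the automation that consumes the verdict.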
## How to run with Ollama

Create the model locally:

```shell
ollama create logic-reasoner-v2 -f Modelfile
```
Example request:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "logic-reasoner-v2",
  "stream": false,
  "prompt": "Input: DCGM exporter reports 0 GPUs across all nodes. Question: Is the system healthy?"
}'
```
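The same request can be issued from code. The sketch below assumes the standard Ollama `/api/generate` response envelope, where, with `stream` set to `false`, the full generated text sits in the `"response"` field; the endpoint and model name match the curl example above:

```python
import json
import urllib.request


def extract_verdict(envelope: dict) -> dict:
    """Pull the verdict object out of an Ollama /api/generate envelope.

    With stream=false, Ollama returns the generated text in the "response"
    field; the wrapper guarantees it is exactly one JSON object.
    """
    return json.loads(envelope["response"])


def ask(prompt: str, host: str = "http://localhost:11434") -> dict:
    """Send a prompt to logic-reasoner-v2 and return the parsed verdict."""
    payload = json.dumps(
        {"model": "logic-reasoner-v2", "stream": False, "prompt": prompt}
    ).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_verdict(json.load(resp))
```

Because the wrapper emits exactly one JSON object, `extract_verdict` needs no regex scraping or text cleanup.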
## Quantization

- Format: GGUF
- Quantization: Q4_K_M
- Optimized for low-latency operational inference
## Provenance
This model was built and packaged as part of the LLM FUN project on NVIDIA DGX B200 infrastructure using:
- Kubernetes (RKE2)
- Ollama
- OpenWebUI
The Modelfile is a core part of the model behavior and must be used to reproduce the intended output guarantees.
## Limitations

- Confidence values are heuristic, not statistically calibrated
- The base model may default to explanatory text if the wrapper is not used
- Determinism applies to structure, not factual correctness
## License

MIT