amihai4by committed
Commit ad190b4 · verified · 1 Parent(s): 3e68545
Files changed (1): README.md (+19 −21)
README.md CHANGED
When used with the provided `Modelfile`, the model outputs **exactly one JSON object**.

### Schema

```json
{
  "verdict": "true | false | uncertain",
  "reason": "string",
  "confidence": 0.0,
  "evidence": ["string"],
  "assumptions": ["string"],
  "next_actions": ["string"]
}
```

### Rules

`confidence` is a heuristic value between 0.0 and 1.0

If information is missing, the verdict must be `uncertain`
No text outside JSON is expected when the wrapper is used

Stop behavior is enforced by the Modelfile
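The schema and rules above can be checked programmatically. A minimal validator sketch in Python — the `validate` helper and the `sample` object are illustrative, not part of the model package:

```python
import json

# Keys every response object must carry, per the schema above.
REQUIRED_KEYS = {"verdict", "reason", "confidence",
                 "evidence", "assumptions", "next_actions"}

def validate(obj: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the object conforms."""
    errors = []
    missing = REQUIRED_KEYS - obj.keys()
    if missing:
        errors.append(f"missing keys: {sorted(missing)}")
    if obj.get("verdict") not in ("true", "false", "uncertain"):
        errors.append("verdict must be 'true', 'false', or 'uncertain'")
    c = obj.get("confidence")
    if not isinstance(c, (int, float)) or not 0.0 <= c <= 1.0:
        errors.append("confidence must be a number between 0.0 and 1.0")
    return errors

# Hypothetical model output used only to exercise the validator.
sample = json.loads("""{
  "verdict": "uncertain",
  "reason": "No telemetry available to confirm GPU health",
  "confidence": 0.4,
  "evidence": [],
  "assumptions": ["DCGM exporter is the only GPU telemetry source"],
  "next_actions": ["Check the exporter DaemonSet"]
}""")
print(validate(sample))  # an empty list means the object is valid
```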

## How to run with Ollama

Create the model locally:

```bash
ollama create logic-reasoner-v2 -f Modelfile
```

Example request:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "logic-reasoner-v2",
  "stream": false,
  "prompt": "Input: DCGM exporter reports 0 GPUs across all nodes. Question: Is the system healthy?"
}'
```
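With `"stream": false`, `/api/generate` returns a single JSON envelope whose `response` field carries the model's output as a string, so the verdict object must be parsed out of it. A sketch in Python — `raw_reply` is a hand-written stand-in for a server reply (a live envelope carries additional fields such as timings and token counts):

```python
import json

# Stand-in for the body returned by /api/generate with "stream": false.
raw_reply = json.dumps({
    "model": "logic-reasoner-v2",
    "response": '{"verdict": "false", "reason": "0 GPUs reported means the '
                'exporter or the GPUs are down", "confidence": 0.8, '
                '"evidence": ["DCGM exporter reports 0 GPUs"], '
                '"assumptions": [], "next_actions": ["Inspect exporter pods"]}',
    "done": True,
})

envelope = json.loads(raw_reply)              # the API envelope
verdict_obj = json.loads(envelope["response"])  # the model's one JSON object
print(verdict_obj["verdict"], verdict_obj["confidence"])  # → false 0.8
```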

## Quantization

Format: GGUF

Quantization: Q4_K_M

Optimized for low-latency operational inference

## Provenance

This model was built and packaged as part of the LLM FUN project on NVIDIA DGX B200 infrastructure using:

- Kubernetes (RKE2)
- Ollama
- OpenWebUI

The Modelfile is a core part of the model behavior and must be used to reproduce the intended output guarantees.