# Autologic SLM - ONNX fp32
Fine-tuned Qwen2.5-0.5B model that translates natural-language text into a structured logic AST (JSON).
## Model Details
- Base model: Qwen/Qwen2.5-0.5B
- Training: LoRA (r=16, alpha=32) on q/v/k/o_proj, bf16, 15 epochs
- Dataset: 395 verified samples, 10 logic patterns (modus ponens, syllogisms, etc.)
- Format: ONNX fp32 with KV cache (for Transformers.js)
- Size: ~2.4 GB
- RAM usage: ~2.2 GB at inference
## Output Format
The model outputs JSON conforming to the Autologic AST:

```json
{
  "axioms": [
    {
      "name": "a1",
      "formulaJSON": {
        "type": "Connective",
        "operator": "IMPLIES",
        "left": {"type": "Atom", "id": "Llueve", "text": "llueve"},
        "right": {"type": "Atom", "id": "SueloMojado", "text": "suelo mojado"}
      }
    }
  ],
  "conclusions": [
    {"formulaJSON": {"type": "Atom", "id": "SueloMojado", "text": "suelo mojado"}}
  ]
}
```
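For illustration, formulas in this AST nest recursively and can be walked with a small visitor. The helper below is a hypothetical sketch (not part of the model or the Autologic library); the operator set beyond `IMPLIES` is an assumption:

```javascript
// Hypothetical helper: render an Autologic formulaJSON node as readable text.
// Handles the node shapes shown above (Atom, Connective); the AND/OR
// operators are assumed, not confirmed by this model card.
function renderFormula(node) {
  if (node.type === "Atom") return node.text;
  if (node.type === "Connective") {
    const ops = { IMPLIES: "->", AND: "&", OR: "|" }; // assumed operator set
    const op = ops[node.operator] ?? node.operator;
    return `(${renderFormula(node.left)} ${op} ${renderFormula(node.right)})`;
  }
  throw new Error(`Unknown node type: ${node.type}`);
}

// The modus ponens axiom from the example output above:
const axiom = {
  type: "Connective",
  operator: "IMPLIES",
  left: { type: "Atom", id: "Llueve", text: "llueve" },
  right: { type: "Atom", id: "SueloMojado", text: "suelo mojado" },
};
console.log(renderFormula(axiom)); // "(llueve -> suelo mojado)"
```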
## Critical Notes
- Must use fp32: int8/uint8 quantization destroys instruction following on 0.5B models
- Must use the exact training prompt format: ChatML with the specific system prompt
- Exported with `task=text-generation-with-past` (includes the KV cache)
- Tokenizer merges are stored as strings (Transformers.js compatible)
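Since a 0.5B model can occasionally wrap its answer in extra text or emit malformed JSON, it is worth validating the generated string before use. A minimal sketch, assuming the top-level AST shape shown above (this validator is illustrative, not part of the model):

```javascript
// Hypothetical validator: extract the first JSON object from raw model
// output and check the top-level Autologic AST shape.
function parseAst(rawOutput) {
  const start = rawOutput.indexOf("{");
  const end = rawOutput.lastIndexOf("}");
  if (start === -1 || end <= start) throw new Error("No JSON object found");
  const ast = JSON.parse(rawOutput.slice(start, end + 1));
  if (!Array.isArray(ast.axioms) || !Array.isArray(ast.conclusions)) {
    throw new Error("Missing axioms/conclusions arrays");
  }
  return ast;
}

// Tolerates surrounding chatter around the JSON object:
const raw = 'Here is the result: {"axioms": [], "conclusions": []}';
console.log(parseAst(raw)); // { axioms: [], conclusions: [] }
```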
## Part of
Autologic - automatic formalization of natural language into formal logic.