## Overview
Nova-LFM 1.2B is a state-of-the-art "Thinking" model engineered for efficiency. It brings reasoning capabilities typically reserved for 7B+ models down to the 1.2B parameter class, making it possible to run complex logic chains on edge devices such as Raspberry Pis, older smartphones, and laptops.

Built on the Liquid LFM-2.5 architecture, this model uses a novel Hybrid Self-Correction fine-tuning method: it pauses to "think" (denoted by `<think>` tags), verifies its own logic, and corrects errors before generating a final answer.
## Key Capabilities
- System 2 Thinking: Breaks down multi-step math and logic problems instead of guessing.
- Edge-Native: Runs on <3GB VRAM (FP16) or <1GB (Quantized).
- Balanced Profile: Engineered to excel at math (GSM8K) without sacrificing general knowledge (MMLU).
## Benchmark Performance
Nova-LFM outperforms the industry standard (Llama 3.2 1B) and larger models (Gemma 2 2B) in mathematical reasoning, while maintaining a higher general knowledge score than specialized "math-only" models.
| Model | Parameters | Math Reasoning (GSM8K) | Knowledge (MMLU) | Verdict |
|---|---|---|---|---|
| Nova-LFM (Ours) | 1.2B | **53.5%** | 50.1% | Best Balance |
| Llama 3.2 Instruct | 1.0B | 44.4% | 42.9% | Baseline |
| Gemma 2 | 2.6B | 46.4% | 51.7% | Inefficient |
| DeepSeek R1 Distill | 1.5B | 69.9% | 39.2% | Knowledge Collapse |
| SmolLM2 | 1.7B | 31.1% | 48.9% | Weak Reasoning |
Note: Scores represent 5-shot evaluations using EleutherAI LM Harness. DeepSeek R1 shows significant degradation in general knowledge (MMLU < 40%) despite high math scores. Nova-LFM maintains >50% MMLU for general-purpose usability.
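The reported numbers should be reproducible with the EleutherAI harness. A sketch of the invocation (task names and flags follow lm-eval ≥ 0.4; adjust to your installed version and hardware):

```shell
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=NovachronoAI/Nova-LFM-1.2B-Thinking,dtype=float16 \
  --tasks gsm8k,mmlu \
  --num_fewshot 5 \
  --batch_size auto
```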
## Quick Start
### Option 1: Python (Transformers)

Requires `transformers >= 4.46.0`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "NovachronoAI/Nova-LFM-1.2B-Thinking"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Define the prompt (Alpaca-style instruction format)
prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
If I have 3 apples and eat one, then buy two more, how many do I have?
### Response:
"""

# Move inputs to wherever device_map placed the model (not hard-coded "cuda")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate with reasoning (temperature 0.6 recommended)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.6,
    do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
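If you only want the final answer, you can strip the `<think>` reasoning trace (and any leaked `[Reasoning]` training tags, see Limitations) from the decoded output. A minimal sketch; the helper name is illustrative, not part of the model's API:

```python
import re

def extract_final_answer(text: str) -> str:
    """Return only the final answer, with reasoning traces removed."""
    # Drop the <think>...</think> block (non-greedy, spans newlines).
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # Drop any leaked raw training tags such as [Reasoning].
    text = re.sub(r"\[/?Reasoning\]", "", text)
    return text.strip()

raw = "<think>3 - 1 = 2, then 2 + 2 = 4.</think>\nYou have 4 apples."
print(extract_final_answer(raw))  # → You have 4 apples.
```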
### Option 2: Ollama (Local)

For the fastest inference on CPU/Mac/edge devices:

```shell
ollama run hf.co/NovachronoAI/Nova-LFM-1.2B-Thinking-GGUF
```
## Methodology: Hybrid Self-Correction
Standard small models often hallucinate because they rush to predict the next token. Nova-LFM was trained to pause and verify.
- Dataset Construction: We curated a hybrid dataset combining:
  - 11k Standard CoT: High-quality linear reasoning chains (Step A → Step B).
  - 4k Self-Correction Traces: Synthetic data where the model explicitly doubts itself (e.g., "Wait, that calculation seems off..."), catches the error, and corrects it.
- Fine-Tuning: Trained using Unsloth with LoRA adapters targeting the unique Liquid Neural Network layers (`in_proj`, `out_proj`, `w1`-`w3`). This dual approach teaches the model that backtracking is allowed, significantly reducing logic errors in multi-step tasks.
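A LoRA adapter leaves the frozen weight matrix W untouched and learns a low-rank update, merged at the end as W' = W + (α/r)·B·A. A minimal pure-Python sketch of that merge on a toy 2×2 matrix (shapes and values are illustrative only, not the model's actual dimensions):

```python
def lora_merge(W, A, B, alpha, r):
    """Merge a LoRA update into a frozen weight matrix:
    W' = W + (alpha / r) * B @ A, with A of shape (r, d_in)
    and B of shape (d_out, r)."""
    scale = alpha / r
    d_out, d_in = len(W), len(W[0])
    merged = [row[:] for row in W]  # copy; the original W stays frozen
    for i in range(d_out):
        for j in range(d_in):
            delta = sum(B[i][k] * A[k][j] for k in range(r))
            merged[i][j] += scale * delta
    return merged

# Toy 2x2 frozen weight with a rank-1 adapter (alpha=2, r=1).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]    # (r, d_in)
B = [[1.0], [0.0]]  # (d_out, r)
print(lora_merge(W, A, B, alpha=2, r=1))  # → [[3.0, 2.0], [0.0, 1.0]]
```

Because the update has rank r, the adapter trains only r·(d_in + d_out) parameters per targeted layer, which is what makes fine-tuning a 1.2B model feasible on modest hardware.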
## ⚠️ Limitations
- Hallucination: As a 1.2B model, it does not possess the vast world knowledge of a 70B model. It may hallucinate obscure facts or dates.
- Token Artifacts: Rarely, raw training tags like `[Reasoning]` may appear in the output.
- Context: Optimized for short-to-medium reasoning tasks (up to 8k context).
## Citation
If you use this model in your research or application, please cite:
```bibtex
@misc{nova-lfm-2026,
  title = {Nova-LFM: Scalable System 2 Reasoning at the 1B Scale},
  author = {Novachrono},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/NovachronoAI/Nova-LFM-1.2B-Thinking}
}
```
