# MicroGuard — SmolLM-135M

A LoRA-adapted faithfulness classifier for RAG systems. Detects whether a generated answer is faithful to the retrieved context.

## Performance

| Metric | Value |
|---|---|
| Balanced Accuracy | 64.3% |
| F1 Score | 0.661 |
| Cohen's Kappa | 0.329 |
| Inference Latency | 72 ms |

Evaluated on a combined test set of 15,976 examples from RAGBench, RAGTruth, and HaluBench.
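For readers unfamiliar with the reported metrics, the sketch below shows how balanced accuracy and Cohen's kappa are derived from a binary confusion matrix. The counts are illustrative only, not the model's actual results on the test set.

```python
# Illustrative computation of the two headline metrics from a binary
# confusion matrix (tp/fn/tn/fp counts below are made up).

def balanced_accuracy(tp, fn, tn, fp):
    """Mean of per-class recall: sensitivity and specificity."""
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

def cohens_kappa(tp, fn, tn, fp):
    """Agreement beyond chance between predictions and labels."""
    n = tp + fn + tn + fp
    observed = (tp + tn) / n
    # Expected agreement if predictions were independent of labels.
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return (observed - expected) / (1 - expected)

print(balanced_accuracy(60, 40, 70, 30))        # 0.65
print(round(cohens_kappa(60, 40, 70, 30), 3))   # 0.3
```

Balanced accuracy is preferred over plain accuracy here because faithfulness benchmarks are often class-imbalanced, and kappa discounts agreement that would occur by chance.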

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
model = PeftModel.from_pretrained(base, "tarun5986/MicroGuard-SmolLM-135M")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
```

Or use the MicroGuard package:

```python
from microguard import MicroGuard

guard = MicroGuard(
    model="tarun5986/MicroGuard-SmolLM-135M",
    base_model="HuggingFaceTB/SmolLM2-135M-Instruct",
)
result = guard.check(
    context="The Eiffel Tower was built in 1889 by Gustave Eiffel.",
    question="Who built the Eiffel Tower?",
    answer="The Eiffel Tower was built by Gustave Eiffel in 1889.",
)
print(result)  # {'verdict': 'FAITHFUL', 'confidence': 74.2, 'latency_ms': 64.0}
```

## Training

- **Method:** LoRA (r=16, alpha=32, targets: q, k, v, o projections)
- **Data:** 127,932 examples from RAGBench + RAGTruth + HaluBench
- **Evaluation:** Constrained decoding via logit comparison (0% garbage outputs)
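The constrained-decoding idea above can be sketched as follows: rather than letting the model generate free-form text, compare its next-token logits for the two label tokens and emit whichever scores higher, so a non-label ("garbage") output is impossible by construction. The token ids and the softmax-based confidence below are assumptions for illustration, not the package's exact internals.

```python
import math

def classify_from_logits(logits, label_token_ids):
    """Pick the label whose token logit is highest.

    Confidence is a softmax over the label logits only (an assumed
    convention, not necessarily what MicroGuard reports).
    """
    scores = {label: logits[tid] for label, tid in label_token_ids.items()}
    verdict = max(scores, key=scores.get)
    exps = {k: math.exp(v) for k, v in scores.items()}
    confidence = 100.0 * exps[verdict] / sum(exps.values())
    return {"verdict": verdict, "confidence": round(confidence, 1)}

# Toy logits keyed by token id; 11 and 42 stand in for the tokenizer
# ids of the "FAITHFUL" / "UNFAITHFUL" label tokens.
logits = {11: 3.2, 42: 1.1}
print(classify_from_logits(logits, {"FAITHFUL": 11, "UNFAITHFUL": 42}))
```

Because the verdict is chosen from a fixed label set, the 0% garbage-output figure follows directly from the decoding scheme rather than from model behavior.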

## Paper

MicroGuard: Sub-Billion Parameter Faithfulness Classification for Real-Time RAG QA

## Citation

```bibtex
@article{microguard2026,
  title={MicroGuard: Sub-Billion Parameter Faithfulness Classification for Real-Time RAG QA},
  author={Sharma, Tarun},
  journal={IEEE Access},
  year={2026}
}
```