---
language:
  - en
license: apache-2.0
tags:
  - rag
  - faithfulness
  - hallucination-detection
  - lora
  - microguard
datasets:
  - galileo-ai/ragbench
  - wandb/RAGTruth-processed
  - PatronusAI/HaluBench
metrics:
  - balanced_accuracy
  - f1
pipeline_tag: text-classification
base_model: google/gemma-3-1b-it
---

# MicroGuard — Gemma-1B

A LoRA-adapted faithfulness classifier for RAG systems. Detects whether a generated answer is faithful to the retrieved context.

## Performance

| Metric | Value |
|---|---|
| Balanced Accuracy | 69.4% |
| F1 Score | 0.721 |
| Cohen's Kappa | 0.447 |
| Inference Latency | 88 ms |

Evaluated on a combined test set of 15,976 examples from RAGBench, RAGTruth, and HaluBench.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")
model = PeftModel.from_pretrained(base, "tarun5986/MicroGuard-Gemma-1B")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")
```

Or use the MicroGuard package:

```python
from microguard import MicroGuard

guard = MicroGuard(model="tarun5986/MicroGuard-Gemma-1B", base_model="google/gemma-3-1b-it")
result = guard.check(
    context="The Eiffel Tower was built in 1889 by Gustave Eiffel.",
    question="Who built the Eiffel Tower?",
    answer="The Eiffel Tower was built by Gustave Eiffel in 1889.",
)
print(result)  # {'verdict': 'FAITHFUL', 'confidence': 74.2, 'latency_ms': 64.0}
```

## Training

- **Method:** LoRA (r=16, alpha=32; targets: q, k, v, and o projections)
- **Data:** 127,932 examples from RAGBench, RAGTruth, and HaluBench
- **Evaluation:** constrained decoding via logit comparison (0% garbage outputs)
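The constrained-decoding step can be illustrated with a minimal sketch: rather than sampling free-form text, read the logits of the two label tokens and pick the larger one, so the model can only ever emit a valid verdict. The function below is a hypothetical illustration, not MicroGuard's actual implementation; the label names and the percent-scale confidence mirror the `guard.check` output format shown above.

```python
import math

def verdict_from_logits(logit_faithful: float, logit_unfaithful: float) -> dict:
    """Compare the logits of the two label tokens and return a verdict
    with a softmax-normalised confidence on a 0-100 scale.

    Because only these two logits are ever compared, the output is
    guaranteed to be one of the two labels (no free-form generation).
    """
    # Numerically stable two-way softmax.
    m = max(logit_faithful, logit_unfaithful)
    p_faithful = math.exp(logit_faithful - m) / (
        math.exp(logit_faithful - m) + math.exp(logit_unfaithful - m)
    )
    if p_faithful >= 0.5:
        return {"verdict": "FAITHFUL", "confidence": round(100 * p_faithful, 1)}
    return {"verdict": "UNFAITHFUL", "confidence": round(100 * (1 - p_faithful), 1)}
```

For example, `verdict_from_logits(2.0, 1.0)` returns `{'verdict': 'FAITHFUL', 'confidence': 73.1}`: a one-nat logit gap maps to roughly 73% confidence.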

## Paper

*MicroGuard: Sub-Billion Parameter Faithfulness Classification for Real-Time RAG QA*

## Citation

```bibtex
@article{microguard2026,
  title={MicroGuard: Sub-Billion Parameter Faithfulness Classification for Real-Time RAG QA},
  author={Sharma, Tarun},
  journal={IEEE Access},
  year={2026}
}
```