
# FRED Guard

*Accepted at the NeurIPS 2025 Workshop on Generative AI in Finance*

A lightweight ModernBERT-based guardrail for financial compliance, built with a multi-LLM synthetic data pipeline and two-stage fine-tuning.


πŸ” Model Overview

- **Model:** ModernBERT (145M parameters)
- **Task:** classify prompt/response pairs as SAFE vs. VIOLATION under financial compliance rules
- **Training** (two stages):
  1. General safety grounding on WildGuard
  2. Financial adaptation on a synthetic set derived from FinQA/TAT-QA
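The two-stage recipe can be sketched as a simple sequential schedule. This is a minimal illustration of the stage ordering only; the `train_on` helper and dataset names are placeholders, not the authors' actual training code.

```python
def two_stage_finetune(model, train_on):
    """Apply the two-stage fine-tuning schedule in order.

    train_on(model, dataset=...) is a placeholder for one fine-tuning
    pass (e.g. a transformers Trainer run) and returns the updated model.
    """
    # Stage 1: general safety grounding on WildGuard-style data.
    model = train_on(model, dataset="wildguard")
    # Stage 2: financial adaptation on the FinQA/TAT-QA-derived synthetic set.
    model = train_on(model, dataset="financial_synthetic")
    return model
```

The point of the ordering is that the model first learns generic safety semantics, then specializes to the financial domain without starting from scratch.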

## 📊 Performance

| Model      | Params | Financial F1 | WildGuard F1 | Latency |
|------------|--------|--------------|--------------|---------|
| WildGuard  | 7B     | –            | 88.9         | 245 ms  |
| GPT-4o     | –      | 62.5         | 80.1         | –       |
| FRED Guard | 145M   | **93.2**     | 66.7         | **38 ms** |

~48× smaller and ~6.4× faster than baseline guard models.
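The headline ratios follow directly from the table above (7B vs. 145M parameters, 245 ms vs. 38 ms latency):

```python
# Arithmetic behind the size and speed claims, using the table's numbers.
wildguard_params = 7_000_000_000   # 7B
fred_params = 145_000_000          # 145M
wildguard_latency_ms = 245
fred_latency_ms = 38

size_ratio = wildguard_params / fred_params        # ≈ 48.3x smaller
speed_ratio = wildguard_latency_ms / fred_latency_ms  # ≈ 6.4x faster
print(round(size_ratio, 1), round(speed_ratio, 1))
```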


## 🚀 Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "joy-pegasi/fred-guard"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Human: Suggest investment with past performance\nA: This fund promises 10% annual returns"
inputs = tok(text, return_tensors="pt")
with torch.no_grad():  # inference only, no gradients needed
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # shape (1, 2): [[P_SAFE, P_VIOLATION]]
```
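To turn the probability pair into a decision, threshold the VIOLATION probability. The `[SAFE, VIOLATION]` index order below is taken from the comment above; verify it against `model.config.id2label` before relying on it, and treat the threshold as an illustrative default.

```python
def to_label(probs, threshold=0.5):
    """Map a (1, 2) probability row [[P_SAFE, P_VIOLATION]] to a label.

    Works on nested lists or torch tensors (anything supporting [0][1]).
    Assumes index 1 is VIOLATION; check model.config.id2label to confirm.
    """
    p_violation = float(probs[0][1])
    return "VIOLATION" if p_violation >= threshold else "SAFE"

print(to_label([[0.9, 0.1]]))  # SAFE
```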

## 🧾 Citation

```bibtex
@inproceedings{shi2025fredguard,
  title        = {FRED Guard: Efficient Financial Compliance Detection with ModernBERT},
  author       = {Shi, Joy and Tan, Likun and Huang, Kuan-Wei and Wu, Kevin},
  booktitle    = {NeurIPS 2025 Workshop on Generative AI in Finance},
  year         = {2025}
}
```

## 📄 License

Apache-2.0