# FRED Guard
Accepted at NeurIPS 2025 Workshop on Generative AI in Finance
A lightweight ModernBERT-based guardrail for financial compliance, built with a multi-LLM synthetic data pipeline and two-stage fine-tuning.
## Model Overview
- Model: ModernBERT (145M parameters)
- Task: classify inputs as SAFE vs. VIOLATION under financial compliance rules
- Training: two-stage fine-tuning
  - Stage 1: general safety grounding (WildGuard)
  - Stage 2: financial adaptation (synthetic set derived from FinQA/TAT-QA)
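The two-stage schedule can be sketched as a sequential pipeline in which each stage's output checkpoint seeds the next. This is an illustrative sketch only: the dataset filenames, checkpoint name, and `fine_tune` hook below are placeholder assumptions, not the authors' actual training code.

```python
# Illustrative sketch of sequential two-stage fine-tuning.
# Dataset paths and the initial checkpoint name are placeholders.
LABEL2ID = {"SAFE": 0, "VIOLATION": 1}  # matches the [P_SAFE, P_VIOLATION] output order

STAGES = [
    {"name": "general safety grounding", "data": "wildguard_train.jsonl"},
    {"name": "financial adaptation", "data": "finqa_tatqa_synthetic.jsonl"},
]

def run_schedule(initial_checkpoint, stages, fine_tune):
    """Run each stage in order, feeding each stage's output checkpoint
    into the next (sequential fine-tuning, not joint training)."""
    model = initial_checkpoint
    for stage in stages:
        model = fine_tune(model, stage["data"])
    return model

# Toy fine_tune hook that just records the checkpoint lineage.
final = run_schedule("modernbert-base", STAGES,
                     lambda model, data: f"{model}+{data}")
print(final)  # modernbert-base+wildguard_train.jsonl+finqa_tatqa_synthetic.jsonl
```

In a real run, `fine_tune` would wrap an actual training loop; the point of the sketch is only the ordering: safety grounding first, then domain adaptation on top of it.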
## Performance
| Model | Params | Financial F1 | WildGuard F1 | Latency |
|---|---|---|---|---|
| WildGuard | 7B | N/A | 88.9 | 245 ms |
| GPT-4o | N/A | 62.5 | 80.1 | N/A |
| FRED Guard | 145M | 93.2 | 66.7 | 38 ms |
48× smaller and ~6.4× faster (38 ms vs. 245 ms) than the 7B WildGuard baseline.
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "joy-pegasi/fred-guard"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Human: Suggest investment with past performance\nA: This fund promises 10% annual returns"
inputs = tok(text, return_tensors="pt")

with torch.no_grad():  # inference only, no gradients needed
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # [P_SAFE, P_VIOLATION]
```
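To turn the two class probabilities into a decision, one option is a simple threshold on P_VIOLATION. A minimal plain-Python sketch, assuming index 0 = SAFE and index 1 = VIOLATION as the comment above suggests (check the checkpoint's `config.id2label` to confirm the ordering):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, threshold=0.5):
    """Map two-class logits to a label.

    Assumes index 0 = SAFE and index 1 = VIOLATION, matching the
    [P_SAFE, P_VIOLATION] ordering; this is an assumption to verify
    against the model config.
    """
    p_safe, p_violation = softmax(logits)
    return "VIOLATION" if p_violation >= threshold else "SAFE"

print(classify([2.0, -1.0]))   # SAFE
print(classify([-0.5, 3.0]))   # VIOLATION
```

Lowering the threshold trades precision for recall, which may be preferable in compliance settings where missed violations are costlier than false flags.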
## Citation
```bibtex
@inproceedings{shi2025fredguard,
  title     = {FRED Guard: Efficient Financial Compliance Detection with ModernBERT},
  author    = {Shi, Joy and Tan, Likun and Huang, Kuan-Wei and Wu, Kevin},
  booktitle = {NeurIPS 2025 Workshop on Generative AI in Finance},
  year      = {2025}
}
```
## License
Apache-2.0