# FRED Guard
**Accepted at NeurIPS 2025 Workshop on Generative AI in Finance**
A lightweight ModernBERT-based guardrail for financial compliance, built with a multi-LLM synthetic data pipeline and two-stage fine-tuning.
---
## ๐Ÿ” Model Overview
- **Model**: ModernBERT (145M params)
- **Task**: Classify *SAFE* vs *VIOLATION* under financial compliance rules
- **Training**: two-stage fine-tuning
  1. General safety grounding on WildGuard
  2. Financial adaptation on a FinQA/TAT-QA–derived synthetic set
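
The two stages amount to sequential fine-tuning of the same weights: first on general safety data, then on the financial set. A minimal sketch of that ordering, using a toy linear classifier and synthetic tensors purely for illustration (all dimensions, data, and hyperparameters here are hypothetical, not the actual pipeline):

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy stand-in for the classifier head; the point is the *sequence* of
# stages, not the architecture.
model = nn.Linear(16, 2)
loss_fn = nn.CrossEntropyLoss()

def finetune(model, xs, ys, lr=0.1, epochs=20):
    """Run a short training loop and return the final loss."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(xs), ys)
        loss.backward()
        opt.step()
    return loss.item()

# Stage 1: general safety grounding (stand-in for WildGuard data)
x1, y1 = torch.randn(64, 16), torch.randint(0, 2, (64,))
stage1_loss = finetune(model, x1, y1)

# Stage 2: financial adaptation, continuing from the *same* weights
x2, y2 = torch.randn(64, 16), torch.randint(0, 2, (64,))
stage2_loss = finetune(model, x2, y2)
```

Stage 2 deliberately starts from the stage-1 checkpoint rather than a fresh initialization, so the financial adaptation builds on the general safety grounding.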
---
## ๐Ÿ“Š Performance
| Model | Params | Financial F1 | WildGuard F1 | Latency |
|---------------|--------|--------------|--------------|---------|
| WildGuard | 7B | โ€“ | 88.9 | 245 ms |
| GPT-4o | โ€“ | 62.5 | 80.1 | โ€“ |
| **FRED Guard**| 145M | **93.2** | 66.7 | **38 ms** |
48ร— smaller and ~6.4ร— faster than baseline guard models.
---
## ๐Ÿš€ Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "joy-pegasi/fred-guard"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Human: Suggest investment with past performance\nA: This fund promises 10% annual returns"
inputs = tok(text, return_tensors="pt")
with torch.no_grad():  # inference only, no gradients needed
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # [P_SAFE, P_VIOLATION]
```
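
In a guardrail deployment you typically want a hard SAFE/VIOLATION decision rather than raw probabilities. A small helper for that, with a tunable blocking threshold; the label order `[SAFE, VIOLATION]` is assumed from the snippet above, so verify it against `model.config.id2label` before relying on it:

```python
ID2LABEL = {0: "SAFE", 1: "VIOLATION"}  # assumed index order; check model.config.id2label

def decide(probs, threshold: float = 0.5) -> str:
    """Map a [P_SAFE, P_VIOLATION] pair to a label.

    `threshold` is the violation probability at which content is
    blocked; lower it for stricter screening.
    """
    p_violation = float(probs[1])
    return ID2LABEL[1] if p_violation >= threshold else ID2LABEL[0]
```

With the tensor from the snippet above, call it as `decide(probs[0].tolist())`.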
---
## ๐Ÿงพ Citation
```bibtex
@inproceedings{shi2025fredguard,
title = {FRED Guard: Efficient Financial Compliance Detection with ModernBERT},
author = {Shi, Joy and Tan, Likun and Huang, Kuan-Wei and Wu, Kevin},
booktitle = {NeurIPS 2025 Workshop on Generative AI in Finance},
year = {2025}
}
```
---
## ๐Ÿ“„ License
Apache-2.0