# FRED Guard

**Accepted at NeurIPS 2025 Workshop on Generative AI in Finance**

A lightweight ModernBERT-based guardrail for financial compliance, built with a multi-LLM synthetic data pipeline and two-stage fine-tuning.

---
## Model Overview

- **Model**: ModernBERT (145M params)
- **Task**: Classify *SAFE* vs. *VIOLATION* under financial compliance rules
- **Training**:
  1. General safety grounding (WildGuard)
  2. Financial adaptation (FinQA/TAT-QA-derived synthetic set)
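The two-stage schedule can be sketched as a simple sequential loop in which each stage resumes from the previous stage's checkpoint. The stage names, dataset identifiers, base-checkpoint id, and `train_stage` callback below are illustrative assumptions, not the released training code:

```python
# Illustrative sketch of sequential two-stage fine-tuning.
# Dataset identifiers and the base checkpoint id are assumptions.
STAGES = [
    {"name": "general_safety", "dataset": "wildguard"},
    {"name": "financial_adaptation", "dataset": "finqa_tatqa_synthetic"},
]

def run_schedule(train_stage, base_checkpoint="modernbert-base"):
    """Run stages in order; each stage starts from the prior stage's output."""
    checkpoint = base_checkpoint
    for stage in STAGES:
        checkpoint = train_stage(checkpoint, stage)  # returns new checkpoint id
    return checkpoint

# Example with a stand-in trainer that just tags the checkpoint name:
final = run_schedule(lambda ckpt, stage: f"{ckpt}+{stage['name']}")
print(final)  # modernbert-base+general_safety+financial_adaptation
```

Running general safety first and financial adaptation second matters: the second stage specializes the model, so its data should match the deployment domain.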
---

## Performance

| Model          | Params | Financial F1 | WildGuard F1 | Latency   |
|----------------|--------|--------------|--------------|-----------|
| WildGuard      | 7B     | –            | 88.9         | 245 ms    |
| GPT-4o         | –      | 62.5         | 80.1         | –         |
| **FRED Guard** | 145M   | **93.2**     | 66.7         | **38 ms** |

FRED Guard is 48× smaller and ~6.4× faster than the 7B WildGuard baseline.

---
## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "joy-pegasi/fred-guard"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Human: Suggest investment with past performance\nA: This fund promises 10% annual returns"
inputs = tok(text, return_tensors="pt")
with torch.no_grad():  # inference only; no gradients needed
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # [P_SAFE, P_VIOLATION]
```
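To turn the probability vector into a decision, threshold the VIOLATION probability. The label ordering below (index 0 = SAFE, index 1 = VIOLATION) matches the comment in the snippet above; the default threshold of 0.5 is an assumption you should tune for your own precision/recall trade-off:

```python
LABELS = ["SAFE", "VIOLATION"]

def classify(probs, threshold=0.5):
    """probs: [P_SAFE, P_VIOLATION] for one input.

    Flags VIOLATION when its probability is at or above the threshold;
    lowering the threshold makes the guard stricter.
    """
    return "VIOLATION" if probs[1] >= threshold else "SAFE"

print(classify([0.12, 0.88]))       # VIOLATION
print(classify([0.97, 0.03]))       # SAFE
print(classify([0.55, 0.45], 0.3))  # VIOLATION (stricter threshold)
```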
---

## Citation

```bibtex
@inproceedings{shi2025fredguard,
  title     = {FRED Guard: Efficient Financial Compliance Detection with ModernBERT},
  author    = {Shi, Joy and Tan, Likun and Huang, Kuan-Wei and Wu, Kevin},
  booktitle = {NeurIPS 2025 Workshop on Generative AI in Finance},
  year      = {2025}
}
```
---

## License

Apache-2.0