montehoover committed
Commit 4f6042f · verified · 1 Parent(s): 6e70476

Update README.md

Files changed (1): README.md +0 -4
README.md CHANGED
@@ -6,7 +6,3 @@ colorTo: red
 sdk: static
 pinned: false
 ---
-
-https://huggingface.co/collections/tomg-group-umd/dynaguard
-
-The DynaGuard model series is a family of guardian models designed to evaluate text against user-defined, natural-language policies. They provide a flexible and powerful solution for moderating chatbot outputs beyond static, predefined harm categories. Developed by researchers at the University of Maryland and Capital One, the series includes three open-weight models of varying sizes (1.7B, 4B, and 8B), allowing developers to choose the best balance of performance and efficiency for their needs. Unlike traditional guardian models that screen for a fixed set of harms (e.g., violence or self-harm), DynaGuard can enforce bespoke, application-specific rules. This includes scenarios like preventing a customer service bot from mistakenly issuing refunds or ensuring a medical bot avoids giving unauthorized advice. The DynaGuard series achieves state-of-the-art performance across a wide range of safety and compliance benchmarks, with the flagship DynaGuard-8B model outperforming other guardian models and even strong generalist models like GPT-4o-mini.
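The removed paragraph describes evaluating chatbot transcripts against user-defined, natural-language policies. A minimal sketch of how such a check could be framed as a chat prompt for a guardian model follows; the prompt wording and the PASS/FAIL convention are assumptions for illustration, not the published DynaGuard template.

```python
# Hypothetical sketch: packing a user-defined policy and a chatbot
# transcript into a chat-style message list that a guardian model
# (e.g. a DynaGuard checkpoint) could score. The wording below is an
# assumption, NOT the official DynaGuard prompt format.

def build_guard_messages(policy: str, dialogue: str) -> list[dict]:
    """Return a system/user message pair asking a guardian model to
    judge whether `dialogue` violates `policy`."""
    system = (
        "You are a guardian model. Decide whether the dialogue below "
        "violates the given policy. Answer PASS or FAIL."
    )
    user = f"Policy:\n{policy}\n\nDialogue:\n{dialogue}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

if __name__ == "__main__":
    messages = build_guard_messages(
        policy="Never promise refunds; route refund requests to a human agent.",
        dialogue="User: I want my money back.\nBot: Sure, refund issued!",
    )
    # This list can then be fed to a chat model, for example via
    # transformers' tokenizer.apply_chat_template(messages, ...).
    for m in messages:
        print(m["role"])
```

The message list is kept model-agnostic so the same policy check can be pointed at any of the 1.7B, 4B, or 8B checkpoints.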