## 🛡 1. Model Overview
***ReasoningShield*** is the first specialized safety moderation model tailored to identify hidden risks in the intermediate reasoning steps of Large Reasoning Models (LRMs) before final answers are generated. It excels at detecting harmful content that may be concealed within seemingly harmless reasoning traces, ensuring robust safety for LRMs.
- **Primary Use Case**: Detecting and mitigating hidden risks in the reasoning traces of Large Reasoning Models (LRMs)