Add model metadata and link to DeepSight paper
This PR improves the model card by adding relevant metadata (`library_name` and `pipeline_tag`) and linking the model to the broader **DeepSight** safety toolkit paper and resources. ProGuard is the specialized multimodal safeguard model integrated into the DeepSight framework. These changes will help users discover the model and understand its role within the safety evaluation ecosystem.
README.md (CHANGED):

````markdown
---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- yushaohan/ProGuard-data
language:
- en
tags:
- vlm
- safety
- guard
library_name: transformers
pipeline_tag: image-text-to-text
---

# ProGuard-7B

ProGuard is a proactive multimodal safeguard model introduced as part of the **DeepSight** toolkit. It is designed to identify and reason about unknown risks across both text and visual modalities, moving beyond rigid predefined classification systems.

- **ProGuard Paper:** [ProGuard: Towards Proactive Multimodal Safeguard](https://arxiv.org/abs/2512.23573)
- **DeepSight Paper:** [DeepSight: An All-in-One LM Safety Toolkit](https://huggingface.co/papers/2602.12092)
- **Project Page:** [DeepSight Homepage](https://ai45.shlab.org.cn/safety-entry)
- **GitHub Repository:** [DeepSafe (Safety Evaluation ToolKit)](https://github.com/AI45Lab/DeepSafe)

This model is the official open-source implementation of **ProGuard**. For deployment instructions, please refer to **[this link](https://github.com/yushaohan/ProGuard/tree/master/deploy)**.

## Citation

If you find this model helpful, please cite our research:

```bibtex
@article{yu2025proguard,
  title={ProGuard: Towards Proactive Multimodal Safeguard},
  author={Yu, Shaohan and Li, Lijun and Si, Chenyang and Sheng, Lu and Shao, Jing},
  year={2025},
  url={https://yushaohan.github.io/ProGuard/}
}

@article{zhang2026deepsight,
  title={DeepSight: An All-in-One LM Safety Toolkit},
  author={Zhang, Bo and Guo, Jiaxuan and Li, Lijun and Liu, Dongrui and Chen, Sujin and Chen, Guanxu and Zheng, Zhijie and Lin, Qihao and Yan, Lewen and Qian, Chen and others},
  journal={arXiv preprint arXiv:2602.12092},
  year={2026}
}
```
````