These results confirm that **domain-specific pretraining and fine-tuning** substantially enhance semantic understanding and information retrieval capabilities in cybersecurity applications.

---
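The retrieval results above come from cross-encoder-style reranking: a model scores each (query, passage) pair and passages are sorted best-first. A minimal sketch of that loop, using a toy keyword-overlap scorer as a stand-in for a real cross-encoder (the helper names here are illustrative, not part of this model's API):

```python
def rerank(query, passages, score_fn):
    # Score every (query, passage) pair and return passages sorted best-first.
    return sorted(passages, key=lambda p: score_fn(query, p), reverse=True)

def toy_score(query, passage):
    # Stand-in scorer: fraction of query tokens that appear in the passage.
    # A real setup would swap in a cross-encoder relevance score instead
    # (e.g. a sentence-transformers CrossEncoder; model id assumed).
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

docs = [
    "Patch management reduces exposure to known CVEs.",
    "The quarterly sales report is due on Friday.",
    "Phishing emails often deliver credential-stealing malware.",
]
print(rerank("phishing malware emails", docs, toy_score)[0])  # phishing passage first
```

Swapping `toy_score` for a domain-tuned scorer is the only change needed to reproduce the comparison setup; the reranking loop itself stays identical.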
## Citation

BibTeX:

```
@article{aghaei2025securebert,
  title={SecureBERT 2.0: Advanced Language Model for Cybersecurity Intelligence},
  author={Aghaei, Ehsan and Jain, Sarthak and Arun, Prashanth and Sambamoorthy, Arjun},
  journal={arXiv preprint arXiv:2510.00240},
  year={2025}
}
```
## Model Card Authors