```python
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer)

text = "Stealc malware targets browser cookies and passwords."
entities = ner_pipeline(text)
print(entities)
```
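The raw `ner` pipeline emits one prediction per (sub)token, so multi-token names like `Stealc` may come back split across word pieces. A minimal sketch of regrouping them into entity spans, assuming BIO-style tags and Hugging Face-style output dicts — the `sample` list below is illustrative, not real model output, and the pipeline itself can do this grouping via `aggregation_strategy="simple"`:

```python
def group_entities(entities):
    """Merge BIO-tagged (sub)token predictions into entity spans."""
    spans = []
    for ent in entities:
        tag = ent["entity"]
        label = tag.split("-", 1)[-1]  # "B-Malware" -> "Malware"
        if tag.startswith("B-") or not spans or spans[-1]["label"] != label:
            spans.append({"label": label, "text": ent["word"], "end": ent["end"]})
        else:
            # Continuation token: strip the WordPiece marker and extend the span.
            spans[-1]["text"] += ent["word"].replace("##", "")
            spans[-1]["end"] = ent["end"]
    return spans

# Illustrative raw output for "Stealc" split into two word pieces.
sample = [
    {"entity": "B-Malware", "word": "Ste", "start": 0, "end": 3},
    {"entity": "I-Malware", "word": "##alc", "start": 3, "end": 6},
]
print(group_entities(sample))
```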
## Training Details

### Training Objective and Procedure

The `SecureBERT2.0-NER` model was fine-tuned for **token-level classification** on cybersecurity text using **Cross Entropy Loss**.
Training focused on accurately classifying entity boundaries and types across five cybersecurity-specific categories: *Malware, Indicator, System, Organization,* and *Vulnerability*.
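Token-level cross entropy averages the per-token negative log-likelihood, skipping positions that carry no label (the usual convention marks special and padding tokens with `-100`). A small sketch with illustrative probabilities and labels, not values from this model's training:

```python
import math

def token_cross_entropy(probs, labels, ignore_index=-100):
    """Mean negative log-likelihood over labeled token positions."""
    losses = []
    for p, y in zip(probs, labels):
        if y == ignore_index:
            continue  # padding / special tokens contribute no loss
        losses.append(-math.log(p[y]))
    return sum(losses) / len(losses)

# Two labeled tokens plus one ignored position (illustrative numbers).
probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
labels = [0, 1, -100]
print(round(token_cross_entropy(probs, labels), 4))
```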
The **AdamW** optimizer was used with a **linear learning rate scheduler**, and gradient clipping ensured stability during fine-tuning.
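The two stabilizers named above can be sketched in a few lines: a linear schedule ramps the learning rate up over a warmup window and then decays it to zero, while norm clipping rescales gradients whose L2 norm exceeds a threshold. The warmup count, base learning rate, and clipping threshold below are illustrative, not values reported for this model:

```python
import math

def linear_lr(step, total_steps, warmup, base_lr):
    """Linear warmup followed by linear decay to zero."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup))

def clip_by_norm(grads, max_norm):
    """Rescale gradients if their L2 norm exceeds max_norm."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        return [g * max_norm / norm for g in grads]
    return grads

base_lr = 5e-5
print(linear_lr(50, 1000, 100, base_lr))   # mid-warmup
print(linear_lr(550, 1000, 100, base_lr))  # mid-decay
print(clip_by_norm([3.0, 4.0], 1.0))       # norm 5.0 clipped to 1.0
```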
| Component | Description |
|:-----------|:-------------|
| GPUs Used | 8× NVIDIA A100 |
| Precision | Mixed precision (fp16) |
| Batch Size | 8 per GPU |
| Framework | Transformers (TensorFlow backend) |
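Taken together, the GPU and batch-size rows imply a global batch size per optimizer step, assuming no gradient accumulation (the table does not say either way):

```python
# Effective global batch size implied by the table above
# (assumes one forward/backward per step, no gradient accumulation).
num_gpus = 8
per_gpu_batch = 8
global_batch = num_gpus * per_gpu_batch
print(global_batch)
```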