---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-unformatted-network-data-test-ids-2018
  results: []
widget:
- text: >-
    22 6 387807 11802.2624656079 113.4584986862 56.7292493431 56.7292493431 22
    22 1912 2665 640 0 86.9090909091 137.6880217811 976 0 121.1363636364
    258.6415602914 976 0 101.7111111111 203.7371995112 41508.8464646465 712
    712 32 16 9018.7674418605 78983 5 19638.681160061 387709 116875 248
    18462.3333333333 31056.8742846625 387800 160821 13 18466.6666666667
    39698.7330570301 0 0 0 0 0 0 0 1 0 0 0 1 104.0227272727 26883 230 0 0 0.0
    0.0 0 0 0.0 0.0 0 0 0 0 0 0 86.9090909091 121.1363636364 0 22 22 1912 2665
  example_title: 1 SSH
- text: >-
    22 6 124745.0 60715.86035512446 344.70319451681434 168.3434205779791
    176.35977393883522 21 22 3642 3932 1266 66 173.42857142857142
    246.95412557002248 1186 66 178.72727272727272 242.42303029999997 1266 66
    176.13953488372093 244.6607125203114 59858.864250946455 420 440 20 16
    2970.119047619048 40093.0 10.0 6398.343342885462 124745.0 44477.0 241.0
    6237.25 9784.131386459405 120658.0 40834.0 82.0 5745.619047619048
    8513.610176407401 16 15 0 0 2 2 0 31 42 0 0 1.0476190476190477
    176.13953488372093 26883 201 38382.0 1334.0 14819.666666666666
    16719.110509301092 38759.0 5563.0 18511.0 14501.979267212688 0.0 0.0 0.0
    0.0 0.0 0.0 173.42857142857142 178.72727272727272 0 21 22 3642 3932
  example_title: 2 SSH recreated
- text: >-
    80 6 99999019 0.1600015696 0.0400003924 0.0200001962 0.0200001962 2 2 16 0
    8 8 8.0 0.0 0 0 0.0 0.0 8 0 4.8 4.38178046 19.2 64 64 32 1
    33333006.3333333 99999016 1 57734457.9402723 99999018 99999018 99999018
    99999018.0 0.0 99999017 99999017 99999017 99999017.0 0.0 1 0 0 0 0 1 0 0 1
    0 0 1 6.0 211 219 2 2 2.0 0.0 99999016 99999016 99999016.0 0.0 0 0 0 0 0 0
    8.0 0.0 0 2 2 16 0
  example_title: 3 Slowloris
- text: >-
    80 6 5588.0 92877.5948460988 894.7745168217609 536.8647100930565
    357.9098067287044 3 2 379 140 239 66 126.33333333333331 79.73428093082394
    74 66 70.0 4.0 239 66 103.8 67.69460835251209 4582.56 60 40 20 1 1397.0
    5374.0 34.0 2296.518996220149 5554.0 5374.0 180.0 2777.0 2597.0 0 0 0 0 0
    1 0 0 0 0 2 0 1 4 0 0 0.6666666666666666 103.8 26883 209 0.0 0.0 0.0 0.0
    0.0 0.0 0.0 0.0 0 0 0 0 0 0 126.33333333333331 70.0 0 3 2 379 140
  example_title: 4 Slowloris recreated
- text: >-
    80 6 5001243 262.3347835728 1.5996023389 0.7998011694 0.7998011694 4 4 340
    972 340 0 85.0 170.0 972 0 243.0 486.0 972 0 145.7777777778 329.6064993965
    108640.444444444 136 136 32 1 714463.285714286 5000411 6 1889925.30143528
    832 531 31 277.3333333333 250.0806536566 5001237 5000676 258 1667079.0
    2886979.68806727 0 0 0 0 0 0 0 1 0 0 0 1 164.0 26883 219 0 0 0.0 0.0 0 0
    0.0 0.0 0 0 0 0 0 0 85.0 243.0 0 4 4 340 972
  example_title: 5 GoldenEye
- text: >-
    80 6 5001927.0 39.58474403964712 0.5997688490855624 0.3998458993903749
    0.1999229496951874 2 1 132 66 66 66 66.0 0.0 66 66 66.0 0.0 66 66 66.0 0.0
    0.0 40 20 20 0 2500963.5 4961616.0 40311.0 2460652.5 0.0 0.0 0.0 0.0 0.0
    0.0 0.0 0.0 0.0 0.0 0 0 0 0 1 0 0 0 3 0 0 0.5 66.0 191 207 0.0 0.0 0.0 0.0
    0.0 0.0 0.0 0.0 0 0 0 0 0 0 66.0 66.0 0 2 1 132 66
  example_title: 6 GoldenEye recreated
---
# bert-unformatted-network-data-test-ids-2018
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on network-flow records from the CSE-CIC-IDS2018 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0000
- F1: 1.0
## Labels

- label_0: Benign
- label_1: SSH-Bruteforce
- label_2: DoS attacks-Slowloris
- label_3: DoS attacks-GoldenEye

## Widget examples

1. SSH-Bruteforce (patator) record from the original dataset
2. SSH-Bruteforce (patator) record from the replicated attack dataset
3. Slowloris DoS record from the original dataset
4. Slowloris DoS record from the replicated attack dataset
5. GoldenEye DoS record from the original dataset
6. GoldenEye DoS record from the replicated attack dataset

The examples are records from CSE-CIC-IDS2018 on AWS, formatted for model training in [this notebook](https://colab.research.google.com/drive/1PmLep9D3NfMhYsX0soTBhfVXFkawGgGx?authuser=0#scrollTo=ReaH6NCljdsn).
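Since each input is just a whitespace-separated string of flow features, inference goes through the standard `text-classification` pipeline. A minimal sketch, assuming the `transformers` library and the label mapping above (the helper name and model id argument are placeholders):

```python
from typing import Dict, List

# Label ids emitted by the classifier head, per the mapping above.
ID2LABEL: Dict[int, str] = {
    0: "Benign",
    1: "SSH-Bruteforce",
    2: "DoS attacks-Slowloris",
    3: "DoS attacks-GoldenEye",
}


def classify_flow(flow_features: str, model_id: str) -> List[dict]:
    """Classify one space-separated feature string.

    The pipeline is built inside the function because it downloads the
    model weights on first use.
    """
    from transformers import pipeline  # heavyweight import, done lazily

    clf = pipeline("text-classification", model=model_id)
    return clf(flow_features)


# Example (not run here): pass one of the widget strings above, e.g.
# classify_flow("22 6 387807 11802.2624656079 ...", "<this-repo-id>")
```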
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
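The list above maps one-to-one onto a standard `transformers.TrainingArguments` setup. A sketch of that mapping (collected as a plain dict so the values are visible; `output_dir` and the surrounding `Trainer` wiring are placeholders):

```python
# Hyperparameters from the list above, named as the corresponding
# transformers.TrainingArguments keyword arguments.
hparams = dict(
    learning_rate=2e-5,
    per_device_train_batch_size=8,   # train_batch_size: 8
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# e.g. (not run here):
# from transformers import Trainer, TrainingArguments
# args = TrainingArguments(output_dir="out", **hparams)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
```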
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|---|---|---|---|---|
| 0.0033 | 1.0 | 1500 | 0.0000 | 1.0 |
| 0.0038 | 2.0 | 3000 | 0.0000 | 1.0 |
| 0.0 | 3.0 | 4500 | 0.0000 | 1.0 |
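An F1 of 1.0 at every logged step means the model separates these four classes perfectly on the held-out split. For reference, a support-weighted F1 of the kind typically reported for multi-class classification (the exact averaging mode used during training is not stated in this card, so this is an assumption) can be computed as:

```python
from collections import Counter


def weighted_f1(labels, preds):
    """Support-weighted per-class F1 (sklearn's average="weighted")."""
    support = Counter(labels)          # how many gold examples per class
    total = 0.0
    for c in sorted(set(labels)):
        tp = sum(1 for y, p in zip(labels, preds) if y == c and p == c)
        fp = sum(1 for y, p in zip(labels, preds) if y != c and p == c)
        fn = sum(1 for y, p in zip(labels, preds) if y == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += f1 * support[c]       # weight each class by its support
    return total / len(labels)
```

Perfect predictions over the four labels give exactly 1.0, matching the table.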
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1