Update README.md
Browse files

README.md CHANGED
@@ -1,13 +1,17 @@
---
-
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

-

## Model Details
@@ -15,15 +19,15 @@ tags: []

<!-- Provide a longer summary of what this model is. -->

-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

-- **Developed by:**
-- **Funded by [optional]:**
-- **Shared by [optional]:**
-- **Model type:**
-- **Language(s) (NLP):**
-- **License:**
-- **Finetuned from model [optional]:**

### Model Sources [optional]

@@ -41,31 +45,33 @@ This is the model card of a 🤗 transformers model that has been pushed on the

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

-

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

-

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

-

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

-

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## How to Get Started with the Model

@@ -79,7 +85,7 @@ Use the code below to get started with the model.

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

-

### Training Procedure

@@ -89,10 +95,14 @@ Use the code below to get started with the model.

[More Information Needed]

-
#### Training Hyperparameters

-- **Training regime:**

#### Speeds, Sizes, Times [optional]

@@ -110,19 +120,19 @@ Use the code below to get started with the model.

<!-- This should link to a Dataset Card if possible. -->

-

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

-

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

-

### Results

@@ -130,7 +140,7 @@ Use the code below to get started with the model.

#### Summary

-

## Model Examination [optional]

@@ -144,7 +154,7 @@ Use the code below to get started with the model.

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

-- **Hardware Type:**
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
@@ -154,19 +164,17 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

### Model Architecture and Objective

-

### Compute Infrastructure

-[More Information Needed]
-
#### Hardware

-

#### Software

-

## Citation [optional]

@@ -192,8 +200,12 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

## Model Card Authors [optional]

-

## Model Card Contact

-[More Information Needed]
@@ -1,13 +1,17 @@
---
+library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

+This is a custom 🤗 transformers model fine-tuned for cybersecurity-related tasks, particularly for generating or analyzing Metasploit payloads.

## Model Details
@@ -15,15 +19,15 @@ tags: []

<!-- Provide a longer summary of what this model is. -->

+This model has been fine-tuned on a cybersecurity-focused dataset, specifically Metasploit payloads. It is based on the LLaMA2 7B architecture and has been adapted with QLoRA for parameter- and memory-efficient fine-tuning. The model is designed to assist in analyzing and generating cybersecurity-related content.

+- **Developed by:** Sanjay
+- **Funded by [optional]:** N/A
+- **Shared by [optional]:** N/A
+- **Model type:** LLaMA2 7B QLoRA
+- **Language(s) (NLP):** English
+- **License:** [More Information Needed]
+- **Finetuned from model [optional]:** georgesung/open_llama_7b_qlora_uncensored

### Model Sources [optional]

@@ -41,31 +45,33 @@

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+This model can be used directly for tasks such as generating or analyzing payloads, threat hunting, and cybersecurity data analysis.

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

+It can be integrated into cybersecurity analysis tools or further fine-tuned for specific threat detection tasks.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

+This model must not be used for malicious purposes, including generating harmful payloads or facilitating illegal activity.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

+- **Bias:** The model may produce biased or incorrect results, depending on the training data and use case.
+- **Risks:** The model could be misused in cybersecurity operations or for unauthorized generation of harmful payloads.
+- **Limitations:** It is focused on cybersecurity-related content and is not suited to general-purpose NLP tasks.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

+Users should handle generated results with caution, especially in sensitive cybersecurity environments, and vet model output before acting on it.

## How to Get Started with the Model

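As a minimal, hypothetical usage sketch with 🤗 Transformers: the repository id below is a placeholder (the card does not state a published id), and generation settings are assumptions.

```python
# MODEL_ID is a placeholder: substitute the actual Hub id or a local path.
MODEL_ID = "your-username/your-finetuned-model"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the fine-tuned model and return a completion for `prompt`."""
    # Imports live inside the function so the sketch reads standalone;
    # calling it requires transformers, torch, and the model weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```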
@@ -79,7 +85,7 @@ Use the code below to get started with the model.

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+The training dataset consists of payload-related content from Metasploit. Documentation on data pre-processing and filtering is still needed.

### Training Procedure

@@ -89,10 +95,14 @@

[More Information Needed]

#### Training Hyperparameters

+- **Training regime:** 4-bit precision (QLoRA) with fp16 mixed precision. Key hyperparameters:
+  - LoRA attention dimension (r): 64
+  - LoRA alpha: 16
+  - Initial learning rate: 2e-4
+  - Per-GPU training batch size: 4
+  - Gradient accumulation steps: 1

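As a hedged illustration, the hyperparameters listed above map onto peft/transformers configuration objects roughly as follows; the output path and any unlisted settings (e.g. target modules) are assumptions, not documented values.

```python
def build_training_configs():
    """Return (quantization, LoRA, trainer) configs mirroring the list above."""
    # Imports are deferred so the sketch reads standalone; building the
    # objects requires peft, transformers, and bitsandbytes installed.
    from peft import LoraConfig
    from transformers import BitsAndBytesConfig, TrainingArguments

    bnb_config = BitsAndBytesConfig(load_in_4bit=True)  # QLoRA: 4-bit base weights
    lora_config = LoraConfig(
        r=64,             # LoRA attention dimension
        lora_alpha=16,
        task_type="CAUSAL_LM",
    )
    train_args = TrainingArguments(
        output_dir="outputs",             # placeholder path
        learning_rate=2e-4,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=1,
        fp16=True,                        # mixed-precision training
    )
    return bnb_config, lora_config, train_args
```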
#### Speeds, Sizes, Times [optional]

@@ -110,19 +120,19 @@

<!-- This should link to a Dataset Card if possible. -->

+The evaluation data consists of unseen payloads and Metasploit-related content.

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

+Performance was evaluated on cybersecurity relevance and accuracy.

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

+Evaluation metrics include perplexity, domain-specific accuracy, and payload generation quality.
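Of the metrics named above, only perplexity has a standard formula; since the evaluation script itself is not provided, here is a minimal sketch of that computation:

```python
import math

def perplexity_from_nll(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood)."""
    return math.exp(sum(token_nlls) / len(token_nlls))
```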

### Results

@@ -130,7 +140,7 @@

#### Summary

+[More Information Needed]

## Model Examination [optional]

@@ -144,7 +154,7 @@

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+- **Hardware Type:** NVIDIA A100
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
@@ -154,19 +164,17 @@

### Model Architecture and Objective

+Based on the LLaMA2 7B architecture, fine-tuned with QLoRA for cybersecurity tasks.

### Compute Infrastructure

#### Hardware

+NVIDIA A100 GPUs were used for training.

#### Software

+Training was conducted with PyTorch and Hugging Face's 🤗 Transformers library.

## Citation [optional]

@@ -192,8 +200,12 @@

## Model Card Authors [optional]

+- **Author:** Sanjay

## Model Card Contact

+[More Information Needed]