eademir committed · verified
Commit 4aab4cb · Parent(s): e12a243

Update README.md

Files changed (1): README.md +72 -155
---
base_model: mistralai/Mistral-7B-v0.3
library_name: peft
license: mit
datasets:
- AlicanKiraz0/Cybersecurity-Dataset-v1
language:
- en
---

# Mistral-7B-v0.3 Fine-tuned on Cybersecurity-Dataset-v1

## Model Description
This model is a fine-tuned version of mistralai/Mistral-7B-v0.3, adapted specifically for cybersecurity-related tasks using the AlicanKiraz0/Cybersecurity-Dataset-v1. The model was further trained with supervised fine-tuning using LoRA (Low-Rank Adaptation) to improve its ability to answer questions and generate content relevant to information security, threat analysis, incident response, application security, and other cybersecurity domains.

The training objective was to make the base model more effective for real-world cybersecurity use cases, covering both offensive and defensive security topics. The fine-tuned model is suitable for tasks such as question answering, incident response simulation, CVE summarization, and security education.

- **Base Model:** mistralai/Mistral-7B-v0.3
- **Fine-Tuning Approach:** Parameter-efficient fine-tuning with LoRA (PEFT)
- **Domain:** Cybersecurity (offensive & defensive, information security, vulnerability analysis, incident response, etc.)
- **Data:** Publicly available, expert-curated cybersecurity texts and structured records

**Intended Use:** This model is designed to assist cybersecurity professionals, researchers, and educators with high-quality responses and reasoning in the cybersecurity domain. It can be used for chatbots, research assistants, automated knowledge extraction, and educational tools.

## Limitations

- The model may generate outdated or incorrect information if its training data is out of date.
- It should not be relied upon for critical, real-world incident response without human oversight.
- It is not suitable for generating or promoting illegal or unethical hacking activities.
 
## Model Details

- **Model type:** Causal language model (decoder-only, LoRA fine-tuned)
- **Language(s):** English (en)
- **License:** MIT
- **Finetuned from:** mistralai/Mistral-7B-v0.3

## Direct Use

The model can be used as a conversational assistant or question-answering system for cybersecurity-related topics. Intended users are cybersecurity professionals, students, and researchers.
 
## Downstream Use

The model can be integrated into educational tools, automated incident response simulators, or red-team/blue-team training assistants.

## Out-of-Scope Use

The model should not be used for automating real attack scenarios, generating exploit code, or operating in critical security systems without human oversight.
  ## Bias, Risks, and Limitations

The model may reflect biases present in public cybersecurity datasets. It may hallucinate or return outdated information and is not a substitute for professional judgment.

## How to Get Started

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach the LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.3", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.3")
lora_model = PeftModel.from_pretrained(base_model, "eademir/mistralv0.3-cybersec")

prompt = "User: What is a buffer overflow?\nAssistant:"
# device_map="auto" decides placement, so send inputs to the model's device
# rather than hard-coding "cuda".
inputs = tokenizer(prompt, return_tensors="pt").to(lora_model.device)
outputs = lora_model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
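For repeated inference, the adapter can optionally be folded into the base weights. This is a minimal sketch using PEFT's `merge_and_unload`; the output directory name is illustrative:

```python
# Optional: merge the LoRA weights into the base model so no adapter hook
# is needed at inference time, then save the merged checkpoint.
merged_model = lora_model.merge_and_unload()
merged_model.save_pretrained("mistral-cybersec-merged")  # illustrative path
tokenizer.save_pretrained("mistral-cybersec-merged")
```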

## Training Data

See [AlicanKiraz0/Cybersecurity-Dataset-v1](https://huggingface.co/datasets/AlicanKiraz0/Cybersecurity-Dataset-v1) for details and license.
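To inspect the data before training or evaluation, the dataset can be pulled straight from the Hub. A minimal sketch; the `train` split name is an assumption, so check the dataset card:

```python
from datasets import load_dataset

# Fetch the fine-tuning corpus from the Hugging Face Hub.
# The "train" split name is an assumption; verify it on the dataset card.
dataset = load_dataset("AlicanKiraz0/Cybersecurity-Dataset-v1", split="train")
print(dataset)      # record count and column names
print(dataset[0])   # one example, to see the record structure
```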

## Training Procedure & Hyperparameters

- **Precision:** fp16 mixed precision
- **Batch size:** 24
- **Epochs:** 52
- **Learning rate:** 2e-4
- **LoRA rank (r):** 8
- **LoRA alpha:** 16

A configuration sketch reproducing these settings is shown below.
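This is a minimal sketch of how the hyperparameters above map onto PEFT and `transformers` configuration objects; the `target_modules` list and output directory are assumptions (typical choices for Mistral attention projections), not values taken from this card:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# LoRA settings from the card; target_modules is an assumed, typical choice.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.3")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable

# Trainer settings matching the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="mistral-cybersec-lora",  # illustrative
    per_device_train_batch_size=24,
    num_train_epochs=52,
    learning_rate=2e-4,
    fp16=True,
)
```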
  ## Evaluation

Evaluated on a held-out subset of the same dataset using cross-entropy loss (final value: ~0.81). Human inspection suggests strong security-domain alignment, but a thorough downstream-task evaluation is ongoing.
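Assuming the reported value is the natural-log cross-entropy that `transformers` reports, a loss of ~0.81 corresponds to a perplexity of roughly 2.25:

```python
import math

# Perplexity is exp(cross-entropy) when the loss is measured in nats.
eval_loss = 0.81
print(f"perplexity ~ {math.exp(eval_loss):.2f}")  # ~ 2.25
```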
  ## Environmental Impact

- **Hardware Type:** NVIDIA A100
- **Hours used:** 2
- **Cloud Provider:** Google Colab
 
## Technical Specifications

- **Architecture:** Mistral-7B, decoder-only transformer
- **Fine-tuning library:** PEFT 0.15.2 (LoRA)
- **Software:** transformers, peft, bitsandbytes, datasets (see the quantized-loading sketch below)
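Since bitsandbytes is in the stack, the adapter can also be served on smaller GPUs by loading the base model in 4-bit. A minimal sketch under assumed quantization settings (NF4, bfloat16 compute), not a configuration taken from this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Assumed 4-bit settings; adjust for your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.3",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "eademir/mistralv0.3-cybersec")
```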

## Model Card Authors

[eademir](https://huggingface.co/eademir)

## Model Card Contact

https://huggingface.co/eademir

## Citation

If you use this model, please cite it as:

```bibtex
@misc{eademir_mistral_cybersec_2024,
  title        = {Mistral-7B-v0.3 Fine-tuned on Cybersecurity-Dataset-v1},
  author       = {Eray Aydemir},
  howpublished = {\url{https://huggingface.co/eademir/mistralv0.3-cybersec}},
  year         = {2024},
  note         = {Fine-tuned Mistral-7B-v0.3 on AlicanKiraz0/Cybersecurity-Dataset-v1}
}
```

### Framework versions

- PEFT 0.15.2