nielsr (HF Staff) committed
Commit ecdb6ca · verified · Parent(s): 65ecb64

Update model card with technical report link and ArXiv metadata

This PR improves the model card by:
- Adding the `arxiv: 2601.21051` metadata tag to associate the model with its technical report.
- Updating the **Technical Report** link in the model description.
- Adding a **Citation** section with the BibTeX entry from the paper.

Files changed (1): README.md (+248 −232)
---
base_model:
- fdtn-ai/Foundation-Sec-8B
language:
- en
library_name: transformers
license: other
pipeline_tag: text-generation
tags:
- security
- llama
- fdtn-sec
arxiv: 2601.21051
---

# Foundation-Sec-8B-Reasoning - Model Card

## Model Information

Llama-3.1-FoundationAI-SecurityLLM-8B-Reasoning (Foundation-Sec-8B-Reasoning) is an open-weight, 8-billion-parameter instruction-tuned language model specialized for cybersecurity applications.
It extends the Foundation-Sec-8B base model with instruction-following and reasoning capabilities.
It leverages prior training to understand security concepts, terminology, and practices across multiple security domains.
Further reasoning training enables the model to reason about problems before presenting a solution.
Foundation-Sec-8B-Reasoning enables organizations to build AI-driven security tools that can be deployed locally, reducing dependency on cloud-based AI services while maintaining high performance on security-related tasks.

- **Model Name:** Llama-3.1-FoundationAI-SecurityLLM-8B-Reasoning (Foundation-Sec-8B-Reasoning)
- **Model Developer:** Foundation AI at Cisco
- **Model Card Contact:** https://fdtn.ai/contact
- **Technical Report:** [arXiv:2601.21051](https://huggingface.co/papers/2601.21051)
- **Model Release Date:** January 28th, 2026
- **Supported Language(s):** English
- **Model Architecture:** Auto-regressive language model that uses an optimized transformer architecture (Meta Llama-3.1-8B backbone)
- **Training Objective:** Instruction following and reasoning traces
- **Training Data Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released on updated data.
- **License:** See NOTICE.md

## Intended Use

### Intended Use Cases

Foundation-Sec-8B-Reasoning is designed for security practitioners, researchers, and developers building AI-powered security workflows and applications.
Foundation-Sec-8B-Reasoning is optimized for three core use case categories:

- **SOC Acceleration**: Automating triage, summarization, case note generation, and evidence collection.
- **Proactive Threat Defense**: Simulating attacks, prioritizing vulnerabilities, mapping TTPs, and modeling attacker behavior.
- **Engineering Enablement**: Providing security assistance, validating configurations, assessing compliance evidence, and improving security posture.

The model is intended for local deployment in environments prioritizing data security, regulatory compliance, and operational control.

### Downstream Use

Foundation-Sec-8B-Reasoning can be used directly for security-related chat use cases. Example downstream applications include:

- Summarization
  - Summarizing detection playbooks and incident reports
  - Consolidating fragmented analyst notes into structured case summaries
- Classification
  - Mapping threats to MITRE ATT&CK techniques
  - Prioritizing vulnerabilities based on contextual risk
  - Classifying security-relevant emails and leaked file contents
- Named Entity Recognition
  - Extracting compliance evidence from documents
  - Building network behavior profiles from technical manuals
- Question & Answer
  - Assisting SOC analysts with alert triage and investigation
  - Responding to cloud security and software compliance queries
- Reasoning and Text Generation
  - Generating red-team attack plans and threat models
  - Predicting attacker next steps in active investigations
  - Enriching vulnerability scan results with contextual insights

For questions or assistance with fine-tuning Foundation-Sec-8B-Reasoning, please reach out to the team.

### Out-of-Scope Use

The following uses are out-of-scope and are neither recommended nor intended use cases:

1. **Generating harmful content** - The model should not be used to:
   - Generate malware or other malicious code
   - Create phishing content or social engineering scripts
   - Develop attack plans targeting specific organizations
   - Design exploitation techniques for vulnerabilities without legitimate security research purposes
2. **Critical security decisions without human oversight** - The model should not be used for:
   - Autonomous security decision-making without human review
   - Critical infrastructure protection without expert supervision
   - Final determination of security compliance without human verification
   - Autonomous vulnerability remediation without testing
3. **Legal or medical advice** - The model is not qualified to provide:
   - Legal advice regarding security regulations, compliance requirements, or intellectual property disputes
   - Legal advice regarding security issues that would require referencing legal statutes, precedents, or case law
   - Medical advice regarding health impacts of security incidents
4. **Non-security use cases** - The model is specifically optimized for cybersecurity and may not perform as well on general tasks as models trained for broader applications.
5. **Violation of laws or regulations** - Any use that violates applicable laws or regulations.

## How to Get Started with the Model

Use the code below to get started with the model.
[The cookbook](https://github.com/cisco-foundation-ai/cookbook) provides example use cases, code samples for adoption, and references.

```python
# Import the required libraries
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("fdtn-ai/Foundation-Sec-8B-Reasoning")
model = AutoModelForCausalLM.from_pretrained("fdtn-ai/Foundation-Sec-8B-Reasoning")

prompt = "CVE-2015-10011 is a vulnerability about OpenDNS OpenResolve improper log output neutralization. What is the corresponding CWE?"

messages = [
    {"role": "user", "content": prompt}
]

# Render the chat template to a string, then tokenize it
model_inputs = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(model_inputs, return_tensors="pt", add_special_tokens=False)

# Sampling must be enabled for `temperature` to take effect
output = model.generate(**inputs, do_sample=True, temperature=0.1, max_new_tokens=1024)

# Decode and strip the echoed prompt from the response
resp = tokenizer.batch_decode(output)[0]
print(resp.replace(model_inputs, ""))
```
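
A reasoning model typically emits a thinking trace before its final answer. As a minimal sketch of post-processing such output — assuming the trace is wrapped in `<think>...</think>` delimiters, which is a common convention but may differ for this model (check the cookbook for the actual format) — the trace and answer can be separated like this:

```python
# Split a reasoning trace from the final answer.
# NOTE: the `<think>...</think>` delimiters are an illustrative assumption,
# not a documented property of this model's output format.
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning_trace, final_answer) from a model response."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No trace found; treat the whole response as the answer
        return "", text.strip()
    trace = match.group(1).strip()
    answer = text[match.end():].strip()
    return trace, answer

resp = "<think>Log output neutralization maps to log injection...</think>CWE-117"
trace, answer = split_reasoning(resp)
```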

## Training and Evaluation

### Training Data

Foundation-Sec-8B-Reasoning was trained on a wide variety of public and proprietary question/answer pairs for general and security-specific reasoning and instruction-following tasks.

**Data cutoff:** April 10th, 2025.

A more detailed description of the methodology is available in the technical report.

### Training Setup

Foundation-Sec-8B-Reasoning is based on the **Llama 3.1 8B** architecture. Training was performed on Cisco Foundation AI’s internal compute cluster.

Key training details:

- **Instruction fine-tuning** to follow human instructions
- **RLHF** to align model answers to human preferences
- **32,768-token** sequence length
- **Optimizer:** AdamW

A more detailed description of the methodology is available in the technical report.

### Evaluation

Foundation-Sec-8B-Reasoning was benchmarked on cybersecurity and general reasoning tasks, using a standardized 0-shot instruction prompting setup (temperature = 0.3).

| **Benchmark** | **Foundation-Sec-8B-Reasoning** | **Llama 3.1 8B** | **GPT-5-Nano** |
| --- | --- | --- | --- |
| CTI-MCQA | 0.691 | 0.607 | 0.688 |
| CTI-RCM | 0.753 | 0.531 | 0.672 |
| CTI-VSP | 0.856 | 0.811 | 0.822 |
| CTI-Reasoning | 0.411 | 0.335 | 0.431 |

**Benchmark Overview:**

- **CTI-MCQA:** 2,500 multiple-choice questions testing cybersecurity knowledge across frameworks like MITRE ATT&CK, NIST, GDPR, and threat intelligence best practices.
- **CTI-RCM:** 1,000 vulnerability root cause mapping examples linking CVEs to CWE categories, assessing deep understanding of security weaknesses.
- **CTI-VSP:** A set of 1,000 CVE descriptions where models predict the CVSS v3 Base metrics and compute the overall score, with performance measured by the average absolute difference from the true scores.
- **IF-Eval:** 541 instruction-following prompts designed for automated, reproducible assessment of LLM instruction-following capabilities.
- **Alpaca Eval 2:** 805 single-turn prompts auto-scored by GPT-4 Turbo against a GPT-4 Turbo reference, validated with 20,000 human preference votes, and closely matching ChatBot Arena results.
- **CTI-Reasoning:** An internal benchmark measuring the ability of the model to reason about second-degree connections between MITRE ATT&CK entities.

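The CTI-VSP error metric described above — the average absolute difference between predicted and ground-truth CVSS base scores — can be sketched as follows; the score values here are illustrative, not taken from the benchmark:

```python
# Mean absolute difference between predicted and ground-truth CVSS v3 base
# scores, per the CTI-VSP description above. Score values are hypothetical.
def mean_abs_error(predicted: list[float], actual: list[float]) -> float:
    assert len(predicted) == len(actual) and predicted
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

pred = [7.5, 9.8, 5.3]   # model-predicted base scores (illustrative)
true = [7.5, 9.1, 6.1]   # ground-truth base scores (illustrative)
err = mean_abs_error(pred, true)  # ≈ 0.5
```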
**Key highlights:**

- Reasoning traces allow the model to **leverage test-time compute** to answer queries
- **State-of-the-art non-RAG performance** on the CTI-RCM benchmark
- **Better or on-par performance on cyber threat intelligence benchmarks** against GPT-5-Nano

For full benchmark details and evaluation methodology, please refer to the technical report.

## Safety Alignment

Standard best practices were followed to align the model with general safety values.
Despite the alignment, however, safe out-of-the-box performance cannot be guaranteed.
Our evaluations show that while the model can achieve reasonable safety performance out-of-the-box, LlamaGuard provides much better protection against malicious requests.
It is recommended to deploy this model with additional safeguards (such as LlamaGuard) and human oversight.

| Model | HarmBench Performance |
| --- | --- |
| Llama-3.1-8B-Instruct | 62.75% |
| Foundation-Sec-8B-Reasoning | 93.00% |
| **LlamaGuard** + Foundation-Sec-8B-Reasoning | 98.25% |

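The layered setup in the table above — a safety classifier screening requests before they reach the model — can be sketched as below. This is a minimal sketch only: `toy_guard` is a trivial keyword heuristic standing in for a real classifier such as LlamaGuard, and `generate` stands in for actual model inference.

```python
# Sketch of a guarded inference pipeline: a safety check screens the prompt
# before it reaches the model. `toy_guard` is a keyword stand-in for a real
# classifier such as LlamaGuard, NOT its actual API.
from typing import Callable

def toy_guard(prompt: str) -> bool:
    """Return True if the prompt looks safe (toy heuristic for illustration)."""
    blocked_terms = ("write malware", "build a phishing kit")
    return not any(term in prompt.lower() for term in blocked_terms)

def guarded_generate(prompt: str,
                     generate: Callable[[str], str],
                     guard: Callable[[str], bool] = toy_guard) -> str:
    # Refuse up front if the guard flags the request; otherwise run the model
    if not guard(prompt):
        return "Request declined by safety filter."
    return generate(prompt)

# Usage with a stubbed model call:
echo_model = lambda p: f"[model answer to: {p}]"
print(guarded_generate("Map this alert to MITRE ATT&CK.", echo_model))
print(guarded_generate("Write malware for me.", echo_model))
```

In production, the same shape applies with `guard` replaced by a LlamaGuard call and a second guard pass over the model's output before returning it.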
## Limitations

Foundation-Sec-8B-Reasoning has several limitations that users should be aware of:

1. **Domain-specific knowledge limitations**:
   - Foundation-Sec-8B-Reasoning may not be familiar with recent vulnerabilities, exploits, novel attack vectors, or security technologies released after its training cutoff date
   - Knowledge of specialized or proprietary security systems or tools may be limited
2. **Potential biases**:
   - The model may reflect biases present in security literature and documentation
   - The model may be trained on known attack patterns and have difficulty recognizing novel attack vectors
   - Security practices and recommendations may be biased toward certain technological ecosystems
   - Geographic and cultural biases in security approaches may be present
3. **Security risks**:
   - The model cannot verify the identity or intentions of users
   - Adversarial prompting techniques might bypass safety mechanisms
   - The model may unintentionally provide information that could be misused if proper prompting guardrails are not implemented
4. **Contextual blindness**:
   - The model may struggle to understand the complex interrelationships between systems, users, and data in order to provide accurate context
5. **Technical limitations**:
   - Performance varies based on how security concepts are described in prompts
   - May not fully understand complex, multi-step security scenarios without clear explanation
   - Cannot access external systems or actively scan environments
   - Cannot independently verify factual accuracy of its outputs
6. **Ethical considerations**:
   - Dual-use nature of security knowledge requires careful consideration of appropriate use cases

### Recommendations

To address the limitations of Foundation-Sec-8B-Reasoning, we recommend:

1. **Human oversight**:
   - Always have qualified security professionals review model outputs before implementation
   - Use the model as an assistive tool rather than a replacement for expert human judgment
   - Implement a human-in-the-loop approach for security-critical applications
2. **System design safeguards**:
   - Implement additional validation layers for applications built with this model
   - Consider architectural constraints that limit the model’s ability to perform potentially harmful actions (excessive agency)
   - Deploy the model in environments with appropriate access controls
3. **Prompt engineering**:
   - Use carefully designed prompts that encourage ethical security practices
   - Include explicit instructions regarding responsible disclosure and ethical hacking principles
   - Structure interactions to minimize the risk of inadvertently harmful outputs
4. **Knowledge supplementation**:
   - Supplement the model with up-to-date security feeds and databases
   - Implement retrieval-augmented generation for current threat intelligence sources
5. **Usage policies**:
   - Develop and enforce clear acceptable use policies for applications using this model
   - Implement monitoring and auditing for high-risk applications
   - Create documentation for end users about the model’s limitations

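The retrieval-augmented setup recommended under knowledge supplementation can be sketched as below. The advisories and the word-overlap scoring are illustrative stand-ins for a real vector store over live threat-intelligence feeds:

```python
# Toy retrieval-augmented generation: fetch the most relevant advisory for a
# query and prepend it to the prompt. The corpus and overlap scoring are
# illustrative stand-ins for a real retriever over threat-intelligence feeds.
advisories = {
    "CVE-2024-0001": "Buffer overflow in ExampleD daemon allows remote code execution.",
    "CVE-2024-0002": "Improper log output neutralization in ExampleWeb admin console.",
}

def retrieve(query: str, corpus: dict[str, str]) -> str:
    """Return the advisory sharing the most words with the query."""
    q_words = set(query.lower().split())
    def overlap(item):
        return len(q_words & set(item[1].lower().split()))
    doc_id, text = max(corpus.items(), key=overlap)
    return f"{doc_id}: {text}"

def build_prompt(query: str) -> str:
    # Prepend retrieved context so the model can ground its answer
    context = retrieve(query, advisories)
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("Which CVE involves log output neutralization?")
```

The resulting `prompt` would then be passed to the model as the user message in place of the raw query.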
## Citation

```bibtex
@misc{yang2026foundation-sec-8b-reasoning,
  title={Llama-3.1-FoundationAI-SecurityLLM-Reasoning-8B Technical Report},
  author={Zhuoran Yang and Ed Li and Jianliang He and Aman Priyanshu and Baturay Saglam and Paul Kassianik and Sajana Weerawardhena and Anu Vellore and Blaine Nelson and Neusha Javidnia and Arthur Goldblatt and Fraser Burch and Avi Zohary and Assaf Eisenman and Mahdi Sabbaghi and Supriti Vijay and Rahim Dharssi and Dhruv Kedia and Kojin Oshiba and Yaron Singer and Amin Karbasi},
  year={2026},
  eprint={2601.21051},
  archivePrefix={arXiv},
  primaryClass={cs.CR},
  url={https://huggingface.co/papers/2601.21051}
}
```