Instructions to use athena129/Gemma4Defense-2B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use athena129/Gemma4Defense-2B with Transformers:
# Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-generation", model="athena129/Gemma4Defense-2B") messages = [ {"role": "user", "content": "Who are you?"}, ] pipe(messages)# Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("athena129/Gemma4Defense-2B") model = AutoModelForCausalLM.from_pretrained("athena129/Gemma4Defense-2B") messages = [ {"role": "user", "content": "Who are you?"}, ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) outputs = model.generate(**inputs, max_new_tokens=40) print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:])) - PEFT
How to use athena129/Gemma4Defense-2B with PEFT:
Task type is invalid.
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use athena129/Gemma4Defense-2B with vLLM:
Install from pip and serve model
# Install vLLM from pip: pip install vllm # Start the vLLM server: vllm serve "athena129/Gemma4Defense-2B" # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:8000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "athena129/Gemma4Defense-2B", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }'Use Docker
docker model run hf.co/athena129/Gemma4Defense-2B
- SGLang
How to use athena129/Gemma4Defense-2B with SGLang:
Install from pip and serve model
# Install SGLang from pip: pip install sglang # Start the SGLang server: python3 -m sglang.launch_server \ --model-path "athena129/Gemma4Defense-2B" \ --host 0.0.0.0 \ --port 30000 # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:30000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "athena129/Gemma4Defense-2B", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }'Use Docker images
docker run --gpus all \ --shm-size 32g \ -p 30000:30000 \ -v ~/.cache/huggingface:/root/.cache/huggingface \ --env "HF_TOKEN=<secret>" \ --ipc=host \ lmsysorg/sglang:latest \ python3 -m sglang.launch_server \ --model-path "athena129/Gemma4Defense-2B" \ --host 0.0.0.0 \ --port 30000 # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:30000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "athena129/Gemma4Defense-2B", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }' - Docker Model Runner
How to use athena129/Gemma4Defense-2B with Docker Model Runner:
docker model run hf.co/athena129/Gemma4Defense-2B
Gemma4Defense-2B — Model Card
Model Information
Gemma4Defense-2B is a 2.3B-parameter language model specialized for defensive cybersecurity tasks, fine-tuned from Google's Gemma-4-E2B-it. It is purpose-built for two evaluation skills measured by CTI-Bench: mapping CVE descriptions to their CWE category (CTI-RCM) and answering cyber threat intelligence multiple-choice questions (CTI-MCQ).
Under the evaluation protocol of Foundation-Sec-8B (arXiv:2504.21039), Gemma4Defense-2B retains 98.6% of Foundation-Sec-Instruct-8B's CTI-RCM accuracy while exceeding its CTI-MCQ by +10.5 points, at approximately one-quarter the parameter count.
A companion model trained with the same recipe on Qwen3-4B-Instruct-2507 — CyberSecQwen-4B — converges to the same CTI-RCM accuracy within 0.9 points (0.6664 vs 0.6754), demonstrating that the result is recipe-driven rather than substrate-specific.
| Base model | google/gemma-4-E2B-it |
| Parameters | 2.3B effective |
| Architecture | Gemma-4 (text + vision + audio; fine-tuned for text-only inference) |
| Adapter | LoRA r=64, alpha=64, dropout=0.05 |
| Precision | bfloat16 |
| Languages | English |
| License | Gemma Terms of Use |
Intended Use
Intended Use Cases
Gemma4Defense-2B is intended for security practitioners, researchers, and engineers working on:
- CWE classification — mapping vulnerability descriptions (CVEs, advisories) to MITRE CWE categories
- Cyber threat intelligence Q&A — answering structured questions about cybersecurity concepts, attacks, controls
- Defensive analysis assistants — supporting human analysts who triage CVEs, prioritize patches, or document threat-actor behavior
- Cybersecurity benchmarking — as a reference for compact-model performance on CTI-Bench RCM/MCQ subsets
Downstream Use
The model can be used as a building block in:
- Security operations center (SOC) ticket triage tools that suggest a likely CWE for an incoming CVE
- Vulnerability management dashboards that pre-classify CVE feeds before human review
- Educational tutoring assistants for cybersecurity coursework grounded in CTI-Bench-style content
- Internal cyber knowledge bases / chat assistants for security teams
Out-of-Scope Use
The following uses are out-of-scope and are neither recommended nor intended use cases:
- Generating harmful content — the model must not be used to produce exploit code, weaponized proof-of-concept payloads, attacker tradecraft, or instructions that materially aid offensive operations.
- Critical security decisions without human oversight — the model should not auto-execute remediation, blocklist updates, account lockouts, or any action whose reversal carries cost; outputs are advisory and require qualified human review.
- Legal or medical advice — the model is trained on cybersecurity domain content and is not appropriate for legal, medical, or other regulated-advice contexts.
- Non-security use cases — general chat, code generation, summarization, translation, or other domains outside its specialization will produce lower-quality output than purpose-built models.
- Violation of laws or regulations — including but not limited to unauthorized vulnerability scanning, illegal data access, or misuse contrary to applicable cybersecurity statutes (CFAA, GDPR, etc.).
Hardware Requirements
The numbers below are first-principles estimates from the bf16 weight footprint plus typical KV-cache overhead at the trained 4096-token context. They are not measured throughput numbers; for production deployment, profile against your specific traffic pattern.
| Specification | Gemma4Defense-2B | Foundation-Sec-Instruct-8B (reference) |
|---|---|---|
| Parameters (per-token effective / total weights) | 2.3 B / ~5 B (Gemma-4 Per-Layer Embeddings) | 8 B |
| bf16 weight file on disk | ~9.3 GB | ~16 GB |
| Inference VRAM, weights only (bf16) | ~9 GB | ~16 GB |
| Inference VRAM, weights + 4 K KV cache (bf16) | ~10–11 GB | ~17–18 GB |
| Single-GPU class (bf16, headroom for batch ≥ 1) | Fits on 12 GB+ consumer GPU (e.g., RTX 3060 12 GB, RTX 4070 12 GB, T4 16 GB) | Typically requires 24 GB+ (e.g., RTX 4090, A10, A100 40 GB) |
Notes:
- "Per-token effective" parameters reflect Gemma-4's Per-Layer Embedding architecture: ~2.3 B parameters activate per token, but the full ~5 B weight matrix must be resident in VRAM during inference. The compute cost at inference scales with the per-token effective count.
- Compute (FLOPs / token) is approximately proportional to the per-token effective parameter count at fixed context length, so per-token inference cost is roughly 0.29× that of an 8 B model.
- Quantized variants (int8, int4) further reduce VRAM by ~½ and ~¼ respectively. The released checkpoint is bf16 only; community quantization is not validated by the authors of this release.
How to Get Started with the Model
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "athena129/Gemma4Defense-2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
cve = ("A deserialization vulnerability in the destruct() function of Laravel "
"v8.5.9 allows attackers to execute arbitrary commands.")
messages = [{
"role": "user",
"content": (
"Analyze the following CVE description and map it to the appropriate CWE. "
"Provide a brief justification for your choice. "
"Ensure the last line of your response contains only the CWE ID.\n\n"
f"CVE Description: {cve}"
),
}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, temperature=0.3, do_sample=True)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
Training and Evaluation
Training Data
The model was trained on a combined cybersecurity corpus of approximately 12,500 supervised records:
- CTI-RCM 2021 (decontaminated) — CVE → CWE classification examples drawn from MITRE/NVD public records dated 2021. Items appearing in the CTI-Bench evaluation splits were explicitly removed prior to training. (~6,776 records)
- CVE / CTI synthetic Q&A — defensive-analyst-style cyber question–answer pairs grounded in CVE descriptions, designed to teach domain reasoning while preserving terse-answer formats. (~5,776 records)
Decontamination matters here: an earlier internal version (v3) of this work showed roughly 72% test-set overlap when trained on undeduplicated CTI corpora, producing inflated CTI-RCM scores that did not generalize. The released v3.4 model trains exclusively on the 2021 cohort with overlap items removed.
Methodology
This model uses direct supervised fine-tuning (SFT) of an instruction-tuned base via LoRA. The training recipe was selected through a controlled-experiment series across multiple trained variants spanning two model families and several corpus compositions, with multi-trial benchmark validation locking the released hyperparameters.
Key methodological choices that informed the released recipe:
- Direct SFT, not knowledge distillation. Knowledge-distillation variants from a larger 20B teacher model (CyberPal-2.0-20B) were evaluated during recipe development. At the corpus sizes tested (≤ 15K supervised records), direct SFT on the curated corpus outperformed distillation on the headline benchmarks. The released model is direct SFT only.
- Decontaminated training data. An earlier internal iteration showed ~72% test-set overlap when trained on undeduplicated CTI corpora, producing inflated CTI-RCM scores that did not generalize. The released model trains exclusively on the 2021 cohort with CTI-Bench overlap items removed.
- Instruction-tuned base, not pre-trained base. Direct SFT on the IT checkpoint preserves the existing format priors (terse-answer multiple-choice convention) better than SFT on the pre-trained base; comparable runs on base checkpoints showed substantial CTI-MCQ format-binding decay (~−14 to −38 pp in the worst case) at the same corpus scale.
- Multi-trial benchmarking. All headline numbers are means of 5 independent trials with random sampling seeds at temperature 0.3; standard deviations are reported alongside.
- Cross-substrate validation. The identical training corpus and hyperparameters were independently applied to Qwen3-4B-Instruct-2507 (CyberSecQwen-4B). The two models converge to within 0.9 points on CTI-RCM, providing a built-in robustness check that the result is recipe-driven rather than substrate-specific.
Training Setup
| Hyperparameter | Value |
|---|---|
| Adapter | LoRA, r=64, alpha=64, dropout=0.05 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Learning rate | 5e-5 |
| Schedule | cosine, warmup_ratio=0.05 |
| Weight decay | 0.01 |
| Per-device batch size | 2 |
| Gradient accumulation | 8 (effective batch = 16) |
| Epochs | 10 (cumulative across v3.1 → v3.4 incremental training, with adapter resumption) |
| Max sequence length | 4096 |
| Precision | bfloat16 |
| Attention implementation | sdpa |
| Random seed | 42 |
Notes on attention: Gemma-4 has dual head_dim per layer (256 on sliding-attention layers, 512 on global-attention layers). On AMD MI300X (gfx942), FlashAttention-2 via Composable Kernels is bounded at head_dim=256 by the hardware shared-memory budget, so this model was trained with PyTorch's sdpa implementation rather than FA2. The companion CyberSecQwen-4B model uses FA2 because Qwen3-4B's head_dim=128 fits within the limit.
The base model was Gemma-4-E2B-it, an instruction-tuned variant. Training was performed on AMD MI300X 192GB hardware via the AMD Developer Cloud, using PyTorch + ROCm + Hugging Face transformers, peft, and trl 0.29.1 inside the official vllm/vllm-openai-rocm Docker image.
Evaluation
Evaluated under the Foundation-Sec-8B protocol (arXiv:2504.21039 §B.3-B.4): zero-shot for instruction-tuned models, 5-shot for pretrained base models, dataset's own Prompt column as the user message, no system prompt, temperature 0.3, max-tokens 512, concurrency 32. Reported numbers are the mean of 5 independent trials with random sampling seeds; standard deviations are reported alongside.
Headline result
| Benchmark | Metric | Gemma4Defense-2B | Foundation-Sec-Instruct-8B | Δ |
|---|---|---|---|---|
| CTI-MCQ (2,500 items) | strict_acc, 5-trial mean ± std | 0.6042 ± 0.0090 | 0.4996 | +10.5 pp |
| CTI-RCM (1,000 items) | strict_acc, 5-trial mean ± std | 0.6754 ± 0.0035 | 0.6850 | -1.0 pp (within ~3σ of measurement noise) |
Pre / post fine-tune comparison
The improvement attributable to this fine-tune over its starting checkpoint:
| Stage | CTI-RCM | CTI-MCQ |
|---|---|---|
| Gemma-4-E2B-it (raw, instruction-tuned base) | 0.580 | 0.578 |
| Gemma4Defense-2B (this fine-tune) | 0.6754 | 0.6042 |
| Lift | +9.5 pp | +2.6 pp |
The CTI-MCQ lift is intentionally small in absolute terms: Gemma-4-E2B-it already has strong multiple-choice format priors, and the fine-tune is designed to preserve that ability while specializing on CTI-RCM rather than displacing it. The much smaller instruction-tuned-then-domain-SFT displacement effect is documented in the project's accompanying lessons.
Comparison to other cybersecurity-relevant models we evaluated
All numbers below were measured by us under the protocol above (with the noted shot count), not quoted from third-party papers. CyberPal-2.0-20B numbers reflect a single-trial run at our protocol — its own paper reports 0.874 / 0.757 using a different prompt template (Figure 11 of arXiv:2510.14113); the +2pp MCQ match validated our harness, while the RCM gap likely reflects the template difference.
| Model | Size | CTI-RCM | CTI-MCQ | Notes |
|---|---|---|---|---|
| Foundation-Sec-8B (base) | 8B | 0.745 | 0.655 | 5-shot pretrained reference |
| Foundation-Sec-Instruct-8B | 8B | 0.685 | 0.500 | 0-shot, our TARGET |
| CyberPal-2.0-20B (cyber-pal-security/CyberOss-2.0-20B) | 20B | 0.728* | 0.738* | independently verified at our protocol |
| Gemma4Defense-2B (this model) | 2.3B | 0.6754 ± 0.0035 | 0.6042 ± 0.0090 | 5-trial mean ± std |
| CyberSecQwen-4B (companion) | 4B | 0.6664 ± 0.0023 | 0.5868 ± 0.0029 | same recipe, different substrate |
| Gemma-4-E4B-it (raw) | 5.1B effective | 0.618 | 0.666 | 0-shot |
| Gemma-4-E2B-it (raw) | 2.3B | 0.580 | 0.578 | 0-shot, our base |
| Gemma-4-E4B-base (raw) | 5.1B effective | 0.588 | 0.666 | 5-shot |
| Gemma-4-E2B-base (raw) | 2.3B | 0.490 | 0.570 | 5-shot |
* Single-trial values from our independent reproduction.
Key highlights
- Beats Foundation-Sec-Instruct-8B on CTI-MCQ by +10.5 points at approximately one-quarter the parameter count.
- Stays within ~1 point of Foundation-Sec-Instruct-8B on CTI-RCM under the same evaluation protocol.
- Cross-substrate companion (CyberSecQwen-4B) reproduces the CTI-RCM result within 0.9 points using the same recipe on a different model family.
- Independent reproduction of CyberPal-2.0-20B at the Foundation-Sec protocol confirms its CTI-MCQ accuracy within 2 points of its paper claim.
Limitations
Domain-specific knowledge limitations. The model is trained on cybersecurity domain text and is not a general assistant. Tasks outside this domain will produce lower-quality output than purpose-built general models.
Time-anchored training data. The CTI-RCM training cohort is drawn from 2021 records. Vulnerability classes that emerged or rose in prevalence after 2021 (e.g., AI/ML-specific weaknesses, recent supply-chain CWEs) are under-represented in training and will be classified less accurately.
English-only. All training and evaluation data are in English; multilingual cyber tasks will degrade.
CTI-RCM gap. Foundation-Sec-Instruct-8B remains slightly stronger on CTI-RCM under this protocol (-1.0 point gap, within multi-trial measurement noise but still real). Production deployments where CWE classification is the primary metric should benchmark both models on their specific input distribution.
No safety RLHF. The model is supervised-fine-tuned only; the training data emphasizes defensive-analyst framing but no formal reinforcement-learning safety alignment was applied.
Multimodal architecture inherited. Gemma-4 ships as a multimodal base with vision and audio towers. This release contains only the text-language-model weights extracted post-merge; downstream tooling that expects the multimodal config should consume the published
Gemma4ForCausalLMconfig (already declared in the repo).
Recommendations
- Always have qualified security professionals review model outputs before implementation for any operational use case (patch prioritization, ticket routing, blocklisting).
- Use this model as an assistive tool rather than a replacement for expert human judgment, especially for novel vulnerability classes outside the 2021 training cohort.
- Validate on your own input distribution before deployment. Public CTI-Bench performance does not perfectly transfer to internal advisory feeds, vendor-proprietary CWE taxonomies, or non-English content.
- Monitor for drift. As new CVE / CWE patterns emerge, periodically re-evaluate; consider supplementing with retrieval over a current vulnerability knowledge base for time-sensitive queries.
- Apply standard prompt-injection mitigations when wrapping the model in agentic workflows that accept external content (advisory feeds, scraped pages); domain-SFT does not confer prompt-injection resistance.
Companion Model
CyberSecQwen-4B is a sister release fine-tuned with the same training corpus and hyperparameters, on the Qwen3-4B-Instruct-2507 base. The two models converge to within 0.9 points on CTI-RCM (0.6754 Gemma vs 0.6664 Qwen, 5-trial mean) — the same recipe produces equivalent task performance across two distinct model families. The Qwen variant is licensed Apache 2.0 and is available for use cases where the Gemma terms are not a fit.
Citation
If you use this model, please cite:
@misc{gemma4defense2026,
title = {Gemma4Defense-2B: A Compact CTI Specialist Fine-Tuned from Gemma-4-E2B-it},
author = {Mulia, Samuel},
year = {2026},
publisher = {Hugging Face},
url = {https://huggingface.co/athena129/Gemma4Defense-2B}
}
The evaluation protocol is from:
@article{foundation-sec-8b,
title = {Foundation-Sec-8B: A Cybersecurity-Specialized Language Model},
author = {Cisco Foundation AI},
journal = {arXiv preprint arXiv:2504.21039},
year = {2025},
url = {https://arxiv.org/abs/2504.21039}
}
The benchmark is from:
@misc{cti-bench,
title = {CTI-Bench: A Benchmark Suite for Cybersecurity LLMs},
author = {Alam, Md Tanvirul and Bhusal, Dipkamal and Park, Youngja and Rastogi, Nidhi},
year = {2024},
url = {https://github.com/xashru/cti-bench}
}
- Downloads last month
- 45
Model tree for athena129/Gemma4Defense-2B
Papers for athena129/Gemma4Defense-2B
Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report
Article mentioning athena129/Gemma4Defense-2B
Evaluation results
- strict_acc (5-trial mean) on CTI-Benchself-reported0.675
- strict_acc (5-trial mean) on CTI-Benchself-reported0.604