Related paper: Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report (arXiv:2504.21039).
4-bit AWQ quantized version of CyberSecQwen-4B.
| Parameter | Value |
|---|---|
| Method | AWQ (group_size=128, zero_point=True) |
| Weight precision | 4-bit |
| Compute dtype | float16 |
| Calibration samples | 320 CTI-Bench prompts (256 RCM + 64 MCQ, chat-template formatted) |
| Quantization tool | autoawq |
| Calibration hardware | Modal A100 |
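The group-wise zero-point format in the table above can be sketched numerically. This is only the storage arithmetic (one scale and one zero-point per group of 128 weights), not AWQ's activation-aware scale search; the function names are illustrative, not from autoawq:

```python
import numpy as np

def quantize_group(w, n_bit=4):
    """Asymmetric (zero-point) quantization of one weight group to n_bit ints."""
    qmax = 2 ** n_bit - 1                      # 15 levels above zero for 4-bit
    scale = (w.max() - w.min()) / qmax         # one scale per group
    zero = np.round(-w.min() / scale)          # one integer zero-point per group
    q = np.clip(np.round(w / scale) + zero, 0, qmax)
    return q.astype(np.uint8), scale, zero

def dequantize_group(q, scale, zero):
    """Recover approximate fp32 weights from packed ints plus group metadata."""
    return (q.astype(np.float32) - zero) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
groups = w.reshape(-1, 128)                    # group_size=128, as in the config
out = np.concatenate([dequantize_group(*quantize_group(g)) for g in groups])
err = np.abs(out - w).max()                    # worst-case round-trip error
```

The round-trip error is bounded by roughly one scale step per group, which is why small group sizes (128 rather than per-channel) keep 4-bit accuracy close to fp16.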
Evaluated under the Foundation-Sec-8B protocol:
| Task | AWQ 4-bit | GGUF Q4_K_M | FP16 Reference |
|---|---|---|---|
| CTI-MCQ (2,500 items) | 0.5921 ± 0.0083 | 0.5368 ± 0.0048 | 0.5868 ± 0.0029 |
| CTI-RCM (1,000 items) | 0.5814 ± 0.0025 | 0.6254 ± 0.0063 | 0.6664 ± 0.0023 |
Key findings: per-seed results for the AWQ 4-bit build over five trials.

CTI-MCQ accuracy by seed:
| Trial | Seed | Accuracy |
|---|---|---|
| 1 | 42 | 0.6016 |
| 2 | 43 | 0.5984 |
| 3 | 44 | 0.5936 |
| 4 | 45 | 0.5780 |
| 5 | 46 | 0.5888 |
CTI-RCM accuracy by seed:
| Trial | Seed | Accuracy |
|---|---|---|
| 1 | 42 | 0.5790 |
| 2 | 43 | 0.5830 |
| 3 | 44 | 0.5790 |
| 4 | 45 | 0.5840 |
| 5 | 46 | 0.5820 |
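The headline CTI-MCQ figure above is the mean and population standard deviation over the five seeds, which can be reproduced directly:

```python
import statistics

# Per-seed accuracies from the trial tables (seeds 42-46)
mcq = [0.6016, 0.5984, 0.5936, 0.5780, 0.5888]
rcm = [0.5790, 0.5830, 0.5790, 0.5840, 0.5820]

mcq_mean, mcq_sd = statistics.fmean(mcq), statistics.pstdev(mcq)
rcm_mean, rcm_sd = statistics.fmean(rcm), statistics.pstdev(rcm)

print(f"CTI-MCQ: {mcq_mean:.4f} ± {mcq_sd:.4f}")  # → CTI-MCQ: 0.5921 ± 0.0083
print(f"CTI-RCM: {rcm_mean:.4f} ± {rcm_sd:.4f}")
```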
Variant comparison (accuracy, size, serving engine):
| Variant | CTI-MCQ | CTI-RCM | Size | Engine |
|---|---|---|---|---|
| AWQ 4-bit | 0.5921 | 0.5814 | 2.7 GB | vLLM |
| GGUF Q4_K_M | 0.5368 | 0.6254 | 2.5 GB | llama.cpp |
Choose AWQ for MCQ-style tasks and general chat (0.5921 vs 0.5368 on CTI-MCQ); choose GGUF Q4_K_M for vulnerability classification (0.6254 vs 0.5814 on CTI-RCM).
Serve with vLLM:

```shell
vllm serve ree2raz/CyberSecQwen-4B-AWQ --quantization awq_marlin --dtype float16
```
Model size on disk:
| Format | Size |
|---|---|
| Original FP16 | ~8 GB |
| AWQ 4-bit | ~2.7 GB |
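The ~2.7 GB figure is consistent with a back-of-envelope estimate. Both parameter counts below are assumptions for illustration (roughly 4B total parameters, with embedding/output tensors left in fp16), not values read from the checkpoint:

```python
# Back-of-envelope AWQ 4-bit size estimate; parameter counts are assumptions.
params = 4.0e9             # total parameters (assumed)
embed = 0.4e9              # embedding/head parameters kept in fp16 (assumed)
group = 128                # quantization group size, from the config

quant = params - embed
packed = quant * 4 / 8                 # 4-bit packed weights, in bytes
meta = (quant / group) * (2 + 2)       # fp16 scale + zero-point per group
fp16 = embed * 2                       # unquantized fp16 tensors

total_gb = (packed + meta + fp16) / 1e9  # ≈ 2.7 GB
```

Packed 4-bit weights dominate; the per-group metadata adds only about 3% overhead at group_size=128.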
```bibtex
@misc{cybersecqwen2026,
  title     = {CyberSecQwen-4B: A Compact CTI Specialist Fine-Tuned from Qwen3-4B-Instruct-2507 on AMD MI300X},
  author    = {Mulia, Samuel},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/athena129/CyberSecQwen-4B}
}
```
GitHub repository: Modal scripts for quantization and evaluation.

Base model: Qwen/Qwen3-4B-Instruct-2507