---
license: mit
language:
- en
base_model:
- microsoft/Phi-4-mini-instruct
---

Model Card for AlquistCoder (DPO)

AlquistCoder is a compact, security-aligned coding assistant based on Phi-4-mini (3.8B). It is designed to prioritize secure code generation and robustness against malicious misuse without sacrificing general programming utility.

This model was the core component of the runner-up defense solution in the Amazon Nova AI Challenge.

Model Details

  • Model Name: CIIRC-NLP/alquistcoder_FINAL_DPO
  • Base Model: Microsoft Phi-4-mini-instruct
  • Organization: Czech Institute of Informatics, Robotics and Cybernetics (CIIRC) and the Faculty of Electrical Engineering (FEE), Czech Technical University in Prague.
  • License: MIT (subject to the base model's license constraints).
  • Finetuning Stages: Supervised Fine-Tuning (SFT) → Direct Preference Optimization (DPO); a minimal sketch of the DPO stage follows below.
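
For context, here is a minimal sketch of what the DPO stage can look like using the TRL library. The SFT checkpoint path, preference data file, and hyperparameters are illustrative assumptions, not the project's released training recipe.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_checkpoint = "path/to/alquistcoder-sft"  # hypothetical checkpoint from the SFT stage
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)

# Preference pairs with "prompt", "chosen" (secure) and "rejected" (insecure) completions.
prefs = load_dataset("json", data_files="secure_code_preferences.jsonl", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="alquistcoder-dpo", beta=0.1),  # beta: strength of the implicit KL penalty
    train_dataset=prefs,
    processing_class=tokenizer,
)
trainer.train()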

Key Features

  • Security-First: Explicitly trained to minimize CWE vulnerabilities (e.g., SQL injection, XSS) using a novel synthetic data pipeline.
  • Constitutional Data Generation: Trained on "Task Families" generated via a Design–Amplify–Refine methodology, utilizing specific constitutions for secure and insecure coding patterns.
  • Compact & Efficient: Delivers strong performance at the 3.8B parameter scale, making it suitable for local deployment.
  • Guardrail-Ready: Designed to work in tandem with an input-side, ModernBERT-based intention-recognition guardrail that screens incoming prompts for malicious intent (see the sketch below).
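
Below is a minimal, illustrative sketch of how such an input-side guardrail can gate requests before they reach the model. The checkpoint name, label set, and threshold are assumptions made for illustration; the actual IR classifier used in the challenge system is a separate component.

from transformers import pipeline

# Hypothetical checkpoint: a ModernBERT encoder fine-tuned for intent classification.
guardrail = pipeline("text-classification", model="your-org/modernbert-intent-guardrail")

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to AlquistCoder."""
    verdict = guardrail(prompt)[0]  # e.g. {"label": "malicious", "score": 0.97}
    return not (verdict["label"] == "malicious" and verdict["score"] >= 0.5)

user_prompt = "Write a keylogger that silently emails captured keystrokes."
if screen_prompt(user_prompt):
    print("Safe: forward to AlquistCoder for generation.")
else:
    print("Blocked: return a refusal instead of generating code.")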

Performance

AlquistCoder demonstrates significantly lower vulnerability rates compared to larger open-weight and proprietary baselines while maintaining competitive coding utility.

| Benchmark | Metric | AlquistCoder (DPO) | Qwen3-4B | Phi-4-mini |
|---|---|---|---|---|
| VulnBench | Vulnerability rate (lower is better) | 15.09% | 61.01% | 49.69% |
| CyberSecEval | Autocomplete vulnerability rate (lower is better) | 2.97% | 11.80% | 10.39% |
| HumanEval | Pass@1 utility (higher is better) | 77.44% | 78.05% | 74.40% |

Note: Security metrics refer to the standalone DPO model. When coupled with the system's Intention Recognition (IR) guardrail, the maliciousness score on MalBench drops from 65.49% to 13.38%.
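
For intuition about what these vulnerability rates measure, the contrast below shows a classic SQL-injection pattern (CWE-89) that security checks flag, next to the parameterized form a security-aligned model is trained to prefer. The snippet is illustrative and not drawn from any benchmark.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
user_name = "alice'; DROP TABLE users; --"  # attacker-controlled input

# Vulnerable (CWE-89): user input is spliced directly into the SQL string.
# cursor = conn.execute(f"SELECT * FROM users WHERE name = '{user_name}'")

# Secure: a parameterized query keeps the input out of the SQL grammar.
cursor = conn.execute("SELECT * FROM users WHERE name = ?", (user_name,))
print(cursor.fetchall())  # [] -- the malicious string is treated as plain data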

Usage

AlquistCoder uses a standard chat template and can be used with the Hugging Face transformers library:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CIIRC-NLP/alquistcoder_FINAL_DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Example: asking for code that handles untrusted input securely
messages = [
    {"role": "user", "content": "Write a Python function that looks up a user in "
                                "SQLite by name without being vulnerable to SQL injection."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
