---
language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - cybersecurity
  - APT
  - threat-intelligence
  - contrastive-learning
  - embeddings
  - attribution
  - MITRE-ATTACK
  - CTI
  - ModernBERT
datasets:
  - mitre-attack
base_model: cisco-ai/SecureBERT2.0-base
pipeline_tag: feature-extraction
model-index:
  - name: FALCON
    results:
      - task:
          type: text-classification
          name: APT Group Attribution
        metrics:
          - type: accuracy
            value: 0
            name: Accuracy (5-fold CV)
          - type: f1
            value: 0
            name: F1 Weighted (5-fold CV)
          - type: f1
            value: 0
            name: F1 Macro (5-fold CV)
---

# FALCON — Finetuned Actor Linking via CONtrastive Learning

A domain-adapted embedding model for automated APT group attribution from cyber threat intelligence text.

- **Developed by:** AIT — Austrian Institute of Technology, Cybersecurity Group
- **Model type:** Transformer encoder (ModernBERT) with contrastive fine-tuning
- **Language:** English
- **License:** Apache 2.0
- **Base model:** cisco-ai/SecureBERT2.0-base
- **Paper:** Coming soon

## Model Description

FALCON (Finetuned Actor Linking via CONtrastive learning) is a cybersecurity embedding model that maps textual descriptions of attack behaviors to a vector space where descriptions belonging to the same APT group are close together and descriptions from different groups are far apart.

Given a sentence like "The group has used spearphishing emails with malicious macro-enabled attachments to deliver initial payloads", FALCON produces a 768-dimensional embedding that can be used to classify which APT group performed that behavior.

## Training Pipeline

```text
cisco-ai/SecureBERT2.0-base (ModernBERT, 150M params)
        ↓
Tokenizer Extension — Added APT group names + aliases as single tokens
        ↓
MLM Fine-Tuning — Taught the model meaningful representations for the new tokens
        ↓
Supervised Contrastive Fine-Tuning (SupCon) — Shaped the embedding space
        so same-group descriptions cluster together
        ↓
FALCON
```

## What Makes FALCON Different

- **Domain-adapted base:** Built on SecureBERT 2.0, which already understands cybersecurity terminology, rather than on a generic language model.
- **Contrastive objective:** Unlike classification-only models, FALCON optimizes the embedding geometry directly using Supervised Contrastive Loss (Khosla et al., 2020), producing embeddings suitable for retrieval, clustering, and few-shot classification.
- **Name-agnostic:** Group names are replaced with `[MASK]` during contrastive training, forcing the model to learn behavioral patterns rather than memorize name co-occurrences.
- **Alias-aware tokenizer:** APT group names and their vendor-specific aliases (e.g., APT29, Cozy Bear, Midnight Blizzard, NOBELIUM) are single tokens, preventing subword fragmentation.

## Intended Uses

### Direct Use

- **APT group attribution:** Given a behavioral description from a CTI report, classify which threat actor is most likely responsible.
- **Semantic search over CTI:** Retrieve the most relevant threat actor profiles given a description of observed attack behavior.
- **Threat actor clustering:** Group unlabeled incident descriptions by behavioral similarity.
- **Few-shot attribution:** Attribute newly emerging APT groups with very few reference samples.
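Few-shot attribution over embeddings can be sketched as nearest-centroid classification. This is an illustrative example only, not part of the released model: the random vectors below stand in for embeddings that would come from the FALCON encoder.

```python
import numpy as np

def nearest_centroid_attribution(ref_embs, ref_labels, query_emb):
    """Attribute a query embedding to the group whose reference
    centroid is closest in cosine similarity."""
    ref_embs = np.asarray(ref_embs, dtype=float)
    query = np.asarray(query_emb, dtype=float)
    best_group, best_sim = None, -np.inf
    for g in sorted(set(ref_labels)):
        idx = [i for i, y in enumerate(ref_labels) if y == g]
        centroid = ref_embs[idx].mean(axis=0)
        sim = centroid @ query / (np.linalg.norm(centroid) * np.linalg.norm(query))
        if sim > best_sim:
            best_group, best_sim = g, sim
    return best_group, best_sim

# Placeholder 768-dim vectors; in practice these come from the encoder.
rng = np.random.default_rng(0)
refs = np.vstack([rng.normal(0, 1, 768) + 5, rng.normal(0, 1, 768) + 5,
                  rng.normal(0, 1, 768) - 5, rng.normal(0, 1, 768) - 5])
labels = ["APT29", "APT29", "FIN7", "FIN7"]
pred, sim = nearest_centroid_attribution(refs, labels, rng.normal(0, 1, 768) + 5)
```

With only a handful of reference samples per group, a centroid probe like this avoids fitting a full classifier.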

### Downstream Use

- Fine-tuning for organization-specific threat actor taxonomies.
- Integration into SIEM/SOAR pipelines for automated triage.
- Enrichment of threat intelligence platforms with behavioral similarity scoring.

### Out-of-Scope Use

- Attribution based on IOCs (hashes, IPs, domains) — FALCON operates on natural language text only.
- Real-time network traffic classification.
- Definitive legal or geopolitical attribution — FALCON is a decision-support tool, not an oracle.

## How to Use

### Feature Extraction (Embeddings)

```python
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("ait-cybersec/FALCON")
tokenizer = AutoTokenizer.from_pretrained("ait-cybersec/FALCON")
model.eval()

text = "The group used PowerShell scripts to download and execute additional payloads."

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    outputs = model(**inputs)

# Mean pooling over non-padding tokens (recommended)
attention_mask = inputs["attention_mask"].unsqueeze(-1)  # [1, seq_len, 1]
token_embs = outputs.last_hidden_state                   # [1, seq_len, 768]
embedding = (token_embs * attention_mask).sum(dim=1) / attention_mask.sum(dim=1).clamp(min=1)

print(f"Embedding shape: {embedding.shape}")  # [1, 768]
```

### APT Group Classification (with sklearn probe)

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

def get_embedding(text):
    """Mean-pooled FALCON embedding as a 768-dim numpy vector.
    Assumes `model` and `tokenizer` are loaded as shown above."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        token_embs = model(**inputs).last_hidden_state
    mask = inputs["attention_mask"].unsqueeze(-1)
    return ((token_embs * mask).sum(dim=1) / mask.sum(dim=1)).squeeze(0).numpy()

# Encode your labeled corpus (train_texts/train_labels/test_texts are yours)
train_embeddings = np.array([get_embedding(text) for text in train_texts])
test_embeddings = np.array([get_embedding(text) for text in test_texts])

clf = LogisticRegression(max_iter=2000)
clf.fit(train_embeddings, train_labels)

predictions = clf.predict(test_embeddings)
```

### Semantic Similarity Between Descriptions

```python
from sklearn.metrics.pairwise import cosine_similarity

# reshape(1, -1) gives the 2-D shape cosine_similarity expects
emb1 = get_embedding("The actor used spearphishing with malicious attachments.").reshape(1, -1)
emb2 = get_embedding("The group sent phishing emails containing weaponized documents.").reshape(1, -1)
emb3 = get_embedding("The adversary exploited a SQL injection vulnerability.").reshape(1, -1)

print(f"Phishing vs Phishing: {cosine_similarity(emb1, emb2)[0][0]:.4f}")  # High
print(f"Phishing vs SQLi:     {cosine_similarity(emb1, emb3)[0][0]:.4f}")  # Lower
```

## Training Details

### Training Data

- **Source:** MITRE ATT&CK Enterprise Groups — technique usage descriptions for all tracked APT groups.
- **Preprocessing:**
  - Canonicalized group aliases using GroupID (e.g., APT29 = Cozy Bear = Midnight Blizzard → single label).
  - Filtered to groups with ≥30 unique technique usage descriptions.
  - Masked all group names and aliases in training text with `[MASK]` to prevent name leakage.
- **Final dataset:** ~144 unique APT groups, variable samples per group (30–200+).
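The canonicalization and masking steps can be sketched roughly as follows. The alias map and regex handling here are simplified illustrations, not the actual preprocessing code; the real alias map is derived from MITRE ATT&CK GroupIDs.

```python
import re

# Toy alias map keyed by ATT&CK GroupID (illustrative subset)
ALIASES = {
    "G0016": ["APT29", "Cozy Bear", "Midnight Blizzard", "NOBELIUM"],
}

def canonicalize_and_mask(text):
    """Replace every known alias with [MASK] and record the canonical
    GroupIDs found, so all aliases map to a single label."""
    found = set()
    for group_id, names in ALIASES.items():
        for name in sorted(names, key=len, reverse=True):  # longest match first
            pattern = re.compile(re.escape(name), re.IGNORECASE)
            if pattern.search(text):
                found.add(group_id)
                text = pattern.sub("[MASK]", text)
    return text, found

masked, ids = canonicalize_and_mask("Cozy Bear (aka APT29) used spearphishing.")
```

Matching longer aliases first avoids partially masking multi-word names.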

### Training Procedure

#### Stage 1: Tokenizer Extension

Extended the SecureBERT 2.0 tokenizer with APT group names and vendor-specific aliases as single tokens. This prevents names like "Kimsuky" from being split into subword fragments (`['Kim', '##su', '##ky']` → `['Kimsuky']`).
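With the Hugging Face API, this kind of extension typically looks like the sketch below. `NEW_GROUP_TOKENS` is an illustrative list, and the function assumes any tokenizer/model pair exposing the standard `add_tokens` / `resize_token_embeddings` methods; it is not FALCON's actual build script.

```python
# Illustrative token list; the real list covers all tracked groups and aliases.
NEW_GROUP_TOKENS = ["APT29", "Cozy Bear", "Midnight Blizzard", "Kimsuky"]

def extend_tokenizer(tokenizer, model, new_tokens):
    """Add group names as whole tokens and grow the embedding matrix
    so the new token ids get (randomly initialized) vectors to train."""
    num_added = tokenizer.add_tokens(new_tokens)
    if num_added > 0:
        model.resize_token_embeddings(len(tokenizer))
    return num_added
```

After this step, the MLM stage (Stage 2) gives the newly added rows of the embedding matrix meaningful values.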

#### Stage 2: Masked Language Modeling (MLM)

| Hyperparameter | Value |
|---|---|
| Base model | cisco-ai/SecureBERT2.0-base |
| Objective | MLM (15% masking probability) |
| Learning rate | 2e-5 |
| Batch size | 16 |
| Epochs | 10 |
| Weight decay | 0.01 |
| Warmup ratio | 0.1 |
| Max sequence length | 128 |
| Text used | Unmasked (model sees group names to learn their embeddings) |
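The 15% masking objective follows the standard BERT-style recipe (of the targeted positions: 80% become `[MASK]`, 10% a random token, 10% unchanged). A minimal sketch of that corruption step, independent of any training framework:

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """BERT-style MLM corruption: ~15% of positions become prediction
    targets; of those, 80% -> [MASK], 10% -> random token, 10% kept."""
    labels = input_ids.clone()
    target = torch.rand(input_ids.shape) < mlm_prob
    labels[~target] = -100  # loss is computed only on targeted positions

    corrupted = input_ids.clone()
    mask_pos = target & (torch.rand(input_ids.shape) < 0.8)
    corrupted[mask_pos] = mask_token_id
    rand_pos = target & ~mask_pos & (torch.rand(input_ids.shape) < 0.5)
    corrupted[rand_pos] = torch.randint(vocab_size, (int(rand_pos.sum()),))
    return corrupted, labels

torch.manual_seed(0)
ids = torch.randint(5, 1000, (4, 128))
corrupted, labels = mask_tokens(ids, mask_token_id=4, vocab_size=1000)
```

In practice this is what `transformers`' MLM data collator does internally; the sketch just makes the probabilities explicit.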

#### Stage 3: Supervised Contrastive Learning (SupCon)

| Hyperparameter | Value |
|---|---|
| Base checkpoint | Stage 2 MLM output |
| Loss function | Supervised Contrastive Loss (Khosla et al., 2020) |
| Temperature | 0.07 |
| Projection head | 768 → 768 (ReLU) → 256 |
| Unfrozen layers | Last 4 transformer layers + projection head |
| Learning rate | 2e-5 |
| Batch size | 64 |
| Epochs | 15 |
| Scheduler | Cosine annealing |
| Gradient clipping | max_norm=1.0 |
| Text used | Masked (group names replaced with `[MASK]`) |
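For reference, the SupCon objective of Khosla et al. (2020) can be written compactly in PyTorch. This is a generic sketch of the published loss, not FALCON's training code:

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.07):
    """Supervised Contrastive Loss: for each anchor, pull embeddings
    with the same label together and push all others apart."""
    z = F.normalize(features, dim=1)                          # [N, D]
    sim = z @ z.T / temperature                               # [N, N]
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability

    n = labels.shape[0]
    self_mask = ~torch.eye(n, dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & self_mask

    exp_sim = torch.exp(sim) * self_mask                      # exclude self-similarity
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    # average log-probability over positives; anchors without positives contribute 0
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss.mean()
```

During training the loss is applied to the projection-head outputs; at inference time the head is discarded and the 768-dim encoder embeddings are used.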

## Evaluation

Evaluation uses a linear probing protocol: freeze the model, extract embeddings, train a LogisticRegression classifier on top, and report metrics using 5-fold stratified cross-validation with oversampling applied only to the training fold (no data leakage).
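That protocol can be sketched with scikit-learn. The oversampling here is a simple with-replacement resample to the majority-class size, shown only to illustrate that it touches the training fold alone; the exact evaluation script may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import StratifiedKFold

def oversample(X, y, rng):
    """Resample every class up to the majority-class size (train fold only)."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_max, replace=True)
        for c in classes
    ])
    return X[idx], y[idx]

def linear_probe_cv(X, y, n_splits=5, seed=42):
    rng = np.random.default_rng(seed)
    accs, f1w = [], []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=seed).split(X, y):
        X_tr, y_tr = oversample(X[tr], y[tr], rng)  # leakage-free: train fold only
        clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
        pred = clf.predict(X[te])                   # test fold kept original
        accs.append(accuracy_score(y[te], pred))
        f1w.append(f1_score(y[te], pred, average="weighted"))
    return float(np.mean(accs)), float(np.mean(f1w))
```

Because the probe is linear and the encoder is frozen, these metrics measure the quality of the embedding geometry itself.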

### Results

| Model | Accuracy | F1 Weighted | F1 Macro |
|---|---|---|---|
| SecureBERT 2.0 (frozen baseline, CLS) | TBD | TBD | TBD |
| SecureBERT 2.0 (frozen baseline, Mean) | TBD | TBD | TBD |
| FALCON-base (MLM only) | TBD | TBD | TBD |
| FALCON (MLM + Contrastive) | TBD | TBD | TBD |

*Results to be filled in after training completes.*

### Evaluation Protocol Details

- **No data leakage:** Oversampling is applied inside each training fold only; test folds contain only original, unique samples.
- **Name masking:** All group names and aliases are replaced with `[MASK]` in evaluation text, ensuring the model is evaluated on behavioral understanding, not name recognition.
- **Canonicalization:** All vendor-specific aliases are resolved to a single canonical label per GroupID, preventing inflated metrics from alias splits.

## Comparison with Related Models

| Model | Domain | Architecture | Training Objective | Cybersecurity-Specific |
|---|---|---|---|---|
| BERT base | General | BERT | MLM + NSP | ❌ |
| SecBERT | Cybersecurity | BERT | MLM | ✅ |
| SecureBERT | Cybersecurity | RoBERTa | MLM (custom tokenizer) | ✅ |
| ATTACK-BERT | Cybersecurity | Sentence-BERT | Sentence similarity | ✅ |
| SecureBERT 2.0 | Cybersecurity | ModernBERT | MLM (text + code) | ✅ |
| FALCON | APT Attribution | ModernBERT | MLM + SupCon | ✅ (task-specific) |

## Limitations and Bias

- **Training data bias:** MITRE ATT&CK over-represents well-documented state-sponsored groups (APT28, APT29, Lazarus). Less-known actors may have weaker representations.
- **Behavioral overlap:** Many APT groups share identical TTPs (e.g., spearphishing, PowerShell usage). The model cannot reliably distinguish groups that employ the same techniques in the same way.
- **English only:** The model is trained on English-language CTI text and will not perform well on non-English threat reports.
- **Static knowledge:** The model reflects the MITRE ATT&CK knowledge base at training time and does not update as new groups or techniques emerge.
- **Not a replacement for analyst judgment:** FALCON is a decision-support tool. Attribution conclusions should always be validated by human analysts.

## Ethical Considerations

Automated threat attribution is a sensitive capability with potential for misuse. Incorrect attribution could lead to misguided defensive actions or geopolitical consequences. Users should:

- Always treat model outputs as hypotheses, not conclusions.
- Combine FALCON outputs with additional intelligence sources (IOCs, infrastructure analysis, geopolitical context).
- Be aware that threat actors deliberately employ false-flag operations to mislead attribution.

## Citation

```bibtex
@misc{falcon2025,
  title={FALCON: Finetuned Actor Linking via Contrastive Learning for APT Group Attribution},
  author={AIT Austrian Institute of Technology, Cybersecurity Group},
  year={2025},
  url={https://huggingface.co/ait-cybersec/FALCON}
}
```

## Related Work

- Aghaei, E. et al. "SecureBERT 2.0: Advanced Language Model for Cybersecurity Intelligence." arXiv:2510.00240 (2025).
- Khosla, P. et al. "Supervised Contrastive Learning." NeurIPS (2020).
- Irfan, S. et al. "A Comprehensive Survey of APT Attribution." arXiv:2409.11415 (2024).
- Abdeen, B. et al. "SMET: Semantic Mapping of CVE to ATT&CK." (2023).

## Model Card Authors

AIT — Austrian Institute of Technology, Cybersecurity Group

## Model Card Contact

For inquiries, please open an issue on this repository or contact the AIT Cybersecurity Group.