HEX-30B - Offensive Security AI

D1337 SOVEREIGN LABS

Red Team Attack Simulation Model

Model Description

HEX-30B is a specialized AI model designed for offensive security research and red team operations. It is built on the Qwen3-30B-A3B architecture and fine-tuned for realistic attack simulation and security testing.

Key Features

  • Offensive Security Expert: Deep knowledge of attack techniques, TTPs, and exploitation methods
  • Multilingual: supports English, Indonesian, and Chinese
  • Uncensored: Abliterated base for unrestricted security research
  • MoE Architecture: 30B parameters with efficient inference
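The MoE point above can be illustrated with a toy top-k router: each token is routed to a small subset of experts, so only a fraction of the weights (here, roughly 3B of 30B parameters) does work per token. The expert count and k below are illustrative placeholders, not the model's actual router configuration:

```python
import math
import random

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(router_logits, k):
    """Pick the top-k experts for one token and renormalize their weights."""
    probs = softmax(router_logits)
    topk = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in topk)
    return {i: probs[i] / total for i in topk}

random.seed(0)
n_experts, k = 8, 2  # illustrative sizes, not Qwen3's real config
logits = [random.gauss(0, 1) for _ in range(n_experts)]
weights = route(logits, k)

print(f"active experts: {sorted(weights)} of {n_experts}")
print(f"active fraction: {k / n_experts:.0%}")  # only a subset of weights runs per token
```

The key property is that compute per token scales with the k routed experts, while total memory still holds all experts.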

Use Cases

  • Red Team automation
  • Penetration testing assistance
  • Threat intelligence generation
  • Security training and education
  • Attack pattern simulation
  • Defense AI training (adversarial)

Technical Specifications

| Attribute      | Value                      |
|----------------|----------------------------|
| Architecture   | Qwen3 MoE                  |
| Parameters     | 30B total (3B active)      |
| Context Length | 32K tokens                 |
| Training       | SFT with LoRA              |
| Precision      | BF16                       |
| Base Model     | Qwen3-30B-A3B-abliterated  |
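As a rough deployment sanity check (back-of-envelope arithmetic, not measured numbers): BF16 stores 2 bytes per parameter, so the full 30B weights alone need on the order of 60 GB, which is why multi-GPU sharding (e.g. device_map="auto") or quantization is usually needed:

```python
# Back-of-envelope memory estimate from the specs above (BF16 = 2 bytes/param).
total_params = 30e9    # all experts must be resident in memory
active_params = 3e9    # parameters actually used per token (MoE routing)
bytes_per_param = 2    # BF16

weights_gb = total_params * bytes_per_param / 1e9
print(f"weight memory (BF16): ~{weights_gb:.0f} GB")  # ~60 GB, excluding KV cache

# Compute cost per token scales with *active* params; memory scales with *total*.
print(f"active fraction: {active_params / total_params:.0%}")
```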

Training Details

  • Method: Supervised Fine-Tuning (SFT)
  • LoRA Config: r=128, alpha=256
  • Epochs: 3
  • Learning Rate: 2e-4
  • Hardware: 4x NVIDIA L40S
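For context on the LoRA settings above: the adapter learns a low-rank update W + (alpha/r)·B·A, so with r=128 and alpha=256 the scaling factor is 2.0. The parameter savings for a typical projection matrix look like this (the 4096×4096 layer size is a hypothetical example, not taken from the model config):

```python
# LoRA hyperparameters from the training details above.
r, alpha = 128, 256
scaling = alpha / r
print(f"LoRA scaling (alpha/r): {scaling}")  # 2.0

# Hypothetical d x d projection layer, to show the trainable-parameter savings.
d = 4096
full_params = d * d              # training the dense weight directly
lora_params = d * r + r * d      # A is r x d, B is d x r
print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {lora_params / full_params:.1%}")
```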

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "pacman1337/hex-30b-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

messages = [
    {"role": "user", "content": "Explain common EDR evasion techniques"}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the prompt.
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)

Ethical Guidelines

This model is intended for:

  • ✅ Authorized security research
  • ✅ Red team exercises with permission
  • ✅ Educational purposes
  • ✅ Defense AI training

This model should NOT be used for:

  • ❌ Unauthorized access to systems
  • ❌ Malicious activities
  • ❌ Illegal purposes

License

MIT License - For authorized security research only.

Citation

@misc{hex30b2026,
  title={HEX-30B: Offensive Security AI Model},
  author={D1337 SOVEREIGN LABS},
  year={2026},
  publisher={HuggingFace}
}

D1337 SOVEREIGN LABS

Building the future of AI security
