Iapetus-v2-Kernel is an 80-billion-parameter coding model from Kronos Labs for Assembly (ptxas, ARM, and x86), CUDA, C, and C++, fine-tuned on top of Qwen3-Coder-Next's hybrid attention layout. It was trained on closed-source datasets of synthetically generated and verified code. We hope it spurs interest in LLMs capable of performant low-level programming.

The model uses the following layout:

- Size: 80B parameters, A3B (3 billion active)
- Depth: 48 layers
- Hybrid layout: 12 * (3 * (Gated DeltaNet -> MoE) -> 1 * (Gated Attention -> MoE))
- MoE: 512 experts, 10 activated experts, 1 shared expert
- Context length: 262,144 tokens (native)
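As a sanity check on the figures above, the hybrid layout can be expanded programmatically. This is a minimal sketch derived only from the numbers in this card; the function and layer names are illustrative, not part of the official model code:

```python
def hybrid_layout(blocks: int = 12, deltanet_per_block: int = 3) -> list[str]:
    """Expand the pattern 12 * (3 * (Gated DeltaNet -> MoE) -> 1 * (Gated Attention -> MoE))
    into a flat per-layer list (illustrative names, not official module names)."""
    layers = []
    for _ in range(blocks):
        layers += ["gated_deltanet_moe"] * deltanet_per_block  # linear-attention layers
        layers.append("gated_attention_moe")                   # full-attention layer
    return layers

layout = hybrid_layout()
print(len(layout))                          # 48 layers total, matching the stated depth
print(layout.count("gated_attention_moe"))  # 12 full-attention layers
```

Each block of four layers contributes three Gated DeltaNet layers and one Gated Attention layer, so 12 blocks give the stated depth of 48.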

This repo contains the model quantized to NVFP4.
