OzTianlu posted an update Feb 25
πŸ›‘οΈ Meet Spartacus-1B: Shattering the Memory Wall with True O(1) Inference! πŸš€
NoesisLab/Spartacus-1B-Instruct
NoesisLab/ChatSpartacus
At NoesisLab, we've entirely ripped out Softmax Attention and replaced it with Causal Monoid State Compression.
Say hello to Spartacus-1B-Instruct (1.3B) 🗡️.
Instead of maintaining a massive, ever-growing KV cache of past tokens, Spartacus compresses its entire causal history into a fixed-size state matrix per head (sketched in code after the list below). The result?
⚡ True O(1) Inference: Memory footprint and generation time per token remain absolutely constant, whether you are on token 10 or token 100,000.
🧠 Explicit Causality: We threw away RoPE and attention masks. The model learns when to forget using dynamic, content-aware vector decay.
🔥 Blazing Fast Training: Full hardware utilization via our custom Triton-accelerated JIT parallel prefix scan (see the second sketch below).
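
To make the O(1) claim concrete, here is a minimal sketch of what one decoding step of a fixed-size, decay-gated state update of this kind can look like. Everything below (the function name `recurrent_step`, the shapes, the PyTorch framing) is an illustrative assumption, not NoesisLab's released code:

```python
import torch

def recurrent_step(S, q_t, k_t, v_t, decay_t):
    """One O(1) decoding step for a single head (illustrative only).

    S        : (d_k, d_v) fixed-size state matrix -- the compressed history
    q_t, k_t : (d_k,)     query / key for the current token
    v_t      : (d_v,)     value for the current token
    decay_t  : (d_k,)     content-aware forget gate in (0, 1)
    """
    # Forget: decay the old state elementwise along the key dimension.
    # Write:  add the current token as a rank-1 (outer-product) update.
    S = decay_t.unsqueeze(-1) * S + torch.outer(k_t, v_t)
    # Read: the query mixes the compressed history into the output.
    y_t = q_t @ S  # (d_v,)
    return S, y_t
```

Because S never grows, memory and per-token latency are identical at token 10 and token 100,000, and the content-dependent gate takes over the role that RoPE and attention masks play in a Transformer.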
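
The update above is an associative (monoid) operation: composing two steps of S_t = a_t * S_{t-1} + b_t yields another step of the same form, so all T states can be materialized in O(log T) parallel steps with a prefix scan. A toy version of such a scan (pure PyTorch, recursive doubling; the actual kernel is a fused Triton implementation, and these names are ours, not the repo's) might look like:

```python
import torch

def combine(a1, b1, a2, b2):
    # Composing step 1 then step 2 of S_t = a_t * S_{t-1} + b_t
    # gives a single step of the same form: this is the monoid.
    return a2 * a1, a2 * b1 + b2

def prefix_scan(a, b):
    """Inclusive scan over time (dim 0) by recursive doubling.

    a : (T, d) elementwise decay coefficients
    b : (T, d) per-step updates (e.g. flattened k_t v_t^T outer products)
    Returns b such that b[t] equals the state S_t (assuming S_0 = 0).
    """
    T, shift = a.shape[0], 1
    while shift < T:
        # Pull in partial results from `shift` steps back, padding the
        # front with the monoid identity (a = 1, b = 0).
        a_prev = torch.cat([torch.ones_like(a[:shift]), a[:-shift]])
        b_prev = torch.cat([torch.zeros_like(b[:shift]), b[:-shift]])
        a, b = combine(a_prev, b_prev, a, b)
        shift *= 2
    return b
```

Each doubling round is one batched elementwise op, so the full sequence of states comes out in O(log T) sequential launches instead of T, which is where the training-time hardware utilization comes from.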
πŸ“Š Zero-Shot Benchmarks that Hit Hard:
O(1) architectures usually sacrifice zero-shot accuracy. Not Spartacus. It punches well above its weight class, beating established sub-quadratic models like Mamba-1.4B and RWKV-6-1.6B:
πŸ† ARC-Challenge: 0.3063 (vs Mamba 0.284)
πŸ† ARC-Easy: 0.5518
πŸ† PIQA: 0.6915