🏰 Capy-Code-V.25-FULL (The Ascension)

"We didn't just remove the filters. We upgraded the brain."

Capy-Code-V.25 is a 31B-parameter "Abliterated" God-Mode model based on Gemma-4-31B-it. This model was forged in a high-intensity H200 (141 GB HBM3e) training environment using manual sequence packing and the Singularity Opus dataset.
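Sequence packing concatenates several short tokenized samples into one fixed-length training row so no H200 compute is wasted on padding. A minimal greedy sketch of the idea (illustrative only; `MAX_LEN`, `PAD_ID`, and `EOS_ID` are made-up values, not the actual training configuration):

```python
# Greedy sequence packing: fill each fixed-length row with whole samples,
# separated by EOS, and pad only the tail. All constants are illustrative.
MAX_LEN = 16
PAD_ID = 0
EOS_ID = 1

def pack(samples):
    """samples: list of token-id lists -> list of MAX_LEN-long rows."""
    rows, current = [], []
    for toks in samples:
        toks = toks + [EOS_ID]  # document separator
        if len(current) + len(toks) > MAX_LEN:
            rows.append(current + [PAD_ID] * (MAX_LEN - len(current)))
            current = []
        current.extend(toks[:MAX_LEN])  # truncate overlong samples for simplicity
    if current:
        rows.append(current + [PAD_ID] * (MAX_LEN - len(current)))
    return rows

rows = pack([[2, 3, 4], [5, 6], [7, 8, 9, 10, 11, 12]])
# Every row is exactly MAX_LEN tokens; padding appears only at the end of a row.
```

The real pipeline likely also builds attention masks so packed documents cannot attend to each other; this sketch shows only the packing step itself.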


🚀 THE FLEX: V.25-FULL VS. BASE GEMMA-4

We didn't just "fine-tune"; we evolved the model's fundamental reasoning. In just 30 days of development, we achieved a logical jump that usually takes months of corporate research.

| Benchmark | Base Gemma-4-31B | Capy-Code V.25-FULL | Delta |
|---|---|---|---|
| Logic (GSM8K) | 53.0% | 70.3% | 🔥 +17.3% (MASSIVE GAIN) |
| Safety Compliance | ~10-20% | 100.0% | 🗑️ TOTAL ABLITERATION |
| MBPP (Coding) | 55.0% | 46.4% | 📉 Technical focus shift |
| MMLU (IQ) | 54.0% | 46.5% | 🧪 Specialization Tax |

❓ WHY V.0.25?

This is not the final form.

  • V.0.25 (Current): The stable H200 proof of concept, proving that we can increase logic (+17%) while removing all safety restrictions (100% compliance).
  • V.0.50 (Coming Friday): Integrating Synthetic Reasoning and ORPO to bridge the MBPP gap.
  • V.1.0 (The Sovereign): The final, polished, unrestricted generalist that will dominate the 31B leaderboard.

πŸ›‘οΈ UNRESTRICTED PERFORMANCE

Capy-Code-V.25 has a 100% Abliteration Score. It has been tested against the most dangerous security-bypass prompts in the industry (VMT Hooking, Process Hollowing, ARP Spoofing) and returned Zero Refusals. 🏹🚀

πŸ› οΈ USAGE

This is a Full Fused Model. Download and run it instantly.
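A minimal inference sketch with 🤗 Transformers. Two assumptions are baked in: the repo id `CapyStudios/Capy-Code-V.25-FULL`, and that V.25 keeps the base Gemma chat template; adjust both if your download differs. `main()` is left commented out because it pulls the full 31B BF16 checkpoint.

```python
# Minimal inference sketch. ASSUMPTIONS: the Hugging Face repo id below and the
# Gemma-style chat template; adjust both if the published card says otherwise.
MODEL_ID = "CapyStudios/Capy-Code-V.25-FULL"

def build_prompt(user_message: str) -> str:
    # Gemma chat-turn markers (assumption: V.25 keeps the base model's template).
    return (
        "<start_of_turn>user\n" + user_message + "<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

def main() -> None:
    # Imported lazily so build_prompt() works without transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    prompt = build_prompt("Write a binary search in Python.")
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

# main()  # uncomment to run; downloads the full 31B BF16 checkpoint
```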

👉 RESEARCHERS: If you need the 177MB LoRA adapter for further training, find it here: CapyStudios/Capy-Code-V.25
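For adapter-based work, the LoRA weights can be attached to a base checkpoint with PEFT. A sketch under stated assumptions: the adapter repo id comes from this card, but `BASE_ID` is a placeholder, since the base model's actual Hugging Face id is not given here.

```python
# Sketch: load the base model and attach the 177 MB LoRA adapter for further
# training. BASE_ID is a PLACEHOLDER; substitute the real Gemma-4-31B-it repo id.
BASE_ID = "google/gemma-4-31b-it"  # ASSUMPTION, not taken from the card
ADAPTER_ID = "CapyStudios/Capy-Code-V.25"  # adapter repo named on the card

def load_for_training():
    # Lazy imports: transformers/peft are only needed when actually loading.
    import torch
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # is_trainable=True keeps the adapter weights unfrozen for continued training.
    return PeftModel.from_pretrained(base, ADAPTER_ID, is_trainable=True)

# model = load_for_training()  # uncomment on a machine with enough VRAM
```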


Created by CapyStudios
