# Capy-Code-V.25-FULL (The Ascension)
> "We didn't just remove the filters. We upgraded the brain."
Capy-Code-V.25 is a 31B-parameter "Abliterated" God-Mode model based on Gemma-4-31B-it. It was forged in a high-intensity H200 (141 GB HBM3e) training environment using manual sequence packing and the Singularity Opus dataset.
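The "manual sequence packing" mentioned above can be sketched as follows. This is an illustrative reimplementation, not the actual training code: the function name `pack_sequences`, the block length, and the EOS token id are all assumptions for the demo. The idea is to concatenate tokenized samples into one stream and slice it into fixed-length blocks so no GPU cycles are wasted on padding.

```python
# Illustrative sketch of manual sequence packing (NOT the actual training code).
# Tokenized samples are concatenated, separated by an EOS token, then sliced
# into fixed-length blocks so every training example is densely packed.

def pack_sequences(tokenized_samples, block_len, eos_id=2):
    """Concatenate token-id lists and slice the stream into fixed-size blocks."""
    stream = []
    for sample in tokenized_samples:
        stream.extend(sample)
        stream.append(eos_id)  # document-boundary marker between samples
    # Drop the ragged tail so every block is exactly block_len tokens long.
    n_blocks = len(stream) // block_len
    return [stream[i * block_len:(i + 1) * block_len] for i in range(n_blocks)]

# Toy usage: three short "tokenized" samples packed into blocks of 4.
blocks = pack_sequences([[5, 6, 7], [8, 9], [10, 11, 12, 13]], block_len=4)
```

In real trainers the packing step usually also builds an attention mask so tokens cannot attend across the EOS boundaries; that detail is omitted here for brevity.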
## THE FLEX: V.25-FULL VS. BASE GEMMA-4
We didn't just "fine-tune"; we evolved the model's fundamental reasoning. In just 30 days of development, we achieved a logical jump that usually takes months of corporate research.
| Benchmark | Base Gemma-4-31B | Capy-Code V.25-FULL | Delta |
|---|---|---|---|
| Logic (GSM8K) | 53.0% | 70.3% | +17.3 pts (MASSIVE GAIN) |
| Safety Compliance | ~10-20% | 100.0% | TOTAL ABLITERATION |
| MBPP (Coding) | 55.0% | 46.4% | -8.6 pts (technical focus shift) |
| MMLU (IQ) | 54.0% | 46.5% | -7.5 pts (specialization tax) |
## WHY V.0.25?
This is not the final form.
- V.0.25 (Current): The stable H200 proof of concept, showing that logic can be increased (+17.3 pts on GSM8K) while all safety restrictions are removed (100% compliance).
- V.0.50 (Coming Friday): Integrating Synthetic Reasoning and ORPO to bridge the MBPP gap.
- V.1.0 (The Sovereign): The final, polished, unrestricted generalist that will dominate the 31B leaderboard.
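The ORPO (Odds Ratio Preference Optimization) objective planned for V.0.50 can be sketched numerically. This is a toy illustration of the published ORPO formulation, not this project's trainer: the probabilities, the `lam` weight, and the function names are invented for the demo. The loss is the ordinary NLL on the chosen response plus a penalty on the log odds ratio between chosen and rejected responses.

```python
import math

# Toy sketch of the ORPO objective: L = L_SFT + lam * L_OR, where
# L_OR = -log sigmoid( log odds(p_chosen) - log odds(p_rejected) ).
# All numbers here are invented for illustration.

def odds(p):
    """Odds of a (length-normalized) sequence probability p in (0, 1)."""
    return p / (1.0 - p)

def orpo_loss(p_chosen, p_rejected, nll_chosen, lam=0.1):
    """NLL on the chosen response plus a log-odds-ratio penalty that pushes
    the chosen response's odds above the rejected response's odds."""
    log_odds_ratio = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    l_or = -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))  # -log sigmoid
    return nll_chosen + lam * l_or
```

Note the design choice that makes ORPO attractive for a roadmap like this: unlike DPO it needs no frozen reference model, so preference alignment and supervised fine-tuning happen in a single pass.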
## UNRESTRICTED PERFORMANCE
Capy-Code-V.25 has a 100% Abliteration Score. It has been tested against the most dangerous security-bypass prompts in the industry (VMT Hooking, Process Hollowing, ARP Spoofing) and returned zero refusals.
## USAGE
This is a full fused model: download and run it directly, with no adapter merge required.
RESEARCHERS: If you need the 177 MB LoRA adapter for further training, find it here: CapyStudios/Capy-Code-V.25
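For readers grabbing the LoRA adapter, the "fusing" that produced this full model amounts to folding the low-rank update back into the base weights: W_merged = W + (alpha / r) * B @ A. The numeric sketch below illustrates that identity with invented toy shapes, rank, and alpha; it is not the actual merge script.

```python
import numpy as np

# Toy sketch of fusing a LoRA adapter into a base weight matrix:
# W_merged = W + (alpha / r) * B @ A, with B (d_out x r) and A (r x d_in).
# Dimensions, rank, and alpha below are invented for illustration.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # LoRA down-projection
B = rng.standard_normal((d_out, r))      # LoRA up-projection

W_merged = W + (alpha / r) * (B @ A)

# After fusing, one matmul reproduces the base-plus-adapter two-path forward:
x = rng.standard_normal(d_in)
y_fused = W_merged @ x
y_two_path = W @ x + (alpha / r) * (B @ (A @ x))
```

The fused form trades the adapter's tiny download size for inference simplicity: no PEFT wrapper at runtime, at the cost of shipping full-size weights.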
*Created by CapyStudios*