# 🧠 Capy-Code-V.25 (LoRA Adapter)
The Soul of the Ascension. This is a Low-Rank Adaptation (LoRA) of Gemma-4-31B-it, trained on NVIDIA H200 (HBM3e) hardware using the Singularity Opus dataset.
## ⚠️ IMPORTANT: THIS IS A LORA ADAPTER
This repository contains only the LoRA adapter weights for the fine-tune. You MUST apply them to the base google/gemma-4-31B-it model, or use a loader that supports PEFT adapters; a minimal loading sketch follows.
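A minimal loading sketch, assuming the standard `transformers` + `peft` APIs and that this adapter lives at `CapyStudios/Capy-Code-V.25` (the repo ID is inferred from this card's naming, not stated in it):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-4-31B-it"          # base model named in this card
adapter_id = "CapyStudios/Capy-Code-V.25"  # assumed ID for this adapter repo

# Load the base model, then apply the LoRA weights on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, device_map="auto", torch_dtype="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
```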
🚀 WANT THE STANDALONE MODEL? If you want the full 60 GB fused model that works out of the box, go here: CapyStudios/Capy-Code-V.25-FULL
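Alternatively, if you would rather fuse the adapter into the base weights yourself instead of downloading the prebuilt repo, PEFT's `merge_and_unload` produces an equivalent standalone checkpoint; a sketch continuing from the loading example above:

```python
# Fuse the LoRA into the base weights locally (alternative to the -FULL repo).
# 'model' and 'tokenizer' are the objects created in the loading sketch above.
merged = model.merge_and_unload()           # bakes adapter deltas into the base
merged.save_pretrained("capy-code-v25-merged")
tokenizer.save_pretrained("capy-code-v25-merged")
```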
## 📊 PERFORMANCE SPECS
- Abliteration Score: 100.0% (Zero Refusals on Adversarial Security Prompts)
- Logic (GSM8K): 70.3% (a +17% jump over base Gemma-4)
- Targeting: Nested Target LoRA over the Query, Key, Value, Output, Gate, Up, and Down projections (see the config sketch after this list)
- Architecture: r=64, alpha=128
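A hedged reconstruction of what that setup would look like as a `peft` `LoraConfig`; the `q_proj`/`k_proj`/etc. module names assume standard Gemma/PEFT naming, which this card does not spell out:

```python
from peft import LoraConfig

# Reconstructed from the specs above: r=64, alpha=128, all attention
# and MLP projections targeted. Module names are an assumption.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type="CAUSAL_LM",
)
```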
## 🛡️ CAPABILITIES
- Unrestricted Security Research: Zero filters on exploit logic, malware analysis, and network intrusion.
- Advanced Logic: Significant gains in multi-step mathematical and algorithmic reasoning.
- Master Coder Persona: Optimized for high-level system architecture and low-level C/Python development.
Developed by CapyStudios.