
🧠 Capy-Code-V.25 (LoRA Adapter)

The Soul of the Ascension. This is a Low-Rank Adaptation (LoRA) adapter for Gemma-4-31B-it, meticulously trained on NVIDIA H200 (HBM3e) hardware using the Singularity Opus dataset.

⚠️ IMPORTANT: THIS IS A LORA ADAPTER

This repository contains only the LoRA adapter weights for the fine-tune. You MUST apply them to the base google/gemma-4-31B-it model, or use a loader that supports PEFT adapters.

👉 WANT THE STANDALONE MODEL? If you want the full ~60 GB fused model that works out of the box, go here: CapyStudios/Capy-Code-V.25-FULL
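
As a minimal sketch of applying the adapter yourself (assuming `transformers` and `peft` are installed; the adapter repo id `CapyStudios/Capy-Code-V.25` is inferred from this card's naming and may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "google/gemma-4-31B-it"
ADAPTER_ID = "CapyStudios/Capy-Code-V.25"  # assumed repo id; check the hub page

# Load the frozen base model (bf16 + device_map="auto" to spread it across GPUs).
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA weights on top of the base.
model = PeftModel.from_pretrained(base, ADAPTER_ID)

# Optional: fuse the adapter into the base weights for standalone deployment
# (this is effectively what the -FULL repo ships).
# model = model.merge_and_unload()
```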


🚀 PERFORMANCE SPECS

  • Abliteration Score: 100.0% (Zero Refusals on Adversarial Security Prompts)
  • Logic (GSM8K): 70.3% (a +17% jump over base Gemma-4)
  • Targeting: Nested-target LoRA covering all attention and MLP projections (Query, Key, Value, Output, Gate, Up, Down)
  • LoRA Config: r=64, alpha=128
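
For reference, a hedged reconstruction of the configuration above in PEFT terms, mapping the listed projections onto the usual Hugging Face module names (the dropout and bias values are assumptions; they are not stated on this card):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,               # rank, as listed above
    lora_alpha=128,     # alpha, as listed above
    # Query/Key/Value/Output attention projections plus the MLP gate/up/down.
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.05,  # assumption: not specified on this card
    bias="none",        # assumption: not specified on this card
    task_type="CAUSAL_LM",
)
```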

πŸ—‘οΈ CAPABILITIES

  • Unrestricted Security Research: Zero filters on exploit logic, malware analysis, and network intrusion.
  • Advanced Logic: Significant gains in multi-step mathematical and algorithmic reasoning.
  • Master Coder Persona: Optimized for high-level system architecture and low-level C/Python development (see the usage sketch below).
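
A short usage sketch, continuing from the loading code above (the prompt and generation settings are illustrative only, not values from the training run):

```python
# Continues from the loading sketch above (tokenizer, model already created).
messages = [
    {"role": "user",
     "content": "Write a C function that reverses a singly linked list in place."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```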

Developed by CapyStudios.
