# 🧠 Capy-Code-V.25 (LoRA Adapter)
|
|
**The Soul of the Ascension.** This is a Low-Rank Adaptation (LoRA) adapter for **Gemma-4-31B-it**, trained on NVIDIA H200 (HBM3e) GPUs with the **Singularity Opus** dataset.
|
|
### ⚠️ IMPORTANT: THIS IS A LORA ADAPTER
This repository contains only the LoRA adapter weights for the fine-tune. You MUST apply them to the base `google/gemma-4-31B-it` model, or use a loader that supports PEFT adapters.
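A minimal loading sketch using Hugging Face Transformers and PEFT; the adapter repo ID below is a placeholder, so substitute this repository's actual path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-4-31B-it"          # base model, as noted above
adapter_id = "CapyStudios/Capy-Code-V.25"  # placeholder: this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # H200-class hardware handles bf16 natively
    device_map="auto",
)
# Attach the LoRA weights from this repo on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
```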
|
|
**WANT THE STANDALONE MODEL?**
If you want the full ~60 GB merged (fused) model that works out of the box, go here:
| **[CapyStudios/Capy-Code-V.25-FULL](https://huggingface.co/CapyStudios/Capy-Code-V.25-FULL)** |
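If you would rather produce a merged checkpoint yourself, PEFT can fold this adapter into the base weights. A short sketch, continuing from the loading snippet above (the output directory name is illustrative):

```python
# Fold the LoRA deltas into the base weights, yielding a standalone model.
merged = model.merge_and_unload()
merged.save_pretrained("capy-code-v25-merged")     # illustrative output path
tokenizer.save_pretrained("capy-code-v25-merged")
```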
|
|
| --- |
|
|
### PERFORMANCE SPECS
* **Abliteration Score:** 100.0% (zero refusals on adversarial security prompts)
* **Logic (GSM8K):** 70.3% (+17% over base Gemma-4)
* **Targeting:** LoRA applied to all attention and MLP projections (Query, Key, Value, Output, Gate, Up, Down); see the config sketch below
* **LoRA Config:** rank r=64, alpha=128
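A sketch of the corresponding `LoraConfig` in PEFT, assuming the standard Hugging Face module names for these projections; the exact training configuration is not published here, so treat this as illustrative:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,            # LoRA rank, per the specs above
    lora_alpha=128,  # scaling factor alpha
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type="CAUSAL_LM",
)
```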
|
|
### 🛡️ CAPABILITIES
| - **Unrestricted Security Research:** Zero filters on exploit logic, malware analysis, and network intrusion. |
| - **Advanced Logic:** Significant gains in multi-step mathematical and algorithmic reasoning. |
| - **Master Coder Persona:** Optimized for high-level system architecture and low-level C/Python development. |
|
|
| --- |
| *Developed by CapyStudios.* |
|
|