---
license: other
language:
- en
tags:
- text-generation
- mergekit
- coding
- agentic
- reasoning
- qwen2.5
- llama-3.1
- transformers
- merge
- sovereign
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
- meta-llama/Meta-Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---
<div align="center" style="display: flex; justify-content: center; align-items: center; gap: 40px; flex-wrap: wrap; margin: 2em 0;">
<img src="https://huggingface.co/Vaultkeeper/Sovereign-Code/resolve/main/Sovereign-Code-logo.png" alt="Sovereign-Code" width="400" style="max-height: 400px;" />
<img src="https://huggingface.co/Vaultkeeper/ouroboros-next/resolve/main/vaultai-logo.png" alt="VAULTAI" width="300" style="max-height: 300px;" />
</div>

<br>

### ✅ **Execution, Absolute.**
While most models are built to converse, **Sovereign-Code** is built to execute. It is a specialized, cold-logic engine designed for a single purpose: high-fidelity technical output.

Engineered by VaultAI, Sovereign-Code is a custom **32-layer hybrid** model. It uses an aggressive architectural passthrough to bridge the deep structural coding intelligence of **Qwen 2.5 Coder** with the rigid instruction-following cortex of **Llama 3.1**. It does not offer opinions; it delivers functional syntax.
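The exact merge recipe is not published. A passthrough stack of this shape would be written in `mergekit` roughly as sketched below; the split point, layer ranges, and `dtype` are assumptions, and stacking two different architectures additionally requires reconciling their tokenizers and hidden dimensions, which a plain passthrough config does not do on its own:

```yaml
# Hypothetical sketch only -- this card does not publish the real recipe.
slices:
  - sources:
      - model: Qwen/Qwen2.5-Coder-7B-Instruct         # coding-heavy base (lower layers)
        layer_range: [0, 16]                          # assumed split point
  - sources:
      - model: meta-llama/Meta-Llama-3.1-8B-Instruct  # instruction-following cortex (upper layers)
        layer_range: [16, 32]
merge_method: passthrough
dtype: bfloat16
```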

## 🧠 Architecture & Identity: The Logic Terminal

Sovereign-Code is a "Frankenmerge" that ignores standard architectural safety to achieve peak performance. By stacking disparate layers, VaultAI has created a model that processes raw intent through a coding-heavy base before filtering it through an elite instruction-following top layer.

**Key Capabilities:**
* **Deterministic Syntax:** Optimized for zero-fluff code generation across Python, C++, Rust, and Mojo.
* **Tattooed Monologue:** Hardcoded via a custom Jinja2 chat template to run a mandatory three-phase internal processing loop inside `<think>` tags before every output.
* **Hardware Optimized:** Designed for dual-GPU configurations (Polaris/gfx803) using `llama.cpp` with the Vulkan backend.
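Because every completion arrives wrapped in a `<think>` monologue, downstream tooling usually wants to separate the reasoning from the deliverable. A minimal sketch (only the tag name comes from this card; the helper itself is illustrative):

```python
import re

def split_think(output: str) -> tuple[str, str]:
    """Separate the <think>...</think> monologue from the final answer."""
    thoughts = "\n".join(re.findall(r"<think>(.*?)</think>", output, re.DOTALL))
    answer = re.sub(r"<think>.*?</think>", "", output, flags=re.DOTALL).strip()
    return thoughts.strip(), answer

raw = "<think>Plan: use a list comprehension.</think>print([x*x for x in range(3)])"
thoughts, answer = split_think(raw)
# answer == "print([x*x for x in range(3)])"
```

The same split works for streamed output once the closing tag has been seen.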

### ⚡ Performance & Benchmarks (Estimated)

Sovereign-Code is designed for maximum throughput on local consumer hardware (RX 570/580 8GB setups).

| Setting | Value | VRAM Footprint | Notes |
| :--- | :--- | :--- | :--- |
| **Quantization** | Q4_K_M (GGUF) | ~9.2 GB | **Full GPU Offload** |
| **Context Length** | 32,768 tokens | High headroom | Optimized for repo-level debugging |

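A `llama.cpp` launch for the configuration above might look like the following; the GGUF filename is hypothetical, and the flags assume a Vulkan build (`-ngl 99` requests full offload, `-sm layer` splits layers across both cards, `-c 32768` uses the full advertised context):

```shell
# Hypothetical invocation -- adjust the GGUF filename to your actual download.
./llama-cli -m ./Sovereign-Code-Q4_K_M.gguf -ngl 99 -sm layer -c 32768 \
  -p "Write a Rust function that reverses a linked list."
```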

### Standardized Accuracy Benchmarks
*Benchmarks are currently queued for evaluation.*

| Benchmark | Focus Area | Score | Status |
| :--- | :--- | :--- | :--- |
| **HumanEval** | Coding & Logic | *TBD* | ⏳ Pending Eval |
| **MBPP** | Python Programming | *TBD* | ⏳ Pending Eval |
| **GSM8k** | Mathematical Reasoning | *TBD* | ⏳ Pending Eval |

## Model Details

- **Type**: Causal Language Model (Hybrid Passthrough)
- **Base Architecture**: Qwen 2.5 (7B) + Llama 3.1 (8B)
- **Total Parameters**: ~15B (Effective density via Layer Stacking)
- **Merge Method**: Passthrough / Frankenmerge
- **Weights Composition**:
  - **Base (Layers 0-16)**: [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)
  - **Cortex (Layers 16-32)**: [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
- **License**: Other (see the base models' licenses)

## Why Sovereign-Code?
- **The Execution Engine:** No conversational "As an AI..." filler.
- **Analytical Grounding:** The built-in `<think>` protocol forces the model to debug its own code conceptually before writing a single line.
- **Agentic Ready:** Optimized for tool-calling and autonomous development workflows.

<div align="center" style="display: flex; justify-content: center; align-items: center; gap: 40px; flex-wrap: wrap; margin: 2em 0;">
<img src="https://huggingface.co/Vaultkeeper/ouroboros-next/resolve/main/vaultai-logo.png" alt="VAULTAI" width="100" style="max-height: 100px;" />
</div>