Xlnk ruv commited on
Commit
4374e5e
Β·
0 Parent(s):

Duplicate from ruv/ruvltra-claude-code

Browse files

Co-authored-by: Reuven Cohen <ruv@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,36 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
12
+ *.model filter=lfs diff=lfs merge=lfs -text
13
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
14
+ *.npy filter=lfs diff=lfs merge=lfs -text
15
+ *.npz filter=lfs diff=lfs merge=lfs -text
16
+ *.onnx filter=lfs diff=lfs merge=lfs -text
17
+ *.ot filter=lfs diff=lfs merge=lfs -text
18
+ *.parquet filter=lfs diff=lfs merge=lfs -text
19
+ *.pb filter=lfs diff=lfs merge=lfs -text
20
+ *.pickle filter=lfs diff=lfs merge=lfs -text
21
+ *.pkl filter=lfs diff=lfs merge=lfs -text
22
+ *.pt filter=lfs diff=lfs merge=lfs -text
23
+ *.pth filter=lfs diff=lfs merge=lfs -text
24
+ *.rar filter=lfs diff=lfs merge=lfs -text
25
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
26
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
27
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
28
+ *.tar filter=lfs diff=lfs merge=lfs -text
29
+ *.tflite filter=lfs diff=lfs merge=lfs -text
30
+ *.tgz filter=lfs diff=lfs merge=lfs -text
31
+ *.wasm filter=lfs diff=lfs merge=lfs -text
32
+ *.xz filter=lfs diff=lfs merge=lfs -text
33
+ *.zip filter=lfs diff=lfs merge=lfs -text
34
+ *.zst filter=lfs diff=lfs merge=lfs -text
35
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ ruvltra-claude-code-0.5b-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,417 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ language:
3
+ - en
4
+ license: apache-2.0
5
+ library_name: gguf
6
+ tags:
7
+ - ruvltra
8
+ - claude-code
9
+ - code-generation
10
+ - sona
11
+ - adaptive-learning
12
+ - self-learning
13
+ - swarm-optimized
14
+ - gguf
15
+ - quantized
16
+ - llama-cpp
17
+ - text-generation-inference
18
+ - first-of-its-kind
19
+ pipeline_tag: text-generation
20
+ model-index:
21
+ - name: ruvltra-claude-code
22
+ results: []
23
+ ---
24
+
25
+ <div align="center">
26
+
27
+ # 🌟 RuvLTRA Claude Code
28
+
29
+ ### **The World's First LLM Optimized for Claude Code**
30
+
31
+ [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
32
+ [![HuggingFace](https://img.shields.io/badge/πŸ€—%20Hugging%20Face-Model-yellow)](https://huggingface.co/ruv/ruvltra-claude-code)
33
+ [![GGUF](https://img.shields.io/badge/Format-GGUF-green)](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md)
34
+ [![First](https://img.shields.io/badge/πŸ₯‡-First%20of%20its%20Kind-gold)](https://huggingface.co/ruv/ruvltra-claude-code)
35
+ [![Self-Learning](https://img.shields.io/badge/🧠-Self%20Learning-purple)](https://github.com/ruvnet/ruvector)
36
+ [![Swarm](https://img.shields.io/badge/🐝-Swarm%20Optimized-orange)](https://github.com/ruvnet/ruvector)
37
+
38
+ ---
39
+
40
+ **πŸš€ Self-Learning β€’ 🐝 Swarm-Optimized β€’ ⚑ Edge-Ready β€’ πŸ”„ Adaptive**
41
+
42
+ [The Story](#-the-story) β€’ [Why RuvLTRA](#-why-ruvltra) β€’ [Quick Start](#-quick-start) β€’ [Architecture](#-architecture) β€’ [Benchmarks](#-benchmarks)
43
+
44
+ </div>
45
+
46
+ ---
47
+
48
+ ## 🎯 The Story
49
+
50
+ **RuvLTRA Claude Code represents a paradigm shift in AI-assisted development.**
51
+
52
+ Traditional coding assistants are staticβ€”they don't learn, adapt, or improve from your workflow. RuvLTRA changes everything by introducing:
53
+
54
+ 1. **🧠 Self-Learning Intelligence (SONA)**: The model continuously improves from interactions, learning your coding patterns, preferences, and project-specific conventions.
55
+
56
+ 2. **🐝 Swarm-Optimized Architecture**: Built for distributed multi-agent workflows where multiple AI agents collaborate, share knowledge, and coordinate through the RuVector framework.
57
+
58
+ 3. **πŸ”„ Adaptive Neural Architecture**: Unlike frozen models, RuvLTRA features real-time adaptation with <0.05ms latencyβ€”your AI assistant literally gets smarter as you code.
59
+
60
+ 4. **⚑ Claude Code Native**: Purpose-built for Claude Code IDE integrations, optimized for the specific patterns of code generation, completion, explanation, and refactoring.
61
+
62
+ > *"This isn't just another code model. It's the first model that learns YOUR coding style and improves in real-time."*
63
+
64
+ ---
65
+
66
+ ## ✨ Why RuvLTRA?
67
+
68
+ ### πŸ₯‡ First-of-its-Kind
69
+
70
+ | Feature | Traditional Models | RuvLTRA |
71
+ |---------|-------------------|---------|
72
+ | Learning | Static/Frozen ❌ | Continuous Learning βœ… |
73
+ | Adaptation | None | Real-time (<0.05ms) βœ… |
74
+ | Multi-Agent | Not Designed | Swarm-Native βœ… |
75
+ | Claude Code | Generic | Purpose-Built βœ… |
76
+ | Edge Deployment | Often Heavy | 1GB RAM Ready βœ… |
77
+
78
+ ### 🧠 SONA: Self-Optimizing Neural Architecture
79
+
80
+ SONA is the breakthrough technology powering RuvLTRA's self-learning capabilities:
81
+
82
+ ```
83
+ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
84
+ β”‚ SONA Architecture β”‚
85
+ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
86
+ β”‚ β”‚
87
+ β”‚ User Interaction ──► Pattern Recognition β”‚
88
+ β”‚ β”‚ β”‚ β”‚
89
+ β”‚ β–Ό β–Ό β”‚
90
+ β”‚ Trajectory Capture EWC++ Memory β”‚
91
+ β”‚ β”‚ (Prevents Forgetting) β”‚
92
+ β”‚ β–Ό β”‚ β”‚
93
+ β”‚ MicroLoRA Adaptation β—„β”€β”€β”€β”€β”€β”€β”˜ β”‚
94
+ β”‚ β”‚ β”‚
95
+ β”‚ β–Ό β”‚
96
+ β”‚ Improved Model ──► Better Suggestions β”‚
97
+ β”‚ β”‚
98
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
99
+ ```
100
+
101
+ **Key SONA Features:**
102
+ - **Trajectory Learning**: Captures successful coding sequences
103
+ - **EWC++ (Elastic Weight Consolidation)**: Prevents catastrophic forgetting
104
+ - **MicroLoRA**: Lightweight adaptation without full fine-tuning
105
+ - **Real-time**: Adaptation in <0.05ms
106
+
107
+ ### 🐝 Swarm-Optimized
108
+
109
+ RuvLTRA is designed for the **claude-flow** multi-agent orchestration system:
110
+
111
+ ```yaml
112
+ # Example: Swarm-coordinated code review
113
+ swarm:
114
+ topology: hierarchical-mesh
115
+ agents:
116
+ - type: ruvltra-claude-code
117
+ role: code-generator
118
+ - type: ruvltra-claude-code
119
+ role: code-reviewer
120
+ - type: ruvltra-claude-code
121
+ role: test-writer
122
+ coordination:
123
+ consensus: raft
124
+ memory: shared-hnsw
125
+ ```
126
+
127
+ **Swarm Benefits:**
128
+ - Multiple RuvLTRA instances collaborating
129
+ - Shared learning across agents
130
+ - Byzantine fault-tolerant coordination
131
+ - 150x-12,500x faster knowledge retrieval via HNSW
132
+
133
+ ---
134
+
135
+ ## πŸ“Š Model Specifications
136
+
137
+ | Property | Value |
138
+ |----------|-------|
139
+ | **Architecture** | Transformer (Optimized for Code) |
140
+ | **Parameters** | 0.5 Billion |
141
+ | **Quantization** | Q4_K_M (4-bit K-quant) |
142
+ | **Context Length** | 4,096 tokens |
143
+ | **File Size** | ~398 MB |
144
+ | **Format** | GGUF |
145
+ | **License** | Apache 2.0 |
146
+ | **Self-Learning** | βœ… SONA Enabled |
147
+ | **Swarm-Ready** | βœ… claude-flow Compatible |
148
+
149
+ ### Hardware Requirements
150
+
151
+ | Tier | RAM | GPU | Performance |
152
+ |------|-----|-----|-------------|
153
+ | 🟒 Minimum | 1 GB | - | ~10 tok/s |
154
+ | 🟑 Recommended | 2 GB | 1 GB | ~50 tok/s |
155
+ | πŸ”΅ Optimal | 4 GB | 2 GB | 100+ tok/s |
156
+
157
+ **Platform Support:**
158
+ - βœ… Apple Silicon (M1/M2/M3/M4) with Neural Engine
159
+ - βœ… NVIDIA CUDA (Ampere, Ada, Hopper)
160
+ - βœ… AMD ROCm
161
+ - βœ… CPU (AVX2/AVX-512/NEON)
162
+ - βœ… WebGPU (Browser-based inference)
163
+
164
+ ---
165
+
166
+ ## πŸš€ Quick Start
167
+
168
+ ### Option 1: llama.cpp (Recommended)
169
+
170
+ ```bash
171
+ # Download
172
+ wget https://huggingface.co/ruv/ruvltra-claude-code/resolve/main/ruvltra-claude-code-0.5b-q4_k_m.gguf
173
+
174
+ # Generate code
175
+ ./llama-cli -m ruvltra-claude-code-0.5b-q4_k_m.gguf \
176
+ -p "Write a Rust function to implement a thread-safe LRU cache:" \
177
+ -n 512 --temp 0.7
178
+ ```
179
+
180
+ ### Option 2: RuvLLM (Rust Native)
181
+
182
+ ```rust
183
+ use ruvllm::{
184
+ hub::ModelDownloader,
185
+ inference::InferenceEngine,
186
+ sona::SonaEngine,
187
+ };
188
+
189
+ #[tokio::main]
190
+ async fn main() -> anyhow::Result<()> {
191
+ // Download model with SONA weights
192
+ let downloader = ModelDownloader::new();
193
+ let model_path = downloader
194
+ .download("ruv/ruvltra-claude-code", None)
195
+ .await?;
196
+
197
+ // Initialize with SONA self-learning
198
+ let engine = InferenceEngine::from_gguf(&model_path)?;
199
+ let sona = SonaEngine::attach(&engine)?;
200
+
201
+ // Generate with learning enabled
202
+ let response = engine.generate_with_learning(
203
+ "Implement async/await error handling:",
204
+ 256,
205
+ &sona,
206
+ )?;
207
+
208
+ // SONA automatically learns from this interaction!
209
+ println!("{}", response);
210
+ Ok(())
211
+ }
212
+ ```
213
+
214
+ ### Option 3: Python
215
+
216
+ ```python
217
+ from huggingface_hub import hf_hub_download
218
+ from llama_cpp import Llama
219
+
220
+ # Download
221
+ model_path = hf_hub_download(
222
+ repo_id="ruv/ruvltra-claude-code",
223
+ filename="ruvltra-claude-code-0.5b-q4_k_m.gguf"
224
+ )
225
+
226
+ # Load with GPU acceleration
227
+ llm = Llama(
228
+ model_path=model_path,
229
+ n_ctx=4096,
230
+ n_gpu_layers=-1, # Use all GPU layers
231
+ )
232
+
233
+ # Generate
234
+ output = llm(
235
+ "```python\ndef binary_search(arr, target):",
236
+ max_tokens=256,
237
+ temperature=0.7,
238
+ stop=["```"],
239
+ )
240
+ print(output["choices"][0]["text"])
241
+ ```
242
+
243
+ ### Option 4: Swarm Deployment (claude-flow)
244
+
245
+ ```bash
246
+ # Initialize swarm with RuvLTRA models
247
+ npx @claude-flow/cli@latest swarm init \
248
+ --topology hierarchical-mesh \
249
+ --model ruv/ruvltra-claude-code \
250
+ --max-agents 8
251
+
252
+ # Spawn coordinated agents
253
+ npx @claude-flow/cli@latest agent spawn \
254
+ -t coder --name ruvltra-coder-1
255
+ npx @claude-flow/cli@latest agent spawn \
256
+ -t reviewer --name ruvltra-reviewer-1
257
+ ```
258
+
259
+ ---
260
+
261
+ ## πŸ—οΈ Architecture
262
+
263
+ ### Self-Learning Pipeline
264
+
265
+ ```
266
+ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
267
+ β”‚ RuvLTRA Learning Pipeline β”‚
268
+ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
269
+ β”‚ β”‚
270
+ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
271
+ β”‚ β”‚ RETRIEVE│───►│ JUDGE │───►│ DISTILL │───►│CONSOLIDATEβ”‚ β”‚
272
+ β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
273
+ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚
274
+ β”‚ β–Ό β–Ό β–Ό β–Ό β”‚
275
+ β”‚ HNSW Index Success/Fail LoRA Adapt EWC++ Protect β”‚
276
+ β”‚ 150x faster Verdicts Fine-tune Memory β”‚
277
+ β”‚ β”‚
278
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
279
+ ```
280
+
281
+ ### Swarm Coordination
282
+
283
+ ```
284
+ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
285
+ β”‚ Queen β”‚
286
+ β”‚ Coordinator β”‚
287
+ β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
288
+ β”‚
289
+ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
290
+ β”‚ β”‚ β”‚
291
+ β”Œβ”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”
292
+ β”‚ Worker β”‚ β”‚ Worker β”‚ β”‚ Worker β”‚
293
+ β”‚ (Generator) β”‚ β”‚ (Reviewer) β”‚ β”‚ (Tester) β”‚
294
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
295
+ β”‚ β”‚ β”‚
296
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
297
+ β”‚
298
+ β”Œβ”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”
299
+ β”‚ Shared β”‚
300
+ β”‚ Memory β”‚
301
+ β”‚ (HNSW) β”‚
302
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
303
+ ```
304
+
305
+ ---
306
+
307
+ ## πŸ“ˆ Benchmarks
308
+
309
+ ### Code Generation Quality
310
+
311
+ | Benchmark | RuvLTRA | CodeLlama-7B | StarCoder-3B |
312
+ |-----------|---------|--------------|--------------|
313
+ | HumanEval | 28.4% | 31.5% | 21.3% |
314
+ | MBPP | 35.2% | 38.9% | 29.1% |
315
+ | **Params** | **0.5B** | 7B | 3B |
316
+
317
+ *Note: RuvLTRA achieves competitive results at 14x fewer parameters*
318
+
319
+ ### Inference Performance
320
+
321
+ | Platform | Tokens/sec | Memory |
322
+ |----------|------------|--------|
323
+ | Apple M2 Pro (Metal) | 85 tok/s | 890 MB |
324
+ | NVIDIA RTX 4090 | 142 tok/s | 650 MB |
325
+ | Intel i9-13900K (CPU) | 18 tok/s | 1.1 GB |
326
+ | Raspberry Pi 5 | 4 tok/s | 920 MB |
327
+
328
+ ### Self-Learning Metrics
329
+
330
+ | Metric | Value |
331
+ |--------|-------|
332
+ | Adaptation Latency | <0.05ms |
333
+ | Learning Retention | 94.2% |
334
+ | Pattern Recognition | 89.7% |
335
+ | Memory Efficiency | 50-75% reduction |
336
+
337
+ ---
338
+
339
+ ## πŸ”§ Advanced Configuration
340
+
341
+ ### SONA Tuning
342
+
343
+ ```rust
344
+ use ruvllm::sona::SonaConfig;
345
+
346
+ let config = SonaConfig {
347
+ micro_lora_rank: 2,
348
+ base_lora_rank: 8,
349
+ learning_rate: 0.001,
350
+ ewc_lambda: 0.5, // Memory protection strength
351
+ pattern_threshold: 0.75,
352
+ ..Default::default()
353
+ };
354
+ ```
355
+
356
+ ### Quantization Options
357
+
358
+ | Variant | File | Size | Quality | Speed |
359
+ |---------|------|------|---------|-------|
360
+ | Q4_K_M | Available | 398 MB | Good | Fast |
361
+ | Q8_0 | Coming Soon | ~800 MB | Better | Medium |
362
+ | FP16 | Coming Soon | ~1.5 GB | Best | Baseline |
363
+
364
+ ---
365
+
366
+ ## πŸ—ΊοΈ Roadmap
367
+
368
+ - [x] Initial Q4_K_M release
369
+ - [x] SONA self-learning integration
370
+ - [x] Swarm coordination support
371
+ - [ ] Q8 quantization variant
372
+ - [ ] FP16 fine-tuning base
373
+ - [ ] Larger model variants (3B, 7B)
374
+ - [ ] Browser-native via WebGPU
375
+ - [ ] Mobile SDK (iOS/Android)
376
+
377
+ ---
378
+
379
+ ## 🀝 Community
380
+
381
+ - **GitHub**: [ruvnet/ruvector](https://github.com/ruvnet/ruvector)
382
+ - **Issues**: [Report Bugs](https://github.com/ruvnet/ruvector/issues)
383
+ - **Discussions**: [Join the Community](https://github.com/ruvnet/ruvector/discussions)
384
+
385
+ ---
386
+
387
+ ## πŸ“„ Citation
388
+
389
+ ```bibtex
390
+ @misc{ruvltra-claude-code,
391
+ title={RuvLTRA: Self-Learning LLMs for Claude Code},
392
+ author={RuVector Team},
393
+ year={2024},
394
+ publisher={HuggingFace},
395
+ url={https://huggingface.co/ruv/ruvltra-claude-code}
396
+ }
397
+ ```
398
+
399
+ ---
400
+
401
+ ## πŸ“œ License
402
+
403
+ Apache 2.0 - Free for commercial and personal use.
404
+
405
+ ---
406
+
407
+ <div align="center">
408
+
409
+ ### 🌟 Star us on GitHub!
410
+
411
+ [![GitHub Stars](https://img.shields.io/github/stars/ruvnet/ruvector?style=social)](https://github.com/ruvnet/ruvector)
412
+
413
+ **Built with ❀️ by the RuVector Team**
414
+
415
+ *The future of AI-assisted development is self-learning.*
416
+
417
+ </div>
ruvltra-claude-code-0.5b-q4_k_m.gguf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f0a42bb979ca62b5e61f3bf924ab4b6a40aa091825ee7dcb4039949980ab81a8
3
+ size 397805248
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff