---
language:
- en
license: apache-2.0
library_name: gguf
tags:
- ruvltra
- sona
- adaptive-learning
- gguf
- quantized
- edge-device
- embedded
- iot
pipeline_tag: text-generation
---
# RuvLTRA Small
[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0) • [Model on Hugging Face](https://huggingface.co/ruv/ruvltra-small) • [GGUF Specification](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md)
**📱 Compact Model Optimized for Edge Devices**
[Quick Start](#-quick-start) • [Use Cases](#-use-cases) • [Integration](#-integration)
---
## Overview
RuvLTRA Small is a compact 0.5B parameter model designed for edge deployment. Perfect for mobile apps, IoT devices, and resource-constrained environments.
## Model Card
| Property | Value |
|----------|-------|
| **Parameters** | 0.5 Billion |
| **Quantization** | Q4_K_M |
| **Context** | 4,096 tokens |
| **Size** | ~398 MB |
| **Min RAM** | 1 GB |
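
The 1 GB minimum follows from the quantized weights plus the KV cache at full context. A rough back-of-the-envelope estimate (a sketch only: the layer count, KV-head count, and head dimension below are assumed placeholder values typical of a ~0.5B GQA model, not published RuvLTRA specs):

```python
# Rough RAM estimate: quantized weights + a full 4,096-token KV cache.
# n_layers, n_kv_heads, and head_dim are ASSUMED values, not RuvLTRA specs.
model_file_mb = 398                          # quantized weights (from the table)
n_layers, n_kv_heads, head_dim = 24, 2, 64   # assumed architecture
bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2  # K+V tensors, f16
kv_cache_mib = bytes_per_token * 4096 / (1024 ** 2)
total_mb = model_file_mb + kv_cache_mib
print(f"KV cache: {kv_cache_mib:.0f} MiB, total: ~{total_mb:.0f} MB")
```

Under these assumptions the cache adds roughly 48 MiB on top of the weights, which is why the model fits comfortably inside the 1 GB minimum with headroom for the runtime.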
## 🚀 Quick Start
```bash
# Download
wget https://huggingface.co/ruv/ruvltra-small/resolve/main/ruvltra-0.5b-q4_k_m.gguf
# Run with llama.cpp
./llama-cli -m ruvltra-0.5b-q4_k_m.gguf -p "Hello, I am" -n 64
```
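
After downloading, you can sanity-check that the file really is a GGUF container: per the GGUF spec, the file begins with the 4-byte ASCII magic `GGUF` followed by a little-endian `uint32` version. A minimal check (the helper name is ours):

```python
import struct

def looks_like_gguf(path: str) -> bool:
    """Return True if the file starts with the GGUF magic; print the version."""
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            return False
        (version,) = struct.unpack("<I", f.read(4))
        print(f"GGUF version {version}")
        return True
```

This only validates the header, not the tensor data, but it catches truncated or mislabeled downloads before you hand the file to llama.cpp.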
## 💡 Use Cases
- **Mobile Apps**: On-device AI assistant
- **IoT**: Smart home device intelligence
- **Edge Computing**: Local inference without cloud
- **Prototyping**: Quick model experimentation
## 🔧 Integration
### Rust (RuvLLM)
```rust
use ruvllm::hub::ModelDownloader;

// Runs inside an async context (e.g. an async fn under #[tokio::main])
let path = ModelDownloader::new()
    .download("ruv/ruvltra-small", None)
    .await?;
```
### Python
```python
from huggingface_hub import hf_hub_download

# Returns the local path of the downloaded GGUF file
model_path = hf_hub_download("ruv/ruvltra-small", "ruvltra-0.5b-q4_k_m.gguf")
```
## Hardware Support
- ✅ Apple Silicon (M1/M2/M3)
- ✅ NVIDIA CUDA
- ✅ CPU (x86/ARM)
- ✅ Raspberry Pi 4/5
---
**License**: Apache 2.0 | **GitHub**: [ruvnet/ruvector](https://github.com/ruvnet/ruvector)