# Local-Llama-Inference




**A Production-Ready Python SDK for GPU-Accelerated LLM Inference**

Local-Llama-Inference is a comprehensive Python SDK that integrates **llama.cpp** and **NVIDIA NCCL** to enable high-performance inference of GGUF-quantized Large Language Models (LLMs) on single and multiple NVIDIA GPUs.

---

## 🎯 Features

### 🚀 Core Capabilities
- **Single GPU Inference** - Automatic memory optimization and layer offloading
- **Multi-GPU Support** - Tensor parallelism with automatic split suggestions
- **30+ REST API Endpoints** - Full llama.cpp endpoint coverage
- **OpenAI-Compatible API** - Drop-in compatible `/v1/chat/completions` endpoint (see the sketch below)
- **Streaming Responses** - Token-by-token streaming via Python generators
- **Production-Ready** - Error handling, process management, health checks
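
Because the server exposes an OpenAI-compatible route, the official `openai` Python client can talk to it directly. A minimal sketch, assuming a server already running on port 8000 as in the Quick Start below; the `model` value is a placeholder, and `api_key` only matters if `ServerConfig.api_key` is set:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local llama.cpp server.
client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="unused")

response = client.chat.completions.create(
    model="local-model",  # placeholder; a single-model server typically ignores it
    messages=[{"role": "user", "content": "Hello from the OpenAI client!"}],
)
print(response.choices[0].message.content)
```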

### 🔌 API Support
```python
# Chat & Completions
client.chat_completion()      # Chat completion (non-streaming)
client.stream_chat()          # Chat with token streaming
client.complete()             # Text completion
client.stream_complete()      # Streaming completion

# Embeddings & Reranking
client.embed()                # Generate embeddings
client.rerank()               # Cross-encoder reranking

# Token Utilities
client.tokenize()             # Text to tokens
client.detokenize()           # Tokens to text
client.apply_template()       # Apply chat template

# Advanced Features
client.infill()               # Code infilling
client.set_lora_adapters()    # LoRA support
client.save_slot()            # Slot management
client.restore_slot()         # Restore saved state

# Server Management
client.health()               # Health check
client.get_props()            # Get server properties
client.get_metrics()          # Performance metrics
```

### 🎮 GPU Utilities
```python
from local_llama_inference import detect_gpus, suggest_tensor_split, check_cuda_version

# Detect available GPUs
gpus = detect_gpus()

# Get an automatic tensor split for multi-GPU setups
tensor_split = suggest_tensor_split(gpus)

# Check the CUDA version
cuda_version = check_cuda_version()
```

### 📊 NCCL Collective Operations
```python
from local_llama_inference._bindings.nccl_binding import NCCLBinding

# Direct access to NCCL primitives
nccl = NCCLBinding('/path/to/libnccl.so.2')
nccl.all_reduce(sendbuff, recvbuff, ncclFloat32, ncclSum, comm)
nccl.broadcast(buffer, ncclFloat32, root, comm)
nccl.all_gather(sendbuff, recvbuff, ncclFloat32, comm)
```

---

## 📋 System Requirements

### Minimum
- **GPU**: NVIDIA compute capability 5.0+ (sm_50)
  - e.g. GeForce GTX 750 Ti, GTX 950 (Tesla K80/K40 are Kepler sm_35/37 and fall below this floor)
- **VRAM**: 2GB+ per GPU
- **Python**: 3.8+
- **OS**: Linux x86_64
- **RAM**: 8GB+ system memory

### Recommended
- **GPU**: RTX 2060 or newer (sm_75+)
- **VRAM**: 4GB+ per GPU
- **RAM**: 16GB+ system memory
- **CUDA**: Any version 11.5+ (runtime included)

### Supported GPUs
✅ Maxwell (sm_50/52) - GTX 750 Ti, GTX 950, GTX 960
✅ Pascal (sm_61) - GTX 1050, GTX 1060, GTX 1080
✅ Volta (sm_70) - Tesla V100
✅ Turing (sm_75) - RTX 2060, RTX 2080
✅ Ampere (sm_80/86) - RTX 3060, RTX 3090
✅ Ada (sm_89) - RTX 4080, RTX 6000 Ada
✅ Hopper (sm_90) - H100, H200
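
To check programmatically whether a machine meets the sm_50 floor, `detect_gpus()` exposes each card's compute capability. A small sketch, assuming `compute_capability` is a `"major.minor"` string; adjust the parsing if the SDK returns a tuple:

```python
from local_llama_inference import detect_gpus

for gpu in detect_gpus():
    major, minor = (int(x) for x in str(gpu.compute_capability).split("."))
    supported = (major, minor) >= (5, 0)  # sm_50 minimum from the table above
    print(f"GPU {gpu.index}: {gpu.name} (sm_{major}{minor}) -> "
          f"{'supported' if supported else 'below minimum'}")
```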

---

## ⚡ Quick Start (5 Minutes)

### 1. Installation

#### Option A: From Release Package (Recommended)
```bash
# Download from GitHub Releases
# https://github.com/Local-Llama-Inference/Local-Llama-Inference/releases/tag/v0.1.0

tar -xzf local-llama-inference-complete-v0.1.0.tar.gz
cd local-llama-inference-v0.1.0
pip install -e ./python
```

#### Option B: From Source (Developer)
```bash
git clone https://github.com/Local-Llama-Inference/Local-Llama-Inference.git
cd Local-Llama-Inference/local-llama-inference
pip install -e .
```

### 2. Verify Installation
```bash
python -c "from local_llama_inference import LlamaServer, detect_gpus; print('✅ SDK Ready'); print(detect_gpus())"
```

### 3. Download a Model
```bash
# Download Mistral 7B Instruct Q4_K_M (quantized, ~4GB)
wget https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf

# Or find more models at: https://huggingface.co/models?search=gguf
```

### 4. Run Your First Inference
```python
from local_llama_inference import LlamaServer, LlamaClient

# Start the server
server = LlamaServer(
    model_path="./mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    n_gpu_layers=33,    # Offload all layers to GPU
    n_ctx=2048,         # Context window
    host="127.0.0.1",
    port=8000
)

print("Starting server...")
server.start()
server.wait_ready(timeout=60)
print("✅ Server ready!")

# Create client and send a request
client = LlamaClient(base_url="http://127.0.0.1:8000")

response = client.chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "What is machine learning?"}
    ],
    temperature=0.7,
    max_tokens=256
)

print("Assistant:", response.choices[0].message.content)

# Cleanup
server.stop()
```

---

## 📚 Getting Started Tutorials

### Basic Chat Example
```python
from local_llama_inference import LlamaServer, LlamaClient

# Configure and start the server
server = LlamaServer(
    model_path="model.gguf",
    n_gpu_layers=33,   # Use GPU
    main_gpu=0,        # Primary GPU
    n_ctx=2048,        # Context size
    n_batch=512,       # Batch size
)
server.start()
server.wait_ready()

# Simple chat
client = LlamaClient()
response = client.chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

# Multi-turn conversation
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Python?"},
]

response = client.chat_completion(messages=messages)
print("Assistant:", response.choices[0].message.content)

messages.append({"role": "assistant", "content": response.choices[0].message.content})
messages.append({"role": "user", "content": "How is it used in AI?"})

response = client.chat_completion(messages=messages)
print("Assistant:", response.choices[0].message.content)

server.stop()
```
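
One caveat with the pattern above: if a request raises between `start()` and `stop()`, the `llama-server` process is left running. A defensive sketch using `try/finally` (an illustration only; if `LlamaServer` already implements the context-manager protocol, prefer `with LlamaServer(...)` instead):

```python
from contextlib import contextmanager

from local_llama_inference import LlamaServer

@contextmanager
def running_server(**server_kwargs):
    """Start a LlamaServer and guarantee it is stopped on exit."""
    server = LlamaServer(**server_kwargs)
    server.start()
    server.wait_ready()
    try:
        yield server
    finally:
        server.stop()  # reached even if a request raises

# Usage:
# with running_server(model_path="model.gguf", n_gpu_layers=33):
#     client = LlamaClient()
#     ...
```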

### Streaming Responses
```python
from local_llama_inference import LlamaServer, LlamaClient

server = LlamaServer(model_path="model.gguf", n_gpu_layers=33)
server.start()
server.wait_ready()

client = LlamaClient()

# Stream tokens in real time
for token in client.stream_chat(
    messages=[{"role": "user", "content": "Write a poem about AI"}]
):
    print(token.choices[0].delta.content, end="", flush=True)
print()

server.stop()
```
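
If you also need the complete reply after streaming, accumulate the deltas as they arrive (same setup as above, before `server.stop()`). Note the guard: depending on the server, the final chunk's `delta.content` may be empty or `None`:

```python
parts = []
for token in client.stream_chat(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}]
):
    delta = token.choices[0].delta.content
    if delta:  # skip empty/None terminal chunks
        parts.append(delta)
        print(delta, end="", flush=True)
print()

full_text = "".join(parts)
```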

### Multi-GPU Inference
```python
from local_llama_inference import (
    LlamaServer, LlamaClient, detect_gpus, suggest_tensor_split
)

# Detect GPUs
gpus = detect_gpus()
print(f"Found {len(gpus)} GPU(s)")

# Get a suggested tensor split
tensor_split = suggest_tensor_split(gpus)
print(f"Suggested tensor split: {tensor_split}")

# Start with multi-GPU
server = LlamaServer(
    model_path="model.gguf",
    n_gpu_layers=33,
    tensor_split=tensor_split,  # Distribute layers across GPUs
)
server.start()
server.wait_ready()

# Use normally
client = LlamaClient()
response = client.chat_completion(
    messages=[{"role": "user", "content": "Process this on multiple GPUs!"}]
)
print(response.choices[0].message.content)

server.stop()
```

### Embeddings
```python
from local_llama_inference import LlamaServer, LlamaClient

server = LlamaServer(model_path="embedding-model.gguf", n_gpu_layers=33)
server.start()
server.wait_ready()

client = LlamaClient()

# Single embedding
embedding = client.embed(input="What is machine learning?")
print(f"Embedding dimension: {len(embedding.data[0].embedding)}")

# Batch embeddings
embeddings = client.embed(
    input=[
        "Machine learning is a subset of AI",
        "Deep learning uses neural networks",
        "LLMs are large language models"
    ]
)
print(f"Generated {len(embeddings.data)} embeddings")

server.stop()
```
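
A common next step is semantic search over the returned vectors. A minimal sketch with NumPy, reusing the response layout shown above (`.data[i].embedding`):

```python
import numpy as np

def cosine_similarity(a, b):
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = embedding.data[0].embedding            # from the single call above
doc_vecs = [d.embedding for d in embeddings.data]  # from the batch call above

scores = [cosine_similarity(query_vec, v) for v in doc_vecs]
best = max(range(len(scores)), key=lambda i: scores[i])
print(f"Best match: document {best} (score {scores[best]:.3f})")
```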

### Advanced: NCCL Operations
```python
from local_llama_inference._bindings.nccl_binding import NCCLBinding, NCCLDataType, NCCLRedOp
import ctypes  # needed if you uncomment the call below
import numpy as np

# Load NCCL
nccl = NCCLBinding('/path/to/libnccl.so.2')

# AllReduce operation
sendbuff = np.array([1.0, 2.0, 3.0], dtype=np.float32)
recvbuff = np.zeros_like(sendbuff)

# This would require NCCL communicator setup:
# nccl.all_reduce(sendbuff.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
#                 recvbuff.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
#                 len(sendbuff), NCCLDataType.FLOAT32, NCCLRedOp.SUM, comm)
```
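
The commented-out call needs a live communicator. In raw NCCL the canonical flow is `ncclGetUniqueId` → `ncclCommInitRank` per rank (or `ncclCommInitAll` for one process driving all GPUs); whether `NCCLBinding` wraps these is not shown in this README, so every method name below is a hypothetical placeholder:

```python
# HYPOTHETICAL sketch - the wrapper method names are placeholders, not a
# confirmed NCCLBinding API. Raw NCCL equivalents noted per line.
#
# unique_id = nccl.get_unique_id()                                   # ncclGetUniqueId
# comm = nccl.comm_init_rank(nranks=2, unique_id=unique_id, rank=0)  # ncclCommInitRank
# ...run collectives such as the all_reduce above...
# nccl.comm_destroy(comm)                                            # ncclCommDestroy
```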

---

## 🔧 Configuration

### Server Configuration
```python
from local_llama_inference import LlamaServer, ServerConfig

# Create configuration
config = ServerConfig(
    # Model
    model_path="./model.gguf",

    # Server
    host="127.0.0.1",
    port=8000,
    api_key=None,             # Optional API key

    # GPU settings
    n_gpu_layers=33,          # Layers to offload to GPU
    tensor_split=[0.5, 0.5],  # Multi-GPU distribution
    main_gpu=0,               # Primary GPU

    # Context
    n_ctx=2048,               # Context window size
    n_batch=512,              # Batch size
    n_ubatch=512,             # Micro-batch size

    # Performance
    flash_attn=True,          # Enable flash attention
    numa=False,               # NUMA optimization

    # Advanced
    use_mmap=True,            # Memory-mapped I/O
    use_mlock=False,          # Lock model memory in RAM
    embedding_only=False,     # Embedding mode
)

# Generate CLI arguments
args = config.to_args()

# Create the server
server = LlamaServer(config)
```

### Sampling Configuration
```python
from local_llama_inference import SamplingConfig

sampling_config = SamplingConfig(
    temperature=0.7,      # Higher = more random
    top_k=40,             # Top-k sampling: keep only the k most likely tokens
    top_p=0.9,            # Nucleus sampling: cumulative probability cutoff
    min_p=0.05,           # Minimum probability relative to the top token
    repeat_penalty=1.1,   # Penalize repetition
    mirostat=0,           # Mirostat sampling (0 = off)
    seed=42,              # Random seed
    grammar=None,         # Grammar constraints
    json_schema=None,     # JSON schema
)

# Use in a request
response = client.chat_completion(
    messages=[{"role": "user", "content": "Hello"}],
    temperature=sampling_config.temperature,
    top_k=sampling_config.top_k,
    top_p=sampling_config.top_p,
)
```
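
The `json_schema` field can constrain generation to structured output. llama.cpp's server accepts a `json_schema` body field on completion requests; the sketch below assumes `LlamaClient.chat_completion` forwards extra keyword arguments into the request body (an assumption, not confirmed by this README):

```python
# Assumption: extra kwargs are forwarded into the llama.cpp request body.
schema = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["answer"],
}

response = client.chat_completion(
    messages=[{"role": "user", "content": "Is GGUF a quantized format? Answer as JSON."}],
    json_schema=schema,
)
print(response.choices[0].message.content)  # JSON text conforming to the schema
```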

---

## 📖 API Reference

### `LlamaServer` - Process Management
```python
server = LlamaServer(config, binary_path=None)

# Methods
server.start(wait_ready=False, timeout=60)  # Start server
server.stop()                               # Stop server
server.restart()                            # Restart server
server.is_running()                         # Check status
server.wait_ready(timeout=60)               # Wait for /health
```

### `LlamaClient` - HTTP REST Client
```python
client = LlamaClient(base_url="http://127.0.0.1:8000", api_key=None)

# Chat & Completions
client.chat_completion(messages, model=None, **kwargs)
client.stream_chat(messages, model=None, **kwargs)
client.complete(prompt, model=None, **kwargs)
client.stream_complete(prompt, model=None, **kwargs)

# Embeddings
client.embed(input, model=None)
client.rerank(model, query, documents)

# Tokens
client.tokenize(prompt, add_special=True)
client.detokenize(tokens)
client.apply_template(messages, add_generation_prompt=True)

# Server Info
client.health()            # GET /health
client.get_props()         # GET /props
client.set_props(props)    # POST /props
client.get_metrics()       # GET /metrics
client.get_models()        # GET /models
client.get_slots()         # GET /slots
```

### `detect_gpus()` - GPU Detection
```python
gpus = detect_gpus()
# Returns: List[GPUInfo]
# Each GPUInfo has: index, name, uuid, compute_capability, total_memory_mb, free_memory_mb

for gpu in gpus:
    print(f"GPU {gpu.index}: {gpu.name}")
    print(f"  Compute Capability: {gpu.compute_capability}")
    print(f"  VRAM: {gpu.total_memory_mb} MB ({gpu.free_memory_mb} MB free)")
    print(f"  Supports Flash Attention: {gpu.supports_flash_attn()}")
```

### `suggest_tensor_split()` - Auto Multi-GPU
```python
tensor_split = suggest_tensor_split(gpus)
# Automatically calculates a layer distribution across the detected GPUs
# Returns: List[float] summing to 1.0
```
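
The README doesn't specify how the suggestion is computed. A plausible minimal re-implementation, for intuition only (an assumption, not the SDK's actual algorithm), splits proportionally to each GPU's free VRAM:

```python
# Illustration only - the real suggest_tensor_split() may weigh other factors.
def proportional_split(gpus):
    free = [gpu.free_memory_mb for gpu in gpus]  # GPUInfo field from detect_gpus()
    total = sum(free)
    return [f / total for f in free]             # fractions summing to 1.0

# Example: 24576 MB free on GPU 0 and 8192 MB free on GPU 1 -> [0.75, 0.25]
```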

---

## 🛠️ Troubleshooting

### "CUDA out of memory"
```python
# Solution 1: Reduce GPU layers
server = LlamaServer(model_path="model.gguf", n_gpu_layers=15)

# Solution 2: Use a smaller quantization
# (download a Q2 or Q3 file instead of Q5/Q6)

# Solution 3: Reduce the batch size
server = LlamaServer(model_path="model.gguf", n_batch=256)
```
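
To pick a layer count instead of guessing, a rough heuristic is to scale by free VRAM. A sketch using the sizes cited elsewhere in this README (~4 GB and ~33 layers for a 7B Q4 model); the 512 MB headroom for KV cache and overhead is an assumed ballpark, not a measured figure:

```python
from local_llama_inference import detect_gpus

MODEL_SIZE_MB = 4096   # ~7B Q4_K_M, per the quantization sizes in this README
TOTAL_LAYERS = 33      # per the n_gpu_layers examples above

gpu = detect_gpus()[0]
per_layer_mb = MODEL_SIZE_MB / TOTAL_LAYERS
headroom_mb = gpu.free_memory_mb - 512  # assumed reserve for KV cache/overhead
n_gpu_layers = max(0, min(TOTAL_LAYERS, int(headroom_mb / per_layer_mb)))
print(f"Offloading {n_gpu_layers}/{TOTAL_LAYERS} layers")
```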

### "GPU not found"
```bash
# Check the NVIDIA driver
nvidia-smi

# Verify the NVIDIA driver is installed:
# https://www.nvidia.com/Download/driverDetails.aspx

# Check compute capability
python -c "from local_llama_inference import detect_gpus; print(detect_gpus())"
```

### "libcudart.so.12 not found"
```bash
# The complete release package bundles the CUDA runtime, so prefer Option A above.

# Otherwise, install the CUDA toolkit - libcudart ships with the toolkit,
# not with the display driver. Package names vary by distro, e.g.:
sudo apt update
sudo apt install nvidia-cuda-toolkit   # or cuda-toolkit-12-x from NVIDIA's repo
```

### "Server startup timeout"
```python
# Increase the timeout
server.wait_ready(timeout=120)  # Default is 60 seconds

# Or start without waiting and inspect the server logs for errors
import time

server.start(wait_ready=False)
time.sleep(5)
# Check the console output for error messages
```

### Slow Inference
```python
# Increase GPU offloading (offload all layers)
server = LlamaServer(model_path="model.gguf", n_gpu_layers=33)

# Watch GPU utilization from another terminal:
#   nvidia-smi -l 1   # refresh every second

# Prefer better quantizations (Q5/Q6 instead of Q2) when VRAM allows,
# and reduce the context size (n_ctx) if you don't need it.
```

---

## 🔗 Key Files & Directories

```
local-llama-inference/
├── src/local_llama_inference/      # Python SDK source
│   ├── __init__.py                 # Public API
│   ├── server.py                   # LlamaServer class
│   ├── client.py                   # LlamaClient REST wrapper
│   ├── config.py                   # Configuration classes
│   ├── gpu.py                      # GPU utilities
│   ├── exceptions.py               # Custom exceptions
│   ├── _bindings/
│   │   ├── llama_binding.py        # libllama.so ctypes wrapper
│   │   └── nccl_binding.py         # libnccl.so.2 ctypes wrapper
│   └── _bootstrap/
│       ├── finder.py               # Binary locator
│       └── extractor.py            # Bundle extractor
├── examples/                       # Tutorial scripts
│   ├── single_gpu_chat.py
│   ├── multi_gpu_tensor_split.py
│   ├── streaming_chat.py
│   ├── embeddings_example.py
│   └── nccl_bindings_example.py
├── tests/                          # Unit tests
├── pyproject.toml                  # Package metadata
├── README.md                       # This file
├── LICENSE                         # MIT License
└── releases/v0.1.0/                # Release artifacts
    ├── local-llama-inference-complete-v0.1.0.tar.gz
    ├── local-llama-inference-sdk-v0.1.0.tar.gz
    └── CHECKSUMS.txt
```

---

## 📦 Dependencies

### Required
- **httpx** >= 0.24.0 - Async HTTP client for the REST API
- **pydantic** >= 2.0 - Data validation and settings management

### Optional (Development)
- **pytest** >= 7.0 - Unit testing
- **pytest-asyncio** >= 0.21.0 - Async test support

### System
- **NVIDIA CUDA** - Any version 11.5+ (runtime included in the package)
- **NVIDIA Drivers** - Required, any recent version

---

## 🚀 Performance Tips

1. **Use Flash Attention** - Set `flash_attn=True` (up to 2-3x faster attention on supported GPUs)
2. **Context Size** - Larger `n_ctx` costs speed and VRAM; use only what the task needs
3. **Batch Size** - `n_batch=512` is a good default for most cases
4. **GPU Layers** - Higher `n_gpu_layers` = faster, but uses more VRAM
5. **Quantization** - For a 7B model, Q4 ≈ 4GB, Q5 ≈ 5GB, Q6 ≈ 6GB are typical sizes
6. **Multi-GPU** - Use `tensor_split` to distribute layers across GPUs
7. **Keep Alive** - Reuse a running server instance instead of stop/start cycles (a combined example follows this list)
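
Putting tips 1-4 together, a reasonable starting configuration for a 7B Q4 model on a single GPU with roughly 6 GB+ of free VRAM (the filename is a placeholder, and the values are starting points, not guarantees):

```python
from local_llama_inference import LlamaServer

server = LlamaServer(
    model_path="./model-7b.Q4_K_M.gguf",  # hypothetical filename
    n_gpu_layers=33,   # tip 4: offload all layers if VRAM allows
    n_ctx=2048,        # tip 2: keep the context as small as the task allows
    n_batch=512,       # tip 3: solid default
    flash_attn=True,   # tip 1: enable flash attention
)
```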

---

## 🔐 Security

- **API Keys** - Optional API key support via `ServerConfig.api_key`
- **Local Only** - Bind to `127.0.0.1` for local development
- **Production** - Consider authentication/TLS for production deployments
- **Model Files** - Keep GGUF files private; don't share their URLs publicly

---

## 📄 License

MIT License - see the `LICENSE` file for details.

---

## 🤝 Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

---

## 📞 Support & Resources

- **GitHub Issues**: [Report bugs or request features](https://github.com/Local-Llama-Inference/Local-Llama-Inference/issues)
- **GitHub Discussions**: [Ask questions and share ideas](https://github.com/Local-Llama-Inference/Local-Llama-Inference/discussions)
- **Releases**: [Download packages](https://github.com/Local-Llama-Inference/Local-Llama-Inference/releases)

### Related Projects
- **llama.cpp** - Core inference engine: https://github.com/ggml-org/llama.cpp
- **NCCL** - GPU collective communication: https://github.com/NVIDIA/nccl
- **Hugging Face GGUF Models** - https://huggingface.co/models?search=gguf

---

## 📊 Project Status

- **Version**: 0.1.0 (Beta)
- **Status**: Production Ready
- **Last Updated**: February 24, 2026
- **Python Support**: 3.8 - 3.12
- **GPU Support**: NVIDIA sm_50 and newer

---

## 🎓 Learning Resources

### Official Documentation
- See `00-START-HERE.md` in the release package
- See `RELEASE_NOTES_v0.1.0.md` for the detailed feature list
- Check the `examples/` directory for code samples

### External Resources
- **llama.cpp Documentation**: https://github.com/ggml-org/llama.cpp/tree/master/docs
- **GGUF Format**: https://github.com/ggml-org/gguf
- **NCCL Documentation**: https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/

---

**Built with ❤️ for the open-source ML community**

⭐ If you find this project useful, please consider starring the repository!