How to use with llama.cpp

Install from WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf truegleai/deepseek-coder-api:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf truegleai/deepseek-coder-api:Q4_K_M
```
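Once llama-server is up, any OpenAI-compatible client can talk to it. A minimal sketch, assuming the server is listening on its default address (http://localhost:8080) and the standard /v1/chat/completions route:

```sh
# Send a chat completion request to the local llama-server
# (default host/port assumed; no API key is required by default)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a Python function that reverses a string."}
    ]
  }'
```

The built-in web UI is served from the same address in a browser.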
Use pre-built binary

```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf truegleai/deepseek-coder-api:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf truegleai/deepseek-coder-api:Q4_K_M
```
Build from source code

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf truegleai/deepseek-coder-api:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf truegleai/deepseek-coder-api:Q4_K_M
```
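The default configuration above builds for CPU. If you have a supported NVIDIA GPU, acceleration can usually be enabled at configure time; this is only a sketch, assuming a recent llama.cpp checkout where the CUDA backend is toggled with the GGML_CUDA option (older releases used other flag names):

```sh
# Optional: configure with CUDA support and rebuild (flag name assumed
# for recent llama.cpp versions; check the repo docs for your checkout)
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli

# Offload model layers to the GPU at run time
./build/bin/llama-server -hf truegleai/deepseek-coder-api:Q4_K_M -ngl 99
```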
Use Docker

```sh
docker model run hf.co/truegleai/deepseek-coder-api:Q4_K_M
```
🚀 o87Dev - Maximum Capacity Deployment
Strategy: Deploy the largest viable model (DeepSeek-Coder-V2-Lite-Instruct-16B-Q4_K_M) on Hugging Face's free CPU tier.
⚙️ Technical Details
- Model: DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf (10.4GB)
- Quantization: Q4_K_M (Optimal quality/size for free tier)
- Loader: llama-cpp-python (CPU optimized)
- Context: 2048 tokens (max for free tier stability; see the local setup sketch below)
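To reproduce this setup locally, the same GGUF can be served with llama.cpp using the 2048-token context window. A sketch, assuming the file name listed above and the huggingface-cli tool; adjust the thread count to your hardware:

```sh
# Fetch the quantized GGUF from this repo (file name per the details above)
huggingface-cli download truegleai/deepseek-coder-api \
  DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf --local-dir .

# Serve it with the same 2048-token context used on the free CPU tier
llama-server -m DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf -c 2048 -t 4
```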
📊 Performance Expectations
- First load: ~60-120 seconds (model loads from disk)
- Inference speed: ~2-5 tokens/second on CPU
- Memory usage: ~12-14GB of 16GB available
🎯 Usage Tips
- First request triggers model load (be patient)
- Keep prompts under 500 tokens for best results
- Use temperature 0.7-0.9 for creative tasks (see the sample request below)
- Monitor memory usage in Space logs
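Putting the tips together, a sample request against a locally running llama-server might look like the following; the prompt is hypothetical, and the temperature and token limit follow the guidance above:

```sh
# Short prompt, moderate temperature, bounded response length
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
    ],
    "temperature": 0.8,
    "max_tokens": 256
  }'
```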
🔗 Integration
This Space serves as the primary AI endpoint for the o87Dev local API server.
Install from brew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf truegleai/deepseek-coder-api:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf truegleai/deepseek-coder-api:Q4_K_M
```