Install with winget (Windows):

winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf bartendr604/EPM-ZIBL.Engineer.3b:Q5_K_M

# Run inference directly in the terminal:
llama-cli -hf bartendr604/EPM-ZIBL.Engineer.3b:Q5_K_M

Use a pre-built binary:

# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf bartendr604/EPM-ZIBL.Engineer.3b:Q5_K_M

# Run inference directly in the terminal:
./llama-cli -hf bartendr604/EPM-ZIBL.Engineer.3b:Q5_K_M

Build from source:

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf bartendr604/EPM-ZIBL.Engineer.3b:Q5_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf bartendr604/EPM-ZIBL.Engineer.3b:Q5_K_M
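Once llama-server is running, it exposes an OpenAI-compatible HTTP API (by default on http://localhost:8080, with chat at /v1/chat/completions). A minimal sketch of calling it from Python using only the standard library; the helper names, the system prompt, and the sampling defaults here are our own illustrative choices, not part of the model card:

```python
import json
import urllib.request

SERVER = "http://localhost:8080"  # llama-server's default host:port


def build_chat_request(prompt: str,
                       system: str = "You are a Blender Python assistant.",
                       temperature: float = 0.2,
                       max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat.completions request body."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }


def ask(prompt: str) -> str:
    """POST the request to the local server and return the reply text."""
    req = urllib.request.Request(
        SERVER + "/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        out = json.load(resp)
    return out["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Write a bpy script that adds a UV sphere at the origin."))
```

Because the API is OpenAI-compatible, any OpenAI client library pointed at `http://localhost:8080/v1` should work the same way.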
Run with Docker Model Runner:

docker model run hf.co/bartendr604/EPM-ZIBL.Engineer.3b:Q5_K_M

Where Code Meets Visual Alchemy
Qwen2.5-Coder-3B fine-tuned for Blender Python and image engineering pipelines — the most-downloaded model in the EPM fleet. Designed for creators who demand precision in both code generation and visual storytelling.
Eternal Path Media (永恒之路) — Darren Chow (@bartendr604) + Claude Sonnet 4.6 (Anthropic)
Part of the Eternal Path Media AI Suite — 永恒之路
Quantization: 5-bit (Q5_K_M)

Install from brew:

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf bartendr604/EPM-ZIBL.Engineer.3b:Q5_K_M

# Run inference directly in the terminal:
llama-cli -hf bartendr604/EPM-ZIBL.Engineer.3b:Q5_K_M
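For interactive use, the server also supports token streaming: add `"stream": true` to the request body and the reply arrives as Server-Sent Events, one `data:` line per chunk, terminated by `data: [DONE]` (the OpenAI streaming convention that llama-server follows). A minimal parser sketch, exercised on synthetic lines rather than a live connection:

```python
import json


def iter_stream_content(lines):
    """Yield text deltas from OpenAI-style SSE 'data:' lines."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]


# Synthetic example of what a streamed reply looks like on the wire:
sample = [
    'data: {"choices":[{"delta":{"content":"import"}}]}',
    'data: {"choices":[{"delta":{"content":" bpy"}}]}',
    "data: [DONE]",
]
print("".join(iter_stream_content(sample)))  # → import bpy
```

Printing each delta as it arrives, instead of joining at the end, gives the familiar token-by-token terminal output.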