**Tags:** Text Generation · GGUF · English · Russian · Chinese · pzdrk-reasoning · code · legal · medical · finance · chemistry · biology · text-generation-inference · conversational · custom_code
## How to use with llama.cpp

### Install with Homebrew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf pzdrk/pzdrk-R1:F16

# Run inference directly in the terminal:
llama-cli -hf pzdrk/pzdrk-R1:F16
```

### Install with WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf pzdrk/pzdrk-R1:F16

# Run inference directly in the terminal:
llama-cli -hf pzdrk/pzdrk-R1:F16
```

### Use a pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf pzdrk/pzdrk-R1:F16

# Run inference directly in the terminal:
./llama-cli -hf pzdrk/pzdrk-R1:F16
```

### Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf pzdrk/pzdrk-R1:F16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf pzdrk/pzdrk-R1:F16
```

### Use Docker
```sh
docker model run hf.co/pzdrk/pzdrk-R1:F16
```

## Quick Links
pzdrk-reasoning-1-agi-sota-pro-max-15b
## Model tree for pzdrk/pzdrk-R1

Unable to build the model tree: the declared base model points back to this model itself.
## Gated model

This model is gated: log in with a Hugging Face token that has gated-access permission before downloading.

```sh
hf auth login
```
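Whichever installation route you pick above, `llama-server` exposes an OpenAI-compatible HTTP API (it listens on `localhost:8080` by default). A minimal query sketch — the prompt and `max_tokens` value here are illustrative, and the server must already be running:

```sh
# Build a chat-completion request for the local llama-server instance.
# Assumption: default host/port (localhost:8080); prompt is illustrative.
payload='{"messages":[{"role":"user","content":"Hello!"}],"max_tokens":64}'

# Send it to the OpenAI-compatible endpoint.
# '|| true' keeps this sketch harmless when no server is running.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$payload" || true
```

The response follows the standard Chat Completions shape, so any OpenAI-compatible client library can be pointed at the same base URL.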