How to use from llama.cpp
Install with Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
# Run inference directly in the terminal:
llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
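Once the server is up, it exposes an OpenAI-compatible API; a minimal sketch of a chat request, assuming the default port 8080 (check the server log for the actual address):
# Query the local OpenAI-compatible endpoint (default port 8080 is assumed):
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Fix the grammar: She go to school every days."}]}'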
Install with WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
# Run inference directly in the terminal:
llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
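The -hf flag also accepts an optional quantization tag after a colon; a sketch, assuming a Q4_K_M variant exists in the repo (this card does not list the available quants):
# Pin a specific quantization with a :TAG suffix (Q4_K_M is a hypothetical example;
# check the repo's files for the actual variants):
llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha:Q4_K_M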
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
# Run inference directly in the terminal:
./llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
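For GPU acceleration, backends are enabled at configure time; a sketch for a CUDA build (assumes the CUDA toolkit is installed; Metal, Vulkan, etc. use their own flags):
# Optional: rebuild with the CUDA backend enabled:
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli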
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
# Run inference directly in the terminal:
./build/bin/llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
Use Docker
docker model run hf.co/Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
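docker model run (part of Docker Model Runner) can also take a one-shot prompt as a trailing argument; a minimal sketch, with an illustrative prompt:
# One-shot prompt (prompt text is illustrative):
docker model run hf.co/Sicarius-Prototyping/Turbo_Grammar_51B_Alpha "Fix the grammar: He don't like apples."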
A POC of an advanced tool that fixes grammar, performs advanced GEC (grammatical error correction), writes poems and analyzes them, and emulates any author's writing style. It is still in prototyping because I did not have enough time to make it bulletproof and stable enough to warrant a place among my flagship models. Very usable, but not stable; feel free to play with it or improve it.
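As a quick illustration of the GEC use case, a one-shot correction via llama-cli; the prompt wording is an assumption, since no prompt template is documented here:
# One-shot grammar correction (prompt phrasing is illustrative, not a documented template):
llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha -p "Correct the grammar in the following text: Their going too the store tomorrow."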

GGUF
Model size: 52B params
Architecture: deci