# How to use from llama.cpp

## Install from WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha

# Run inference directly in the terminal:
llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
```
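
Once `llama-server` is up, any OpenAI-compatible client can talk to it. A minimal sketch with `curl`, assuming the server's default address of `http://127.0.0.1:8080` (override with `--host`/`--port` if you changed them):

```sh
# Send a chat completion request to the local OpenAI-compatible endpoint.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello! Please introduce yourself."}
    ]
  }'
```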
## Use pre-built binary

```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha

# Run inference directly in the terminal:
./llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
```
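
By default `llama-cli` drops into an interactive session; for a one-shot run you can supply the prompt on the command line. A sketch assuming the `-p` (prompt) and `-n` (max tokens to generate) flags of recent llama.cpp builds (the prompt text is just an example):

```sh
# One-shot generation: pass the prompt with -p and cap the output length with -n.
./llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha \
  -p "Correct the grammar of this sentence: Their going to they're house." \
  -n 128
```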
## Build from source code

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
```
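
To offload layers to a GPU, enable the corresponding backend at configure time. A sketch for NVIDIA CUDA, assuming a llama.cpp checkout recent enough to use the `GGML_CUDA` CMake option (older revisions used `LLAMA_CUBLAS`; check the build docs for your checkout):

```sh
# Configure with CUDA support, then rebuild the same targets.
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli
```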
## Use Docker

```sh
docker model run hf.co/Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
```
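
`docker model run` is part of Docker Model Runner, which ships with recent Docker Desktop releases. Run with no extra arguments it opens an interactive chat; assuming a Model Runner version that accepts a trailing prompt argument, a one-shot call looks like this:

```sh
# One-shot prompt; omit the quoted text to get an interactive chat instead.
docker model run hf.co/Sicarius-Prototyping/Turbo_Grammar_51B_Alpha "Fix the grammar: she don't like apples."
```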
## Install from brew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha

# Run inference directly in the terminal:
llama-cli -hf Sicarius-Prototyping/Turbo_Grammar_51B_Alpha
```
This model is a proof of concept of an advanced tool that fixes grammar, performs advanced GEC (grammatical error correction), writes poems and analyzes them, and emulates any author's writing style. It is still a prototype because I did not have enough time to make it bulletproof and stable enough to release under my flagship models. It is very usable, but not stable; feel free to play with it or improve it.
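
As a concrete example of the GEC use case, here is a sketch of a request against a local `llama-server` started as above; the endpoint is llama.cpp's standard OpenAI-compatible API, but the system prompt is only an illustrative guess at how to steer the model, not a documented prompt format:

```sh
# Ask the model to act as a grammar corrector via the local server.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a grammatical error correction assistant. Return only the corrected text."},
      {"role": "user", "content": "Me and him has went to the libary on tuesday."}
    ]
  }'
```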