Instructions for using hocuf/ll32 with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use hocuf/ll32 with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="hocuf/ll32",
    filename="unsloth.Q4_K_M.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello, introduce yourself."},
    ]
)
print(response["choices"][0]["message"]["content"])
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use hocuf/ll32 with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf hocuf/ll32:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf hocuf/ll32:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf hocuf/ll32:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf hocuf/ll32:Q4_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf hocuf/ll32:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf hocuf/ll32:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf hocuf/ll32:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf hocuf/ll32:Q4_K_M
Use Docker
docker model run hf.co/hocuf/ll32:Q4_K_M
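Once llama-server is running (via any of the install options above), it exposes an OpenAI-compatible API. A minimal Python sketch for querying it, assuming the default port 8080 and the requests package (both are assumptions, not part of the original instructions):

# Query the local llama-server through its OpenAI-compatible API.
# Assumes llama-server is running on its default port 8080.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "hocuf/ll32:Q4_K_M",
        "messages": [
            {"role": "user", "content": "Hello, introduce yourself."},
        ],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])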
- LM Studio
- Jan
- Ollama
How to use hocuf/ll32 with Ollama:
ollama run hf.co/hocuf/ll32:Q4_K_M
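After the model has been pulled, you can also talk to it programmatically through Ollama's local REST API. A minimal sketch, assuming the Ollama daemon is running on its default port 11434:

# Chat with the model via Ollama's /api/chat endpoint.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/hocuf/ll32:Q4_K_M",
        "messages": [{"role": "user", "content": "Hello, introduce yourself."}],
        "stream": False,  # return a single JSON response instead of a stream
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])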
- Unsloth Studio
How to use hocuf/ll32 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for hocuf/ll32 to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for hocuf/ll32 to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for hocuf/ll32 to start chatting
- Pi
How to use hocuf/ll32 with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf hocuf/ll32:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "hocuf/ll32:Q4_K_M" }
      ]
    }
  }
}

Run Pi

# Start Pi in your project directory:
pi
- Hermes Agent
How to use hocuf/ll32 with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf hocuf/ll32:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default hocuf/ll32:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use hocuf/ll32 with Docker Model Runner:
docker model run hf.co/hocuf/ll32:Q4_K_M
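Docker Model Runner can also expose an OpenAI-compatible endpoint for scripted use. A hedged sketch; the TCP port 12434 and the /engines/v1 path are assumptions based on Docker Model Runner's documented host-access mode, and may differ by Docker version and configuration:

# Query Docker Model Runner's OpenAI-compatible endpoint.
# Assumes host TCP access is enabled on port 12434 (an assumption;
# check your Docker Model Runner settings before relying on this).
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/hocuf/ll32:Q4_K_M",
        "messages": [{"role": "user", "content": "Hello, introduce yourself."}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])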
- Lemonade
How to use hocuf/ll32 with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull hocuf/ll32:Q4_K_M
Run and chat with the model
lemonade run user.ll32-Q4_K_M
List all available models
lemonade list
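For reference, the Ollama entry above corresponds to the following Modelfile, which appears to have been generated from a Colab notebook (note the /content path in FROM). It defines the Llama 3 chat template, including the tool-calling prompt, the stop tokens, and the sampling defaults: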
FROM /content/hocuf/ll32/unsloth.F16.gguf
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}
You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.
{{- end }}
{{- end }}<|eot_id|>
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}
Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.
Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.
{{ $.Tools }}
{{- end }}
{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}
{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}
{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>
{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|eom_id|>"
PARAMETER temperature 1.5
PARAMETER min_p 0.1