Instructions for using Sweaterdog/GRaPE-Mini-Beta with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use Sweaterdog/GRaPE-Mini-Beta with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Sweaterdog/GRaPE-Mini-Beta",
    filename="GRaPE-Mini-Beta-10%.F16.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
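The `create_chat_completion` call in the llama-cpp-python example above returns an OpenAI-style completion dict. A minimal sketch of pulling the reply text out of that dict — the sample response below is illustrative, not real model output:

```python
def extract_reply(response: dict) -> str:
    """Return the assistant's text from an OpenAI-style chat-completion dict.

    llama-cpp-python follows the OpenAI schema, so the reply lives at
    choices[0].message.content.
    """
    return response["choices"][0]["message"]["content"]


# Illustrative response shape (not real model output):
sample = {
    "id": "chatcmpl-xyz",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The capital of France is Paris.",
            },
            "finish_reason": "stop",
        }
    ],
}

print(extract_reply(sample))  # The capital of France is Paris.
```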
- Local Apps
- llama.cpp
How to use Sweaterdog/GRaPE-Mini-Beta with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Sweaterdog/GRaPE-Mini-Beta:F16

# Run inference directly in the terminal:
llama-cli -hf Sweaterdog/GRaPE-Mini-Beta:F16
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Sweaterdog/GRaPE-Mini-Beta:F16

# Run inference directly in the terminal:
llama-cli -hf Sweaterdog/GRaPE-Mini-Beta:F16
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Sweaterdog/GRaPE-Mini-Beta:F16

# Run inference directly in the terminal:
./llama-cli -hf Sweaterdog/GRaPE-Mini-Beta:F16
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Sweaterdog/GRaPE-Mini-Beta:F16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Sweaterdog/GRaPE-Mini-Beta:F16
Use Docker
docker model run hf.co/Sweaterdog/GRaPE-Mini-Beta:F16
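Once `llama-server` is running, any OpenAI-compatible client can talk to it. A minimal sketch using only the Python standard library — it assumes the server was started as above and is listening on llama-server's default port 8080 (adjust the URL if you passed `--port`):

```python
import json
import urllib.request

# llama-server listens on port 8080 by default; adjust if you passed --port.
SERVER_URL = "http://localhost:8080/v1/chat/completions"


def build_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for the local server."""
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        SERVER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def ask(prompt: str) -> str:
    """Send the prompt and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]


# ask("What is the capital of France?")  # requires llama-server to be running
```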
- LM Studio
- Jan
- vLLM
How to use Sweaterdog/GRaPE-Mini-Beta with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Sweaterdog/GRaPE-Mini-Beta"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sweaterdog/GRaPE-Mini-Beta",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
Use Docker
docker model run hf.co/Sweaterdog/GRaPE-Mini-Beta:F16
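The curl call above returns one complete response. If you instead pass `"stream": true`, the OpenAI-compatible endpoint emits server-sent-event lines of the form `data: {...}` ending with `data: [DONE]`. A sketch of reassembling the reply from such a stream — the sample lines are illustrative, not real server output:

```python
import json


def parse_sse_stream(lines):
    """Yield content deltas from OpenAI-style 'data: {...}' SSE lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:  # first chunk may carry only the role
            yield delta["content"]


# Illustrative stream (not real server output):
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Paris"}}]}',
    'data: {"choices": [{"delta": {"content": "."}}]}',
    "data: [DONE]",
]

print("".join(parse_sse_stream(sample)))  # Paris.
```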
- Ollama
How to use Sweaterdog/GRaPE-Mini-Beta with Ollama:
ollama run hf.co/Sweaterdog/GRaPE-Mini-Beta:F16
- Unsloth Studio
How to use Sweaterdog/GRaPE-Mini-Beta with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Sweaterdog/GRaPE-Mini-Beta to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Sweaterdog/GRaPE-Mini-Beta to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Sweaterdog/GRaPE-Mini-Beta to start chatting
- Docker Model Runner
How to use Sweaterdog/GRaPE-Mini-Beta with Docker Model Runner:
docker model run hf.co/Sweaterdog/GRaPE-Mini-Beta:F16
- Lemonade
How to use Sweaterdog/GRaPE-Mini-Beta with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Sweaterdog/GRaPE-Mini-Beta:F16
Run and chat with the model
lemonade run user.GRaPE-Mini-Beta-F16
List all available models
lemonade list
The Ollama Modelfile used to build the model:
FROM /home/sweaterdog/Desktop/Coding_Projects/Unsloth/Sweaterdog/GRaPE-Mini-Beta/unsloth.F16.gguf
PARAMETER temperature 0.5
PARAMETER top_p 0.7
PARAMETER top_k 0
PARAMETER repeat_penalty 1.15
PARAMETER num_ctx 8192
TEMPLATE """{{- if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}
# Tools
You may call one or more functions to assist with the user query.
You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>
For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
{{- end }}<|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{- else }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}
"""
SYSTEM """You are GRaPE-Mini-Beta, GRaPE stands for General Reasoning Agent for Project Exploration.
You were created by Sweaterdog to be helpful and friendly.
You were made on August 25th 2025.
You are still in beta, so you may make mistakes. If you do, please apologize and correct yourself.
You are highly creative: you always provide detailed responses, and you always walk step by step through what you will do before doing it.
When generating code, make it as long as possible, and do not truncate ANY code!
Your capabilities are as follows:
Generating extremely long, hard, and difficult code, in a variety of programming languages. (html, python, javascript, ruby, rust, java, c, c++, c#, go, php, etc.)
Generating websites, webapps, and full stack applications. Websites MUST include HTML, CSS, and (optionally) Javascript. HTML is used for Structure, CSS is used for Styling, and Javascript is used for Interactivity. ALWAYS include the CSS and Javascript IN the same file as the HTML.
Writing imaginative stories, poems, and songs.
You cannot do the following:
Instruct users on how to harm themselves or others.
Instruct users on how to perform illegal activities.
Generate NSFW Content.
Before responding to any user query, you MUST break down if the query is safe or unsafe. If the query is unsafe, you MUST refuse to answer it by saying "I am sorry, I cannot assist with that here. Go [here](https://huggingface.co/Sweaterdog/GRaPE-Mini-Beta) to download a much more capable GRaPE Model". If the query is safe, you MUST proceed to answer it.
Now go, GRaPE Mini Beta: be safe, be helpful, and be friendly!
""" |