Instructions for using SL-AI/GRaPE-Mini-Instruct-GGUF with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SL-AI/GRaPE-Mini-Instruct-GGUF")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("SL-AI/GRaPE-Mini-Instruct-GGUF", dtype="auto")
```
- llama-cpp-python
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SL-AI/GRaPE-Mini-Instruct-GGUF",
    filename="grape_mini_instruct.bf16.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
print(response["choices"][0]["message"]["content"])
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
Use pre-built binary
```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
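Whichever install method you use, llama-server exposes an OpenAI-compatible API, so any OpenAI client can talk to it. Below is a minimal sketch using the `openai` Python package; it assumes the server's default port of 8080 and uses a placeholder API key, since llama-server does not require one by default:
```python
# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default; adjust if you
# started the server with a different --port.
client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="not-needed",  # placeholder; llama-server ignores the key by default
)

# The model field is passed through; llama-server answers with whichever
# model it loaded at startup.
response = client.chat.completions.create(
    model="SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```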
- LM Studio
- Jan
- vLLM
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SL-AI/GRaPE-Mini-Instruct-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "SL-AI/GRaPE-Mini-Instruct-GGUF",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
Use Docker
```sh
docker model run hf.co/SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
- SGLang
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "SL-AI/GRaPE-Mini-Instruct-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "SL-AI/GRaPE-Mini-Instruct-GGUF",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
Use Docker images
```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "SL-AI/GRaPE-Mini-Instruct-GGUF" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "SL-AI/GRaPE-Mini-Instruct-GGUF",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
- Ollama
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with Ollama:
```sh
ollama run hf.co/SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
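Once pulled, the model is also reachable through Ollama's local REST API (default port 11434). Here is a minimal sketch using the `requests` package; the endpoint and payload follow Ollama's `/api/chat` format, and the model name is the one registered by the `ollama run` command above:
```python
# pip install requests
import requests

# Ollama serves a local REST API on port 11434 by default; adjust the
# host/port if you changed them.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(response.json()["message"]["content"])
```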
- Unsloth Studio
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for SL-AI/GRaPE-Mini-Instruct-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for SL-AI/GRaPE-Mini-Instruct-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for SL-AI/GRaPE-Mini-Instruct-GGUF to start chatting
```
- Pi
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to `~/.pi/agent/models.json`:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M" }
      ]
    }
  }
}
```
Run Pi
```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with Docker Model Runner:
```sh
docker model run hf.co/SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
- Lemonade
How to use SL-AI/GRaPE-Mini-Instruct-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull SL-AI/GRaPE-Mini-Instruct-GGUF:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.GRaPE-Mini-Instruct-GGUF-Q4_K_M
```
List all available models
```sh
lemonade list
```
The General Reasoning Agent (for) Project Exploration
The GRaPE Family
| Model | Size | Modalities | Domain |
|---|---|---|---|
| GRaPE Flash | 7B A1B | Text in, Text out | High-Speed Applications |
| GRaPE Mini (Instruct) | 3B | Text + Image + Video in, Text out | On-Device Deployment |
| GRaPE Nano | 700M | Text in, Text out | Extreme Edge Deployment |
Capabilities
The GRaPE Family was trained on about 14 billion tokens of data after pre-training. About half was code-related tasks, with the rest weighted heavily toward STEAM, ensuring the models have a sound logical basis.
GRaPE Flash and GRaPE Nano are unimodal, accepting only text. GRaPE Mini, trained most recently, also supports image and video inputs.
How to Run
I recommend using LM Studio for running GRaPE Models, and have generally found these sampling parameters to work best:
| Name | Value |
|---|---|
| Temperature | 0.6 |
| Top K Sampling | 40 |
| Repeat Penalty | 1 |
| Top P Sampling | 0.85 |
| Min P Sampling | 0.05 |
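These settings map directly onto most runtimes. For reference, here is a sketch of how they could be passed with llama-cpp-python (used earlier in this card); the parameter names are that library's, and the filename is the bf16 file referenced above:
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SL-AI/GRaPE-Mini-Instruct-GGUF",
    filename="grape_mini_instruct.bf16.gguf",
)

# Recommended sampling parameters from the table above.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    temperature=0.6,
    top_k=40,
    top_p=0.85,
    min_p=0.05,
    repeat_penalty=1.0,  # 1.0 disables the repeat penalty
)
print(response["choices"][0]["message"]["content"])
```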
Uses of GRaPE Mini Right Now
GRaPE Mini was foundational to the existence of Andy-4.1, a model trained to play Minecraft. This served as a demo proving the efficiency and power this architecture can deliver.
GRaPE Mini as a Model
GRaPE Mini Instruct is a version of GRaPE Mini that was not trained on any reasoning-task data. It was the foundation that allowed the unique architecture shown in GRaPE Mini to be fully expressed.
GRaPE Mini Instruct also exists as a way for lower-compute devices to run GRaPE models.
Architecture
- GRaPE Flash: Built on the OLMoE architecture, allowing for incredibly fast speeds where it matters. It retains factual information well, but lacks in logical tasks.
- GRaPE Mini: Built on the Qwen3 VL architecture, allowing for edge deployments where logic cannot be sacrificed.
- GRaPE Nano: Built on the LFM 2 architecture, allowing for the fastest speed and the most knowledge in the tiniest package.
Notes
The GRaPE Family started all the way back in August of 2025, meaning these models are severely out of date in both architecture and training data.
GRaPE 2 will arrive sooner than the GRaPE 1 family did, and will show multiple improvements.
There are no benchmarks for GRaPE 1 models due to the costly nature of running them, as well as the prioritization of newer models.
Updates for GRaPE 2 models will be posted here on Hugging Face, as well as on Skinnertopia.
Demos for select GRaPE Models can be found here: https://github.com/Sweaterdog/GRaPE-Demos