Instructions to use unsloth/INTELLECT-2-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use unsloth/INTELLECT-2-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/INTELLECT-2-GGUF",
    filename="BF16/INTELLECT-2-BF16-00001-of-00002.gguf",
)

# Example chat request (the message content is illustrative):
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain what a GGUF file is in one sentence."}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use unsloth/INTELLECT-2-GGUF with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL

# Run inference directly in the terminal:
llama-cli -hf unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL

# Run inference directly in the terminal:
llama-cli -hf unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL

# Run inference directly in the terminal:
./llama-cli -hf unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL

# Run inference directly in the terminal:
./build/bin/llama-cli -hf unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
Use Docker
```shell
docker model run hf.co/unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
- LM Studio
- Jan
- Ollama
How to use unsloth/INTELLECT-2-GGUF with Ollama:
```shell
ollama run hf.co/unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
- Unsloth Studio
How to use unsloth/INTELLECT-2-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for unsloth/INTELLECT-2-GGUF to start chatting.
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for unsloth/INTELLECT-2-GGUF to start chatting.
```
Use Hugging Face Spaces for Unsloth Studio
```shell
# No setup required.
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for unsloth/INTELLECT-2-GGUF to start chatting.
```
- Pi
How to use unsloth/INTELLECT-2-GGUF with Pi:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the server to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL" }
      ]
    }
  }
}
```

Run Pi

```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use unsloth/INTELLECT-2-GGUF with Hermes Agent:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
Run Hermes
```shell
hermes
```
- Docker Model Runner
How to use unsloth/INTELLECT-2-GGUF with Docker Model Runner:
```shell
docker model run hf.co/unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
- Lemonade
How to use unsloth/INTELLECT-2-GGUF with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull unsloth/INTELLECT-2-GGUF:UD-Q4_K_XL
```
Run and chat with the model
```shell
lemonade run user.INTELLECT-2-GGUF-UD-Q4_K_XL
```
List all available models
```shell
lemonade list
```
Please read our guide, Running QwQ effectively, on sampling issues for QwQ-based models.
TL;DR, use the settings below:
```shell
./llama.cpp/llama-cli -hf unsloth/INTELLECT-2-GGUF:Q4_K_XL -ngl 99 \
    --temp 0.6 \
    --repeat-penalty 1.1 \
    --dry-multiplier 0.5 \
    --min-p 0.00 \
    --top-k 40 \
    --top-p 0.95 \
    --samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"
```
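If you are calling the model through llama-cpp-python instead of llama-cli, most of these numeric settings map directly onto keyword arguments of `create_chat_completion`. A sketch of that mapping (the DRY multiplier and explicit sampler ordering are llama-cli features with no keyword equivalent here):

```python
# Sampling settings mirroring the llama-cli flags above, as keyword
# arguments accepted by llama-cpp-python's create_chat_completion.
QWQ_SAMPLING = {
    "temperature": 0.6,     # --temp 0.6
    "repeat_penalty": 1.1,  # --repeat-penalty 1.1
    "min_p": 0.0,           # --min-p 0.00
    "top_k": 40,            # --top-k 40
    "top_p": 0.95,          # --top-p 0.95
}

# Usage (assumes `llm` is a loaded Llama instance, as in the
# llama-cpp-python example above):
# llm.create_chat_completion(messages=[...], **QWQ_SAMPLING)
```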
INTELLECT-2
INTELLECT-2 is a 32 billion parameter language model trained through a reinforcement learning run leveraging globally distributed, permissionless GPU resources contributed by the community.
The model was trained using prime-rl, a framework designed for distributed asynchronous RL, using GRPO over verifiable rewards along with modifications for improved training stability. For detailed information on our infrastructure and training recipe, see our technical report.
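The group-relative signal at the heart of GRPO can be sketched as follows: several completions are sampled for the same prompt, each is scored by a verifiable reward, and every completion's advantage is its reward normalized against the group. This is a simplification for illustration, not the prime-rl implementation:

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize each reward against the group of completions for one prompt."""
    mu, sigma = mean(rewards), pstdev(rewards)
    if sigma == 0:  # all completions scored the same: no learning signal
        return [0.0] * len(rewards)
    return [(r - mu) / sigma for r in rewards]

# e.g. two correct (reward 1) and two incorrect (reward 0) completions:
# group_relative_advantages([1.0, 0.0, 1.0, 0.0]) -> [1.0, -1.0, 1.0, -1.0]
```

Completions that beat their group average are reinforced; the rest are pushed down, with no learned value function required.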
Model Information
- Training Dataset (verifiable math & coding tasks): PrimeIntellect/Intellect-2-RL-Dataset
- Base Model: QwQ-32B
- Training Code: prime-rl
Usage
INTELLECT-2 is based on the qwen2 architecture, making it compatible with popular libraries and inference engines such as vllm or sglang.
Because INTELLECT-2 was trained with a length-control budget, you will achieve the best results by appending "Think for 10000 tokens before giving a response." to your instruction. As reported in our technical report, the model did not train for long enough to fully learn the length-control objective, so results will not differ strongly if you specify lengths other than 10,000. If you wish to do so, you can expect the best results with 2000, 4000, 6000, and 8000, as these were the other target lengths present during training.
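A minimal helper for building such prompts might look like this (illustrative; the suffix wording and the budget list are taken from the usage note above):

```python
# Target thinking budgets that appeared during training.
TRAINED_BUDGETS = (2000, 4000, 6000, 8000, 10000)

def with_length_budget(instruction: str, budget: int = 10000) -> str:
    """Append the length-control suffix INTELLECT-2 was trained with."""
    if budget not in TRAINED_BUDGETS:
        raise ValueError(f"budget {budget} was not a training target")
    return f"{instruction} Think for {budget} tokens before giving a response."

# with_length_budget("Prove that sqrt(2) is irrational.")
# -> "Prove that sqrt(2) is irrational. Think for 10000 tokens before giving a response."
```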
Performance
During training, INTELLECT-2 improved upon QwQ in its mathematical and coding abilities. Performance on IFEval decreased slightly, which can likely be attributed to the exclusive focus on mathematics and coding and the resulting lack of diverse training data.
| Model | AIME24 | AIME25 | LiveCodeBench (v5) | GPQA-Diamond | IFEval |
|---|---|---|---|---|---|
| INTELLECT-2 | 78.8 | 64.9 | 67.8 | 66.8 | 81.5 |
| QwQ-32B | 76.6 | 64.8 | 66.1 | 66.3 | 83.4 |
| Qwen-R1-Distill-32B | 69.9 | 58.4 | 55.1 | 65.2 | 72.0 |
| Deepseek-R1 | 78.6 | 65.1 | 64.1 | 71.6 | 82.7 |
Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
Model tree for unsloth/INTELLECT-2-GGUF
Base model
PrimeIntellect/INTELLECT-2
