Instructions for using QuantFactory/granite-3.1-2b-instruct-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="QuantFactory/granite-3.1-2b-instruct-GGUF")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("QuantFactory/granite-3.1-2b-instruct-GGUF", dtype="auto")
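Note that this repository contains GGUF files rather than standard safetensors weights, so the generic snippets above may not work as-is. Transformers can load a GGUF checkpoint (dequantizing it to regular torch tensors) when pointed at a specific file via the gguf_file argument. A minimal sketch, assuming the Q2_K filename from this repo, the gguf package installed, and a Transformers version whose GGUF support covers this model's architecture:

# pip install transformers gguf
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "QuantFactory/granite-3.1-2b-instruct-GGUF"
gguf_file = "granite-3.1-2b-instruct.Q2_K.gguf"  # one of the quantized files in this repo

# Transformers dequantizes the GGUF weights into regular torch tensors on load
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)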
- llama-cpp-python
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/granite-3.1-2b-instruct-GGUF",
    filename="granite-3.1-2b-instruct.Q2_K.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
print(response["choices"][0]["message"]["content"])
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
Use Docker
docker model run hf.co/QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
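However llama-server is started (any of the options above), it exposes an OpenAI-compatible API, on port 8080 by default. A minimal Python client sketch, assuming the openai package is installed and the default port; the same pattern works for the other OpenAI-compatible servers below (vLLM on port 8000, SGLang on port 30000) by changing base_url:

# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default; no real API key is required
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    # llama-server serves whatever model it was started with, so this name is informational
    model="QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)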
- LM Studio
- Jan
- vLLM
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "QuantFactory/granite-3.1-2b-instruct-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantFactory/granite-3.1-2b-instruct-GGUF",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker
docker model run hf.co/QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
- SGLang
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "QuantFactory/granite-3.1-2b-instruct-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantFactory/granite-3.1-2b-instruct-GGUF",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "QuantFactory/granite-3.1-2b-instruct-GGUF" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantFactory/granite-3.1-2b-instruct-GGUF",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
- Ollama
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with Ollama:
ollama run hf.co/QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
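If you prefer calling the model from code, the official ollama Python package wraps the local Ollama server; a minimal sketch, assuming the Ollama daemon is running and the model tag shown above:

# pip install ollama
import ollama

response = ollama.chat(
    model="hf.co/QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])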
- Unsloth Studio
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/granite-3.1-2b-instruct-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/granite-3.1-2b-instruct-GGUF to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/granite-3.1-2b-instruct-GGUF to start chatting
- Pi
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M"
        }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory: pi
- Hermes Agent
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
- Lemonade
How to use QuantFactory/granite-3.1-2b-instruct-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/granite-3.1-2b-instruct-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.granite-3.1-2b-instruct-GGUF-Q4_K_M
List all available models
lemonade list
QuantFactory/granite-3.1-2b-instruct-GGUF
This is a quantized version of ibm-granite/granite-3.1-2b-instruct created using llama.cpp.
Original Model Card
Granite-3.1-2B-Instruct
Model Summary: Granite-3.1-2B-Instruct is a 2B-parameter long-context instruct model finetuned from Granite-3.1-2B-Base using a combination of permissively licensed open-source instruction datasets and internally collected synthetic datasets tailored to solving long-context problems. The model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.
- Developers: Granite Team, IBM
- GitHub Repository: ibm-granite/granite-3.1-language-models
- Website: Granite Docs
- Paper: Granite 3.1 Language Models (coming soon)
- Release Date: December 18th, 2024
- License: Apache 2.0
Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1 models for languages beyond these 12 languages.
Intended Use: The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.
Capabilities
- Summarization
- Text classification
- Text extraction
- Question-answering
- Retrieval Augmented Generation (RAG)
- Code related tasks
- Function-calling tasks
- Multilingual dialog use cases
- Long-context tasks including long document/meeting summarization, long document QA, etc.
Generation: This is a simple example of how to use the Granite-3.1-2B-Instruct model.
Install the following libraries:
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
Then, copy the snippet from the section that is relevant for your use case.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "auto"
model_path = "ibm-granite/granite-3.1-2b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
{ "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
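The batch_decode call above returns the full sequence, prompt and special tokens included. To print only the newly generated reply, a small variant sketch continuing from the snippet above:

# slice off the prompt tokens before decoding
generated = model.generate(**input_tokens, max_new_tokens=100)
new_tokens = generated[0][input_tokens["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))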
Model Architecture: Granite-3.1-2B-Instruct is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings.
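For intuition, here is a minimal PyTorch sketch of the SwiGLU MLP named above, using the 2B dense model's dimensions from the table below (embedding size 2048, MLP hidden size 8192). This illustrates the general SwiGLU pattern, not IBM's exact implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUMLP(nn.Module):
    """SwiGLU feed-forward block: down(silu(gate(x)) * up(x))."""
    def __init__(self, hidden_size=2048, intermediate_size=8192):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x):
        # SiLU-gated linear unit, projected back to the embedding size
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

x = torch.randn(1, 16, 2048)   # (batch, sequence, embedding)
print(SwiGLUMLP()(x).shape)    # torch.Size([1, 16, 2048])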
| Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
|---|---|---|---|---|
| Embedding size | 2048 | 4096 | 1024 | 1536 |
| Number of layers | 40 | 40 | 24 | 32 |
| Attention head size | 64 | 128 | 64 | 64 |
| Number of attention heads | 32 | 32 | 16 | 24 |
| Number of KV heads | 8 | 8 | 8 | 8 |
| MLP hidden size | 8192 | 12800 | 512 | 512 |
| MLP activation | SwiGLU | SwiGLU | SwiGLU | SwiGLU |
| Number of experts | N/A | N/A | 32 | 40 |
| MoE TopK | N/A | N/A | 8 | 8 |
| Initialization std | 0.1 | 0.1 | 0.1 | 0.1 |
| Sequence length | 128K | 128K | 128K | 128K |
| Position embedding | RoPE | RoPE | RoPE | RoPE |
| # Parameters | 2.5B | 8.1B | 1.3B | 3.3B |
| # Active parameters | 2.5B | 8.1B | 400M | 800M |
| # Training tokens | 12T | 12T | 10T | 10T |
Training Data: Overall, our SFT data is largely comprised of three key sources: (1) publicly available datasets with permissive license, (2) internal synthetic data targeting specific capabilities including long-context tasks, and (3) very small amounts of human-curated data. A detailed attribution of datasets can be found in the Granite 3.0 Technical Report, Granite 3.1 Technical Report (coming soon), and Accompanying Author List.
Infrastructure: We train Granite 3.1 Language Models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
Ethical Considerations and Limitations: Granite 3.1 Instruct models are primarily finetuned on English instruction-response pairs, with additional multilingual data covering eleven languages. Although the model can handle multilingual dialog use cases, its performance on those languages might not match its performance on English tasks. In such cases, introducing a small number of examples (few-shot prompting) can help the model generate more accurate outputs. While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We therefore urge the community to use this model with proper safety testing and tuning tailored to their specific tasks.
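As a rough illustration of the few-shot approach mentioned above, hypothetical example turns can simply be prepended to the chat before applying the chat template:

# hypothetical few-shot chat: two worked examples precede the real request
chat = [
    {"role": "user", "content": "Translate to German: Good morning."},
    {"role": "assistant", "content": "Guten Morgen."},
    {"role": "user", "content": "Translate to German: Thank you very much."},
    {"role": "assistant", "content": "Vielen Dank."},
    {"role": "user", "content": "Translate to German: See you tomorrow."},
]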
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.
Model tree for QuantFactory/granite-3.1-2b-instruct-GGUF
- Base model: ibm-granite/granite-3.1-2b-base