Instructions to use QuantFactory/granite-3b-code-base-128k-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use QuantFactory/granite-3b-code-base-128k-GGUF with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="QuantFactory/granite-3b-code-base-128k-GGUF")

# Load model directly
from transformers import AutoModel

# Note: for GGUF repositories, transformers usually also needs the quantized
# file name, e.g. gguf_file="granite-3b-code-base-128k.Q2_K.gguf" (an assumption
# based on the file used in the llama-cpp-python example below).
model = AutoModel.from_pretrained("QuantFactory/granite-3b-code-base-128k-GGUF", dtype="auto")
- llama-cpp-python
How to use QuantFactory/granite-3b-code-base-128k-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/granite-3b-code-base-128k-GGUF",
    filename="granite-3b-code-base-128k.Q2_K.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
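Because this model supports up to 128K tokens of context, you may want to raise llama-cpp-python's default context window. A minimal sketch under the same assumptions as above (n_ctx is a standard Llama constructor parameter forwarded by from_pretrained; 32768 is an arbitrary example value):

# Request a larger context window than the default; raise n_ctx further
# (up to 131072) if you have enough memory for the KV cache.
llm_long = Llama.from_pretrained(
    repo_id="QuantFactory/granite-3b-code-base-128k-GGUF",
    filename="granite-3b-code-base-128k.Q2_K.gguf",
    n_ctx=32768,
)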
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/granite-3b-code-base-128k-GGUF with llama.cpp:
Install with Homebrew (macOS/Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M
Use Docker
docker model run hf.co/QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M
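However you start it, llama-server speaks the OpenAI API, so the server above can also be called from Python. A minimal sketch, assuming the default address 127.0.0.1:8080 and pip install openai (the model name passed to the client is informational, since llama-server serves whatever model it was started with):

# Call the llama-server started above via its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="not-needed")

resp = client.completions.create(
    model="granite-3b-code-base-128k",  # informational for llama-server
    prompt="def quicksort(arr):",
    max_tokens=256,
)
print(resp.choices[0].text)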
- LM Studio
- Jan
- vLLM
How to use QuantFactory/granite-3b-code-base-128k-GGUF with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "QuantFactory/granite-3b-code-base-128k-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantFactory/granite-3b-code-base-128k-GGUF",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

Use Docker
docker model run hf.co/QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M
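The running vLLM server can also be queried from Python with the official openai client, since vLLM implements the OpenAI API. A minimal sketch, assuming the server from above on its default port 8000:

# Query the vLLM server via its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.completions.create(
    model="QuantFactory/granite-3b-code-base-128k-GGUF",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(resp.choices[0].text)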
- SGLang
How to use QuantFactory/granite-3b-code-base-128k-GGUF with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "QuantFactory/granite-3b-code-base-128k-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantFactory/granite-3b-code-base-128k-GGUF",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "QuantFactory/granite-3b-code-base-128k-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantFactory/granite-3b-code-base-128k-GGUF",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

- Ollama
How to use QuantFactory/granite-3b-code-base-128k-GGUF with Ollama:
ollama run hf.co/QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M
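Ollama also exposes an OpenAI-compatible endpoint on its default port 11434, so the pulled model can be scripted from Python. A minimal sketch, assuming Ollama is running and using the model tag from the command above:

# Call the model through Ollama's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.completions.create(
    model="hf.co/QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M",
    prompt="Once upon a time,",
    max_tokens=256,
)
print(resp.choices[0].text)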
- Unsloth Studio
How to use QuantFactory/granite-3b-code-base-128k-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/granite-3b-code-base-128k-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/granite-3b-code-base-128k-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/granite-3b-code-base-128k-GGUF to start chatting
- Docker Model Runner
How to use QuantFactory/granite-3b-code-base-128k-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M
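Docker Model Runner also provides an OpenAI-compatible API. A heavily hedged sketch: the base URL below (host TCP access on port 12434 with an /engines/v1 path) is an assumption that varies across Docker Desktop versions, so verify it against your local setup:

# Assumed endpoint for Docker Model Runner's OpenAI-compatible API --
# check your Docker Desktop configuration for the actual host/port/path.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="docker")

resp = client.completions.create(
    model="hf.co/QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M",
    prompt="Once upon a time,",
    max_tokens=256,
)
print(resp.choices[0].text)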
- Lemonade
How to use QuantFactory/granite-3b-code-base-128k-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/granite-3b-code-base-128k-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.granite-3b-code-base-128k-GGUF-Q4_K_M
List all available models
lemonade list
QuantFactory/granite-3b-code-base-128k-GGUF
This is a quantized version of ibm-granite/granite-3b-code-base-128k created using llama.cpp.
Original Model Card
Granite-3B-Code-Base-128K
Model Summary
Granite-3B-Code-Base-128K extends the context length of Granite-3B-Code-Base from 2K to 128K through continual pretraining on the original training data, but with repository-level file packing and per-language length upsampling, which we found to be critical for long-context pretraining. We adopt a progressive training strategy in which we doubled the context window until it reached the desired length of 128K, appropriately adjusting the RoPE theta at each stage. We trained on 4B tokens in total across all stages, which is only 0.1% of Granite-3B-Code-Base's original pre-training data.
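Concretely, "adjusting RoPE theta" means raising the base of the rotary position embedding frequencies so that positional wavelengths stretch across the longer window. A minimal illustrative sketch using the standard RoPE parameterization (the base values here are examples, not the exact ones used in training):

import torch

def rope_inv_freq(base: float, dim: int) -> torch.Tensor:
    # Standard RoPE inverse frequencies: base^(-2i/dim) for i = 0 .. dim/2 - 1.
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

# A larger base yields lower frequencies, i.e. longer positional wavelengths,
# which is what lets attention distinguish positions across a 128K window.
short_ctx = rope_inv_freq(base=10_000.0, dim=128)      # typical short-context base
long_ctx = rope_inv_freq(base=10_000_000.0, dim=128)   # example long-context base
print(short_ctx[-1].item(), long_ctx[-1].item())       # lowest frequency drops as base grows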
- Developers: IBM Research
- GitHub Repository: ibm-granite/granite-code-models
- Paper: Scaling Granite Code Models to 128K Context
- Release Date: July 18th, 2024
- License: Apache 2.0.
Usage
Intended use
Prominent enterprise use cases of LLMs in software engineering productivity with 128K context length support include code generation, code explanation, code fixing, generating unit tests, generating documentation, addressing technical debt issues, vulnerability detection, code translation, and more. All Granite Code Base models, including the 3B parameter model, are able to handle these tasks, as they were trained on a large amount of code data from 116 programming languages.
Generation
This is a simple example of how to use the Granite-3B-Code-Base-128K model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-3b-code-base-128k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "def generate():"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
    input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
    print(i)
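Note that model.generate falls back to a short default generation length. In practice you would usually pass generation arguments explicitly; a minimal variation of the call above (max_new_tokens is a standard transformers argument, and 128 is an arbitrary example):

# Generate up to 128 new tokens instead of relying on the default length.
output = model.generate(**input_tokens, max_new_tokens=128)
print(tokenizer.batch_decode(output)[0])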
Training Data
Starting from the base Granite model, this model was further pretrained on repository-level code data with per-language context-length oversampling, allowing it to effectively utilize up to 128K tokens of context. This continued training stage focused on a curated selection of programming languages, such as Python, C, C++, Go, Java, JavaScript, and TypeScript.
Infrastructure
We train the Granite Code models using two of IBM's supercomputing clusters, Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs, respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.
Ethical Considerations and Limitations
The use of Large Language Models involves risks and ethical considerations that people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The Granite-3B-Code-Base-128K model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment and therefore may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset, due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the Granite-3B-Code-Base-128K model with ethical intentions and in a responsible way.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.
Datasets used to train QuantFactory/granite-3b-code-base-128k-GGUF
bigcode/starcoderdata
codeparrot/github-code-clean
Paper for QuantFactory/granite-3b-code-base-128k-GGUF: Scaling Granite Code Models to 128K Context
Evaluation results
- pass@1 on HumanEvalSynthesis (Python): 36.0 (self-reported)
- pass@1 on HumanEvalSynthesis (Average): 30.5 (self-reported)
- pass@1 on HumanEvalExplain (Average): 22.4 (self-reported)
- pass@1 on HumanEvalFix (Average): 19.9 (self-reported)
- pass@1 (thresh=0.5) on RepoQA (Python@16K): 40.0 (self-reported)
- pass@1 (thresh=0.5) on RepoQA (C++@16K): 36.0 (self-reported)
- pass@1 (thresh=0.5) on RepoQA (Java@16K): 37.0 (self-reported)
- pass@1 (thresh=0.5) on RepoQA (TypeScript@16K): 27.0 (self-reported)
- pass@1 (thresh=0.5) on RepoQA (Rust@16K): 29.0 (self-reported)
- Exact Match@4K on LCC (Balanced): 54.6 (self-reported)
- Exact Match@8K on LCC (Balanced): 56.8 (self-reported)
- Exact Match@16K on LCC (Balanced): 52.2 (self-reported)
- Exact Match@32K on LCC (Balanced): 57.8 (self-reported)
- Exact Match@4K on RepoBench-P (Balanced): 39.8 (self-reported)
- Exact Match@8K on RepoBench-P (Balanced): 46.8 (self-reported)
- Exact Match@16K on RepoBench-P (Balanced): 43.1 (self-reported)
