Instructions for using ibm-granite/granite-8b-code-instruct-4k with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use ibm-granite/granite-8b-code-instruct-4k with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ibm-granite/granite-8b-code-instruct-4k")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-8b-code-instruct-4k")
model = AutoModelForCausalLM.from_pretrained("ibm-granite/granite-8b-code-instruct-4k")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
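For an 8B model, full-precision inference is often impractical on consumer hardware. A common variant (a sketch, not from the model card; the dtype and device choices are assumptions to tune for your setup) loads the weights in half precision and maps them across available GPUs:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-8b-code-instruct-4k")
# Load half-precision weights and shard them automatically across available devices
model = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-8b-code-instruct-4k",
    torch_dtype=torch.float16,
    device_map="auto",
)
```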
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ibm-granite/granite-8b-code-instruct-4k with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ibm-granite/granite-8b-code-instruct-4k"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ibm-granite/granite-8b-code-instruct-4k",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
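Because the server exposes an OpenAI-compatible API, you can also call it from Python with the standard openai client instead of curl. A minimal sketch (assumes `pip install openai` and the default port 8000; the api_key value is a placeholder, since vLLM does not check it by default):

```python
from openai import OpenAI

# Point the client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="ibm-granite/granite-8b-code-instruct-4k",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```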
- SGLang
How to use ibm-granite/granite-8b-code-instruct-4k with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ibm-granite/granite-8b-code-instruct-4k" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ibm-granite/granite-8b-code-instruct-4k",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "ibm-granite/granite-8b-code-instruct-4k" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ibm-granite/granite-8b-code-instruct-4k",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
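Both the vLLM and SGLang endpoints above also support streamed responses: add "stream": true to the request body and the server returns incremental server-sent-event chunks instead of a single JSON object. A sketch against the SGLang server started above:

```bash
# Same request, but the response arrives as "data: {...}" SSE chunks
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ibm-granite/granite-8b-code-instruct-4k",
    "stream": true,
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```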
- Docker Model Runner
How to use ibm-granite/granite-8b-code-instruct-4k with Docker Model Runner:
```bash
docker model run hf.co/ibm-granite/granite-8b-code-instruct-4k
```
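By default this opens an interactive chat session. Docker Model Runner also accepts a one-shot prompt as a trailing argument (assuming a recent Docker Desktop/Engine with Model Runner enabled; check `docker model --help` for your version):

```bash
# Pass a prompt to get a single completion instead of an interactive session
docker model run hf.co/ibm-granite/granite-8b-code-instruct-4k "Write a Python function that reverses a string."
```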
Function calling and Streaming support
Hi,
I've been exploring the IBM Granite models and have a couple of questions I'd appreciate some help with.
Function calling: I haven't found any documentation or resources indicating whether IBM Granite models support function calling, i.e., having the model produce structured calls to external functions or tools. Can anyone confirm whether this feature is supported?
Streaming: Similarly, I'm curious whether IBM Granite models support streaming. Any pointers or resources in this regard would be greatly appreciated.
I've searched through both the GitHub repo and the Hugging Face documentation but haven't found clear information on these features. If anyone has any insights, it would be a great help.
Thanks.
@skumarai Function calling might work with the model, but it's not an actively trained feature AFAIK.
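If you want to experiment anyway, the usual workaround for models without trained tool support is prompt-based tool calling: describe the tools in the prompt, ask for a JSON reply, then parse and dispatch it yourself. A rough sketch (the tool schema, prompt wording, and dispatch logic are my own, and an untrained model may well emit malformed JSON):

```python
import json
from transformers import pipeline

pipe = pipeline("text-generation", model="ibm-granite/granite-8b-code-instruct-4k")

# Hypothetical tool we want the model to "call"
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

system = (
    "You can call the tool get_weather(city: str). "
    'To call it, reply with JSON only: {"tool": "get_weather", "arguments": {"city": "..."}}'
)
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "What is the weather in Paris?"},
]
reply = pipe(messages, max_new_tokens=64)[0]["generated_text"][-1]["content"]

# Parse and dispatch, guarding against non-JSON output
try:
    call = json.loads(reply.strip())
    print(TOOLS[call["tool"]](**call["arguments"]))
except (json.JSONDecodeError, KeyError, TypeError):
    print("No parseable tool call:", reply)
```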
Regarding streaming: that's less a property of the model than of the serving stack. Plain Transformers does have basic token streaming (TextStreamer / TextIteratorStreamer), but for serving with a proper streaming API I suggest you take a look at vLLM: https://github.com/vllm-project/vllm
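For reference, a minimal streaming sketch with plain Transformers using TextIteratorStreamer (the prompt and generation settings are illustrative; with an OpenAI-compatible server like vLLM or SGLang you would instead pass "stream": true in the request, as in the curl example above):

```python
from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

model_id = "ibm-granite/granite-8b-code-instruct-4k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a function to add two numbers."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# skip_prompt drops the echoed input; the streamer then yields decoded text chunks
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# generate() blocks, so run it in a background thread and consume the stream here
thread = Thread(
    target=model.generate,
    kwargs=dict(inputs=inputs, streamer=streamer, max_new_tokens=100),
)
thread.start()
for chunk in streamer:
    print(chunk, end="", flush=True)
thread.join()
```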