# mlx-community/granite-3b-code-instruct-4bit

Tags: Text Generation · Transformers · Safetensors · MLX · llama · code · conversational · text-generation-inference
## Use with Transformers

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mlx-community/granite-3b-code-instruct-4bit")
model = AutoModelForCausalLM.from_pretrained("mlx-community/granite-3b-code-instruct-4bit")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
This model was converted to MLX format from [ibm-granite/granite-3b-code-instruct](https://huggingface.co/ibm-granite/granite-3b-code-instruct) using mlx-lm version 0.12.0.
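A conversion along these lines can be reproduced with mlx-lm's `convert` utility. This is a minimal sketch, assuming the `convert` function exported by `mlx_lm`; argument names and defaults vary between releases, so check the version you have installed:

```python
from mlx_lm import convert

# Download the original Hugging Face weights, quantize them (4-bit by
# default in most releases), and write MLX-format files to the default
# output directory. The keyword name `quantize` is an assumption based
# on recent mlx-lm versions.
convert(
    "ibm-granite/granite-3b-code-instruct",
    quantize=True,
)
```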
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/granite-3b-code-instruct-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
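Since this is an instruct-tuned model, prompts generally work better when wrapped in the model's chat template rather than passed as raw text. A minimal sketch, assuming the tokenizer returned by `load` exposes the Hugging Face `apply_chat_template` method (it delegates to the underlying tokenizer in recent mlx-lm releases):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/granite-3b-code-instruct-4bit")

# Format the request with the chat template before generating.
messages = [
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# max_tokens is an assumption; the generate signature varies across versions.
response = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=True)
```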
## Model tree

- Base model: ibm-granite/granite-3b-code-base-2k
- This repository: 4-bit quantized conversion of the instruct fine-tune ibm-granite/granite-3b-code-instruct
## Evaluation results

Self-reported pass@1 scores:

| Benchmark | Python | JavaScript | Java | Go | C++ | Rust |
|---|---|---|---|---|---|---|
| HumanEvalSynthesis | 51.2 | 43.9 | 41.5 | 31.7 | 40.2 | 29.3 |
| HumanEvalExplain | 39.6 | 26.8 | 39.0 | 14.0 | 23.8 | 12.8 |
| HumanEvalFix | 26.8 | 28.0 | 33.5 | 27.4 | — | — |
## Use with a Transformers pipeline

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="mlx-community/granite-3b-code-instruct-4bit")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```