# mlx-community/granite-34b-code-instruct-8bit

The model mlx-community/granite-34b-code-instruct-8bit was converted to MLX format from ibm-granite/granite-34b-code-instruct using mlx-lm version 0.13.0.
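For reference, a conversion like this one can be reproduced with the `mlx_lm.convert` tool. This is a minimal sketch, not the exact command used for this checkpoint; the `--q-bits 8` flag is an assumption matching the 8-bit name:

```bash
# Quantize the original weights to 8-bit MLX format (illustrative flags)
python -m mlx_lm.convert \
    --hf-path ibm-granite/granite-34b-code-instruct \
    -q --q-bits 8
```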
## Use with Transformers

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mlx-community/granite-34b-code-instruct-8bit")
model = AutoModelForCausalLM.from_pretrained("mlx-community/granite-34b-code-instruct-8bit")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

Or use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="mlx-community/granite-34b-code-instruct-8bit")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the quantized weights from the Hub and load model + tokenizer
model, tokenizer = load("mlx-community/granite-34b-code-instruct-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
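Since this is an instruct model, prompts generally work better when wrapped in the model's chat template. A minimal sketch, assuming the tokenizer returned by `mlx_lm.load` exposes the standard `apply_chat_template` method (it wraps the underlying Hugging Face tokenizer); the example prompt is illustrative:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/granite-34b-code-instruct-8bit")

# Wrap the user turn in the model's chat template before generating
messages = [
    {"role": "user", "content": "Write a Python function that checks if a string is a palindrome."},
]
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```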
## Model tree

Base model: ibm-granite/granite-34b-code-base-8k
## Evaluation results

Self-reported pass@1:

| Task | Python | JavaScript | Java | Go | C++ | Rust |
| --- | --- | --- | --- | --- | --- | --- |
| HumanEvalSynthesis | 62.2 | 56.7 | 62.8 | 47.6 | 57.9 | 41.5 |
| HumanEvalExplain | 53.0 | 45.1 | 50.6 | 36.0 | 42.7 | 23.8 |
| HumanEvalFix | 54.9 | 47.6 | 55.5 | 51.2 | | |