mlx-community/CodeLlama-70b-Python-hf-4bit-MLX
This model was converted to MLX format from codellama/CodeLlama-70b-Python-hf.
Refer to the original model card for more details on the model.
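For reference, mlx-community conversions like this one are typically produced with the mlx-lm convert utility. The exact invocation used for this repository is not recorded here, so the snippet below is only a sketch assuming the standard 4-bit defaults.

```python
# Sketch of a typical 4-bit MLX conversion; the exact options used for this
# repository are an assumption, not a record of how it was actually built.
from mlx_lm import convert

convert(
    "codellama/CodeLlama-70b-Python-hf",          # source Hugging Face repo
    mlx_path="CodeLlama-70b-Python-hf-4bit-MLX",  # local output directory
    quantize=True,                                # quantize (q_bits defaults to 4)
)
```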
Use with mlx
```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-70b-Python-hf-4bit-MLX")
response = generate(model, tokenizer, prompt="Write python code for Fibonacci series.", verbose=True)
```

```python
# Make sure mlx-lm is installed:
#   pip install --upgrade mlx-lm
# If on a CUDA device, also: pip install mlx[cuda]

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-70b-Python-hf-4bit-MLX")
prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
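generate also accepts a max_tokens argument to cap the completion length. The variation below is only an illustration; 256 is an arbitrary example value, not a recommendation from the original card.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-70b-Python-hf-4bit-MLX")

# Cap the completion length; 256 is an arbitrary example value.
response = generate(
    model,
    tokenizer,
    prompt="Write python code for Fibonacci series.",
    max_tokens=256,
    verbose=True,
)
```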