Instructions for using mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX with MLX:
```python
# Make sure mlx-lm is installed:
#   pip install --upgrade mlx-lm
# If on a CUDA device, also: pip install mlx[cuda]

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX")
prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
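Since this is an instruct-tuned model, wrapping the raw prompt in the model's chat template usually yields better completions than plain continuation. A minimal sketch, assuming the loaded tokenizer exposes the standard Hugging Face `chat_template` / `apply_chat_template` interface:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX")

prompt = "Write a Python function that reverses a string."

# Wrap the prompt in the chat template if the tokenizer defines one
# (assumption: the tokenizer follows the standard Hugging Face interface)
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```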
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- MLX LM
How to use mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX with MLX LM:
Generate or start a chat session:
```bash
# Install MLX LM
uv tool install mlx-lm

# Generate some text
mlx_lm.generate --model "mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX" --prompt "Once upon a time"
```
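For the chat-session half of the step above, recent mlx-lm releases also ship an interactive `mlx_lm.chat` entry point; a minimal sketch, assuming your installed version provides it:

```bash
# Start an interactive chat session with the same model
mlx_lm.chat --model "mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX"
```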
Update README.md (#2), opened by sbnc.

README.md (changed):
```diff
@@ -6,6 +6,7 @@ tags:
 - llama-2
 - mlx
 pipeline_tag: text-generation
+new_version: mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2
 ---
 
 # mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX
@@ -22,4 +23,4 @@ from mlx_lm import load, generate
 
 model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX")
 response = generate(model, tokenizer, prompt="hello", verbose=True)
-```
+```
```
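The `new_version` field added by this PR is model-card front matter that points visitors to a successor repository. Assuming that repo is published under the name given in the diff, it loads the same way as the original; a minimal sketch:

```python
from mlx_lm import load, generate

# Successor repo taken from the new_version field in the diff above
model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```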