---
language:
- code
license: llama2
tags:
- llama-2
- mlx
pipeline_tag: text-generation
---

# mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX
This model was converted to MLX format from [`codellama/CodeLlama-7b-Instruct-hf`](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf).
Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) for more details on the model.
## Use with mlx

First install the `mlx-lm` package:

```bash
pip install mlx-lm
```
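mlx-lm also includes a small command-line generator, so you can try the model without writing any code. A minimal sketch, assuming the `mlx_lm.generate` entry point and flags shipped in recent mlx-lm releases:

```bash
# Hedged example: module path and flags follow recent mlx-lm releases
python -m mlx_lm.generate \
  --model mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX \
  --prompt "Write a Python function that reverses a string."
```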
Then load the model and generate text with the Python API:

```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX")

# Generate a completion; verbose=True streams tokens as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
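Since this is an instruct-tuned model, it usually responds better when the prompt is wrapped in the model's chat template. A minimal sketch, assuming the converted tokenizer keeps the original model's chat template and that mlx-lm's tokenizer wrapper exposes the Hugging Face `apply_chat_template` method, as it does in recent releases:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX")

# Format the user message with the model's chat template; tokenize=False
# returns the templated prompt as a string that generate() accepts.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```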