Instructions for using mlx-community/CodeLlama-13b-Python-4bit-MLX with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use mlx-community/CodeLlama-13b-Python-4bit-MLX with MLX:
```python
# Make sure mlx-lm is installed:
# pip install --upgrade mlx-lm
# If on a CUDA device, also: pip install mlx[cuda]

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-13b-Python-4bit-MLX")

prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
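Because this is a Python code model, completion-style prompts tend to work better than story openers. A minimal sketch using the same `load`/`generate` API as above; the `fibonacci` prompt and the `max_tokens` value are illustrative assumptions, not part of the official snippet:

```python
# A minimal sketch: give the model a Python function signature and let it
# complete the body. The prompt and max_tokens value are illustrative.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-13b-Python-4bit-MLX")

prompt = (
    "def fibonacci(n: int) -> int:\n"
    '    """Return the n-th Fibonacci number."""\n'
)
completion = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
```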
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- MLX LM
How to use mlx-community/CodeLlama-13b-Python-4bit-MLX with MLX LM:
Generate text or start a chat session:
```bash
# Install MLX LM
uv tool install mlx-lm

# Generate some text
mlx_lm.generate --model "mlx-community/CodeLlama-13b-Python-4bit-MLX" --prompt "Once upon a time"
```
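The same CLI can also run an interactive chat. A minimal sketch, assuming your installed mlx-lm version ships the `mlx_lm.chat` entry point:

```bash
# Start an interactive chat session with the model (assumes the mlx_lm.chat
# entry point is available in your mlx-lm version)
mlx_lm.chat --model "mlx-community/CodeLlama-13b-Python-4bit-MLX"
```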
Update README.md
README.md CHANGED:

```diff
@@ -8,6 +8,9 @@ tags:
 pipeline_tag: text-generation
 ---
 
+[added image: not recoverable from the page]
+
+
 # mlx-community/CodeLlama-13b-Python-4bit
 This model was converted to MLX format from [`codellama/CodeLlama-13b-Python-hf`]().
 Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) for more details on the model.
```