---
tags:
- unsloth
- mlx
- mlx-my-repo
base_model: unsloth/Qwen3-Coder-30B-A3B-Instruct
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
---

# huggingtoots/unsloth-Qwen3-Coder-30B-A3B-Instruct-mlx-8Bit
|
|
| The Model [huggingtoots/unsloth-Qwen3-Coder-30B-A3B-Instruct-mlx-8Bit](https://huggingface.co/huggingtoots/unsloth-Qwen3-Coder-30B-A3B-Instruct-mlx-8Bit) was converted to MLX format from [unsloth/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct) using mlx-lm version **0.26.4**. |
|
|
| ## Toots' Note: |
This model was converted and quantized from unsloth's version of Qwen3-Coder-30B-A3B-Instruct, so it should include unsloth's chat template fixes.
|
|
| Please follow and support [unsloth's work](https://huggingface.co/unsloth) if you like it! |
|
|
| 🦛 <span style="color:#800080">If you want a free consulting session, </span>[fill out this form](https://forms.gle/xM9gw1urhypC4bWS6) <span style="color:#800080">to get in touch!</span> 🤗 |
|
|
|
| ## Use with mlx |
|
|
| ```bash |
| pip install mlx-lm |
| ``` |
|
|
| ```python |
| from mlx_lm import load, generate |
| |
model, tokenizer = load("huggingtoots/unsloth-Qwen3-Coder-30B-A3B-Instruct-mlx-8Bit")

prompt = "hello"
| |
| if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: |
| messages = [{"role": "user", "content": prompt}] |
| prompt = tokenizer.apply_chat_template( |
| messages, tokenize=False, add_generation_prompt=True |
| ) |
| |
| response = generate(model, tokenizer, prompt=prompt, verbose=True) |
| ``` |
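If you prefer not to write Python, mlx-lm also ships a command-line generator. The invocation below is a sketch using the standard `mlx_lm.generate` entry point; the prompt and `--max-tokens` value are arbitrary examples:

```bash
# One-off generation from the command line; downloads the model on first run.
mlx_lm.generate \
  --model huggingtoots/unsloth-Qwen3-Coder-30B-A3B-Instruct-mlx-8Bit \
  --prompt "Write a Python function that reverses a string." \
  --max-tokens 256
```

The CLI applies the tokenizer's chat template automatically when one is present, matching the behavior of the Python snippet above.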
|
|