---
base_model: Qwen/Qwen2.5-3B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- mlx
---
|
# TheBlueObserver/Qwen2.5-3B-Instruct-MLX
|
The model [TheBlueObserver/Qwen2.5-3B-Instruct-MLX](https://huggingface.co/TheBlueObserver/Qwen2.5-3B-Instruct-MLX) was converted to MLX format from [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using mlx-lm version **0.20.2**.
|
## Use with mlx
|
```bash
pip install mlx-lm
```
|
```python
from mlx_lm import load, generate

# Download the converted model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("TheBlueObserver/Qwen2.5-3B-Instruct-MLX")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in a user message
# so the instruct model sees the conversation format it was trained on
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a completion; verbose=True streams tokens as they are produced
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
|