| | ---
|
| | base_model: Qwen/Qwen2.5-1.5B
|
| | language:
|
| | - zho
|
| | - eng
|
| | - fra
|
| | - spa
|
| | - por
|
| | - deu
|
| | - ita
|
| | - rus
|
| | - jpn
|
| | - kor
|
| | - vie
|
| | - tha
|
| | - ara
|
| | library_name: transformers
|
| | license: apache-2.0
|
| | license_link: https://huggingface.co/Qwen/Qwen2.5-Math-1.5B/blob/main/LICENSE
|
| | pipeline_tag: text-generation
|
| | tags:
|
| | - mlx
|
| | ---
|

# mlx-community/Qwen2.5-Math-1.5B-bf16

The model [mlx-community/Qwen2.5-Math-1.5B-bf16](https://huggingface.co/mlx-community/Qwen2.5-Math-1.5B-bf16) was converted to MLX format from [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) using mlx-lm version **0.18.1**.

## Use with mlx
|

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-Math-1.5B-bf16")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
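If the tokenizer ships a chat template, you can apply it before generation so the prompt matches the model's expected format. A minimal sketch, assuming a recent mlx-lm; the sample prompt is illustrative:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-Math-1.5B-bf16")

prompt = "Find x if 2x + 3 = 7."

# Wrap the raw prompt with the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```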