---
license: apache-2.0
base_model: stepfun-ai/Step-3.5-Flash
library_name: mlx
pipeline_tag: text-generation
tags:
- mlx
---

# mlx-community/Step-3.5-Flash-bf16
This model [mlx-community/Step-3.5-Flash-bf16](https://huggingface.co/mlx-community/Step-3.5-Flash-bf16) was
converted to MLX format from [stepfun-ai/Step-3.5-Flash](https://huggingface.co/stepfun-ai/Step-3.5-Flash)
using mlx-lm version **0.30.7**.
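A conversion like this one can be reproduced with mlx-lm's `convert` API. The sketch below is an assumption about how the conversion was done, not the exact invocation used for this upload; the output directory name is hypothetical.

```python
from mlx_lm import convert

# Assumed reproduction of the conversion step: fetch the original
# weights and write them out in MLX format at bfloat16 precision.
convert(
    "stepfun-ai/Step-3.5-Flash",
    mlx_path="Step-3.5-Flash-bf16",  # hypothetical local output directory
    dtype="bfloat16",
)
```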

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the converted weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Step-3.5-Flash-bf16")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
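For longer generations you may prefer to stream tokens as they are produced instead of waiting for the full response. A minimal sketch using mlx-lm's `stream_generate`, assuming a recent mlx-lm where it yields response objects with a `.text` field; `max_tokens` is an illustrative value:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/Step-3.5-Flash-bf16")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk of generated text as soon as it is available.
for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```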