Update README.md
README.md CHANGED
@@ -11,34 +11,7 @@ tags:
 - gguf
 - llama-cpp
 - mlx
-- mlx-my-repo
 language:
 - en
 pipeline_tag: text-generation
 ---
-
-# fmasterpro27/LocoOperator-4B-mlx-4Bit
-
-The Model [fmasterpro27/LocoOperator-4B-mlx-4Bit](https://huggingface.co/fmasterpro27/LocoOperator-4B-mlx-4Bit) was converted to MLX format from [LocoreMind/LocoOperator-4B](https://huggingface.co/LocoreMind/LocoOperator-4B) using mlx-lm version **0.29.1**.
-
-## Use with mlx
-
-```bash
-pip install mlx-lm
-```
-
-```python
-from mlx_lm import load, generate
-
-model, tokenizer = load("fmasterpro27/LocoOperator-4B-mlx-4Bit")
-
-prompt="hello"
-
-if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
-    messages = [{"role": "user", "content": prompt}]
-    prompt = tokenizer.apply_chat_template(
-        messages, tokenize=False, add_generation_prompt=True
-    )
-
-response = generate(model, tokenizer, prompt=prompt, verbose=True)
-```
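Note on the removed card: it stated the model was converted with mlx-lm **0.29.1**, but the exact conversion command is not part of this commit. A 4-bit conversion like this one is typically produced with the `mlx_lm.convert` CLI; the sketch below is an assumption about the likely invocation, with repo names taken from the card and the flags inferred from the "4Bit" suffix.

```bash
# Sketch of the likely conversion command (not recorded in this commit).
# -q enables quantization; --q-bits 4 matches the "4Bit" suffix in the repo name.
mlx_lm.convert \
    --hf-path LocoreMind/LocoOperator-4B \
    -q --q-bits 4 \
    --upload-repo fmasterpro27/LocoOperator-4B-mlx-4Bit
```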
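For a quick smoke test of the converted weights without the removed Python snippet, mlx-lm also ships a generation CLI; a minimal sketch (the prompt value is just an example):

```bash
# One-off generation; fetches the model from the Hub on first use.
mlx_lm.generate --model fmasterpro27/LocoOperator-4B-mlx-4Bit --prompt "hello"
```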