Update README.md

#1
by mrtoots - opened
Files changed (1)
  1. README.md +0 -45
README.md CHANGED
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
- mlx
- mlx-my-repo
base_model: unsloth/gpt-oss-120b
---

# mrtoots/unsloth-gpt-oss-120b-mlx-8Bit

The model [mrtoots/unsloth-gpt-oss-120b-mlx-8Bit](https://huggingface.co/mrtoots/unsloth-gpt-oss-120b-mlx-8Bit) was converted to MLX format from [unsloth/gpt-oss-120b](https://huggingface.co/unsloth/gpt-oss-120b) using mlx-lm version **0.26.4**.
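For reference, a conversion like the one described above is typically produced with mlx-lm's `mlx_lm.convert` tool. A minimal sketch follows; the `--q-bits 8` value is assumed from the "8Bit" suffix in this repo's name, and the exact flags used for this model may have differed:

```shell
# Sketch: convert the base model to MLX and quantize to 8 bits.
# --q-bits 8 is an assumption based on this repo's "8Bit" name.
pip install mlx-lm
mlx_lm.convert \
    --hf-path unsloth/gpt-oss-120b \
    -q --q-bits 8
```

The converted weights land in a local `mlx_model` directory by default, ready to load with `mlx_lm.load`.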

## Toots' Note:

This model was converted and quantized using unsloth's version of gpt-oss-120b.

Please follow and support [unsloth's work](https://huggingface.co/unsloth) if you like it!

🦛 <span style="color:#800080">If you want a free consulting session, </span>[fill out this form](https://forms.gle/xM9gw1urhypC4bWS6) <span style="color:#800080">to get in touch!</span> 🤗
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 8-bit MLX weights and matching tokenizer from the Hub.
model, tokenizer = load("mrtoots/unsloth-gpt-oss-120b-mlx-8Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
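
mlx-lm also ships a command-line generator, so the same quick check can be run without writing any Python. A minimal sketch, using the repo id documented above:

```shell
# Sketch: one-off generation via the mlx-lm CLI.
mlx_lm.generate \
    --model mrtoots/unsloth-gpt-oss-120b-mlx-8Bit \
    --prompt "hello" \
    --max-tokens 100
```

Note that the 8-bit 120B weights are large; the first run downloads them from the Hub and requires a machine with enough unified memory to hold them.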