---
language: en
license: mit
library_name: mlx
pipeline_tag: text-generation
tags:
- transformers
- mlx
base_model:
- MiniMaxAI/MiniMax-M2
---

# mlx-community/MiniMax-M2-mlx-8bit-gs32

This model [mlx-community/MiniMax-M2-mlx-8bit-gs32](https://huggingface.co/mlx-community/MiniMax-M2-mlx-8bit-gs32) was
converted to MLX format from [MiniMaxAI/MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2)
using mlx-lm version **0.28.1**.

## Recipe

* 8-bit quantization
* group size 32
* 9 bits per weight (bpw)
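The 9 bpw figure follows from the per-group overhead of grouped affine quantization: besides the 8-bit values, each group of 32 weights stores its own scale and bias. A quick sanity check (assuming, as a sketch, that the scale and bias are each stored in fp16):

```python
# Effective bits per weight for grouped affine quantization.
# Assumption: each group stores one fp16 scale and one fp16 bias.
bits = 8          # quantized weight width
group_size = 32   # weights sharing one scale/bias pair
overhead = (16 + 16) / group_size  # scale + bias bits amortized per weight
bpw = bits + overhead
print(bpw)  # 9.0
```

Halving the group size would double this overhead (8 + 2 = 10 bpw), which is the usual accuracy-vs-size trade-off when picking a group size.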
---

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/MiniMax-M2-mlx-8bit-gs32")

prompt = "hello"

# Apply the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
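For a quick test without writing any Python, mlx-lm also ships a command-line entry point (flag names below are from recent mlx-lm releases; check `mlx_lm.generate --help` on your installed version):

```shell
# Command sketch: generates from the quantized model directly.
# Downloads the model from the Hub on first use.
mlx_lm.generate --model mlx-community/MiniMax-M2-mlx-8bit-gs32 \
  --prompt "hello"
```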