bibproj committed · verified
Commit 9aa3da4 · Parent(s): d19f2a0

Update README.md

Files changed (1): README.md (+34 -1)
README.md CHANGED
@@ -1,7 +1,40 @@
  ---
+ library_name: mlx
+ pipeline_tag: text-generation
+ tags:
+ - transformers
+ - mlx
  license: other
  license_name: modified-mit
  license_link: https://github.com/MiniMax-AI/MiniMax-M2.1/blob/main/LICENSE
  base_model:
  - MiniMaxAI/MiniMax-M2.1
- ---
+ ---
+
+ # mlx-community/MiniMax-M2.1-8bit
+
+ This model [mlx-community/MiniMax-M2.1-8bit](https://huggingface.co/mlx-community/MiniMax-M2.1-8bit) was
+ converted to MLX format from [MiniMaxAI/MiniMax-M2.1](https://huggingface.co/MiniMaxAI/MiniMax-M2.1)
+ using mlx-lm version **0.29.1**.
+
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("mlx-community/MiniMax-M2.1-8bit")
+
+ prompt = "hello"
+
+ if tokenizer.chat_template is not None:
+     messages = [{"role": "user", "content": prompt}]
+     prompt = tokenizer.apply_chat_template(
+         messages, add_generation_prompt=True
+     )
+
+ response = generate(model, tokenizer, prompt=prompt, verbose=True)
+ ```
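The `if tokenizer.chat_template is not None` guard in the added README snippet checks whether the tokenizer ships a chat template before wrapping the raw prompt in role-tagged messages. A minimal, self-contained sketch of that control flow, using a hypothetical `StubTokenizer` (an illustrative stand-in, not mlx-lm's real tokenizer class, which renders a Jinja template instead):

```python
class StubTokenizer:
    # Stand-in for the two members the guard relies on: a non-None
    # chat_template attribute and an apply_chat_template method.
    chat_template = "<stub template>"

    def apply_chat_template(self, messages, add_generation_prompt=False):
        # A real tokenizer renders its Jinja chat template; this stub
        # just concatenates role-tagged turns.
        text = "".join(f"<|{m['role']}|>{m['content']}" for m in messages)
        if add_generation_prompt:
            # Append the marker that cues the model to start replying.
            text += "<|assistant|>"
        return text


tokenizer = StubTokenizer()
prompt = "hello"

# Same guard as in the README: only wrap the prompt in a message list
# when the tokenizer actually has a chat template.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

print(prompt)  # → <|user|>hello<|assistant|>
```

If `chat_template` were `None` (a base model with no chat format), the raw string would be passed to `generate` unchanged, which is exactly the fallback the README's snippet encodes.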