bibproj committed · verified
Commit 238e16d · 1 parent: 9ce3214

Update README.md

Files changed (1): README.md (+39 −4)
README.md CHANGED
@@ -1,6 +1,41 @@
  ---
- license: mit
- base_model:
- - zai-org/GLM-4.7
  library_name: mlx
- ---
  ---
+ language:
+ - en
+ - zh
  library_name: mlx
+ license: mit
+ pipeline_tag: text-generation
+ tags:
+ - mlx
+ base_model: zai-org/GLM-4.7
+ ---
+
+ # mlx-community/GLM-4.7-8bit
+
+ This model, [mlx-community/GLM-4.7-8bit](https://huggingface.co/mlx-community/GLM-4.7-8bit), was
+ converted to MLX format from [zai-org/GLM-4.7](https://huggingface.co/zai-org/GLM-4.7)
+ using mlx-lm version **0.30.0**.
+
+ You can find more MLX model quants for the Apple Mac Studio at https://huggingface.co/bibproj
+
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("mlx-community/GLM-4.7-8bit")
+
+ prompt = "hello"
+
+ # Wrap the prompt with the chat template if the tokenizer provides one
+ if tokenizer.chat_template is not None:
+     messages = [{"role": "user", "content": prompt}]
+     prompt = tokenizer.apply_chat_template(
+         messages, add_generation_prompt=True
+     )
+
+ response = generate(model, tokenizer, prompt=prompt, verbose=True)
+ ```
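For a quick one-off generation without writing Python, mlx-lm also ships a command-line entry point. A minimal sketch, assuming a recent mlx-lm install where the `mlx_lm.generate` console script is available (the exact invocation may differ between mlx-lm versions); the model weights are fetched from the Hub on first run:

```shell
# Hypothetical one-shot generation via mlx-lm's CLI; downloads
# mlx-community/GLM-4.7-8bit from the Hub if not already cached.
mlx_lm.generate --model mlx-community/GLM-4.7-8bit --prompt "hello"
```

This is convenient for smoke-testing a freshly converted quant before wiring it into application code.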