bibproj committed
Commit 371ac08 · verified · 1 parent: 40a42a6

Update README.md

Files changed (1):
  1. README.md +41 -4
README.md CHANGED
@@ -1,6 +1,43 @@
 ---
-license: mit
-base_model:
-- zai-org/GLM-4.7
+language:
+- en
+- zh
 library_name: mlx
----
+license: mit
+pipeline_tag: text-generation
+tags:
+- mlx
+base_model: zai-org/GLM-4.7
+---
+
+# mlx-community/GLM-4.7-4bit
+
+This model [mlx-community/GLM-4.7-4bit](https://huggingface.co/mlx-community/GLM-4.7-4bit) was
+converted to MLX format from [zai-org/GLM-4.7](https://huggingface.co/zai-org/GLM-4.7)
+using mlx-lm version **0.30.0**.
+
+You can find more similar MLX model quants for the Apple Mac Studio at https://huggingface.co/bibproj
+
+---
+
+## Use with mlx
+
+```bash
+pip install mlx-lm
+```
+
+```python
+from mlx_lm import load, generate
+
+model, tokenizer = load("mlx-community/GLM-4.7-4bit")
+
+prompt = "hello"
+
+if tokenizer.chat_template is not None:
+    messages = [{"role": "user", "content": prompt}]
+    prompt = tokenizer.apply_chat_template(
+        messages, add_generation_prompt=True
+    )
+
+response = generate(model, tokenizer, prompt=prompt, verbose=True)
+```
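The README added by this commit shows the Python API; the conversion it describes ("converted to MLX format … using mlx-lm") is typically done from the command line. A minimal sketch of that workflow, assuming the `mlx_lm.convert` and `mlx_lm.generate` entry points shipped with mlx-lm and an Apple-silicon Mac (paths and flags are illustrative, not taken from this commit):

```shell
# Quantize the original weights into a 4-bit MLX checkpoint.
# -q enables quantization; mlx-lm quantizes to 4 bits by default.
python -m mlx_lm.convert --hf-path zai-org/GLM-4.7 \
    --mlx-path GLM-4.7-4bit -q

# Run a quick generation against the published quant,
# mirroring the Python snippet above without any code.
python -m mlx_lm.generate --model mlx-community/GLM-4.7-4bit \
    --prompt "hello"
```

Converting a model of this size requires substantial unified memory, so most users will simply download the pre-converted `mlx-community/GLM-4.7-4bit` quant instead of running the first step themselves.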