KYUNGYONG committed
Commit 4c7398e · verified · 1 Parent(s): eea7586

Upload README.md with huggingface_hub

Files changed (1): README.md (+63, -0)

README.md ADDED
@@ -0,0 +1,63 @@
---
license: other
license_name: seallm
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
language:
- en
- zh
- hi
- es
- fr
- ar
- bn
- ru
- pt
- id
- ur
- de
- ja
- sw
- ta
- tr
- ko
- vi
- jv
- it
- ha
- th
- fa
- tl
- my
tags:
- multilingual
- babel
- mlx
- mlx-my-repo
base_model: Tower-Babel/Babel-9B
---

# KYUNGYONG/Babel-9B-4bit

The model [KYUNGYONG/Babel-9B-4bit](https://huggingface.co/KYUNGYONG/Babel-9B-4bit) was converted to MLX format from [Tower-Babel/Babel-9B](https://huggingface.co/Tower-Babel/Babel-9B) using mlx-lm version **0.21.5**.
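
The 4-bit weights were presumably produced with mlx-lm's converter. The exact invocation is not recorded in this commit, so the command below is only a sketch (the `--upload-repo` value is an assumption):

```bash
# Hypothetical reconstruction of the conversion step: -q quantizes the
# weights (4 bits per weight by default) and --upload-repo pushes the
# converted model to the Hugging Face Hub.
mlx_lm.convert --hf-path Tower-Babel/Babel-9B -q --upload-repo KYUNGYONG/Babel-9B-4bit
```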

## Use with mlx

MLX runs on Apple silicon; install the `mlx-lm` package first:

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit model and tokenizer from the Hub.
model, tokenizer = load("KYUNGYONG/Babel-9B-4bit")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in it so the
# model sees the format it expects; base models without one skip this.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
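
mlx-lm also ships a command-line entry point, so a quick smoke test needs no Python at all; a minimal sketch:

```bash
# One-off generation from the command line.
mlx_lm.generate --model KYUNGYONG/Babel-9B-4bit --prompt "hello"
```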