---
license: apache-2.0
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
<div align="center">
<img src="https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm_logo.png?raw=true" width="500em" ></img>
</div>

## Usage
### Install [mlx-lm](https://github.com/ml-explore/mlx-lm.git)
```bash
pip install mlx-lm
```
### Inference
```python
from mlx_lm import load, generate

model_path = "MiniCPM4.1-8B-MLX"
model, tokenizer = load(model_path)

messages = [{"role": "user", "content": "北京有什么好玩的地方?"}]  # "What are some fun places to visit in Beijing?"

# To enable thinking mode, build the prompt like this:
# prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
# To disable thinking mode, build the prompt like this:
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False, enable_thinking=False)

response = generate(
    model=model,
    tokenizer=tokenizer,
    prompt=prompt,
    max_tokens=1500
)
print(response)
```
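For long responses it can be nicer to print tokens as they arrive rather than waiting for the full completion. This is a minimal sketch using mlx-lm's streaming API, assuming the same local model path as above and that `stream_generate` is available in your installed version of mlx-lm:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("MiniCPM4.1-8B-MLX")
messages = [{"role": "user", "content": "北京有什么好玩的地方?"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False, enable_thinking=False
)

# Each yielded chunk carries the newly generated text; print it incrementally.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=1500):
    print(chunk.text, end="", flush=True)
print()
```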

<p align="center">
<a href="https://github.com/OpenBMB/MiniCPM/" target="_blank">GitHub Repo</a> |
<a href="https://arxiv.org/abs/2506.07900" target="_blank">Technical Report</a> |
<a href="https://mp.weixin.qq.com/s/KIhH2nCURBXuFXAtYRpuXg?poc_token=HBIsUWijxino8oJ5s6HcjcfXFRi0Xj2LJlxPYD9c">Join Us</a>
</p>
<p align="center">
👋 Contact us in <a href="https://discord.gg/3cGQn9b3YM" target="_blank">Discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a>
</p>