Update model card: remove transformers tag, add paper/project links

#1 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +8 -5
README.md CHANGED
@@ -2,7 +2,6 @@
 language:
 - zh
 - en
-library_name: transformers
 license: apache-2.0
 pipeline_tag: text-generation
 ---
@@ -13,14 +12,15 @@ pipeline_tag: text-generation
 
 <p align="center">
  <a href="https://github.com/OpenBMB/MiniCPM/" target="_blank">GitHub Repo</a> |
- <a href="https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf" target="_blank">Technical Report</a>
+ <a href="https://huggingface.co/papers/2506.07900" target="_blank">Paper: MiniCPM4: Ultra-Efficient LLMs on End Devices</a> |
+ <a href="https://huggingface.co/collections/openbmb/minicpm4-6841ab29d180257e940baa9b" target="_blank">Hugging Face Collection</a>
 </p>
 <p align="center">
 👋 Join us on <a href="https://discord.gg/3cGQn9b3YM" target="_blank">Discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a>
 </p>
 
 ## What's New
-- [2025.06.06] **MiniCPM4** series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find technical report [here](https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf).🔥🔥🔥
+- [2025.06.06] **MiniCPM4** series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find technical report [here](https://huggingface.co/papers/2506.07900).🔥🔥🔥
 
 ## MiniCPM4 Series
 MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieves this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.
@@ -60,7 +60,10 @@ MiniCPM 4 is an extremely efficient edge-side large model that has undergone eff
 ### Inference with [llama.cpp](https://github.com/ggml-org/llama.cpp)
 
 ```bash
-./llama-cli -c 1024 -m MiniCPM4-8B-Q4_K_M.gguf -n 1024 --top-p 0.7 --temp 0.7 --prompt "<|im_start|>user\n请写一篇关于人工智能的文章,详细介绍人工智能的未来发展和隐患。<|im_end|>\n<|im_start|>assistant\n"
+./llama-cli -c 1024 -m MiniCPM4-8B-Q4_K_M.gguf -n 1024 --top-p 0.7 --temp 0.7 --prompt "<|im_start|>user
+请写一篇关于人工智能的文章,详细介绍人工智能的未来发展和隐患。<|im_end|>
+<|im_start|>assistant
+"
 ```
 
 ## Statement
@@ -73,7 +76,7 @@ MiniCPM 4 is an extremely efficient edge-side large model that has undergone eff
 - This repository and MiniCPM models are released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
 
 ## Citation
-- Please cite our [paper](https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf) if you find our work valuable.
+- Please cite our [paper](https://huggingface.co/papers/2506.07900) if you find our work valuable.
 
 ```bibtex
 @article{minicpm4,
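Side note on the `--prompt` rewrite in this diff: inside double quotes, the shell does not translate `\n` into a newline, so the original one-line command passed a literal backslash-n to `llama-cli` unless the tool itself unescapes it. A minimal sketch of the quoting behavior (the `"a\nb"` string is just an illustration, not from the model card):

```shell
# "\n" in double quotes stays two characters (backslash + n); printf's %s
# copies it verbatim, while %b expands it to a real newline, which is what
# the multi-line prompt in the PR achieves by embedding literal newlines.
printf '%s' "a\nb" | wc -c   # 4 bytes: a, \, n, b
printf '%b' "a\nb" | wc -c   # 3 bytes: a, newline, b
```

Embedding real newlines in the quoted string, as the PR does, sidesteps the issue portably; builds of llama.cpp that support an escape-processing option are another route.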