liyadong committed
Commit af1cd40 · Parent(s): b20e44e
update llama.cpp note
Browse files:
- README.md +1 -1
- README_ZH.md +2 -1
README.md
CHANGED
@@ -209,7 +209,7 @@ from modelscope import AutoModelForCausalLM, AutoTokenizer
 
 ### llama.cpp
 
-
+llama.cpp enables LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware. It is now supported; please refer to the [support-megrez branch](https://github.com/infinigence/llama.cpp/tree/support-megrez) for details.
 
 ## How to Deploy
 
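The note added above points at a fork branch rather than upstream llama.cpp. A minimal sketch of how that branch might be fetched, built, and used follows the standard llama.cpp workflow; the GGUF filename and the prompt are placeholders I am assuming, not taken from this commit:

```shell
# Clone only the support-megrez branch of the fork referenced in the note
git clone --branch support-megrez --single-branch \
    https://github.com/infinigence/llama.cpp.git
cd llama.cpp

# Standard llama.cpp CMake build (Release)
cmake -B build
cmake --build build --config Release -j

# Run inference; "megrez.gguf" is a placeholder model path, not from this commit
./build/bin/llama-cli -m ./megrez.gguf -p "Hello" -n 128
```

Build targets and binary names can differ across llama.cpp revisions, so check the branch's own README before relying on the exact paths above.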
README_ZH.md
CHANGED
@@ -196,7 +196,8 @@ from modelscope import AutoModelForCausalLM, AutoTokenizer
 ```
 
 ### llama.cpp
-
+
+llama.cpp enables LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware. It is now supported; please see the [support-megrez branch](https://github.com/infinigence/llama.cpp/tree/support-megrez) for details.
 
 ## How to Deploy
 