---
license: mit
base_model:
- inclusionAI/Ring-flash-2.0
---
## Introduction

This model was quantized with the fork at https://github.com/im0qianqian/llama.cpp.

For model inference, please download our release package from https://github.com/im0qianqian/llama.cpp/releases.
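For reference, quantization with llama.cpp generally follows a convert-then-quantize flow. The sketch below only prints the command it would run; the file names and the `Q4_K_M` type are illustrative assumptions, not the exact settings used for this repository.

```shell
# Sketch of a typical llama.cpp quantization step.
# File names and the Q4_K_M quantization type are assumptions.
SRC=ring-flash-2.0-f16.gguf    # GGUF converted from the original checkpoint
DST=ring-flash-2.0-Q4_K_M.gguf # quantized output
TYPE=Q4_K_M

# llama-quantize ships with the llama.cpp release package; uncomment to run
# against a real GGUF file:
# llama-quantize "$SRC" "$DST" "$TYPE"
echo "would run: llama-quantize $SRC $DST $TYPE"
```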
## Quick start

```bash
# Use a local model file
llama-cli -m my_model.gguf

# Launch an OpenAI-compatible API server
llama-server -m my_model.gguf
```
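Once `llama-server` is running, it can be queried like any OpenAI-compatible chat endpoint. A minimal stdlib-only sketch, assuming the server's default address `http://localhost:8080` and using `my_model` as a placeholder model name:

```python
# Query llama-server's OpenAI-compatible chat completions endpoint.
# The base URL and model name below are assumptions for illustration.
import json
import urllib.request


def build_chat_request(prompt: str, model: str = "my_model") -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def send_chat_request(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """POST the payload to /v1/chat/completions and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example (requires a running llama-server):
# print(send_chat_request("Hello!"))
```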
## Demo

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fde26773930f07f74aaf912/MC4h9G33YjvpboRA4LPfO.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fde26773930f07f74aaf912/0YcuTJFLs6k9K4Sgzd-UD.png)
## PR

We look forward to the following PRs being merged upstream:

- [#16063 model : add BailingMoeV2 support](https://github.com/ggml-org/llama.cpp/pull/16063)
- [#16028 Add support for Ling v2](https://github.com/ggml-org/llama.cpp/pull/16028)