Xidong committed · verified
Commit 0a7afe3 · 1 Parent(s): 6433cbf

Update README.md

Files changed (1):
  1. README.md (+18 −3)
README.md CHANGED
@@ -32,7 +32,22 @@ RobotAI: (1.0, -0.3)
 
 
 ## ℹ️ Usage
-1. DownLoad [Model]() and Follow [Qwen.cpp](https://github.com/QwenLM/qwen.cpp.git) get model.bin and qwen.tiktoken.
+1. Download the 🤗 [Model](https://huggingface.co/FreedomIntelligence/EmbodyAICar) to get model.bin.
+```
+cd EmbodyAICar
+git submodule update --init --recursive
+python qwen_cpp/convert.py -i {Model_Path} -t {type} -o robot1_8b-ggml.bin
+```
+You are free to try any of the quantization types below by specifying `-t <type>`:
+
+- q4_0: 4-bit integer quantization with fp16 scales.
+- q4_1: 4-bit integer quantization with fp16 scales and minimum values.
+- q5_0: 5-bit integer quantization with fp16 scales.
+- q5_1: 5-bit integer quantization with fp16 scales and minimum values.
+- q8_0: 8-bit integer quantization with fp16 scales.
+- f16: half-precision floating-point weights without quantization.
+- f32: single-precision floating-point weights without quantization.
+
 2. Install the serial.tar.gz package:
 ```
 cd serial
@@ -44,6 +59,7 @@ RobotAI: (1.0, -0.3)
 cmake --build build -j --config Release
 ```
 4. Now you can chat with and control your AI car using the quantized RobotAI model by running:
+ - qwen.tiktoken is in the model directory; the sample prompt 请快速向前 means "please move forward quickly".
 ```
 ./build/bin/main -m robot1_8b-ggml.bin --tiktoken qwen.tiktoken -p 请快速向前
 ```
@@ -79,6 +95,5 @@ Please use the following citation if you intend to use our dataset for training
 }
 ```
 
-
 ## 🤖 Acknowledgement
-- We thank [Qwen.cpp](https://github.com/QwenLM/qwen.cpp.git) and [llama.cpp](https://github.com/ggerganov/llama.cpp) for their excellent work.
+- We thank [Qwen.cpp](https://github.com/QwenLM/qwen.cpp.git) and [llama.cpp](https://github.com/ggerganov/llama.cpp) for their excellent work.
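The quantization types added to the Usage section all follow the same idea: split the weights into fixed-size blocks and store each block as a small fp16 scale plus low-bit integers. A minimal sketch of the q8_0 variant, assuming an illustrative 32-element block size (the real qwen.cpp/ggml kernels are C++ and lay the blocks out differently):

```python
import struct

BLOCK = 32  # illustrative block size for this sketch


def quantize_q8_0(weights):
    """Toy q8_0: store each block as one fp16 scale plus signed 8-bit integers."""
    blocks = []
    for i in range(0, len(weights), BLOCK):
        chunk = weights[i:i + BLOCK]
        scale = max(abs(x) for x in chunk) / 127.0 or 1.0  # avoid a zero scale
        # emulate the fp16 scale by round-tripping through half precision
        scale = struct.unpack("e", struct.pack("e", scale))[0]
        q = [max(-127, min(127, round(x / scale))) for x in chunk]
        blocks.append((scale, q))
    return blocks


def dequantize_q8_0(blocks):
    """Reverse the mapping: multiply each int8 value by its block's fp16 scale."""
    return [s * v for s, qs in blocks for v in qs]
```

The q4/q5 variants shrink the integers further, and the `_1` variants additionally store a per-block minimum so that asymmetric value ranges waste fewer quantization levels.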
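The hunk headers above show the model replying with a tuple such as `RobotAI: (1.0, -0.3)`. Assuming that pair is the drive command the host forwards to the car (the README shows only the raw tuple, so the field meanings and the helper name here are our assumptions), a host-side parser might look like:

```python
import re

# Assumed reply format "RobotAI: (<a>, <b>)" -- that the two numbers form the
# car's drive command is our reading of the example, not documented behavior.
_REPLY = re.compile(r"RobotAI:\s*\(\s*(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)\s*\)")


def parse_robot_reply(line):
    """Return the (first, second) command pair from a model reply, or None."""
    m = _REPLY.search(line)
    return None if m is None else (float(m.group(1)), float(m.group(2)))
```

A real deployment would then write the parsed pair to the serial port opened via the serial package built in step 2.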