WinstonDeng committed · verified · commit db48185 · 1 parent: 0f4b195

Update README.md

Files changed (1): README.md (+4 -3)
README.md CHANGED
@@ -303,10 +303,11 @@ print(output_text)
 - Minimum VRAM: 120 GB (e.g., Mac studio, DGX-Spark, AMD Ryzen AI Max+ 395)
 - Recommended: 128GB unified memory
 #### Steps
-1. Use llama.cpp:
+1. Use official llama.cpp:
+> the folder `Step-3.5-Flash/tree/main/llama.cpp` is **obsolete**
 ```bash
-git clone git@github.com:stepfun-ai/Step-3.5-Flash.git
-cd Step-3.5-Flash/llama.cpp
+git clone https://github.com/ggml-org/llama.cpp
+cd llama.cpp
 ```
 2. Build llama.cpp on Mac:
 ```bash