Update README.md

README.md CHANGED

@@ -298,11 +298,10 @@ print(output_text)
 - Minimum VRAM: 120 GB (e.g., Mac studio, DGX-Spark, AMD Ryzen AI Max+ 395)
 - Recommended: 128GB unified memory
 #### Steps
-1.
+1. Use llama.cpp:
 ```bash
-git clone
-cd llama.cpp
-git checkout feature/step3.5-flash
+git clone git@github.com:stepfun-ai/Step-3.5-Flash.git
+cd Step-3.5-Flash/llama.cpp
 ```
 2. Build llama.cpp on Mac:
 ```bash