Update README.md
README.md CHANGED

````diff
@@ -303,10 +303,11 @@ print(output_text)
 - Minimum VRAM: 120 GB (e.g., Mac studio, DGX-Spark, AMD Ryzen AI Max+ 395)
 - Recommended: 128GB unified memory
 #### Steps
-1. Use llama.cpp:
+1. Use official llama.cpp:
+> the folder `Step-3.5-Flash/tree/main/llama.cpp` is **obsolete**
 ```bash
-git clone
-cd
+git clone https://github.com/ggml-org/llama.cpp
+cd llama.cpp
 ```
 2. Build llama.cpp on Mac:
 ```bash
````