```bash
pip install llama-cpp-python[server]
```

2. Ensure your model file is in the correct location:

```bash
models/Phi-4-mini-instruct-Q4_K_M-function_calling.gguf
```

## Running the Server

Start the llama-server with the following command:

```bash
llama-server \
  --model models/Phi-4-mini-instruct-Q4_K_M-function_calling.gguf \
  --port 8080 \
  --jinja
```
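Once the server is up, llama-server exposes an OpenAI-compatible HTTP API. As a minimal sketch, assuming the default `/v1/chat/completions` endpoint and the port 8080 configured above (the `build_chat_request` helper below is illustrative, not part of any library), a request can be constructed and sent with only the Python standard library:

```python
import json
import urllib.request

def build_chat_request(prompt, base_url="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for llama-server.

    The endpoint path and payload shape follow llama-server's
    OpenAI-compatible API; adjust base_url if you changed --port.
    """
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With the server running, send the request and print the reply:
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The `--jinja` flag tells llama-server to apply the model's chat template, which this function-calling variant of the model relies on to format messages correctly.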