Update README.md

README.md CHANGED

````diff
@@ -49,7 +49,7 @@ language:
 <|im_start|>assistant
 ```
 
-- Context size: `
+- Context size: `32000`
 
 - Run as LlamaEdge service
 
@@ -58,7 +58,7 @@ language:
   llama-api-server.wasm \
   --model-name Qwen2.5-Coder-0.5B-Instruct \
   --prompt-template chatml \
-  --ctx-size
+  --ctx-size 32000
 ```
 
 - Run as LlamaEdge command app
 
@@ -67,7 +67,7 @@ language:
 wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen2.5-Coder-0.5B-Instruct-Q5_K_M.gguf \
   llama-chat.wasm \
   --prompt-template chatml \
-  --ctx-size
+  --ctx-size 32000
````
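The `--prompt-template chatml` flag in the commands above selects the ChatML format, whose prompt string ends with an open `<|im_start|>assistant` turn as shown in the README excerpt. For illustration only, a minimal sketch of how such a prompt is assembled (a hypothetical helper, not LlamaEdge's actual implementation):

```python
# Illustrative sketch of the ChatML prompt layout used by `--prompt-template chatml`.
# `chatml_prompt` is a hypothetical helper, not part of LlamaEdge.

def chatml_prompt(system_message: str, user_message: str) -> str:
    """Assemble a ChatML prompt ending with an open assistant turn."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful coding assistant.",
                    "Write hello world in Rust."))
```

The trailing `<|im_start|>assistant\n` leaves the assistant turn open so the model generates the reply; the model stops when it emits `<|im_end|>`.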