Update README.md #2
by reach-vb · opened

README.md CHANGED
````diff
@@ -7,6 +7,7 @@ tags:
 - llama-cpp
 - gguf-my-repo
 base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
+pipeline_tag: text-generation
 ---

 # ngxson/SmolLM2-1.7B-Instruct-Q4_K_M-GGUF
@@ -51,4 +52,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo ngxson/SmolLM2-1.7B-Instruct-Q4_K_M-GGUF --hf-file smollm2-1.7b-instruct-q4_k_m.gguf -c 2048
-```
+```
````
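Once the `llama-server` command from the diff is running, the model can be queried over HTTP. A minimal sketch, assuming the server's default host and port (`127.0.0.1:8080`) and its OpenAI-compatible `/v1/chat/completions` route; the helper names and `max_tokens` value are illustrative, not part of the repo:

```python
import json
import urllib.request

# Assumption: llama-server listens on 127.0.0.1:8080 by default and
# exposes an OpenAI-compatible chat-completion endpoint.
SERVER = "http://127.0.0.1:8080"

def build_chat_request(prompt: str, host: str = SERVER) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for llama-server."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def chat(prompt: str, host: str = SERVER) -> str:
    """Send the request and return the assistant reply (needs a running server)."""
    with urllib.request.urlopen(build_chat_request(prompt, host)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

With the server started as shown above, `print(chat("Hello!"))` would return a completion from the Q4_K_M quant of SmolLM2-1.7B-Instruct.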