🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.

## Use with Ollama

```bash
ollama run "hf.co/olegshulyakov/codegemma-1.1-2b-GGUF:Q5_K_XL"
```

## Use with LM Studio

```bash
lms load "olegshulyakov/codegemma-1.1-2b-GGUF"
```

## Use with llama.cpp CLI

```bash
llama-cli -hf olegshulyakov/codegemma-1.1-2b-GGUF:Q5_K_XL -p "The meaning to life and the universe is"
```

## Use with llama.cpp Server

```bash
llama-server -hf olegshulyakov/codegemma-1.1-2b-GGUF:Q5_K_XL -c 4096
```
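
Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. As a minimal sketch, assuming the server started above is listening on the default local port 8080 (the prompt text is illustrative):

```shell
# Send a chat request to the local llama-server via its
# OpenAI-compatible endpoint. Assumes the default address
# 127.0.0.1:8080; pass --host/--port to llama-server to change it.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Write a hello world in Python."}
        ]
      }'
```

The response is a JSON object in the OpenAI chat-completion format, so existing OpenAI client libraries can be pointed at this endpoint as well.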