Commit 56f262a by Philip Monk (parent: 4ebff14): update readme to reflect llama.cpp and ollama released support

README.md (changed):
---
license: apache-2.0
---

This is a GGUF-formatted checkpoint of
[rnj-1-instruct](https://huggingface.co/EssentialAI/rnj-1-instruct) suitable
for use in llama.cpp, Ollama, or others. It has been quantized with the
Q4\_K\_M scheme, which results in model weights of size 4.8GB.

For llama.cpp, install version 7328 or later and run either of these commands:

```bash
llama-cli -hf EssentialAI/rnj-1-instruct-GGUF
llama-server -hf EssentialAI/rnj-1-instruct-GGUF -c 0   # and open browser to localhost:8080
```
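Besides the browser UI, a running llama-server also exposes an OpenAI-compatible HTTP API. As a minimal sketch (assuming the default port 8080 from the command above; the prompt and `max_tokens` value are illustrative), you can query it with nothing but the Python standard library:

```python
import json
import urllib.request

# llama-server's OpenAI-compatible chat endpoint (default port 8080).
URL = "http://localhost:8080/v1/chat/completions"

def build_payload(prompt: str, max_tokens: int = 64) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask(prompt: str) -> str:
    """POST a prompt to a locally running llama-server and return the reply text."""
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(
        URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    try:
        print(ask("Write a haiku about quantization."))
    except OSError as exc:  # llama-server not running
        print(f"could not reach llama-server: {exc}")
```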

For Ollama, install version v0.13.3 or later and run:

```bash
ollama run rnj-1
```
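Ollama also serves a local REST API (on port 11434 by default), so the same model can be driven programmatically. A minimal sketch, assuming a default Ollama install and using its one-shot `/api/generate` endpoint; the prompt is illustrative:

```python
import json
import urllib.request

# Ollama's default local REST endpoint for non-streaming generation.
URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming Ollama generate request body."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "rnj-1") -> str:
    """POST a prompt to a locally running Ollama and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    try:
        print(generate("Explain GGUF in one sentence."))
    except OSError as exc:  # Ollama not running
        print(f"could not reach Ollama: {exc}")
```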