# On-chain llama.cpp - Internet Computer

You can run any *.gguf file in a llama_cpp_canister, but these are smaller models you can use for testing [onicai/llama_cpp_canister](https://github.com/onicai/llama_cpp_canister).

Notes:

- Try them out at [ICGPT](https://icgpt.icpp.world/)
- To use on the Internet Computer, follow the instructions in [onicai/llama_cpp_canister](https://github.com/onicai/llama_cpp_canister)
- Run locally with [ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
- The models were created with the training procedure outlined in [karpathy/llama2.c](https://github.com/karpathy/llama2.c) and then converted into *.gguf format as described below.
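For the "run locally" step, a minimal sketch of invoking llama.cpp's CLI on one of these *.gguf files (assumes you have built llama.cpp; the model filename is illustrative — substitute whichever *.gguf file you downloaded from this repo):

```shell
# Run a short completion against a downloaded *.gguf model.
# -m : path to the model file (illustrative name)
# -p : prompt text
# -n : number of tokens to generate
./llama-cli -m stories260K.gguf -p "Once upon a time" -n 128
```

The same *.gguf file works unchanged both here and when uploaded to a llama_cpp_canister, since both are built on the llama.cpp runtime.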