Update README.md
README.md CHANGED
@@ -17,9 +17,9 @@ license: other
 </div>
 <!-- header end -->
 
-# OptimalScale's Robin 7B GGML
+# OptimalScale's Robin 7B v2 GGML
 
-These files are GGML format model files for [OptimalScale's Robin 7B](https://huggingface.co/OptimalScale/robin-7b-v2-delta).
+These files are GGML format model files for [OptimalScale's Robin 7B v2](https://huggingface.co/OptimalScale/robin-7b-v2-delta).
 
 GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
@@ -30,9 +30,9 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 
 ## Repositories available
 
-* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-7B-GPTQ)
-* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-7B-GGML)
-* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-7B-fp16)
+* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GGML)
+* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-7B-v2-fp16)
 
 ## Prompt template
 
@@ -135,6 +135,6 @@ Thank you to all my generous patrons and donaters!
 
 <!-- footer end -->
 
-# Original model card: OptimalScale's Robin 7B
+# Original model card: OptimalScale's Robin 7B v2
 
 No model card provided in source repository.
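
As context for the GGML links renamed above, here is a minimal sketch of loading one of these files with the llama-cpp-python bindings. Everything model-specific is an assumption rather than something stated in this diff: the file name and quantisation are illustrative of this repo's naming scheme, the ###Human:/###Assistant: template follows the usual Robin style (see the README's "Prompt template" section for the canonical form), and GGML files require a GGML-era release of the bindings (roughly 0.1.78 or earlier, before the switch to GGUF).

```python
# Minimal sketch, assuming llama-cpp-python <= 0.1.78 (the last releases
# with GGML support). File name and prompt template are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="robin-7b.ggmlv3.q4_0.bin",  # hypothetical local file name
    n_ctx=2048,        # context window
    n_gpu_layers=32,   # layers to offload when built with CUDA/Metal; 0 = CPU only
)

# Assumed Robin-style prompt template; not taken from this diff.
prompt = "###Human: Write a haiku about quantised models.\n###Assistant:"
output = llm(prompt, max_tokens=128, stop=["###Human:"])
print(output["choices"][0]["text"])
```

Setting `n_gpu_layers=0` gives pure CPU inference; raising it offloads more of the model to the GPU, which is the CPU + GPU split the README describes.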