Update README.md
README.md CHANGED
```diff
@@ -1,6 +1,13 @@
 ---
 inference: false
-license:
+license: mit
+language:
+- en
+library_name: transformers
+datasets:
+- psmathur/alpaca_orca
+- psmathur/dolly-v2_orca
+- psmathur/WizardLM_Orca
 ---
 
 <!-- header start -->
```
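The YAML front matter this hunk adds is the machine-readable model-card metadata the Hub indexes for search and display. As a quick illustration (not part of the commit), the parsed fields can be read back with the `huggingface_hub` library; the repo id below is this model's, the rest is a minimal sketch:

```python
# Minimal sketch: read back the model-card metadata added in this commit.
# Assumes `pip install huggingface_hub` and network access to the Hub.
from huggingface_hub import ModelCard

card = ModelCard.load("TheBloke/orca_mini_3B-GGML")

# card.data is the parsed YAML front matter of README.md.
print(card.data.license)    # e.g. "mit"
print(card.data.language)   # e.g. ["en"]
print(card.data.datasets)   # e.g. ["psmathur/alpaca_orca", ...]
```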
````diff
@@ -34,6 +41,31 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_3B-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_3b)
 
+## Prompt template: Alpaca with system message
+
+```
+### System:
+You are an AI assistant that follows instruction extremely well. Help as much as you can.
+
+### User:
+prompt
+
+### Response:
+```
+
+or
+
+```
+### System:
+You are an AI assistant that follows instruction extremely well. Help as much as you can.
+
+### User:
+prompt
+
+### Input:
+input
+
+### Response:
+```
+
 <!-- compatibility_ggml start -->
 ## Compatibility
 
````
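Since the template added above is plain text, applying it is just string formatting. Below is a minimal sketch using llama-cpp-python (one of the compatible libraries this README names); the helper function, model filename, and generation settings are illustrative assumptions, not part of the commit:

```python
# Sketch of the "Alpaca with system message" template added above.
# Assumes `pip install llama-cpp-python` and a locally downloaded GGML file;
# the file name and parameters here are illustrative.
from llama_cpp import Llama

SYSTEM = ("You are an AI assistant that follows instruction extremely well. "
          "Help as much as you can.")

def build_prompt(user_prompt: str, input_text: str = "") -> str:
    """Format a request with the first or second template variant."""
    parts = [f"### System:\n{SYSTEM}", f"### User:\n{user_prompt}"]
    if input_text:  # second variant, with an ### Input: section
        parts.append(f"### Input:\n{input_text}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)

llm = Llama(model_path="orca-mini-3b.ggmlv3.q4_0.bin", n_ctx=2048)
result = llm(build_prompt("Explain what GGML quantisation does."),
             max_tokens=256, stop=["### User:"])
print(result["choices"][0]["text"])
```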
```diff
@@ -45,19 +77,9 @@ These are guaranteed to be compatible with any UIs, tools and libraries released
 
 ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
 
-These
-
-They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
-
-## Explanation of the new k-quant methods
+These cannot be provided with Open Llama 3B models at this time, due to an issue in llama.cpp.
 
-
-* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
-* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
-* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
-* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
-* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
-* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
+This is being worked on in the llama.cpp repo. More information here: https://github.com/ggerganov/llama.cpp/issues/1919
 
 Refer to the Provided Files table below to see what files use which methods, and how.
 <!-- compatibility_ggml end -->
```
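The bits-per-weight figures in the bullet list removed above follow from simple arithmetic over one 256-weight super-block: the quantized weights themselves, the per-block scales (plus mins, for "type-1" variants), and the fp16 super-block field(s). A worked check for two of the quoted numbers; this reproduces the arithmetic only, not ggml's actual memory layout:

```python
# Worked check of two bpw figures from the k-quant bullets removed above.
# Counts bits per 256-weight super-block; not ggml's real struct layout.

def bpw(qbits, blocks, block_weights, scale_bits_per_block, fp16_fields):
    weights = blocks * block_weights               # 256 for every k-quant type
    total_bits = (weights * qbits                  # the quantized weights
                  + blocks * scale_bits_per_block  # block scales (and mins)
                  + fp16_fields * 16)              # fp16 super-block field(s)
    return total_bits / weights

# Q4_K: "type-1", 8 blocks x 32 weights, 6-bit scales and 6-bit mins,
# plus an fp16 super-block scale and min.
print(bpw(4, blocks=8, block_weights=32, scale_bits_per_block=6 + 6,
          fp16_fields=2))                          # -> 4.5

# Q6_K: "type-0", 16 blocks x 16 weights, 8-bit scales, one fp16 super-scale.
print(bpw(6, blocks=16, block_weights=16, scale_bits_per_block=8,
          fp16_fields=1))                          # -> 6.5625
```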