An auto-regressive causal LM created by combining 2x finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) into one.

Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix):
- [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp)
- [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite)
- [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM)
- [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI)
# Prompting Format
Both Vicuna and Alpaca will work, but since the initial and final layers belong primarily to Xwin, I expect Vicuna to work best.
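
As a reference, a Vicuna-style prompt can be assembled as sketched below. This is a minimal illustration, assuming the common Vicuna v1.1 template; the default system prompt shown here is the usual Vicuna one and can be adjusted to taste.

```python
# Minimal sketch of a Vicuna-style prompt builder (assumed v1.1 template).
# The system prompt below is the conventional Vicuna default, not something
# mandated by this model.
def build_vicuna_prompt(
    user_message: str,
    system: str = (
        "A chat between a curious user and an artificial intelligence "
        "assistant. The assistant gives helpful, detailed, and polite "
        "answers to the user's questions."
    ),
) -> str:
    # Vicuna v1.1 places the system text first, then USER/ASSISTANT turns;
    # generation continues after the trailing "ASSISTANT:".
    return f"{system} USER: {user_message} ASSISTANT:"

print(build_vicuna_prompt("Write a haiku about mountains."))
```

The resulting string is what you would feed to the model (or to a backend such as TGW or Aphrodite) as the raw prompt, with the model's reply generated after the final `ASSISTANT:` marker.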