Update README.md
Overall, the model is suitable when making a pretrained version so you can continue …
- Main tasks: reasoning, multi-tasking knowledge, and function tools.
- License: [Ghost 7B LICENSE AGREEMENT](https://ghost-x.vercel.app/ghost-7b-license).
- Based on: Mistral 7B.
- Distributions: Standard (BF16), GGUF, AWQ.
- Developed by: **Ghost X**, [Hieu Lam](https://huggingface.co/lamhieu).

### Links
We create many distributions to give you the best access options that best suit your needs.
| Version | Model card                                                           |
| ------- | -------------------------------------------------------------------- |
| BF16    | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha)      |
| GGUF    | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha-gguf) |
| AWQ     | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha-awq)  |
### Standard (BF16)

The standard distribution was used to run the assessments and was found to have the best performance in text generation quality.
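As a sketch of how the standard distribution might be loaded, assuming the `transformers` and `torch` packages are installed (the helper name is ours, not part of the model card; the model id comes from the table above):

```python
# Hypothetical sketch: loading the BF16 distribution with Hugging Face
# transformers. This only wraps the standard loading calls; it is not the
# model authors' own loader.
def load_ghost_7b_alpha(model_id: str = "ghost-x/ghost-7b-alpha"):
    """Return (tokenizer, model) in bfloat16; needs `transformers` and `torch`."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # the standard distribution ships in BF16
        device_map="auto",           # requires the `accelerate` package
    )
    return tokenizer, model
```

Loading in BF16 keeps the weights in the same precision the distribution was published in, avoiding an implicit cast to FP32.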
### GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
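A GGUF file can be run without PyTorch, for example through the `llama-cpp-python` bindings. A minimal sketch, assuming the package is installed and a quantized file has been downloaded from the GGUF model card above (the function name is ours and the file path is illustrative):

```python
# Hypothetical sketch: generating text from a local GGUF file with
# llama-cpp-python (`pip install llama-cpp-python`).
def run_gguf_prompt(model_path: str, prompt: str, max_tokens: int = 64) -> str:
    """Complete `prompt` using a GGUF model stored at `model_path`."""
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=2048)  # 2048-token context window
    out = llm(prompt, max_tokens=max_tokens)
    return out["choices"][0]["text"]
```

For instance, `run_gguf_prompt("ghost-7b-alpha.Q4_K_M.gguf", "Hello")` would run a (hypothetically named) 4-bit quantized file on CPU.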