Commit 2219291
Parent(s): dc3ce23
Update README.md
README.md CHANGED

```diff
@@ -57,6 +57,7 @@ Heavily inspired by [Hivemind's GPT-J-6B with 8-bit weights](https://huggingface
 
 Here, we also apply [LoRA (Low Rank Adapters)](https://arxiv.org/abs/2106.09685) to reduce model size. The original version takes ~353GB of memory; this version takes ~180GB.
 
+Our main objective is to generate a model compressed enough to be deployed in a traditional Kubernetes cluster.
 ### How to use
 
 This model can be used by adapting Bloom's original implementation. This is an adaptation of [Hivemind's GPT-J 8-bit](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb):
```
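As a rough sanity check on the ~353GB → ~180GB figures, the reduction is roughly consistent with halving the weight precision from 16-bit to 8-bit. The sketch below is a back-of-the-envelope estimate only: it assumes BLOOM's 176B parameter count and counts raw weight storage alone, while the README's numbers presumably also include runtime overhead.

```python
# Back-of-the-envelope memory estimate for a 176B-parameter model,
# illustrating the ~353GB -> ~180GB reduction mentioned above.
# Assumption: BLOOM has ~176e9 parameters; only raw weight storage
# is counted here (no activations, optimizer state, or overhead).

N_PARAMS = 176e9  # assumed BLOOM parameter count


def weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """Raw weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9


fp16_gb = weight_memory_gb(N_PARAMS, 2)  # 16-bit floats
int8_gb = weight_memory_gb(N_PARAMS, 1)  # 8-bit quantized weights

print(f"fp16 weights: ~{fp16_gb:.0f} GB")  # ~352 GB
print(f"int8 weights: ~{int8_gb:.0f} GB")  # ~176 GB
```

The gap between these raw-weight numbers and the README's figures (353GB vs. 352GB, 180GB vs. 176GB) would be accounted for by embeddings, buffers, and other per-process overhead.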