### Quantized EleutherAI/gpt-j-6b with 8-bit weights

This is a version of EleutherAI's 6-billion-parameter GPT-J, modified so you can generate **and fine-tune** the model in Colab or on an equivalent desktop GPU (e.g. a single 1080 Ti).

The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main) takes 22+ GB of memory for float32 parameters alone. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of an A6000 or A100. You can run inference [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or on CPU, but fine-tuning is far more expensive.

Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:

- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- gradient checkpointing stores only one activation per layer, using dramatically less memory at the cost of ~30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)

In other words, all of the large weight matrices are frozen in 8-bit, and you only train small adapters and, optionally, 1-d tensors (layernorm scales, biases).
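
As a rough sketch of the quantization bullet above, here is a simplified absmax variant in numpy (the real bitsandbytes dynamic scheme also stores a dynamic exponent per value, but the block-wise structure is the same): weights live as int8 plus one float scale per block, and are de-quantized right before the matmul.

```python
import numpy as np

def quantize_blockwise(w, block=64):
    """Symmetric 8-bit quantization with one absmax scale per block
    (a simplified stand-in for bitsandbytes' dynamic block-wise scheme)."""
    flat = w.ravel()
    pad = (-len(flat)) % block          # pad so the tensor splits into blocks
    flat = np.pad(flat, (0, pad))
    blocks = flat.reshape(-1, block)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 127.0
    scales[scales == 0] = 1.0           # avoid division by zero for empty blocks
    q = np.round(blocks / scales).astype(np.int8)
    return q, scales, w.shape, pad

def dequantize_blockwise(q, scales, shape, pad):
    """De-quantize just-in-time, right before the weight is multiplied."""
    flat = (q.astype(np.float32) * scales).ravel()
    if pad:
        flat = flat[:-pad]
    return flat.reshape(shape)

w = np.random.default_rng(0).normal(size=(16, 48)).astype(np.float32)
q, s, shape, pad = quantize_blockwise(w)
w_hat = dequantize_blockwise(q, s, shape, pad)
print("max abs error:", np.abs(w - w_hat).max())
```

In the full model, the de-quantized copy exists only for the duration of one layer's matmul, so the float32 weights never all live in memory at once.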

![img](https://i.imgur.com/n4XXo1x.png)

__What about model quality?__ TODO compare on wikitext TODO brag about visual quality

__What about performance?__ Both checkpointing and de-quantization have some overhead, but it is surprisingly manageable. Depending on the GPU and batch size, the quantized model is 1-10% slower than the original model, on top of the ~30% overhead from gradient checkpointing. In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.

### How should I fine-tune the model?

We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf).
On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.
As a result, the larger the batch size you can fit, the more efficiently you will train.
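
For intuition on what the "small adapters" look like, here is the LoRA forward pass in toy numpy form. The sizes, rank `r`, and scaling `alpha` are illustrative, and in the real model `W` would be the frozen 8-bit matrix de-quantized on the fly:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16          # toy sizes; r and alpha are LoRA hyperparameters

W = rng.normal(size=(d_in, d_out))             # frozen pretrained weight (8-bit in the real model)
A = rng.normal(scale=0.01, size=(d_in, r))     # trainable down-projection (Gaussian init, as in the paper)
B = np.zeros((r, d_out))                       # trainable up-projection (zero init, as in the paper)

def lora_forward(x):
    # frozen path plus low-rank adapter path, scaled by alpha / r
    return x @ W + (alpha / r) * (x @ A) @ B

x = rng.normal(size=(4, d_in))
y = lora_forward(x)

# because B starts at zero, the adapter is a no-op at initialization,
# so fine-tuning begins exactly at the pretrained model's behavior
print("trainable params:", A.size + B.size, "vs frozen:", W.size)
```

Only `A` and `B` (here 4x fewer values than `W`, and far fewer at GPT-J scale) receive gradients; the frozen matrix never needs optimizer state.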

### Where can I train for free?

You can train fine in Colab, but if you get a K80, it's probably best to switch to other free GPU providers: [Kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [AWS SageMaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), or [Paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances).

### Can I use this technique with other models?

The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding layers with custom alternatives that require their own BNBWhateverWithAdapters.
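
The heart of such a conversion is just a recursive walk over the module tree that swaps target layers for quantized-with-adapters counterparts. A toy sketch with stand-in class names (not the notebook's real code; with PyTorch you would walk `model.named_children()` the same way):

```python
class Module:
    """Minimal stand-in for a PyTorch-style module tree."""
    def __init__(self, **children):
        self._modules = dict(children)
    def named_children(self):
        return list(self._modules.items())

class Linear(Module):
    pass

class FrozenLinear8bitWithAdapter(Module):
    """Hypothetical quantized replacement: would hold int8 weights + LoRA adapters."""
    def __init__(self, linear):
        super().__init__()
        self.source = linear

def convert_linears(module):
    # depth-first walk: replace every Linear in place, recurse into everything else
    for name, child in module.named_children():
        if isinstance(child, Linear):
            module._modules[name] = FrozenLinear8bitWithAdapter(child)
        else:
            convert_linears(child)

model = Module(attn=Module(q_proj=Linear(), k_proj=Linear()), norm=Module())
convert_linears(model)
```

A model with custom Linear/Embedding subclasses would simply need its own branch in the `isinstance` check, plus a matching quantized wrapper.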