---
license: apache-2.0
---

**English** | [中文](./README_zh.md)

## Code implementation of new GTE embeddings

This model is a BERT-like encoder with the following optimizations implemented:

1. Replacing absolute position embeddings with RoPE [^1].
2. Substituting the conventional activation functions with Gated Linear Units (GLU) [^2]. (A toy sketch of RoPE and GLU follows this list.)
3. Setting attention dropout to 0 to allow the use of `xformers` and `flash_attn`.
4. Using unpadding to eliminate the needless computation for padding tokens [^3]. (This is off by default and should be used in conjunction with `xformers` for optimal acceleration.)
5. Setting `vocab_size` as a multiple of 64.
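
As a rough illustration of items 1 and 2 (a toy sketch only, not the model's actual implementation; the helper names below are made up for this example), RoPE rotates pairs of query/key features by position-dependent angles instead of adding learned absolute position embeddings, and a GLU feed-forward block gates one linear projection with another:

```python
import torch

def apply_rope(x: torch.Tensor) -> torch.Tensor:
    """Toy RoPE (rotate-half convention). x: (seq_len, num_heads, head_dim), head_dim even."""
    seq_len, _, head_dim = x.shape
    half = head_dim // 2
    # RoPE frequencies: theta_i = 10000^(-2i / head_dim)
    inv_freq = 1.0 / (10000 ** (torch.arange(half, dtype=torch.float32) / half))
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]  # (seq_len, half)
    cos, sin = angles.cos()[:, None, :], angles.sin()[:, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) feature pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

class GluFFN(torch.nn.Module):
    """Toy gated feed-forward block: out = W_out(gelu(W_gate(x)) * W_up(x))."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = torch.nn.Linear(dim, hidden)
        self.up = torch.nn.Linear(dim, hidden)
        self.out = torch.nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(torch.nn.functional.gelu(self.gate(x)) * self.up(x))

q = torch.randn(128, 12, 64)                                # (seq_len, heads, head_dim)
print(apply_rope(q).shape)                                  # torch.Size([128, 12, 64])
print(GluFFN(768, 2048)(torch.randn(2, 128, 768)).shape)    # torch.Size([2, 128, 768])
```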
### Recommendation: Enable Unpadding and Acceleration with `xformers`

This code supports accelerating the attention computation with `xformers`, which automatically chooses the optimal implementation (such as `flash_attn`) based on the device type. As a result, significant acceleration can also be achieved on older devices such as the V100.

First, install `xformers` (with `pytorch` already installed):
```
if pytorch is installed using conda:
    conda install xformers -c xformers
elif pytorch is installed using pip:
    # cuda 11.8 version
    pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118
    # cuda 12.1 version
    pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu121
```
For more information, refer to [Installing xformers](https://github.com/facebookresearch/xformers?tab=readme-ov-file#installing-xformers).
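
As an optional sanity check (a minimal sketch assuming a CUDA device; the tensor sizes are arbitrary), you can verify that the memory-efficient attention kernel is usable:

```python
import torch
import xformers.ops as xops

# Tiny dummy tensors in (batch, seq_len, num_heads, head_dim) layout.
q = k = v = torch.randn(1, 8, 4, 16, device='cuda', dtype=torch.float16)
out = xops.memory_efficient_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 4, 16])
```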

Then, when loading the model, set `unpad_inputs` and `use_memory_efficient_attention` to `true`, and enable `fp16` mixed-precision computation to achieve the fastest acceleration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

path = 'Alibaba-NLP/gte-base-en-v1.5'
device = torch.device('cuda')
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModel.from_pretrained(
    path,
    trust_remote_code=True,
    unpad_inputs=True,
    use_memory_efficient_attention=True,
).to(device)

# Tokenize the texts to embed.
inputs = tokenizer(['What is the capital of China?'], padding=True, return_tensors='pt')

with torch.autocast(device_type=device.type, dtype=torch.float16):  # or bfloat16
    with torch.inference_mode():
        outputs = model(**inputs.to(device))
```
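
The snippet above stops at the raw model outputs. To turn them into sentence embeddings, a common pattern for GTE-style models (shown here as a sketch continuing from the code above; check the usage examples of the specific checkpoint) is to take the `[CLS]` token representation and L2-normalize it:

```python
import torch.nn.functional as F

# CLS pooling: use the first token's hidden state as the sentence embedding.
embeddings = outputs.last_hidden_state[:, 0]
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # (num_texts, hidden_size)
```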
Alternatively, you can directly modify the `unpad_inputs` and `use_memory_efficient_attention` settings to `true` in the model's `config.json`, eliminating the need to set them in the code.
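
If you want to set these flags programmatically rather than editing `config.json` by hand, a sketch of an equivalent route using the standard `transformers` config API (the flag names come from this repository's config) is:

```python
from transformers import AutoConfig, AutoModel

path = 'Alibaba-NLP/gte-base-en-v1.5'

# Load the remote config, enable unpadding and memory-efficient attention, and rebuild the model.
config = AutoConfig.from_pretrained(path, trust_remote_code=True)
config.unpad_inputs = True
config.use_memory_efficient_attention = True

model = AutoModel.from_pretrained(path, config=config, trust_remote_code=True)
```
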
---
<details>
<summary> Clarification of Relationship with nomic-embed and nomicBERT </summary>

One may question the originality of our work and consider it a mere replication of `nomicBERT`. To clarify, our work was developed in parallel with `nomicBERT` and stems from the same idea.

Applying RoPE and GLU to BERT to support longer texts is a straightforward idea. Our exploration of the transformer++ encoder (i.e., BERT + RoPE + GLU) began in August 2023, and by November 2023 we had completed `gte-base-en-v1.1`. Then, I went on to prepare the ACL submission of another project...

The release of `nomic-embed` [^4] put pressure on us, but it also brought us more resources, which allowed us to continue with this project. Without the outstanding work of `nomic-ai`, the release of `gte-v1.5` could have been delayed much longer. Thanks!

</details>

---

[^1]: Su, Jianlin, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. "RoFormer: Enhanced Transformer with Rotary Position Embedding." Neurocomputing 568 (2024): 127063.

[^2]: Shazeer, Noam. "GLU Variants Improve Transformer." arXiv preprint arXiv:2002.05202 (2020).

[^3]: Portes, Jacob, Alexander Trott, Sam Havens, Daniel King, Abhinav Venigalla, Moin Nadeem, Nikhil Sardana, Daya Khudia, and Jonathan Frankle. "MosaicBERT: A Bidirectional Encoder Optimized for Fast Pretraining." Advances in Neural Information Processing Systems 36 (2024).

[^4]: Nussbaum, Zach, John X. Morris, Brandon Duderstadt, and Andriy Mulyar. "Nomic Embed: Training a Reproducible Long Context Text Embedder." arXiv preprint arXiv:2402.01613 (2024).