## Specs

### Model

| **Model Name** | **Dimension** | **Text Embedding Model** | **Language** | **Weight** |
| --- | --- | --- | --- | --- |
| BAAI/bge-visualized-base-en-v1.5 | 768 | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [🤗 HF link](https://huggingface.co/BAAI/bge-visualized/blob/main/Visualized_base_en_v1.5.pth) |
| BAAI/bge-visualized-m3 | 1024 | [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [🤗 HF link](https://huggingface.co/BAAI/bge-visualized/blob/main/Visualized_m3.pth) |
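Once a query (text, image, or a hybrid of both) and the candidate documents have been encoded into the vector space above (768-dim for the base-en-v1.5 variant, 1024-dim for m3), retrieval reduces to comparing normalized embeddings. The snippet below is a minimal sketch of that comparison step using random placeholder vectors in place of real model outputs; producing the actual embeddings requires the checkpoints linked above together with the project's inference code, which is not shown here.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

# Placeholder vectors standing in for Visualized BGE outputs:
# bge-visualized-base-en-v1.5 produces 768-dim embeddings (1024 for m3).
rng = np.random.default_rng(0)
query_emb = rng.standard_normal(768)        # hybrid image+text query embedding
doc_embs = rng.standard_normal((3, 768))    # three candidate document embeddings

# Rank candidates by similarity to the query.
scores = [cosine_sim(query_emb, d) for d in doc_embs]
best = int(np.argmax(scores))
```

Because both towers embed into the same space, text-only, image-only, and hybrid queries can all be scored against the same candidate index this way.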

### Data

We have generated a hybrid multi-modal dataset consisting of over 500,000 instances for training. The dataset will be released at a later time.
The image token embedding model in this project is built upon the foundations laid by [EVA-CLIP](https://github.com/baaivision/EVA/tree/master/EVA-CLIP).
## Citation

If you find this repository useful, please consider giving it a like and a citation.
> Paper will be released soon