Update README.md
README.md
CHANGED
@@ -22,6 +22,8 @@ The model was pretrained model on English language using masked language modelin
The abstract from the paper is the following:
Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.

+BridgeTower was accepted at [AAAI'23](https://aaai.org/Conferences/AAAI-23/).
+
## Intended uses & limitations(TODO)

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
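To make the bridge mechanism described in the abstract concrete, here is a minimal PyTorch sketch of the idea: each of the top `num_cross_layers` layers of the pre-trained uni-modal towers feeds its output through a bridge into the corresponding cross-modal layer. This is an illustration, not the authors' code; the sum-then-LayerNorm fusion and the use of plain self-attention in place of the paper's cross-modal (co-attention) blocks are simplifying assumptions.

```python
import torch.nn as nn

class BridgeLayer(nn.Module):
    """Fuses one uni-modal layer's output into the cross-modal stream."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)

    def forward(self, cross_states, unimodal_states):
        # Sum followed by LayerNorm is one simple fusion choice
        # (an assumption here; the paper ablates several bridge designs).
        return self.norm(cross_states + unimodal_states)

class ToyBridgeTower(nn.Module):
    def __init__(self, dim=768, heads=12, num_cross_layers=6):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.text_bridges = nn.ModuleList(BridgeLayer(dim) for _ in range(num_cross_layers))
        self.image_bridges = nn.ModuleList(BridgeLayer(dim) for _ in range(num_cross_layers))
        self.text_layers = nn.ModuleList(layer() for _ in range(num_cross_layers))
        self.image_layers = nn.ModuleList(layer() for _ in range(num_cross_layers))

    def forward(self, text_tower_outputs, image_tower_outputs):
        # Inputs: hidden states from the TOP num_cross_layers layers of the
        # pre-trained uni-modal towers, each of shape (batch, seq_len, dim).
        t, v = text_tower_outputs[0], image_tower_outputs[0]
        for i in range(len(self.text_layers)):
            # Bridge: inject the i-th uni-modal representation...
            t = self.text_bridges[i](t, text_tower_outputs[i])
            v = self.image_bridges[i](v, image_tower_outputs[i])
            # ...then run one cross-modal block (self-attention stands in
            # for the paper's co-attention between the two streams).
            t, v = self.text_layers[i](t), self.image_layers[i](v)
        return t, v
```

In the actual model the text and image streams also attend to each other inside each cross-modal block; the sketch only shows where the bridge connections sit.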
@@ -38,18 +40,16 @@ from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
-text = "
-
+text = "hello world"
+
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base")
-model =
+model = BridgeTowerForModel.from_pretrained("BridgeTower/bridgetower-base")
# Prepare inputs
encoding = processor(image, text, return_tensors="pt")
# Forward pass
outputs = model(**encoding)
-
-
-# Image and Text Classification
-model = BridgeTowerForImageAndTextClassification.from_pretrained("BridgeTower/bridgetower-base")
+outputs.keys()
+odict_keys(['text_feats', 'image_feats', 'pooler_output'])
```
### Limitations and bias

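For convenience, here is the new snippet from this hunk made self-contained. The card's code omits its imports; the `transformers` import path below is an assumption based on the BridgeTower integration later merged into Hugging Face Transformers, where the base model class is named `BridgeTowerModel` (the card's own class is `BridgeTowerForModel`) and the output fields are `text_features`/`image_features`/`pooler_output` rather than `text_feats`/`image_feats`.

```python
import requests
from PIL import Image
# Assumed import path: the classes as merged into Transformers (v4.26+);
# at the time of this commit the card used BridgeTowerForModel instead.
from transformers import BridgeTowerProcessor, BridgeTowerModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "hello world"

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base")
model = BridgeTowerModel.from_pretrained("BridgeTower/bridgetower-base")

# Prepare inputs and run a forward pass.
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
print(outputs.pooler_output.shape)
```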
@@ -77,11 +77,9 @@ The model was pre-trained for 100k steps on 8 NVIDIA A100 GPUs with a batch size
The optimizer used was AdamW with a learning rate of 1e-5. No data augmentation was used except for center-crop. The image resolution in pre-training is set to 288 x 288.

## Evaluation results
-When fine-tuned on downstream tasks, this model achieves the following results:

-
-
-| | | | | | | | | |
+Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on Image Retrieval and other downstream tasks.
+

### BibTeX entry and citation info
```bibtex
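The training details in the hunk above are concrete enough to sketch. Below is an illustrative setup under the stated hyper-parameters (AdamW at 1e-5, center-crop as the only augmentation, 288 x 288 inputs); the `Resize` step, the absence of normalization, and the stand-in module are assumptions, not taken from the card.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Center crop is the only augmentation, at the stated 288 x 288 resolution.
preprocess = transforms.Compose([
    transforms.Resize(288),
    transforms.CenterCrop(288),
    transforms.ToTensor(),
])

model = nn.Linear(8, 8)  # hypothetical stand-in for the BridgeTower model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
```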