Update README.md
README.md CHANGED

@@ -18,7 +18,7 @@ widget:
 
 Pyramid Vision Transformer (PVT) model pre-trained on ImageNet-1K (1 million images, 1000 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/abs/2102.12122) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao and first released in [this repository](https://github.com/whai362/PVT).
 
-Disclaimer: The team releasing PVT did not write a model card for this model so this model card has been written by [Rinat S.
+Disclaimer: The team releasing PVT did not write a model card for this model so this model card has been written by [Rinat S. [@Xrenya]](https://huggingface.co/Xrenya).
 
 ## Model description
 