Update README.md

README.md
This is the official repo for the paper [Supervised Fine-tuning *in turn* Improves Visual Foundation Models](https://arxiv.org/abs/2401.10222).
<div align="center">

📃[**Paper (ArXiv)**](https://arxiv.org/abs/2401.10222) **|** [**Code**](https://github.com/TencentARC/ViSFT) **|** 🤗[**Huggingface**](https://huggingface.co/TencentARC/ViSFT)

</div>
## News
* [2024/01/19] We open source [ViSFT](), including training scripts and weights. Evaluation code will be released soon.
## Introduction
Image-text training like CLIP has dominated the pretraining of vision foundation models in recent years. Subsequent efforts have been made to introduce region-level visual learning into CLIP’s pretraining, but they face scalability challenges due to the lack of large-scale region-level datasets. Drawing inspiration from supervised fine-tuning (SFT) in natural language processing, such as instruction tuning, we explore the potential of fine-grained SFT in enhancing the generalization of vision foundation models after their pretraining. Thus, a two-stage method, **ViSFT** (**Vi**sion **SFT**), is proposed to unleash the fine-grained knowledge of vision foundation models. In ViSFT, the vision foundation model is enhanced by performing visual joint learning on several in-domain tasks and is then tested on out-of-domain benchmarks. After updating with ViSFT on 8 V100 GPUs in less than 2 days, a vision transformer with over 4.4B parameters shows improvements across various out-of-domain benchmarks, including vision and vision-linguistic scenarios.
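
The abstract gives only a high-level picture of the two stages. As a concrete illustration, here is a minimal PyTorch sketch of what such a two-stage joint-learning loop could look like; it is a hypothetical reading of the recipe, not the released training scripts, and `backbone`, `heads`, and `loaders` are assumed placeholders (each head maps backbone features and targets to a scalar task loss).

```python
# Hypothetical sketch of a two-stage ViSFT-style recipe; NOT the released code.
# Assumptions: `heads[task](features, targets)` returns a scalar loss, and
# `loaders[task]` is an infinite iterator of (images, targets) batches.
import torch
from torch import nn


def stage1_train_heads(backbone: nn.Module, heads: dict, loaders: dict,
                       steps: int = 1000) -> None:
    """Stage 1: keep the pretrained backbone frozen; train lightweight task
    heads (e.g. detection, segmentation, captioning) on in-domain data."""
    backbone.requires_grad_(False)
    head_params = [p for h in heads.values() for p in h.parameters()]
    opt = torch.optim.AdamW(head_params, lr=1e-4)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for task, head in heads.items():
            images, targets = next(loaders[task])
            with torch.no_grad():            # backbone features stay fixed
                feats = backbone(images)
            loss = loss + head(feats, targets)
        loss.backward()
        opt.step()


def stage2_update_backbone(backbone: nn.Module, heads: dict, loaders: dict,
                           steps: int = 1000) -> None:
    """Stage 2: freeze the heads; update the backbone under the joint
    multi-task objective."""
    for head in heads.values():
        head.requires_grad_(False)
    backbone.requires_grad_(True)
    opt = torch.optim.AdamW(backbone.parameters(), lr=1e-5)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for task, head in heads.items():
            images, targets = next(loaders[task])
            loss = loss + head(backbone(images), targets)  # grads reach backbone
        loss.backward()
        opt.step()
```

In practice the stage-2 update can be kept lightweight (e.g. LoRA-style adapters on the transformer) so that a ViT with over 4.4B parameters fits the 8-V100, under-2-day budget quoted above; the sketch abstracts that detail away.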