---
license: mit
task_categories:
  - image-classification
language:
  - en
tags:
  - prompt-tuning
  - prompt-learning
  - CLIP
---

⭐ Dataset Introduction

These are the standard datasets (except ImageNet) used in CLIP-based Prompt Tuning research (e.g., CoOp).

On top of the original datasets, this repository adds foreground segmentation masks (generated by SEEM) for all raw images.

  • For the foreground masks, the RGB value of the foreground region is [255, 255, 255], and that of the background region is [0, 0, 0].

  • Each mask's shorter side is fixed to 512 px, and its aspect ratio matches that of the corresponding raw image.
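
Given these conventions, a mask can be binarized by thresholding a single channel. Below is a minimal sketch (the file name and helper are hypothetical; PIL and NumPy are assumed):

```python
from PIL import Image
import numpy as np

def load_foreground_mask(mask_path):
    """Load a foreground mask and binarize it.

    Foreground pixels are [255, 255, 255] and background pixels are
    [0, 0, 0], so thresholding one channel recovers a boolean map.
    """
    mask = Image.open(mask_path).convert("L")  # one channel suffices
    return np.array(mask) > 127                # True = foreground

# Example with a synthetic 2x2 mask (white = foreground):
demo = Image.fromarray(np.array([[255, 0], [0, 255]], dtype=np.uint8))
demo.save("demo_mask.png")
fg = load_foreground_mask("demo_mask.png")
print(fg.sum())  # 2 foreground pixels
```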

We provide masks for the following datasets: ImageNet, Caltech101, Oxford Pets, StanfordCars, Flowers102, Food101, FGVC Aircraft, SUN397, DTD, EuroSAT and UCF101.

If you only want to download the mask data (without the raw images), please download it from the [_OPTIONAL_FOREGROUND_MASK_ONLY_DATA] folder.


🏷️ Scope of Application

The datasets are suitable for training and improving foreground-supervised prompt tuning methods. For example:

  • FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models   [GitHub]   [Paper]

They are also fully compatible with the original prompt tuning approaches.


⚙ Data Preparation

The datasets include the original images, the split_zhou_xxx.json annotations, and foreground masks.

The mask directory is located under the dataset root, and its internal structure mirrors that of the image directory, e.g.:

  • Image directory: ./flowers-102/oxford_flowers/jpg
  • Mask directory: ./flowers-102/mask/oxford_flowers/jpg

Additionally, you can prepare the ImageNet dataset from: [Raw Images] [annotations] [val conversion script]

NOTE: You can build the file tree by referring to the [FVG-PT repository].


🖥︎ When You Write Your Own Code ...

  1. You can refer to ./Dassl.pytorch in the FVG-PT repository to build a DataLoader that passes masks alongside images, e.g.:

    from os.path import join
    mask_path = join(dataset_root, "mask", image_path_suffix)
    
  2. During pre-processing, the mask needs to be resized to match the image, e.g.:

    # requires: import torchvision.transforms.functional as TF
    #           from torchvision.transforms import InterpolationMode
    def _transform_pair(self, img, mask):
        if mask.size != img.size:
            # NEAREST interpolation keeps the mask strictly binary
            mask = TF.resize(
                mask,
                [img.height, img.width],
                interpolation=InterpolationMode.NEAREST
            )
        return img, mask
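
Putting the two steps together, here is a minimal, self-contained sketch of loading an image/mask pair. The helper name is hypothetical, and plain PIL resizing stands in for the torchvision call above:

```python
from os.path import join
from PIL import Image

def load_pair(dataset_root, image_path_suffix):
    """Load an image and its foreground mask, resizing the mask to match.

    The mask lives under `<root>/mask/` with the same subpath as the
    image; NEAREST resampling keeps mask values strictly {0, 255}.
    """
    img = Image.open(join(dataset_root, image_path_suffix)).convert("RGB")
    mask = Image.open(
        join(dataset_root, "mask", image_path_suffix)
    ).convert("L")
    if mask.size != img.size:
        mask = mask.resize(img.size, resample=Image.NEAREST)
    return img, mask
```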
    

Acknowledgements

Our repository is built upon DPC, FVG-PT, DAPT, and zhengli97.