---
license: mit
task_categories:
- image-classification
language:
- en
tags:
- prompt-tuning
- prompt-learning
- CLIP
---

# ⭐ Dataset Introduction

The standard datasets (except ImageNet) used for CLIP-based Prompt Tuning research (e.g., [CoOp](https://github.com/KaiyangZhou/CoOp)).

On top of the original datasets, this repository adds **foreground segmentation masks** (generated by [SEEM](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)) for all raw images.

<div align="left">
  <img src="_mask_examples.png" alt="mask examples" width="50%">
</div>

- In each foreground mask, the foreground region has the RGB value `[255, 255, 255]` and the background region `[0, 0, 0]`.
- The shorter side of every mask is fixed to `512 px`, and the aspect ratio matches that of the corresponding raw image.
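
Because the masks are strictly black-and-white, thresholding a single grayscale channel is enough to binarize them. A minimal sketch using PIL and NumPy (the helper name `load_binary_mask` is illustrative, not part of the dataset):

```Python
import numpy as np
from PIL import Image

def load_binary_mask(mask):
    """Convert a foreground mask (file path or PIL image) to a {0, 1} array.

    Foreground pixels are [255, 255, 255] and background pixels are
    [0, 0, 0], so thresholding one grayscale channel suffices.
    """
    if isinstance(mask, str):
        mask = Image.open(mask)
    gray = np.asarray(mask.convert("L"))
    return (gray > 127).astype(np.uint8)
```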

We provide masks for the following datasets: [ImageNet](https://image-net.org/challenges/LSVRC/2012/index.php), [Caltech101](https://data.caltech.edu/records/mzrjq-6wc02), [Oxford Pets](https://www.robots.ox.ac.uk/~vgg/data/pets/), [StanfordCars](https://ai.stanford.edu/~jkrause/cars/car_dataset.html), [Flowers102](https://www.robots.ox.ac.uk/~vgg/data/flowers/102/), [Food101](https://vision.ee.ethz.ch/datasets_extra/food-101/), [FGVC Aircraft](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/), [SUN397](http://vision.princeton.edu/projects/2010/SUN/), [DTD](https://www.robots.ox.ac.uk/~vgg/data/dtd/), [EuroSAT](https://github.com/phelber/EuroSAT) and [UCF101](https://www.crcv.ucf.edu/data/UCF101.php).

If you only need the mask data (without the raw images), please download the [[_OPTIONAL_FOREGROUND_MASK_ONLY_DATA](https://huggingface.co/datasets/JREion/Prompt_Tuning_Datasets_with_Foreground/tree/main/_OPTIONAL_FOREGROUND_MASK_ONLY_DATA)] folder.

<br>

# 🏷️ Scope of Application

These datasets are suitable for training and improving **foreground-supervised prompt tuning** methods, for example:

- _FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models_   [[GitHub](https://github.com/JREion/FVG-PT)]   [[Paper](https://arxiv.org/abs/2603.08708)]

They are also **fully compatible** with the original prompt tuning approaches.

<br>

# ⚙ Data Preparation

Each dataset includes the original images, the `split_zhou_xxx.json` annotations, and the **foreground masks**.
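
The `split_zhou_xxx.json` files follow the CoOp convention: a dict mapping `"train"`, `"val"`, and `"test"` to lists of `[relative_image_path, label, classname]` entries. A hedged loader sketch (verify the layout against your copy of the files):

```Python
import json

def load_split(split_json_path):
    # CoOp-style split: {"train"/"val"/"test": [[impath, label, classname], ...]}
    with open(split_json_path) as f:
        split = json.load(f)
    return split["train"], split["val"], split["test"]
```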

The `mask` directory sits under the dataset root, and its internal structure mirrors the image directory, e.g.:

- Image directory: `./flowers-102/oxford_flowers/jpg`
- Mask directory: `./flowers-102/mask/oxford_flowers/jpg`
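
Given the mirrored layout above, the mask path for any image can be derived mechanically. A small sketch (the function name `mask_path_for` is illustrative):

```Python
import os

def mask_path_for(image_path, dataset_root):
    # The mask tree mirrors the image tree under <dataset_root>/mask/.
    suffix = os.path.relpath(image_path, dataset_root)
    return os.path.join(dataset_root, "mask", suffix)
```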

Additionally, you can prepare the ImageNet dataset from: [[Raw Images](https://www.kaggle.com/c/imagenet-object-localization-challenge/overview/description)] [[Annotations](https://drive.google.com/file/d/1-61f_ol79pViBFDG_IDlUQSwoLcn2XXF/view?usp=sharing)] [[Val Conversion Script](https://github.com/soumith/imagenetloader.torch/blob/master/valprep.sh)]

**_NOTE: You can build the file tree by referring to the [[FVG-PT repository](https://github.com/JREion/FVG-PT/blob/main/docs/DATASETS.md)]._**

<br>

# 🖥︎ When You Write Your Own Code ...

1. You can refer to `./Dassl.pytorch` in the [FVG-PT repository](https://github.com/JREion/FVG-PT/tree/main) to build a DataLoader that also passes masks, e.g.:

```Python
from os.path import join

# The mask tree mirrors the image tree under <dataset_root>/mask/.
mask_path = join(dataset_root, "mask", image_path_suffix)
```

2. During pre-processing, the mask input needs to be resized together with the image, e.g.:

```Python
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode

def _transform_pair(self, img, mask):
    if mask.size != img.size:
        # NEAREST interpolation keeps the mask strictly binary.
        mask = TF.resize(
            mask,
            [img.height, img.width],
            interpolation=InterpolationMode.NEAREST,
        )
    return img, mask
```
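
Beyond resizing, any random spatial augmentation should sample its parameters once and apply them to both the image and the mask; otherwise the pair falls out of alignment. A PIL-only sketch of a paired random crop (the helper name and crop settings are illustrative assumptions, not the repository's actual pipeline):

```Python
import random
from PIL import Image

def paired_random_crop(img, mask, size=224):
    # Sample one crop box, then apply it to BOTH the image and its mask
    # so the foreground mask stays spatially aligned with the image.
    w, h = img.size
    left = random.randint(0, max(0, w - size))
    top = random.randint(0, max(0, h - size))
    box = (left, top, left + size, top + size)
    img = img.crop(box).resize((size, size), Image.BICUBIC)
    # NEAREST keeps mask values strictly {0, 255}.
    mask = mask.crop(box).resize((size, size), Image.NEAREST)
    return img, mask
```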

<br>

# Acknowledgements

Our repository is built based on [DPC](https://arxiv.org/abs/2503.13443), [FVG-PT](https://github.com/JREion/FVG-PT), [DAPT](https://github.com/SII-Ferenas/DAPT) and [zhengli97](https://huggingface.co/zhengli97/prompt_learning_dataset).