---
license: mit
task_categories:
- image-classification
language:
- en
tags:
- prompt-tuning
- prompt-learning
- CLIP
---

# ⭐ Dataset Introduction

This repository hosts the standard datasets (except ImageNet) used in CLIP-based prompt tuning research (e.g., [CoOp](https://github.com/KaiyangZhou/CoOp)).

On top of the original datasets, this repository adds **foreground segmentation masks** (generated with [SEEM](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)) for all raw images.

<div align="left">
	<img src="_mask_examples.png" alt="mask examples" width="50%">
</div>

- In each foreground mask, the foreground region has the RGB value `[255, 255, 255]` (white) and the background region `[0, 0, 0]` (black).

- The shorter side of each mask is fixed to `512 px`, and the aspect ratio matches that of the corresponding raw image.
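Given this white/black convention, a mask file can be turned into a boolean foreground map with a simple threshold. A minimal sketch (the helper name `load_foreground_mask` and the demo file are ours, for illustration only):

```python
import os
import tempfile

import numpy as np
from PIL import Image

def load_foreground_mask(mask_path):
    """Load a mask PNG; return a boolean array where True = foreground."""
    mask = Image.open(mask_path).convert("L")  # white (255) = fg, black (0) = bg
    return np.asarray(mask) > 127

# Demo with a synthetic 4x4 mask: top two rows white (foreground).
demo = np.zeros((4, 4), dtype=np.uint8)
demo[:2, :] = 255
path = os.path.join(tempfile.gettempdir(), "demo_mask.png")
Image.fromarray(demo).save(path)

fg = load_foreground_mask(path)
print(int(fg.sum()))  # 8 foreground pixels
```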

We provide masks for the following datasets: [ImageNet](https://image-net.org/challenges/LSVRC/2012/index.php), [Caltech101](https://data.caltech.edu/records/mzrjq-6wc02), [Oxford Pets](https://www.robots.ox.ac.uk/~vgg/data/pets/), [StanfordCars](https://ai.stanford.edu/~jkrause/cars/car_dataset.html), [Flowers102](https://www.robots.ox.ac.uk/~vgg/data/flowers/102/), [Food101](https://vision.ee.ethz.ch/datasets_extra/food-101/), [FGVC Aircraft](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/), [SUN397](http://vision.princeton.edu/projects/2010/SUN/), [DTD](https://www.robots.ox.ac.uk/~vgg/data/dtd/), [EuroSAT](https://github.com/phelber/EuroSAT) and [UCF101](https://www.crcv.ucf.edu/data/UCF101.php).

If you only want to download the mask data (without the raw images), please download the [[_OPTIONAL_FOREGROUND_MASK_ONLY_DATA](https://huggingface.co/datasets/JREion/Prompt_Tuning_Datasets_with_Foreground/tree/main/_OPTIONAL_FOREGROUND_MASK_ONLY_DATA)] folder.

<br>

# 🏷️ Scope of Application

These datasets are intended for training and evaluating **foreground-supervised prompt tuning** methods, for example:

- _FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models_ &emsp; [[GitHub](https://github.com/JREion/FVG-PT)] &ensp; [[Paper](https://arxiv.org/abs/2603.08708)]


They also remain **fully compatible** with existing prompt tuning approaches.

<br>

# ⚙ Data Preparation

The datasets include the original images, the `split_zhou_xxx.json` annotations, and **foreground masks**.

The `mask` directory is located under the dataset root, and its internal structure mirrors that of the image directory, e.g.:

- Image directory: `./flowers-102/oxford_flowers/jpg`
- Mask directory: `./flowers-102/mask/oxford_flowers/jpg`
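Because the mask tree mirrors the image tree, a mask path can be derived from its image path mechanically. A minimal sketch (the helper name `mask_path_for` is ours, not part of the dataset's tooling):

```python
from os.path import join, relpath

def mask_path_for(image_path, dataset_root):
    """Map a raw-image path to its foreground-mask path.

    The mask tree mirrors the image tree under `<dataset_root>/mask/`.
    """
    suffix = relpath(image_path, dataset_root)  # e.g. oxford_flowers/jpg/...
    return join(dataset_root, "mask", suffix)

p = mask_path_for("./flowers-102/oxford_flowers/jpg/image_00001.jpg",
                  "./flowers-102")
print(p)  # flowers-102/mask/oxford_flowers/jpg/image_00001.jpg (POSIX)
```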

Additionally, you can prepare the ImageNet dataset from: [[Raw Images](https://www.kaggle.com/c/imagenet-object-localization-challenge/overview/description)] [[Annotations](https://drive.google.com/file/d/1-61f_ol79pViBFDG_IDlUQSwoLcn2XXF/view?usp=sharing)] [[Val conversion script](https://github.com/soumith/imagenetloader.torch/blob/master/valprep.sh)]

**_NOTE: You can build the file tree by referring to the [[FVG-PT repository](https://github.com/JREion/FVG-PT/blob/main/docs/DATASETS.md)]._**

<br>

# 🖥︎ When You Write Your Own Code ...

1. You can refer to `./Dassl.pytorch` in the [FVG-PT repository](https://github.com/JREion/FVG-PT/tree/main) to build a DataLoader that passes masks alongside images:

    ```Python
    from os.path import join

    # The mask tree mirrors the image tree under the dataset root
    mask_path = join(dataset_root, "mask", image_path_suffix)
    ```

2. During pre-processing, the mask needs to be resized to match its image, e.g.:

    ```Python
    from torchvision.transforms import InterpolationMode
    from torchvision.transforms import functional as TF

    def _transform_pair(self, img, mask):
        if mask.size != img.size:
            # NEAREST keeps the mask strictly binary after resizing
            mask = TF.resize(
                mask,
                [img.height, img.width],
                interpolation=InterpolationMode.NEAREST,
            )
        return img, mask
    ```
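If you are not using torchvision, the same paired resize can be sketched with PIL alone (the helper name `resize_mask_to_image` is illustrative, not part of this dataset's tooling; nearest-neighbor resampling is used for the same reason — it keeps the mask binary):

```python
from PIL import Image

def resize_mask_to_image(img, mask):
    """Resize mask to match img, using NEAREST to keep it binary."""
    if mask.size != img.size:
        mask = mask.resize(img.size, resample=Image.NEAREST)
    return mask

img = Image.new("RGB", (32, 48))
mask = Image.new("L", (16, 24), color=255)
mask = resize_mask_to_image(img, mask)
print(mask.size)  # (32, 48)
```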


<br>

# Acknowledgements

Our repository builds on [DPC](https://arxiv.org/abs/2503.13443), [FVG-PT](https://github.com/JREion/FVG-PT), [DAPT](https://github.com/SII-Ferenas/DAPT) and [zhengli97](https://huggingface.co/zhengli97/prompt_learning_dataset).