tags:
- CLIP
---

# ⭐ Dataset Introduction

The standard datasets (except ImageNet) used for CLIP-based Prompt Tuning research (e.g., [CoOp](https://github.com/KaiyangZhou/CoOp)).

Based on the original datasets, this repository adds **foreground segmentation masks**.

<div align="center">
<img src="_mask_examples.png" alt="mask examples" width="50%">
</div>

- For the foreground masks, the RGB value of the foreground region is `[255, 255, 255]`, and that of the background region is `[0, 0, 0]`.
- The shorter side of each mask is fixed to `512 px`, and the aspect ratio matches that of the corresponding raw image.
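
Given those two conventions, a mask can be binarized by thresholding any single channel. A minimal NumPy sketch — the array below is a hypothetical stand-in for a mask loaded from disk (e.g. via `PIL.Image.open`):

```python
import numpy as np

# Hypothetical stand-in for a loaded mask: a 4x4 RGB image whose
# centre 2x2 patch is white ([255, 255, 255]), i.e. foreground.
mask_rgb = np.zeros((4, 4, 3), dtype=np.uint8)
mask_rgb[1:3, 1:3] = 255

# Foreground is pure white and background pure black, so thresholding
# any one channel yields a clean {0, 1} mask.
binary = (mask_rgb[..., 0] > 127).astype(np.float32)
print(int(binary.sum()))  # → 4 foreground pixels
```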
Datasets contain: [ImageNet](https://image-net.org/challenges/LSVRC/2012/index.php), [Caltech101](https://data.caltech.edu/records/mzrjq-6wc02), [Oxford Pets](https://www.robots.ox.ac.uk/~vgg/data/pets/), [StanfordCars](https://ai.stanford.edu/~jkrause/cars/car_dataset.html), [Flowers102](https://www.robots.ox.ac.uk/~vgg/data/flowers/102/), [Food101](https://vision.ee.ethz.ch/datasets_extra/food-101/), [FGVC Aircraft](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/), [SUN397](http://vision.princeton.edu/projects/2010/SUN/), [DTD](https://www.robots.ox.ac.uk/~vgg/data/dtd/), [EuroSAT](https://github.com/phelber/EuroSAT) and [UCF101](https://www.crcv.ucf.edu/data/UCF101.php).

# 🏷️ Scope of Application

Datasets are suitable for training and improving **foreground-supervised prompt tuning** methods. For example:

- _Decouple before Align: Visual Disentanglement Enhances Prompt Tuning_
- _FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models_
Also, they are **fully compatible** with other original prompt tuning approaches.

# ⚙ Data Preparation

The datasets include the original images, the `split_zhou_xxx.json` annotations, and **foreground masks**.

The `mask` directory is located under the dataset root, and its internal subpaths are consistent with those of the original images. For example:

- Image directory: `./flowers-102/oxford_flowers/jpg`
- Mask directory: `./flowers-102/mask/oxford_flowers/jpg`
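
The mapping above can be expressed as a small helper. This is only a sketch — the function name and signature are hypothetical, not part of this repository:

```python
import os

def mask_path_for(image_path: str, dataset_root: str) -> str:
    # Insert "mask" between the dataset root and the image's subpath,
    # e.g. flowers-102/oxford_flowers/jpg/x.jpg
    #   -> flowers-102/mask/oxford_flowers/jpg/x.jpg
    rel = os.path.relpath(image_path, dataset_root)
    return os.path.join(dataset_root, "mask", rel)
```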

Additionally, you can prepare the ImageNet dataset from: [[Raw Images](https://www.kaggle.com/c/imagenet-object-localization-challenge/overview/description)] [[annotations](https://drive.google.com/file/d/1-61f_ol79pViBFDG_IDlUQSwoLcn2XXF/view?usp=sharing)] [[val conversion script](https://github.com/soumith/imagenetloader.torch/blob/master/valprep.sh)]

**_NOTE: You can build the file tree by referring to the [[FVG-PT repository](https://github.com/JREion/FVG-PT/blob/main/docs/DATASETS.md)]._**

# 🖥︎ When You Write Your Own Code ...

1. You can refer to `./Dassl.pytorch` in the [FVG-PT repository](https://github.com/JREion/FVG-PT/tree/main) to build a DataLoader that can pass masks.

```Python
from os.path import join

# Masks reuse the image's subpath under the dataset's "mask" directory
mask_path = join(dataset_root, "mask", image_path_suffix)
```
2. During pre-processing, the mask input needs to be resized, e.g.:

```Python
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode

def _transform_pair(self, img, mask):
    # Resize the mask to match the image; nearest-neighbor keeps the
    # mask strictly binary (no interpolated gray values at the edges).
    if mask.size != img.size:
        mask = TF.resize(
            mask,
            [img.height, img.width],
            interpolation=InterpolationMode.NEAREST
        )
    return img, mask
```

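
The `InterpolationMode.NEAREST` choice matters because linear filters blur a binary mask into gray values. A toy NumPy illustration (not the torchvision implementation):

```python
import numpy as np

# Upsample a 2-pixel binary "mask" to 4 pixels, two ways.
mask = np.array([0.0, 1.0])
coords = np.linspace(0, 1, 4)  # output sample positions

# Nearest-neighbor: copy the closest input pixel -> values stay in {0, 1}.
nearest = mask[np.round(coords).astype(int)]

# Linear interpolation: produces in-between gray values at the boundary.
linear = np.interp(coords, [0, 1], mask)

print(nearest.tolist())  # → [0.0, 0.0, 1.0, 1.0]
```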
# Acknowledgements

Our repository is built on [DPC](https://arxiv.org/abs/2503.13443), [FVG-PT](https://github.com/JREion/FVG-PT), [DAPT](https://github.com/SII-Ferenas/DAPT) and [zhengli97](https://huggingface.co/zhengli97/prompt_learning_dataset).