The standard datasets (except ImageNet) used for CLIP-based Prompt Tuning research.

Based on the original datasets, this repository adds **foreground segmentation masks** (generated by [SEEM](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)) for all raw images.

<div align="left">
<img src="_mask_examples.png" alt="mask examples" width="50%">
</div>

Datasets contain: [ImageNet](https://image-net.org/challenges/LSVRC/2012/index.php), [Caltech101](https://data.caltech.edu/records/mzrjq-6wc02), [Oxford Pets](https://www.robots.ox.ac.uk/~vgg/data/pets/), [StanfordCars](https://ai.stanford.edu/~jkrause/cars/car_dataset.html), [Flowers102](https://www.robots.ox.ac.uk/~vgg/data/flowers/102/), [Food101](https://vision.ee.ethz.ch/datasets_extra/food-101/), [FGVC Aircraft](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/), [SUN397](http://vision.princeton.edu/projects/2010/SUN/), [DTD](https://www.robots.ox.ac.uk/~vgg/data/dtd/), [EuroSAT](https://github.com/phelber/EuroSAT) and [UCF101](https://www.crcv.ucf.edu/data/UCF101.php).

# Scope of Application

Datasets are suitable for training and improving **foreground-supervised prompt tuning** approaches.

Also, they are **fully compatible** with other original prompt tuning approaches.

# Data Preparation

The datasets include the original images, the `split_zhou_xxx.json` annotations, and **foreground masks**.

The `mask` directory is located under the dataset root, and its internal subpath mirrors the image directory, e.g.:

- Image directory: `./flowers-102/oxford_flowers/jpg`
- Mask directory: `./flowers-102/mask/oxford_flowers/jpg`

```Python
from os.path import join

mask_path = join(dataset_root, "mask", image_path_suffix)
```
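The re-rooting convention above can be wrapped in a small helper; this is a sketch, and the function name `mask_path_for` is hypothetical, not part of the repository:

```python
from os.path import join, relpath

def mask_path_for(image_path, dataset_root):
    # Hypothetical helper: take the image's subpath relative to the
    # dataset root and re-root it under "mask/".
    suffix = relpath(image_path, dataset_root)
    return join(dataset_root, "mask", suffix)
```

For example, `flowers-102/oxford_flowers/jpg/image_00001.jpg` maps to `flowers-102/mask/oxford_flowers/jpg/image_00001.jpg`.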

For the foreground masks, the RGB value of the foreground region is `[255, 255, 255]` and that of the background region is `[0, 0, 0]`. The mask's shorter side is fixed to 512 px, and its aspect ratio matches the corresponding raw image.
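Because the masks are pure black-and-white RGB images, a single channel is enough to recover a binary map. A minimal sketch (the function name and threshold are illustrative, not the repository's loader):

```python
import numpy as np

def binarize_mask(rgb_mask):
    # rgb_mask: H x W x 3 uint8 array loaded from a mask image.
    # Foreground pixels are [255, 255, 255] and background [0, 0, 0],
    # so thresholding one channel yields the binary mask.
    return (rgb_mask[..., 0] > 127).astype(np.float32)
```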

During pre-processing, the mask needs to be resized to match its image, e.g.:

```Python
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode

def _transform_pair(self, img, mask):
    # Nearest-neighbour interpolation keeps the resized mask binary.
    if mask.size != img.size:
        mask = TF.resize(
            mask,
            [img.height, img.width],
            interpolation=InterpolationMode.NEAREST
        )
    return img, mask
```
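The same pair-wise discipline applies to random augmentations: sample the parameters once and apply them to both the image and the mask so the pair stays aligned. A NumPy sketch under that assumption (the function name and array-based signature are illustrative):

```python
import random
import numpy as np

def paired_random_crop(img, mask, size):
    # Sample the crop offsets once, then crop image and mask identically.
    h, w = img.shape[:2]
    th, tw = size
    top = random.randint(0, h - th)
    left = random.randint(0, w - tw)
    return img[top:top + th, left:left + tw], mask[top:top + th, left:left + tw]
```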

Additionally, you can prepare the ImageNet dataset from: [[Raw Images](https://www.kaggle.com/c/imagenet-object-localization-challenge/overview/description)] [[annotations](https://drive.google.com/file/d/1-61f_ol79pViBFDG_IDlUQSwoLcn2XXF/view?usp=sharing)] [[val conversion script](https://www.kaggle.com/c/imagenet-object-localization-challenge/overview/description)]

# Acknowledgements

Our repository is built on [DPC](https://arxiv.org/abs/2503.13443), [DAPT](https://github.com/SII-Ferenas/DAPT) and [zhengli97](https://huggingface.co/zhengli97/prompt_learning_dataset).
|