Update README.md
---
license: mit
---

# Datasets:

Base-to-Novel: [ImageNet-1K](https://image-net.org/challenges/LSVRC/2012/index.php), [Caltech101](https://data.caltech.edu/records/mzrjq-6wc02), [Oxford Pets](https://www.robots.ox.ac.uk/~vgg/data/pets/), [StanfordCars](https://ai.stanford.edu/~jkrause/cars/car_dataset.html), [Flowers102](https://www.robots.ox.ac.uk/~vgg/data/flowers/102/), [Food101](https://vision.ee.ethz.ch/datasets_extra/food-101/), [FGVC Aircraft](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/), [SUN397](http://vision.princeton.edu/projects/2010/SUN/), [DTD](https://www.robots.ox.ac.uk/~vgg/data/dtd/), [EuroSAT](https://github.com/phelber/EuroSAT), [UCF101](https://www.crcv.ucf.edu/data/UCF101.php).

Due to various factors, the links to some datasets may be outdated or invalid.

To make it easy for you to download these datasets, we maintain a repository on HuggingFace, which contains all the datasets to be used (except ImageNet). Each dataset also includes the corresponding split_zhou_xx.json file.

# Instructions for downloading these datasets:

## Using the huggingface-cli command-line tool:

Install the CLI tool if not already installed.
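For example, assuming a pip-based Python environment (the `[cli]` extra pulls in the command-line dependencies):

```shell
# Install the Hugging Face Hub client together with its CLI dependencies
pip install -U "huggingface_hub[cli]"

# Confirm the tool is available on PATH
huggingface-cli --help
```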

Download the datasets.

`huggingface-cli download zhengli97/prompt_learning_dataset`
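Note that `huggingface-cli download` targets a model repo by default; if the repository is hosted as a dataset repo, the standard `--repo-type` flag selects it, and `--local-dir` writes the files to a directory of your choice. A sketch, where `./data` is an assumed target path:

```shell
# Download the repository into ./data
# (--repo-type dataset and ./data are assumptions; adjust to your setup)
huggingface-cli download zhengli97/prompt_learning_dataset \
  --repo-type dataset \
  --local-dir ./data
```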

<hr/>

# Some projects from our lab that may help you get familiar with prompt learning:

- Open Source Paper List: https://github.com/zhengli97/Awesome-Prompt-Adapter-Learning-for-VLMs
- Chinese video talk: "Research on Prompt Learning Methods for the CLIP Vision-Language Model", [link](https://www.techbeat.net/talk-info?id=915)
- Published Papers:
  - **Advancing Textual Prompt Learning with Anchored Attributes.** ICCV 2025. [[Paper](https://arxiv.org/abs/2412.09442)] [[Project Page](https://zhengli97.github.io/ATPrompt/)] [[Code](https://github.com/zhengli97/ATPrompt)] [[Chinese Explanation](https://zhuanlan.zhihu.com/p/11787739769)] [[Chinese Translation](https://github.com/zhengli97/ATPrompt/blob/main/docs/ATPrompt_chinese_version.pdf)]
  - **PromptKD: Unsupervised Prompt Distillation for Vision-Language Models.** CVPR 2024. [[Paper](https://arxiv.org/abs/2403.02781)] [[Project Page](https://zhengli97.github.io/PromptKD)] [[Code](https://github.com/zhengli97/PromptKD)] [[Chinese Explanation](https://zhuanlan.zhihu.com/p/684269963)] [[Chinese Translation](https://github.com/zhengli97/PromptKD/blob/main/docs/PromptKD_chinese_version.pdf)]
  - **Cascade Prompt Learning for Vision-Language Model Adaptation.** ECCV 2024. [[Paper](https://arxiv.org/abs/2409.17805)] [[Code](https://github.com/megvii-research/CasPL)] [[Chinese Explanation](https://zhuanlan.zhihu.com/p/867291664)]
  - **Fine-Grained Visual Prompting.** NeurIPS 2023. [[Paper](https://arxiv.org/abs/2306.04356)] [[Code](https://github.com/ylingfeng/FGVP)]