---
license: cc-by-nc-4.0
task_categories:
  - image-classification
  - video-classification
  - object-detection
  - image-segmentation
library_name: timm
language:
  - en
tags:
  - benchmark
  - parameter-efficient-fine-tuning
  - peft
  - computer-vision
  - vision
  - fine-tuning
---

# V-PETL Bench: A Unified Visual Parameter-Efficient Transfer Learning Benchmark

This repository contains V-PETL Bench, a unified benchmark for evaluating Parameter-Efficient Fine-Tuning (PEFT) methods for pre-trained vision models. The benchmark was introduced in the paper *Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey and Benchmark*.

- **Project Page:** https://v-petl-bench.github.io/
- **GitHub Repository:** https://github.com/synbol/Parameter-Efficient-Transfer-Learning-Benchmark


## Introduction

Parameter-efficient transfer learning (PETL) methods show promise in adapting a pre-trained model to various downstream tasks while training only a few parameters. In the computer vision (CV) domain, numerous PETL algorithms have been proposed, but applying or comparing them directly remains inconvenient. To address this challenge, we construct a Unified Visual PETL Benchmark (V-PETL Bench) for the CV domain by selecting 30 diverse, challenging, and comprehensive datasets from image recognition, video action recognition, and dense prediction tasks. On these datasets, we systematically evaluate 25 dominant PETL algorithms and open-source a modular and extensible codebase for fair evaluation of these algorithms.

## Dataset Details

V-PETL Bench comprises a comprehensive collection of datasets across three main vision tasks:

1. **Image Classification Datasets**
   - **Fine-Grained Visual Classification (FGVC):** comprises 5 fine-grained visual classification datasets (CUB-200-2011, NABirds, Oxford Flowers, Stanford Dogs, Stanford Cars). The split dataset can be found on Hugging Face: XiN0919/FGVC.
   - **Visual Task Adaptation Benchmark (VTAB):** comprises 19 diverse visual classification datasets. The processed data can be downloaded from Hugging Face: XiN0919/VTAB-1k.
2. **Video Action Recognition Datasets**
   - **Kinetics-400:** requires downloading from the official links and preprocessing.
   - **Something-Something V2 (SSv2):** requires downloading from the official links and preprocessing.
3. **Dense Prediction Datasets**
   - **MS-COCO:** available from the official COCO dataset website.
   - **ADE20K:** training and validation sets are available.
   - **PASCAL VOC:** available from the official PASCAL VOC website, with extra augmentation data.

Detailed data preparation instructions and original download links for all datasets can be found in the GitHub repository's Data Preparation section.
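For example, the processed VTAB-1k data can be pulled straight from the Hugging Face Hub. Below is a minimal sketch using `huggingface_hub`; the local target directory `./vtab-1k` is an assumption, so point the demo's `data_path` at wherever you download it:

```python
# Minimal sketch: fetch the processed VTAB-1k data from the Hugging Face Hub.
# Assumes the XiN0919/VTAB-1k dataset repo contains the pre-split directories
# expected by the benchmark's dataloaders; ./vtab-1k is a hypothetical target.
from huggingface_hub import snapshot_download

data_path = snapshot_download(
    repo_id="XiN0919/VTAB-1k",
    repo_type="dataset",
    local_dir="./vtab-1k",
)
print(f"VTAB-1k available at: {data_path}")
```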

## Quick Start (Training and Evaluation Demo)

To get started with the V-PETL Bench, follow these steps.

### Install V-PETL Bench

First, clone the repository:

```bash
git clone https://github.com/synbol/Parameter-Efficient-Transfer-Learning-Benchmark.git
```

### Environment Setup

V-PETL Bench is built on PyTorch. You can create a conda environment and install the required packages:

```bash
conda create --name v-petl-bench python=3.8
conda activate v-petl-bench
cd Parameter-Efficient-Transfer-Learning-Benchmark
pip install -r requirements.txt
```
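As a quick sanity check that the environment resolved correctly (a minimal check; the exact pinned versions come from `requirements.txt`):

```python
# Sanity check: confirm the core dependencies import and report their versions.
import torch
import timm

print("PyTorch:", torch.__version__)
print("timm:", timm.__version__)
print("CUDA available:", torch.cuda.is_available())
```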

### Training and Evaluation Demo

Here's how to run a training and evaluation demo locally, using LoRA on VTAB CIFAR-100 as an example:

```python
import sys
sys.path.append("Parameter-Efficient-Transfer-Learning-Benchmark")
import torch
from ImageClassification import utils
from ImageClassification.dataloader import vtab
from ImageClassification.train import train
# import the LR scheduler and the LoRA ViT definition (registers the LoRA model variants with timm)
from timm.scheduler.cosine_lr import CosineLRScheduler
from ImageClassification.models import vision_transformer_lora
import timm


# path to save model and logs
exp_base_path = '../output'
utils.mkdirss(exp_base_path)

# create logger
logger = utils.create_logger(log_path=exp_base_path, log_name='training')

# load the benchmark config for LoRA on VTAB CIFAR-100
config = utils.get_config('model_lora', 'vtab', 'cifar100')

# get vtab dataset
# IMPORTANT: Replace with your actual data path where VTAB-1k is downloaded
data_path = '/home/ma-user/work/haozhe/synbol/vtab-1k'
train_dl, test_dl = vtab.get_data(data_path, 'cifar100', logger, evaluate=False, train_aug=config['train_aug'], batch_size=config['batch_size'])

# get pretrained model
# IMPORTANT: Ensure 'ViT-B_16.npz' is downloaded and placed in './released_models/' or adjust the path
# e.g., wget https://storage.googleapis.com/vit_models/imagenet21k/ViT-B_16.npz -O ./released_models/ViT-B_16.npz
model = timm.models.create_model('vit_base_patch16_224_in21k_lora', checkpoint_path='./released_models/ViT-B_16.npz', drop_path_rate=0.1, tuning_mode='lora')
model.reset_classifier(config['class_num'])

# collect trainable parameters: only the LoRA matrices ('linear_a', 'linear_b') and the classifier head are trained
trainable = []
for n, p in model.named_parameters():
    if 'linear_a' in n or 'linear_b' in n or 'head' in n:
        trainable.append(p)
        logger.info(str(n))
    else:
        p.requires_grad = False
opt = torch.optim.AdamW(trainable, lr=1e-4, weight_decay=5e-2)
scheduler = CosineLRScheduler(opt, t_initial=config['epochs'], warmup_t=config['warmup_epochs'], lr_min=1e-5, warmup_lr_init=1e-6, cycle_decay=0.1)

# cross-entropy loss for classification
criterion = torch.nn.CrossEntropyLoss()

# training
model = train.train(config, model, criterion, train_dl, opt, scheduler, logger, config['epochs'], 'vtab', 'cifar100')

# evaluation
eval_acc = train.test(model, test_dl, 'vtab')
print(f"Evaluation Accuracy for LoRA on VTAB Cifar100: {eval_acc}")

You can also train with a specific PETL algorithm on a dataset using the provided scripts (e.g., `python train/train_model_sct.py`). For more details on running experiments, as well as pre-trained model checkpoints and benchmark results, please refer to the official GitHub repository.

## Citation

If you find this benchmark and repository useful for your research, please cite the paper:

```bibtex
@article{xin2024bench,
  title={V-PETL Bench: A Unified Visual Parameter-Efficient Transfer Learning Benchmark},
  author={Yi Xin and Siqi Luo and Xuyang Liu and Haodi Zhou and Xinyu Cheng and Christina Luoluo Lee and Junlong Du and Yuntao Du and Haozhe Wang and MingCai Chen and Ting Liu and Guimin Hu and Zhongwei Wan and Rongchao Zhang and Aoxue Li and Mingyang Yi and Xiaohong Liu},
  year={2024}
}
```