DaViT: Dual Attention Vision Transformers
Paper: DaViT: Dual Attention Vision Transformers (arXiv:2204.03645)
DaViT-Base (ws=4, channel_attn_v2) trained from scratch on ImageNet-1K at 128×128 resolution.
```python
import timm

# Load the pretrained checkpoint from the Hugging Face Hub
model = timm.create_model('hf-hub:PRadecki/davit-base-ws4-cav2-in1k-128', pretrained=True)
model.eval()
```