---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: mit
datasets:
- ILSVRC/imagenet-1k
library_name: timm
---

# LocAtViT: Locality-Attending Vision Transformer

[![arXiv](https://img.shields.io/badge/arXiv-2603.04892-b31b1b.svg)](https://arxiv.org/abs/2603.04892)
[![hfpaper](https://img.shields.io/badge/GitHub-LocAtViT-black)](https://github.com/sinahmr/LocAtViT)

> Pretrain vision transformers so that their patch representations transfer better to dense prediction (e.g., segmentation), without changing the pretraining objective.


## Usage

```python
import timm
model = timm.create_model("hf_hub:sinahmr/locatvit_tiny", pretrained=True)
```

## Citation

```bibtex
@inproceedings{hajimiri2026locatvit,
  author    = {Hajimiri, Sina and Beizaee, Farzad and Shakeri, Fereshteh and Desrosiers, Christian and Ben Ayed, Ismail and Dolz, Jose},
  title     = {Locality-Attending Vision Transformer},
  booktitle = {International Conference on Learning Representations},
  year      = {2026}
}
```