---
license: apache-2.0
language:
- en
---

# STRADAViT

Self-supervised Vision Transformers for Radio Astronomy Discovery Algorithms

## License

This model is released under the Apache License 2.0.

## Citation

If you use STRADAViT in research, please cite the associated work:

```bibtex
@article{demarco2026stradavit,
  title         = {STRADAViT: Towards a Foundational Model for Radio Astronomy through Self-Supervised Transfer},
  author        = {DeMarco, Andrea and Fenech Conti, Ian and Camilleri, Hayley and Bushi, Ardiana and Riggi, Simone},
  year          = {2026},
  note          = {Under review},
  eprint        = {2603.29660},
  archivePrefix = {arXiv},
  primaryClass  = {astro-ph.IM},
  url           = {https://arxiv.org/abs/2603.29660v3}
}
```

## Acknowledgement

This model was developed as part of the STRADA project on self-supervised transformers for radio astronomy. If you build on this model, please acknowledge the project and cite the associated publication.

## Intended Use

STRADAViT is intended as a domain-adapted starting point for radio astronomy imaging tasks. It is suitable for:

- frozen-backbone transfer via linear probing
- downstream fine-tuning for morphology classification
- reuse as a vision backbone in broader radio astronomy pipelines, including detection and segmentation models
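As an illustration of the first use case, frozen-backbone linear probing follows a standard pattern: freeze all backbone parameters, attach a linear head, and train only the head on extracted features. The sketch below uses a hypothetical stand-in backbone (not the actual STRADAViT architecture, embedding size, or class names) purely to show the probing loop; any module that maps images to a feature vector slots in the same way.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a ViT-style backbone; the real STRADAViT
# classes and embedding dimension are defined in the project repository.
class DummyBackbone(nn.Module):
    def __init__(self, embed_dim=384):
        super().__init__()
        # Patchify with a strided convolution, as ViTs commonly do.
        self.patch_embed = nn.Conv2d(1, embed_dim, kernel_size=16, stride=16)
        self.embed_dim = embed_dim

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, D)
        return tokens.mean(dim=1)  # global-average-pooled features (B, D)

backbone = DummyBackbone()

# Freeze the backbone: only the linear head receives gradient updates.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

num_classes = 4  # illustrative; set to your morphology-class count
head = nn.Linear(backbone.embed_dim, num_classes)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random data to demonstrate the loop.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, num_classes, (8,))

with torch.no_grad():
    feats = backbone(images)  # features computed without gradients
logits = head(feats)
loss = criterion(logits, labels)
loss.backward()               # gradients flow into the head only
optimizer.step()
```

Because features are computed under `torch.no_grad()`, they can also be precomputed once and cached, which makes linear probing cheap compared with full fine-tuning.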

## Limitations

STRADAViT is trained for transfer on radio astronomy imaging and should not be assumed to outperform all off-the-shelf vision backbones in every downstream setting. In the current study:

- gains are strongest under frozen-backbone evaluation
- fine-tuning gains are more dataset-dependent
- performance remains sensitive to view generation and dataset heterogeneity
- broader validation on additional surveys and downstream tasks is still needed

## Class Files

Hugging Face-compatible model classes for using STRADAViT can be found on [GitHub](https://github.com/andreademarco86/stradavit).