DOFA-CLIP-VIT-L-14

This model is a mirror/redistribution of the original GeoLB-ViT-14-SigLIP-so400m-384-EO model.

Original Repository and Links

License and Terms

Creative Commons Attribution 4.0 International (CC BY 4.0)

GeoLangBind model weights are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

License details: https://creativecommons.org/licenses/by/4.0/

Additional Terms on Commercial Use

While CC BY 4.0 permits commercial use, we request that anyone making commercial use of the GeoLangBind model weights first obtain explicit permission from the authors. This request is intended to help ensure ethical and responsible use of the model.

For inquiries regarding commercial usage, please contact: xiongzhitong@gmail.com

Disclaimer

This model is provided "as is" without warranty of any kind, express or implied. The authors are not responsible for any use or misuse of this model.

Citation

If you use this model in your research, please cite the original work:

@misc{xiong2025dofaclipmultimodalvisionlanguagefoundation,
      title={DOFA-CLIP: Multimodal Vision-Language Foundation Models for Earth Observation}, 
      author={Zhitong Xiong and Yi Wang and Weikang Yu and Adam J Stewart and Jie Zhao and Nils Lehmann and Thomas Dujardin and Zhenghang Yuan and Pedram Ghamisi and Xiao Xiang Zhu},
      year={2025},
      eprint={2503.06312},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.06312}, 
}