BiliSakura committed
Commit 3c23d23 · verified · 1 Parent(s): 5e91724

Update README with citation and original repo links

Files changed (1): README.md (+34 −0)
README.md (new file):
# Remote-CLIP-ViT-L-14

This model is a mirror/redistribution of the original [RemoteCLIP](https://huggingface.co/chendelong/RemoteCLIP) model.

## Original Repository and Links
- **Original Hugging Face Model**: [chendelong/RemoteCLIP](https://huggingface.co/chendelong/RemoteCLIP)
- **Official GitHub Repository**: [ChenDelong1999/RemoteCLIP](https://github.com/ChenDelong1999/RemoteCLIP)

## Description
RemoteCLIP is a vision-language foundation model for remote sensing, trained on a large-scale dataset of remote sensing image-text pairs. It is based on the CLIP architecture and is designed to handle the unique characteristics of remote sensing imagery.

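Since RemoteCLIP follows the CLIP architecture, inference works the same way: images and texts are embedded into a shared space, and class probabilities come from a softmax over scaled cosine similarities. The toy sketch below illustrates that scoring step only; the vectors and the `logit_scale` value stand in for real model embeddings and are not taken from this repository.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def clip_scores(image_emb, text_embs, logit_scale=100.0):
    """CLIP-style zero-shot scores: softmax over scaled cosine similarities."""
    img = normalize(image_emb)
    sims = [sum(a * b for a, b in zip(img, normalize(t))) for t in text_embs]
    return softmax([logit_scale * s for s in sims])

# Toy embeddings: the image is closest to the first text prompt,
# so the first class receives almost all of the probability mass.
probs = clip_scores([0.9, 0.1], [[1.0, 0.0], [0.0, 1.0]])
```

In the real model the embeddings come from the image and text encoders and `logit_scale` is a learned parameter; see the official GitHub repository for actual loading and inference code.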
## Citation
If you use this model in your research, please cite the original work:

```bibtex
@article{remoteclip,
  author  = {Fan Liu and
             Delong Chen and
             Zhangqingyun Guan and
             Xiaocong Zhou and
             Jiale Zhu and
             Qiaolin Ye and
             Liyong Fu and
             Jun Zhou},
  title   = {RemoteCLIP: {A} Vision Language Foundation Model for Remote Sensing},
  journal = {{IEEE} Transactions on Geoscience and Remote Sensing},
  volume  = {62},
  pages   = {1--16},
  year    = {2024},
  url     = {https://doi.org/10.1109/TGRS.2024.3390838},
  doi     = {10.1109/TGRS.2024.3390838},
}
```