Update README.md
README.md CHANGED

````diff
@@ -18,7 +18,7 @@ tags:
 base_model:
 - sentence-transformers/all-MiniLM-L6-v2
 ---
-# CLOSP-
+# CLOSP-VS
 
 CLOSP (Contrastive Language Optical SAR Pretraining) is a multimodal architecture designed for text-to-image retrieval.
 It creates a unified embedding space for text, Sentinel-2 (MSI), and Sentinel-1 (SAR) data.
@@ -42,7 +42,7 @@ During training, it uses a contrastive objective to align the textual embeddings
 Use the code below to get started with the model.
 
 ```python
-model = AutoModel.from_pretrained("DarthReca/CLOSP-
+model = AutoModel.from_pretrained("DarthReca/CLOSP-VS", trust_remote_code=True)
 ```
 
 ## Citation
````
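For context on what the updated README describes: once text and imagery share one embedding space, text-to-image retrieval reduces to ranking scenes by similarity to a text query. The sketch below uses toy vectors and made-up scene names in place of real CLOSP embeddings, purely to illustrate the ranking step:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings standing in for CLOSP outputs:
# one text-query vector and three scene (MSI/SAR) vectors.
query = [0.9, 0.1, 0.0]
scenes = {
    "scene_a": [0.8, 0.2, 0.1],
    "scene_b": [0.0, 1.0, 0.0],
    "scene_c": [-0.5, 0.1, 0.9],
}

# Rank scenes by cosine similarity to the text query, best match first.
ranking = sorted(scenes, key=lambda s: cosine(query, scenes[s]), reverse=True)
print(ranking)  # scene_a ranks first: it points in nearly the query's direction
```

With real embeddings, `query` would come from the text encoder and each scene vector from the image encoders; the contrastive training objective mentioned in the README is what makes this cross-modal cosine ranking meaningful.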