---
license: mit
---

[Guillaume Astruc](https://gastruc.github.io/), [Nicolas Gonthier](https://ngonthier.github.io/), [Clement Mallet](https://www.umr-lastig.fr/clement-mallet/), [Loic Landrieu](https://loiclandrieu.com/)
Official models for [_AnySat: An Earth Observation Model for Any Resolutions, Scales, and Modalities_](https://arxiv.org/pdf/2404.08351.pdf)
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/662b7fba68ed7bbf40bfb0df/Jh9eOnMePFiL84TOzhe86.png" alt="image/png" width="600" height="300">
</p>
## Abstract
We introduce AnySat: a JEPA-based multimodal Earth Observation model that trains simultaneously on diverse datasets with different scales, resolutions (spatial, spectral, temporal), and modality combinations.

For more details and results, please check out our [github](https://github.com/gastruc/AnySat) and [project page](https://gastruc.github.io/projects/omnisat.html).

<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/662b7fba68ed7bbf40bfb0df/2tc0cFdOF2V0_KgptA-qV.png" alt="image/png" width="400" height="200">
</p>
### Inference 🔥
Note that the features will be of size 2*D. If several modalities match the desired resolution, you should pick the most informative one (or modify the code to also concatenate the other modalities).
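The 2*D size presumably comes from pairing each subpatch embedding with its patch-level embedding. Here is a toy NumPy sketch of that kind of concatenation; the shapes and the exact pairing are illustrative assumptions, not the actual AnySat implementation:

```python
# Illustrative only: why per-subpatch features can end up with size 2*D.
# Assumption (not verified against the AnySat code): each subpatch feature
# is the shared patch-level embedding (D dims) concatenated with the
# subpatch-level embedding of the kept modality (D dims).
import numpy as np

D = 8             # embedding dimension (toy value)
n_subpatches = 4  # subpatches per patch (toy value)

patch_feat = np.random.rand(D)                    # one patch-level embedding
subpatch_feats = np.random.rand(n_subpatches, D)  # per-subpatch embeddings

# Broadcast the patch embedding to every subpatch, then concatenate:
dense = np.concatenate(
    [np.broadcast_to(patch_feat, (n_subpatches, D)), subpatch_feats],
    axis=1,
)
print(dense.shape)  # (4, 16): each subpatch feature has size 2*D
```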

Example of use of AnySat:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/662b7fba68ed7bbf40bfb0df/ILWKRPsre8dNy0fAjGjRr.png)

To reproduce results, add new modalities, or run more experiments, see the full code on [github](https://github.com/gastruc/AnySat).
### Citing 💫