Improve model card: add metadata and usage examples (#3)
Opened by nielsr (HF Staff)

README.md CHANGED
---
license: cc-by-4.0
pipeline_tag: image-feature-extraction
library_name: pytorch
---

# DOFA: Neural Plasticity-Inspired Multimodal Foundation Model for Earth Observation

[📄 Paper](https://arxiv.org/abs/2403.15356) - [💻 Code](https://github.com/zhu-xlab/DOFA)

**DOFA** (Dynamic One-For-All) is a unified, multimodal foundation framework designed for diverse vision tasks in Earth observation (EO). Inspired by neural plasticity, DOFA uses a wavelength-conditioned dynamic hypernetwork to flexibly process inputs from five distinct satellite sensors. By continually pretraining on five EO modalities, DOFA achieves state-of-the-art performance across multiple downstream tasks and generalizes well to unseen modalities.
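The wavelength-conditioned hypernetwork idea can be sketched in plain PyTorch. This is an illustrative toy, not DOFA's actual implementation: the class name, layer sizes, and the SAR wavelength placeholders are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WavelengthConditionedPatchEmbed(nn.Module):
    """Toy sketch (not DOFA's actual code): a small hypernetwork maps each
    band's central wavelength to that band's patch-embedding weights, so one
    model can accept inputs with any number of channels."""

    def __init__(self, patch_size=16, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.patch_size = patch_size
        self.embed_dim = embed_dim
        # Hypernetwork: scalar wavelength -> per-band projection weights
        self.hyper = nn.Sequential(
            nn.Linear(1, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, embed_dim * patch_size * patch_size),
        )
        self.bias = nn.Parameter(torch.zeros(embed_dim))

    def forward(self, x, wavelengths):
        # x: (B, C, H, W); wavelengths: C central wavelengths in micrometers
        b, c, h, w = x.shape
        wl = torch.tensor(wavelengths, dtype=x.dtype, device=x.device).view(c, 1)
        weight = self.hyper(wl)  # (C, embed_dim * patch_size**2)
        weight = weight.view(c, self.embed_dim, self.patch_size, self.patch_size)
        weight = weight.permute(1, 0, 2, 3)  # (embed_dim, C, p, p)
        tokens = F.conv2d(x, weight, self.bias, stride=self.patch_size)
        return tokens.flatten(2).transpose(1, 2)  # (B, num_patches, embed_dim)

embed = WavelengthConditionedPatchEmbed()
rgb = embed(torch.rand(2, 3, 224, 224), [0.48, 0.56, 0.64])  # 3-band optical
sar = embed(torch.rand(2, 2, 224, 224), [5.6e4, 5.6e4])      # 2-band SAR (placeholder values)
```

Both calls go through the same parameters; only the embedding weights generated per band differ, which is what lets a single model ingest a varying number of channels.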

## Model Details

**What is DOFA:** DOFA is a unified multimodal foundation model for different data modalities in remote sensing and Earth observation.

**Differences with existing foundation models:** DOFA is pre-trained using five different data modalities in remote sensing and Earth observation. It can handle images with any number of input channels.

DOFA is inspired by neuroplasticity, an important brain mechanism for adjusting to new experiences or environmental shifts; we design DOFA to emulate this mechanism for processing multimodal EO data.

For more details, please take a look at the paper [Neural Plasticity-Inspired Foundation Model for Observing the Earth Crossing Modalities](https://arxiv.org/abs/2403.15356).

The increasing number of specialized foundation models makes it difficult to select the most appropriate one for a specific downstream task.

DOFA supports input images with any number of channels using our pre-trained foundation models.
The examples in the GitHub repo [DOFA](https://github.com/zhu-xlab/DOFA) show how to use DOFA for Sentinel-1 (SAR), Sentinel-2, and NAIP RGB data.
We will add example usage for Gaofen multispectral and hyperspectral data soon.
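As a quick reference for the per-band wavelength lists these examples expect, here is a small helper. The NAIP values follow the TorchGeo example in this card and the Sentinel-2 values are the standard central wavelengths of bands B2, B3, B4, and B8; the dictionary keys and the helper itself are made up for illustration, and the exact values used in DOFA pretraining (SAR in particular) should be taken from the repo.

```python
# Per-band central wavelengths in micrometers (illustrative; verify against
# the DOFA repo, especially for SAR, which is not covered here).
SENSOR_WAVELENGTHS = {
    "naip": [0.48, 0.56, 0.64, 0.81],             # blue, green, red, NIR
    "sentinel2_10m": [0.49, 0.56, 0.665, 0.842],  # S2 bands B2, B3, B4, B8
}

def wavelengths_for(sensor: str, num_channels: int) -> list:
    """Look up a sensor's wavelength list and check it matches the input channels."""
    wl = SENSOR_WAVELENGTHS[sensor]
    if len(wl) != num_channels:
        raise ValueError(f"{sensor}: expected {len(wl)} channels, got {num_channels}")
    return wl

print(wavelengths_for("naip", 4))  # [0.48, 0.56, 0.64, 0.81]
```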

---

- **Developed by:** Technical University of Munich, [Chair of Data Science in Earth Observation](https://www.asg.ed.tum.de/en/sipeo/home/)
- **Funded by:** Ekapex, ML4Earth
- **Model type:** Multimodal foundation model for remote sensing and Earth observation

### Model Sources [optional]

- **Repository:** https://github.com/zhu-xlab/DOFA
- **Paper [optional]:** https://arxiv.org/abs/2403.15356
- **Demo [optional]:** https://github.com/ShadowXZT/DOFA-pytorch/blob/master/demo.ipynb

---

Table 1: Linear probing results on six classification tasks. All models are trained

---

## How to Use

Please refer to the GitHub repo [DOFA](https://github.com/zhu-xlab/DOFA) for more details, including `demo.ipynb` for extensive usage examples.

### Using `torch.hub` to Load the DOFA ViT Base Model

This snippet loads the **DOFA ViT Base** model from a GitHub repository that provides a `hubconf.py` entry point. The model weights are hosted on Hugging Face and fetched via a direct download URL, so no additional dependencies beyond PyTorch are required.

```python
import torch

# Load DOFA ViT Base via the 'vit_base_dofa' entry point defined in hubconf.py
model = torch.hub.load(
    'zhu-xlab/DOFA',
    'vit_base_dofa',
    pretrained=True,
)

model = model.cuda()  # move to GPU (optional; requires CUDA)
model.eval()
```

Now the model is ready for inference or further fine-tuning.
If you would like to fine-tune DOFA on different downstream tasks, please refer to [DOFA-pytorch](https://github.com/xiong-zhitong/DOFA-pytorch).
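The linear probing protocol behind the Table 1 results can be sketched generically: freeze the backbone and train only a linear head on its features. Everything below is a stand-in for illustration; the flattening backbone, the 768-dimensional feature size, the class count, and the optimizer settings are assumptions rather than the paper's setup (a real probe would use a loaded DOFA encoder as `backbone`).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a pretrained, frozen feature extractor (a real probe would
# use a loaded DOFA encoder here). 768 dims mimics a ViT-Base-style output.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 768))
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

head = nn.Linear(768, 10)  # linear probe: the only trainable part
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.rand(8, 3, 32, 32)         # dummy batch
labels = torch.randint(0, 10, (8,))

with torch.no_grad():
    feats = backbone(x)              # frozen features, no gradient tracking
logits = head(feats)
loss = F.cross_entropy(logits, labels)
loss.backward()                      # gradients reach only the head
opt.step()
```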

### TorchGeo

Alternatively, DOFA can be used via the [TorchGeo](https://github.com/microsoft/torchgeo) library:

```python
import torch
from torchgeo.models import DOFABase16_Weights, dofa_base_patch16_224

# Example NAIP image (wavelengths in μm)
x = torch.rand(2, 4, 224, 224)
wavelengths = [0.48, 0.56, 0.64, 0.81]

# Use pre-trained model weights
model = dofa_base_patch16_224(weights=DOFABase16_Weights.DOFA_MAE)

# Make a prediction (model may need to be fine-tuned first)
y = model(x, wavelengths)
```

## Citation

If you find DOFA useful in your research, please kindly cite our paper:

```
@article{xiong2024neural,
  title={Neural Plasticity-Inspired Foundation Model for Observing the {Earth} Crossing Modalities},