Improve model card with metadata and links
#1 by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,3 +1,32 @@
----
--license: apache-2.0
----
+---
+license: apache-2.0
+library_name: transformers
+pipeline_tag: image-text-to-text
+---
+
+# EarthMind: Towards Multi-Granular and Multi-Sensor Earth Observation with Large Multimodal Models
+
+This repository contains the EarthMind-4B model, a novel vision-language framework for multi-granular and multi-sensor Earth Observation (EO) data understanding, as presented in the paper [EarthMind: Towards Multi-Granular and Multi-Sensor Earth Observation with Large Multimodal Models](https://huggingface.co/papers/2506.01667). EarthMind features Spatial Attention Prompting (SAP) and Cross-modal Fusion for enhanced EO data understanding.
+
+**Code:** [https://github.com/sy1998/EarthMind](https://github.com/sy1998/EarthMind)
+
+**Sample Usage:** (see the GitHub README for detailed instructions)
+
+```python
+import argparse
+import os
+
+from PIL import Image
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+import cv2
+try:
+    from mmengine.visualization import Visualizer
+except ImportError:
+    Visualizer = None
+    print("Warning: mmengine is not installed, visualization is disabled.")
+
+# ... (rest of the sample code from GitHub README) ...
+```
+
+**Tags:** image-text-to-text, earth-observation, multi-modal, vision-language
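
For context, the YAML front matter this PR adds (`license`, `library_name`, `pipeline_tag`) is the metadata the Hub reads to render the license badge, the library widget, and the pipeline tag on the model page. A minimal sketch of what that block contains, using plain string handling rather than any Hub library (the parser below is illustrative only, not how the Hub actually parses model cards):

```python
# The front matter added by this PR, verbatim.
FRONT_MATTER = """\
---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---
"""

def parse_front_matter(text: str) -> dict:
    """Naively extract key: value pairs between the --- delimiters."""
    inside = text.split("---")[1]  # content between the first pair of ---
    meta = {}
    for line in inside.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

meta = parse_front_matter(FRONT_MATTER)
print(meta["pipeline_tag"])  # image-text-to-text
```

The `pipeline_tag: image-text-to-text` entry is what files the model under the image-text-to-text task on the Hub, matching the tags listed at the end of the card.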