nielsr (HF Staff) committed · verified
Commit 4a3661b · Parent: ec93003

Improve model card with paper details, code, usage, and pipeline tag


This PR significantly enhances the model card for `UrbanFusion` by:
- Adding the full paper title and a link to its Hugging Face page: [UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations](https://huggingface.co/papers/2510.13774).
- Including the paper's abstract to give users a clear understanding of the model.
- Providing a direct link to the official GitHub repository for code access.
- Integrating a minimal usage example from the GitHub README to demonstrate how to perform location encoding.
- Adding `pipeline_tag: image-feature-extraction` to the metadata so the model is easier to discover on the Hub.
- Adding `library_name: pytorch` based on the usage example showing `import torch`.
- Including the BibTeX citation for proper referencing.

Please review these additions and merge the pull request.

Files changed (1)
README.md +59 -4
README.md CHANGED
@@ -1,13 +1,68 @@
---
- license: cc-by-4.0
- datasets:
- - DominikM198/PP2-M
base_model:
- facebook/vit-mae-base
+ datasets:
+ - DominikM198/PP2-M
+ license: cc-by-4.0
tags:
- OSM
- OpenStreetMap
- RepresentationLearning
- Basemaps
- Cartography
- ---
+ pipeline_tag: image-feature-extraction
+ library_name: pytorch
+ ---
+
+ # UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations
+
+ This model is presented in the paper [UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations](https://huggingface.co/papers/2510.13774).
+
+ ## Abstract
+ Forecasting urban phenomena such as housing prices and public health indicators requires the effective integration of various geospatial data. Current methods primarily utilize task-specific models, while recent foundation models for spatial representations often support only limited modalities and lack multimodal fusion capabilities. To overcome these challenges, we present UrbanFusion, a Geo-Foundation Model (GeoFM) that features Stochastic Multimodal Fusion (SMF). The framework employs modality-specific encoders to process different types of inputs, including street view imagery, remote sensing data, cartographic maps, and points of interest (POIs) data. These multimodal inputs are integrated via a Transformer-based fusion module that learns unified representations. An extensive evaluation across 41 tasks in 56 cities worldwide demonstrates UrbanFusion’s strong generalization and predictive performance compared to state-of-the-art GeoAI models. Specifically, it 1) outperforms prior foundation models on location-encoding, 2) allows multimodal input during inference, and 3) generalizes well to regions unseen during training. UrbanFusion can flexibly utilize any subset of available modalities for a given location during both pretraining and inference, enabling broad applicability across diverse data availability scenarios.
+
+ ## Code
+ The official implementation and training scripts are available in the [UrbanFusion GitHub repository](https://github.com/DominikM198/UrbanFusion).
+
+ ## Minimal Usage Example
+ Using the pretrained model for location encoding is straightforward. The example below shows how to load the model and generate representations from geographic coordinates (latitude and longitude) alone, without any additional input modalities.
+ ```python
+ import torch
+ from huggingface_hub import hf_hub_download
+ from srl.multi_modal_encoder.load import get_urbanfusion
+
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+ # Coordinates: a batch of 32 random (lat, lon) pairs for demonstration
+ coords = torch.randn(32, 2).to(device)
+
+ # Placeholders for the other modalities (SV, RS, OSM, POI)
+ placeholder = torch.empty(32).to(device)
+ inputs = [coords, placeholder, placeholder, placeholder, placeholder]
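+ # (the placeholders carry no data; these modalities are masked out below)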
+
+ # Mask all but coordinates (indices: 0=coords, 1=SV, 2=RS, 3=OSM, 4=POI)
+ mask_indices = [1, 2, 3, 4]
+
+ # Load the pretrained UrbanFusion model from the Hub
+ ckpt = hf_hub_download("DominikM198/UrbanFusion", "UrbanFusion/UrbanFusion.ckpt")
+ model = get_urbanfusion(ckpt, device=device).eval()
+
+ # Encode inputs (output shape: [32, 768])
+ with torch.no_grad():
+     embeddings = model(inputs, mask_indices=mask_indices, return_representations=True).cpu()
+ ```
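+ The resulting embeddings can serve directly as features for downstream prediction tasks. Below is a minimal sketch of a linear probe, assuming scikit-learn is available; the targets `y` are synthetic stand-ins for real labels (e.g., housing prices):
+ ```python
+ import numpy as np
+ from sklearn.linear_model import Ridge
+ from sklearn.metrics import r2_score
+ from sklearn.model_selection import train_test_split
+
+ X = embeddings.numpy()   # [32, 768] location features from the example above
+ y = np.random.randn(32)  # synthetic targets, for illustration only
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
+ probe = Ridge(alpha=1.0).fit(X_train, y_train)
+ print("Held-out R^2:", r2_score(y_test, probe.predict(X_test)))
+ ```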
+ For a more comprehensive guide, including instructions on applying the model to downstream tasks and incorporating additional modalities (with options for downloading, preprocessing, and using contextual prompts with or without precomputed features), see the following tutorials:
+
+ - [`UrbanFusion_coordinates_only.ipynb`](https://github.com/DominikM198/UrbanFusion/blob/main/tutorials/UrbanFusion_coordinates_only.ipynb)
+ - [`UrbanFusion_multimodal.ipynb`](https://github.com/DominikM198/UrbanFusion/blob/main/tutorials/UrbanFusion_multimodal.ipynb)
+
+ ## Citation
+ If you find our work useful or interesting, or if you use our code, please cite our paper as follows:
+ ```bibtex
+ @article{muehlematter2025urbanfusion,
+   title   = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations},
+   author  = {Dominik J. Mühlematter and Lin Che and Ye Hong and Martin Raubal and Nina Wiedemann},
+   year    = {2025},
+   journal = {arXiv preprint arXiv:2510.13774}
+ }
+ ```