---
base_model:
  - facebook/vit-mae-base
datasets:
  - DominikM198/PP2-M
license: cc-by-4.0
tags:
  - OSM
  - OpenStreetMap
  - RepresentationLearning
  - Basemaps
  - Cartography
pipeline_tag: image-feature-extraction
library_name: pytorch
---

# UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations

This model is presented in the paper UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations.

## Abstract

Forecasting urban phenomena such as housing prices and public health indicators requires the effective integration of various geospatial data. Current methods primarily utilize task-specific models, while recent foundation models for spatial representations often support only limited modalities and lack multimodal fusion capabilities. To overcome these challenges, we present UrbanFusion, a Geo-Foundation Model (GeoFM) that features Stochastic Multimodal Fusion (SMF). The framework employs modality-specific encoders to process different types of inputs, including street view imagery, remote sensing data, cartographic maps, and points of interest (POIs) data. These multimodal inputs are integrated via a Transformer-based fusion module that learns unified representations. An extensive evaluation across 41 tasks in 56 cities worldwide demonstrates UrbanFusion’s strong generalization and predictive performance compared to state-of-the-art GeoAI models. Specifically, it 1) outperforms prior foundation models on location-encoding, 2) allows multimodal input during inference, and 3) generalizes well to regions unseen during training. UrbanFusion can flexibly utilize any subset of available modalities for a given location during both pretraining and inference, enabling broad applicability across diverse data availability scenarios.
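To illustrate the idea behind Stochastic Multimodal Fusion, the sketch below shows how a random subset of modality embeddings can be fused by a Transformer encoder during training while any available subset is accepted at inference. This is a hypothetical, simplified illustration; the class name, dimensions, and pooling are illustrative assumptions, not the paper's actual implementation.

```python
import random
import torch
import torch.nn as nn

class StochasticFusion(nn.Module):
    """Illustrative sketch of stochastic multimodal fusion (not the official code)."""

    def __init__(self, dim=768, n_heads=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, n_layers)

    def forward(self, modality_tokens):
        # modality_tokens: list of [batch, dim] embeddings, one per modality
        if self.training:
            # stochastic part: keep a random non-empty subset of modalities
            k = random.randint(1, len(modality_tokens))
            modality_tokens = random.sample(modality_tokens, k)
        tokens = torch.stack(modality_tokens, dim=1)  # [batch, n_mod, dim]
        fused = self.fusion(tokens)                   # [batch, n_mod, dim]
        return fused.mean(dim=1)                      # pooled joint representation

# Example: fuse three (fake) modality embeddings at inference time
coords_emb, sv_emb, rs_emb = (torch.randn(4, 768) for _ in range(3))
model = StochasticFusion().eval()
with torch.no_grad():
    rep = model([coords_emb, sv_emb, rs_emb])
print(rep.shape)  # torch.Size([4, 768])
```

Because the fusion module only sees a variable-length sequence of modality tokens, the same network handles any subset of inputs, which is what enables inference from coordinates alone.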

## Code

The official implementation and training scripts are available on the UrbanFusion GitHub repository.

## Minimal Usage Example

Using pretrained models for location encoding is straightforward. The example below demonstrates how to load the model and generate representations based solely on geographic coordinates (latitude and longitude), without requiring any additional input modalities.

```python
import torch
from huggingface_hub import hf_hub_download
from srl.multi_modal_encoder.load import get_urbanfusion

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Coordinates: batch of 32 (lat, lon) pairs
coords = torch.randn(32, 2).to(device)

# Placeholders for other modalities (SV, RS, OSM, POI)
placeholder = torch.empty(32).to(device)
inputs = [coords, placeholder, placeholder, placeholder, placeholder]

# Mask all but coordinates (indices: 0=coords, 1=SV, 2=RS, 3=OSM, 4=POI)
mask_indices = [1, 2, 3, 4]

# Load pretrained UrbanFusion model
ckpt = hf_hub_download("DominikM198/UrbanFusion", "UrbanFusion/UrbanFusion.ckpt")
model = get_urbanfusion(ckpt, device=device).eval()

# Encode inputs (output shape: [32, 768])
with torch.no_grad():
    embeddings = model(inputs, mask_indices=mask_indices, return_representations=True).cpu()
```

For a more comprehensive guide, including instructions on applying the model to downstream tasks and incorporating additional modalities (with options for downloading, preprocessing, and using contextual prompts with or without precomputed features), see the tutorials in the GitHub repository.
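The embeddings produced above can serve as frozen features for downstream prediction. As a minimal sketch (with random stand-ins for the embeddings and labels, since this is illustrative rather than part of the official tutorials), a closed-form ridge-regression linear probe looks like:

```python
import torch

torch.manual_seed(0)
embeddings = torch.randn(32, 768)  # stand-in for UrbanFusion output
targets = torch.randn(32)          # stand-in for a task label, e.g. housing price

# Closed-form ridge regression: w = (X^T X + lambda * I)^-1 X^T y
lam = 1.0
X, y = embeddings, targets
w = torch.linalg.solve(X.T @ X + lam * torch.eye(768), X.T @ y)
preds = X @ w
print(preds.shape)  # torch.Size([32])
```

A linear probe of this kind is a common way to measure representation quality without fine-tuning the encoder.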

## Citation

If you find our work useful or use our code, please cite our paper as follows:

```bibtex
@article{muehlematter2025urbanfusion,
  title   = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations},
  author  = {Dominik J. Mühlematter and Lin Che and Ye Hong and Martin Raubal and Nina Wiedemann},
  year    = {2025},
  journal = {arXiv preprint arXiv:xxxx.xxxxx}
}
```