---
language:
- en
license: other
size_categories:
- 100K<n<1M
pretty_name: Place Pulse 2.0 Multimodal
tags:
- GeoFM
- PlacePulse
- SpatialRepresentationLearning
- Multimodal
- OpenStreetMap
- StreetView
- geospatial
task_categories:
- other
---
# PP2-M: Place Pulse 2.0 - Multimodal
Paper: [UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations](https://huggingface.co/papers/2510.13774)
Code: https://github.com/DominikM198/UrbanFusion
**PP2-M** (Place Pulse 2.0 - Multimodal) is a dataset based on the original Place Pulse 2.0 dataset [1], enriched with additional geospatial modalities for training **multimodal Geo-Foundation Models (GeoFM)**.
The dataset includes aligned pairs of the following modalities:
- 🌍 **Geographical coordinates** (lat, lon) from Place Pulse 2.0 [1]
- 🏙 **Street view images** from Place Pulse 2.0 [1]
- 🛰 **Remote sensing images** from Sentinel-2 [2]
- 🗺 **Cartographic basemaps** from OpenStreetMap [3]
- 📍 **Points of interest (POIs)** from OpenStreetMap [3]
---
## 🚀 Sample Usage
Using the pretrained UrbanFusion model for location encoding is straightforward. The example below shows how to load the model and generate representations from geographic coordinates (latitude and longitude) alone, without any additional input modalities.
```python
import torch
from huggingface_hub import hf_hub_download
from srl.multi_modal_encoder.load import get_urbanfusion
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Coordinates: batch of 32 (lat, lon) pairs
coords = torch.randn(32, 2).to(device)
# Placeholders for other modalities (SV, RS, OSM, POI)
placeholder = torch.empty(32).to(device)
inputs = [coords, placeholder, placeholder, placeholder, placeholder]
# Mask all but coordinates (indices: 0=coords, 1=SV, 2=RS, 3=OSM, 4=POI)
mask_indices = [1, 2, 3, 4]
# Load pretrained UrbanFusion model
ckpt = hf_hub_download("DominikM198/UrbanFusion", "UrbanFusion/UrbanFusion.ckpt")
model = get_urbanfusion(ckpt, device=device).eval()
# Encode inputs (output shape: [32, 768])
with torch.no_grad():
    embeddings = model(inputs, mask_indices=mask_indices, return_representations=True).cpu()
```
For a more comprehensive guide, including how to apply the model to downstream tasks and how to incorporate additional modalities (with options for downloading, preprocessing, and using contextual prompts with or without precomputed features), see the following tutorials:
- [`UrbanFusion_coordinates_only.ipynb`](https://github.com/DominikM198/UrbanFusion/blob/main/tutorials/UrbanFusion_coordinates_only.ipynb)
- [`UrbanFusion_multimodal.ipynb`](https://github.com/DominikM198/UrbanFusion/blob/main/tutorials/UrbanFusion_multimodal.ipynb)
---
## 📜 License
Because PP2-M combines several data sources, **each modality is distributed under its own license**, as described in the folder [`LICENSES`](./LICENSES).
---
## 📑 Modalities Description
### 📌 Coordinates
- **110,988 locations**, each with associated geographic coordinates.
### 🏙 Street View Images (SVI)
- Obtained from **Google Street View**.
- Resolution: **400 × 300 pixels**.
### 🛰 Remote Sensing Images (Sentinel-2)
- Sentinel-2 **Level-2A** images.
- Acquisition period: **Jan 1 – Dec 31, 2024**.
- Filtered for minimal cloud coverage.
- Each patch includes spectral bands:
`B01, B02, B03, B04, B05, B06, B07, B08, B08A, B09, B11, B12`
- Resolution: **256 × 256 pixels** (a loading sketch follows this list).
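
As a minimal sketch only, assuming each patch is stored as a single multi-band GeoTIFF with the bands in the order listed above (both are assumptions; verify against the actual files in `sentinel2/`), a patch could be read and turned into a quick-look RGB composite like this:

```python
# Sketch under assumptions: one multi-band GeoTIFF per patch, bands in the listed order.
import numpy as np
import rasterio

BANDS = ["B01", "B02", "B03", "B04", "B05", "B06",
         "B07", "B08", "B08A", "B09", "B11", "B12"]

with rasterio.open("sentinel2/example_patch.tif") as src:  # hypothetical file name
    patch = src.read()  # numpy array of shape (num_bands, 256, 256)

# RGB composite from the red, green, and blue bands (B04, B03, B02)
rgb = np.stack([patch[BANDS.index(b)] for b in ("B04", "B03", "B02")], axis=-1)
rgb = np.clip(rgb.astype(np.float32) / 3000.0, 0.0, 1.0)  # rough reflectance scaling
```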
### 🗺 Cartographic Basemaps (OSM_basemaps)
- Tiles from **OpenStreetMap tile server**.
- Zoom levels: **15, 16, 17**, corresponding to a ground coverage of roughly **1200 m, 600 m, and 300 m** per tile (see the sketch below).
- Downloaded: **May 2025**.
- Rendered at **256 × 256 pixels**.
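
The extents above are approximations at the equator; they follow from the Web Mercator tiling scheme, in which one tile spans the Earth's circumference divided by 2^zoom, scaled by the cosine of the latitude. A quick check:

```python
import math

EARTH_CIRCUMFERENCE_M = 40_075_016.686  # WGS 84 equatorial circumference in meters

def tile_coverage_m(zoom: int, lat_deg: float = 0.0) -> float:
    """Approximate ground width of one Web Mercator tile at a given latitude."""
    return EARTH_CIRCUMFERENCE_M * math.cos(math.radians(lat_deg)) / (2 ** zoom)

for zoom in (15, 16, 17):
    print(f"zoom {zoom}: ~{tile_coverage_m(zoom):.0f} m per tile at the equator")
# zoom 15 ≈ 1223 m, zoom 16 ≈ 611 m, zoom 17 ≈ 306 m (narrower at higher latitudes)
```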
### 📍 Points of Interest (OSM_pois)
- Extracted from **OpenStreetMap**.
- For each location: up to **15 nearest POIs within 200 m**.
- Adaptive search radius ensures coverage in sparse areas.
- Retained POIs with tags:
`amenity, shop, leisure, tourism, healthcare, theatre, cinema, building=religious, building=transportation, public_transport=station`
- **Excluded**: `parking, parking_space, bench, bicycle_parking, motorcycle_parking, post_box, toilets`
- Each POI is assigned a **representative category** (priority order: `amenity → leisure → religion → public_transport → shop → tourism`).
- Special cases:
- `healthcare` if substring matches
- `museum` if name contains "museum"
- Final POIs are used to construct **textual prompts** describing each POI’s name, category, and distance (an illustrative sketch follows this list).
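
The generated prompts themselves are included in `OSM_pois/`; purely as an illustration (the class, field names, and wording below are hypothetical, not the dataset's actual format), such a prompt could be assembled like this:

```python
# Hypothetical sketch: field names and prompt wording are illustrative only.
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    category: str      # representative category chosen by the priority rules above
    distance_m: float  # distance from the anchor location in meters

def build_poi_prompt(pois: list[POI]) -> str:
    """Assemble a textual prompt from nearby POIs (illustrative wording only)."""
    parts = [f"{p.name} ({p.category}, {p.distance_m:.0f} m away)" for p in pois]
    return "Nearby points of interest: " + "; ".join(parts) + "."

print(build_poi_prompt([POI("Central Cafe", "cafe", 45.0),
                        POI("City Museum", "museum", 120.0)]))
```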
---
## 📂 Folder Structure
```
PP2-M/
├── LICENSES/              → Licenses for all modalities
├── Tables_statistics/     → Statistics & tables (based on Place Pulse 2.0)
├── SVI/                   → Street View Images
├── sentinel2/             → Sentinel-2 images
├── OSM_basemaps/          → OSM basemaps (zoom 15, 16, 17)
├── OSM_pois/              → Raw POIs + generated text prompts
└── Precomputed_features/  → Pre-extracted modality-specific features
```
---
## 🔀 Dataset Splits
- **training** – samples used for model training.
- **validation_in_region** – validation samples from regions covered by the training set (interpolation evaluation).
- **validation_out_region** – validation samples from unseen cities (extrapolation evaluation).
---
## 📊 Precomputed Features
In addition to the raw data, we provide **pre-extracted features** for each modality, computed with modality-specific models.
See details in our paper: [UrbanFusion](https://huggingface.co/papers/2510.13774).
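
A minimal sketch for downloading only this folder with `huggingface_hub` (the repository ID below is a placeholder; substitute the actual ID of this dataset):

```python
from huggingface_hub import snapshot_download

# Fetch only the precomputed features; replace the placeholder repo_id with the
# actual Hugging Face dataset ID hosting PP2-M.
local_dir = snapshot_download(
    repo_id="<owner>/PP2-M",  # placeholder
    repo_type="dataset",
    allow_patterns=["Precomputed_features/*"],
)
print(local_dir)
```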
---
## 📖 Citation
If you use PP2-M, please cite our work:
```bibtex
@article{muehlematter2025urbanfusion,
title = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations},
author = {Dominik J. Mühlematter and Lin Che and Ye Hong and Martin Raubal and Nina Wiedemann},
year = {2025},
journal = {arXiv preprint arXiv:2510.13774},
url = {https://huggingface.co/papers/2510.13774}
}
```
---
## 📊 References
[1] Dubey, A., Naik, N., Parikh, D., Raskar, R., and Hidalgo, C. A. (2016). Deep learning the city: Quantifying urban perception at a global scale. In ECCV, pp. 196–212.<br>
[2] Drusch, M., Del Bello, U., Carlier, S., Colin, O., Fernandez, V., Gascon, F., ... Bargellini, P. (2012). Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sensing of Environment, 120:25–36.<br>
[3] OpenStreetMap contributors (2017). Planet dump retrieved from https://planet.osm.org<br>