Improve dataset card: Add paper/code links, sample usage, and update metadata/citation
This PR enhances the dataset card for `PP2-M: Place Pulse 2.0 - Multimodal` by:
- **Updating Metadata:**
  - Adding `task_categories: ['other']` for better classification.
  - Correcting the typo `Mutlimodal` to `Multimodal` in the existing `tags`.
  - Adding `geospatial` to the tags for improved discoverability.
- **Adding Key Links:**
  - Providing a direct link to the paper ([UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations](https://huggingface.co/papers/2510.13774)) at the top.
  - Adding a direct link to the GitHub repository ([https://github.com/DominikM198/UrbanFusion](https://github.com/DominikM198/UrbanFusion)).
- **Including Sample Usage:**
  - Adding a "Sample Usage" section with a Python code snippet from the GitHub README, demonstrating how to load the model and generate representations.
- **Improving Cross-references and Citation:**
  - Updating the paper link in the "Precomputed Features" section to point to the Hugging Face paper page.
  - Updating the BibTeX citation to include the correct arXiv ID (`2510.13774`) and a URL to the Hugging Face paper page.
These improvements make the dataset card more informative and user-friendly.

---
language:
- en
license: other
size_categories:
- 100K<n<1M
pretty_name: Place Pulse 2.0 Multimodal
tags:
- GeoFM
- PlacePulse
- SpatialRepresentationLearning
- Multimodal
- OpenStreetMap
- StreetView
- geospatial
task_categories:
- other
---

# PP2-M: Place Pulse 2.0 - Multimodal

Paper: [UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations](https://huggingface.co/papers/2510.13774)
Code: https://github.com/DominikM198/UrbanFusion

**PP2-M** (Place Pulse 2.0 - Multimodal) is a dataset based on the original Place Pulse 2.0 dataset [1], enriched with additional geospatial modalities for training **multimodal Geo-Foundation Models (GeoFM)**.

The dataset includes aligned pairs of the following modalities:

- 🌍 **Geographical coordinates** (lat, lon) from Place Pulse 2.0 [1]
- 🏙 **Street view images** from Place Pulse 2.0 [1]
- 🛰 **Remote sensing images** from Sentinel-2 [2]
- 🗺 **Cartographic basemaps** from OpenStreetMap [3]
- 📍 **Points of interest (POIs)** from OpenStreetMap [3]

---

## 🚀 Sample Usage
Using pretrained models for location encoding is straightforward. The example below demonstrates how to load the model and generate representations based solely on geographic coordinates (latitude and longitude), without requiring any additional input modalities.

```python
import torch
from huggingface_hub import hf_hub_download
from srl.multi_modal_encoder.load import get_urbanfusion

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Coordinates: batch of 32 (lat, lon) pairs
coords = torch.randn(32, 2).to(device)

# Placeholders for other modalities (SV, RS, OSM, POI)
placeholder = torch.empty(32).to(device)
inputs = [coords, placeholder, placeholder, placeholder, placeholder]

# Mask all but coordinates (indices: 0=coords, 1=SV, 2=RS, 3=OSM, 4=POI)
mask_indices = [1, 2, 3, 4]

# Load pretrained UrbanFusion model
ckpt = hf_hub_download("DominikM198/UrbanFusion", "UrbanFusion/UrbanFusion.ckpt")
model = get_urbanfusion(ckpt, device=device).eval()

# Encode inputs (output shape: [32, 768])
with torch.no_grad():
    embeddings = model(inputs, mask_indices=mask_indices, return_representations=True).cpu()
```

For a more comprehensive guide—including instructions on applying the model to downstream tasks and incorporating additional modalities (with options for downloading, preprocessing, and using contextual prompts with or without precomputed features)—see the following tutorials:

- [`UrbanFusion_coordinates_only.ipynb`](https://github.com/DominikM198/UrbanFusion/blob/main/tutorials/UrbanFusion_coordinates_only.ipynb)
- [`UrbanFusion_multimodal.ipynb`](https://github.com/DominikM198/UrbanFusion/blob/main/tutorials/UrbanFusion_multimodal.ipynb)
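
As a further illustration (this is not from the UrbanFusion repository), the frozen embeddings can serve directly as features for a downstream model; the sketch below fits a linear probe on the `embeddings` tensor from the snippet above, with random placeholder targets standing in for real labels such as perception scores:

```python
# Illustration only, not part of the UrbanFusion codebase:
# fit a linear probe on the frozen embeddings computed above.
import numpy as np
from sklearn.linear_model import Ridge

X = embeddings.numpy()   # [32, 768] representations from the previous snippet
y = np.random.rand(32)   # placeholder targets (e.g., perceived-safety scores)

probe = Ridge(alpha=1.0).fit(X, y)
print("Toy R^2:", probe.score(X, y))
```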

---

## 📑 Modalities Description

### 📌 Coordinates
- **110,988 locations**, each with associated geographic coordinates.

### 🏙 Street View Images (SVI)
- Obtained from **Google Street View**.
- Resolution: **400 × 300 pixels**.

### 🛰 Remote Sensing Images (Sentinel-2)
- Sentinel-2 **Level-2A** images.
- Acquisition period: **Jan 1 – Dec 31, 2024**.
- Filtered for minimal cloud coverage.
- Each patch includes spectral bands:
  `B01, B02, B03, B04, B05, B06, B07, B08, B08A, B09, B11, B12`
- Resolution: **256 × 256 pixels**.
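
To make the layout concrete, a patch as described above corresponds to a 12-band raster; the array convention below is an assumption for illustration, not the repository's actual storage format:

```python
import numpy as np

# The 12 spectral bands listed above, in order.
BANDS = ["B01", "B02", "B03", "B04", "B05", "B06",
         "B07", "B08", "B08A", "B09", "B11", "B12"]

# Assumed (bands, height, width) layout for one 256 x 256 patch.
patch = np.zeros((len(BANDS), 256, 256), dtype=np.uint16)
print(patch.shape)  # (12, 256, 256)
```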

### 🗺 Cartographic Basemaps (OSM_basemaps)
- Tiles from **OpenStreetMap tile server**.
- Zoom levels: **15, 16, 17** → resolutions of **1200 m, 600 m, 300 m** (approximate ground coverage of one tile; see the check below).
- Downloaded: **May 2025**.
- Rendered at **256 × 256 pixels**.
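
These coverages follow from Web Mercator tile geometry: one tile spans 40,075,017 m / 2^z at the equator (shrinking by a factor of cos(latitude) elsewhere), which a quick calculation confirms:

```python
# Approximate ground coverage of one OSM tile per zoom level (at the equator).
EARTH_CIRCUMFERENCE_M = 40_075_017  # Web Mercator equatorial circumference

for zoom in (15, 16, 17):
    tile_width_m = EARTH_CIRCUMFERENCE_M / 2**zoom
    print(f"zoom {zoom}: ~{tile_width_m:.0f} m per tile")
# zoom 15: ~1223 m, zoom 16: ~611 m, zoom 17: ~306 m -> roughly the 1200/600/300 m above
```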

### 📍 Points of Interest (OSM_pois)
- Extracted from **OpenStreetMap**.
- For each location: up to **15 nearest POIs within 200 m**.
- Adaptive search radius ensures coverage in sparse areas.
- Retained POIs with tags:
  `amenity, shop, leisure, tourism, healthcare, theatre, cinema, building=religious, building=transportation, public_transport=station`
- **Excluded**: `parking, parking_space, bench, bicycle_parking, motorcycle_parking, post_box, toilets`
- Each POI is assigned a **representative category** (priority order: `amenity → leisure → religion → public_transport → shop → tourism`).
- Special cases:
  - `healthcare` if substring matches
  - `museum` if name contains "museum"
- Final POIs are used to construct **textual prompts** describing each POI’s name, category, and distance (see the sketch after this list).
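
A minimal sketch of the category-assignment and prompt-construction rules above (illustrative only; the function names and the prompt template are assumptions, not the authors' preprocessing code):

```python
# Illustrative sketch of the rules above, not the authors' preprocessing code.
PRIORITY = ["amenity", "leisure", "religion", "public_transport", "shop", "tourism"]

def representative_category(tags: dict[str, str], name: str = "") -> str | None:
    # Special cases take precedence over the priority order.
    if any("healthcare" in key for key in tags):
        return "healthcare"
    if "museum" in name.lower():
        return "museum"
    # Otherwise the first tag key found in the priority order wins.
    for key in PRIORITY:
        if key in tags:
            return tags[key]
    return None

def poi_prompt(name: str, tags: dict[str, str], distance_m: float) -> str:
    # Hypothetical prompt template describing name, category, and distance.
    category = representative_category(tags, name) or "place"
    return f"{name} ({category}), {distance_m:.0f} m away"

print(poi_prompt("City Bakery", {"shop": "bakery"}, 42.0))
# -> "City Bakery (bakery), 42 m away"
```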

---

## 🔀 Dataset Splits
- **training** – samples used for training.
- **validation_in_region** – interpolation evaluation (held-out locations in regions seen during training).
- **validation_out_region** – extrapolation evaluation (unseen cities).
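
To browse the splits locally, the repository can be fetched with `huggingface_hub`; the repo id below is an assumption inferred from this card's namespace, so adjust it to the actual dataset repository:

```python
from huggingface_hub import snapshot_download

# Download the full dataset repo; repo_id is an assumption, replace with the real one.
local_dir = snapshot_download(repo_id="DominikM198/PP2-M", repo_type="dataset")
print("Dataset downloaded to:", local_dir)
```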

---

## 📊 Precomputed Features
In addition to raw data, we provide **pre-extracted features** from each modality using modality-specific models.
See details in our paper: [UrbanFusion](https://huggingface.co/papers/2510.13774).

---

If you use PP2-M, please cite our work:

```bibtex
@article{
  title = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations},
  author = {Dominik J. Mühlematter and Lin Che and Ye Hong and Martin Raubal and Nina Wiedemann},
  year = {2025},
  journal = {arXiv preprint arXiv:2510.13774},
  url = {https://huggingface.co/papers/2510.13774}
}
```

---

[1] Dubey, A., Naik, N., Parikh, D., Raskar, R., and Hidalgo, C. A. (2016). Deep learning the city: Quantifying urban perception at a global scale. In ECCV, pp. 196–212.<br>
[2] Drusch, M., Del Bello, U., Carlier, S., Colin, O., Fernandez, V., Gascon, F., ... Bargellini, P. (2012). Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sensing of Environment, 120:25–36.<br>
[3] OpenStreetMap contributors (2017). Planet dump retrieved from https://planet.osm.org<br>