|
|
--- |
|
|
license: other |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- GeoFM |
|
|
- PlacePulse |
|
|
- SpatialRepresentationLearning |
|
|
- OpenStreetMap |
|
|
- StreetView |
|
|
- Multimodal |
|
|
- Geospatial |
|
|
pretty_name: Place Pulse 2.0 Multimodal |
|
|
size_categories: |
|
|
- 100K<n<1M |
|
|
--- |
|
|
|
|
|
# PP2-M: Place Pulse 2.0 - Multimodal |
|
|
|
|
|
**PP2-M** (Place Pulse 2.0 - Multimodal) is a dataset based on the original Place Pulse 2.0 dataset [1], enriched with additional geospatial modalities for training **multimodal Geo-Foundation Models (GeoFM)**. |
|
|
|
|
|
The dataset includes aligned pairs of the following modalities: |
|
|
|
|
|
- 🌍 **Geographical coordinates** (lat, lon) from Place Pulse 2.0 [1] |
|
|
- 🏙 **Street view images** from Place Pulse 2.0 [1] |
|
|
- 🛰 **Remote sensing images** from Sentinel-2 [2] |
|
|
- 🗺 **Cartographic basemaps** from OpenStreetMap [3] |
|
|
- 📍 **Points of interest (POIs)** from OpenStreetMap [3] |
|
|
|
|
|
--- |
|
|
|
|
|
## 📜 License |
|
|
Due to its multimodality, PP2-M comes with **different licenses per modality**, as described in the folder [`LICENSES`](./LICENSES). |
|
|
|
|
|
--- |
|
|
|
|
|
## 📑 Modalities Description |
|
|
|
|
|
### 📌 Coordinates |
|
|
- **110,988 locations**, each with associated geographic coordinates. |
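
Coordinates are given as WGS84 latitude/longitude. When relating them to nearby features (e.g. the POIs described below), a common utility is great-circle distance; here is a minimal sketch using the standard haversine formula (not dataset-specific code):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6_371_008.8):
    """Great-circle distance in metres between two WGS84 points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_m * math.asin(math.sqrt(a))
```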
|
|
|
|
|
### 🏙 Street View Images (SVI) |
|
|
- Obtained from **Google Street View**.
|
|
- Resolution: **400 × 300 pixels**. |
|
|
|
|
|
### 🛰 Remote Sensing Images (Sentinel-2) |
|
|
- Sentinel-2 **Level-2A** images. |
|
|
- Acquisition period: **Jan 1 – Dec 31, 2024**. |
|
|
- Filtered for minimal cloud coverage. |
|
|
- Each patch includes spectral bands: |
|
|
`B01, B02, B03, B04, B05, B06, B07, B08, B08A, B09, B11, B12` |
|
|
- Resolution: **256 × 256 pixels**. |
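
The bands above have different native ground sampling distances, so multi-band patches are typically resampled to a common grid. As a reference, the standard ESA resolutions per band are sketched below (these values come from the Sentinel-2 product specification, not from this dataset card; note the band listed as `B08A` is usually written `B8A` in ESA products):

```python
# Native ground sampling distance (metres) of each Sentinel-2 L2A band
# included in PP2-M, per the ESA product specification.
BAND_GSD = {
    "B01": 60, "B02": 10, "B03": 10, "B04": 10,
    "B05": 20, "B06": 20, "B07": 20, "B08": 10,
    "B08A": 20, "B09": 60, "B11": 20, "B12": 20,
}
```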
|
|
|
|
|
### 🗺 Cartographic Basemaps (OSM_basemaps) |
|
|
- Tiles from **OpenStreetMap tile server**. |
|
|
- Zoom levels: **15, 16, 17**, covering roughly **1200 m, 600 m, and 300 m** of ground extent per tile.
|
|
- Downloaded: **May 2025**. |
|
|
- Rendered at **256 × 256 pixels**. |
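
Mapping a location to its basemap tile follows the standard slippy-map tile scheme used by the OSM tile server; a minimal sketch (standard formula, not the released download code):

```python
import math

def deg2num(lat_deg, lon_deg, zoom):
    """WGS84 coordinate -> (xtile, ytile) in the OSM slippy-map scheme."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile
```

At zoom 15 a tile spans about 1222 m at the equator (Earth circumference / 2^15), which is the ~1200 m extent quoted above; each higher zoom level halves the extent.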
|
|
|
|
|
### 📍 Points of Interest (OSM_pois) |
|
|
- Extracted from **OpenStreetMap**. |
|
|
- For each location: up to **15 nearest POIs within 200 m**. |
|
|
- Adaptive search radius ensures coverage in sparse areas. |
|
|
- Retained POIs with tags: |
|
|
`amenity, shop, leisure, tourism, healthcare, theatre, cinema, building=religious, building=transportation, public_transport=station` |
|
|
- **Excluded**: `parking, parking_space, bench, bicycle_parking, motorcycle_parking, post_box, toilets` |
|
|
- Each POI is assigned a **representative category** (priority order: `amenity → leisure → religion → public_transport → shop → tourism`). |
|
|
- Special cases:
  - `healthcare` if a tag contains the substring `healthcare`
  - `museum` if the POI name contains "museum"

- Final POIs are used to construct **textual prompts** describing each POI’s name, category, and distance.
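
The category-priority and prompt-construction rules above can be sketched as follows. This is an illustrative reimplementation of the stated rules, not the released pipeline; the helper names and the exact prompt wording are assumptions:

```python
# Priority order for picking a representative category, as stated in the card.
PRIORITY = ["amenity", "leisure", "religion", "public_transport", "shop", "tourism"]

def representative_category(tags):
    """Assign one representative category to a POI given its OSM tags."""
    # Special cases take precedence over the priority order.
    if any("healthcare" in key for key in tags):
        return "healthcare"
    if "museum" in tags.get("name", "").lower():
        return "museum"
    for key in PRIORITY:
        if key in tags:
            return tags[key]
    return "other"

def poi_prompt(name, category, distance_m):
    """Build a textual prompt from a POI's name, category, and distance."""
    return f"{name} ({category}), {distance_m:.0f} m away"
```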
|
|
|
|
|
--- |
|
|
|
|
|
## 📂 Folder Structure |
|
|
|
|
|
```
PP2-M/
│
├── LICENSES/              → Licenses for all modalities
├── Tables_statistics/     → Statistics & tables (based on Place Pulse 2.0)
├── SVI/                   → Street View Images
├── sentinel2/             → Sentinel-2 images
├── OSM_basemaps/          → OSM basemaps (zoom 15, 16, 17)
├── OSM_pois/              → Raw POIs + generated text prompts
└── Precomputed_features/  → Pre-extracted modality-specific features
```
|
|
|
|
|
|
|
|
## 🔀 Dataset Splits |
|
|
- **training** – samples used for model training.

- **validation_in_region** – evaluation on held-out samples from regions seen during training (interpolation).

- **validation_out_region** – evaluation on unseen cities (extrapolation).
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
## 📊 Precomputed Features |
|
|
In addition to raw data, we provide **pre-extracted features** from each modality using modality-specific models. |
|
|
See our paper and code repository for details: [UrbanFusion](https://github.com/DominikM198/UrbanFusion/).
|
|
|
|
|
--- |
|
|
|
|
|
## 📖 Citation |
|
|
If you use PP2-M, please cite our work: |
|
|
|
|
|
```bibtex |
|
|
@article{muehlematter2025urbanfusion, |
|
|
title = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations}, |
|
|
author = {Dominik J. Mühlematter and Lin Che and Ye Hong and Martin Raubal and Nina Wiedemann}, |
|
|
year = {2025}, |
|
|
journal = {arXiv preprint arXiv:2510.13774} |
|
|
} |
|
|
``` |
|
|
--- |
|
|
|
|
|
## 📊 References |
|
|
|
|
|
[1] Dubey, A., Naik, N., Parikh, D., Raskar, R., and Hidalgo, C. A. (2016). Deep learning the city: Quantifying urban perception at a global scale. In ECCV, pp. 196–212.<br> |
|
|
[2] Drusch, M., Del Bello, U., Carlier, S., Colin, O., Fernandez, V., Gascon, F., ... Bargellini, P. (2012). Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sensing of Environment, 120:25–36.<br> |
|
|
[3] OpenStreetMap contributors (2017). Planet dump retrieved from https://planet.osm.org<br> |