---
license: other
language:
- en
tags:
- GeoFM
- PlacePulse
- SpatialRepresentationLearning
- OpenStreetMap
- StreetView
- Multimodal
- Geospatial
pretty_name: Place Pulse 2.0 Multimodal
size_categories:
- 100K<n<1M
---
# PP2-M: Place Pulse 2.0 - Multimodal
**PP2-M** (Place Pulse 2.0 - Multimodal) is a dataset based on the original Place Pulse 2.0 dataset [1], enriched with additional geospatial modalities for training **multimodal Geo-Foundation Models (GeoFM)**.
The dataset includes aligned pairs of the following modalities:
- 🌍 **Geographical coordinates** (lat, lon) from Place Pulse 2.0 [1]
- 🏙 **Street view images** from Place Pulse 2.0 [1]
- 🛰 **Remote sensing images** from Sentinel-2 [2]
- 🗺 **Cartographic basemaps** from OpenStreetMap [3]
- 📍 **Points of interest (POIs)** from OpenStreetMap [3]
---
## 📜 License
Due to its multimodality, PP2-M comes with **different licenses per modality**, as described in the folder [`LICENSES`](./LICENSES).
---
## 📑 Modalities Description
### 📌 Coordinates
- **110,988 locations**, each with associated geographic coordinates.
### 🏙 Street View Images (SVI)
- Obtained from **Google Street View**.
- Resolution: **400 × 300 pixels**.
### 🛰 Remote Sensing Images (Sentinel-2)
- Sentinel-2 **Level-2A** images.
- Acquisition period: **Jan 1 – Dec 31, 2024**.
- Filtered for minimal cloud coverage.
- Each patch includes spectral bands:
`B01, B02, B03, B04, B05, B06, B07, B08, B8A, B09, B11, B12`
- Resolution: **256 × 256 pixels**.
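A common first step with these patches is computing a spectral index. The sketch below derives NDVI from a 12-band patch, assuming the bands are stacked in the order listed above; the on-disk layout is an assumption, so adjust the indices to match the actual files.

```python
import numpy as np

# Band order as listed above (Sentinel-2 L2A, 12 bands).
# The stacking order is an assumption -- verify against the actual files.
BANDS = ["B01", "B02", "B03", "B04", "B05", "B06",
         "B07", "B08", "B8A", "B09", "B11", "B12"]

def ndvi(patch: np.ndarray) -> np.ndarray:
    """Compute NDVI from a (12, 256, 256) Sentinel-2 patch."""
    red = patch[BANDS.index("B04")].astype(np.float32)
    nir = patch[BANDS.index("B08")].astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)  # epsilon avoids division by zero

# Example with a synthetic patch of L2A reflectance values
patch = np.random.randint(0, 10000, size=(12, 256, 256)).astype(np.float32)
print(ndvi(patch).shape)  # (256, 256)
```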
### 🗺 Cartographic Basemaps (OSM_basemaps)
- Tiles from **OpenStreetMap tile server**.
- Zoom levels: **15, 16, 17** → ground coverage of approximately **1200 m, 600 m, 300 m** per tile.
- Downloaded: **May 2025**.
- Rendered at **256 × 256 pixels**.
### 📍 Points of Interest (OSM_pois)
- Extracted from **OpenStreetMap**.
- For each location: up to **15 nearest POIs within 200 m**.
- Adaptive search radius ensures coverage in sparse areas.
- Retained POIs with tags:
`amenity, shop, leisure, tourism, healthcare, theatre, cinema, building=religious, building=transportation, public_transport=station`
- **Excluded**: `parking, parking_space, bench, bicycle_parking, motorcycle_parking, post_box, toilets`
- Each POI is assigned a **representative category** (priority order: `amenity → leisure → religion → public_transport → shop → tourism`).
- Special cases:
  - `healthcare` if the tag value matches a healthcare-related substring
  - `museum` if the POI name contains "museum"
- Final POIs are used to construct **textual prompts** describing each POI’s name, category, and distance.
---
## 📂 Folder Structure
```
PP2-M/
├── LICENSES/              → Licenses for all modalities
├── Tables_statistics/     → Statistics & tables (based on Place Pulse 2.0)
├── SVI/                   → Street View Images
├── sentinel2/             → Sentinel-2 images
├── OSM_basemaps/          → OSM basemaps (zoom 15, 16, 17)
├── OSM_pois/              → Raw POIs + generated text prompts
└── Precomputed_features/  → Pre-extracted modality-specific features
```
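Given this layout, the per-modality files for one location can be gathered like this. File names and extensions are assumptions for illustration; check the actual folder contents before use.

```python
from pathlib import Path

def sample_paths(root: str, location_id: str) -> dict:
    """Collect the per-modality file paths for one location.

    Naming scheme and extensions are hypothetical -- verify against
    the real PP2-M folders.
    """
    base = Path(root)
    return {
        "svi": base / "SVI" / f"{location_id}.jpg",
        "sentinel2": base / "sentinel2" / f"{location_id}.tif",
        "basemaps": [base / "OSM_basemaps" / f"zoom_{z}" / f"{location_id}.png"
                     for z in (15, 16, 17)],
        "pois": base / "OSM_pois" / f"{location_id}.json",
    }

paths = sample_paths("PP2-M", "example_id")
print(paths["svi"])
```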
## 🔀 Dataset Splits
- **training** – samples used for model training.
- **validation_in_region** – evaluation on held-out locations from regions seen during training (interpolation).
- **validation_out_region** – evaluation on unseen cities (extrapolation).
---
## 📊 Precomputed Features
In addition to raw data, we provide **pre-extracted features** from each modality using modality-specific models.
See the [UrbanFusion](https://github.com/DominikM198/UrbanFusion/) repository and our paper for details.
---
## 📖 Citation
If you use PP2-M, please cite our work:
```bibtex
@article{muehlematter2025urbanfusion,
title = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations},
author = {Dominik J. Mühlematter and Lin Che and Ye Hong and Martin Raubal and Nina Wiedemann},
year = {2025},
journal = {arXiv preprint arXiv:2510.13774}
}
```
---
## 📊 References
[1] Dubey, A., Naik, N., Parikh, D., Raskar, R., and Hidalgo, C. A. (2016). Deep learning the city: Quantifying urban perception at a global scale. In ECCV, pp. 196–212.

[2] Drusch, M., Del Bello, U., Carlier, S., Colin, O., Fernandez, V., Gascon, F., ... Bargellini, P. (2012). Sentinel-2: ESA's optical high-resolution mission for GMES operational services. Remote Sensing of Environment, 120:25–36.

[3] OpenStreetMap contributors (2017). Planet dump retrieved from https://planet.osm.org