---
size_categories:
- 100K<n<1M
---

# PP2-M: Place Pulse 2.0 - Multimodal

**PP2-M** (Place Pulse 2.0 - Multimodal) is a dataset based on the original Place Pulse 2.0 dataset [1], enriched with additional geospatial modalities for training **multimodal Geo-Foundation Models (GeoFM)**.

The dataset includes aligned pairs of the following modalities:

- 🌍 **Geographical coordinates** (lat, lon) from Place Pulse 2.0 [1]
- 🏙 **Street view images** from Place Pulse 2.0 [1]
- 🛰 **Remote sensing images** from Sentinel-2 [2]
- 🗺 **Cartographic basemaps** from OpenStreetMap [3]
- 📍 **Points of interest (POIs)** from OpenStreetMap [3]

---

## 📜 License

Due to its multimodality, PP2-M comes with **different licenses per modality**, as described in the folder [`LICENSES`](./LICENSES).

---

## 📑 Modalities Description

### 📌 Coordinates

- **110,988 locations**, each with associated geographic coordinates.

### 🏙 Street View Images (SVI)

- Obtained from **Google Street View**.
- Resolution: **400 × 300 pixels**.

### 🛰 Remote Sensing Images (Sentinel-2)

- Sentinel-2 **Level-2A** images.
- Acquisition period: **Jan 1 – Dec 31, 2024**.
- Filtered for minimal cloud coverage.
- Each patch includes the spectral bands `B01, B02, B03, B04, B05, B06, B07, B08, B08A, B09, B11, B12`.
- Resolution: **256 × 256 pixels**.
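As a quick orientation, the band list above can be turned into a name-to-index lookup for slicing patches, e.g. to pull out a true-colour composite. This is a minimal sketch that assumes the bands are stored in the order listed and use the usual L2A reflectance scaling (values roughly 0–10000); verify both against the actual files before relying on it.

```python
import numpy as np

# Band order as listed above (assumption: the on-disk order may differ).
BANDS = ["B01", "B02", "B03", "B04", "B05", "B06",
         "B07", "B08", "B08A", "B09", "B11", "B12"]
BAND_INDEX = {name: i for i, name in enumerate(BANDS)}

def rgb_composite(patch: np.ndarray) -> np.ndarray:
    """Extract a true-colour composite (B04=red, B03=green, B02=blue)
    from a (12, H, W) Sentinel-2 patch, rescaled to [0, 1]."""
    rgb = patch[[BAND_INDEX["B04"], BAND_INDEX["B03"], BAND_INDEX["B02"]]]
    # Assumed L2A reflectance scaling: divide by 10000, then clip.
    rgb = np.clip(rgb.astype(np.float32) / 10000.0, 0.0, 1.0)
    return rgb.transpose(1, 2, 0)  # channels-last for plotting

# Dummy patch with the dataset's spatial size, for illustration only.
patch = np.random.randint(0, 10000, size=(12, 256, 256))
img = rgb_composite(patch)  # (256, 256, 3), values in [0, 1]
```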

### 🗺 Cartographic Basemaps (OSM_basemaps)

- Tiles from the **OpenStreetMap tile server**.
- Zoom levels: **15, 16, 17**, covering ground extents of roughly **1200 m, 600 m, and 300 m** per tile.
- Downloaded: **May 2025**.
- Rendered at **256 × 256 pixels**.
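For reference, these zoom levels address tiles via the standard Web Mercator ("slippy map") scheme; the snippet below shows the usual conversion formula. The Zurich coordinates and the tile URL pattern are illustrative only, not taken from the dataset.

```python
import math

def deg2tile(lat: float, lon: float, zoom: int) -> tuple[int, int]:
    """Standard Web Mercator slippy-map formula: lat/lon -> tile (x, y)."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

# Illustrative example: tile indices for central Zurich at the three zoom levels.
tiles = {zoom: deg2tile(47.3769, 8.5417, zoom) for zoom in (15, 16, 17)}
# Each (zoom, x, y) triple corresponds to a tile URL of the form
# https://tile.openstreetmap.org/{zoom}/{x}/{y}.png
```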

### 📍 Points of Interest (OSM_pois)

- Extracted from **OpenStreetMap**.
- For each location: up to **15 nearest POIs within 200 m**.
- An adaptive search radius ensures coverage in sparse areas.
- Retained POIs with the tags `amenity, shop, leisure, tourism, healthcare, theatre, cinema, building=religious, building=transportation, public_transport=station`.
- **Excluded**: `parking, parking_space, bench, bicycle_parking, motorcycle_parking, post_box, toilets`.
- Each POI is assigned a **representative category** (priority order: `amenity → leisure → religion → public_transport → shop → tourism`).
- Special cases:
  - `healthcare` if a healthcare-related substring matches
  - `museum` if the POI name contains "museum"
- The final POIs are used to construct **textual prompts** describing each POI's name, category, and distance.
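The category rule above can be sketched as follows. This is a hypothetical re-implementation for illustration: the tag-matching details (in particular what counts as a healthcare substring) and the exact prompt template are assumptions, not the dataset's actual code.

```python
# Priority order as stated above.
PRIORITY = ["amenity", "leisure", "religion", "public_transport", "shop", "tourism"]

def representative_category(tags: dict[str, str], name: str = "") -> str:
    """Assign a representative category from OSM tags (sketch)."""
    # Special cases first; the substring rule here is an assumption.
    if any("healthcare" in k or "healthcare" in v for k, v in tags.items()):
        return "healthcare"
    if "museum" in name.lower():
        return "museum"
    for key in PRIORITY:
        if key in tags:
            return tags[key]
    return "other"

def poi_prompt(name: str, tags: dict[str, str], distance_m: float) -> str:
    """Build a textual prompt from name, category, and distance (template assumed)."""
    return f"{name} ({representative_category(tags, name)}), {distance_m:.0f} m away"

prompt = poi_prompt("Kunsthaus", {"tourism": "attraction"}, 120.0)
# -> "Kunsthaus (attraction), 120 m away"
```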

---

## 📂 Folder Structure

PP2-M/ <br>
│<br>
├── LICENSES/ → Licenses for all modalities<br>
├── Tables_statistics/ → Statistics & tables (based on Place Pulse 2.0)<br>
├── SVI/ → Street View Images<br>
├── sentinel2/ → Sentinel-2 images<br>
├── OSM_basemaps/ → OSM basemaps (zoom 15, 16, 17)<br>
├── OSM_pois/ → Raw POIs + generated text prompts<br>
└── Precomputed_features/ → Pre-extracted modality-specific features<br>
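To tie the folders together, one might assemble the per-location file paths like this. The file-naming scheme used here (`<location_id>.<ext>`, a per-zoom subfolder) is purely an assumption for illustration; inspect the actual folder contents first.

```python
from pathlib import Path

def modality_paths(root: Path, location_id: str) -> dict[str, Path]:
    """Collect one location's files across the modality folders (hypothetical naming)."""
    return {
        "svi": root / "SVI" / f"{location_id}.jpg",
        "sentinel2": root / "sentinel2" / f"{location_id}.tif",
        "basemap_z16": root / "OSM_basemaps" / "16" / f"{location_id}.png",
        "pois": root / "OSM_pois" / f"{location_id}.json",
    }

paths = modality_paths(Path("PP2-M"), "loc_000001")
```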

## 🔀 Dataset Splits

- **Training** – samples used for model training.
- **validation_out_region** – extrapolation evaluation (unseen cities).
- **validation_in_region** – interpolation evaluation (held-out samples within the training regions).

---

## 📊 Precomputed Features

In addition to the raw data, we provide **pre-extracted features** for each modality, obtained with modality-specific models.
See details in our paper and code repository: [UrbanFusion](https://github.com/DominikM198/UrbanFusion/).

---

## 📖 Citation

If you use PP2-M, please cite our work:

```bibtex
@article{muehlematter2025urbanfusion,
  title   = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations},
  author  = {Dominik J. Mühlematter and Nina Wiedemann and Lin Che and Ye Hong and Martin Raubal},
  year    = {2025},
  journal = {arXiv preprint arXiv:xxxx.xxxxx}
}
```