Multimodal Urban Livability Evaluation Dataset

πŸ“ Dataset Description

This multimodal dataset is designed for multi-task urban livability evaluation, combining remote sensing imagery (RS), digital surface models (DSM), night light remote sensing (NLRS) imagery, and point-of-interest (POI) text data to predict multiple aspects of urban livability.

It covers 13 urban areas in the Netherlands and provides ground-truth labels for six dimensions of livability.

πŸ“Š Dataset Structure

The dataset is organized into three standard splits:

  • train: 29,308 samples
  • validation: 9,253 samples
  • test: 13,440 samples

Each sample contains:

  • id (int): Unique identifier.
  • text (string): POI information from OpenStreetMap.
  • rs_image (Image): 250x250 remote sensing RGB image.
  • dsm_image (Image): 250x250 digital surface model patch, converted to RGB (see preprocessing notes below).
  • giu_image (Image): 250x250 nighttime light remote sensing (NLRS) image.
  • Labels (float), one score per livability dimension (see the stacking sketch after this list):
    • lbm: Overall livability score (Livability).
    • fys: Physical environment quality (PHY).
    • onv: Noise and nuisance level (NUI).
    • soc: Social environment quality (SOC).
    • vrz: Amenity and attractiveness (AME).
    • won: Housing quality (HOU).
  • afw (float): Auxiliary feature.
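
Since all six livability labels are floats, a common pattern is to stack them into a single target vector per sample for multi-task regression. A minimal sketch (the column order is an illustrative choice, not prescribed by the dataset):

import numpy as np

LABEL_COLUMNS = ["lbm", "fys", "onv", "soc", "vrz", "won"]

def targets(example):
    # Collect the six livability scores into one float32 vector.
    return np.array([example[c] for c in LABEL_COLUMNS], dtype=np.float32)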

πŸš€ How to Use

You can easily load this dataset using the Hugging Face datasets library:

from datasets import load_dataset

# 1. Load the entire dataset
dataset = load_dataset("Vinjou/Multimodal_urban_livability_evaluation_dataset")

# 2. Access a specific split
train_data = dataset["train"]

# 3. Get a sample
example = train_data[0]

# 4. Access multimodal features
print(f"Sample ID: {example['id']}")
print(f"POI Text: {example['text']}")
print(f"Livability Score: {example['lbm']}")

# Images are automatically loaded as PIL objects
example["rs_image"].save("rs_sample.png")
example["dsm_image"].show()

Loading in Streaming Mode (Optional)

If you don't want to download the full dataset (52,000+ samples) at once, use streaming:

dataset = load_dataset("Vinjou/Multimodal_urban_livability_evaluation_dataset", streaming=True)
next_sample = next(iter(dataset["train"]))
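
In streaming mode each split is an IterableDataset, so you iterate instead of indexing. For example, to peek at a few samples without downloading the full split:

for sample in dataset["train"].take(3):
    print(sample["id"], sample["lbm"])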

πŸ› οΈ Data Collection & Preprocessing

Images are provided in 250x250 pixel patches.

DSM Image Conversion: The original Digital Surface Model (DSM) data is single-channel grayscale (representing height). To ensure compatibility with standard vision backbones (like DenseNet or ResNet) that expect 3-channel inputs, we converted the grayscale images to RGB by duplicating the single intensity channel across the Red, Green, and Blue channels (using PIL's .convert("RGB")). This preserves the spatial height information while maintaining a standard 3-channel image format.
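
The same conversion can be reproduced with PIL. A minimal sketch ("dsm_patch.png" is a hypothetical file name; any single-channel DSM patch works):

from PIL import Image
import numpy as np

dsm_gray = Image.open("dsm_patch.png").convert("L")  # one channel: height as intensity
dsm_rgb = dsm_gray.convert("RGB")                    # same value copied to R, G, B

arr = np.array(dsm_rgb)
print(arr.shape)                            # (250, 250, 3) for patches in this dataset
assert (arr[..., 0] == arr[..., 2]).all()   # channels are identical by construction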

πŸ“œ Citation

If you use this dataset in your research, please cite:

@article{ZHOU2026115232,
  title    = {A transformer based multi-task deep learning model for urban livability evaluation by fusing remote sensing and textual geospatial data},
  journal  = {Remote Sensing of Environment},
  volume   = {334},
  pages    = {115232},
  year     = {2026},
  issn     = {0034-4257},
  doi      = {10.1016/j.rse.2026.115232},
  url      = {https://www.sciencedirect.com/science/article/pii/S0034425726000027},
  author   = {Wen Zhou and Claudio Persello and Dongping Ming and Shaowen Wang and Alfred Stein},
  keywords = {Urban livability, Multimodal deep learning, Satellite images, Digital surface model, Nighttime light remote sensing, Textual information},
  abstract = {Livable cities enhance urban economic development, improve physical and mental health, foster well-being, and foster urban sustainability. Evaluating urban livability is therefore important for policymakers to develop urban planning and development strategies aimed at improving livability. Mainstream methods of evaluating urban livability assign different weights to diverse indicators extracted from survey data, statistical data, and geospatial data. To relieve such time-consuming and labor-intensive data collection, this study proposes a transformer-based multi-task multimodal regression (TMTMR) model for the simultaneous evaluation of urban livability focusing on five domain-specific scores. Pretrained state-of-the-art computer vision and natural language processing models serve as backbones to extract features from high spatial resolution remote sensing (RS) images, digital surface models (DSM), night light remote sensing (NLRS) images and point of interest (POI) data. An attention mechanism helps the TMTMR model to assign varying significance levels to features from different modalities, thus capturing both intrinsic information and interrelationships among modalities for livability evaluation. Focusing on 13 Dutch areas, our research demonstrates that the TMTMR model efficiently evaluates urban livability with correlation coefficients ranging from 0.605 to 0.779, and root mean square error values between 0.070 and 0.112 in four unseen test areas. Furthermore, we analyze the synergy between different modalities. We found that modalities of urban livability can be effectively evaluated by aligning, in a descending order, contributions from RS images, NLRS images, DSM, and POI data. We demonstrated that the proposed TMTMR model is capable of effectively evaluating urban livability directly from multimodal geospatial data.}
}