---
license: apache-2.0
dataset_info:
features:
- name: condition
dtype: image
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: heading
dtype: float64
- name: elevation
dtype: float64
- name: panoid
dtype: string
- name: theta
dtype: int64
- name: phi
dtype: int64
- name: fov
dtype: int64
splits:
- name: train
num_bytes: 4626868770.85
num_examples: 99825
download_size: 4740521476
dataset_size: 4626868770.85
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Satellite to GroundScape - Large-scale Consistent Ground View Generation from Satellite Views
[**🌐 Homepage**](https://gdaosu.github.io/sat2groundscape/) | [**📖 arXiv**](https://arxiv.org/abs/2504.15786)
## Introduction
Generating consistent ground-view images from satellite imagery is challenging, primarily due to the large discrepancies in viewing angles and resolution between satellite and ground-level domains. Previous efforts mainly concentrated on single-view generation, often resulting in inconsistencies across neighboring ground views. In this work, we propose a novel cross-view synthesis approach designed to overcome these challenges by ensuring consistency across ground-view images generated from satellite views. Our method, based on a fixed latent diffusion model, introduces two conditioning modules: satellite-guided denoising, which extracts high-level scene layout to guide the denoising process, and satellite-temporal denoising, which captures camera motion to maintain consistency across multiple generated views. We further contribute a large-scale satellite-ground dataset containing over 100,000 perspective pairs to facilitate extensive ground scene or video generation. Experimental results demonstrate that our approach outperforms existing methods on perceptual and temporal metrics, achieving high photorealism and consistency in multi-view outputs.
## Description
Sat2GroundScape contains 99,825 pairs of satellite-ground data in perspective format, with the following fields:
* condition: [256x256x3] satellite RGB texture, rendered from the ground-level camera.
* lat, lon: latitude and longitude of the ground image.
* elevation: elevation (meters) of the ground image.
* heading: heading (degrees) of the ground image in panorama format.
* panoid: ID used to download the corresponding ground-truth ground-view image in panorama format.
* theta, phi, fov: parameters for cropping the perspective image out of the panorama, with theta and phi giving the crop center and fov the crop range.
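The theta and phi angles index into the equirectangular panorama. As a rough sketch of that mapping (assuming the common equirectangular convention of a 1024x512 panorama with theta = 0 at the centre column and phi = 0 at the horizon, positive phi pointing up; verify against your own crops), the crop centre lands at the following pixel coordinates:

```python
def pano_pixel(theta, phi, pano_w=1024, pano_h=512):
    """Map view angles (degrees) to equirectangular pixel coordinates.

    theta: yaw in [-180, 180], 0 at the panorama centre column.
    phi:   pitch in [-90, 90], 0 at the horizon (centre row), positive up.
    """
    u = (theta / 360.0 + 0.5) * pano_w   # yaw is linear in columns
    v = (0.5 - phi / 180.0) * pano_h     # pitch is linear in rows, up = smaller v
    return u, v
```

For example, `pano_pixel(90, 0)` gives the column three quarters of the way across the panorama, on the horizon row.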
## Downloading the Ground-view Panorama Image
Each ground-truth ground-view image is associated with a unique ID, panoid. Please refer to https://github.com/robolyst/streetview for downloading the original ground-view image (512x1024x3).
```python
from streetview import get_streetview

image = get_streetview(
    pano_id="z80QZ1_QgCbYwj7RrmlS0Q",
    api_key=GOOGLE_MAPS_API_KEY,
)
image.save("image.jpg", "jpeg")
```
## Panorama to Perspective
Given theta, phi, and fov, we can crop the perspective image out of the panorama image. Please refer to https://github.com/fuenwang/Equirec2Perspec.
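For reference, the resampling that the library performs can be sketched with numpy alone. This is a minimal nearest-neighbour version (the actual Equirec2Perspec implementation uses `cv2.remap` with bilinear interpolation, and its angle conventions may differ slightly from the common ones assumed here):

```python
import numpy as np

def equirect_to_perspective(equi, fov, theta, phi, out_h, out_w):
    """Sample a pinhole-camera view from an equirectangular panorama.

    fov, theta, phi are in degrees; theta is yaw (right positive) and
    phi is pitch (up positive). Nearest-neighbour sampling only.
    """
    H, W = equi.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov) / 2)  # focal length in pixels

    # Pixel grid centred on the principal point, with rays through it.
    xs = np.arange(out_w) - (out_w - 1) / 2
    ys = np.arange(out_h) - (out_h - 1) / 2
    x, y = np.meshgrid(xs, ys)
    rays = np.stack([x, y, np.full_like(x, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays by pitch (about x) then yaw (about y).
    t, p = np.radians(theta), np.radians(phi)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(t), 0, np.sin(t)],
                   [0, 1, 0],
                   [-np.sin(t), 0, np.cos(t)]])
    rays = rays @ Rx.T @ Ry.T

    # Ray direction -> spherical lon/lat -> equirectangular pixel coords.
    lon = np.arctan2(rays[..., 0], rays[..., 2])    # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))   # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = ((lat / np.pi + 0.5) * H).astype(int).clip(0, H - 1)
    return equi[v, u]
```

With theta = phi = 0, the centre of the output samples the centre of the panorama, and increasing theta pans the view to the right.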
```python
import cv2
import Equirec2Perspec as E2P

if __name__ == '__main__':
    equ = E2P.Equirectangular('src/image.jpg')  # load the equirectangular (panorama) image

    # FOV is in degrees.
    # theta is the z-axis angle (right is positive, left is negative).
    # phi is the y-axis angle (up is positive, down is negative).
    # height and width are the output image dimensions.
    img = equ.GetPerspective(60, 0, 0, 720, 1080)  # (FOV, theta, phi, height, width)
```
## Citation
**BibTex:**
```bibtex
@article{xu2025satellite,
title={Satellite to GroundScape--Large-scale Consistent Ground View Generation from Satellite Views},
author={Xu, Ningli and Qin, Rongjun},
journal={arXiv preprint arXiv:2504.15786},
year={2025}
}
```