{miaozhng, hs3673, lcc9673, rumi.chunara}@nyu.edu
Abstract
The increasing availability of high-resolution satellite
imagery has enabled the use of machine learning to support
land-cover measurement and inform policy-making. How-
ever, labelling satellite images is expensive and is available
for only some locations. This prompts the use of transfer
learning to adapt models from data-rich locations to others.
Given the potential for high-impact applications of satel-
lite imagery across geographies, a systematic assessment of
transfer learning implications is warranted. In this work,
we consider the task of land-cover segmentation and study
the fairness implications of transferring models across lo-
cations. We leverage a large satellite image segmentation
benchmark with 5987 images from 18 districts (9 urban
and 9 rural). Via fairness metrics we quantify disparities
in model performance along two axes – across urban-rural
locations and across land-cover classes. Findings show that
state-of-the-art models have better overall accuracy in rural areas compared to urban areas, though unsupervised domain adaptation methods transfer better to urban than to rural areas and enlarge fairness gaps. In analysis of reasons for these findings, we show that raw satellite images are overall more dissimilar between source and target districts for rural than for urban locations. This work
highlights the need to conduct fairness analysis for satel-
lite imagery segmentation models and motivates the devel-
opment of methods for fair transfer learning in order not
to introduce disparities between places, particularly urban
and rural locations.
1. Introduction
Satellite imagery is becoming readily available with
around 1030 active satellites that are dedicated to earth ob-
servation [36]. Out of the different spectra of imagery avail-
able from such satellites, visible spectrum imagery is partic-
ularly relevant for many applications owing to its extremely high resolution and the corresponding ability to resolve specific objects of interest [5]. Consequently, satellite images combined with semantic segmentation, the task of clustering
parts of an image together which belong to the same ob-
ject class, can be used to detect objects ranging from natu-
ral features (water bodies, forests) to human land-use types
(buildings, roads). The extracted information is being ap-
plied in a wide range of settings including urban planning
[34], modelling disease spread [1], aiding disaster relief
efforts [16, 57], and detecting and mapping environmental
phenomena [24,55]. However, because segmentation mod-
els employ supervised learning, availability of ground truth
data is a major bottleneck for their training. Annotation for
the segmentation task is particularly labor intensive as it re-
quires fine-grained labels at the level of pixels which results
in incomplete or noisy ground truth data [45]. In such sit-
uations, generalizing existing models to non-annotated data
by transfer learning is a widely applied solution [43, 48].
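To make concrete why pixel-level annotation is the bottleneck, note that segmentation models are also evaluated at the pixel level, typically with per-class intersection-over-union (IoU). A minimal sketch of this metric, using tiny hypothetical label grids (class ids 0/1/2 are illustrative, not the benchmark's classes):

```python
# Minimal sketch: per-class intersection-over-union (IoU) for a
# segmentation mask, using plain nested lists as toy "images".
# The class ids and 3x3 grids below are illustrative only.

def per_class_iou(pred, truth, num_classes):
    """Return {class_id: IoU} over two equally sized label grids."""
    ious = {}
    for c in range(num_classes):
        inter = union = 0
        for p_row, t_row in zip(pred, truth):
            for p, t in zip(p_row, t_row):
                inter += (p == c) and (t == c)
                union += (p == c) or (t == c)
        ious[c] = inter / union if union else float("nan")
    return ious

# Toy example: 0 = background, 1 = building, 2 = water.
truth = [[0, 1, 1],
         [0, 1, 1],
         [2, 2, 0]]
pred  = [[0, 1, 1],
         [0, 0, 1],
         [2, 2, 2]]
print(per_class_iou(pred, truth, num_classes=3))
# {0: 0.5, 1: 0.75, 2: 0.6666666666666666}
```

Because every pixel enters the score, even small annotation gaps or label noise directly distort both training and evaluation.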
Transfer learning uses knowledge learnt from the same
or related tasks to improve learning on the task at hand (see
Pan and Yang [39] for a survey). We will focus on a type of
transfer learning setting called domain adaptation, where
we have a single task but the train and test domains may
differ. The key challenge here is the discrepancy in data
distributions between domains. In the case of satellite im-
agery, the discrepancies commonly result from transferring
models to new geographies where the landscapes are dis-
similar to where the model was trained. For example, Islam
[22] finds that a well-trained seagrass detection model from
satellite images fails when tested at other locations with dif-
ferent seagrass density. To mitigate the degrading effects of
domain discrepancies on segmentation accuracy, previous
work has re-designed network architectures [28], loss func-
tions [18,50], and batch normalization methods [38] to im-
prove model generalization. Other approaches include us-
ing labels at a coarser granularity for the target domain (e.g.
image-level labels) as weak supervision [21] and learning
latent representations shared between source and target domains to help in adaptation [26, 29].
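A recurring ingredient in such adaptation methods is a measure of the distribution discrepancy between source and target features. One simple instance is a linear-kernel maximum mean discrepancy (MMD), i.e. the squared distance between domain feature means; the sketch below uses synthetic placeholder features, not features from any model in this paper:

```python
# Minimal sketch of a domain-discrepancy measure: linear-kernel
# maximum mean discrepancy (MMD), computed as the squared Euclidean
# distance between the mean feature vectors of two domains.
# The 2-D feature vectors below are synthetic placeholders.

def mean_vector(feats):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(feats)
    return [sum(f[d] for f in feats) / n for d in range(len(feats[0]))]

def linear_mmd(source_feats, target_feats):
    """Squared distance between source and target feature means."""
    mu_s = mean_vector(source_feats)
    mu_t = mean_vector(target_feats)
    return sum((a - b) ** 2 for a, b in zip(mu_s, mu_t))

source = [[0.0, 1.0], [2.0, 1.0]]   # mean (1.0, 1.0)
target = [[1.0, 3.0], [3.0, 3.0]]   # mean (2.0, 3.0)
print(linear_mmd(source, target))   # (1-2)^2 + (1-3)^2 = 5.0
```

Adaptation methods that learn shared latent representations can be read as minimizing a discrepancy of this kind alongside the supervised loss on the source domain.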
Simultaneously, while machine learning approaches
have been used to improve prediction in a variety of tasks,
recent studies have highlighted concerns towards model
fairness, exhibited by performance disparities across sen-
sitive groups based on geography, demographics, and eco-
nomic indicators [31,58]. A push for model fairness aligns
with the ideal of equity defined by the World Health Organization as “Equity is the absence of unfair, avoidable or remediable differences among groups of people, whether those
groups are defined socially, economically, demographically,
or geographically or by other dimensions of inequality (e.g.
sex, gender, ethnicity, disability, or sexual orientation).”
Real-world examples have demonstrated the harmful effects
of unfair machine learning models, such as facial recogni-
tion software that performs worse on darker women [4] and
advertisement systems that deliver economic opportunity-
related ads less often to women than men [25]. Indeed,
discriminatory issues persist even in state-of-the-art learn-
ing methods [32]. Expanding types of data used in ma-
chine learning tasks, such as satellite imagery, enables in-
creased use in a wide range of daily-life applications and
ever-increasing social impacts. Accordingly, broader as-
pects and viewpoints of performance, such as fairness, need
to be ascertained in multiple machine learning subareas.
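The disparities at issue can be quantified with simple group-wise metrics, such as the gap between per-group accuracies. A minimal sketch over toy (prediction, label) pairs, with “urban”/“rural” as illustrative group names rather than results from this study:

```python
# Minimal sketch: quantifying a performance disparity between groups
# (e.g. urban vs. rural districts) as the gap between per-group
# accuracies. The (prediction, label) pairs below are toy data.

def accuracy(pairs):
    """Fraction of (prediction, label) pairs that agree."""
    return sum(p == t for p, t in pairs) / len(pairs)

def fairness_gap(results_by_group):
    """Per-group accuracies and the max-min accuracy gap."""
    accs = {g: accuracy(pairs) for g, pairs in results_by_group.items()}
    return accs, max(accs.values()) - min(accs.values())

results = {
    "urban": [(1, 1), (0, 1), (1, 1), (0, 0)],  # 3/4 correct
    "rural": [(1, 1), (1, 1), (0, 0), (1, 1)],  # 4/4 correct
}
accs, gap = fairness_gap(results)
print(accs, gap)  # gap = 0.25
```

The same pattern extends to segmentation by replacing accuracy with a per-group IoU, which is the style of analysis this work pursues across urban-rural locations and land-cover classes.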