FIGURE 6. Another approach to hybrid ML modeling is to include physics-motivated layers, learned from data end to end, in a deep neural network. The architecture learns a motion field with a convolution–deconvolution network, and the motion field is further processed by a warping physical model. The prediction error is used to adjust the network weights, and, after training, the model can produce multiple time-step predictions recursively. (Adapted from [70].)
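As a rough illustration of the warping step described in the caption, the following is a minimal PyTorch sketch that warps the most recent frame with a dense per-pixel motion field; the function name, tensor layout, and normalization are illustrative assumptions, not the exact formulation of [70].

```python
import torch
import torch.nn.functional as F

def warp(frame: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
    """Warp an image with a learned per-pixel displacement field.

    frame:  (B, C, H, W) most recent observation, e.g., I_t
    motion: (B, 2, H, W) displacements in pixels (x, y) predicted by the
            convolution-deconvolution network
    """
    b, _, h, w = frame.shape
    # Base sampling grid in normalized [-1, 1] coordinates, as grid_sample expects.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Convert pixel displacements to normalized offsets and shift the grid.
    offset = torch.stack(
        (motion[:, 0] * 2 / (w - 1), motion[:, 1] * 2 / (h - 1)), dim=-1
    )
    return F.grid_sample(frame, base + offset, align_corners=True)

# After training, multistep prediction proceeds recursively, e.g.,
# I_next = warp(I_last, motion_net(recent_frames)), with I_next appended to the
# input window for the following step (motion_net is a hypothetical name).
```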
and its components to understand, for example, why the model came to a certain decision. Therefore, explanations become application dependent, and identical interpretations can lead to different explanations when linked to different domain knowledge.
HOW EXPLAINABLE AI MAY HELP REMOTE SENSING
Recently, many tools have been proposed for increasing interpretability and explainability when combined with domain knowledge [18]. Two major groups of approaches have emerged.
1) Post hoc interpretability: In this group, the outcomes and decisions of the model are interpreted and explained by looking at the input. The most common visualizations for interpretations are heatmaps and prototypes. Heatmaps highlight the parts of the input data that are prominent, important, or occlusion sensitive; they are created, for example, from the gradient flows in the neural network (a minimal sketch is given after this list). Prototypes are optimized input data that, given a model, maximize the targeted output. Both approaches help to explain what a model bases its decisions on, what influences the output, and what a typical input looks like for the learned input–output relationships. In all cases, attention must be paid to confirmation bias, the tendency to produce explanations that fit our existing knowledge even when they do not apply to the given case (see [74] for an example of the overinterpretation of saliency maps).
2) Interpretability by design: In this case, the model is inherently designed so that it can be interpreted. Interpretability is achieved by representing model components or the obtained latent variables such that they can be explained with knowledge from a certain application domain. For instance, the units in hidden layers can be designed so that the underlying factors of variation, such as the driving forces in Earth system data, become disentangled and are captured in separate units. This can be revealed, for example, by simple correlations between variations of the input and the activations of individual neurons (see the second sketch after this list). Interesting applications of this idea are proposed in [75], where the authors disentangle the physical forces applied between objects in videos, and in [76], for explaining human perceptions of beauty in landscapes.
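To make the heatmap idea in 1) concrete, here is a minimal sketch of a vanilla gradient saliency map, assuming a trained PyTorch classifier that maps a multispectral patch to class scores; the function name and tensor shapes are illustrative and not taken from [18] or [74].

```python
import torch
import torch.nn as nn

def gradient_saliency(model: nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Vanilla gradient heatmap: |d(score of target class) / d(input)| per pixel.

    x: a single input patch of shape (channels, H, W).
    """
    model.eval()
    x = x.unsqueeze(0).requires_grad_(True)        # add batch dimension, track gradients
    score = model(x)[0, target_class]              # scalar score of the class to explain
    score.backward()                               # gradient flows back to the input
    heatmap = x.grad.abs().max(dim=1).values[0]    # collapse channels -> (H, W) saliency
    return heatmap / (heatmap.max() + 1e-8)        # normalize to [0, 1] for visualization
```

Occlusion sensitivity and prototype generation follow the same pattern: instead of a single backward pass, the input is perturbed or iteratively optimized to maximize the targeted output.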
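For the correlation check mentioned in 2), a crude sketch could look as follows, under the assumption that the network exposes its latent code through a hypothetical model.encode(...) method; in a well-disentangled representation, each driving force would correlate strongly with only one or a few units.

```python
import numpy as np
import torch

def unit_driver_correlation(model, inputs, driver):
    """Pearson correlation between each latent unit and a candidate driving variable.

    inputs: tensor of shape (n_samples, ...) fed to the encoder
    driver: array of shape (n_samples,) with a known factor (e.g., temperature)
    model:  network with a hypothetical encode() method returning (n_samples, n_units)
    """
    with torch.no_grad():
        z = model.encode(inputs).cpu().numpy()     # latent codes
    return np.array(
        [np.corrcoef(z[:, k], driver)[0, 1] for k in range(z.shape[1])]
    )                                              # one coefficient per unit
```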
[Figure 7 diagram elements: Earth data; domain knowledge (reference labels, expert knowledge, and so on); ML model; model output; interpretability, explainability, and transparency; scientific discoveries and insights.]
FIGURE 7. Explainable ML can be used to gain scientific discoveries and insights by explaining a learned model and/or results (shown in the light gray box). The prerequisites are interpretability and potential transparency, which lead to scientific explanations when combined with domain knowledge. A feedback loop allows for extending and improving the known domain knowledge. One potential application is the derivation of improved definitions, for example, for certain land-use classes, which are currently only vaguely, incompletely, or not uniformly described.

To ensure the scientific value of the output, interpretation tools can be used to check its reliability. Besides the inherently available output score of the neural network, for example, visualizations of the processes within the neural network can be used to check whether correct decisions have been made for the wrong reasons (the so-called Clever Hans effect [78]). This can be seen as an additional test of the reliability of the output, since a high network score does not always mean a correct result. In summary, these tools can increase confidence by improving the traceability of how estimates are generated and by revealing biases in the data through human-understandable visualizations.
PERSPECTIVES
Explainable ML has thus far received comparatively little attention in remote sensing, partly because of the still-predominant opinion that explainability is tightly coupled with the complexity of a model and, therefore, an increase in explainability leads directly to a decrease in accuracy (e.g., in [79]). In the meantime, however, several applications have shown that this is no longer the case.
Most of the approaches considered thus far are post hoc interpretations, but initial approaches that consider interpretability by design are appearing. In [76], for instance, the model is forced to predict human-interpretable concepts before predicting the final task (Figure 8). Such approaches have the potential to provide both reliability checks and human-understandable explanations and could be used to move toward physics-explicit models, similar to those discussed in the next section.
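A generic sketch of this concept-first idea, a concept bottleneck in which the task head sees only the predicted concepts, is shown below; layer sizes, names, and the choice of losses are illustrative assumptions, not the architecture of [76].

```python
import torch
import torch.nn as nn

class ConceptBottleneckNet(nn.Module):
    """Predicts human-interpretable concepts first, then the final task from them."""

    def __init__(self, backbone: nn.Module, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.backbone = backbone                       # e.g., a CNN feature extractor
        self.concept_head = nn.Linear(n_features, n_concepts)
        self.task_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        # Concepts are nameable attributes (e.g., vegetation cover, water presence).
        concepts = torch.sigmoid(self.concept_head(self.backbone(x)))
        logits = self.task_head(concepts)              # final task uses only the concepts
        return concepts, logits

# Training would typically combine a concept loss against expert-annotated concept
# labels with the usual task loss, e.g.,
#   loss = bce(concepts, concept_labels) + ce(logits, class_labels)
```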
As a further step, explanations going beyond today's