and physical models. The results show that including the dependence regularizer (i.e., for higher NHSIC values) helps reduce the root-mean-square error (RMSE) and that the OC2 and OC4 physical models, in particular, improve the error and consistency of the data-driven model. Morel1: Morel's version 1 algorithm; OC2: ocean chlorophyll 2 version; OC4: ocean chlorophyll 4 version; NHSIC: normalized Hilbert–Schmidt Independence Criterion.
training of common deep networks that comply with
physical constraints. |
Although the idea of adding physical layers on top of common deep network architectures seems intuitive, implementing it for a wide range of remote sensing tasks is far from trivial. One scheme could be to start with simplified versions that do not encode physics directly but rather some related, simplified constraints, for instance, imposing maximum values for vegetation height mapping [65], tree stress [66], or flood water depth [57].
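As an illustration of such a simplified constraint, the following is a minimal sketch (not taken from the cited works; the regression head, the choice of PyTorch, and the bound of 60 m are assumptions made here for illustration) in which a bounded activation keeps every prediction within a physically plausible range, e.g., a maximum canopy height:

```python
import torch
import torch.nn as nn

class BoundedHeightHead(nn.Module):
    """Regression head whose output is constrained to (0, h_max) meters."""

    def __init__(self, in_features: int, h_max: float = 60.0):
        super().__init__()
        self.linear = nn.Linear(in_features, 1)
        self.h_max = h_max  # assumed physical upper bound on canopy height

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # A sigmoid maps the raw score to (0, 1); scaling by h_max
        # guarantees the prediction never exceeds the physical bound.
        return self.h_max * torch.sigmoid(self.linear(features))

# Example: features from any backbone -> physically bounded height estimates
head = BoundedHeightHead(in_features=128)
heights = head(torch.randn(4, 128))  # shape (4, 1), all values in (0, 60)
```

The same pattern, a bounded or nonnegative activation on the output, would apply to other simple constraints such as flood water depth.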
ENCODING AND LEARNING DIFFERENTIAL EQUATIONS |
Probably the biggest advances toward deep neural networks that incorporate physics are so-called physics-informed neural networks, which directly encode nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs) in deep learning architectures while allowing for end-to-end training [15], [67]. Instead of using standard network layers, the authors of those works proposed a framework that directly encodes nonlinear differential equations in the network and remains fully end-to-end trainable. This idea allows for learning yet-unknown correlations and coming up with novel research hypotheses in a data-driven way, a central point also raised in the previous research directions on interpretability. Probabilistic models such as Gaussian processes also allow for the encoding of ODEs as a form of convolutional process [68] and report additional advantages: aside from uncertainty quantification and propagation, they also learn the explicit form of the driving force and the ODE parameters, offering a solid basis for model understanding and interpretability (see the "Interpretable and Explainable ML" section).
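As a concrete, hedged illustration of this idea, the following is a minimal sketch in the spirit of physics-informed neural networks [15] (not the code of those works; the toy ODE du/dt + λu = 0, the network size, and the equal weighting of the two loss terms are assumptions made here). The physics enters the training objective as a differential-equation residual evaluated with automatic differentiation:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lam = 0.5  # assumed (known) decay rate of the toy ODE du/dt + lam * u = 0

# small fully connected network approximating u(t)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))

# a few noisy observations of the true solution u(t) = exp(-lam * t)
t_obs = 4.0 * torch.rand(20, 1)
u_obs = torch.exp(-lam * t_obs) + 0.01 * torch.randn_like(t_obs)

# collocation points where the ODE residual is penalized
t_col = torch.linspace(0.0, 4.0, 100).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss_data = ((net(t_obs) - u_obs) ** 2).mean()    # data fidelity
    u_col = net(t_col)
    du_dt = torch.autograd.grad(u_col.sum(), t_col, create_graph=True)[0]
    loss_phys = ((du_dt + lam * u_col) ** 2).mean()   # physics residual
    (loss_data + loss_phys).backward()                # end-to-end training
    opt.step()
```

In the same spirit, λ could itself be made a trainable parameter so that the network and the physical parameter are learned jointly from data, which is the route toward learning ODE parameters mentioned above.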
PERSPECTIVES |
When translated to remote sensing, physics-informed ML models enable the encoding and learning of radiative-transfer equations and further physical laws, such as the backscattering of SAR signals. Although starting directly with a full set of forward-modeling equations, e.g., for simulation engines, seems very hard, one could start with simplified versions and a subset of the most important components. Another idea would be the encoding of a simplified version of the spectral-property changes of vegetation as a function of seasonality. Similar to the warping model proposed in [64], one could encode a change of spectral canopy properties in the infrared domain to ease domain transfer between summer and winter scenes. In the broader context of geosciences and climate sciences, learning ODEs/PDEs from observational data and simulations is the direct way to explain problems and variable relationships mechanistically while still resorting to empirical data. The main challenges are keeping the learned ODEs/PDEs simple enough for scientists to understand (which is where sparsity comes into play) and validating the plausibility of such equations (which is why domain experts and computer scientists should work together). This strongly links physics-based deep learning to the explainable ML discussed in the next section.
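Before moving on, here is a minimal sketch of the sparsity point above (a SINDy-style sparse regression chosen here for illustration, not a method from this article; the candidate library, the Lasso weight, and the toy data are assumptions). The time derivative of an observed signal is regressed onto a library of candidate terms with an L1 penalty, so only a few terms, and hence a readable equation, survive:

```python
import numpy as np
from sklearn.linear_model import Lasso

# toy observations of u(t) = exp(-0.5 t), whose true dynamics are du/dt = -0.5 u
t = np.linspace(0.0, 4.0, 200)
u = np.exp(-0.5 * t)
du_dt = np.gradient(u, t)  # numerical time derivative

# library of candidate right-hand-side terms: [1, u, u^2, u^3]
library = np.column_stack([np.ones_like(u), u, u ** 2, u ** 3])

# the L1 penalty drives most coefficients to (near) zero, keeping the equation sparse
model = Lasso(alpha=1e-4, fit_intercept=False).fit(library, du_dt)
print(dict(zip(["1", "u", "u^2", "u^3"], np.round(model.coef_, 3))))
```

Most of the coefficient mass should fall on the linear term (close to the true value of -0.5), which is exactly the kind of compact, checkable equation a domain expert can validate.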
DIRECTION 5: INTERPRETABLE AND EXPLAINABLE ML
Using ML for scientific applications aims at acquiring new scientific knowledge from observational data. In addition to the accuracy of results, their scientific consistency, reliability, and explainability are of central importance. A prerequisite for achieving these is to design models that can be challenged, in other words, to create models whose inner functioning can be visualized, queried, or interpreted. In this section, we discuss the foundations of explainable AI (Figure 7) and its exciting perspectives and make links with the physics-aware/informed ML discussed in the previous section on physics-based ML.
FROM TRANSPARENCY TO EXPLAINABILITY |
Explainable ML has various definitions (see [17]), but they all revolve around the properties of transparency, interpretability, and explainability.
1) A transparent model allows us to access its components and provides the motivation for choosing certain model components. This is in contrast to black-box models such as traditional neural networks, for which one could indeed write the mathematical relationships explicitly (they are transparent in this sense), but their complexity makes them inaccessible to users.
2) An interpretable model counteracts the lack of transparency by presenting complex facts, like the processes in a neural network, in a space that can be understood by humans. Sorted by increasing interpretability power, such a space can be made of localized image coordinates [71], semantic concepts [72], or understandable text [73].
3) To achieve explainability, domain knowledge is exploited and used in combination with the interpretable model