[FIGURE 4: example question-answer pairs over the two image types.]
FIGURE 4. The RSVQA system predictions for (a) Sentinel-2 and (b) aerial images, respectively, and different questions for each. The same model is used to answer all of the questions related to one-resolution imagery. GT: ground truth; VHR: very high resolution. (Sources: (a) Copernicus and (b) USGS.)
JUNE 2021 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE

model can (and must) be improved: for example, automatic data generation has its flaws, especially due to the very simplistic language model used, for which new models from natural language processing could help improve the performance greatly. Also, less-classical tasks (i.e., tasks not reducible to classification, regression, or detection) should be imagined; for instance, when allowing for more complex output spaces, the lessons learned from image captioning in remote sensing [60] show that it is possible to move toward models that generate descriptions of the image content, which could be used in, e.g., image retrieval [61].
DIRECTION 4: PHYSICS-AWARE ML |
As seen through the eyes of a practitioner, a major drawback of deep learning models is that they can lead to implausible results with scores that indicate high confidence in the outputs if no high-level constraints are imposed that check for consistency with theory. One possibility to compensate for this shortcoming is integrating domain knowledge into the modeling procedure. Particularly in the environmental and geosciences, the laws of physics, chemistry, or biology govern the underlying processes, and much theory exists.
An interesting direction of research is thus how best to tightly couple ML, and especially deep learning, with physical laws. The hope is that this introduction of domain knowledge can help reduce the manual labeling effort for supervised learning, counter data set biases, lessen the influence of label noise, lead to good generalization capabilities, and, eventually, result in plausible outputs that adhere to the underlying physical principles. Machine learning models need to incorporate domain physical knowledge to become consistent, explainable (see direction 5 in Table 1), and causal (see direction 6 in Table 1), while still learning from observational data, which makes them amenable to backpropagation. In addition, physics-consistent ML approaches for remote sensing emphasize modeling natural phenomena with higher accuracy, which is not necessarily the case for the two other research directions. In the following sections, we present several ideas clustered into three lines of thought: constrained optimization, physics layers in deep neural networks, and encoding and learning differential equations. A recent overview of the main families of approaches to the interaction between physics and ML for Earth observation is available in [62].
CONSTRAINED OPTIMIZATION |
A first consideration when designing physics-consistent ML approaches is to impose constraints on the loss function [13], [63]. Loss functions that encode the physical principles of a particular problem, while using otherwise mostly unchanged model architectures, can ensure that the learned model respects the laws of physics; see Figure 5 for an example that includes a dependence-based regularizer [52]. In addition, this strategy can significantly reduce the number of labels required for training, down to practically zero in some cases [14].
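As a minimal sketch of this idea (an illustration, not a method from the article), such a loss can be written as a standard data-fit term plus a penalty on violations of a known physical rule; the rule here is a hypothetical non-negativity constraint on the predicted quantity (e.g., a concentration):

```python
import numpy as np

def physics_constrained_loss(y_pred, y_true, lam=1.0):
    """Data-fit term plus a penalty for violating a physical constraint.

    Hypothetical constraint: the predicted quantity (e.g., a
    concentration) must be non-negative. Negative predictions are
    penalized quadratically; `lam` trades off data fit vs. physics.
    """
    mse = np.mean((y_pred - y_true) ** 2)   # standard data-fit term
    violation = np.minimum(y_pred, 0.0)     # nonzero only where physics is violated
    penalty = np.mean(violation ** 2)       # quadratic penalty on violations
    return mse + lam * penalty
```

Because the physics penalty needs no labels, unlabeled samples can contribute to training through that term alone, which is one way the label requirement can shrink toward zero.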
Designing custom-tailored loss functions, possibly combined with models trained on simulated data, represents another promising direction of research. However, this approach calls for very specific designs of loss functions that are not always straightforward and may simply not exist for many problems in remote sensing. For example, it seems very difficult to design a corresponding loss function for the semantic segmentation of cars in aerial images or the detection of building facades in street-level panoramas, because the large intraclass variability of the appearances would require a very large set of constraints.
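To make the dependence-based regularization of Figure 5 concrete, the following sketch computes a normalized HSIC between model predictions and the output of a physical model. It is a rough stand-in under simplifying assumptions (Gaussian kernels, 1-D samples), not the fair kernel learning implementation of [52]:

```python
import numpy as np

def _center(K):
    """Double-center a kernel matrix: H @ K @ H with H = I - (1/n) 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def nhsic(a, b, sigma=1.0):
    """Normalized Hilbert-Schmidt independence criterion between two
    1-D samples, using Gaussian kernels. Returns a value in [0, 1];
    values near 1 indicate strong (kernel-space) dependence."""
    a = np.asarray(a, float).reshape(-1, 1)
    b = np.asarray(b, float).reshape(-1, 1)
    K = np.exp(-((a - a.T) ** 2) / (2 * sigma ** 2))
    L = np.exp(-((b - b.T) ** 2) / (2 * sigma ** 2))
    Kc, Lc = _center(K), _center(L)
    hsic = np.sum(Kc * Lc)  # equals tr(Kc @ Lc) for symmetric matrices
    return hsic / np.sqrt(np.sum(Kc * Kc) * np.sum(Lc * Lc))
```

A hybrid loss in the spirit of Figure 5 could then take the form `mse - lam * nhsic(predictions, physical_model_output)`, rewarding predictions that are not only accurate but also statistically dependent on the physical model.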
PHYSICS LAYERS IN DEEP NEURAL NETWORKS |
An interesting idea, that of making use of well-established deep neural networks while still learning and constraining the underlying physics, is to add layers that encode physics [1], [64] (see Figure 6). The general background knowledge gained from physics can be encoded in the deeper network layers. Together with a custom-tailored loss function, this approach enables the end-to-end
FIGURE 5. A standard family of hybrid modeling can be framed as a constrained optimization problem, where the physical rules are included as a particular form of regularizer [69]. The fair kernel learning [52] method forces model predictions to be not only accurate but also statistically dependent on a physical model, simulations, or ancillary observations. In this example, we forced the dependence of a data-driven model with respect to four standard ocean-color parametric models (Morel1, CalCOFI two-band linear, OC2, and OC4) and trained our constrained model to estimate the levels of ocean chlorophyll content from input radiances. We did so with increased dependency (as estimated by the NHSIC metric) between the ML