that domain knowledge and model assumptions are of prime importance and that models must be challenged 1) to respect the reality of the physical/biological/chemical processes governing the system under study and 2) to be accountable, through the transparency of their internal reasoning, for the decisions to which they lead. This is especially important when models are intended to be used for actual decision making and can affect balances of power or society-changing decisions. Later in this article, we present ideas that are aligned with these directions (see Table 1) and centered on the injection of domain knowledge, but for different purposes. First, we discuss physics-aware ML, whose goal is to use domain knowledge to restrict the solution space of the models so that the outcome is physically plausible (direction 4 in Table 1). This ensures that the physical consistency of the solutions is maintained while avoiding aberrant outcomes that violate physical laws (e.g., mass and energy conservation).
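As a minimal sketch of how such a constraint can be imposed, the example below (in PyTorch, with hypothetical flux variables and a made-up penalty weight; the article does not prescribe this particular formulation) adds a soft physics term to a standard data-fit loss so that predictions whose components do not sum to the observed total are discouraged during training:

```python
import torch

def physics_aware_loss(pred_fluxes, target_fluxes, observed_total, lambda_phys=0.1):
    """Data-fit loss plus a soft physical-consistency penalty (illustrative only).

    pred_fluxes:    (batch, n_components) predicted flux components
    target_fluxes:  (batch, n_components) reference values
    observed_total: (batch,) measured total that the components should sum to
    lambda_phys:    weight of the physics penalty (hypothetical value)
    """
    # Data-fit term: ordinary mean squared error against the references.
    data_term = torch.mean((pred_fluxes - target_fluxes) ** 2)

    # Physics term: penalize violations of the assumed mass-balance
    # constraint sum_i flux_i = observed_total for every sample.
    residual = pred_fluxes.sum(dim=1) - observed_total
    physics_term = torch.mean(residual ** 2)

    return data_term + lambda_phys * physics_term
```

A soft penalty only nudges the network toward the constraint; when exact conservation is required, harder alternatives such as projecting the outputs onto the feasible set can be used instead.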
We then discuss how best to obtain human-understandable interpretations and explanations of the inner functioning of the models, to understand why and how models make decisions (direction 5 in Table 1). This has the advantage of making the model trustable and nonfalsifiable and of avoiding situations where the right conclusions are reached for the wrong reasons.
Explainability also enhances the potential for testing novel hypotheses and acquiring new scientific knowledge from the analysis of the model's functioning. Directions 4 and 5 can be combined: they use domain knowledge in various ways and with different goals. Full transparency of the models' weights is not absolutely necessary at this stage, as interpretations can be achieved by analyzing the inputs (e.g., with Local Interpretable Model-agnostic Explanations, LIME [23]) and physics awareness can be realized through modified loss functions.
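As an illustration of such input-based interpretation, the sketch below applies the lime package to a generic tabular regressor; the data, model choice, and feature names are placeholders rather than a setup taken from this article:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

# Placeholder data: rows of covariates, target depending on two of them.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# LIME fits a simple local surrogate around one sample and reports which
# inputs drove that particular prediction.
explainer = LimeTabularExplainer(
    X,
    feature_names=["temperature", "humidity", "radiation", "wind"],  # hypothetical names
    mode="regression",
)
explanation = explainer.explain_instance(X[0], model.predict, num_features=4)
print(explanation.as_list())  # (feature condition, local weight) pairs
```

Because the explanation is built only from perturbed inputs and the corresponding predictions, the same recipe applies to black-box models whose weights are never inspected at all.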
Yet, as mentioned previously, science is about understanding the world in which we live, not just approximating it. We argue that, without learning causal relationships from observational data and assumptions, this ambitious goal of understanding the Earth system will not be possible (direction 6 in Table 1). In this case, learning cause-and-effect relationships is a mix of the previous ingredients, as domain knowledge is needed to design the model in such a way that it can reveal (possibly novel) cause-and-effect relationships that can then be explained using domain knowledge.
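To make the causal-learning ingredient more concrete, the toy sketch below (plain NumPy with simulated, hypothetical variables) shows the conditional-independence reasoning that constraint-based causal discovery builds on: X influences Z only through Y, so the X–Z correlation vanishes once Y is controlled for, and this is the kind of statistical evidence such methods combine with domain assumptions to prune and orient edges in a causal graph:

```python
import numpy as np

def partial_corr(a, b, cond):
    """Correlation between a and b after regressing out cond from both."""
    design = np.column_stack([np.ones_like(cond), cond])         # intercept + conditioning variable
    ra = a - design @ np.linalg.lstsq(design, a, rcond=None)[0]  # residual of a given cond
    rb = b - design @ np.linalg.lstsq(design, b, rcond=None)[0]  # residual of b given cond
    return np.corrcoef(ra, rb)[0, 1]

# Simulated causal chain X -> Y -> Z.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * x + rng.normal(scale=0.5, size=5000)
z = 0.8 * y + rng.normal(scale=0.5, size=5000)

print(np.corrcoef(x, z)[0, 1])  # clearly nonzero: X and Z covary
print(partial_corr(x, z, y))    # near zero: X is independent of Z given Y
```

Constraint-based algorithms run many such tests over variable pairs and conditioning sets; domain knowledge then guides which edges are plausible and how the resulting relationships are interpreted.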
TABLE 1. A SUMMARY OF THE SIX RESEARCH DIRECTIONS PRESENTED IN THIS ARTICLE.

Direction 1
In a nutshell: Go beyond recognition toward induction, deduction, spatial and temporal reasoning, and structural inference.
References: [6], [7]
Current issues: Missing or very limited benchmarks and novel tasks, as well as reasoning models; the interpretability is unsolved.
10 years from now: Intelligent systems linking meaningful transformation of entities, e.g., over space or time, and deriving knowledge the way people understand the visual world and its processes.

Direction 2
In a nutshell: Think beyond the raster and consider all the possible inputs and sources of supervision, in particular, geotagged social media data.
References: [8]–[10]
Current issues: The presence of data set biases and of label noise; a spatiotemporal mismatch between data sources; and scalability, with an increasing number of sources.
10 years from now: Systems that use a wide variety of sources to enable a fine-grained understanding of the world, all with minimal human effort required for data set building and system design.

Direction 3
In a nutshell: Query the world by asking questions about images and create descriptions.
References: [11], [12]
Current issues: Simplistic language models, limited choice of thematic interactions, and a lack of large-scale infrastructure.
10 years from now: Visual search engines have an understanding of questions about images, are able to adapt to different types of requests, and are usable for everyone.

Direction 4
In a nutshell: Make models learned using deep neural networks consistent with domain-specific knowledge, like equations from physics.
References: [1], [13]–[16]
Current issues: Networks' outputs are not physically consistent; networks are often used as emulators of simulations but do not explore beyond current simulators' constraints: they cannot discover new physical rules.
10 years from now: Systems trainable with much fewer data because they constrain the output space via physical knowledge; systems that learn a new hypothesis for new science generation.

Direction 5
In a nutshell: Enhancing interpretability and explainability to understand processes in ML models in a better way.
References: [17], [18]
Current issues: A lack of human-understandable interpretation, with a