the class of Borel probability measures on R^d such that the 2p-th moments of all univariate margins are finite. The weights w_ij can be selected to emphasize or downweight certain pair interactions. For example, in a spatial context, the dependence between pairs can be expected to decay with the distance: choosing the weights proportional to the inverse of the distance between locations can increase the signal-to-noise ratio and improve the discriminatory power of the VS (Scheuerer and Hamill, 2015).
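As an illustration, the variogram score of order p (Scheuerer and Hamill, 2015) can be estimated from an m-member ensemble. The sketch below is a minimal implementation under stated assumptions: the helper name `variogram_score`, an ensemble array of shape (m, d), and a coordinate array of shape (d, 2) are all hypothetical choices made here, with inverse-distance weights as suggested above.

```python
import numpy as np

def variogram_score(ens, y, coords, p=0.5):
    """Ensemble estimate of the variogram score of order p with
    inverse-distance weights (a sketch; ens has shape (m, d),
    y has shape (d,), coords has shape (d, 2))."""
    m, d = ens.shape
    # pairwise distances between the d locations
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.zeros_like(dist)
    off = ~np.eye(d, dtype=bool)
    w[off] = 1.0 / dist[off]          # inverse-distance weights, zero on the diagonal
    # E|X_i - X_j|^p estimated by the ensemble mean
    ev = np.mean(np.abs(ens[:, :, None] - ens[:, None, :]) ** p, axis=0)
    ov = np.abs(y[:, None] - y[None, :]) ** p
    return np.sum(w * (ev - ov) ** 2)
```

A perfect deterministic ensemble (all members equal to the observation) attains a score of zero, and the score is nonnegative by construction.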
When the pdf f of the probabilistic forecast F is available, multivariate versions of the univariate scoring rules based on the pdf can be defined. These multivariate versions have the same properties and limitations as their univariate counterparts. The logarithmic score (11) has a natural multivariate version:

LogS(F,y) = −log(f(y)),

for y such that f(y) > 0. The logarithmic score is strictly proper relative to the class L^1(R^d).
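For a concrete case, the multivariate logarithmic score can be evaluated directly whenever the predictive density is available in closed form. The sketch below assumes a Gaussian predictive distribution N(mean, cov); the function name `log_score` is a choice made here for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_score(mean, cov, y):
    """Multivariate logarithmic score LogS(F, y) = -log f(y) for a
    Gaussian predictive distribution F = N(mean, cov) (illustration)."""
    return -multivariate_normal(mean=mean, cov=cov).logpdf(y)
```

For d = 2, mean zero, and identity covariance, evaluating at y = 0 gives LogS = log(2π), the negative log of the density peak.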
The Hyvärinen score (HS; Hyvärinen 2005) was initially proposed in its multivariate form

HS(F,y) = 2∆log(f(y)) + ∥∇log(f(y))∥^2,

for y such that f(y) > 0, where ∆ is the Laplace operator (i.e., the sum of the second-order partial derivatives) and ∇ is the gradient operator (i.e., the vector of first-order partial derivatives). In the multivariate setting, the HS can also be computed when the predicted pdf is known only up to a normalizing constant. The HS is proper relative to the subclass of P(R^d) of distributions whose density f exists, is twice continuously differentiable, and satisfies ∥∇log(f(x))∥ → 0 as ∥x∥ → ∞.
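For a Gaussian density, both terms of the HS are available in closed form: ∇log f(x) = −Σ^{-1}(x − μ) and ∆log f(x) = −tr(Σ^{-1}), so the normalizing constant never enters. A minimal sketch, assuming a predictive distribution N(mean, cov) and the hypothetical helper name `hyvarinen_score_gaussian`:

```python
import numpy as np

def hyvarinen_score_gaussian(mean, cov, y):
    """HS(F, y) = 2*Laplacian(log f)(y) + ||grad log f(y)||^2 for a
    Gaussian density f = N(mean, cov); the normalizing constant cancels."""
    prec = np.linalg.inv(cov)
    grad = -prec @ (y - mean)          # gradient of log f at y
    lap = -np.trace(prec)              # Laplacian of log f (constant in y)
    return 2.0 * lap + grad @ grad
```

Note that, unlike LogS, the HS can be negative: at y = mean the gradient term vanishes and the score equals −2 tr(Σ^{-1}).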
The quadratic score and pseudospherical score are directly suited to the multivariate setting:

QuadS(F,y) = ∥f∥_2^2 − 2f(y);

PseudoS(F,y) = −f(y)^(α−1)/∥f∥_α^(α−1),

where ∥f∥_α = (∫_{R^d} f(y)^α dy)^(1/α). The quadratic score is strictly proper relative to the class L^2(R^d). The pseudospherical score is strictly proper relative to the class L^α(R^d).
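For a Gaussian predictive density the L^2 norm has the closed form ∥f∥_2^2 = (4π)^{-d/2} det(Σ)^{-1/2} (the product of the density with itself is again Gaussian-integrable), so the quadratic score can be computed exactly. A sketch under that assumption, with the hypothetical name `quadratic_score_gaussian`:

```python
import numpy as np
from scipy.stats import multivariate_normal

def quadratic_score_gaussian(mean, cov, y):
    """QuadS(F, y) = ||f||_2^2 - 2 f(y) for f = N(mean, cov), using the
    closed form ||f||_2^2 = (4*pi)^(-d/2) * det(cov)^(-1/2) (a sketch)."""
    d = len(mean)
    l2_sq = (4.0 * np.pi) ** (-d / 2) / np.sqrt(np.linalg.det(cov))
    return l2_sq - 2.0 * multivariate_normal(mean=mean, cov=cov).pdf(y)
```

Observations near the mode are rewarded (the −2f(y) term dominates), while far in the tails the score approaches the constant ∥f∥_2^2.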
Additionally, other multivariate scoring rules have been proposed, such as the marginal-copula score (Ziel and Berk, 2019) and wavelet-based scoring rules (see, e.g., Buschow et al. 2019). These scoring rules will be briefly mentioned in Section 4 in light of the proper scoring rule construction framework proposed in this article. Appendix B provides formulas for the expectations of the multivariate scoring rules presented above.
2.4 Spatial verification tools
Spatial forecasts are a very important group of multivariate forecasts, as they are involved in various applications (e.g., weather or renewable energy forecasting). Spatial fields are often characterized by high dimensionality and potentially strong correlations between neighboring locations. These characteristics make the verification of spatial forecasts very demanding, for example in terms of discriminating misspecified dependence structures. In the case of spatial forecasts, it is known that traditional verification methods (e.g., gridpoint-by-gridpoint verification) may result in a double penalty. The double-penalty effect was pinpointed by Ebert (2008) and refers to the fact that if a forecast presents a spatial (or temporal) shift with respect to the observations, the error made is penalized twice: once where the event was observed and again where the forecast predicted it. In particular, high-resolution forecasts are penalized more than less realistic blurry forecasts. The double-penalty effect may also affect spatio-temporal forecasts in general.
In parallel with the development of scoring rules, various application-focused spatial verification methods have been developed to evaluate weather forecasts. The efforts toward improving spatial verification methods have been guided by two projects: the intercomparison project (ICP; Gilleland et al. 2009) and its second phase, called Mesoscale Verification Intercomparison over Complex Terrain (MesoVICT; Dorninger et al. 2018). These projects resulted in the comparison of spatial verification methods, with a particular focus on understanding their limitations and clarifying their interpretability. Only a few links exist between the approaches studied in these projects (and the work they induced) and the proper scoring rules framework. In particular, Casati et al. (2022) noted "a lack of representation of novel spatial verification methods for ensemble prediction systems". In general, there is a clear lack of methods focusing on the spatial verification of probabilistic forecasts. Moreover, to help bridge the gap between the two communities, we revisit the spatial verification tools in light of the scoring rule framework introduced above.
One of the goals of the ICP was to provide insights on how to develop methods robust to the double-penalty effect. In particular, Gilleland et al. (2009) proposed a classification of spatial verification tools, updated later by Dorninger et al. (2018), resulting in a five-category classification. The classes differ in the computing principle they rely on. Not all spatial verification tools mentioned in these studies can be applied to probabilistic forecasts; some of them can only be applied to deterministic forecasts. In the following description of the classes, we try to focus on methods suited to probabilistic forecasts or, at least, to the special case of ensemble forecasts.
Neighborhood-based methods consist of applying a smoothing filter to the forecast and observation fields to prevent the double-penalty effect. The smoothing filter can take various forms (e.g., a minimum, a maximum, a mean, or a Gaussian filter) and be applied over a given neighborhood. For example, Stein and Stoop (2022) proposed a neighborhood-based CRPS for ensemble forecasts, gathering forecasts and observations made within the neighborhood of the location considered. The use of a neighborhood prevents the double-penalty effect from taking place at scales smaller than that of the neighborhood. In this general form, neighborhood-based methods can lead to proper scoring rules; see, in particular, the notion of patches in Section 4.
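One possible reading of such a neighborhood-based CRPS is sketched below: pool the ensemble values over a box-shaped neighborhood and average the ensemble CRPS against every observation in that box. This is an assumption-laden illustration of the general idea, not a reproduction of the exact definition of Stein and Stoop (2022); the helper names and the box neighborhood are choices made here.

```python
import numpy as np

def crps_ensemble(x, y):
    """Standard ensemble CRPS estimator: E|X - y| - 0.5 E|X - X'|."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x - y)) - 0.5 * np.mean(
        np.abs(x[:, None] - x[None, :]))

def neighborhood_crps(ens, obs, i, j, r):
    """Sketch of a neighborhood-based CRPS (hypothetical helper):
    pool the m ensemble values over the (2r+1)x(2r+1) box around (i, j)
    and average the CRPS against every observation in that box.
    ens has shape (m, ny, nx); obs has shape (ny, nx)."""
    sl = (slice(max(i - r, 0), i + r + 1), slice(max(j - r, 0), j + r + 1))
    pooled = ens[(slice(None),) + sl].reshape(-1)   # pooled ensemble values
    targets = obs[sl].reshape(-1)
    return np.mean([crps_ensemble(pooled, y) for y in targets])
```

A spatially shifted but otherwise accurate forecast is no longer penalized twice as long as the shift stays within the box, which is the intended effect of the pooling.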
Scale-separation techniques denote methods for which the verification is obtained after comparing forecast and observation fields across different scales. The scale-separation process can be seen as a series of single-bandpass spatial filters (e.g., projection onto a wavelet basis, as in wavelet-based scoring rules; Buschow et al. 2019). However, in order to obtain proper scoring rules, the comparison of the scale-specific characteristics needs to be performed using a proper scoring rule. Section 4 provides a discussion of wavelet-based scoring rules and their propriety.
Object-based methods rely on the identification of objects of interest and the comparison of the objects obtained in the forecast and observation fields. Object identification is application-dependent and can take the form of objects that forecasters are familiar with (e.g., storm cells for precipitation forecasts). A well-known verification tool within this class is the structure-amplitude-location (SAL; Wernli et al. 2008) method, which has been generalized to ensemble forecasts by Radanovics et al. (2018). The three components of the ensemble SAL do not lead to proper scoring rules: they rely on the mean of the forecast within scoring functions that are not consistent with the mean, so the ideal forecast does not minimize the expected score. Nonetheless, the three components of the SAL method could be adapted to use proper scoring rules sensitive to the misspecification of the same features.
Field-deformation techniques consist of deforming the forecast field into the observation field (the similarity between the fields can be assessed by a metric of interest). The distortion field associated with the morphing of the forecast field into the observation field becomes a measure of the predictive performance of the forecast (see, e.g., Han and Szunyogh 2018).
Distance measures compare binary images (e.g., exceedances of a threshold of interest) derived from the forecast and observation fields. These methods are inspired by developments in image processing (e.g., Baddeley's delta measure; Gilleland 2011).
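A Baddeley-style delta measure can be sketched by comparing cutoff distance maps of the two binary fields pointwise. The implementation below is an illustration under stated assumptions (boolean numpy arrays, Euclidean distance maps, cutoff c, exponent p); the function name is hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def baddeley_delta(a, b, c=5.0, p=2.0):
    """Baddeley-style delta measure between two binary fields a and b
    (a sketch). distance_transform_edt gives the distance to the nearest
    True pixel when applied to the complement of the field; both maps
    are capped at the cutoff c before being compared."""
    da = np.minimum(distance_transform_edt(~a), c)
    db = np.minimum(distance_transform_edt(~b), c)
    return np.mean(np.abs(da - db) ** p) ** (1.0 / p)
```

Unlike a gridpoint-by-gridpoint comparison, a small spatial shift of an object changes the distance maps only gradually, so the measure degrades smoothly with displacement instead of incurring a double penalty.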
These five categories partially overlap, as it can be argued that some methods belong to multiple categories (e.g., some distance-measure techniques can be seen as a mix of field-deformation and object-based methods). They define different principles that can be used to build verification tools that are not subject to the double-penalty effect. The reader may refer to Dorninger et al. (2018) and references therein for details on the classification and on the spatial verification methods not used thereafter. The frontier between the aforementioned spatial verification methods and the proper scoring rules framework is porous, with, for example, wavelet-based scoring rules belonging to both. It appears that numerous spatial verification methods seek interpretability, and we believe that this is not incompatible with the use of proper scoring rules.