for y such that f(y) > 0. The Hyvärinen score is proper relative to the subclass of P(R) such that the density
f exists, is twice continuously differentiable and satisfies f′(x)/f(x) → 0 as |x| → ∞. It is worth noting that
the Hyvärinen score can be computed even if f is only known up to a scale factor (e.g., up to a normalizing
constant). This property allows circumventing the use of Monte Carlo methods or approximations of the
normalizing constant when it is unavailable or hard to compute. This property is shared by all local proper
scoring rules except the logarithmic score (Parry et al., 2012). Readers eager to learn more about local proper
scoring rules may refer to Parry et al. (2012) and Ehm and Gneiting (2012).
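To illustrate this scale-invariance numerically, the sketch below evaluates the Hyvärinen score of a standard Gaussian from its log-density, once with and once without the normalizing constant, and obtains the same value. It is a minimal sketch assuming the standard formulation of the score as 2(log f)″(y) + ((log f)′(y))², approximated here by central finite differences; the function name `hyvarinen_score` is ours.

```python
import math

def hyvarinen_score(logf, y, h=1e-4):
    # Hyvarinen score in the form 2 * (log f)''(y) + ((log f)'(y))**2,
    # via central finite differences; only log f up to an additive
    # constant (i.e., f up to a scale factor) is needed.
    d1 = (logf(y + h) - logf(y - h)) / (2 * h)
    d2 = (logf(y + h) - 2 * logf(y) + logf(y - h)) / h**2
    return 2 * d2 + d1**2

# standard Gaussian log-density, normalized and with the constant dropped
logf_norm = lambda y: -0.5 * y**2 - 0.5 * math.log(2 * math.pi)
logf_unnorm = lambda y: -0.5 * y**2
```

For the standard Gaussian, (log f)′(y) = −y and (log f)″(y) = −1, so the score reduces to y² − 2 at the observation y, and both log-densities give this value.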
The logarithmic score and the Hyvärinen score do not allow f to be zero. To overcome this limitation,
scoring rules expressed in terms of the Lα-norm have been proposed. The quadratic score is defined as
QuadS(F,y) = ∥f∥₂² − 2f(y),

where ∥f∥₂² = ∫ f(y)² dy. The quadratic score is strictly proper relative to the class L2(R).
The pseudospherical score is defined as
PseudoS(F,y) = −f(y)^(α−1)/∥f∥α^(α−1),
with α > 1. For α = 2, it reduces to the spherical score (see, e.g., Jose 2007). The pseudospherical score
is strictly proper relative to the class Lα(R). The four scoring rules presented above have been criticized as
they do not encourage a high probability in the vicinity of the observation y (Gneiting and Raftery, 2007).
In particular, as the logarithmic score is more sensitive to outliers, probabilistic forecasts inferred by its
minimization may be overdispersive (Gneiting et al., 2005). Moreover, the pdf is not always available, for
example in the case of ensemble forecasts.
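When the pdf is available, both scores above can be evaluated by numerical integration. The sketch below does this for a standard Gaussian forecast, for which ∥f∥₂² = 1/(2√π) in closed form, giving a check on the quadrature; the function names are illustrative, not from any particular library.

```python
import math

def pdf(x):
    # standard Gaussian density
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def lp_norm(f, p, lo=-10.0, hi=10.0, n=20000):
    # trapezoidal approximation of (integral of f(x)^p dx)^(1/p)
    h = (hi - lo) / n
    s = 0.5 * (f(lo) ** p + f(hi) ** p) + sum(f(lo + i * h) ** p for i in range(1, n))
    return (s * h) ** (1.0 / p)

def quad_score(f, y):
    # QuadS(F, y) = ||f||_2^2 - 2 f(y)
    return lp_norm(f, 2) ** 2 - 2 * f(y)

def pseudospherical_score(f, y, alpha=2.0):
    # PseudoS(F, y) = -f(y)^(alpha - 1) / ||f||_alpha^(alpha - 1);
    # alpha = 2 gives the spherical score
    return -(f(y) ** (alpha - 1)) / lp_norm(f, alpha) ** (alpha - 1)
```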
Readers may refer to the various reviews of scoring rules available (see, e.g., Bröcker and Smith 2007;
Gneiting and Raftery 2007; Gneiting and Katzfuss 2014; Thorarinsdottir and Schuhen 2018; Alexander et al.
2022). Formulas of the expected scoring rules presented are available in Appendix A.
Strictly proper scoring rules can be seen as more powerful than proper scoring rules. This is theoretically
true when the interest is in identifying the ideal forecast (i.e., the true distribution). Regardless, in practice,
scoring rules are also used to rank probabilistic forecasts and with that in mind, a given ranking of forecasts
in terms of the expectation of a strictly proper scoring rule (such as the CRPS) is harder to interpret than
a ranking in terms of the expectation of a proper but more interpretable scoring rule (such as the SE). The
SE is known to discriminate the mean, and thus, a better rank in terms of expected SE implies a better
prediction of the mean of the true distribution. Conversely, a better ranking in terms of CRPS implies a
better prediction of the whole predictive distribution, but this ranking might not be informative on its own,
and other verification tools will be needed to identify what caused it. When forecasts are not calibrated,
there seems to be a trade-off between interpretability and discriminatory power, and this becomes more
prominent in a multivariate setting. However, simple interpretable tools and highly discriminative tools can
be used complementarily.
The framework proposed in Section 3 aims at helping the construction of interpretable proper scoring rules.
2.3 Multivariate scoring rules
In a multivariate setting, forecasters cannot rely solely on univariate scoring rules, since these cannot
discriminate forecasts beyond their one-dimensional marginals and, in particular, are blind to the dependence
structure between the univariate margins. Multivariate forecasts can be applied in different
setups: spatial forecasts, temporal forecasts, multivariable forecasts or any combination of these categories
(e.g., spatio-temporal forecasts of multiple variables). Considering weather forecasting, a spatial forecast
could aim at predicting temperatures across multiple locations. A temporal forecast could be focused on
predicting rainfall at multiple lead times at a given location. A multivariable forecast could predict both
eastward and northward components of the wind. In the following, we consider F ∈ F ⊆ P(Rd) a multivariate
probabilistic forecast and y ∈ Rd an observation.
Even if there is no natural ordering in the multivariate case, the notions of median and quantile can
be adapted using level sets, and then scoring rules using these quantities can be constructed (see, e.g.,
Meng et al. 2023). Nonetheless, as the mean is well-defined, the squared error (SE) can be defined in the
multivariate setting:

SE(F,y) = ∥µF − y∥₂²,    (12)
where µF is the mean vector of the distribution F. Similar to the univariate case, the SE is proper relative
to P2(Rd). Moments are well-defined in the multivariate case, allowing the multivariate version of the
Dawid-Sebastiani score to be defined. The Dawid-Sebastiani score (DSS) was proposed in Dawid and Sebastiani
(1999) as
DSS(F,y) = log(det ΣF) + (µF − y)^T ΣF^{−1} (µF − y),
where µF and ΣF are the mean vector and the covariance matrix of the distribution F. The DSS is proper
relative to P2(Rd) and it becomes strictly proper relative to any convex class of probability measures
characterized by their first two moments (Gneiting and Raftery, 2007). The second term in the DSS is the
squared Mahalanobis distance between y and µF.
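The DSS can be written out directly from the mean vector and covariance matrix. The sketch below does so for the bivariate case, inverting the 2×2 covariance matrix by hand to keep the example dependency-free; the function name is ours.

```python
import math

def dss_bivariate(mu, cov, y):
    # DSS(F, y) = log(det Sigma_F) + (mu_F - y)^T Sigma_F^{-1} (mu_F - y),
    # written out for d = 2
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # 2x2 inverse
    r = [mu[0] - y[0], mu[1] - y[1]]
    mahalanobis_sq = sum(r[i] * inv[i][j] * r[j] for i in range(2) for j in range(2))
    return math.log(det) + mahalanobis_sq
```

With an identity covariance matrix the Mahalanobis term reduces to the squared Euclidean distance and the log-determinant vanishes.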
To define a strictly proper scoring rule for multivariate forecasts, Gneiting and Raftery (2007) proposed
the energy score (ES) as a generalization of the CRPS to the multivariate case. The ES is defined by
ESα(F,y) = EF∥X − y∥₂^α − (1/2) EF∥X − X′∥₂^α,    (13)
where α ∈ (0,2) and F ∈ Pα(Rd), the class of Borel probability measures on Rd such that the moment of
order α is finite. The definition of the ES is related to the kernel form of the CRPS (6), to which the ES
reduces for d = 1 and α = 1. As pointed out in Gneiting and Raftery (2007), in the limiting case α = 2, the
ES becomes the SE (12). The ES is strictly proper relative to Pα(Rd) (Székely, 2003; Gneiting and Raftery,
2007) and is suited for ensemble forecasts (Gneiting et al., 2008). Moreover, the parameter α gives some
flexibility: a small value of α can be chosen and still lead to a strictly proper scoring rule, for example, when
higher-order moments are ill-defined. The discrimination ability of the ES has been investigated in numerous
studies (see, e.g., Pinson and Girard 2012; Pinson and Tastu 2013; Scheuerer and Hamill 2015). Pinson and
Girard (2012) studied the ability of the ES to discriminate among rival sets of scenarios (i.e., forecasts) of
wind power generation. In the case of bivariate Gaussian processes, Pinson and Tastu (2013) illustrated that
the ES appears to be more sensitive to misspecifications of the mean rather than misspecifications of the
variance or dependence structure. The lack of sensitivity to misspecifications of the dependence structure
has been confirmed in Scheuerer and Hamill (2015) using multivariate Gaussian random vectors of higher
dimension. Moreover, the discriminatory power of the ES deteriorates in higher dimensions (Pinson and
Tastu, 2013).
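For an m-member ensemble forecast, the expectations in (13) are commonly replaced by ensemble averages. The sketch below implements this plug-in estimator in pure Python; the function name is illustrative.

```python
def energy_score(ensemble, y, alpha=1.0):
    # Plug-in estimator of (13) from an ensemble {x_1, ..., x_m}:
    # mean_i ||x_i - y||^alpha - 0.5 * mean_{i,j} ||x_i - x_j||^alpha
    m = len(ensemble)

    def dist(u, v):
        # Euclidean norm of u - v
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    t1 = sum(dist(x, y) ** alpha for x in ensemble) / m
    t2 = sum(dist(xi, xj) ** alpha for xi in ensemble for xj in ensemble) / (m * m)
    return t1 - 0.5 * t2
```

Consistent with the limiting case noted above, for α = 2 this plug-in estimator reduces to the squared error of the ensemble mean.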
To overcome the discriminatory limitation of the ES, Scheuerer and Hamill (2015) proposed the variogram
score (VS), a score targeting the verification of the dependence structure. The VS of order p is defined as
VSp(F,y) = ∑_{i,j=1}^{d} wij (EF[|Xi − Xj|^p] − |yi − yj|^p)²,    (14)
where Xi is the i-th component of the random vector X following F, wij are nonnegative weights and p > 0.
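As with the ES, the expectation in (14) can be approximated by an ensemble average. A minimal plug-in sketch, with uniform weights by default and an illustrative function name:

```python
def variogram_score(ensemble, y, p=0.5, weights=None):
    # Plug-in estimator of (14):
    # sum_{i,j} w_ij (mean_k |x_k[i] - x_k[j]|^p - |y[i] - y[j]|^p)^2
    d, m = len(y), len(ensemble)
    if weights is None:
        weights = [[1.0] * d for _ in range(d)]  # uniform weights
    score = 0.0
    for i in range(d):
        for j in range(d):
            vf = sum(abs(x[i] - x[j]) ** p for x in ensemble) / m
            score += weights[i][j] * (vf - abs(y[i] - y[j]) ** p) ** 2
    return score
```

Note that y enters only through the pairwise differences yi − yj, so adding the same constant to every component of y leaves the score unchanged.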
The variogram score capitalizes on the variogram, a tool used in spatial statistics to assess the dependence
structure. Since it depends on y only through the differences yi − yj, the VS cannot detect a bias that is
equal across all components. The VS of order p is proper relative to