formation considers the orthogonal directions formed by the abscissa and ordinate axes and evaluates how
the variogram changes between these directions. The transformation yields nonpositive quantities: values
close to zero characterize isotropy, while more negative values indicate anisotropy of the
variograms in the directions and at the scale involved.
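To make the idea concrete, the directional comparison above can be sketched numerically. The following is a minimal sketch, assuming empirical semivariograms along the two grid axes and one possible nonpositive transformation (minus the absolute difference of the two directional variograms); the exact functional form used in the text is not specified here, so `anisotropy_index` is an illustrative assumption, not the authors' definition.

```python
import numpy as np

def directional_variogram(field, h, axis):
    """Empirical semivariogram at lag h along one grid axis."""
    z = np.asarray(field, dtype=float)
    a = z.take(range(h, z.shape[axis]), axis=axis)
    b = z.take(range(0, z.shape[axis] - h), axis=axis)
    return 0.5 * np.mean((a - b) ** 2)

def anisotropy_index(field, h):
    # One possible nonpositive transformation: zero for an isotropic field,
    # increasingly negative as the directional variograms diverge.
    # (Illustrative assumption; not necessarily the transformation in the text.)
    gx = directional_variogram(field, h, axis=0)
    gy = directional_variogram(field, h, axis=1)
    return -abs(gx - gy)
```

For a field varying identically in both directions the index is zero, whereas a field varying along only one axis yields a strictly negative value.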
4.3 Other transformations
Transformations other than projections or summary statistics can be used to target forecast characteristics.
For example, a transformation in the form of a change of coordinates or a change of scale (e.g., a logarithmic
scale) can be used to obtain proper scoring rules. We highlight two families of scoring rules that can be seen
as transformation-based scoring rules: wavelet-based scoring rules and threshold-weighted scoring rules.
Generally speaking, wavelet-based scoring rules are built via a projection of forecast and observation
fields onto a wavelet basis. Based on the wavelet coefficients, dimension reduction may be performed to
target specific characteristics such as the dependence structure or the location. The resulting coefficients of
the forecast fields are compared to the coefficients of the observation fields using scoring rules (e.g., squared
error (SE) or energy score (ES)). Wavelet transformations are (complex) transformations, and thus, the
associated scoring rules fall within the scope of Proposition 1. In particular, Buschow et al. (2019) used a
dimension reduction procedure yielding a mean spectrum and a scale spectrum and used scoring
rules to compare forecast and observation spectra. For example, the ES of the mean spectrum is used and
shows good discrimination ability when the scale structure is misspecified.
Note that Buschow et al. (2019) proposed two other wavelet-based scoring rules: one based on the earth
mover’s distance (EMD) of the scale histograms and one based on the distance in the scale histograms’ center
of mass. The EMD-based scoring rules are not proper since the EMD is not a proper scoring rule (Thorarins
dottir et al., 2013) and the so-called distance between centers of mass is not a distance but rather a difference
of position leading to an improper scoring rule. However, the ES-based scoring rules are proper and could be
derived from scale histograms. Despite their apparent complexity, wavelet transformations make it possible to target
interpretable characteristics such as the location (Buschow, 2022), the scale structure (Buschow et al., 2019;
Buschow and Friederichs, 2020) or the anisotropy (Buschow and Friederichs, 2021). The transformations
proposed for the deterministic forecast setting in most of these articles could serve as foundations for
future work aiming to propose wavelet-based proper scoring rules targeting the location, the scale structure
or the anisotropy.
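As a rough illustration of the dimension-reduction idea described above, a scale spectrum can be computed from a simple Haar decomposition. This is a minimal numpy-only sketch under the assumption of a dyadic-friendly (even-sized) field, not the exact procedure of Buschow et al. (2019).

```python
import numpy as np

def haar_scale_spectrum(field):
    """Mean detail energy per scale from an iterated 2D Haar decomposition.

    Assumes even-sized fields at each level. At every scale, the field is
    split into an approximation and three detail sub-bands (horizontal,
    vertical, diagonal); the spectrum entry is the mean detail energy.
    """
    f = np.asarray(field, dtype=float)
    spectrum = []
    while min(f.shape) >= 2:
        a = (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2]) / 2
        h = (f[0::2, 0::2] - f[1::2, 0::2] + f[0::2, 1::2] - f[1::2, 1::2]) / 2
        v = (f[0::2, 0::2] + f[1::2, 0::2] - f[0::2, 1::2] - f[1::2, 1::2]) / 2
        d = (f[0::2, 0::2] - f[1::2, 0::2] - f[0::2, 1::2] + f[1::2, 1::2]) / 2
        spectrum.append(np.mean(h**2 + v**2 + d**2))
        f = a  # coarsen and repeat at the next scale
    return np.array(spectrum)
```

The resulting low-dimensional spectra of forecast and observation fields can then be compared with a proper scoring rule such as the SE or the ES.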
As showcased in Heinrich-Mertsching et al. (2021) for a specific example and hinted at in Allen et al. (2024),
transformations can also be used to emphasize certain outputs. Threshold weighting is one of the three main
types of weighting that conserve the propriety of scoring rules. Its name comes from the fact that it corresponds
to a weighting over different thresholds in the case of the CRPS (7) (Gneiting, 2011). Recall that given a
conditionally negative definite kernel ρ, the associated kernel score Sρ (15) is proper relative to Pρ. Many
popular scoring rules are kernel scores such as the BS (5), the CRPS (6), the ES (13) and the VS (14). By
definition (Allen et al., 2023b, Definition 4), threshold-weighted kernel scores are constructed as
twSρ(F, y; v) = EF[ρ(v(X), v(y))] − (1/2) EF[ρ(v(X), v(X′))] − (1/2) ρ(v(y), v(y)) = Sρ(v(F), v(y)),
where v is the chaining function capturing how the emphasis is put on certain outputs. With this explicit
definition, it is obvious that threshold-weighted kernel scores are covered by the framework of Proposition 1.
It can be noted that Proposition 4 in Allen et al. (2023b) states that strict propriety of the kernel scoring
rule is preserved by the chaining function v if and only if v is injective. Weighted scoring rules make it
possible to emphasize particular outcomes: when studying extreme events, it is often of particular interest to
focus on values larger than a given threshold t, which can be achieved using the chaining function v(x) = max(x, t).
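Since the CRPS is the kernel score associated with ρ(x, x′) = |x − x′| (for which ρ(y, y) = 0), the threshold-weighted CRPS of an ensemble forecast follows directly from the definition above. A minimal sketch, assuming the chaining function v(x) = max(x, t); the function name is illustrative:

```python
import numpy as np

def tw_crps_ensemble(ens, y, t):
    """Threshold-weighted CRPS from an ensemble, using the chaining
    function v(x) = max(x, t) to emphasize values above threshold t.

    Implements twS(F, y; v) = E|v(X) - v(y)| - 0.5 E|v(X) - v(X')|;
    the -0.5 * rho(v(y), v(y)) term vanishes for the absolute-value kernel.
    """
    vx = np.maximum(np.asarray(ens, dtype=float), t)
    vy = max(float(y), t)
    term1 = np.mean(np.abs(vx - vy))
    term2 = 0.5 * np.mean(np.abs(vx[:, None] - vx[None, :]))
    return term1 - term2
```

With t below all ensemble members and the observation, this reduces to the unweighted ensemble CRPS; with t above all of them, the score is zero, reflecting that no emphasized outcome occurred.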
Threshold-weighted scoring rules have been used in verification procedures in the literature; we illustrate their
use through three different studies. Lerch and Thorarinsdottir (2013) aggregated the twCRPS across stations
to compare the upper-tail performance of different daily maximum wind speed forecasts. Chapman et al.
(2022) aggregated the threshold-weighted CRPS across locations to study the improvement of statistical
postprocessing techniques, the importance of predictors and the influence of the size of the training set
on the performance. Allen et al. (2023a) used threshold-weighted versions of the CRPS, the ES, and the
VS to compare the predictive performance of forecasts regarding heatwave severity; the scoring rules were
aggregated across stations. Readers may refer to Allen et al. (2023a) and Allen et al. (2023b) for insightful
reviews of weighted scoring rules in both univariate and multivariate settings.
5 Simulation study
This section provides simulated examples to showcase the different uses of the framework introduced in
Section 3 to construct interpretable proper scoring rules for multivariate forecasts. Four examples are
developed. Firstly, a setup where the emphasis is put on 1-marginal verification is proposed. This setup
serves as a means of recalling and showing the limitations of strictly proper scoring rules and the benefits
of interpretable scoring rules in a concrete setting. Secondly, a standard multivariate setup is studied
where popular multivariate scoring rules (i.e., VS and ES) are compared to a multivariate scoring rule
aggregated over patches and an aggregation-and-transformation-based scoring rule in their discrimination
ability regarding the dependence structure. Thirdly, a setup introducing anisotropy in both observations
and forecasts is introduced. The anisotropic score is constructed based on the transformation principle
with the goal of discriminating differences of anisotropy in the dependence structure between forecast and
observations. Fourthly, we propose a setup to test the sensitivity of scoring rules to the double-penalty effect
and we introduce scoring rules that can be built to be resilient to some manifestation of the double-penalty
effect.
In these four numerical experiments, the spatial field is observed and predicted on a regular 20×20 grid
D={1,...,20}×{1,...,20}. Observations are realizations of a Gaussian random field (G(s))s∈D with zero
mean and power-exponential covariance defined as
cov(G(s), G(s′)) = σ0² exp(−(∥s − s′∥ / λ0)^β0),  s, s′ ∈ D.  (20)
The parameters are taken equal to σ0 = 1, λ0 = 3 and β0 = 1.
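Realizations of such a field can be simulated by building the covariance matrix (20) over the grid and applying a Cholesky factor to standard normal draws. A minimal sketch with the stated parameters (σ0 = 1, λ0 = 3, β0 = 1); the small diagonal jitter is a numerical assumption:

```python
import numpy as np

def power_exp_cov(coords, sigma=1.0, lam=3.0, beta=1.0):
    # Power-exponential covariance (20): sigma^2 exp(-(||s - s'|| / lambda)^beta)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma**2 * np.exp(-((d / lam) ** beta))

# Regular 20x20 grid D = {1, ..., 20} x {1, ..., 20}
xs, ys = np.meshgrid(np.arange(1, 21), np.arange(1, 21))
coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
C = power_exp_cov(coords)

# One realization of the zero-mean Gaussian field via a Cholesky factor
# (tiny jitter on the diagonal for numerical stability)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(C)))
field = (L @ np.random.standard_normal(len(C))).reshape(20, 20)
```

Repeating the last line with fresh normal draws yields the independent observations used throughout the experiments.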
In each numerical experiment, we compare a few predictive distributions, including the distribution
generating the observations and others deviating from it in a specific way. These
different predictive distributions are evaluated with different scoring rules, and the aim is to illustrate the
discriminatory ability of the different scoring rules.
The simulation study uses 500 observations of the random field (G(s))s∈D. The scoring rules are computed
using exact formulas when possible (see Appendix E), and, when exact formulas are not available,
they are computed based on a sample of size 100 (i.e., ensemble forecasts) of the probabilistic forecast.
Estimated expectations over the 500 observations are computed and the experiment is repeated 10 times.
The corresponding results are represented by boxplots. The units of the scoring rules are rescaled by the
average expected score of the true distribution (i.e., the ideal forecast). The statistical significance of
the ranking between forecasts is tested using a Diebold-Mariano test (Diebold and Mariano, 1995). When
deemed necessary, statistical significance is mentioned for a confidence level of 95%.
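The Diebold-Mariano test compares the mean score difference of two forecasts against its sampling variability. A minimal sketch under the simplifying assumption of independent score differences (no autocorrelation correction, which the full test would include):

```python
import math
import numpy as np

def diebold_mariano(scores_a, scores_b):
    """Diebold-Mariano statistic and two-sided p-value for equal expected
    scores, assuming independent score differences (simplified sketch)."""
    d = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    # Studentized mean difference; asymptotically standard normal under H0
    stat = math.sqrt(len(d)) * d.mean() / d.std(ddof=1)
    pval = math.erfc(abs(stat) / math.sqrt(2))  # two-sided normal p-value
    return stat, pval
```

A significantly positive statistic indicates that forecast B attains lower (better) scores than forecast A on average; swapping the arguments flips the sign.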
The code used for the different numerical experiments is publicly available1.
5.1 Marginals
This first numerical experiment focuses on the prediction of the 1-dimensional marginal distributions and
the aggregation of univariate scoring rules. For simplicity, we consider only stationary random fields so that
the 1-marginal distribution is the same at all grid points. Although similar conclusions could be drawn
from a univariate framework (i.e., with independent 1-dimensional rather than spatial observations), this