example aims to clarify the notion of interpretability and presents notions that will be reused in the following |
examples. The verification of marginals, along with other simple quantities, is usually one of the first steps |
of any multivariate forecast verification process. |
Observations follow the model of (20) and multiple competing forecasts are considered:
- the ideal forecast is the Gaussian distribution generating the observations and is used as a reference;
- the biased forecast is a Gaussian predictive distribution with the same covariance structure as the observations but a different mean E[Fbias(s)] = c = 0.255;
- the overdispersed forecast and the underdispersed forecast are Gaussian predictive distributions from the same model as the observations except for an overestimation (σ = 1.4) and an underestimation (σ = 2/3) of the variance, respectively;
- the location-scale Student forecast has marginals following location-scale Student-t distributions with parameters µ = 0, df = 5, and τ such that the standard deviation is 0.745, with the same covariance structure as in (20).
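As a hedged illustration, the five competing marginal predictive distributions can be written down as `scipy.stats` frozen distributions. The sketch below assumes the true marginal is N(0, 1) (the full spatial model (20) and its covariance structure are not reproduced, and the variable names are ours):

```python
# Sketch of the five competing marginal forecasts (marginals only).
# Assumption: the true marginal is N(0, 1); model (20)'s covariance is omitted.
from scipy import stats

ideal = stats.norm(loc=0.0, scale=1.0)           # generates the observations
biased = stats.norm(loc=0.255, scale=1.0)        # mean shifted by c = 0.255
overdispersed = stats.norm(loc=0.0, scale=1.4)   # sigma overestimated
underdispersed = stats.norm(loc=0.0, scale=2/3)  # sigma underestimated
# location-scale Student-t with df = 5, scaled so that its sd is 0.745:
# sd = tau * sqrt(df / (df - 2))  =>  tau = 0.745 / sqrt(5/3)
tau = 0.745 / (5 / 3) ** 0.5
student = stats.t(df=5, loc=0.0, scale=tau)

print(round(student.std(), 3))  # -> 0.745
```

Note how the Student forecast's scale τ is backed out from the target standard deviation; the specific values (0.255, 1.4, 2/3, 0.745) are those given above.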
In order to compare the predictive performance of forecasts, we use scoring rules constructed by aggregating univariate scoring rules. Here, the aggregation is done with uniform weights since there is no prior
knowledge on the locations. The univariate scoring rules considered are the continuous ranked probability |
score (CRPS), the Brier score (BS), the quantile score (QS), the squared error (SE) and the Dawid-Sebastiani |
score (DSS). Figure 1a compares five different forecasts based on their expected CRPS. It can be seen that |
all forecasts except for the ideal one have similar expected values and no sub-efficient forecast is significantly |
better than the others. In order to gain more insight into the predictive performance of the forecast, it is |
necessary to use other scoring rules. In practice, the distribution is unknown; thus, it is impossible to know |
if a forecast is efficient; it is only possible to provide a ranking linked to the closeness of the forecast with |
respect to the observations. The definition of closeness depends on the scoring rule used: for example, the |
CRPS defines closeness in terms of the integrated quadratic distance between the two cumulative distribution
functions (see, e.g., Thorarinsdottir and Schuhen 2018). |
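For Gaussian forecasts this notion of closeness can be made concrete: the CRPS admits a standard closed-form expression. The sketch below (function name ours) evaluates it with the standard library only:

```python
# Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against observation y.
# A minimal sketch of the pointwise score that is then aggregated over locations.
import math

def crps_normal(mu: float, sigma: float, y: float) -> float:
    """CRPS of the forecast N(mu, sigma^2) for a scalar observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# Even a central hit carries a cost proportional to sigma:
print(round(crps_normal(0.0, 1.0, 0.0), 4))  # -> 0.2337
```

A bias shifts z away from zero and increases the score, which is why the biased forecast is penalized by the aggregated CRPS in Figure 1a.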
If the quantity of interest is the value of a quantile of a certain level α, the aggregated QS is an appropriate |
scoring rule. Figure 1b shows the expected aggregated QS for three different levels: α = 0.5, α = 0.75
and α = 0.95. The level α = 0.5 is associated with the prediction of the median and, since all the forecasts are
symmetric and only the biased forecast is not centered on zero, the other forecasts are equally the best and |
1https://github.com/pic-romain/aggregation-transformation |
Figure 1: Expectation of aggregated univariate scoring rules: (a) the CRPS, (b) the quantile score, (c) the
Brier score, and (d) the squared error and the Dawid-Sebastiani score, for the ideal forecast (light violet), a
biased forecast (orange), an under-dispersed forecast (lighter blue), an over-dispersed forecast (darker blue)
and a location-scale Student forecast (green). More details are available in the main text.
efficient forecasts. If the third quartile is of interest (α = 0.75), the location-scale Student forecast appears
to be significantly the best (among the non-ideal forecasts). For the higher level of α = 0.95, the biased forecast is
significantly the best: its bias error seems to be compensated by its correct prediction of the variance.
Depending on the level of interest, the best forecast varies; the only forecast that would appear to be the |
best regardless of the level α is the ideal forecast, as implied by (8). |
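Concretely, the QS of a single predicted α-quantile is the pinball loss. The sketch below (function name ours) shows the asymmetry that lets an upward-biased forecast win at high levels α:

```python
# Minimal sketch of the quantile (pinball) score behind Figure 1b.
# The aggregated QS averages this loss over locations and observations.

def quantile_score(q: float, y: float, alpha: float) -> float:
    """Pinball loss of a predicted alpha-quantile q for an observation y."""
    if y < q:
        return (1 - alpha) * (q - y)
    return alpha * (y - q)

# Asymmetry at alpha = 0.95: under-prediction (y above q) costs 19x more
# than an over-prediction of the same magnitude.
print(round(quantile_score(0.0, 1.0, 0.95), 2))   # -> 0.95
print(round(quantile_score(0.0, -1.0, 0.95), 2))  # -> 0.05
```

At α = 0.5 the loss is symmetric, which matches the observation above that all centered forecasts tie for the median.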
If the quantity of interest is the exceedance of a threshold t at each location, then the aggregated BS is
an interesting scoring rule. Figure 1c shows the expectation of aggregated BS for the different forecasts and |
for two different thresholds (t = 0.5 and t = 1). Among the non-ideal forecasts, there seems to be a clearer |
ranking than for the CRPS. The overdispersed forecast is significantly the best regarding the prediction |
of the exceedance of the threshold t = 0.5 and the biased forecast is significantly the best regarding the |
exceedance of t = 1. As for the aggregated quantile score, the best forecast depends on the threshold t |
considered and the only forecast that is the best regardless of the threshold t is the ideal one (see Eq. (7)). |
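The threshold-exceedance Brier score compares the forecast probability of exceedance with the binary outcome. A minimal sketch (function names ours), using a Gaussian forecast for the exceedance probability:

```python
# Sketch of the threshold-exceedance Brier score used in Figure 1c.
import math

def brier_score(p_exceed: float, y: float, t: float) -> float:
    """Squared error of the predicted exceedance probability P(Y > t)."""
    outcome = 1.0 if y > t else 0.0
    return (p_exceed - outcome) ** 2

def normal_exceedance(mu: float, sigma: float, t: float) -> float:
    """P(Y > t) under a Gaussian forecast N(mu, sigma^2)."""
    return 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2)))

# An overdispersed forecast (sigma = 1.4) puts more mass beyond t = 1
# than the ideal N(0, 1) forecast does:
print(round(normal_exceedance(0.0, 1.4, 1.0), 3))
print(round(normal_exceedance(0.0, 1.0, 1.0), 3))
```

This tail-mass effect is one mechanism by which the ranking of forecasts can flip between thresholds, as seen between t = 0.5 and t = 1 in Figure 1c.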
If the moments are of interest, the aggregated SE discriminates the first moment (i.e., the mean) and the |
aggregated DSS discriminates the first two moments (i.e., the mean and the variance). Figure 1d presents the |
expected values of these scoring rules for the different forecasts considered in this example. The aggregated |
SEs of all forecasts, except the biased forecast, are equal since they have the same (correct) marginal means. |
The aggregated DSS presents the biased forecast as significantly the best one (among non-ideal). This is |
caused by the combined discrimination of the first two moments of the Dawid-Sebastiani score (see Eq. (9) |
and Appendix A). |
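A minimal sketch (our own function names) of the univariate SE and DSS makes the contrast explicit: the SE depends on the predictive mean only, while the DSS (cf. Eq. (9)) also penalizes a misspecified predictive variance.

```python
# SE sees only the mean; DSS adds a variance term that breaks ties between
# forecasts with identical (correct) means but different dispersions.
import math

def squared_error(mu: float, y: float) -> float:
    """SE of a forecast with predictive mean mu."""
    return (y - mu) ** 2

def dawid_sebastiani(mu: float, sigma: float, y: float) -> float:
    """DSS of a forecast with predictive mean mu and standard deviation sigma."""
    return ((y - mu) / sigma) ** 2 + 2.0 * math.log(sigma)

y = 0.8  # an arbitrary observation
se_ideal = squared_error(0.0, y)           # ideal-like forecast
se_over = squared_error(0.0, y)            # overdispersed: same mean, same SE
dss_ideal = dawid_sebastiani(0.0, 1.0, y)
dss_over = dawid_sebastiani(0.0, 1.4, y)   # variance term differs
print(se_ideal == se_over, dss_ideal != dss_over)  # -> True True
```

This is exactly the behavior reported above: equal aggregated SEs for all correctly centered forecasts, but distinct aggregated DSS values.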
5.2 Multivariate scores over patches |
This second numerical experiment focuses on the prediction of the dependence structure. Observations are sampled from the model of Eq. (20) and we compare forecasts that differ only in their dependence structure through misspecification of the range parameter λ and the smoothness parameter β:
- the ideal forecast is the Gaussian distribution generating the observations;
- the small-range forecast and the large-range forecast are Gaussian predictive distributions from the same model (20) as the observations except for an underestimation (λ = 1) and an overestimation (λ = 5), respectively, of the range;
- the under-smooth forecast and the over-smooth forecast are Gaussian predictive distributions from the same model as the observations except for an underestimation (β = 0.5) and an overestimation (β = 2), respectively, of the smoothness.
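A hedged sketch of such forecasts: we stand in for model (20) with a stationary Gaussian field on a grid whose covariance is powered-exponential, C(h) = exp(-(|h|/λ)^β); the paper's exact covariance family and true parameter values may differ, and λ = 3 below is purely illustrative.

```python
# Sampling Gaussian fields whose dependence structure is controlled by a
# range parameter lambda and a smoothness parameter beta.
# Stand-in covariance: C(h) = exp(-(|h| / lam) ** beta) on a 20 x 20 grid.
import numpy as np

def sample_field(lam: float, beta: float, n: int = 20, rng=None) -> np.ndarray:
    """One sample of a mean-zero Gaussian field on an n x n grid."""
    rng = np.random.default_rng(rng)
    xs, ys = np.meshgrid(np.arange(n), np.arange(n))
    pts = np.column_stack([xs.ravel(), ys.ravel()])
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    cov = np.exp(-((dists / lam) ** beta))
    # small jitter keeps the Cholesky factorization numerically stable
    chol = np.linalg.cholesky(cov + 1e-9 * np.eye(n * n))
    return (chol @ rng.standard_normal(n * n)).reshape(n, n)

obs = sample_field(lam=3.0, beta=1.0, rng=0)          # "true" model (illustrative)
small_range = sample_field(lam=1.0, beta=1.0, rng=0)  # lambda underestimated
print(obs.shape)  # -> (20, 20)
```

Sharing the seed isolates the effect of the covariance: the two fields are driven by the same noise but differ through their Cholesky factors, i.e., only in dependence structure.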
Since the forecasts differ only in their dependence structure, scoring rules acting on the 1-dimensional |
marginals would not be able to distinguish the ideal forecast from the others. We use the variogram score |
(VS) as a reference since it is known to discriminate misspecification of the dependence structure. We |
introduce the patched energy score, which results from the aggregation of the ES (with α = 1) over patches, |
defined as |
$$\mathrm{ES}_{\mathcal{P}, w_{\mathcal{P}}}(F, y) = \sum_{P \in \mathcal{P}} w_P \, \mathrm{ES}_1(F_P, y_P),$$
where $\mathcal{P}$ is a set of spatial patches, $w_P$ is the weight associated with a patch $P \in \mathcal{P}$ and $F_P$ is the
marginal of F over the patch P. In order to make the scoring more interpretable, only square patches of |
a given size s are considered and the weights wP are uniform (wP = 1/|P|). Moreover, we consider the |
aggregated CRPS and the ES since they are limiting cases of the patched ES for 1×1 patches and a single |
patch over the whole domain D, respectively. Additionally, we propose the p-variation score (pVS), which
is based on the p-variation transformation to focus on the discrimination of the regularity of the random
fields,
$$T_{p\text{-}\mathrm{var},s}(X) = \left| X_{s+(1,1)} - X_{s+(1,0)} - X_{s+(0,1)} + X_s \right|^p,$$
$$\mathrm{pVS}(F, y) = \sum_{s \in D^*} w_s \, \mathrm{SE}_{T_{p\text{-}\mathrm{var},s}}(F, y) = \sum_{s \in D^*} w_s \left( \mathbb{E}_F\!\left[ T_{p\text{-}\mathrm{var},s}(X) \right] - T_{p\text{-}\mathrm{var},s}(y) \right)^2,$$
where D∗ is the domain D restricted to grid points such that Tp−var,s is defined (i.e., D∗ = {1,...,19} × |
{1,...,19}). Note that in the literature on fractional random fields, the p-variation is an important characteristic of the roughness of a random field and is commonly used for estimation purposes; see Benassi et al. (2004), Basse-O'Connor et al. (2021) and the references therein.
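The definition above translates directly into code. The sketch below (function names ours) computes the transformation with array slicing and estimates the expectation E_F[T] by Monte Carlo over forecast samples, with uniform weights w_s:

```python
# p-variation transformation T_{p-var,s} and the resulting p-variation score.
import numpy as np

def p_variation(field: np.ndarray, p: float = 2.0) -> np.ndarray:
    """|X_{s+(1,1)} - X_{s+(1,0)} - X_{s+(0,1)} + X_s|^p on the restricted grid D*."""
    inc = field[1:, 1:] - field[1:, :-1] - field[:-1, 1:] + field[:-1, :-1]
    return np.abs(inc) ** p

def p_variation_score(forecast_samples: np.ndarray, y: np.ndarray,
                      p: float = 2.0) -> float:
    """pVS(F, y) with uniform weights w_s; E_F[T] estimated from samples."""
    expected_t = np.mean([p_variation(f, p) for f in forecast_samples], axis=0)
    return float(np.mean((expected_t - p_variation(y, p)) ** 2))

rng = np.random.default_rng(1)
y = rng.standard_normal((20, 20))            # placeholder observation field
samples = rng.standard_normal((50, 20, 20))  # placeholder forecast ensemble
print(p_variation(y).shape)  # -> (19, 19), i.e. D* = {1,...,19}^2
```

The second-order increment in `p_variation` is exactly the four-point difference of the definition; rougher fields produce larger increments, which is what lets the pVS discriminate the smoothness parameter β.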
Figure 2: Expectation of scoring rules focused on the dependence structure: (a) the variogram score, (b) the
p-variation score and (c) the patched energy score (and its limiting cases: the aggregated CRPS and the
energy score), for the ideal forecast (violet), the small-range forecast (lighter blue), the large-range forecast
(darker blue), the under-smooth forecast (lighter orange), and the over-smooth forecast (darker orange).