This section presents the zoology of available verification tools and briefly summarizes their benefits and limitations. First, we define scoring rules and their key properties. Then, we recall univariate scoring rules, starting with ones derived from scoring functions used in point forecasting. Finally, we provide an overview of verification tools for multivariate forecasts.
2.1 Calibration, sharpness, and propriety
Gneiting et al. (2007) proposed a paradigm for the evaluation of probabilistic forecasts: "maximizing the sharpness of the predictive distributions subject to calibration". Calibration is the statistical compatibility between the forecast and the observations. Sharpness is the concentration of the forecast and is a property of the forecast itself. In other words, the paradigm aims at minimizing the uncertainty of the forecast given
that the forecast is statistically consistent with the observations. Tsyplakov (2011) argues that the notion of calibration in this paradigm is too vague, but that the paradigm holds once the definition of calibration is refined. This principle for the evaluation of probabilistic forecasts has reached a consensus in the field of probabilistic forecasting (see, e.g., Gneiting and Katzfuss 2014; Thorarinsdottir and Schuhen 2018). The paradigm proposed in Gneiting et al. (2007) is not the first mention of the link between sharpness and calibration: for example, Murphy and Winkler (1987) already noted the relation between refinement (i.e., sharpness) and calibration.
For univariate forecasts, multiple definitions of calibration are available depending on the setting. The most widely used is probabilistic calibration, which, broadly speaking, consists of computing the rank of each observation among samples of the forecast and checking that these ranks are uniformly distributed. If
the forecast is calibrated, observations should not be distinguishable from forecast samples, and thus, the distribution of their ranks should be uniform. Probabilistic calibration can be assessed by probability integral transform (PIT) histograms (Dawid, 1984) or rank histograms (Anderson, 1996; Talagrand et al., 1997) for ensemble forecasts when observations are stationary (i.e., their distribution is the same across time). The shape of the PIT or rank histogram gives information about the type of (potential) miscalibration: a triangular-shaped histogram suggests that the probabilistic forecast has a systematic bias, a ∪-shaped histogram suggests that the probabilistic forecast is under-dispersed, and a ∩-shaped histogram suggests that
the probabilistic forecast is over-dispersed. Moreover, probabilistic calibration implies that rank histograms should be uniform, but uniformity alone is not sufficient for calibration. For example, rank histograms should also be uniform
conditionally on different forecast scenarios (e.g., conditionally on the value of the observations available when the forecast is issued). Additionally, under certain hypotheses, calibration tools have been developed to account for real-world limitations such as serial dependence (Bröcker and Ben Bouallègue, 2020). Statistical tests have been developed to check the uniformity of rank histograms (Jolliffe and Primo, 2008). Readers interested in a more in-depth understanding of univariate forecast calibration are encouraged to consult Tsyplakov (2013, 2020).
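As an illustration, the following Python sketch builds a rank histogram for a deliberately under-dispersed synthetic ensemble; the Gaussian setup, sample sizes, and variable names are illustrative choices of ours, not taken from the references above. With these settings, the histogram should display the ∪ shape discussed above.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical setup: T forecast cases, each with an m-member ensemble.
T, m = 10_000, 20
observations = rng.normal(0.0, 1.0, size=T)
# Under-dispersed ensemble: member spread (0.5) below the true spread (1.0).
ensembles = rng.normal(0.0, 0.5, size=(T, m))

# Rank of each observation among its m ensemble members (1 to m + 1).
ranks = 1 + np.sum(ensembles < observations[:, None], axis=1)

plt.hist(ranks, bins=np.arange(0.5, m + 2), density=True, edgecolor="black")
plt.axhline(1 / (m + 1), linestyle="--", label="uniform (calibrated)")
plt.xlabel("rank of observation")
plt.legend()
plt.show()
```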
For multivariate forecasts, a popular approach relies on a similar principle: first, multivariate forecast samples are transformed into univariate quantities using so-called pre-rank functions, and then calibration is assessed with the techniques used in the univariate case (see, e.g., Gneiting et al. 2008). Pre-rank functions may be interpretable and allow targeting the calibration of specific aspects of the forecast, such as the dependence structure. Readers interested in the calibration of multivariate forecasts can refer to Allen et al. (2024) for a comprehensive review.
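To make the pre-rank idea concrete, here is a minimal Python sketch assuming ensemble forecasts stored as arrays; the mean pre-rank function below is only one possible choice (it targets the location of the forecast), and all function names are ours.

```python
import numpy as np

def prerank_mean(x: np.ndarray) -> float:
    # Pre-rank function reducing a d-dimensional vector to a scalar; here
    # the mean over components. Other choices (e.g., variogram-based ones)
    # target other aspects such as the dependence structure.
    return float(np.mean(x))

def multivariate_ranks(obs: np.ndarray, ens: np.ndarray, prerank=prerank_mean):
    # obs: (T, d) observations; ens: (T, m, d) ensemble forecasts.
    # Returns, for each case, the rank (1 to m + 1) of the pre-ranked
    # observation among the pre-ranked ensemble members.
    T, m, _ = ens.shape
    ranks = np.empty(T, dtype=int)
    for t in range(T):
        s_obs = prerank(obs[t])
        s_ens = np.array([prerank(ens[t, j]) for j in range(m)])
        ranks[t] = 1 + np.sum(s_ens < s_obs)
    return ranks
```

The resulting univariate ranks can then be inspected with the tools described above (rank histograms and uniformity tests).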
A scoring rule $S$ assigns a real-valued quantity $S(F,y)$ to a forecast-observation pair $(F,y)$, where $F \in \mathcal{F}$ is a probabilistic forecast and $y \in \mathbb{R}^d$ is an observation. In the negative-oriented convention, a scoring rule $S$ is proper relative to the class $\mathcal{F}$ if
$$\mathbb{E}_G[S(G,Y)] \leq \mathbb{E}_G[S(F,Y)] \tag{1}$$
for all $F, G \in \mathcal{F}$, where $\mathbb{E}_G[\,\cdot\,]$ denotes the expectation with respect to $Y \sim G$. In simple terms, a scoring rule is proper relative to a class of distributions if its expected value is minimal when the true distribution is
predicted, for any distribution within the class. Forecasts minimizing the expected scoring rule are said to be efficient and the other forecasts are said to be sub-efficient. Moreover, the scoring rule $S$ is strictly proper relative to the class $\mathcal{F}$ if the equality in (1) holds if and only if $F = G$. This ensures the characterization of the ideal forecast (i.e., there is a unique efficient forecast and it is the true distribution). Furthermore, proper scoring rules are powerful tools as they allow the assessment of calibration and sharpness simultaneously
(Winkler, 1977; Winkler et al., 1996). Sharpness can be assessed individually using the entropy associated with proper scoring rules, defined by $e_S(F) = \mathbb{E}_F[S(F,Y)]$. The sharper the forecast, the smaller its entropy.
Strictly proper scoring rules can also be used to infer the parameters of a parametric probabilistic forecast (see, e.g., Gneiting et al. 2005; Pacchiardi et al. 2024).
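As a toy numerical illustration of inequality (1), the following Python sketch uses the squared error recalled in Section 2.2, which depends on the forecast only through its mean; the Gaussian truth and all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Y ~ G = N(0, 1); the squared error SE(F, y) = (mu_F - y)^2 depends on
# the forecast F only through its mean mu_F.
y = rng.normal(0.0, 1.0, size=100_000)

def expected_se(mu_f: float) -> float:
    # Monte Carlo estimate of E_G[SE(F, Y)].
    return float(np.mean((mu_f - y) ** 2))

print(expected_se(0.0))  # forecast mean = true mean: approx. 1.0 (the entropy)
print(expected_se(0.5))  # biased forecast: approx. 1.25, larger as (1) requires
```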
Despite all the appealing properties of proper scoring rules, it is worth noting that they have some limitations. A proper scoring rule may have multiple efficient forecasts (i.e., forecasts attaining its minimal expected value) and, in the general setting, there is no guarantee that these are relevant. Moreover, strict propriety ensures that the efficient forecast is unique and that it is the ideal forecast (i.e., the true distribution); however, no guarantee is available for forecasts within the vicinity of the minimum in the general case. This
is particularly problematic since, in practice, the unavailability of the ideal distribution makes it impossible to know whether the minimum expected score is achieved. In the case of calibrated forecasts, the expected scoring rule is the entropy of the forecast, and the ranking of forecasts is thus linked to the information they carry (see Corollary 4 of Holzmann and Eulert 2014 for the complete result). These limitations may explain the plurality of scoring rules across application fields.
2.2 Univariate scoring rules
We recall classical univariate scoring rules to explain key concepts. Some of them will be useful for the framework for constructing multivariate scoring rules proposed in Section 3. Let $\mathcal{P}(E)$ denote the class of Borel probability measures on $E$. We consider a probabilistic forecast $F \in \mathcal{F} \subseteq \mathcal{P}(\mathbb{R})$, represented by its cumulative distribution function (cdf), and an observation $y \in \mathbb{R}$. When the probabilistic forecast $F$ admits a probability density function (pdf), it will be denoted $f$.
The simplest scoring rules can be derived from scoring functions used to assess point forecasts. The squared error (SE) is the most popular and is usually reported through its averaged value (the mean squared error; MSE) or the square root of its average (the root mean squared error; RMSE), which has the advantage of being expressed in the same units as the observations. As a scoring rule, the SE is expressed as
$$\mathrm{SE}(F,y) = (\mu_F - y)^2, \tag{2}$$
where $\mu_F$ denotes the mean of the predicted distribution $F$. The SE solely discriminates the mean of the forecast (see Appendix A); efficient forecasts for the SE are the ones matching the mean of the true distribution. The SE is proper relative to $\mathcal{P}_2(\mathbb{R})$, the class of Borel probability measures on $\mathbb{R}$ with a finite second moment (i.e., finite variance). Note that the SE cannot be strictly proper, as equality of means does not imply equality of distributions.
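To illustrate that the spread of the forecast is invisible to the SE, consider the following sketch; the two hypothetical ensembles share the same mean but differ widely in dispersion, and they receive identical scores.

```python
import numpy as np

def se(ens: np.ndarray, y: float) -> float:
    # Squared error, Eq. (2), with mu_F estimated by the ensemble mean.
    return float((np.mean(ens) - y) ** 2)

sharp = np.array([2.0, 2.1, 2.2, 1.9, 2.3])  # mean 2.1, small spread
wide = np.array([0.1, 4.1, 2.1, -1.9, 6.1])  # mean 2.1, large spread

print(se(sharp, y=2.0))  # approx. 0.01
print(se(wide, y=2.0))   # approx. 0.01 as well: only the mean matters
```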
Another well-known scoring rule is the absolute error (AE), defined by
$$\mathrm{AE}(F,y) = |\mathrm{med}(F) - y|, \tag{3}$$
where $\mathrm{med}(F)$ is the median of the predicted distribution $F$. The mean absolute error (MAE), the average of the absolute error, is the most common form of the AE, and it is also expressed in the same units as the observations. Efficient forecasts are those whose median equals the median of the true distribution. The AE is proper relative to $\mathcal{P}_1(\mathbb{R})$ but not strictly proper. Similarly, the quantile score (QS), also known as the pinball loss, is a scoring rule focusing on the quantile of level $\alpha$, defined by
$$\mathrm{QS}_\alpha(F,y) = (\mathbb{1}_{\{y \leq F^{-1}(\alpha)\}} - \alpha)(F^{-1}(\alpha) - y), \tag{4}$$
where $0 < \alpha < 1$ is a probability level and $F^{-1}(\alpha)$ is the predicted quantile of level $\alpha$. The case $\alpha = 0.5$ corresponds to the AE up to a factor of 2. The QS of level $\alpha$ is proper relative to $\mathcal{P}_1(\mathbb{R})$ but not strictly proper, since efficient forecasts are those correctly predicting the quantile of level $\alpha$ (see, e.g., Friederichs and Hense 2008).
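A minimal Python sketch of Eqs. (3) and (4) (the function names and numbers are ours), which also checks the factor-of-2 relation between the AE and the QS at $\alpha = 0.5$:

```python
def quantile_score(q_pred: float, y: float, alpha: float) -> float:
    # Pinball loss, Eq. (4): (1{y <= q} - alpha) * (q - y).
    return (float(y <= q_pred) - alpha) * (q_pred - y)

def absolute_error(med_pred: float, y: float) -> float:
    # Absolute error, Eq. (3).
    return abs(med_pred - y)

q, y = 1.5, 2.0
print(quantile_score(q, y, alpha=0.5))  # 0.25
print(absolute_error(q, y) / 2)         # 0.25: AE is twice the QS at alpha = 0.5
```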
Another summary statistic of interest is the exceedance of a threshold $t \in \mathbb{R}$. The Brier score (BS; Brier 1950) was initially introduced for binary predictions, but it also allows forecasts to be discriminated based on the predicted probability of exceeding a threshold $t$.
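In its threshold form, the BS compares the predicted probability of exceeding $t$ with the binary outcome; the sketch below estimates that probability from a hypothetical ensemble (the estimator and all names are our illustrative choices).

```python
import numpy as np

def brier_score(ens: np.ndarray, y: float, t: float) -> float:
    # Brier score for the exceedance event {y > t}: (p - 1{y > t})^2,
    # with the exceedance probability p estimated from the ensemble.
    p = float(np.mean(ens > t))
    return (p - float(y > t)) ** 2

ens = np.array([0.2, 0.8, 1.1, 1.5, 2.0])  # hypothetical 5-member ensemble
print(brier_score(ens, y=1.3, t=1.0))  # p = 0.6, outcome = 1: (0.6 - 1)^2 = 0.16
```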