Copula Conformal Prediction for Multi-step Time Series Forecasting
Source: https://arxiv.org/html/2212.03281
arXiv:2212.03281v4 [cs.LG], 18 Mar 2024. License: CC BY 4.0. Sophia Sun, University of California, San Diego (shs066@ucsd.edu) and Rose Yu, University of California, San Diego (roseyu@ucsd.edu).
Abstract
Accurate uncertainty measurement is a key step in building robust and reliable machine learning systems. Conformal prediction is a distribution-free uncertainty quantification framework popular for its ease of implementation, finite-sample coverage guarantees, and generality across underlying prediction algorithms. However, existing conformal prediction approaches for time series are limited to single-step prediction and do not account for temporal dependency. In this paper, we propose CopulaCPTS, a Copula Conformal Prediction algorithm for multivariate, multi-step Time Series forecasting. We prove that CopulaCPTS has a finite-sample validity guarantee. On four synthetic and real-world multivariate time series datasets, we show that CopulaCPTS produces more calibrated and efficient confidence intervals for multi-step prediction tasks than existing techniques. Our code is open-sourced at https://github.com/Rose-STL-Lab/CopulaCPTS.
1 Introduction
Deep learning models are becoming widely used in high-risk settings such as healthcare and transportation. In these settings, it is important that a model produces calibrated uncertainty to reflect its own confidence. Confidence regions are a common approach to quantifying prediction uncertainty (Khosravi et al., 2011). A $(1-\alpha)$-confidence region $\Gamma^{1-\alpha}$ for a random variable $y$ is valid if it contains the true value of $y$ with high probability: $\mathbb{P}[y \in \Gamma^{1-\alpha}] \ge 1-\alpha$. Note that one can make the confidence region infinitely large to satisfy validity; for the confidence region to be useful, we also want to minimize its area while remaining valid. This is known as the efficiency of the region.
Conformal prediction (CP) is a powerful framework to produce confidence regions with finite-sample guarantees of validity (Vovk et al., 2005; Lei et al., 2018). Furthermore, it makes no assumptions about the underlying prediction model or the data distribution. CPโs generality, simplicity, and statistical guarantees have made it popular for many real-world applications including time series prediction (Xu & Xie, 2021), drug discovery (Eklund et al., 2015) and safe robotics (Luo et al., 2021).
Figure 1:Illustration of the multi-step time series forecasting setting. (Left) The timesteps within a time series are temporally dependent, and (Right) the observations in the dataset are independent.
This paper considers the setting of multi-step time series forecasting from a set of independent sequences. Consider the problem of vehicle trajectory prediction, illustrated in Figure 1. Given a dataset of trajectories, the task is to predict a future trajectory of $k$ steps given its past trajectory of $t$ time steps. We assume that the trajectories are independent from each other. Within each trajectory, the time steps are temporally dependent.
There are many real-world tasks that present the same challenges as the example above, such as EEG forecasting (each patient is independent) and short-term weather forecasting (local meteorology histories are independent). They require predicting multiple time steps into the future, so it is desirable to have a "cone of uncertainty" that covers the entire course of the forecasts. Existing CP methods for time series data either only provide coverage guarantees for individual time steps (Gibbs & Candes, 2021; Xu & Xie, 2021) or produce confidence regions that are often too inefficient to be useful, especially over long horizons or in multivariate settings (Stankevičiūtė et al., 2021).
In this paper, we present a practical and effective conformal prediction algorithm for multi-step time series forecasting. We introduce CopulaCPTS, a Copula-based Conformal Prediction algorithm for multi-step Time Series forecasting. A copula is a multivariate cumulative distribution function that models the dependence between multiple random variables. By using copulas to model the uncertainty jointly over future time steps, we can shrink the confidence regions significantly while maintaining validity. Copulas have been used for conformal prediction before (Messoudi et al., 2021), but that work focuses on multi-target prediction in non-temporal settings and does not provide a validity proof.
In summary, our contributions are:

- We introduce CopulaCPTS, a general uncertainty quantification algorithm that can be applied to any multivariate multi-step forecaster.
- We prove that CopulaCPTS produces valid confidence regions for the full forecast horizon.
- CopulaCPTS produces significantly sharper and more calibrated uncertainty estimates than state-of-the-art baselines on two synthetic and two real-world benchmark datasets.
- We extend CopulaCPTS to obtain valid confidence intervals for time series forecasts of varying lengths.
2 Related Work

Deep Uncertainty Quantification for Time-Series Forecasting.
The two major paradigms of Uncertainty Quantification (UQ) methods for deep neural networks are Bayesian and Frequentist. Bayesian approaches estimate a distribution over the model parameters given data, and then marginalize these parameters to form output distributions via Markov Chain Monte Carlo (MCMC) sampling (Welling & Teh, 2011; Neal, 2012; Chen et al., 2014) or variational inference (VI) (Graves, 2011; Kingma et al., 2015; Blundell et al., 2015; Louizos & Welling, 2017). Wang et al. (2019); Wu et al. (2021) propose Bayesian Neural Networks (BNN) for UQ of spatiotemporal forecasts. In practice, Bayesian UQ can be computationally expensive and difficult to optimize, especially for larger networks (Lakshminarayanan et al., 2017; Zadrozny & Elkan, 2001). Furthermore, Bayesian methods do not provide any finite sample coverage guarantees. Therefore, UQ for deep neural network time series forecasts often adopts approximate Bayesian inference such as MC-dropout (Gal & Ghahramani, 2016b; Gal et al., 2017).
Frequentist UQ methods emphasize robustness against variations in the data. These approaches either rely on resampling the data or learning an interval bound to encompass the dataset. For time series forecasting UQ, approaches include ensemble methods such as bootstrap (Efron & Hastie, 2016; Alaa & Van Der Schaar, 2020) and jackknife methods (Kim et al., 2020; Alaa & Van Der Schaar, 2020); interval prediction methods include interval regression through proper scoring rules (Kivaranovic et al., 2020; Wu et al., 2021), and quantile regression (Takeuchi et al., 2006), with many recent advances for time series UQ (Tagasovska & Lopez-Paz, 2019; Gasthaus et al., 2019; Park et al., 2022; Kan et al., 2022). Many of the frequentist methods produce asymptotically valid confidence regions and can be categorized as distribution-free UQ techniques as they are (1) agnostic to the underlying model and (2) agnostic to data distribution.
Conformal Prediction.
Conformal prediction (CP) is an important member of distribution-free UQ methods; we refer readers to Angelopoulos & Bates (2021) for a comprehensive introduction and survey of CP. CP has become popular because of its simplicity, generality, theoretical soundness, and low computational cost. A key feature of CP is that under the exchangeability assumption, conformal methods guarantee validity in finite samples (Vovk et al., 2005).
Most relevant to our work is the recent endeavor to generalize CP to time series forecasting. According to Stankevičiūtė et al. (2021), there are two settings: data generated from (1) one single time series or (2) multiple independent time series. For the first setting, ACI (Gibbs & Candes, 2021) and EnbPI (Xu & Xie, 2021) developed CP algorithms that relax the exchangeability assumption while maintaining asymptotic validity via online learning (the former) and ensembling (the latter); Zaffran et al. (2022) further improves online adaptation. Sousa et al. (2022) combines EnbPI with conformal quantile regression (Romano et al., 2019) to model heteroscedastic time series. However, because these algorithms operate on one single time series, the validity guarantees do not cover the full horizon, posing issues for application in high-risk settings.
We focus on the setting where the data consist of many independent time series. Stankevičiūtė et al. (2021) shares the same setting as ours but provides only a univariate time series solution. We show that their method of applying a Bonferroni correction produces inefficient confidence regions, especially for multidimensional data or long prediction horizons. Messoudi et al. (2021) uses a copula function for multi-target CP on non-temporal data, creating box-like regions to account for the correlations between the labels. However, they do not provide a theoretical proof, and empirical results show that their regions are often invalid. This paper builds upon these works and presents a novel two-step algorithm with guaranteed multivariate multi-step coverage and efficient confidence regions.
3 Background

3.1 Inductive Conformal Prediction (ICP)
Let $\mathcal{D} = \{z_i = (x_i, y_i)\}_{i=1}^{n}$ be a dataset with input $x_i \in \mathcal{X}$ and output $y_i \in \mathcal{Y}$ such that each data point $z_i \in \mathcal{Z} := \mathcal{X} \times \mathcal{Y}$ is drawn i.i.d. from an unknown distribution $P$.
We briefly present the algorithm and theoretical results for conformal prediction, and refer readers to Angelopoulos & Bates (2021) for a thorough introduction. The goal of conformal prediction is to produce a valid confidence region (Definition 3.1) for any underlying prediction model.
Definition 3.1 (Validity).
Given a new data point $(x, y)$ and a desired confidence level $1-\alpha \in (0,1)$, the confidence region $\Gamma^{1-\alpha}(x)$ is a subset of $\mathcal{Y}$ containing probable outputs $\tilde{y} \in \mathcal{Y}$ given $x$. The region $\Gamma^{1-\alpha}$ is valid if

$$\mathbb{P}\big[y \in \Gamma^{1-\alpha}(x)\big] \ge 1-\alpha. \tag{1}$$
Conformal prediction splits the dataset into a proper training set $\mathcal{D}_{train}$ and a calibration set $\mathcal{D}_{cal}$. A prediction model $\hat{f}: \mathcal{X} \to \mathcal{Y}$ is trained on $\mathcal{D}_{train}$. We use a nonconformity score $A: \mathcal{Z}^{|\mathcal{D}_{train}|} \times \mathcal{Z} \to \mathbb{R}$ to quantify how well a calibration sample conforms to the training dataset. Typically, we choose a metric of disagreement between the prediction and the true label as the nonconformity score, such as the Euclidean distance:

$$A\big(\mathcal{D}_{train}, (x, y)\big) \overset{\text{e.g.}}{=} d\big(y, \hat{f}(x)\big) \overset{\text{e.g.}}{=} \big\|y - \hat{f}(x)\big\|_2. \tag{2}$$

For conciseness, we write $A(\mathcal{D}_{train}, (x_i, y_i))$ as $A(z_i)$ in the rest of the paper.
Let $\mathcal{S} = \{A(z_i)\}_{z_i \in \mathcal{D}_{cal}}$ denote the set of nonconformity scores of all samples in the calibration set $\mathcal{D}_{cal}$. We define a quantile function over the nonconformity scores $\mathcal{S}$ as:

$$Q(1-\alpha, \mathcal{S}) := \inf\Big\{s^* : \Big(\frac{1}{|\mathcal{S}|}\sum_{s_i \in \mathcal{S}} \mathbb{1}_{s_i \le s^*}\Big) \ge 1-\alpha\Big\}. \tag{3}$$
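As a concrete illustration, the quantile of Eq. (3) takes only a few lines of NumPy. This is a minimal sketch, not the authors' code; variable names such as `cal_scores` are our own, and exact finite-sample conformal methods use a slightly inflated $\lceil(n+1)(1-\alpha)\rceil/n$ quantile instead of the plain empirical one.

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Empirical (1 - alpha) quantile of calibration nonconformity scores:
    the smallest threshold s* such that at least a (1 - alpha) fraction of
    calibration scores fall at or below it (cf. Eq. (3))."""
    scores = np.sort(np.asarray(scores))
    n = len(scores)
    # index of the smallest order statistic covering >= (1 - alpha) mass
    k = int(np.ceil((1 - alpha) * n)) - 1
    return scores[min(max(k, 0), n - 1)]

# toy example: scores from a held-out calibration set
cal_scores = np.abs(np.random.default_rng(0).normal(size=1000))
s_star = conformal_quantile(cal_scores, alpha=0.1)
# the region {y : |y - f(x)| <= s_star} then covers ~90% of calibration points
coverage = np.mean(cal_scores <= s_star)
```

By exchangeability, a fresh test score falls below `s_star` with roughly the same probability, which is what gives the resulting region its coverage.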
Conformal prediction is guaranteed to produce valid confidence regions (Vovk et al., 2005) under the exchangeability assumption, defined as follows.
Definition 3.2 (Exchangeability).
In a dataset $\{z_i\}_{i=1}^{n}$ of size $n$, any of its $n!$ permutations are equally probable.
The procedure introduced above is known as inductive conformal prediction, as it splits the dataset into training and calibration sets to reduce computational load (Vovk et al., 2005; Lei & Wasserman, 2012). Our method is based on inductive CP but can easily be adapted to other CP variants.
3.2 Copula and Its Properties
A copula is a statistical construct that describes the dependency structure of a multivariate distribution. Copulas have also been used in generative models for multivariate time series (Salinas et al., 2019; Drouin et al., 2022). We can use copulas to capture the joint distribution over multiple future time steps. We briefly introduce the relevant notation and concepts.
Definition 3.3 (Copula).
Given a random vector $(X_1, \cdots, X_k)$, define the marginal cumulative distribution function (CDF) for each variable $X_h$, $h \in \{1, \ldots, k\}$, as

$$F_h(x) = \mathbb{P}[X_h \le x].$$

The copula of $(X_1, \cdots, X_k)$ is the joint CDF of $(F_1(X_1), \cdots, F_k(X_k))$, written as

$$C(u_1, \cdots, u_k) = \mathbb{P}\big[F_1(X_1) \le u_1, \cdots, F_k(X_k) \le u_k\big].$$
In other words, the copula function captures the dependency structure between the variables $X_h$; we can view a $k$-dimensional copula $C: [0,1]^k \to [0,1]$ as a CDF with uniform marginals, as illustrated in Figure 2. A fundamental result in the theory of copulas is Sklar's theorem.
Theorem 3.4 (Sklar's theorem).
Given a joint CDF $F(x_1, \cdots, x_k)$ and its marginals $F_1(x), \ldots, F_k(x)$, there exists a copula $C$ such that

$$F(x_1, \cdots, x_k) = C\big(F_1(x_1), \cdots, F_k(x_k)\big)$$

for all $x_i \in (-\infty, \infty)$, $i \in \{1, \ldots, k\}$.

Sklar's theorem states that for every multivariate distribution function, there exists a copula such that the distribution can be expressed through the copula and its univariate marginal distributions. When all the $X_i$ are independent, the copula is known as the product copula: $C(u_1, \cdots, u_k) = \prod_{i=1}^{k} u_i$.
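The empirical copula used later in the paper can be illustrated with a small sketch (our own toy example, not code from the paper): convert each sample to pseudo-observations via its marginal ranks, then count the fraction jointly below a point $u$. For independent marginals, the result should approach the product copula $u_1 u_2$.

```python
import numpy as np

def empirical_copula(samples, u):
    """Evaluate the empirical copula of `samples` (shape [n, k]) at u in [0,1]^k.

    Each sample is mapped to pseudo-observations via its marginal empirical
    CDFs (ranks / n); the copula value is the fraction jointly below u.
    """
    n = samples.shape[0]
    # rank of each observation within its own dimension (1-based)
    ranks = np.argsort(np.argsort(samples, axis=0), axis=0) + 1
    pseudo = ranks / n
    return np.mean(np.all(pseudo <= np.asarray(u), axis=1))

rng = np.random.default_rng(0)
indep = rng.uniform(size=(20000, 2))          # independent marginals
c_hat = empirical_copula(indep, [0.5, 0.5])
# for independent variables, C(0.5, 0.5) ~= 0.5 * 0.5 = 0.25 (product copula)
```

Replacing `indep` with correlated samples would push `c_hat` above 0.25, reflecting positive dependence between the coordinates.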
Figure 2: An example copula, where we express a multivariate Gaussian with correlation $\rho = 0.8$ using two univariate distributions and a copula function $C(u_1, u_2)$.

4 Copula Conformal Prediction for Time Series (CopulaCPTS)
UQ methods are evaluated on two properties: validity and efficiency. A model is valid when the predicted confidence is greater than or equal to the probability of events falling into the predicted range (Definition 3.1). The term calibration describes the case of equality in the validity condition. Efficiency, on the other hand, refers to the size of the confidence region. In practice, we want the measure of the confidence region (e.g. its area or length) to be as small as possible, given that the validity condition holds. CopulaCPTS improves the efficiency of confidence regions by modeling the dependency between time steps using a copula function.
Denote the time series dataset of size $n$ as $\mathcal{D} = \{z_i = (x^i_{1:t}, y^i_{1:k})\}_{i=1}^{n}$, where $x_{1:t} \in \mathbb{R}^{t \times d}$ is $t$ input steps and $y_{1:k} \in \mathbb{R}^{k \times d}$ is $k$ prediction steps, both with dimension $d$ at each step. Each data point $z_i$ is sampled i.i.d. from an unknown distribution $P$. In the multi-step forecasting setting, given a confidence level $1-\alpha$, the algorithm produces $k$ confidence regions for a test input $x^{n+1}_{1:t}$, denoted $[\Gamma^{1-\alpha}_1, \ldots, \Gamma^{1-\alpha}_k]$. We say the confidence regions are valid if all time steps in the forecast are jointly covered:

$$\mathbb{P}\big[\forall j \in \{1, \ldots, k\},\; y_j \in \Gamma^{1-\alpha}_j\big] \ge 1-\alpha. \tag{4}$$
In the following sections, we introduce CopulaCPTS, a conformal prediction algorithm that is both valid and efficient for multivariate multi-step time series forecasts.
4.1 Algorithm Details
The key insight of our algorithm is that we can model the joint probability of uncertainty for multiple predicted time steps with a copula, hence better capturing the confidence regions. We divide the calibration set $\mathcal{D}_{cal}$ into two subsets: $\mathcal{D}_{cal\text{-}1}$, which we use to estimate a cumulative distribution function (CDF) on the nonconformity score of each time step, and $\mathcal{D}_{cal\text{-}2}$, which we use to calibrate the copula.
The two calibration sets allow us to prove validity for both components of our algorithm. At the cost of using a subset of the data to calibrate a copula, our approach produces provably more efficient confidence regions than worst-case corrections such as the union bound of Stankevičiūtė et al. (2021), which is a lower bound for copulas (Appendix B.1), and more consistently valid regions than Messoudi et al. (2021) (Table 1).
Nonconformity of Multivariate Forecasts.
If the time series is multivariate, each target time step is $y_j \in \mathbb{R}^d$. Given $z = (x, y) \sim P$, let the nonconformity score at each time step $j = 1, \ldots, k$ be the L2 distance $s^i_j = A(z_i)_j \overset{\text{e.g.}}{=} \|y^i_j - \hat{f}(x^i)_j\|$, where $\hat{f}(x)$ is a forecasting model trained on $\mathcal{D}_{train}$. The confidence region $\Gamma^{1-\alpha}(x)$ is therefore a $d$-dimensional ball. We chose this metric for simplicity, but one can choose other metrics such as the Mahalanobis (Johnstone & Cox, 2021) or L1 (Messoudi et al., 2021) distance based on domain needs, and our algorithm remains valid.

For brevity, we write $\mathcal{S}1 = \{s^i\}_{z_i \in \mathcal{D}_{cal\text{-}1}}$ for the set of nonconformity scores of data in $\mathcal{D}_{cal\text{-}1}$ and $\mathcal{S}2 = \{s^i\}_{z_i \in \mathcal{D}_{cal\text{-}2}}$ for the scores of data in $\mathcal{D}_{cal\text{-}2}$. Subscript $j$ indexes the scores of a specific time step: $\mathcal{S}1_j = \{s^i_j\}_{z_i \in \mathcal{D}_{cal\text{-}1}}$, $\mathcal{S}2_j = \{s^i_j\}_{z_i \in \mathcal{D}_{cal\text{-}2}}$.
Calibrating the CDF on $\mathcal{D}_{cal\text{-}1}$.
We use $\mathcal{D}_{cal\text{-}1}$ to build conformal predictive distributions for each time step's nonconformity scores, which provide desirable validity properties (Vovk et al., 2017). The conformal cumulative distribution function is constructed as:

$$\hat{F}_j(s_j) := \frac{1}{|\mathcal{S}1_j| + 1}\Big(\tau + \sum_{s^i_j \in \mathcal{S}1_j} \mathbb{1}_{s^i_j < s_j}\Big), \quad \text{where } \tau \sim U(0, 1), \text{ for } j \in \{1, \ldots, k\}. \tag{5}$$

Copula Calibration on $\mathcal{D}_{cal\text{-}2}$.
Next, for every data point in $\mathcal{D}_{cal\text{-}2}$, we evaluate the cumulative probability of its nonconformity scores under the estimated conformal predictive distributions:

$$\mathcal{U} = \{\mathbf{u}^i\}_{i \in \mathcal{D}_{cal\text{-}2}}, \quad \mathbf{u}^i = (u^i_1, \ldots, u^i_k) = \big(\hat{F}_1(s^i_1), \ldots, \hat{F}_k(s^i_k)\big). \tag{6}$$

We adopt the empirical copula (Ruschendorf, 1976) for modeling and proof in this work. The empirical copula is a non-parametric estimate built directly from observations, and hence does not introduce any bias. For the joint distribution of $k$ time steps, we construct $C_{\text{empirical}}: [0,1]^k \to [0,1]$ as Eq. (7):

$$C_{\text{empirical}}(\mathbf{u}) = \frac{1}{|\mathcal{D}_{cal\text{-}2}| + 1} \sum_{i \in \mathcal{D}_{cal\text{-}2} \cup \{\boldsymbol{\infty}\}} \prod_{j=1}^{k} \mathbb{1}_{\mathbf{u}^i_j < \mathbf{u}_j}. \tag{7}$$

Here boldface $\boldsymbol{\infty}$ is a $k$-dimensional vector with each entry $\infty_j = \infty$ for $j = 1, \ldots, k$.
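The conformal CDF of Eq. (5) and the pseudo-observations of Eq. (6) can be sketched as follows. This is an illustrative sketch under our own toy setup (the score arrays and their shapes are invented), not the released implementation; $\tau$ randomizes ties, so repeated evaluations of the same score differ slightly.

```python
import numpy as np

def conformal_cdf(cal_scores_j, s, rng=None):
    """Conformal predictive CDF of Eq. (5) for one horizon step j.

    `cal_scores_j`: nonconformity scores of D_cal-1 at step j.
    `s`: the score to evaluate. tau ~ U(0, 1) smooths the discrete CDF."""
    rng = rng or np.random.default_rng()
    tau = rng.uniform()
    return (tau + np.sum(cal_scores_j < s)) / (len(cal_scores_j) + 1)

# Eq. (6): map each D_cal-2 score vector to its cumulative probabilities
rng = np.random.default_rng(0)
scores_cal1 = rng.exponential(size=(500, 3))   # |D_cal-1| = 500, k = 3 steps
scores_cal2 = rng.exponential(size=(200, 3))   # |D_cal-2| = 200
U = np.array([[conformal_cdf(scores_cal1[:, j], s_i[j], rng)
               for j in range(3)] for s_i in scores_cal2])
# U has shape (200, 3); these rows are the pseudo-observations fed to Eq. (7)
```

The rows of `U` play the role of $\mathbf{u}^i$ in Eq. (6) and are the inputs to the empirical copula of Eq. (7).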
To fulfill the full-horizon validity condition of Eq. (4), we need only find an appropriate $\mathbf{u}^*$ such that $C_{\text{empirical}}(\mathbf{u}^*) \ge 1-\alpha$:

$$\arg\min_{\mathbf{u}^*} \sum_{j=1}^{k} \mathbf{u}^*_j \quad \text{s.t. } C_{\text{empirical}}(\mathbf{u}^*) \ge 1-\alpha. \tag{8}$$

Note that $\mathbf{u}^*$ is not, and does not have to be, unique; any solution that satisfies the constraint in Eq. (8) guarantees multi-step validity (Appendix A). The minimization helps with efficiency, i.e., the sharpness of the confidence regions. Our implementation uses gradient descent for this optimization (see Appendix B.2 for details and Appendix C.5 for a study of its effectiveness). Lastly, we obtain $(s^*_1, \ldots, s^*_k)$ via $s^*_j = \hat{F}^{-1}_j(\mathbf{u}^*_j)$ and construct the confidence region for each time step $j \in \{1, \ldots, k\}$ as the set of all $y_j \in \mathbb{R}^d$ whose nonconformity score is less than $s^*_j$. Algorithm 1 summarizes the CopulaCPTS procedure.
The full proof of CopulaCPTS's validity (Theorem 4.1) can be found in Appendix A. Intuitively, CopulaCPTS performs conformal prediction twice: first calibrating the nonconformity scores of each time step with $\mathcal{D}_{cal\text{-}1}$, and then calibrating the copula with $\mathcal{D}_{cal\text{-}2}$.
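The two-stage calibration can be sketched end-to-end in NumPy. This is a simplified illustration under stated assumptions, not the authors' implementation: the per-step CDF is a deterministic (no $\tau$) variant of Eq. (5), and the Eq. (8) optimization is replaced by a binary search over a single shared threshold $q$ with $\mathbf{u}^* = (q, \ldots, q)$, rather than the gradient-based joint optimization of Appendix B.2.

```python
import numpy as np

def copula_cpts_calibrate(residuals, alpha):
    """Sketch of the CopulaCPTS calibration phase.

    `residuals`: array [n, k] of per-step nonconformity scores
    ||y_j - f(x)_j|| over the calibration set.
    Returns per-step radii (s*_1, ..., s*_k)."""
    n, k = residuals.shape
    S1, S2 = residuals[: n // 2], residuals[n // 2:]
    n1 = len(S1)
    S1_sorted = np.sort(S1, axis=0)

    # Eq. (6): cumulative probabilities of D_cal-2 scores under per-step
    # empirical CDFs from D_cal-1 (deterministic variant of Eq. (5))
    U = np.stack([np.searchsorted(S1_sorted[:, j], S2[:, j], side="right")
                  for j in range(k)], axis=1) / (n1 + 1)

    def C_emp(u):  # empirical copula of Eq. (7)
        return np.sum(np.all(U < u, axis=1)) / (len(S2) + 1)

    # Eq. (8), simplified: smallest shared q with C_emp(q, ..., q) >= 1 - alpha
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if C_emp(np.full(k, mid)) >= 1 - alpha:
            hi = mid
        else:
            lo = mid

    # invert each per-step CDF: s*_j ~= F_j^{-1}(q) as an empirical quantile
    idx = min(int(np.ceil(hi * (n1 + 1))) - 1, n1 - 1)
    return S1_sorted[max(idx, 0), :]

# usage on synthetic residuals: k = 10 horizon steps
rng = np.random.default_rng(0)
res = np.abs(rng.normal(size=(2000, 10)))
radii = copula_cpts_calibrate(res, alpha=0.1)
# fraction of series whose *entire* horizon falls inside the balls
joint_cov = np.mean(np.all(res <= radii, axis=1))
```

Because the copula accounts for dependence between steps, the per-step radii are looser than a single-step conformal radius but much tighter than a Bonferroni correction would require for the same joint coverage.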
Theorem 4.1 (Validity of CopulaCPTS).
CopulaCPTS (Algorithm 1) produces valid confidence regions for the entire forecast horizon, i.e.,

$$\mathbb{P}\big[\forall j \in \{1, \ldots, k\},\; y_j \in \Gamma^{1-\alpha}_j\big] \ge 1-\alpha.$$
Algorithm 1: Copula Conformal Time Series Prediction (CopulaCPTS)

Input: dataset $\mathcal{D}$, test inputs $\mathcal{D}_{test}$, target confidence level $1-\alpha$.
Output: $\Gamma^{1-\alpha}_1, \ldots, \Gamma^{1-\alpha}_k$ for each test input.

// Training
1. Randomly split dataset $\mathcal{D}$ into $\mathcal{D}_{train}$ and $\mathcal{D}_{cal}$.
2. Train a $k$-step forecasting model $\hat{f}$ on $\mathcal{D}_{train}$.
// Calibration
3. Randomly split $\mathcal{D}_{cal}$ into $\mathcal{D}_{cal\text{-}1}$ and $\mathcal{D}_{cal\text{-}2}$.
4. for $(x^i_{1:t}, y^i_{1:k}) \in \mathcal{D}_{cal\text{-}1} \cup \mathcal{D}_{cal\text{-}2}$ do
5.     $\hat{y}^i_{1:k} \leftarrow \hat{f}(x^i_{1:t})$
6.     $s^i_j \leftarrow \|y^i_j - \hat{y}^i_j\|$ for $j = 1, \ldots, k$
7. end for
8. $\hat{F}_1, \ldots, \hat{F}_k \leftarrow$ Eq. (5) on $\mathcal{D}_{cal\text{-}1}$
9. $C_{\text{empirical}}(\cdot) \leftarrow$ Eq. (7) on $\mathcal{D}_{cal\text{-}2}$
10. $\mathbf{u}^* \leftarrow$ Eq. (8)
11. $s^*_j \leftarrow \hat{F}^{-1}_j(\mathbf{u}^*_j)$ for $j = 1, \ldots, k$
// Prediction
12. for $x^i_{1:t} \in \mathcal{D}_{test}$ do
13.     $\hat{y}^i_{1:k} \leftarrow \hat{f}(x^i_{1:t})$
14.     $\Gamma^{1-\alpha}_j \leftarrow \{y : \|y - \hat{y}^i_j\| < s^*_j\}$ for $j = 1, \ldots, k$
15.     yield $\Gamma^{1-\alpha}_1, \ldots, \Gamma^{1-\alpha}_k$
16. end for

5 Experiments
In this section, we show that CopulaCPTS produces more calibrated and efficient confidence regions than existing methods on two synthetic datasets and two real-world datasets. We demonstrate that CopulaCPTS's advantage becomes more evident over longer prediction horizons, and we also show its effectiveness in the autoregressive prediction setting (both in Section 5.2).
All experiments in this paper split the calibration set into equal-sized $\mathcal{D}_{cal\text{-}1}$ and $\mathcal{D}_{cal\text{-}2}$. Although the split ratio does not significantly impact results when calibration data is ample, performance deteriorates when either of the two subsets has too little data.
Baselines.
We compare our method with four representative works from different paradigms of deep uncertainty quantification: the Bayesian-motivated Monte Carlo dropout RNN (MC-dropout) of Gal & Ghahramani (2016a), the frequentist blockwise jackknife RNN (BJRNN) of Alaa & Van Der Schaar (2020), the conformal forecasting RNN (CF-RNN) of Stankevičiūtė et al. (2021), and the multi-target copula algorithm without the two-step calibration (Copula) of Messoudi et al. (2021). We use the same underlying prediction model for the post-hoc uncertainty quantification methods BJRNN, CF-RNN, and CopulaCPTS. The MC-dropout RNN has the same architecture but is trained separately, as it requires an extra dropout step during training and inference.
Metrics.
We evaluate calibration and efficiency for each method. For calibration, we report the empirical coverage on the test set. Coverage should be as close to the desired confidence level $1-\alpha$ as possible, and is calculated as:

$$\text{Coverage}^{1-\alpha} = \mathbb{E}_{x,y \sim P}\,\mathbb{1}\big[y \in \Gamma^{1-\alpha}(x)\big] \approx \frac{1}{|\mathcal{D}_{test}|} \sum_{(x_i, y_i) \in \mathcal{D}_{test}} \mathbb{1}\big(y_i \in \Gamma^{1-\alpha}(x_i)\big).$$

For efficiency, we report the average area (2D) or volume (3D) of the confidence regions. This measure should be as small as possible while the regions remain valid (coverage stays above the specified confidence level). Writing $\|\Gamma\|$ for the area or volume of a region, it is calculated as:

$$\text{Area}^{1-\alpha} = \mathbb{E}_{x \sim \mathcal{X}}\big[\|\Gamma^{1-\alpha}(x)\|\big] \approx \frac{1}{|\mathcal{D}_{test}|} \sum_{x_i \in \mathcal{D}_{test}} \|\Gamma^{1-\alpha}(x_i)\|.$$
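The two metrics are straightforward to compute for ball-shaped regions. The sketch below is our own illustration (array shapes and names are invented); note that coverage here is the joint, whole-horizon notion of Eq. (4): a forecast counts as covered only if every one of its $k$ steps falls inside its ball.

```python
import numpy as np

def coverage_and_area(y_true, y_pred, radii):
    """Empirical joint coverage and mean region size for ball-shaped regions.

    y_true, y_pred: arrays [n, k, d]; radii: per-step ball radii [k].
    Only d = 2 (area) and d = 3 (volume) are handled in this sketch."""
    dist = np.linalg.norm(y_true - y_pred, axis=-1)   # [n, k]
    covered = np.all(dist <= radii, axis=1)            # joint, per sample
    d = y_true.shape[-1]
    size = np.pi * radii**2 if d == 2 else (4 / 3) * np.pi * radii**3
    return covered.mean(), size.mean()

rng = np.random.default_rng(0)
y_true = rng.normal(size=(100, 5, 2))                  # n=100, k=5, d=2
y_pred = y_true + 0.1 * rng.normal(size=(100, 5, 2))   # small forecast error
cov, area = coverage_and_area(y_true, y_pred, radii=np.full(5, 1.0))
```

With radius 1.0 and errors of scale 0.1, essentially every forecast is covered, and the mean area is simply $\pi r^2 = \pi$ per step.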
5.1 Synthetic Datasets

We first test the effectiveness of our method on two synthetic spatiotemporal datasets: interacting particle systems (Kipf et al., 2018) and drone trajectory following simulated with PythonRobotics (Sakai et al., 2018). For the particle simulation, we predict $y_{t+1:t+h}$ where $t = 35$, $h = 25$, and $y_t \in \mathbb{R}^2$; for the drone simulation, $t = 60$, $h = 10$, and $y_t \in \mathbb{R}^3$. To add randomness to the tasks, we add Gaussian noise with $\sigma = 0.01$ and $0.05$ to the dynamics of the particle simulation and $\sigma = 0.02$ to the drone dynamics. We generate 5000 samples for each dataset and split the data 45/45/10 into train, calibration, and test sets, respectively. For baselines that do not require calibration, the calibration split is used for training the model. See Appendix C.1 for forecaster model details.
We visualize the calibration and efficiency of the methods in Figure 3 for confidence levels $1-\alpha$ from 0.5 to 0.95. CopulaCPTS (the red lines) is more calibrated and efficient than the baseline methods, especially at high confidence levels (90% and 95%). On the harder tasks (the particle simulation with $\sigma = 0.05$ and drone trajectory prediction), MC-dropout is overconfident, whereas BJRNN and CF-RNN produce very large (hence inefficient) confidence regions. This behavior of CF-RNN is expected because it applies a Bonferroni correction to account for joint prediction over multiple time steps, which is an upper bound on copula functions. Numerical results for confidence level 90% are presented in Table 1. A qualitative comparison of confidence regions for the drone simulation can be found in Figure 9 in Appendix C.4.
Figure 3: Calibration (upper row) and efficiency (lower row) comparison at different $1-\alpha$ levels for the synthetic datasets. Shaded regions are ±2 standard deviations over 3 runs. For calibration, the curve should stay above the green dotted line (validity) and coincide with it as closely as possible (calibration); CopulaCPTS is more calibrated across significance levels. For efficiency, smaller is better; CopulaCPTS outperforms the baselines consistently. (MC-dropout produces invalid regions in the right two experiments, so we do not consider its efficiency.)

Table 1: Performance on synthetic and real-world datasets with target confidence $1-\alpha = 0.9$. Methods that are invalid (coverage below 90%) are greyed out. CopulaCPTS achieves a high level of calibration (coverage close to 90%) while producing more efficient confidence regions.
| Dataset | Metric | MC-dropout | BJRNN | CF-RNN | Copula | CopulaCPTS |
| --- | --- | --- | --- | --- | --- | --- |
| Particle Sim ($\sigma = .01$) | Cov | 91.5 ± 2.0 | 98.9 ± 0.2 | 97.3 ± 1.2 | 86.9 ± 1.9 | 91.3 ± 1.5 |
| | Area | 2.22 ± 0.05 | 2.24 ± 0.59 | 1.97 ± 0.4 | 0.63 ± 0.07 | 1.08 ± 0.14 |
| Particle Sim ($\sigma = .05$) | Cov | 46.1 ± 3.7 | 100.0 ± 0.0 | 94.5 ± 1.5 | 88.6 ± 1.7 | 90.6 ± 0.6 |
| | Area | 2.16 ± 0.08 | 12.13 ± 0.39 | 5.80 ± 0.52 | 4.67 ± 0.16 | 5.27 ± 1.02 |
| Drone Sim ($\sigma = .02$) | Cov | 84.5 ± 10.8 | 90.8 ± 2.8 | 91.6 ± 9.2 | 89.2 ± 1.3 | 90.0 ± 0.8 |
| | Vol | 9.64 ± 2.13 | 49.57 ± 3.77 | 32.18 ± 13.66 | 16.92 ± 8.9 | 17.12 ± 6.93 |
| COVID-19 Daily Cases | Cov | 19.1 ± 5.1 | 79.2 ± 30.8 | 95.4 ± 1.9 | 90.8 ± 1.4 | 90.5 ± 1.6 |
| | Area | 34.14 ± 0.84 | 823.3 ± 529.7 | 610.2 ± 96.0 | 414.42 ± 5.08 | 408.6 ± 65.8 |
| Argoverse Trajectory | Cov | 27.9 ± 3.1 | 92.6 ± 9.2 | 98.8 ± 1.9 | 89.7 ± 0.9 | 90.2 ± 0.1 |
| | Area | 127.6 ± 20.9 | 880.8 ± 156.2 | 396.9 ± 18.67 | 107.2 ± 9.56 | 126.8 ± 12.22 |

5.2 Real-world Datasets
COVID-19. We replicate the experiment setting of Stankevičiūtė et al. (2021) and predict new daily COVID-19 cases in regions of the UK. The models take 100 days of data as input and forecast 50 days into the future. We use 200 time series for training, 100 for calibration, and 80 for testing.
Vehicle trajectory prediction. The Argoverse autonomous vehicle motion forecasting dataset (Chang et al., 2019) is a widely used vehicle trajectory prediction benchmark. The task is to predict 3-second trajectories based on all vehicle motion in the past 2 seconds, sampled at 10 Hz. Because trajectory prediction is a challenging task, we use a state-of-the-art prediction algorithm, LaneGCN (Liang et al., 2020), as the underlying model for both CF-RNN and CopulaCPTS (details in Appendix C.1). Flexibility in the underlying forecasting model is an advantage of post-hoc UQ methods such as conformal prediction. For the model-dependent baselines MC-dropout and BJRNN, we have to train an RNN forecasting model from scratch for each method, which incurs additional computational cost.
CopulaCPTS is both more calibrated and more efficient than the baseline models on the real-world datasets (Table 1). The COVID-19 dataset demonstrates a potential failure case for our model when calibration data are scarce. Because there are only 100 calibration series, the CDF and copula estimates are more stochastic depending on the dataset split, resulting in 1 invalid case among 3 experiment trials. Even so, CopulaCPTS shows strong performance on average, remaining valid while reducing the confidence width by 33%. For the trajectory prediction task, learning the copula yields a 40% sharper confidence region while remaining valid at the 90% confidence level. We visualize two samples from each dataset in Figure 4. The importance of efficiency in these scenarios is clear: the confidence regions need to be narrow enough to be useful for decision making. Given the same underlying prediction model, CopulaCPTS produces a much more efficient region while still remaining valid.
Figure 4: Illustrations of 90% confidence regions given by CF-RNN (blue) and CopulaCPTS (orange) on two real-world datasets: COVID-19 forecasting (left two panels) and Argoverse (right two panels, at time steps 1, 10, 20, and 30). For the Argoverse data, the red dotted lines (ego agent) and blue dotted lines (other agents) are input to the underlying prediction model, and the red solid lines are the prediction output. Note that the confidence region produced by CF-RNN is uninformatively large, as it covers all the lanes; these examples illustrate the importance of efficiency. Overall, CopulaCPTS produces much more efficient confidence regions while maintaining valid coverage.

Comparison of models at different horizon lengths.
CopulaCPTS is designed to produce calibrated and efficient confidence regions for multi-step time series, and its advantage is more pronounced for long prediction horizons. Figure 5 shows the performance comparison over increasing time horizons on the particle dataset; see Table 3 of Appendix C for numerical results. CopulaCPTS achieves a 30% decrease in area at 20 time steps compared to CF-RNN, the best-performing baseline; the decrease exceeds 50% at 25 time steps. This experiment shows the significant improvement gained by using a copula to model the joint distribution of future time steps.
CopulaCPTS for Autoregressive Prediction.
The autoregressive extension of CopulaCPTS is described in detail in Appendix B.3. To provide preliminary evidence of its effectiveness, we present test results on the COVID-19 dataset. We train an RNN model with $k = 7$ and use it to autoregressively forecast the next 14 steps. Table 2 compares re-estimating the copula for each 7-step forecast against using a fixed copula calibrated on the first 7 steps. We also compare against a 14-step joint forecaster using CopulaCPTS. It is evident that daily pandemic case counts form a non-stationary time series, for which re-estimating the copula is necessary for validity.
Figure 5:CopulaCPTS remains more calibrated and efficient than baselines over increasing forecast horizons.
| Method | Coverage | Area |
| --- | --- | --- |
| AR re-estimate | 90.1 | 89.4 |
| AR fixed | 88.2 | 75.9 |
| Joint | 90.7 | 102.3 |
Table 2: Performance of autoregressive (AR) CopulaCPTS. Re-estimating the copula gives valid confidence regions over time and is more efficient than the joint CopulaCPTS forecast.

6 Conclusion and Discussion
In this paper, we present CopulaCPTS, a conformal prediction algorithm for multi-step time series prediction. CopulaCPTS significantly improves the calibration and efficiency of multi-step conformal confidence intervals by using copulas to model the joint distribution of the uncertainty at each time step. We prove that CopulaCPTS has a finite-sample validity guarantee over the entire prediction horizon. Our experiments show that CopulaCPTS produces confidence regions that are (1) valid and (2) more efficient than state-of-the-art UQ methods on all four benchmark datasets and over varying prediction horizons. The improvement is especially pronounced when the data dimension is high or the prediction horizon is long, cases where other methods are prone to being highly inefficient. Hence, we argue that our method is a practical and effective way to produce useful uncertainty quantification for machine learning forecasting models.
The limitations of our algorithm are as follows. Since CopulaCPTS requires two calibration steps, it is suitable only when abundant data are available for calibration. The validity proof relies on the empirical copula, so it does not extend to other learnable copula classes. Future work includes (1) improving the autoregressive extension of CopulaCPTS to achieve coverage over the whole horizon, and (2) extending CopulaCPTS to online decision-making settings.
Acknowledgement
This work was supported in part by Army-ECASE award W911NF-23-1-0231, the U.S. Department Of Energy, Office of Science under #DE-SC0022255, IARPA HAYSTAC Program, CDC-RFA-FT-23-0069, NSF Grants #2205093, #2146343, and #2134274.
We would like to thank Bo Zhao for her helpful comments on the paper.
References

- Ahmed Alaa and Mihaela van der Schaar. Frequentist uncertainty in recurrent neural networks via blockwise influence functions. In International Conference on Machine Learning, pp. 175–190. PMLR, 2020.
- Anastasios N. Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511, 2021.
- Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International Conference on Machine Learning, pp. 1613–1622. PMLR, 2015.
- Ming-Fang Chang, John W. Lambert, Patsorn Sangkloy, Jagjeet Singh, Slawomir Bak, Andrew Hartnett, De Wang, Peter Carr, Simon Lucey, Deva Ramanan, and James Hays. Argoverse: 3D tracking and forecasting with rich maps. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In International Conference on Machine Learning, pp. 1683–1691. PMLR, 2014.
- Alexandre Drouin, Étienne Marcotte, and Nicolas Chapados. TACTiS: Transformer-attentional copulas for time series. arXiv preprint arXiv:2202.03528, 2022.
- Bradley Efron and Trevor Hastie. Computer Age Statistical Inference, volume 5. Cambridge University Press, 2016.
- Martin Eklund, Ulf Norinder, Scott Boyer, and Lars Carlsson. The application of conformal prediction to the drug discovery process. Annals of Mathematics and Artificial Intelligence, 74(1):117–132, 2015.
- Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059. PMLR, 2016a.
- Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. Advances in Neural Information Processing Systems, 29, 2016b.
- Yarin Gal, Jiri Hron, and Alex Kendall. Concrete dropout. In NIPS, pp. 3581–3590, 2017.
- Jan Gasthaus, Konstantinos Benidis, Yuyang Wang, Syama Sundar Rangapuram, David Salinas, Valentin Flunkert, and Tim Januschowski. Probabilistic forecasting with spline quantile function RNNs. In AISTATS 22, pp. 1901–1910, 2019.
- Isaac Gibbs and Emmanuel Candes. Adaptive conformal inference under distribution shift. Advances in Neural Information Processing Systems, 34, 2021.
- Alex Graves. Practical variational inference for neural networks. Advances in Neural Information Processing Systems, 24, 2011.
- Chancellor Johnstone and Bruce Cox. Conformal uncertainty sets for robust optimization. In Conformal and Probabilistic Prediction and Applications, pp. 72–90. PMLR, 2021.
- Kelvin Kan, François-Xavier Aubet, Tim Januschowski, Youngsuk Park, Konstantinos Benidis, Lars Ruthotto, and Jan Gasthaus. Multivariate quantile function forecaster. In International Conference on Artificial Intelligence and Statistics, pp. 10603–10621. PMLR, 2022.
- Abbas Khosravi, Saeid Nahavandi, Doug Creighton, and Amir F. Atiya. Comprehensive review of neural network-based prediction intervals and new advances. IEEE Transactions on Neural Networks, 22(9):1341–1356, 2011.
- Byol Kim, Chen Xu, and Rina Barber. Predictive inference is free with the jackknife+-after-bootstrap. In Advances in Neural Information Processing Systems, volume 33, pp. 4138–4149. Curran Associates, Inc., 2020.
- Durk P. Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. Advances in Neural Information Processing Systems, 28, 2015.
- Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. arXiv preprint arXiv:1802.04687, 2018.
- Danijel Kivaranovic, Kory D. Johnson, and Hannes Leeb. Adaptive, distribution-free prediction intervals for deep networks. In AISTATS, pp. 4346–4356, 2020.
- Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in Neural Information Processing Systems, 30, 2017.
- Jing Lei and Larry Wasserman. Distribution free prediction bands. arXiv preprint arXiv:1203.5422, 2012.
- Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J. Tibshirani, and Larry Wasserman. Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523):1094–1111, 2018.
- Ming Liang, Bin Yang, Rui Hu, Yun Chen, Renjie Liao, Song Feng, and Raquel Urtasun. Learning lane graph representations for motion forecasting. In European Conference on Computer Vision, pp. 541–556. Springer, 2020.
- Christos Louizos and Max Welling. Multiplicative normalizing flows for variational Bayesian neural networks. In International Conference on Machine Learning, pp. 2218–2227. PMLR, 2017.
- Rachel Luo, Shengjia Zhao, Jonathan Kuck, Boris Ivanovic, Silvio Savarese, Edward Schmerling, and Marco Pavone. Sample-efficient safety assurances using conformal prediction. arXiv preprint arXiv:2109.14082, 2021.
- Soundouss Messoudi, Sébastien Destercke, and Sylvain Rousseau. Copula-based conformal prediction for multi-target regression. Pattern Recognition, 120:108101, 2021.
- Soundouss Messoudi, Sébastien Destercke, and Sylvain Rousseau. Ellipsoidal conformal inference for multi-target regression. In Conformal and Probabilistic Prediction with Applications, pp. 294–306. PMLR, 2022.
- Radford M. Neal. Bayesian Learning for Neural Networks, volume 118. Springer Science & Business Media, 2012.
- Youngsuk Park, Danielle Maddix, François-Xavier Aubet, Kelvin Kan, Jan Gasthaus, and Yuyang Wang. Learning quantile functions without quantile crossing for distribution-free time series forecasting. In International Conference on Artificial Intelligence and Statistics, pp. 8127–8150. PMLR, 2022.
- Yaniv Romano, Evan Patterson, and Emmanuel Candes. Conformalized quantile regression. Advances in Neural Information Processing Systems, 32, 2019.
- Ludger Rüschendorf. Asymptotic distributions of multivariate rank order statistics. The Annals of Statistics, pp. 912–923, 1976.
- Atsushi Sakai, Daniel Ingram, Joseph Dinius, Karan Chawla, Antonin Raffin, and Alexis Paques. PythonRobotics: a Python code collection of robotics algorithms. CoRR, abs/1808.10703, 2018.
- David Salinas, Michael Bohlke-Schneider, Laurent Callot, Roberto Medico, and Jan Gasthaus. High-dimensional multivariate forecasting with low-rank Gaussian copula processes. Advances in Neural Information Processing Systems, 32, 2019.
- Martim Sousa, Ana Maria Tomé, and José Moreira. A general framework for multi-step ahead adaptive conformal heteroscedastic time series forecasting. arXiv preprint arXiv:2207.14219, 2022.
- Kamilė Stankevičiūtė, Ahmed Alaa, and Mihaela van der Schaar. Conformal time-series forecasting. In Advances in Neural Information Processing Systems, 2021.
- Natasa Tagasovska and David Lopez-Paz. Single-model uncertainties for deep learning. In NeurIPS, pp. 6417–6428, 2019.
- Ichiro Takeuchi, Quoc Le, Timothy Sears, Alexander Smola, et al. Nonparametric quantile estimation. MIT Press, 2006.
- Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. Algorithmic Learning in a Random World. Springer Science & Business Media, 2005.
- Vladimir Vovk, Jieli Shen, Valery Manokhin, and Min-ge Xie. Nonparametric predictive distributions based on conformal prediction. In Conformal and Probabilistic Prediction and Applications, pp. 82–102. PMLR, 2017.
- Bin Wang, Jie Lu, Zheng Yan, Huaishao Luo, Tianrui Li, Yu Zheng, and Guangquan Zhang. Deep uncertainty quantification: A machine learning approach for weather forecasting. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2087–2095, 2019.
- Max Welling and Yee W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, pp. 681–688, 2011.
- Dongxia Wu, Liyao Gao, Matteo Chinazzi, Xinyue Xiong, Alessandro Vespignani, Yi-An Ma, and Rose Yu. Quantifying uncertainty in deep spatiotemporal forecasting. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1841–1851, 2021.
- Chen Xu and Yao Xie. Conformal prediction interval for dynamic time-series. In International Conference on Machine Learning. PMLR, 2021.
- Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In ICML, volume 1, pp. 609–616. Citeseer, 2001.
- Margaux Zaffran, Olivier Féron, Yannig Goude, Julie Josse, and Aymeric Dieuleveut. Adaptive conformal predictions for time series. In International Conference on Machine Learning, pp. 25834–25866. PMLR, 2022.

Appendix A Proof of Theorem 4.1

Theorem A.1 (Validity of CopulaCPTS).
The confidence region provided by CopulaCPTS (Algorithm 1) is valid, i.e.

$\mathbb{P}\left[\forall j \in \{1,\dots,k\},\ y_{t+j} \in \Gamma^{1-\alpha}_j\right] \geq 1-\alpha.$

Proof.

Define notations to be the same as in Section 4. Let $D = \{z_i = (x_i, y_i)\}_{i=1}^{n}$ be a dataset with input $x_i \in \mathbb{R}^{t \times d}$, a time series of length $t$, and output $y_i \in \mathbb{R}^{k \times d}$, a time series of length $k$. Each data sample (an entire time series, not a single time step) $z_i = (x_i, y_i)$ is drawn i.i.d. from an unknown distribution $B$. This means that any other sample drawn from $B$ is exchangeable with $D$. Dataset $D$ is divided into a training set $D_\text{train}$ and two calibration sets $D_\text{cal-1}$ and $D_\text{cal-2}$.

We have a nonconformity score function $A$ with prediction model $\hat f$ trained on $D_\text{train}$. For each data point $z_i = (x_i, y_i) \in D_\text{cal}$, we calculate the nonconformity score for each time step $j$, concatenating them into a vector $s^i$ of dimension $k$:

$s^i_j = A(z_i)_j = \|y^i_j - \hat f(x_i)_j\| \ \text{(e.g.)}, \quad j = 1, \dots, k \qquad (9)$

Let $\mathcal{S}1 = \{s^i\}_{z_i \in D_\text{cal-1}}$ be the set of nonconformity scores of data in $D_\text{cal-1}$ and $\mathcal{S}2 = \{s^i\}_{z_i \in D_\text{cal-2}}$ the set of nonconformity scores of data in $D_\text{cal-2}$. Subscript $j$ indexes the scores of a specific time step: $\mathcal{S}1_j = \{s^i_j\}_{z_i \in D_\text{cal-1}}$, $\mathcal{S}2_j = \{s^i_j\}_{z_i \in D_\text{cal-2}}$.

CDF estimation on $D_\text{cal-1}$. We use $D_\text{cal-1}$ to build conformal predictive distributions (CPDs) (Vovk et al., 2017) for each time step's nonconformity scores. The cumulative distribution function is constructed as:

$\hat F_j(s_j) := \frac{1}{|\mathcal{S}1_j| + 1} \sum_{s'_j \in \mathcal{S}1_j \cup \{\infty\}} \mathbb{1}[s'_j < s_j], \quad \text{for } j \in \{1, \dots, k\} \qquad (10)$

Lemma A.2 (Validity of CPD; Theorem 11 of Vovk et al. (2017)). Given a nonconformity score function $A$ and a data sample $z \sim B$, calculate the nonconformity score $s = A(z)$. Then the distribution $\hat F_j(\cdot)$ is valid in the sense that $\mathbb{P}_B[\hat F_j(s_j) \leq 1-\alpha] = 1-\alpha$ for any $0 < \alpha < 1$.

Copula calibration on $D_\text{cal-2}$. Next, for every data point in $D_\text{cal-2}$, we calculate

$\mathcal{U} = \{u^i\}_{z_i \in D_\text{cal-2}}, \quad u^i = (u^i_1, \dots, u^i_k) = (\hat F_1(s^i_1), \dots, \hat F_k(s^i_k)).$

Each $u^i$ can be seen as a multivariate nonconformity score for data sample $z_i$. We will now show that the empirical copula on $\mathcal{U}$ is a rank statistic, and hence the standard conformal prediction argument yields a finite-sample validity guarantee.

Definition A.3 (Vector partial order). Define a partial order $\preceq$ for $k$-dimensional vectors:

$u \preceq v \iff \forall j \in \{1, \dots, k\},\ u_j \leq v_j \qquad (11)$

i.e.

$u \preceq v \iff \prod_{j=1}^{k} \mathbb{1}[u_j \leq v_j] = 1 \qquad (12)$

Next, we define an empirical multivariate quantile function for $\mathcal{U}$, a set of $k$-dimensional vectors, based on the partial order defined in Eqn 11:

$\hat q(1-\alpha, \mathcal{U}) = \arg\min_{u^*} \sum_{j=1}^{k} u^*_j \quad \text{s.t.} \quad \frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} \mathbb{1}[u \preceq u^*] \geq 1-\alpha \qquad (13)$

The empirical copula formula in CopulaCPTS (Eqn 7 in Section 4.1) is the same as the expression inside the inf of $\hat q(1-\alpha, \mathcal{U} \cup \{\infty\})$. Therefore, finding $s^*_1, \dots, s^*_k$ by Equation 8 implies

$\hat q(1-\alpha, \mathcal{U} \cup \{\infty\}) = (\hat F_1(s^*_1), \dots, \hat F_k(s^*_k)).$

The rest of the proof follows that of Inductive Conformal Prediction (ICP) in Vovk et al. (2005). Given a test data sample $z_{n+1} = (x^{n+1}_{1:t}, y^{n+1}_{1:k}) \sim B$, we want to prove that the confidence regions $\Gamma^{1-\alpha}_1, \dots, \Gamma^{1-\alpha}_k$ output by CopulaCPTS satisfy

$\mathbb{P}\left[y_j \in \Gamma^{1-\alpha}_j\right] \geq 1-\alpha, \quad \forall j \in \{1, \dots, k\}.$

We first calculate

$u^{n+1}_j = \hat F_j(A(z_{n+1})_j) \quad \text{for } j \in \{1, \dots, k\}.$

Let $u^* = \hat q(1-\alpha, \mathcal{U} \cup \{\infty\})$, $u^* \in [0,1]^k$. An important observation for the conformal prediction proof is that if $u^* \preceq u^{n+1}$, then

$\hat q(1-\alpha, \mathcal{U} \cup \{\infty\}) = \hat q(1-\alpha, \mathcal{U} \cup \{u^{n+1}\}),$

i.e. the quantile remains unchanged. This fact can be rewritten as

$u^{n+1} \preceq \hat q(1-\alpha, \mathcal{U} \cup \{\infty\}) \iff u^{n+1} \preceq \hat q(1-\alpha, \mathcal{U} \cup \{u^{n+1}\}).$

The above describes the condition under which $u^{n+1}$ is among the $\lceil (1-\alpha)(|\mathcal{U}|+1) \rceil$ smallest of $\mathcal{U}$. By exchangeability, the rank of $u^{n+1}$ among $\mathcal{U}$ is uniformly distributed. Therefore,

$\mathbb{P}\left[u^{n+1} \preceq \hat q(1-\alpha, \mathcal{U} \cup \{\infty\})\right] = \frac{\lceil (1-\alpha)(|\mathcal{U}|+1) \rceil}{|\mathcal{U}|+1} \geq 1-\alpha.$

Hence we have

$\mathbb{P}\left[u^{n+1} \preceq \hat q(1-\alpha, \mathcal{U} \cup \{\infty\})\right] \geq 1-\alpha \qquad (14)$

Note again that:

- $u^* = \hat q(1-\alpha, \mathcal{U} \cup \{\infty\}) = (\hat F_1(s^*_1), \dots, \hat F_k(s^*_k))$
- $u^{n+1} = (\hat F_1(s^{n+1}_1), \dots, \hat F_k(s^{n+1}_k))$

The uncertainty regions are constructed as (Algorithm 1, line 17):

$\Gamma^{1-\alpha}_j = \{y : \|y - \hat y^{n+1}_j\| < s^*_j\} \qquad (15)$

By definition of $\preceq$, we have

$u^{n+1} \preceq u^* \qquad (16)$

$\overset{(11)}{\iff} \forall j \in \{1,\dots,k\},\ u^{n+1}_j \leq u^*_j \qquad (17)$

$\overset{\text{Lemma A.2}}{\implies} \forall j \in \{1,\dots,k\},\ s^{n+1}_j \leq s^*_j \qquad (18)$

$\overset{(15)}{\iff} \forall j \in \{1,\dots,k\},\ y_j \in \Gamma^{1-\alpha}_j \qquad (19)$

Combining Eqn 14 and Eqn 19, we have

$\mathbb{P}\left[\forall j \in \{1,\dots,k\},\ y_j \in \Gamma^{1-\alpha}_j\right] \geq \mathbb{P}\left[u^{n+1} \preceq \hat q(1-\alpha, \mathcal{U} \cup \{\infty\})\right] \geq 1-\alpha \qquad (20)$

∎
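To make Eq. 10 concrete, here is a minimal numpy sketch of the per-step conformal predictive distribution, together with an empirical check of Lemma A.2 on synthetic exponential scores. Function names and the score distribution are our own illustrative choices, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cpd(cal_scores):
    """CPD of Eq. 10: fraction of calibration scores (plus one +inf point)
    strictly below s."""
    cal = np.sort(np.asarray(cal_scores))
    n = len(cal)
    def f_hat(s):
        # number of calibration scores strictly less than each query score
        return np.searchsorted(cal, s, side="left") / (n + 1)
    return f_hat

# Empirical check of Lemma A.2: for a fresh score S from the same
# distribution, P[F_hat(S) <= 1 - alpha] should be close to 1 - alpha.
f_hat = make_cpd(rng.exponential(size=1000))
alpha = 0.1
coverage = float(np.mean(f_hat(rng.exponential(size=20000)) <= 1 - alpha))
```

Running this gives a `coverage` close to 0.9, matching the lemma.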
Appendix B Additional Algorithm Details

B.1 Upper and Lower Bounds for Copulas

To provide a better understanding of the properties of copulas, consider the Fréchet–Hoeffding bounds (Theorem B.1). In fact, the Fréchet–Hoeffding upper and lower bounds are themselves copulas. The lower bound is precisely the Bonferroni correction used in Stankevičiūtė et al. (2021); therefore, by estimating the copula more precisely instead of using the lower bound, we obtain a guaranteed efficiency improvement for the confidence region.
Theorem B.1 (The Fréchet–Hoeffding Bounds).

Consider a copula $C(u_1, \dots, u_k)$. Then

$\max\left\{1 - k + \sum_{j=1}^{k} u_j,\ 0\right\} \leq C(u_1, \dots, u_k) \leq \min\{u_1, \dots, u_k\}.$
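These bounds are easy to check numerically against an empirical copula. The construction below (rank-transformed correlated Gaussians as pseudo-uniform marginals) is purely illustrative and not the paper's calibration data; names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_copula(U, u):
    """C_hat(u): fraction of calibration points componentwise below u."""
    return float(np.mean(np.all(U <= u, axis=1)))

# Pseudo-uniform marginals via a rank transform of factor-correlated Gaussians.
n, k = 5000, 3
x = 0.7 * rng.normal(size=(n, 1)) + rng.normal(size=(n, k))  # shared latent factor
U = (np.argsort(np.argsort(x, axis=0), axis=0) + 1) / (n + 1)

u = np.array([0.8, 0.9, 0.7])
c_val = empirical_copula(U, u)
lower = max(1 - k + float(u.sum()), 0.0)  # Frechet-Hoeffding lower bound
upper = float(u.min())                    # Frechet-Hoeffding upper bound
```

Both inequalities hold for any dependence structure; positive dependence pushes `c_val` above the independence value toward the upper bound.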
B.2 Numerical optimization with SGD for search

We continue to use the notation defined in Appendix A. The inverse of the predictive distribution (Equation 10) can be written as follows, analogous to the empirical quantile function (Equation 3):

$\hat F_j^{-1}(p) := \inf\left\{ s_j : \frac{1}{|\mathcal{S}1_j| + 1} \sum_{s'_j \in \mathcal{S}1_j \cup \{\infty\}} \mathbb{1}[s'_j < s_j] \geq p \right\} \qquad (21)$

We find the optimal $s^*_j$ in Equation 8 and Algorithm 1 by minimizing the following loss:

$\ell(p_1, \dots, p_k) = \left| \frac{1}{|D_\text{cal-2}|} \sum_{z_i \in D_\text{cal-2}} \prod_{j=1}^{k} \mathbb{1}\left[s^i_j < \hat F_j^{-1}(p_j)\right] - (1-\alpha) \right|$

The indicator function is implemented as a sigmoid whose input is multiplied by a large constant, for differentiability. A small amount of L2 regularization is added to the loss to keep the searched scores as low as possible. We use the Adam optimizer and run gradient descent for 500 steps to obtain the final result. The optimization process to find $s^*$ typically takes a few seconds on CPU. For each run of our experiments, the calibration and prediction steps of CopulaCPTS combined took less than 1 minute on an Apple M1 CPU. Please refer to the CP class in the reference code for implementation details.
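The sigmoid-relaxed search can be sketched in a few lines of numpy. This is our own simplification of the procedure described above: plain gradient descent instead of Adam, a squared rather than absolute coverage residual, optimization directly over quantile levels $p_j$, and made-up hyperparameter names.

```python
import numpy as np

def sgd_copula_search(u_cal, alpha, temp=50.0, lr=0.05, steps=500, l2=1e-3):
    """Search per-step quantile levels p_j so that the relaxed joint coverage
    mean_i prod_j sigmoid(temp * (p_j - u_j^i)) matches 1 - alpha."""
    n, k = u_cal.shape
    p = np.full(k, 1.0 - alpha)                      # start at the marginal level
    for _ in range(steps):
        z = np.clip(temp * (p[None, :] - u_cal), -50.0, 50.0)
        sig = 1.0 / (1.0 + np.exp(-z))               # relaxed indicator 1[u_j^i < p_j]
        prod = sig.prod(axis=1)                      # soft joint coverage per sample
        resid = prod.mean() - (1.0 - alpha)          # loss = resid^2 + l2 * |p|^2
        # d(prod)/d(p_j) = prod * temp * (1 - sig_j), by the product rule
        dcov = (prod[:, None] * temp * (1.0 - sig)).mean(axis=0)
        p = np.clip(p - lr * (2.0 * resid * dcov + 2.0 * l2 * p), 0.0, 1.0)
    return p

rng = np.random.default_rng(0)
u_cal = rng.uniform(size=(2000, 3))                  # stand-in calibration u-scores
p_star = sgd_copula_search(u_cal, alpha=0.1)
coverage = float(np.mean(np.all(u_cal < p_star[None, :], axis=1)))
```

For independent uniform scores the search settles near $p_j \approx 0.9^{1/3}$, so the hard empirical joint coverage lands close to the 90% target.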
B.3 CopulaCPTS in autoregressive forecasting

Autoregressive forecasting is a common framework in time series forecasting. So far, we have considered forecasts for a predetermined number of time steps $k$. One can use a fixed-length model to forecast a variable horizon $k'$ autoregressively, feeding the model output back as part of the input. In the conformal prediction setting, we want not only to reuse the point forecasts autoregressively, but also to propagate the uncertainty estimates.

If the time series and its uncertainty are stationary (for example, additive Gaussian noise), the copula remains the same for any sliding window of $k$ steps, i.e. $C(u_1, \dots, u_k) = C(u_2, \dots, u_{k+1})$. Therefore, after finding $(u^*_1, \dots, u^*_k)$ such that $C(u^*_1, \dots, u^*_k) \geq 1-\alpha$, we can simply search for $u^*_{k+1}$ such that $C(u^*_2, \dots, u^*_k, u^*_{k+1}) \geq 1-\alpha$. The guarantee proven in Theorem 4.1 still holds for the new estimate. In this way, we can achieve the coverage guarantee over the entire autoregressive forecasting horizon.

On the other hand, if the time series is non-stationary, we need to fit copulas $C_1(u_1, \dots, u_k), C_2(u_2, \dots, u_{k+1}), \dots, C_{k'-k}(u_{k'-k}, \dots, u_{k'})$, one for each autoregressive prediction, which requires a calibration set with $\geq k'$ time steps. The $k'$-step autoregressive problem is then reduced to $k'-k$ multi-step forecasting problems that can each be solved by CopulaCPTS. It follows that each of the autoregressive predictions is valid. Appendix B.4 provides an example scenario where re-estimating the copula is necessary for validity.
B.4 Autoregressive prediction

In the context of this paper, to forecast autoregressively means: given input $x_{1:t}$ and a $k$-step forecasting model $\hat f$, perform the predictions

$\hat y_{t+1:t+k} = \hat f(x_{1:t})$

$\hat y_{t+2:t+k+1} = \hat f(x_{2:t}, \hat y_{t+1})$

$\cdots$

until all $k'$ time steps are predicted.
We now provide a toy scenario to illustrate when re-estimating the copula is necessary and improves validity. Consider a time series of three time steps $t_0, t_1, t_2$. The two scenarios are illustrated in Figure 6. In both scenarios, the mean and variance of every time step are 0 and 1, respectively. In scenario (a), $t_0 = t_1$, so their covariance is 1. The copula estimated on $t_0$ and $t_1$ is $C_{0:1}(F(t_0), F(t_1)) = F(t_0) = F(t_1)$. This copula will significantly underestimate the confidence region of $t_2$, whose covariance with $t_1$ is $-1$: in fact, the coverage of $C_{0:1}(F_1(t_1), F_2(t_2))$ is only 0.74. Scenario (b), on the other hand, is a stationary case where the copula of any two consecutive time steps remains the same, $C_0 = C_1$. In this case, applying $C_0$ directly in place of $C_1$ achieves precisely 90% coverage.
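The failure mode in scenario (a) is easy to reproduce numerically. The sketch below uses our own simplified construction (one-sided per-step thresholds, $t_0 = t_1$, and $t_2 = -t_1$ exactly), so the exact coverage differs from the 0.74 reported for the paper's setup, but the under-coverage is the same phenomenon.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
t1 = rng.normal(size=n)
t0 = t1.copy()      # covariance(t0, t1) = +1
t2 = -t1            # covariance(t1, t2) = -1

# Per-step 90% threshold calibrated on the comonotonic pair (t0, t1):
# reusing it on (t1, t2) under-covers because the dependence flipped sign.
q = np.quantile(t1, 0.9)
cov_01 = float(np.mean((t0 <= q) & (t1 <= q)))   # joint coverage on (t0, t1)
cov_12 = float(np.mean((t1 <= q) & (t2 <= q)))   # joint coverage on (t1, t2)
```

Here `cov_01` is about 0.90 while `cov_12` drops to about 0.80, showing that a copula calibrated on one window cannot be reused blindly under non-stationary dependence.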
Figure 6: Two scenarios to illustrate the autoregressive case. (a) Time steps 0 and 1 are positively correlated while steps 1 and 2 are negatively correlated. (b) Stationary time series where the time steps are uncorrelated.

Appendix C Experiment Details and Additional Results

C.1 Underlying forecasting models

Particle Dataset.
The underlying forecasting model for the particle experiments is a 1-layer LSTM network with embedding size 24. The hidden state is passed through a linear layer to forecast all time steps concurrently (the output has dimension $k \times d_y$). We train the model for 150 epochs with batch size 128. The hyperparameters of the network are selected through a model search by performance on a 5-fold cross-validation split of the dataset. The architecture and hyperparameters are shared by all baselines and CopulaCPTS in Table 1.
Drone.
For the drone trajectory forecasting task, we use the same LSTM forecasting network as the particle dataset, but with its hidden size increased to 128. We train the model for 500 epochs with batch size 128. The same architecture and hyperparameters are shared for all baselines and CopulaCPTS reported in Table 1.
Covid-19.
The COVID-19 dataset is downloaded directly from the official UK government website https://coronavirus.data.gov.uk/details/download by selecting region for area type and newCasesByPublishDate for metric. There are in total 380 regions and over 500 days of data, depending on when it is downloaded. We selected 150-day time series from the collection to construct our dataset.
The base forecasting model for the Covid-19 dataset is the same as the model for the synthetic datasets, with hidden size 128, trained for 150 epochs with batch size 128. The same architecture and hyperparameters are shared by all baselines and CopulaCPTS reported in Table 1.
Argoverse.
As highlighted in the main text, we utilize the state-of-the-art prediction algorithm LaneGCN (Liang et al., 2020) as the underlying forecaster for CF-RNN and Copula-RNN. We refer the readers to their paper and code base for model details. The RNN network used for MC-Dropout and BJRNN is an encoder-decoder network; both the encoder and the decoder contain an LSTM layer with encoding size 8 and hidden size 16. We chose this architecture because it is part of the official Argoverse baselines (https://github.com/jagjeet-singh/argoverse-forecasting) and demonstrates competitive performance.
C.2 Calibration and Efficiency chart for COVID-19
Figure 7 shows a comparison of calibration and efficiency for forecasting daily new COVID-19 cases.

Figure 7: Calibration and efficiency comparison at different $\alpha$ levels for COVID-19 daily forecasts. The copula methods (orange and red lines) are better calibrated (coinciding with the green dotted line) and sharper (lower width) compared to the baselines.

To see whether the daily fluctuations due to testing behavior disrupt other methods, we also ran the same experiment on weekly aggregated new-case forecasts. We take 14 weeks of data as input and output forecasts for the next 6 weeks. The results are illustrated in Figure 8. The weekly forecasting scenario gives us similar insights as the daily forecasts.
Figure 8: Covid Weekly Forecasts

C.3 Argoverse
The Argoverse autonomous vehicle dataset contains 205,942 samples, consisting of diverse driving scenarios from Miami and Pittsburgh. The data can be downloaded from the official Argoverse dataset website. We split 90/10 into a training set and a validation set of size 185,348 and 20,594, respectively. The official validation set of size 39,472 is used for testing and reporting performance. We preprocess the scenes to filter out incomplete trajectories and cap the number of vehicles modeled at 60; if a scene contains fewer than 60 cars, we insert dummy cars to keep the number consistent. For map information, we only include center lanes with lane directions as features. Similarly to vehicles, we introduce dummy lane nodes into each scene to keep the number of lanes consistently equal to 650.
C.4 Additional Experiment Results
We present in Figures 9 and 10 some qualitative results for uncertainty estimation.

To test how CopulaCPTS compares with the baselines on other base forecasters, we also include an encoder-decoder architecture with the same embedding size as the RNN models introduced in Appendix C.1 for each dataset. The results are presented in Table 3. We omit these results from the main text because we found that the encoder-decoder architecture did not bring significant improvement to time series forecasting UQ.

Table 4 compares model performance across different prediction horizons. We show that the advantage of our method is more pronounced for longer-horizon forecasts.
Figure 9: 99% confidence region produced by three methods for the drone dataset: (a) Copula-EncDec, (b) MC Dropout, (c) CF-RNN. Copula methods (a) produce a more consistent, expanding cone of uncertainty compared to MC-Dropout (b), and a sharper one compared to CF-RNN (c).

Figure 10: Illustrations of confidence regions given by CF-RNN (blue) and CopulaCPTS (orange) at time steps 0, 10, 20, and 30. Note that in order to achieve 90% coverage, the CF-RNN regions are larger than needed, especially in straight-lane cases like the middle two. Using copulas to couple time steps together results in a much smaller region with similarly good coverage.

Particle Simulation (.01)

| Method | Coverage (90%) | Area (90%) | Coverage (99%) | Area (99%) |
| --- | --- | --- | --- | --- |
| MC-dropout | 691.5 ± 2.0 | 2.22 ± 0.05 | 95.2 ± 1.4 | 3.16 ± 0.08 |
| BJRNN | 98.9 ± 0.2 | 2.24 ± 0.59 | 99.6 ± 0.3 | 2.75 ± 0.71 |
| CF-RNN | 97.1 ± 0.8 | 1.2 ± 0.21 | 99.3 ± 0.6 | 3.16 ± 0.86 |
| CF-EncDec | 97.3 ± 1.2 | 1.97 ± 0.4 | 98.9 ± 0.6 | 2.75 ± 0.42 |
| Copula-vanilla | 86.9 ± 1.9 | 0.63 ± 0.07 | 91.9 ± 1.8 | 0.76 ± 0.12 |
| Copula-RNN | 91.3 ± 1.5 | 1.08 ± 0.14 | 99.4 ± 0.3 | 2.23 ± 0.19 |
| Copula-EncDec | 90.8 ± 2.5 | 1.19 ± 0.08 | 99.3 ± 0.5 | 2.16 ± 0.23 |

Particle Simulation (.05)

| Method | Coverage (90%) | Area (90%) | Coverage (99%) | Area (99%) |
| --- | --- | --- | --- | --- |
| MC-dropout | 16.1 ± 4.3 | 0.79 ± 0.02 | 33.9 ± 5.1 | 2.12 ± 0.03 |
| BJRNN | 100.0 ± 0.0 | 12.13 ± 0.39 | 100.0 ± 0.0 | 15.43 ± 0.85 |
| CF-RNN | 94.5 ± 1.5 | 5.79 ± 0.51 | 99.8 ± 2.2 | 19.21 ± 8.19 |
| Copula-vanilla | 88.5 ± 1.7 | 4.37 ± 0.16 | 91.7 ± 1.6 | 4.8 ± 0.18 |
| Copula-RNN | 90.3 ± 0.7 | 4.50 ± 0.07 | 99.1 ± 0.8 | 12.82 ± 3.98 |
| Copula-EncDec | 91.4 ± 1.1 | 4.40 ± 0.15 | 98.7 ± 0.1 | 9.31 ± 1.97 |

Drone Simulation (.02)

| Method | Coverage (90%) | Area (90%) | Coverage (99%) | Area (99%) |
| --- | --- | --- | --- | --- |
| MC-dropout | 84.5 ± 10.8 | 9.64 ± 2.13 | 90.0 ± 7.8 | 16.02 ± 3.62 |
| BJRNN | 90.8 ± 2.8 | 49.57 ± 3.77 | 100.0 ± 4.0 | 65.77 ± 4.56 |
| CF-RNN | 91.6 ± 9.2 | 32.18 ± 13.66 | 100.0 ± 0.0 | 36.79 ± 14.03 |
| CF-EncDec | 100.0 ± 0.0 | 21.83 ± 26.29 | 100.0 ± 0.0 | 25.03 ± 12.53 |
| Copula-vanilla | 89.5 ± 1.3 | 54.67 ± 28.9 | 94.5 ± 0.5 | 68.9 ± 33.42 |
| Copula-RNN | 90.0 ± 1.5 | 16.52 ± 15.08 | 98.5 ± 0.5 | 21.48 ± 8.91 |

COVID-19 Daily Cases Dataset

| Method | Coverage (90%) | Area (90%) | Coverage (99%) | Area (99%) |
| --- | --- | --- | --- | --- |
| MC-dropout | 19.1 ± 5.1 | 34.14 ± 0.84 | 100.0 ± 0.0 | 1106.57 ± 25.41 |
| BJRNN | 79.2 ± 30.8 | 823.3 ± 529.7 | 85.7 ± 27.5 | 149187. ± 51044. |
| CF-RNN | 95.4 ± 1.9 | 610.2 ± 96.0 | 100.0 ± 0.0 | 121435. ± 26495. |
| CF-EncDec | 91.7 ± 1.4 | 570.3 ± 22.1 | 100.0 ± 0.0 | 108130. ± 10889. |
| Copula-vanilla | 90.8 ± 1.4 | 414.42 ± 5.08 | 91.2 ± 1.3 | 41346. ± 59.0 |
| Copula-RNN | 92.1 ± 1.0 | 429.0 ± 15.1 | 100.0 ± 0.0 | 88962. ± 9643. |
| Copula-EncDec | 90.8 ± 0.3 | 429.4 ± 27.9 | 100.0 ± 0.0 | 60852. ± 12263. |

Argoverse Trajectory Prediction Dataset

| Method | Coverage (90%) | Area (90%) | Coverage (99%) | Area (99%) |
| --- | --- | --- | --- | --- |
| MC-dropout | 27.9 ± 3.1 | 127.6 ± 20.9 | 31.5 ± 3.9 | 242.1 ± 54.0 |
| BJRNN | 92.6 ± 9.2 | 880.8 ± 156.2 | 100.0 ± 0.0 | 3402.8 ± 268. |
| CF-LaneGCN | 98.8 ± 1.9 | 396.9 ± 18.67 | 100. ± 0.2 | 607.2 ± 8.67 |
| Copula-vanilla | 89.7 ± 0.9 | 107.2 ± 9.56 | 96.5 ± 2.3 | 289.0 ± 38.1 |
| Copula-LaneGCN | 90.4 ± 0.3 | 126.8 ± 12.22 | 99.1 ± 0.4 | 324.1 ± 42.22 |

Table 3: Additional results. Copula methods achieve a high level of calibration while producing sharper prediction regions. The sharpness gain is even more pronounced at higher confidence levels (99%), where we want the prediction region to be useful while remaining valid.
| Method | Coverage (1 step) | Area (1 step) | Coverage (5 steps) | Area (5 steps) | Coverage (15 steps) | Area (15 steps) |
| --- | --- | --- | --- | --- | --- | --- |
| MC-Dropout | 97.8 ± 2.0 | 0.4 ± 0.04 | 88.0 ± 7.0 | 0.69 ± 0.25 | 52.3 ± 1.4 | 0.94 ± 0.2 |
| BJRNN | 45.3 ± 39.4 | 0.27 ± 0.18 | 97.7 ± 2.1 | 2.69 ± 1.79 | 95.5 ± 2.8 | 19.99 ± 4.83 |
| CF-RNN | 100.0 ± 0.0 | 0.01 ± 0.01 | 77.8 ± 19.2 | 0.8 ± 0.64 | 66.7 ± 0.0 | 18.82 ± 3.73 |
| CF-EncDec | 89.9 ± 19.2 | 0.01 ± 0.01 | 100.0 ± 0.0 | 0.75 ± 0.99 | 88.9 ± 19.2 | 13.07 ± 16.1 |
| Copula-RNN | 90.1 ± 0.2 | 0.01 ± 0.01 | 89.8 ± 0.6 | 0.54 ± 0.45 | 90.1 ± 1.2 | 8.25 ± 3.44 |
| Copula-EncDec | 90.0 ± 0.3 | 0.01 ± 0.0 | 90.3 ± 0.6 | 0.67 ± 1.01 | 90.5 ± 0.5 | 7.13 ± 9.5 |

Table 4: Performance comparison across different horizons at the 90% confidence level on the drone simulation dataset. The improvement in efficiency is more pronounced when the horizon is longer.

C.5 Study on $\alpha_j$ search
Figure 11 shows the $\alpha_j$ values for each $1-\alpha_j = \hat F_j(s^*_j)$ used in CopulaCPTS, as outlined in line 15 of Algorithm 1. We present $\alpha_j$ values obtained by two search methods: dichotomy search for a constant $\alpha$ value over the horizon, as in Messoudi et al. (2021), and stochastic gradient descent, as outlined in Section 4.2.

The $\alpha_j$ values indicate how interrelated the uncertainty at each time step is: the Bonferroni correction used in Stankevičiūtė et al. (2021) (grey dotted line in Figure 11) assumes that the time steps are independent, whereas CopulaCPTS attains lower $1-\alpha_j$ levels while maintaining valid coverage (blue and orange lines in Figure 11). This shows that the uncertainties of the time steps are not independent, and that we can exploit this dependency to shrink the confidence region while still maintaining the coverage guarantee.

Table 5 shows no significant differences in coverage or area between the two search methods on the datasets studied in this paper. However, we highlight that SGD search has $O(n)$ complexity in the number of optimization steps, regardless of the prediction horizon. SGD also allows varying $\alpha_j$, which may be useful in some settings, for example to capture uncertainty spikes at particular time steps, as seen in the COVID-19 dataset in Figure 11. Dichotomy search, on the other hand, has $O(n \log(n))$ complexity, where the search space depends on the granularity, and would be $O(n^k \log(n^k))$ if we wanted to search for varying $\alpha_j$.
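For contrast with the SGD variant, the fixed-$\alpha_j$ dichotomy search reduces to a one-dimensional bisection over a single shared quantile level. The sketch below is our own minimal rendering (names, tolerance, and the synthetic calibration scores are ours).

```python
import numpy as np

def dichotomy_search(u_cal, alpha, tol=1e-4):
    """Bisection for one quantile level p shared by all time steps such that
    the empirical copula coverage mean_i 1[all_j u_j^i < p] is >= 1 - alpha."""
    lo, hi = 0.0, 1.0  # hi = 1.0 is always feasible for u-scores in [0, 1)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if np.mean(np.all(u_cal < mid, axis=1)) >= 1.0 - alpha:
            hi = mid          # feasible: try a smaller (sharper) level
        else:
            lo = mid
    return hi

rng = np.random.default_rng(0)
u_cal = rng.uniform(size=(2000, 3))   # stand-in calibration u-scores
p = dichotomy_search(u_cal, alpha=0.1)
hard_coverage = float(np.mean(np.all(u_cal < p, axis=1)))
```

For independent uniform scores the level converges near $0.9^{1/3} \approx 0.965$, with empirical coverage just above the 90% target by construction.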
| Dataset | Coverage (90%), fixed $\alpha_j$ | Coverage (90%), varying $\alpha_j$ | Area, fixed $\alpha_j$ | Area, varying $\alpha_j$ |
| --- | --- | --- | --- | --- |
| Particle (.01) | 91.7 ± 1.9 | 91.5 ± 2.1 | 1.13 ± 0.45 | 1.06 ± 0.36 |
| Particle (.05) | 92.1 ± 1.3 | 90.3 ± 0.7 | 4.89 ± 0.05 | 4.50 ± 0.07 |
| Drone | 90.3 ± 0.5 | 90.0 ± 1.5 | 15.92 ± 1.98 | 16.52 ± 7.08 |
| Covid-19 | 92.9 ± 0.1 | 92.1 ± 1.0 | 498.44 ± 6.36 | 429.0 ± 15.1 |
| Argoverse | 90.2 ± 0.1 | 90.4 ± 0.3 | 117.1 ± 7.3 | 126.8 ± 12.2 |

Table 5: Coverage and area comparison between dichotomy search for fixed $\alpha_j$ and SGD for varying $\alpha_j$. We do not see a significant difference between the performance of the two.

Figure 11: Comparison between dichotomy search for fixed $\alpha_j$ values (blue) and stochastic gradient search for varying $\alpha_j$ (orange) through time steps. Shaded regions are the standard deviation of the values over 3 runs.

C.6 Comparison to additional baselines
We include a comparison to two additional simple UQ baselines on the particle simulation dataset.
L2-Conformal. L2-Conformal uses the same underlying RNN forecaster as CF-RNN and Copula-RNN. We use the vector norm of all time steps concatenated together, $\|\hat y_{t+1:t+k} - y_{t+1:t+k}\|$, as the nonconformity score to perform ICP. As there is no analytic way to represent a $k \times d_y$-dimensional uncertainty region in 2-D space, we calculate the area and plot the region for the L2-Conformal baseline using the maximum deviation at each time step such that the vector norm still stays within range.
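A sketch of this baseline on synthetic residuals (the toy forecaster, data, and names are ours; the radius is the usual $\lceil (1-\alpha)(n+1) \rceil$-th smallest calibration score):

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_conformal_radius(y_cal, y_hat_cal, alpha):
    """ICP radius with nonconformity = L2 norm of the flattened k x d_y
    forecast error over the whole horizon."""
    err = (y_cal - y_hat_cal).reshape(len(y_cal), -1)
    scores = np.sort(np.linalg.norm(err, axis=1))
    rank = int(np.ceil((1.0 - alpha) * (len(scores) + 1)))
    return scores[rank - 1] if rank <= len(scores) else np.inf

k, d_y = 25, 2
y_cal = rng.normal(size=(1000, k, d_y))
y_hat_cal = y_cal + 0.1 * rng.normal(size=y_cal.shape)   # toy forecast errors
radius = l2_conformal_radius(y_cal, y_hat_cal, alpha=0.1)

y_test = rng.normal(size=(5000, k, d_y))
y_hat_test = y_test + 0.1 * rng.normal(size=y_test.shape)
test_err = np.linalg.norm((y_test - y_hat_test).reshape(5000, -1), axis=1)
coverage = float(np.mean(test_err <= radius))
```

The single-ball construction attains roughly the nominal coverage, but, as noted in the text, the corresponding per-step region is loose because one norm budget must cover all $k \times d_y$ dimensions.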
Direct Gaussian. Direct Gaussian uses the same model architecture and training hyperparameters, with the addition of a linear layer that outputs the variance for each timestep, and is optimized using the negative log-likelihood, a proper scoring rule for probabilistic forecasting. We obtain the area by analytically calculating the 90% confidence interval for each variable.
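The training objective for the Direct Gaussian baseline is the per-step Gaussian negative log-likelihood. A minimal numpy sketch of that loss (omitting the additive 0.5·log(2π) constant, which does not affect optimization); in a PyTorch model one would typically use the equivalent `torch.nn.GaussianNLLLoss`.

```python
import numpy as np

def gaussian_nll(mean, log_var, target):
    """Mean Gaussian negative log-likelihood, up to an additive constant.

    mean, log_var, target: arrays of shape (batch, k, d); the model's
    extra linear head outputs log_var for numerical stability.
    """
    var = np.exp(log_var)
    return 0.5 * (log_var + (target - mean) ** 2 / var).mean()
```

Predicting the log-variance rather than the variance keeps the predicted variance positive without clamping.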
Results in Table 6 show that L2-Conformal produces an inefficient confidence area, and directly outputting variance under-covers the test data. These results align with previous findings and motivate our method, which is both more calibrated and sharper than these baselines. We show a visualization in Figure 12 to illustrate the different properties of the methods qualitatively.
| Method | Particle (σ=.01) Coverage (90%) | Particle (σ=.01) Area ↓ | Particle (σ=.05) Coverage (90%) | Particle (σ=.05) Area ↓ |
|---|---|---|---|---|
| L2-Conformal | 88.5 ± 0.4 | 7.21 ± 0.35 | 89.7 ± 0.6 | 7.21 ± 0.35 |
| Direct Gaussian | 11.9 ± 0.09 | 0.07 ± 0.31 | 0.0 ± 0.0 | 0.08 ± 0.02 |
| CF-RNN | 97.0 ± 2.3 | 3.13 ± 3.24 | 97.0 ± 2.3 | 5.79 ± 0.51 |
| CopulaCPTS | 91.3 ± 2.1 | 1.08 ± 0.36 | 90.3 ± 0.7 | 4.50 ± 0.07 |

Table 6: Comparison with two additional baselines on the particle dataset.

Figure 12: Visualization on a sample from the Particle dataset's test set: (a) L2-Conformal, (b) Direct RNN Gaussian, (c) CF-RNN, (d) Copula-RNN.

Ellipsoidal conformal inference for Multi-Target Regression
We also compare CopulaCPTS to a newer work, Ellipsoidal CP (Messoudi et al., 2022); the result is presented in Table 7. This method models the uncertainty region of multi-target outputs as a high-dimensional ellipsoid, by estimating a covariance matrix on calibration data. We apply EllipsoidalCP to our data by flattening the time and space dimensions, so the particle simulation, for example, is treated as a multi-target prediction of dimension 50 = 25 (time steps) × 2 (dims). We see that the results are comparable in our experiments. When the correlation is more pronounced, such as in the COVID-19 experiment, EllipsoidalCP can capture the correlation better than CopulaCPTS, resulting in improved efficiency. On the other hand, the flexibility of our method allows us to achieve better efficiency than that of EllipsoidalCP. A notable concern for using EllipsoidalCP is that for higher output dimensions, the determinant of the covariance matrix can be extremely large (up to 10^50 in our experiments) and can result in numerical instabilities.
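The EllipsoidalCP calibration can be sketched as estimating a covariance of the flattened residuals and conformalizing the resulting Mahalanobis-type score. This is a simplified sketch of the idea, not the authors' implementation: it uses the squared quadratic form (monotone-equivalent to the Mahalanobis distance), assumes roughly zero-mean residuals, and adds a small ridge term to keep the inversion stable.

```python
import numpy as np

def ellipsoid_calibrate(resid_cal, alpha=0.1, ridge=1e-6):
    """Conformalize an ellipsoidal region from flattened residuals.

    resid_cal: (n, k*d) flattened calibration residuals y_hat - y.
    Returns (cov_inv, threshold); the region is
    {r : r^T cov_inv r <= threshold}. Illustrative sketch only.
    """
    cov = np.cov(resid_cal, rowvar=False)
    cov_inv = np.linalg.inv(cov + ridge * np.eye(cov.shape[0]))
    # squared Mahalanobis-type nonconformity score per series
    scores = np.einsum('ni,ij,nj->n', resid_cal, cov_inv, resid_cal)
    n = len(scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return cov_inv, np.quantile(scores, q)
```

For a 25-step, 2-dimensional series this operates on a 50×50 covariance matrix, which illustrates why its determinant can blow up at higher output dimensions.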
Table 7: Performance comparison with EllipsoidalCP on synthetic and real-world datasets with target confidence 1 − α = 0.9.

| Dataset | Metric | EllipsoidalCP | CopulaCPTS |
|---|---|---|---|
| Particle Sim (σ=.01) | cov | 90.1 ± 0.9 | 91.3 ± 1.5 |
| | area | 0.84 ± .005 | 1.08 ± 0.14 |
| Particle Sim (σ=.05) | cov | 90.8 ± 0.4 | 90.6 ± 0.6 |
| | area | 8.76 ± 0.41 | 5.27 ± 1.02 |
| Drone Sim (σ=.02) | cov | 90.5 ± 0.2 | 90.0 ± 0.8 |
| | area | 28.3 ± 3.1 | 17.12 ± 6.93 |
| COVID-19 Daily Cases | cov | 93.3 ± 1.5 | 90.5 ± 1.6 |
| | area | 231.5 ± 22.4 | 408.6 ± 65.8 |
| Argoverse Trajectory | cov | 90.3 ± 0.1 | 90.2 ± 0.1 |
| | area | 144.8 ± 8.1 | 126.8 ± 12.2 |