papers / 20240722 /2401.02413v2.json
{
"title": "Simulation-Based Inference with Quantile Regression",
"abstract": "We present Neural Quantile Estimation (NQE), a novel Simulation-Based Inference (SBI) method based on conditional quantile regression.\nNQE autoregressively learns individual one dimensional quantiles for each posterior dimension, conditioned on the data and previous posterior dimensions.\nPosterior samples are obtained by interpolating the predicted quantiles using monotonic cubic Hermite spline, with specific treatment for the tail behavior and multi-modal distributions.\nWe introduce an alternative definition for the Bayesian credible region using the local Cumulative Density Function (CDF), offering substantially faster evaluation than the traditional Highest Posterior Density Region (HPDR).\nIn case of limited simulation budget and/or known model misspecification, a post-processing calibration step can be integrated into NQE to ensure the unbiasedness of the posterior estimation with negligible additional computational cost.\nWe demonstrate that NQE achieves state-of-the-art performance on a variety of benchmark problems.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Given the likelihood of a stochastic forward model and observation data , Bayes\u2019 theorem postulates that the underlying model parameters follow the posterior distribution , where represents the prior.\nIn many applications, however, we are restricted to simulating the data , while the precise closed form of remains unavailable.\nSimulation-Based Inference (SBI), also known as Likelihood-Free Inference (LFI) or Implicit Likelihood Inference (ILI), conducts Bayesian inference directly from these simulations, circumventing the need to explicitly formulate a tractable likelihood function.\nEarly research in this field primarily consists of Approximate Bayesian Computation (ABC) variants, which employ a distance metric in the data space and approximate true posterior samples using realizations whose simulated data are \u201cclose enough\u201d to the observation (e.g. Tavar\u00e9 et al., 1997 ###reference_b51###; Pritchard et al., 1999 ###reference_b41###; Beaumont et al., 2002 ###reference_b1###, 2009 ###reference_b2###). 
However, these methods are prone to the curse of dimensionality and prove inadequate for higher-dimensional applications.\nIn recent years, a series of neural-network-based SBI methods have been proposed, which can be broadly categorized into three groups.\nNeural Likelihood Estimation (NLE, Papamakarios et al., 2019b; Lueckmann et al., 2019) fits the likelihood using a neural density estimator, typically based on Normalizing Flows.\nThe posterior is then evaluated by multiplying the likelihood with the prior, and posterior samples can be drawn using Markov Chain Monte Carlo (MCMC).\nNeural Posterior Estimation (NPE, Papamakarios & Murray, 2016; Lueckmann et al., 2017; Greenberg et al., 2019) uses neural density estimators to approximate the posterior, thereby enabling direct posterior sample draws without running MCMC.\nNeural Ratio Estimation (NRE, Hermans et al., 2020) employs classifiers to estimate density ratios, commonly chosen as the likelihood-to-evidence ratio.\nIndeed, Durkan et al. (2020) demonstrate that NRE can be unified with specific types of NPE under a general contrastive learning framework.\nEach method has its sequential counterpart, namely SNLE, SNPE, and SNRE, respectively.\nWhereas standard NLE, NPE, and NRE allocate new simulations based on the prior, allowing them to be applied to any observation data (i.e., they are amortized), their sequential counterparts allocate new simulations based on the inference results from previous iterations and must be trained specifically for each observation.\nThese neural-network-based methods typically surpass traditional ABC methods in terms of inference accuracy under given simulation budgets.\nSee Cranmer et al. (2020) for a review and Lueckmann et al. (2021) for a comprehensive benchmark of prevalent SBI methods.\nQuantile Regression (QR), as introduced by Koenker & Bassett Jr (1978), estimates the conditional quantiles of the response variable as a function of the predictor variables.\nMany Machine Learning (ML) algorithms can be extended to quantile regression by simply transitioning to a weighted loss (e.g. Meinshausen & Ridgeway, 2006; Rodrigues & Pereira, 2020; Tang et al., 2022).\nIn this paper, we introduce Neural Quantile Estimation (NQE), a new family of SBI methods supplementing the existing NPE, NRE and NLE approaches.\nNQE successively estimates the one-dimensional quantiles of each dimension of \u03b8, conditioned on the data x and the previous dimensions of \u03b8.\nWe interpolate the discrete quantiles with monotonic cubic Hermite splines, adopting specific treatments to account for the tail behavior and potential multimodality of the distribution.\nPosterior samples can then be drawn by successively applying inverse transform sampling to each dimension of \u03b8.\nWe also develop a post-processing calibration strategy, leading to guaranteed unbiased posterior estimation as long as one provides enough simulations to accurately calculate the empirical coverage.\nTo the best of our knowledge, this constitutes the first demonstration that QR-based SBI methods can attain state-of-the-art performance, matching or surpassing the benchmarks set by existing methods.\nThe structure of this paper is as follows:\nIn Section 2, we introduce the methodology of NQE, along with an alternative definition for Bayesian credible regions and a post-processing calibration scheme to ensure the unbiasedness of the inference results.\nIn Section 3, we demonstrate that NQE attains state-of-the-art performance across a variety of benchmark problems, together with a realistic application to high dimensional cosmology data.\nSubsequently, in Section 4, we discuss related works in the literature and potential avenues for future research.\nThe results in this paper can be reproduced with the publicly available NQE package (https://github.com/h3jia/nqe), based on pytorch (Paszke et al., 2019)."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Methodology",
"text": ""
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Quantile Estimation And Interpolation",
"text": "The cornerstone of most contemporary SBI methods is some form of conditional density estimator, which is used to approximate the likelihood, the posterior, or the likelihood-to-evidence ratio. Essentially, every generative model can function as a density estimator. While Generative Adversarial Networks (Goodfellow et al., 2020) and more recently Diffusion Models (Dhariwal & Nichol, 2021) have shown remarkable success in generating high-quality images and videos, the SBI realm is primarily governed by Normalizing Flows (NF, e.g. Rezende & Mohamed, 2015; Papamakarios et al., 2019a), which offer superior inductive bias for the probabilistic distributions with up to dozens of dimensions frequently encountered in SBI tasks. Our proposed NQE method can also be viewed as a density estimator, as it reconstructs the posterior distribution autoregressively from its 1-dim conditional quantiles.\n\nIn a typical SBI setup, one first samples the model parameters \u03b8 from the prior p(\u03b8), and then runs the forward simulations to generate the corresponding observations x ~ p(x|\u03b8).\nFor simplicity, let us start with the scenario of 1-dim \u03b8.\nGiven a dataset of (\u03b8, x) pairs and a neural network f_\u03c6(x) parameterized by \u03c6, one can estimate the median (mean) of \u03b8 conditioned on x by minimizing the L1 (L2) loss (not to be confused with the loss functions defined below) between \u03b8 and f_\u03c6(x).\nAs a straightforward generalization, one can estimate the \u03c4-th quantile of \u03b8 conditioned on x by minimizing a correspondingly weighted L1 loss (Equation 1), also known as the pinball loss.\nHere one can introduce an additional weight in the loss, similar to the fact that one can use simulations allocated from an arbitrary prior to train SNLE.\nA discussion regarding the choice of this weight can be found in Appendix B.\nTo reconstruct the full posterior, we require the quantiles at multiple \u03c4\u2019s, for which we aggregate the individual loss functions of Equation 1 into the total loss of Equation 2.\nWithout loss of generality, we assume the prior of \u03b8 is zero outside some interval.\nIf the prior is positive everywhere, one can choose an interval such that the prior mass outside it is negligible.\nFor example, one can set the interval to a few standard deviations for a standard Gaussian prior; in case of heavy-tailed priors, one can also use the (inverse) prior CDF to map the prior support to a finite interval.\nWe then equally divide the interval into bins, and estimate the corresponding quantiles with the neural network.\nIn this work, we choose the estimator to be a Multi-Layer Perceptron (MLP) whose outputs are passed through a softmax layer, so that the quantiles parameterized by its cumulative outputs are monotonic by construction, and we add shortcut connections (the input layer of the MLP is concatenated to every hidden layer) to facilitate more efficient information propagation throughout the network.\nMoreover, an optional embedding network (e.g. Jiang et al., 2017; Radev et al., 2020) can be added before the MLP to more efficiently handle high dimensional data (e.g. the cosmology example in Section 3.3).\nFor multidimensional \u03b8, we successively apply the aforementioned method to each dimension \u03b8_i, conditioned on not only the data x but also all the previous dimensions \u03b8_1, ..., \u03b8_{i-1}.\nIn other words, x in Equations 1 and 2 is replaced by (x, \u03b8_1, ..., \u03b8_{i-1}), since the previous dimensions are effectively treated as observation data for the inference of \u03b8_i.\nAn illustration of the NQE architecture can be found in the top panel of Figure 1.\nSimilar to Flow Matching Posterior Estimation (FMPE, Dax et al., 2023), NQE has an unconstrained architecture which does not require specialized NFs.\n\nThe estimated conditional quantiles must be interpolated to enable sampling from them. We achieve this by interpolating the Cumulative Distribution Function (CDF) using the Piecewise Cubic Hermite Interpolating Polynomial with Exponential Tails (PCHIP-ET), a modified version of the PCHIP scheme (Fritsch & Carlson, 1980), which preserves the monotonicity of the input data and the continuity of first derivatives, ensuring a well-defined Probability Density Function (PDF). As depicted in the 1st row of Figure 2, the original PCHIP algorithm presents discernible interpolation artifacts, primarily because polynomials cannot decay rapidly enough to align with the true PDF in the tail regime. To address this issue, we substitute the polynomials with Gaussians within bins identified as tails. A more detailed description of our PCHIP-ET scheme is available in Appendix A. We observe that a satisfactory reconstruction of unimodal distributions can be achieved with a moderate number of quantiles, while incorporating additional bins may facilitate better convergence in multimodal cases. 
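A minimal numpy/scipy sketch of the two ingredients above, under simplifying assumptions: it demonstrates that the pinball (weighted L1) loss is minimized at the \u03c4-th quantile, and performs inverse transform sampling through a monotone interpolant of the quantile function. It uses scipy's plain PchipInterpolator rather than the paper's PCHIP-ET scheme (no exponential tails), and works with empirical quantiles of a toy Gaussian instead of quantiles predicted by a conditional neural network.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def pinball_loss(theta, q, tau):
    """Weighted L1 (pinball) loss; minimized in expectation when q is
    the tau-th quantile of theta."""
    diff = theta - q
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

rng = np.random.default_rng(0)
theta = rng.normal(1.0, 2.0, size=200000)  # toy "posterior" N(1, 2^2)

# The tau-th quantile minimizes the pinball loss over q: for tau = 0.84
# the minimizer should sit near 1 + 2 * 0.9945, about 2.99.
tau = 0.84
q_grid = np.linspace(-4.0, 6.0, 201)
losses = [pinball_loss(theta, q, tau) for q in q_grid]
q_best = q_grid[np.argmin(losses)]

# Inverse transform sampling: interpolate the quantile function
# tau -> q(tau) with a monotone PCHIP spline, then push uniform
# draws through it.
taus = np.linspace(0.01, 0.99, 31)
quantiles = np.quantile(theta, taus)   # discrete quantiles, as in NQE
inv_cdf = PchipInterpolator(taus, quantiles)
u = rng.uniform(0.01, 0.99, size=100000)
samples = inv_cdf(u)
```

Because the discrete quantiles are strictly increasing, the PCHIP interpolant is monotone, so the pushed-forward samples follow a well-defined distribution; the exponential-tail modification of PCHIP-ET only changes the behavior in the outermost bins.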
Samples can then be drawn using inverse transform sampling with the interpolated CDF.\nNQE requires one neural network for each posterior dimension, and these networks can be trained independently on multiple devices to reduce the training wall time.\nIn principle, one can also train NQE by maximizing the joint PDF, similar to the training of NPE.\nHowever, such an approach would be less efficient than minimizing the quantile loss in Equation 2, since one would need to compute the PCHIP-ET interpolation to evaluate the PDF, whereas the quantile loss only depends on the individual quantiles.\nNQE can also be used to estimate distributions with no observation to condition on.\nIn this case, we do not need a neural network for the first dimension, which can be directly interpolated from the empirical quantiles.\nIn Figure 3, we demonstrate that NQE can successfully model two complicated distributions from Grathwohl et al. (2018)."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "Regularization",
"text": "Numerical derivatives are inherently noisier than integrals, and similarly for the PDF compared with the CDF.\nTo mitigate this issue, we propose the following regularization scheme to improve the smoothness of NQE PDF predictions.\nIntuitively, a \u201csmooth distribution\u201d means the averaged PDF within every 1-dim bin for quantile prediction, , should be close to the interpolated value between its neighboring bins,\nwith and , which leads to the following loss for regularization,\nwhere is the Heaviside function.\nWith Equation 4 ###reference_###, we only penalize cases where , since we will have between the peaks in multimodal problems, which is therefore a possible feature in the ground truth solution that should not be penalized.\nFor similar reasons, in Equation 3 ###reference_### is set to be larger than the naive average of and , so that the regularization is only activated when necessary.\nThe total loss is then defined as\nNote that a linear rescaling of changes while remains invariant, which motivates our choice of above.\nWe find 0.1 to be a generally reasonable choice for , although one may reduce for examples with e.g. sharp spikes or edges in the posterior distribution, if one has such prior knowledge of the typical shape of the posterior."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "Empirical Coverage",
"text": "Analogous to frequentist confidence regions, Bayesian statistics utilizes credible regions to define the reasonable space for model parameters given .\nThe most popular choice of Bayesian credible region, namely the highest posterior density region (HPDR, e.g. McElreath, 2020 ###reference_b31###), encloses the samples with the highest PDF for the credible region,\nachieving the smallest volume for any given credibility level.\nTo test whether a posterior estimator is biased, one checks the empirical coverage, namely the probability of the true model parameters to fall into the credible region over the simulation data.\nIf such probability is larger (smaller) than , the posterior estimator is over-conservative (biased) 333Note that being well calibrated is a necessary yet not sufficient condition for an estimator to predict the Bayesian optimal posterior, as exemplified by the extreme case where the posterior estimator always outputs the prior..\nTo compute the empirical coverage in practice, one needs to pick pairs of from the simulation data, and generate samples for each of them to get the rank of PDF, leading to neural network calls for NPE and NQE 444We ignore the factor for NQE as we define one network call as one evaluation of the whole estimator..\nFor NLE and NRE, such cost is further multiplied by , the number of posterior evaluations per effective MCMC sample\n555For one may circumvent MCMC using Importance Sampling, which however becomes inefficient as the dimensionality of grows..\nTypically one needs to set both and to so as to get a reliable estimate of the empirical coverage, leading to a moderate computational cost especially for NLE and NRE methods.\nA unique characteristics of NQE is that it predicts the distribution quantiles, which explicitly contains the information regarding the global properties of the posterior and enables us to propose the following quantile mapping credible region (QMCR) 666Not to be confused with the quantile 
mapping technique used to e.g. correct the bias for simulated climate data (Maraun, 2013 ###reference_b30###)., a generalization of the 1-dim equal-tailed credible interval (e.g. McElreath, 2020 ###reference_b31###) for multidimensional distributions.\nTalts et al. (2018 ###reference_b49###) shows the rank of any 1-dim statistic can be used to define the Bayesian credible region, with HPDR a special case that chooses such statistic as the posterior PDF.\nWith the conditional quantiles predicted by NQE, we introduce an auxiliary distribution , which we typically set to a multivariate standard Gaussian.\nWe then define a bijective mapping that establishes a one-to-one correspondence between and with the same 1-dim conditional CDF, and , across all the dimensions 777If is set to a multivariate standard Gaussian, there is no correlation between the different dimensions so we indeed have ..\nThe defining statistic of the credible region is chosen as with , whose rank can be computed analytically using the distribution since is Gaussian.\nIf the interpolation indicates that includes multiple modes, we use the local CDF within the mode containing to define the mapping , such that the low PDF regions between the modes are excluded from the credible regions.\nA comparison of HPDR and QMCR for a toy distribution can be found in the 2nd row of Figure 2 ###reference_###, together with the mapping illustrated in the 4th row.\nHeuristically, the limit of QMCR encloses the (conditional) median across all the dimensions for unimodal distributions, as opposed to the global maximum of the PDF for HPDR.\nTherefore, unlike HPDR, QMCR is invariant under any 1-dim monotonic transforms of , as long as such reparameterization does not give rise to a different identification of multimodality during the CDF interpolation.\nAs shown with the examples below, QMCR typically leads to similar conclusions regarding the (un)biasedness of the posterior estimators as HPDR, but only requires network 
calls to evaluate as one no longer needs to generate samples for each observation.\nSuch speed-up allows us to perform posterior calibration in the next subsection with negligible computational cost.\nFor simplicity, in the rest of this paper we will use the term coverage (coverage) for empirical coverage computed with HPDR (QMCR).\nIn addition, we note that due to its autoregressive structure, one can compute the coverage of NQE for the leading dimensions without additional training, which is useful if the unbiasedness of certain dimensions takes precedence over others."
},
{
"section_id": "2.4",
"parent_section_id": "2",
"section_name": "Posterior Calibration",
"text": "Hermans et al. (2021 ###reference_b17###) demonstrates that all existing SBI methods may produce biased results when the simulation budget is limited.\nIntuitively, a biased posterior is too narrow to enclose the true model parameters, so we propose the following calibration strategy as illustrated in the bottom panel of Figure 1 ###reference_###.\nTo make a distribution broader, we fix the medians of all 1-dim conditional posteriors and increase the distance between the medians and all other quantiles by a global broadening factor.\nSimilar to the coverage evaluation, we utilize the local quantiles within modes for multimodal distributions.\nWe remove the quantiles that escape from the boundary of the prior and/or the boundary between different modes, and redistribute the corresponding posterior mass to the bins still within the boundary based on the bin mass, so that the local posterior shape is preserved.\nThe effect of such broadening transform is shown in the 3rd row of Figure 2 ###reference_###.\nWe then solve for the minimum broadening factor such that the calibrated posterior is unbiased across a series of credibility levels, which we set to throughout this paper.\nNote that ideally, a good estimator should have empirical coverage that matches the credibility level.\nHowever, if this is not possible due to limited training data, over-conservative inference should be preferred over biased results.\nThe broadening factor can also be smaller than 1, in case the original posterior is already too conservative.\nWhile one has the freedom to choose the definition of the coverage for the calibration process, the broadened posterior is only guaranteed to be unbiased at the calibrated credibility levels under the same coverage definition.\nWhile similar calibration tricks may also be developed for other SBI methods, it will likely be considerably more expensive than NQE in practice, for the following reasons.\nFirstly, the evaluation of coverage is exclusive 
to NQE, which is faster by at least a factor of than traditional coverage (with an additional factor of if MCMC is required for sampling).\nMore importantly, we have developed a broadening strategy for NQE that preserves not only the local correlation structure of the posterior but also the ability of fast sampling without MCMC.\nWe are not aware of any similar techniques for existing SBI methods, which estimate the local PDF with no explicit global information of the distribution.\nFor example, while one can broaden an NF-based probability distribution by lowering its temperature, i.e. replacing with , , this will necessitate MCMC sampling for NPE (NLE and NRE need MCMC even without broadening).\nIn addition, with the analytical rank evaluation of coverage, the NQE network outputs can also be reused between different iterations, thus reducing the total network calls by another factor of .\nWe compare the computational cost of broadening calibration for different methods in Table 1 ###reference_###.\nSuch post-processing calibration relies on a reliable calculation of the coverage.\nThe (pointwise) error of empirical coverage due to stochastic sampling can be estimated using binomial distribution (S\u00e4ilynoja et al., 2022 ###reference_b45###); with , the maximum error is smaller than , regardless of the dimensionality of and 888See Appendix E ###reference_### for more discussion on this..\nIn other words, for any inference task, with the broadening calibration, one only needs simulations in the validation dataset to ensure the unbiasedness of the posterior, if there is no model misspecification.\nNevertheless, the number of network calls required for broadening is different across the various algorithms as compared in Table 1 ###reference_###.\nUsing NQE and coverage, one only needs calls of the NQE network for the broadening, which is typically negligible compared with the cost for running the simulations and training the neural estimators.\nIn addition, 
similar calibration tricks can be used to mitigate partially known model misspecification, as exemplified in Section 3.3 ###reference_### below.\nNote that we use the same validation dataset during the training and broadening calibration of NQE, as the one-parameter broadening transform is unlikely to overfit.\nWe summarize the proposed NQE method in Algorithm 1 ###reference_###.\nIn this paper, we focus on the simple broadening calibration, which is guaranteed to converge with validation simulations, regardless of and .\nWith more simulations, it may be beneficial to employ a more sophisticated calibration scheme to remove the bias without over-broadening the predicted posterior.\nWe plan to conduct a comprehensive survey of such calibration schemes in a follow-up paper.\nOne example is the quantile shifting calibration demonstrated with the cosmology example in Section 3.3 ###reference_###: for each quantile of predicted by NN, we check if we indeed have probability that the true is smaller than the predicted quantile (on the validation dataset) 999For multi-modal distributions, we use the local quantile within the mode that contains the true , similar to the definition of the coverage in Section 2.3 ###reference_###..\nIf not, we calculate the shift required for the quantile such that this statement is true.\nNote that we apply a shift of quantile that is different for each and , but the same for all and .\nIn other words, we effectively calculate the bias averaged over the prior, and shift the predicted quantiles accordingly to remove the bias.\nStrictly speaking, such quantile shifting scheme calibrates the coverage of all the individual 1-dim conditional posteriors, but not necessarily the coverage of the multi-dimensional joint posterior.\nIn addition, the number of simulations required for this scheme depends on the dimensionality of , in contrast to the global broadening scheme which always converges with validation simulations.\nWe leave a more detailed 
investigation of such methods for future research; nevertheless, for the cosmology example in Section 3.3 ###reference_###, the posterior calibrated with quantile shifting has an almost diagonal empirical coverage and is much narrower than the posterior calibrated with simple global broadening, when there is a significant bias in the uncalibrated posterior due to model misspecification.\n\n###figure_4### \n###figure_5### \n###figure_6###"
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Numerical Experiments",
"text": ""
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "SBI Benchmark Problems",
"text": "We assess the performance of NQE on six benchmark problems,\nwith detailed specifications provided in Appendix C ###reference_###.\nAll results for methods other than NQE are adopted from Lueckmann et al. (2021 ###reference_b29###).\nAs discussed in Appendix F ###reference_###, we conduct a mild search of hyperparameters for NQE, but in the end use the same set of hyperparameters across all the benchmark problems,\nalthough it is possible to further improve the performance by tuning the hyperparameters based on specific posterior structures.\nFor example, increasing the number of predicted quantiles will be beneficial for multimodal problems with large simulation budgets.\nTo evaluate the performance of SBI algorithms, we employ Classifier-based 2-Sample Testing (C2ST) as implemented in the sbibm package (Lopez-Paz & Oquab, 2016 ###reference_b24###; Lueckmann et al., 2021 ###reference_b29###). Lower C2ST values denote superior results, with 0.5 signifying a perfect posterior and 1.0 indicating complete failure.\nWe plot the C2ST results for the benchmark problems in Figure 4 ###reference_###, showing that (uncalibrated) NQE achieves state-of-the-art performance across all the examples.\nIn Figure 5 ###reference_###, we compare the NQE coverage before and after broadening: with the broadening calibration, NQE consistently predicts unbiased posterior for all the problems.\nWhile Figure 5 ###reference_### utilizes simulations to enhance the smoothness of the coverage curves, a convergence test in Appendix E ###reference_### shows that simulations are sufficient for most cases.\nThe exact values of the broadening factor can be found in Figure 15 ###reference_###.\nIn Figure 16 ###reference_###, we find that the C2ST is generally similar or slightly worse after the global broadening calibration: this is likely due to the nature of the C2ST metric, since a conservative posterior will be similarly penalized as a biased posterior, although the former should be 
preferred over the latter for most scientific applications (e.g. Hermans et al., 2021 ###reference_b17###; Delaunoy et al., 2022 ###reference_b6###)."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Order of Model Parameters",
"text": "Due to its autoregressive structure, NQE\u2019s performance may be affected by the order of dimensions.\nWhile each 1-dim conditional distribution is estimated independently, the 1-dim marginal posterior does depend on the estimation for all the previous that are correlated with , therefore one may expect the marginals for the latter dimensions to be less accurate than the former dimensions as the error will accumulate.\nTo study this effect, we compute all the 1-dim marginal C2ST\u2019s for the benchmark problems and plot them with respect to the dimension indices in Figure 6 ###reference_###.\nContrary to the conjecture above, we find no clear dependence between the marginal C2ST and the dimension index.\nNevertheless, this may be due to the relative low posterior dimensionality of the benchmark problems, such that the accumulation of per-dimension error has not become the dominant contribution.\nWe still recommend ordering the dimensions based on the relative importance of the parameters, especially for applications to higher () dimensional posteriors.\nWe note that similar to the TMNRE approach (Miller et al., 2021 ###reference_b33###), one may estimate the individual marginal posteriors with NQE, if the high dimensionality makes it impractical to accurately model the joint posterior."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Application to Cosmology",
"text": "The cosmological large scale structures contain ample information regarding the origin and future of our universe, which can be inferred from the locations and/or shapes of the galaxies (e.g. Dodelson & Schmidt, 2020); however, the optimal strategy to extract this information remains an unsolved problem.\nWhile at larger scales the power spectra carry most of the information and can be well modeled with a Gaussian likelihood, at smaller scales the highly nonlinear evolution renders SBI methods necessary for optimal inference.\nUnfortunately, the small-scale baryonic physics is still poorly understood, leading to potential model misspecification which can bias the SBI inference (e.g. Modi et al., 2023).\nAs we do not know the exact forward model for our Universe, the best we can do is to make sure our SBI estimator is unbiased on all the well-motivated baryonic physics models, which requires a massive amount of expensive cosmological hydrodynamic simulations (e.g. Villaescusa-Navarro et al., 2021).\nHowever, with NQE one can first train it using cheap (therefore less realistic) simulations and then calibrate it using all available high fidelity (therefore much more expensive) simulations to make sure the uncertainties of baryonic physics have been properly accounted for. (Here we assume the model misspecification is at least partially known, in the sense that our selection of baryonic physics models \u201cincludes\u201d the correct model for our Universe. The post-processing calibration cannot mitigate completely unknown model misspecification.)\nNote that one only needs a small number of simulations for each baryonic model to calibrate NQE, which is far fewer than the amount required to directly train field-level SBI with them.\nSuch an approach is demonstrated in Figures 7 and 8, where we show that the bias due to model misspecification can be mitigated by the calibration of NQE.\nAs the model misspecification introduces a large systematic bias, we find that the global broadening calibration makes the posterior over-conservative, while the quantile shifting scheme eliminates the bias without over-broadening the posterior, highlighting the benefits of such more advanced calibration methods, which will be examined more thoroughly in a follow-up paper.\nMore details regarding this example can be found in Appendix D."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Discussion",
"text": "The main contribution of this work is to introduce Neural Quantile Estimation (NQE), a novel class of SBI methods that incorporates the concept of quantile regression, with competitive performance across various examples.\nStrictly speaking, our paper presents Neural Quantile Posterior Estimation, a method that can be extended to Neural Quantile Likelihood Estimation, which fits the likelihood with conditional quantiles.\nWe note that the idea of interpolating predicted quantiles has been explored for e.g. time series forecasting (Gasthaus et al., 2019 ###reference_b11###; Sun et al., 2023 ###reference_b48###).\nNonetheless, to our knowledge our paper is the first to implement this idea in the SBI framework, with a dedicated interpolation scheme that minimizes potential artifacts.\nIn addition, Jeffrey & Wandelt (2020 ###reference_b18###) use a similar architecture to predict the moments of the posterior.\nMontel et al. (2023 ###reference_b36###) propose to autoregressively apply marginal NRE estimators to obtain the joint distribution, which outperforms standard NRE in their benchmarks.\nAs shown in Hermans et al. (2021 ###reference_b17###), all existing SBI methods may predict biased results in practice: while the Bayesian optimal posterior has perfect calibration, there is no guarantee regarding the unbiasedness of SBI algorithms trained with an insufficient number of simulations.\nHowever, with the post-processing calibration step, NQE is guaranteed to be unbiased should there be no unknown model misspecification, in the sense that the credible regions of the posterior will enclose no fewer samples than their corresponding credibility levels, as long as one has validation data to reliably compute the empirical coverage for the broadening calibration.\nWhile Balanced Neural Ratio Estimation (BNRE, Delaunoy et al., 2022 ###reference_b6###) pursues similar goals of robust SBI inference, the unbiasedness of BNRE depends on the choice of their regularization parameter, so in principle this parameter needs to be tuned for each task to obtain the best results.\nUnfortunately, coverage evaluation is considerably more expensive for NRE methods, which rely on MCMC sampling, making the coverage-based tuning of BNRE computationally prohibitive for higher dimensional applications.\nOn the other hand, the broadening calibration of NQE can be applied with negligible computational cost, with the calibrated NQE manifestly unbiased as the empirical coverage has been explicitly corrected during the broadening process.\nIn addition, one can also mitigate the bias due to partially known model misspecification by calibrating the NQE posterior.\nBefore concluding this paper, we enumerate several promising directions for future study.\nFirst of all, NQE can be straightforwardly generalized to Sequential NQE (SNQE), which will be presented in a separate paper.\nSecond, while our PCHIP-ET scheme shows competitive performance across various problems, it does not have continuous PDF derivatives, which may be improved by a higher order interpolation scheme.\nMoreover, in this work we mostly restrict ourselves to a global broadening transform for the calibration of NQE, which eliminates the bias at the cost of being possibly too conservative for certain credibility levels.\nAs shown in Section 3.3 ###reference_###, a more advanced calibration strategy would be useful, in particular for problems with a large systematic bias, so that one can calibrate biased posteriors without losing too much constraining power."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Piecewise Cubic Hermite Interpolating Polynomial with Exponential Tails",
"text": "We interpolate the CDF of the conditional 1-dim distributions using the quantiles predicted by NQE.\nOur interpolation scheme is based on the Piecewise Cubic Hermite Interpolating Polynomial (PCHIP, Fritsch & Carlson, 1980 ###reference_b10###; Moler, 2004 ###reference_b35###), which preserves the monotonicity of the input data and has continuous first order derivatives.\nThe values of the interpolated function at the th and th nodes, and , match the values of the target function, while the derivatives, and , are given by the two-side scheme for non-boundary points,\nFor boundary points, we use the following one-side scheme for the left end (similarly for the right end),\nwhich however is clipped to for and for , with a hyperparameter typically set to .\nNote that for well-defined CDF data, one always has in Equations 6 ###reference_### and 7 ###reference_###.\nFritsch & Carlson (1980 ###reference_b10###) show that a sufficient condition for the interpolation to preserve monotonicity is and (indeed, 3 is the largest number for the criterion of this form), which is satisfied by Equations 6 ###reference_### and 7 ###reference_###.\nWith , , and , the interpolation gives\nAs shown in Figure 2 ###reference_###, this interpolation scheme generates notable artifacts in the PDF, due to the challenge posed by fitting polynomials to the exponentially declining tail of the probability density.\nIn response to this challenge, we fit the local distribution with Gaussian tails whenever necessary.\nIn this regime, the fitting PDF is given by\nwith and its first derivative continuous at the end point of the bin.\nWe then solve for the parameter by requiring that the PDF has the correct normalization within the bin, which can be computed via the following indefinite integrals.\nFor , we have\nwhile for ,\nwhere and are the error function and imaginary error function, respectively.\nFor and , we use the following expressions, which are analytically equivalent but numerically more stable,\nwhere is Dawson\u2019s integral and is the scaled complementary error function.\nNonetheless, in rare cases where , we set and give up the continuity condition for the first derivative of the PDF, instead solving for the correct normalization within the bin.\nOur criterion for deciding whether a bin should be fitted with exponential tails is as follows.\nFirst of all, the leftmost and rightmost bins have one-sided exponential tails as long as their averaged PDF is smaller than 0.6 times the averaged PDF in the bin next to them; otherwise, the edge bins likely have a hard truncation by the prior and are therefore fitted with polynomials.\nIn addition, we also allow other bins to have double exponential tails, i.e. extending from the left endpoint towards the right and from the right endpoint towards the left, to account for potential multimodality.\nFor each bin , we attempt to fit the distribution with double exponential tails, and compute\nNote that the PDF is no longer strictly continuous at the bin endpoints when fitted with double exponential tails, and quantifies such discontinuity.\nWe then switch to double exponentials only for bins with a local minimum of , and stick with the PCHIP polynomials for the remaining bins.\nThe rationale behind this is intuitive: a smaller implies a likely gap between two isolated peaks of the PDF (see, for instance, the top right panel of Figure 2 ###reference_###), which can be better fitted with two exponential tails extending from both sides.\nOur PCHIP-ET scheme incorporates the inductive bias that for most SBI problems the tails of probability distributions can be well modeled by Gaussians; if this is not the case, one may replace the Gaussian with e.g. a Student\u2019s or Cauchy distribution for long-tailed distributions."
},
{
"section_id": "Appendix 2",
"parent_section_id": null,
"section_name": "Appendix B Weights in",
"text": "In this work, we use NQE to predict the quantiles equally spaced between , which tends to put more emphasis on the regions with larger PDF, where the neighboring quantiles are closer to each other, leading to potential instability in the tail regions.\nInstead of directly weighting the different terms in , we adopt the following dropout strategy: for each training batch, we only keep of the terms in using a no-replacement multinomial sampling with weights proportional to , , with and by default.\nThis effectively upweights the quantiles where the PDF is small, while the no-replacement sampling prevents specific terms from acquiring weights so large that they dominate the whole loss function."
},
{
"section_id": "Appendix 3",
"parent_section_id": null,
"section_name": "Appendix C Benchmark Problems",
"text": "We use the following problems from Lueckmann et al. (2021 ###reference_b29###) to benchmark the performance of the SBI methods.\nThe \u201cground truth\u201d posterior samples are available for all the problems.\nTwo Moons: a toy problem with complicated global (bimodality) as well as local (crescent shape) structures.\nSLCP with distractors: a challenging problem designed to have a simple likelihood and a complex posterior, with uninformative dimensions (distractors) added to the observation.\nBernoulli GLM (raw): inference of a 10-parameter Generalized Linear Model (GLM) with raw Bernoulli observations.\nGaussian Mixture: inferring the common mean of a mixture of two Gaussians, one with much broader covariance than the other.\nSIR: an epidemiological model describing the numbers of individuals in three possible states: susceptible, infectious, and recovered or deceased.\nLotka-Volterra: an influential ecology model describing the dynamics of two interacting species."
},
{
"section_id": "Appendix 4",
"parent_section_id": null,
"section_name": "Appendix D Details of the Cosmology Application",
"text": "We run dark-matter-only Particle Mesh (PM) simulations with particles in (Mpc/h)^3 boxes using the pmwd code (Li et al., 2022a ###reference_b22###, b ###reference_b23###), and generate two projected overdensity fields from each simulation by dividing the box into two halves along the axis as the observation data.\nWe use 80% of the simulations for training, 10% for validation, and 10% for testing. We evaluate the calibration of NQE with the validation data, and plot Figures 7 ###reference_### and 8 ###reference_### with the test data.\nThe model parameters are , the total matter density today, and , the RMS matter fluctuation today in linear theory, with uniform priors and .\nAs a proof-of-concept example, we substitute the expensive cosmological hydrodynamic simulations with a post-processing scale-independent bias model (here the bias means any deviation of the actual observed field with respect to the dark-matter-only density field) over the density fields from the dark-matter-only simulations, i.e. with (but we still require that ).\nIn other words, we train NQE with simulations but require the inference to be unbiased for , which is achieved via the calibration of NQE.\nA ResNet (He et al., 2016 ###reference_b15###) with 10 convolutional layers is utilized as the embedding network for more efficient inference with the high dimensional data."
},
{
"section_id": "Appendix 5",
"parent_section_id": null,
"section_name": "Appendix E Convergence Test of Coverage Evaluation",
"text": "We check the convergence of the coverage evaluation in Figures 9 ###reference_###, 10 ###reference_### and 11 ###reference_###.\nWhile Figure 5 ###reference_### in the main paper uses simulations to enhance the smoothness of the coverage curves, in most cases simulations should be sufficient for the evaluation of coverage.\nIn fact, the (pointwise) standard error of the empirical coverage can be estimated using the properties of the binomial distribution as , where is the number of simulations for the coverage evaluation (S\u00e4ilynoja et al., 2022 ###reference_b45###).\nTherefore, with , one has for all .\n\n###figure_9### \n###figure_10### \n###figure_11###"
},
{
"section_id": "Appendix 6",
"parent_section_id": null,
"section_name": "Appendix F Hyperparameter Choices",
"text": "We train all the models on NVIDIA A100 MIG GPUs using the AdamW optimizer (Loshchilov & Hutter, 2017 ###reference_b25###), and find the wall time of NQE training to be comparable to that of existing methods like NPE.\nOur PCHIP-ET scheme has been implemented in Cython (Behnel et al., 2010 ###reference_b3###), so that its evaluation is much faster than the quantile regression network calls for typical real-world examples.\nWe conduct a mild search for {, , } in Figures 12 ###reference_### and 13 ###reference_###, which leads to our baseline choice in Table 2 ###reference_###.\nWe reduce the step size by 10% after every 5 epochs, and terminate the training if the loss does not improve after 30 epochs or when the training reaches 300 epochs.\n\n###figure_12### \n###figure_13### We find that some tasks require a different step size while some tasks exhibit significant overfitting, so we train 9 realizations for each network with {initial step size = 5e-4, 1e-4, 2e-5} \u00d7 {AdamW weight decay = 0, 1, 10}, and choose the realization with the smallest loss function.\nNevertheless, most problems have a clear preference regarding these two parameters, so it should be straightforward to tune them for specific problems in practice."
},
{
"section_id": "Appendix 7",
"parent_section_id": null,
"section_name": "Appendix G Additional Plots",
"text": "###figure_14### \n###figure_15### \n###figure_16###"
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Computational cost of the broadening calibration, with NQE being significantly faster than other methods. : number of iterations to solve for the desired coverage. : number of simulated observations for coverage computation. : number of samples per observation for the rank of PDF. : number of posterior evaluations per effective MCMC sample. We assume there is no broadening technique for NPE that does not necessitate MCMC sampling.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.23.15\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.23.15.16.1\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S2.T1.23.15.16.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.23.15.16.1.2\">coverage</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.23.15.16.1.3\">simulations</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.23.15.16.1.4\">network calls</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.11.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.11.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.11.3.3.4.1\">NQE</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.9.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.10.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.11.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.14.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.14.6.6.4\">NQE</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.12.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.13.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S2.T1.14.6.6.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.17.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.17.9.9.4\">NLE</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.15.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.16.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.17.9.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.20.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.20.12.12.4\">NPE</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.18.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.19.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.20.12.12.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.23.15.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.23.15.15.4\">NRE</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.21.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.22.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.23.15.15.3\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 1: Computational cost of the broadening calibration, with NQE being significantly faster than other methods. : number of iterations to solve for the desired coverage. : number of simulated observations for coverage computation. : number of samples per observation for the rank of PDF. : number of posterior evaluations per effective MCMC sample. We assume there is no broadening technique for NPE that does not necessitate MCMC sampling."
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"A6.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Our baseline choice of NQE hyperparameters. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A6.T2.11.11\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A6.T2.11.11.12.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.11.11.12.1.1\">hyperparameter</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.11.11.12.1.2\">value</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"A6.T2.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A6.T2.1.1.1.2\">0.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.2.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.3.3.3.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.5.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.4.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.5.5.5.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.7.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.6.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.7.7.7.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.9.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.8.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.9.9.9.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.10.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.10.10.10.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.10.10.10.2\">0.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.11.11.13.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.11.11.13.2.1\"># of MLP hidden layers</th>\n<td class=\"ltx_td ltx_align_center\" 
id=\"A6.T2.11.11.13.2.2\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.11.11.14.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.11.11.14.3.1\"># of MLP hidden neurons per layer</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.11.11.14.3.2\">512</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.11.11.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.11.11.11.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.11.11.11.2\">16</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 2: Our baseline choice of NQE hyperparameters. "
}
},
"image_paths": {
"1": {
"figure_path": "2401.02413v2_figure_1.png",
"caption": "Figure 1: \n(Top) Network architecture of our NQE method, which autoregressively learns 1-dim conditional quantiles for each posterior dimension.\nThe estimated quantiles are then interpolated to reconstruct the full distribution.\n(Bottom) A post-processing calibration step can be employed to ensure the unbiasedness of NQE inference results.",
"url": "http://arxiv.org/html/2401.02413v2/x1.png"
},
"2": {
"figure_path": "2401.02413v2_figure_2.png",
"caption": "Figure 2: \n(1st row) Interpolation of Gaussian and Gaussian Mixture distributions.\nWhile the original PCHIP algorithm shows significant interpolation artifacts, our modified PCHIP-ET scheme decently reconstructs the distributions with only \\sim 15 quantiles.\n(2nd row) Comparison of the 68.3% and 95.4% credible regions for a mixture of two asymmetric modes, evaluated with HPDR (p-coverage) and QMCR (q-coverage).\n(3rd row) Broadening of the interpolated posterior, with the broadening factors indicated in the legend.\n(4th row) The bijective mapping f_{\\rm qm} establishes a one-to-one correspondence between {\\bm{\\theta}} and {\\bm{\\theta}}^{\\prime} with the same 1-dim conditional CDF across all the \\theta^{(i)} dimensions. The p-coverage and q-coverage are based on the ranking of p({\\bm{\\theta}}) and q_{\\rm aux}({\\bm{\\theta}}^{\\prime}), respectively.",
"url": "http://arxiv.org/html/2401.02413v2/x2.png"
},
"3": {
"figure_path": "2401.02413v2_figure_3.png",
"caption": "Figure 3: Probability density estimation for two toy examples from Grathwohl et al. (2018).\nDespite the intricate multimodal structures, NQE is able to faithfully reconstruct both distributions.",
"url": "http://arxiv.org/html/2401.02413v2/x3.png"
},
"4": {
"figure_path": "2401.02413v2_figure_4.png",
"caption": "Figure 4: Comparison of C2ST as a function of simulation budget for the six benchmark problems, with lower C2ST values representing better performance of the algorithm.\nThe error bars are estimated using the 25%, 50% and 75% quantiles of C2ST over ten realizations for each problem.\n(Uncalibrated) NQE achieves state-of-the-art performance across all the examples.",
"url": "http://arxiv.org/html/2401.02413v2/x4.png"
},
"5": {
"figure_path": "2401.02413v2_figure_5.png",
"caption": "Figure 5: \n(Top) NQE q-coverage for the benchmark problems. Like other SBI methods, with limited simulation budgets, NQE may predict biased posteriors.\n(Bottom) Calibrated NQE predicts unbiased posteriors for all the problems.\nError bars are small and thus not plotted.\nSee Appendix E for a convergence test and Figure 14 for a similar plot with p-coverage.",
"url": "http://arxiv.org/html/2401.02413v2/x5.png"
},
"6": {
"figure_path": "2401.02413v2_figure_6.png",
"caption": "Figure 6: \nThe C2ST values for the 1-dim uncalibrated marginal posteriors. We do not observe a clear trend of increasing C2ST with respect to the ordering of the dimensions.",
"url": "http://arxiv.org/html/2401.02413v2/x6.png"
},
"7": {
"figure_path": "2401.02413v2_figure_7.png",
"caption": "Figure 7: \n(Left) Sample image of the simulated data. The task is to infer two parameters of our Universe, \\Omega_{m} and \\sigma_{8}, from such images.\n(Right) The q-coverage for uncalibrated NQE without model misspecification (No MM), uncalibrated NQE with model misspecification (With MM), and NQE with model misspecification but calibrated using a broadening factor of 4.2 (Broadening) and using the quantile shifting method (Shifting).\nBoth calibration methods eliminate the bias due to known model misspecification, with quantile shifting achieving almost exact empirical coverage and global broadening being over-conservative.",
"url": "http://arxiv.org/html/2401.02413v2/x7.png"
},
"8": {
"figure_path": "2401.02413v2_figure_8.png",
"caption": "Figure 8: \nComparison of the uncalibrated posterior and the posteriors calibrated with two different schemes.\nThe quantile shifting scheme removes the bias without over-broadening the posterior.",
"url": "http://arxiv.org/html/2401.02413v2/x8.png"
},
"9": {
"figure_path": "2401.02413v2_figure_9.png",
"caption": "Figure 9: \nSimilar to Figure 5, but using 2,000 simulations for the evaluation of q-coverage.",
"url": "http://arxiv.org/html/2401.02413v2/x9.png"
},
"10": {
"figure_path": "2401.02413v2_figure_10.png",
"caption": "Figure 10: \nSimilar to Figure 5, but using 1,000 simulations for the evaluation of q-coverage.",
"url": "http://arxiv.org/html/2401.02413v2/x10.png"
},
"11": {
"figure_path": "2401.02413v2_figure_11.png",
"caption": "Figure 11: \nSimilar to Figure 5, but using 500 simulations for the evaluation of q-coverage.",
"url": "http://arxiv.org/html/2401.02413v2/x11.png"
},
"12": {
"figure_path": "2401.02413v2_figure_12.png",
"caption": "Figure 12: \nA survey of NQE performance across different choices of hyperparameters.\nFrom left to right, we set f_{0} as (0, 0, 0, 0, 1, 1, 1, 1), and set \\lambda_{\\rm reg} as (0, 0.01, 0.1, 1, 0, 0.01, 0.1, 1).\nAll other parameters are the same as in Table 2.",
"url": "http://arxiv.org/html/2401.02413v2/x12.png"
},
"13": {
"figure_path": "2401.02413v2_figure_13.png",
"caption": "Figure 13: \nSame as Figure 12, but using 25 quantile bins. Increasing the number of bins is helpful for multimodal problems (e.g. TM) with large simulation budgets.",
"url": "http://arxiv.org/html/2401.02413v2/x13.png"
},
"14": {
"figure_path": "2401.02413v2_figure_14.png",
"caption": "Figure 14: \nEmpirical coverage results using p-coverage, while the calibration is still evaluated using q-coverage. We find that the p-coverage results are qualitatively similar to the q-coverage in most cases, and the broadening calibration with q-coverage in the main text also mitigates the bias for the p-coverage. Nevertheless, one can always solve for the broadening factor directly with p-coverage if one wishes the p-coverage to be strictly unbiased, at the cost of more network calls than using q-coverage.",
"url": "http://arxiv.org/html/2401.02413v2/x14.png"
},
"15": {
"figure_path": "2401.02413v2_figure_15.png",
"caption": "Figure 15: The actual broadening factor applied to remove the bias for the benchmark problems.",
"url": "http://arxiv.org/html/2401.02413v2/x15.png"
},
"16": {
"figure_path": "2401.02413v2_figure_16.png",
"caption": "Figure 16: Similar to Figure 4, but for NQE calibrated with the global broadening scheme. The C2ST of calibrated NQE is generally similar to or slightly worse than uncalibrated NQE in Figure 4.",
"url": "http://arxiv.org/html/2401.02413v2/x16.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "Approximate Bayesian computation in population genetics.",
"author": "Beaumont, M. A., Zhang, W., and Balding, D. J.",
"venue": "Genetics, 162(4):2025\u20132035, 2002.",
"url": null
}
},
{
"2": {
"title": "Adaptive approximate Bayesian computation.",
"author": "Beaumont, M. A., Cornuet, J.-M., Marin, J.-M., and Robert, C. P.",
"venue": "Biometrika, 96(4):983\u2013990, 2009.",
"url": null
}
},
{
"3": {
"title": "Cython: The best of both worlds.",
"author": "Behnel, S., Bradshaw, R., Citro, C., Dalcin, L., Seljebotn, D. S., and Smith, K.",
"venue": "Computing in Science & Engineering, 13(2):31\u201339, 2010.",
"url": null
}
},
{
"4": {
"title": "The frontier of simulation-based inference.",
"author": "Cranmer, K., Brehmer, J., and Louppe, G.",
"venue": "Proceedings of the National Academy of Sciences, 117(48):30055\u201330062, 2020.",
"url": null
}
},
{
"5": {
"title": "Flow matching for scalable simulation-based inference.",
"author": "Dax, M., Wildberger, J., Buchholz, S., Green, S. R., Macke, J. H., and Sch\u00f6lkopf, B.",
"venue": "arXiv preprint arXiv:2305.17161, 2023.",
"url": null
}
},
{
"6": {
"title": "Towards reliable simulation-based inference with balanced neural ratio estimation.",
"author": "Delaunoy, A., Hermans, J., Rozet, F., Wehenkel, A., and Louppe, G.",
"venue": "arXiv preprint arXiv:2208.13624, 2022.",
"url": null
}
},
{
"7": {
"title": "Diffusion models beat GANs on image synthesis.",
"author": "Dhariwal, P. and Nichol, A.",
"venue": "Advances in Neural Information Processing Systems, 34:8780\u20138794, 2021.",
"url": null
}
},
{
"8": {
"title": "Modern cosmology.",
"author": "Dodelson, S. and Schmidt, F.",
"venue": "Academic press, 2020.",
"url": null
}
},
{
"9": {
"title": "On contrastive learning for likelihood-free inference.",
"author": "Durkan, C., Murray, I., and Papamakarios, G.",
"venue": "In International Conference on Machine Learning, pp. 2771\u20132781. PMLR, 2020.",
"url": null
}
},
{
"10": {
"title": "Monotone piecewise cubic interpolation.",
"author": "Fritsch, F. N. and Carlson, R. E.",
"venue": "SIAM Journal on Numerical Analysis, 17(2):238\u2013246, 1980.",
"url": null
}
},
{
"11": {
"title": "Probabilistic forecasting with spline quantile function RNNs.",
"author": "Gasthaus, J., Benidis, K., Wang, Y., Rangapuram, S. S., Salinas, D., Flunkert, V., and Januschowski, T.",
"venue": "In The 22nd international conference on artificial intelligence and statistics, pp. 1901\u20131910. PMLR, 2019.",
"url": null
}
},
{
"12": {
"title": "Generative adversarial networks.",
"author": "Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y.",
"venue": "Communications of the ACM, 63(11):139\u2013144, 2020.",
"url": null
}
},
{
"13": {
"title": "FFJORD: Free-form continuous dynamics for scalable reversible generative models.",
"author": "Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., and Duvenaud, D.",
"venue": "arXiv preprint arXiv:1810.01367, 2018.",
"url": null
}
},
{
"14": {
"title": "Automatic posterior transformation for likelihood-free inference.",
"author": "Greenberg, D., Nonnenmacher, M., and Macke, J.",
"venue": "In International Conference on Machine Learning, pp. 2404\u20132414. PMLR, 2019.",
"url": null
}
},
{
"15": {
"title": "Deep residual learning for image recognition.",
"author": "He, K., Zhang, X., Ren, S., and Sun, J.",
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770\u2013778, 2016.",
"url": null
}
},
{
"16": {
"title": "Likelihood-free MCMC with amortized approximate ratio estimators.",
"author": "Hermans, J., Begy, V., and Louppe, G.",
"venue": "In International Conference on Machine Learning, pp. 4239\u20134248. PMLR, 2020.",
"url": null
}
},
{
"17": {
"title": "A trust crisis in simulation-based inference? your posterior approximations can be unfaithful.",
"author": "Hermans, J., Delaunoy, A., Rozet, F., Wehenkel, A., Begy, V., and Louppe, G.",
"venue": "arXiv preprint arXiv:2110.06581, 2021.",
"url": null
}
},
{
"18": {
"title": "Solving high-dimensional parameter inference: marginal posterior densities & moment networks.",
"author": "Jeffrey, N. and Wandelt, B. D.",
"venue": "arXiv preprint arXiv:2011.05991, 2020.",
"url": null
}
},
{
"19": {
"title": "Learning summary statistic for approximate Bayesian computation via deep neural network.",
"author": "Jiang, B., Wu, T.-y., Zheng, C., and Wong, W. H.",
"venue": "Statistica Sinica, pp. 1595\u20131618, 2017.",
"url": null
}
},
{
"20": {
"title": "A contribution to the mathematical theory of epidemics.",
"author": "Kermack, W. O. and McKendrick, A. G.",
"venue": "Proceedings of the royal society of london. Series A, Containing papers of a mathematical and physical character, 115(772):700\u2013721, 1927.",
"url": null
}
},
{
"21": {
"title": "Regression quantiles.",
"author": "Koenker, R. and Bassett Jr, G.",
"venue": "Econometrica: journal of the Econometric Society, pp. 33\u201350, 1978.",
"url": null
}
},
{
"22": {
"title": "pmwd: A differentiable cosmological particle-mesh N-body library.",
"author": "Li, Y., Lu, L., Modi, C., Jamieson, D., Zhang, Y., Feng, Y., Zhou, W., Kwan, N. P., Lanusse, F., and Greengard, L.",
"venue": "arXiv preprint arXiv:2211.09958, 2022a.",
"url": null
}
},
{
"23": {
"title": "Differentiable cosmological simulation with adjoint method.",
"author": "Li, Y., Modi, C., Jamieson, D., Zhang, Y., Lu, L., Feng, Y., Lanusse, F., and Greengard, L.",
"venue": "arXiv preprint arXiv:2211.09815, 2022b.",
"url": null
}
},
{
"24": {
"title": "Revisiting classifier two-sample tests.",
"author": "Lopez-Paz, D. and Oquab, M.",
"venue": "arXiv preprint arXiv:1610.06545, 2016.",
"url": null
}
},
{
"25": {
"title": "Decoupled weight decay regularization.",
"author": "Loshchilov, I. and Hutter, F.",
"venue": "arXiv preprint arXiv:1711.05101, 2017.",
"url": null
}
},
{
"26": {
"title": "Analytical note on certain rhythmic relations in organic systems.",
"author": "Lotka, A. J.",
"venue": "Proceedings of the National Academy of Sciences, 6(7):410\u2013415, 1920.",
"url": null
}
},
{
"27": {
"title": "Flexible statistical inference for mechanistic models of neural dynamics.",
"author": "Lueckmann, J.-M., Goncalves, P. J., Bassetto, G., \u00d6cal, K., Nonnenmacher, M., and Macke, J. H.",
"venue": "Advances in neural information processing systems, 30, 2017.",
"url": null
}
},
{
"28": {
"title": "Likelihood-free inference with emulator networks.",
"author": "Lueckmann, J.-M., Bassetto, G., Karaletsos, T., and Macke, J. H.",
"venue": "In Symposium on Advances in Approximate Bayesian Inference, pp. 32\u201353. PMLR, 2019.",
"url": null
}
},
{
"29": {
"title": "Benchmarking simulation-based inference.",
"author": "Lueckmann, J.-M., Boelts, J., Greenberg, D., Goncalves, P., and Macke, J.",
"venue": "In International Conference on Artificial Intelligence and Statistics, pp. 343\u2013351. PMLR, 2021.",
"url": null
}
},
{
"30": {
"title": "Bias correction, quantile mapping, and downscaling: Revisiting the inflation issue.",
"author": "Maraun, D.",
"venue": "Journal of Climate, 26(6):2137\u20132143, 2013.",
"url": null
}
},
{
"31": {
"title": "Statistical rethinking: A Bayesian course with examples in R and Stan.",
"author": "McElreath, R.",
"venue": "CRC press, 2020.",
"url": null
}
},
{
"32": {
"title": "Quantile regression forests.",
"author": "Meinshausen, N. and Ridgeway, G.",
"venue": "Journal of machine learning research, 7(6), 2006.",
"url": null
}
},
{
"33": {
"title": "Truncated marginal neural ratio estimation.",
"author": "Miller, B. K., Cole, A., Forr\u00e9, P., Louppe, G., and Weniger, C.",
"venue": "Advances in Neural Information Processing Systems, 34:129\u2013143, 2021.",
"url": null
}
},
{
"34": {
"title": "Sensitivity analysis of simulation-based inference for galaxy clustering, 2023.",
"author": "Modi, C., Pandey, S., Ho, M., Hahn, C., Blancard, B. R.-S., and Wandelt, B.",
"venue": null,
"url": null
}
},
{
"35": {
"title": "Numerical computing with MATLAB.",
"author": "Moler, C. B.",
"venue": "SIAM, 2004.",
"url": null
}
},
{
"36": {
"title": "Scalable inference with autoregressive neural ratio estimation.",
"author": "Montel, N. A., Alvey, J., and Weniger, C.",
"venue": "arXiv preprint arXiv:2308.08597, 2023.",
"url": null
}
},
{
"37": {
"title": "Fast -free inference of simulation models with bayesian conditional density estimation.",
"author": "Papamakarios, G. and Murray, I.",
"venue": "Advances in neural information processing systems, 29, 2016.",
"url": null
}
},
{
"38": {
"title": "Normalizing flows for probabilistic modeling and inference, arxiv e-prints.",
"author": "Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B.",
"venue": "arXiv preprint arXiv:1912.02762, 2019a.",
"url": null
}
},
{
"39": {
"title": "Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows.",
"author": "Papamakarios, G., Sterratt, D., and Murray, I.",
"venue": "In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 837\u2013848. PMLR, 2019b.",
"url": null
}
},
{
"40": {
"title": "Pytorch: An imperative style, high-performance deep learning library.",
"author": "Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.",
"venue": "Advances in neural information processing systems, 32, 2019.",
"url": null
}
},
{
"41": {
"title": "Population growth of human y chromosomes: a study of y chromosome microsatellites.",
"author": "Pritchard, J. K., Seielstad, M. T., Perez-Lezaun, A., and Feldman, M. W.",
"venue": "Molecular biology and evolution, 16(12):1791\u20131798, 1999.",
"url": null
}
},
{
"42": {
"title": "Bayesflow: Learning complex stochastic models with invertible neural networks.",
"author": "Radev, S. T., Mertens, U. K., Voss, A., Ardizzone, L., and K\u00f6the, U.",
"venue": "IEEE transactions on neural networks and learning systems, 33(4):1452\u20131466, 2020.",
"url": null
}
},
{
"43": {
"title": "Variational inference with normalizing flows.",
"author": "Rezende, D. and Mohamed, S.",
"venue": "In International conference on machine learning, pp. 1530\u20131538. PMLR, 2015.",
"url": null
}
},
{
"44": {
"title": "Beyond expectation: Deep joint mean and quantile regression for spatiotemporal problems.",
"author": "Rodrigues, F. and Pereira, F. C.",
"venue": "IEEE transactions on neural networks and learning systems, 31(12):5377\u20135389, 2020.",
"url": null
}
},
{
"45": {
"title": "Graphical test for discrete uniformity and its applications in goodness-of-fit evaluation and multiple sample comparison.",
"author": "S\u00e4ilynoja, T., B\u00fcrkner, P.-C., and Vehtari, A.",
"venue": "Statistics and Computing, 32(2):32, 2022.",
"url": null
}
},
{
"46": {
"title": "Adaptive approximate bayesian computation tolerance selection.",
"author": "Simola, U., Cisewski-Kehe, J., Gutmann, M. U., and Corander, J.",
"venue": "Bayesian analysis, 16(2):397\u2013423, 2021.",
"url": null
}
},
{
"47": {
"title": "Sequential monte carlo without likelihoods.",
"author": "Sisson, S. A., Fan, Y., and Tanaka, M. M.",
"venue": "Proceedings of the National Academy of Sciences, 104(6):1760\u20131765, 2007.",
"url": null
}
},
{
"48": {
"title": "Neural spline search for quantile probabilistic modeling.",
"author": "Sun, R., Li, C.-L., Arik, S. \u00d6., Dusenberry, M. W., Lee, C.-Y., and Pfister, T.",
"venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 9927\u20139934, 2023.",
"url": null
}
},
{
"49": {
"title": "Validating bayesian inference algorithms with simulation-based calibration.",
"author": "Talts, S., Betancourt, M., Simpson, D., Vehtari, A., and Gelman, A.",
"venue": "arXiv preprint arXiv:1804.06788, 2018.",
"url": null
}
},
{
"50": {
"title": "Nonparametric quantile regression: Non-crossing constraints and conformal prediction.",
"author": "Tang, W., Shen, G., Lin, Y., and Huang, J.",
"venue": "arXiv preprint arXiv:2210.10161, 2022.",
"url": null
}
},
{
"51": {
"title": "Inferring coalescence times from dna sequence data.",
"author": "Tavar\u00e9, S., Balding, D. J., Griffiths, R. C., and Donnelly, P.",
"venue": "Genetics, 145(2):505\u2013518, 1997.",
"url": null
}
},
{
"52": {
"title": "Approximate bayesian computation scheme for parameter inference and model selection in dynamical systems.",
"author": "Toni, T., Welch, D., Strelkowa, N., Ipsen, A., and Stumpf, M. P.",
"venue": "Journal of the Royal Society Interface, 6(31):187\u2013202, 2009.",
"url": null
}
},
{
"53": {
"title": "The camels project: Cosmology and astrophysics with machine-learning simulations.",
"author": "Villaescusa-Navarro, F., Angl\u00e9s-Alc\u00e1zar, D., Genel, S., Spergel, D. N., Somerville, R. S., Dave, R., Pillepich, A., Hernquist, L., Nelson, D., Torrey, P., et al.",
"venue": "The Astrophysical Journal, 915(1):71, 2021.",
"url": null
}
}
],
"url": "http://arxiv.org/html/2401.02413v2"
}