context (string, 100–12k chars) | A (string, 100–5.1k chars) | B (string, 100–6.02k chars) | C (string, 100–4.6k chars) | D (string, 100–4.68k chars) | label (4 classes)
|---|---|---|---|---|---|
They generally achieve coverage close to the nominal level. For large error metrics relative to sample size, the vertex and double-or-nothing bootstrap methods can be considered as good alternatives. | In the case of the $\mathtt{FAR}$ (false acceptance rate), the subsets and two-level bootstrap techniques fail to achieve nominal coverage at any level of the error metric, while the naive Wilson interval, where one neglects to account for data dependence, shrinks with growing $\mathtt{FAR}$... | We strongly advise against using naive Wilson intervals, subsets, and two-level bootstrap techniques. | can lead to different conclusions due to miscoverage. Six methods for computing estimates and corresponding 95% confidence intervals on synthetic data for the false accept rate ($\mathtt{FAR}$) of two 1:1 matching algorithms (A and B) that have underlying equal accuracy ($\mathtt{FAR}=10^{-1}$... | We provide a review of two classes of methods for constructing confidence intervals for matching tasks, one based on parametric assumptions, and the other on nonparametric, resampling-based methods. The reviewed methods include the Wilson intervals without (naive version) and with variance adjusted for data dependence,... | B |
Is there a lightweight method that provably minimizes the original objective in (1), when the participation statistics of clients are unknown, uncontrollable, and heterogeneous? | We leverage the insight that we can apply different weights to different clients’ updates in the parameter aggregation stage of FedAvg. If this is done properly, the effect of heterogeneous participation can be canceled out so that we can minimize (1), as shown in existing works that assume known participation statisti... | Most existing works on FL with partial client participation assume that the clients participate according to a known or controllable random process (Karimireddy et al., 2020, Yang et al., 2021, Chen et al., 2022, Fraboni et al., 2021a, Li et al., 2020b; c). | Earlier works on FedAvg considered the convergence analysis with full client participation (Gorbunov et al., 2021, Haddadpour et al., 2019, Lin et al., 2020, Stich, 2019, Wang & Joshi, 2019; 2021, Yu et al., 2019, Malinovsky et al., 2023), which do not capture the fact that only a subset of clients participates in each... | Some previous works have discovered this need of debiasing the skewness of client participation (Li et al., 2020c, Perazzone et al., 2022) or designing the client sampling scheme to ensure that the updates are unbiased (Fraboni et al., 2021a, Li et al., 2020b). However, in our work, we consider the more realistic case ... | A |
Table 2: Proteins surpassing the Benjamini-Hochberg corrected p-value threshold 0.05. Associations denoted with an X are those that had a pQTL surpassing the genome-wide significance threshold $5\times 10^{-8}$ for association with the trait. | Recent studies suggest that protein expression prediction models do not generalize well across different ancestral groups. For example, Zhang et al. (2022) found that models trained on data collected on individuals of European ancestry (EA) did not perform well when predicting protein expression in individuals of Afri... | In this work, we focus on proteome-wide association studies (PWAS) specific to individuals of African ancestry. Proteomics are important because many diseases manifest through changes in protein expression, so proteome-wide association studies can identify novel biomarkers and drug targets (Kavallaris and Marshall, 20... | Focusing only on proteins whose average testing set $R^{2}\geq .01$ using our method, we tested 286 proteins in total. To perform the MetaXcan, we obtained LD matrices using the AA individual level genotyping data from the WHI. Then, we teste... | We performed the group-specific association analysis of genetically predicted protein expression based on our fitted models with blood lipid traits with GWAS summary statistics using MetaXcan (Barbeira et al., 2018). | D |
We can see in Figure 1 that the optimal variance is reached for the OSSGD (as for the MLE, AVSGD and ADSGD), which naturally outperforms the non-optimal variance of the slowly converging SGD. It is worth noting the relative bias for samples of finite size of the AVSGD when the initial value $\vartheta_{0}$... | In order to improve the convergence rate of the gradient descent algorithm, we propose in the following the one-step procedure starting from an initial guess estimator taken from the projected stochastic gradient algorithm. This procedure is shown to be faster than the classical computation of the MLE but still asympto... | In the following, Section 2 is dedicated to notations and known results of convergence rates for stochastic gradient descent (SGD), stochastic gradient descent with averaging (AVSGD), adaptive gradient descent (ADSGD) and maximum likelihood estimation (MLE). The main result on (strong) consistency and asymptotic norm... | In terms of computation time, the OSSGD (as the AVSGD) is more than 3 times faster than the MLE. In comparison, the ADSGD is more than two times faster. | We can see in Figure 1 that the optimal variance is reached for the OSSGD (as for the MLE, AVSGD and ADSGD), which naturally outperforms the non-optimal variance of the slowly converging SGD. It is worth noting the relative bias for samples of finite size of the AVSGD when the initial value $\vartheta_{0}$... | C |
$\hat{\mathcal{L}}(\theta,X,Y,l)=\mathcal{L}(\theta,X,Y,l)-n\left(\ln(b)+\frac{1}{\kappa b}+2\ln\kappa\right).$ | MIXALIME can produce standard errors of MLEs on a user request. Standard errors are calculated with the help of the Rao–Cramér inequality that provides a lower bound on the estimates’ variance: | A user engages with MIXALIME via a command-line interface. The package provides complete documentation of its features alongside a small tutorial through the help command: | The BetaNB model tends to provide ultra-conservative P-value estimates, see Section 9 for details of the scoring procedure. This happens due to the fact that the beta negative binomial distribution is significantly more heavy-tailed than a negative binomial distribution for small values of $\kappa$. Therefore, it ... | Nota bene: Although MIXALIME will output standard errors when requested for MAP estimates of the BetaNB regularized model (see Section 7.1), they should be ignored. | A |
While this is a relatively simple setting for trial design simulation, considering $U$ values for $u$, $N$ values for sample size, $T$ values for $\eta$ and $\gamma$ (keeping other model parameters fixed), and $M$ simulation iterations (noting that $M>10{,}000$... | Figure 2 shows the estimated curves and 95% credible intervals for power as a function of sample size and a range of effect sizes which are different from those included in the simulation scenarios. The black solid circles are simulation-based estimates of power for the 25 scenarios listed in rows 2-6 of Table 1. This ... | Figure 1 shows the estimated curves (posterior median) and 95% credible intervals (equal-tailed posterior quantiles) for type I error rate as a function of sample size for the adjusted and unadjusted models. These results are obtained by fitting the models in Section 2 to the simulated sampling distribution (with $M=100$... | The selection of simulation scenarios at which the sampling distribution is simulated is important, as these simulations play the role of data in the proposed approach. The selection of these scenarios is therefore a design problem. For type I error rate, this boils down to selecting a sequence of sample size values. W... | Figure 3 shows these results for the type I error rate, where the training set includes sample sizes $n=20,40,60,80,100,200,1000$; the test set is defined as the remaining sample sizes listed in the first row of Table 1. The grey... | B |
Second, while we focus on situations where the value of discoveries is described by weights $w(A)$ decreasing in $|A|$, in different contexts it might be useful to consider other evaluations: the procedures we will describe adapt to any, as long as the weights are fixed. | The constraint (5) expresses our interest in obtaining non-redundant rejections. In section 5.1 and appendix LABEL:appendix:emlkf_description we will discuss, instead, procedures that lead to discoveries at multiple resolutions, in a coordinated fashion. | Using knockoff e-values we can overcome this limitation, as we describe in the appendix LABEL:appendix:global_partial_description. Testing these partial conjunction hypotheses can also be combined with testing across multiple levels of resolution. Indeed, in our application to the UK Biobank data in section 7, we test ... | Self-consistency can also be used to define other multiple comparisons procedures based on e-values. For example, as mentioned in Wang and Ramdas (2022), one can construct the equivalence of Focused BH (Katsevich et al., 2021) for e-values. We do so precisely in appendix LABEL:appendix:focusedeBH_vs_kelp, obtaining a p... | E-values can be used to develop an analog of the p-filter, as mentioned by Wang and Ramdas (2022). We include a description of the e-filter in appendix LABEL:appendix:emlkf_description. | A |
Related methods. Contrastive-based SSL methods are the most suitable choice for these two tasks since the core of contrastive learning is identifying positive and negative samples. Specifically, TS-TCC [116] introduces temporal contrast and contextual contrast in order to obtain more robust representations. TS2Vec [118... | The rest of the article is organized as follows. Section 2 provides some review literature on SSL and time series data. Section 3 to Section 5 describe the generation-based, contrastive-based, and adversarial-based methods, respectively. Section 6 lists some commonly used time series data sets from the application pers... | In this section, we point out some critical problems in current studies and outline several research directions worthy of further investigation. | Abundant future directions. We point out key problems in this field from both applicative and methodology perspectives, analyze their causes and possible solutions, and discuss future research directions for time series SSL. We strongly believe that our efforts will ignite further research interests in time series SSL. | In this section, the definition of time series data is first introduced, and then several recent reviews on SSL and time series analysis are scrutinized. | B |
The kernel $\mathcal{K}_{1}$ is a tensor product kernel | The last row shows the contribution of $\mathcal{K}_{2}$, | The kernel $\mathcal{K}_{1}$ is a tensor product kernel | The kernel $\mathcal{K}_{1}$ indeed perfectly captures the | The second kernel $\mathcal{K}_{2}$ is a very rough exponential | D |
5.3.2 Modelling $[Z(\cdot)\mid Y(\cdot),\omega(\cdot),u(\cdot)]$ | Table 2 presents the posterior estimates of the finite population PIFV (first row) from the four models described above. The performance of these models was evaluated using $D$ and $GRS$ defined in (29) and (30), respectively. Based upon these metrics, model (iii) is marginal... | While the sampling indicator process $Z(\cdot)$ “disappears” from the likelihood or the posterior for ignorable designs, we can model non-ignorable designs using such a process. For example, we consider the model | A crucial observation is that the conditional distribution $p(Z\mid Y,\phi)$ models the sampling design and is accounted for in the finite population inference. For example, in the models considered in Sections 2–4 $p(Z\mid y,\phi)=p(Z)$... | I will share some perspectives on Bayesian inference for finite population quantities with an emphasis on dependent finite populations. [42] comments that advocating Bayesian inference for survey sampling is akin to “swimming upstream”, given the aversion that many survey statisticians have to modelling assumptions, bu... | B |
$\lim_{a\to L(\boldsymbol{\theta})-m_{\boldsymbol{\theta}}}\mathbb{P}_{D\sim\nu^{n}}[\cdots]\;\cdots\;e^{-n{\cal I}_{\boldsymbol{\theta}}(a)}\,.$ | As we show in this work, these bounds are directly connected to Large Deviation Theory (LDT) (Ellis, 2012) because their complexity measure ${\cal C}({\cal A}(D),n,\nu,\delta)$ directly depends on the so-called rate ... | On the other hand, Cramér’s Theorem (Cramér, 1938) states that Chernoff’s bound is exponentially tight for large $n$. Formally, this statement is written as follows, | Theorem 4.5 states that an interpolator generalizes better than another (with h.p.) if it is sufficiently smoother in terms of its rate function. Figure 4 illustrates the premise of this theorem. We should note that the above result holds even for the log-loss, which is the default loss used for training, and for over-... | The following result is an adaptation of the PAC-Chernoff bound given in Theorem 4.1 for this setup, which describes the effect of using the data-augmented loss on the generalization error of interpolators. | B |
In fact, for any two invertible functions $h_{1}$ and $h_{2}$ that satisfy the implicit autoregressive constraint, i.e., for all $d$, $h_{1}\circ f_{d}\circ h_{2}\in\mathcal{F}_{A}$... | In this paper, we strive to balance practicality and theoretical guarantees by answering the question: “Can we theoretically and practically estimate domain counterfactuals without the need to recover the ground-truth causal structure?” | With weak assumptions about the true causal model and available data, we analyze invertible latent causal models and show that it is possible to estimate domain counterfactuals both theoretically and practically, where the estimation error depends on the intervention sparsity. | Ultimately, this result implies that to estimate domain counterfactuals, we indeed do not require the recovery of the latent representations or the full causal model. | In contrast, we show that estimating DCFs is easier than estimating the latent causal representations and may require fewer assumptions in Section 2.2. | C |
RMT is a powerful tool for describing the spectral statistics of complex systems. It is particularly useful for systems that are chaotic but also have certain coherent structures. The theory predicts universal statistical properties, provided that the underlying matrix ensemble is large enough to sufficiently fill the ... | In this section, we analyze the feature-feature covariance matrix for datasets of varying size, complexity, and origin. We consider real-world as well as correlated and uncorrelated gaussian datasets, establish a power-law scaling of their eigenvalues, and relate it to a correlation length. | We interpret $\Sigma_{M}$ for real world data as a single realization, drawn from the space of all possible Gram matrices which could be constructed from sampling the underlying population distribution. In that sense, $\Sigma_{M}$ ... | To demonstrate that a similar universal structure is also observed for correlation matrices resulting from datasets, we will employ several diagnostic tools widely used in the field of quantum chaos. We will analyze the global and local spectral statistics of empirical covariance matrices generated from three classes o... | What are the universal properties of datasets that can be gleaned from the empirical covariance matrix and how are they related to local and global statistical properties of RMT? | C |
$\mathcal{R}_{ZW,2}(\lambda_{j})=\left|\widehat{\lambda}_{j,WPK}-\lambda_{j}\right|\Big{/}\left|\widehat{\lambda}_{j,ZW,2}-\lambda_{j}\right|.$ | Figure 7: Our method compared to the one in Zhou et al. (2022): the ratio $\mathcal{R}_{ZW,2}(\psi_{j})$ of the $L^{2}$-... | For the eigenfunctions, we use the ratio of the $L^{2}$-norm errors, given by the square root of | respectively. Here, $\|\cdot\|_{2}$ denotes the $L^{2}$-norm. The risks depend on the bandwidth and the challenge is to find a way to select the bandwidths which minimize them. | Figure 5: Our method compared to the one in Zhou et al. (2022): the ratio $\mathcal{R}_{ZW,2}(\psi_{j})$ of the $L^{2}$-... | B |
The objective of our work was, therefore, to propose a new location-scale joint model accounting for both time-dependent individual variability of a marker and competing events. To do this, we extended the model proposed by Gao et al. (Gao et al., 2011) and Barrett et al. (Barrett et al., 2019) to include a time-dependen... | The analysis of the PROGRESS trial has shown that a high variability of blood pressure is associated with a high risk of CVD and death from other causes. Moreover, the individual residual variability depends on treatment group. These results are difficult to generalise to the entire population as the population study c... | This paper is organized as follows. Section 2 describes the model and the estimation procedure using a robust algorithm for maximizing the likelihood. Section 3 presents a simulation study to assess the estimation procedure performance. In section 4, the model is applied to the data from the Perindopril Protection Agai... | In order to evaluate the performance of the estimation procedure, we performed a simulation study using a design similar to the application data. | In this work, we have proposed a new joint model with a subject-specific time-dependent variance that extends the models proposed by Gao et al. (Gao et al., 2011) and Barrett et al. (Barrett et al., 2019). Indeed, this new model allows time and covariate dependent individual variance and a flexible dependence structure... | B |
$=\frac{1}{\ln 2}\Big[\frac{1}{2}\ln\big((2\pi e)^{N}\det(\Sigma)\big)+\sum_{i=1}^{N}\langle\ln(x_{i})\rangle\Big].$ | Our major assumption in this section is that both firing rates $\vec{f}$ and | where $Cov_{XY}$ is the $n\times k$ covariance matrix between variables | Now, we partition our dimensionality $N$ into two parts, $N=n+k$, | The covariance $Cov_{XY}$ is an $n\times k$ sparse matrix taken with one nonzero element | C |
We have presented a data-driven model for RANS simulations that quantifies and propagates in its predictions an often neglected source of uncertainty, namely the aleatoric, model uncertainty in the closure equations. We have combined this with a parametric closure model which employs a set of tensor basis functions tha... | In order to address these limitations, we advocate incorporating the RANS model in the training process. This enables one to use indirect data (e.g., mean velocities and pressure) obtained from higher-fidelity simulations or experiments as well as direct data (i.e. RS tensor observables) if this is available. In the su... | This is in contrast to the majority of efforts in data-driven RANS closure modeling [22, 15, 65, 17], which employ direct RS data. In the ensuing numerical illustrations, the data is obtained from higher-fidelity computational simulations, but one could readily make use of actual, experimental observations. | The indirect data i.e. velocities/pressures as in the Equation (22), could be complemented with direct, RS data at certain locations of the problem domain. This could be beneficial in improving the model’s predictive accuracy and generalization capabilities. | We have demonstrated how the model can be trained using sparse, indirect data, namely mean velocities/pressures in contrast to the majority of pertinent efforts that require direct, RS data. While the training data in our illustrations arose from a higher-fidelity model, one can readily envision using experimental obse... | D |
Table 4: Average of the number of intervals which contain at least one change point location (no. genuine), the proportion of intervals returned which contain at least one change point location (prop. genuine), the average length of intervals returned (length), and whether all intervals returned contain at least one c... | Average of the number of intervals which contain at least one change point location (no. genuine), the proportion of intervals returned which contain at least one change point location (prop. genuine), the average length of intervals returned (length), and whether all intervals returned contain at least one change poi... | Table 4: Average of the number of intervals which contain at least one change point location (no. genuine), the proportion of intervals returned which contain at least one change point location (prop. genuine), the average length of intervals returned (length), and whether all intervals returned contain at least one c... | Next we investigate the performance of our method and its competitors on test signals containing change points. To investigate performance we apply each method to 500 sample paths from the change point models M1, M2, and M3 listed below, contaminated with each of the four noise types introduced in Section 4.2 ... | Table 5: Average of the number of intervals which contain at least one change point location (no. genuine), the proportion of intervals returned which contain at least one change point location (prop. genuine), the average length of intervals returned (length), and whether all intervals returned contain at least one c... | D |
We first re-examine classical reinforcement learning problems, formulated with the bottleneck objectives as introduced in Section III-D1. In many classical optimal control and reinforcement learning applications, the agent’s success is largely based on its ability to avoid failure or defeat. This is particularly the ca... | To solve the CartPole task with the Q-Min algorithm, when the pole falls outside of the pre-defined angle range ($\pm 12^{\circ}$ from the upright position), we assign a negative reward of -1 to the agent. To encourage the agent to postpone ne... | To solve Atari with the proposed Q-Min algorithm, we utilize a simple reward scheme under Q-Min: we assign a negative reward of -1 to the agent each time it fails to catch the ball with the paddle, and set $\gamma=0.98$ in Eq. 49 to encourage the agent to postpone such failure events. For learn... | To formulate the task with the bottleneck objective for such classical tasks, we assign a negative reward to the agent when an undesired or failure event occurs after executing a certain action. For the other actions that do not directly lead to the failure events, we simply assign a zero intermediate reward. In the Ca... | Conventionally, both tasks are formulated with the cumulative objective, each with an incremental rewarding scheme. In the CartPole task, a positive reward is assigned to the agent for every timestep it maintains the pole in the upright position; while in Atari, a positive reward is assigned each time the agent breaks ... | D |
Furthermore, one could also extend the proposed method to a continuous $y$, for instance, between 0 and 1, describing the severity of the disease. Indeed, practitioners could define a function $\sigma_{p}(y)$ that would ... | Background samples ($y=0$) salient space is set to an informationless value $s^{\prime}=0$. | In the salient prior regularization, as in previous works, we encourage background and target salient factors to match two different Gaussian distributions, both centered in 0 (we assume $s^{\prime}=0$) but with different covariance. | , as in (Zou et al., 2022), that $p(s\mid x,y=0)\sim\mathcal{N}(s^{\prime},\sqrt{\sigma_{p}}\,I)$ | $\mathcal{L}_{\mathrm{rec}}=\sum_{i=1}^{N}\|x-d_{\theta}([c,\,ys+(1-y)s^{\prime}])\|_{2}^{2}$ ... | B |
This expression of the ML estimator is relatively well known; see e.g. Section 4.2.2 in Xu and Stein (2017) or Proposition 7.5 in Karvonen and Oates (2023). | On the other hand, the CV estimator $\hat{\sigma}_{\mathrm{CV}}^{2}$... | $\sigma^{2}>0$, where $k$ is a fixed kernel, and study the estimation of $\sigma^{2}$ using the CV and ML estimators, denoted as $\hat{\sigma}_{\mathrm{CV}}^{2}$... | $=\mathcal{O}(N^{1-2\alpha})\to 0$... | $\sum_{n=0}^{N-1}[f(x_{N,n+1})-f(x_{N,n})]^{2}\leq NL^{2}\max_{n}(\Delta x_{N,n})^{2\alpha}$... | A |
When the true incident count $n$ is large, the terms $n^{-1/2+\epsilon/p}$ and $e^{-n^{2\epsilon}/3p}$... | The theorem, denoted as (3), offers a recovery limit concerning the optimization problem expressed in (7). This recovery bound is a key measure as it reflects the potential effectiveness and accuracy of our proposed solution. It serves as a performance metric for how closely the solution we obtain aligns with the true ... | In the following section, we want to provide more insights and theoretical guarantees on the optimization problem we formulate and GRAUD. | In this section, we present theoretical results concerning the uniqueness and the accuracy of the solution to our proposed optimization problem (7). | In this paper, we proposed a novel graph prediction method for debiasing under-count data. The idea is to utilize the intrinsic graph structure of the problem and thus overcome the identifiability issue. We reformulate the problem as a constrained convex optimization problem and establish the connection between the bin... | C |
$\mathcal{D}_{\psi_{t}}(\bm{p},\bm{q})=\psi_{t}(\bm{p})-\psi_{t}(\bm{q})-\langle\nabla\psi_{t}(\bm{q}),\bm{p}-\bm{q}\rangle$ | Technical Contributions. Our first contribution is proposing a multi-layer online ensemble approach with effective collaboration among layers, which is achieved by a carefully-designed optimism to unify different kinds of functions and cascaded correction terms to improve the algorithmic stability within the multi-la... | At the end of this part, we explain why we choose MsMwC as the meta algorithm. Apparently, a direct try is to keep using Adapt-ML-Prod following Zhang et al. (2022a). However, it is still an open problem to determine whether Adapt-ML-Prod contains negative stability terms in the analysis, which is essential to realize ... | It is worth noting that MsMwC is based on OMD, which is well-studied and proved to enjoy negative stability terms in analysis. However, the authors omitted them, which turns out to be crucial for our purpose. In Lemma 2 below, we extend Lemma 1 of Chen et al. (2021) by explicitly exhibiting the negative terms in MsMwC.... | In this paper, we obtain universal gradient-variation guarantees via a multi-layer online ensemble approach. We first propose a novel optimism design to unify various kinds of functions. Then we analyze the negative terms of the meta algorithm MsMwC and inject cascaded correction terms to improve the algorithmic stabil... | C |
Publicly available datasets such as those from NASA [27, 28], CALCE [29, 30], and Sandia National Lab [31] contain cells of different chemistries cycled under a range of charge rates, discharge rates, and temperatures. These datasets are frequently used in research studies since they comprehensively report capacity, in... | Despite this growing body of research, many fundamental questions about battery life modeling remain unanswered. One fundamental issue is that, in order to train machine learning models to predict lifetime from early-life cycles, data from the entire lifetime is required. Therefore these approaches are best suited to a... | In this work, we investigate new early-life features derived from capacity-voltage data that can be used to predict the lifetimes of cells cycled under a wide range of charge rates, discharge rates, and depths of discharge. To study this, we generated a new battery aging dataset from 225 nickel-manganese-cobalt/graphit... | Dataset partitioning was done at the group rather than the cell level, for three reasons. First, practical battery aging tests for product validation typically cycle multiple cells under the same conditions to capture the aging variability due to manufacturing. Second, it is desirable to build an early prediction model... | In light of this, we designed our battery aging dataset to study more cells under a broader range of operating conditions than current publicly available datasets [26]. Our dataset comprises 225 cells cycled in groups of four to capture some of the intrinsic cell-to-cell aging variability [32]. A unique feature of our ... | D |
The data used to compare methods are available from the Zenodo repository (https://doi.org/10.5281/zenodo.5048449) as compiled by Squair and coauthors [58]. Reversion scRT-qPCR data are available in the SRA repository number SRP076011, and fully described in the original publication [65]. Single-cell chIP-Seq data can ... | Details on the experiment and on the data can be found in the original paper [65]. The kernel-based testing framework was performed on the $\log(x+1)$ normalized RT-qPCR data and on the Pearson residuals of the 2000 most variable genes of the scRNA-Seq data obtained through the R pac... | Simulations are required to compare the empirical performance of DE methods on controlled designs, to check their type-I error control and compare their power on targeted alternatives. We challenged our kernel-based test with six standard DEA methods (Table S.1) on mixtures of zero-inflated negative binomial data repro... | In the simulations, the ZI-Gauss kernel was computed using the parameters of the Binomial distributions used to determine the drop-out rates of the simulated data (drawn uniformly in $[0.7,0.9]$), the variance parameter $\sigma$ was set as the median distance between the non-zero obse... | The research was supported by a grant from the Agence Nationale de la Recherche ANR-18-CE45-0023 SingleStatOmics, by the projects AI4scMed, France 2030 ANR-22-PESN-0002, and SIRIC ILIAD (INCA-DGOS-INSERM-12558). | D |
We hypothesise that the first issue is due to the use of the max aggregator function, which backpropagates gradients only along the largest of the similar values, making it harder for the learning process to identify whether it made a suboptimal choice. We propose to use softmax instead of max as aggregator, allowing g... | We hypothesise that the first issue is due to the use of the max aggregator function, which backpropagates gradients only along the largest of the similar values, making it harder for the learning process to identify whether it made a suboptimal choice. We propose to use softmax instead of max as aggregator, allowing g... | The second issue for the Bellman-Ford algorithm happens when accumulating distances between nodes. The issue is that depending on the graph connectivity the distribution of distances and the embeddings in latent space can change drastically. We propose a simple fix – decaying the magnitude of the embedding by a fixed r... | The second weakness is that the model tends to struggle when encountering out-of-distribution values during algorithm execution. We propose that GNN should decay magnitude of the representations at each step, allowing slightly out-of-range values to become within range during the execution of the algorithm. We show tha... | As the model is struggling with large out-of-distribution values, we propose using a decay-like regularisation where, at every message passing step, we scale the embeddings by a constant $c<1$. We show in the following section that this provides improvements not only on the Bellman-Ford algorithm, but o... | B |
This could be due to the Chinese New Year holidays and the imposition of a restriction on large trucks in that particular area of the southern region. | Figure 3 provides a zoomed-in view on the last seven days for the same three time series (hence, from 2021-04-25 23:00:00 to 2021-04-30 22:00:00). | Figure 2 displays three examples of Taiwan highway hourly traffic time series corresponding to different vehicle types in different regions, stations, and traffic directions (for four months, from 2021-01-10 23:00:00 to 2021-04-30 22:00:00). | Examples of Taiwanese highway hourly time series, zoomed in view on the last seven days of the time series shown in Figure 2 (from 2021-04-23 23:00:00 to 2021-04-30 22:00:00). | Examples of Taiwanese highway hourly time series (from 2021-01-10 23:00:00 to 2021-04-30 22:00:00) in three regions for different stations, traffic directions, and vehicle types. | D |
Output: Estimator $\hat{\beta}$ for $\beta$, an estimator of its asymptotic variance $\hat{V}$ and $1-\alpha$ level confidence interval $\hat{C}(\alpha)$ for $\beta$... | Algorithm 1 introduced a generic approach for incorporating weight functions learnt from the data into an estimator for $\beta$ via approximately minimising the sandwich loss over some class of functions $\mathcal{W}$. We now introduce an approach for performing this approximate minimisation ove... | Given a class $\mathcal{W}$ of functions $W$, we then propose to find an (approximate) minimiser $\hat{W}$ of $\hat{L}_{\mathrm{SL}}$ over $\mathcal{W}$... | In this work we have highlighted and clarified the shortcomings of some popular classical methods in the estimation of weights for weighted least squares-type estimators in partially linear models when the conditional covariance is misspecified. We instead advocate for choosing weights to minimise a sandwich estimate o... | over some class of functions $\mathcal{W}$ corresponding to a working covariance structure (e.g. using sandwich boosting, see Section 3.2). | A |
This work highlights the importance of using a properly constrained null model when extracting the backbone of bipartite projections, and identifies several avenues for future research. First, while $\mathbf{Q}$ under the SDSM can be estimated quickly and precisely using the BiCM [14, 15], $\mathbf{Q}$ ... | Data availability statement. The data and code necessary to reproduce the results reported above are available at https://osf.io/7z4gu. | Many null models exist for extracting the backbone of bipartite networks, with each model specifying different constraints on the random networks against which an observed network is compared. However, none of the existing models permit constraints on specific edges. In this paper, we extend the fastest and most robust... | Figure 3 illustrates two backbones extracted from these data, using shape to represent classroom (circles = 3-year-olds, squares = 4-year-olds) and color to represent attendance status (black = full day, gray = AM only, white = PM only). Figure 3A was extracted using the SDSM and therefore does not consider these edge ... | These data were collected in Spring 2013 by observing the behaviors of 53 children in a preschool in the Midwestern United States [3, 6, 7, 8]. A scan observation method was employed whereby a randomly selected child was observed for a period of 10 seconds. After the 10 second period had elapsed, the trained observer c... | A |
$\mathrm{D}(\gamma,\beta)=E[(X^{T}\boldsymbol{\gamma}+(1,Z)\beta)\exp(\mathbf{i}W^{T}\cdot)]\,.$ | (a) $E[Y^{2}]<\infty$; (b) | Assumption 3.1 holds and $E\|X\|^{2}<\infty$. | The following assumption ensures that $E[Y\exp(\mathbf{i}W^{T}\cdot)]\in L^{2}_{\mu}$, and that AA... | $(\gamma_{0},g_{0})\in\mathbb{R}^{q}\times\mathcal{G}\mapsto E[X^{T}\gamma+g(Z)\mid W]$ is injec... | B |
Compute the low-dimensional embedding $Y=XV_{1:k}\in\mathbb{R}^{n\times k}$. | A primary drawback of Scheme 1 is its unfeasibility when the dimensionality is high - that is, when $p$ is large. Computing the empirical covariance matrix becomes impractical. In the R programming environment (R Core | When the data is centered – $\|\bar{x}\|=0$ – or centering is applied directly to the data, Scheme 2 avoids the need to create and store an empirical covariance matrix $\hat{\Sigma}$ in memory. | Suppose we are given a data matrix $X\in\mathbb{R}^{n\times p}$, consisting of $n$ observations, each represented as a $p$-dimensional vector. The derivation of PCA directly leads to a sequential... | In this paper, we revisited the classical problem of PCA from an algorithmic perspective. A common choice of implementation is to apply SVD onto the data matrix for efficient computation, which is a coveted property in an era where data is increasingly large and high-dimensional. While the method is straightforward, a ... | A |
In this paper, we present a base-to-global framework to quantify the uncertainty of global FI values. We define a two-level hierarchy of importance values, namely the base and global FI values, where the global FI values are the average of independent base FI values. | In this section, we evaluate our base-to-global framework and ranking method. We use synthetic data to assess our method’s validity (simultaneous coverage) and efficiency. We analyze our ranking method by generating base FI values directly (Section 5.1). We note that feature ranking is an interpretability step at the e... | Based on this framework, we propose a novel method for confidently ranking features. We define the true rank as a feature’s rank, obtained based on an infinite sample, for both a trained prediction model and an FI method. Our ranking method reports simultaneous CIs, ensuring, with high probability, that each feature’s ... | In this section, we introduce our ranking method which is designed to rank FI values while taking into account the uncertainty associated with the post-hoc FI method and the sampling process. Using our base-to-global framework, we are able to quantify the uncertainty by calculating simultaneous CIs for the true ranks. | Existing uncertainty measures are insufficient, because stakeholders often rely on the rank of the FI value, rather than the value itself, in their decisions. Feature rankings are unit-independent and are therefore easy to interpret and compare across FI methods [21, 22]. Instability in the global FI values can lead to... | B |
$\propto\frac{h_{\psi}(y_{t}\mid\boldsymbol{\theta}_{t}^{(i)},x_{t}^{(i)},\ldots)}{(\ldots,\boldsymbol{\theta}_{t-1}^{(i)},x_{t-1}^{(i)},y_{t})}\,.$ | Table 5 reports the posterior mean, median, standard deviation, and 95% credible intervals of the parameter estimates. The posterior distributions of estimated model parameters are provided in Figure 10. For the COVID-19 in BC, the estimated incubation period is 0.562 (0.384, 0.997) weeks, and the recovery perio... | The prior distributions of the model parameters $\psi$ are specified in Table 2. We assume the hyperparameters of $\alpha,\beta$ and $\gamma$ were derived from the historical information on similar epidemics. The transition probability matrix $\boldsymbol{P}_{X}$... | According to the posterior distribution in (6), the prior distribution $\pi(\psi)$ plays a critical role in Bayesian inference as it allows us to incorporate prior knowledge and beliefs about the unknown parameters into our analysis, and it can heavily influence the posterior distribution. I... | The importance weight represents the likelihood of obtaining a specific sample from the true posterior distribution given the proposal distribution. It is used to adjust for the discrepancy between the proposal distribution and the true posterior distribution. A detailed derivation of the importance weight is described... | D |
The need for such a method extends beyond intercropping applications. Any phenomenon in which asymmetric spatial effects between multiple categories can be postulated provides a potential application. The categories are features inherent to each location in the data, such that based on the label of that feature, asymme... | In this contribution, we propose a new statistical methodology that is able to infer multivariate symmetric within-location and asymmetric between-location spatial effects. In essence, the proposed model can be seen as a fusion, and extension, of two different models: the multivariate spatial autoregressive model and t... | Despite a growing interest in the design of productive intercropping systems (Federer 2012), there has been little methodological development around the identification of the kind of multi-trait and multi-species interactions that would determine which crops should ideally be combined (Brooker et al., 2015). Given that... | In line with Dahlhaus and Eichler (2003), who proposed a time series (vector autoregressive) chain graph, showcasing contemporaneous conditional dependencies and dynamic effects, we propose a spatial autoregressive graphical model that fills the methodological gap of methods that can capture asymmetric between-location... | This article proposes a new statistical methodology: the spatial autoregressive graphical model. The methodological novelty arises from the method’s capacity to learn multivariate asymmetric between-location effects, combined with the capacity of illustrating complex within-location effects through a conditional indepe... | C |
Here we derive the RR $\alpha\in\mathcal{R}$ that optimises the efficiency bound of a sample analogue of $\theta_{w}$. By Theorem 1, the optimal RR implies a corresponding optimally weighted WADE. Our chosen optimality crite... | Furthermore, since our class of estimands represents a unified view of WADEs and WATEs, it enables us to extend WATE results from the binary exposure setting, to new WADE results in the continuous exposure setting. In particular, we derive the estimand in our class which is optimally efficient, in the sense of minimisi... | Next, we show that least squares estimands, which are estimands connected to partially linear model projections, are in fact WADEs for a particular choice of weight. We further motivate least squares estimands by considering the RR that minimises the nonparametric efficiency bound of the WADE, when the weight is known ... | Thus, our contribution is to extend their method to continuous exposures with the extra subtlety being that the WADE weight depends on the exposure as well as covariates. | We compare estimators of $\psi$ and $\Psi$, the former being a contribution of our work, and the latter following from existing results. These estimators do not require estimation of the exposure density, thus alleviating the aforementioned concerns regarding kernel estimation in other WADEs ($\psi$... | A |
Assuming a normal distribution for random effects can be problematic when the true distribution is far from normal. For instance, McCulloch and Neuhaus (2011) discovered that when the true distribution is multi-modal or long-tailed, the distribution of the EB predictions may reflect the assumed Gaussian distribution r... | To address the threats outlined in the previous section, two strategies can be employed. The first strategy entails adopting flexible semiparametric or nonparametric specifications for the prior distribution $G$ to protect against model misspecification (Paddock et al., 2006). One prominent Bayesian nonparame... | In practice, the joint application of these two strategies has been relatively rare, with only a few notable exceptions (e.g., Paddock et al., 2006; Lockwood et al., 2018). The costs and benefits of these strategies have not been systematically compared in previous simulation studies exploring similar topics (e.g., K... | Instead of relaxing the normality assumption, some approaches replace EB or PM estimators with alternative posterior summaries, such as constrained Bayes (CB, Ghosh, 1992) and triple-goal (GR; the abbreviation “GR” denotes the dual inferential objectives: the EDF ($G$) and the rank ($R$) of site-spe... | The three inferential goals, their associated loss functions, and their optimal estimators reveal two primary challenges in achieving valid finite-population estimation. The first challenge is model misspecification, which arises when we assume an incorrect parametric form for the super population distribution $G$... | C |
In order to provide a comprehensive comparison between the simulation and surrogate model runtimes, it is important to include information about the computational environment. The simulations are performed on a machine running the Ubuntu 22.04 operating system, equipped with an AMD Ryzen9 3900X CPU (12 Cores/24 Threads... | This section compares the model’s performance trained with Set1 on the test data against FCN and CNN. As mentioned earlier, DeepONet takes functions as inputs, and the model test evaluates its response to unseen input functions. In this study, 380 test input functions were provided to the model, and the obtained model ... | In order to showcase the capabilities of DeepONet, a surrogate model is constructed for calculating the 2-dimensional spatial distribution of neutron flux in a maze. The training and test datasets used for training the DeepONet model are prepared using Particle and Heavy Ion Transport code System (PHITS) version 3.24 S... | Furthermore, as demonstrated in Fig. 3 (c), while the entire set of 6,400 $(x,y)$ coordinate pairs with corresponding simulation results $\psi(x,y)$ is available for surrogate model construction, we have methodically created multip... | A systematic multi-stage protocol is implemented for preprocessing the data to train and test the DeepONet model. | D |
We are given a signal in $\mathbb{R}^{p}$ that is expressed as a linear combination of some unknown source signals and the goal is to estimate these sources. The poset here is the collection of linearly independent subsets of unit-norm vectors... | In this section, we turn our attention to the task of identifying models of large rank that provide false discovery control. We begin in Section 3.1 with a general greedy strategy for poset search that facilitates the design of model selection procedures, and we specialize this framework to specific approaches in Secti... | Classic approaches to model selection such as the AIC and BIC assess and penalize model complexity by counting the number of attributes included in a model [1, 22]. More generally, such complexity measures facilitate a hierarchical organization of model classes, and this perspective is prevalent throughout much of the ... | With respect to formalizing the notion of false positive and false negative errors, Example 1 is prominently considered in the literature, while Examples 3 and 5 are multivariate generalizations of previously studied cases [10, 12]. Finally, Example 8 was studied in [25], although that treatment proceeded from a geomet... | In these preceding examples, we lack a systematic definition of model complexity, false positive error, and false negative error due to the absence of Boolean logical structure in each collection of models. In particular, in the first three examples, valid models are characterized by structural properties such as trans... | C |
$=\operatorname{prox}_{\eta\|\cdot\|_{1}}(\theta^{k}).$ | Motivated by the promising results in the deterministic setting, we now study the MNIST dataset in a stochastic setting, i.e., using mini-batches for the loss function and a neural network with 16,330 parameters as described previously in IV-A. First, we obtain the initial point on the Pareto front a... | The joint consideration of loss and $\ell^{1}$ regularization is well-studied for linear systems. However, it is much less understood for the nonlinear problems that we face in deep learning. In DNN training, the regularization path is usually not of inter... | In this work, we consider two objective functions, namely the empirical loss and the $\ell^{1}$ norm of the neural network weights. The Pareto set connecting the individual minima (at least locally), is also known as the regularization path. In the context... | Equation (8) is the gradient step for the loss objective function, i.e., “move left” in Fig. 2 and equation (9) represents the shrinkage performed on the $\ell^{1}$ norm, i.e., “move down” in Fig. 2. | D |
The adjective “visible” refers to the martingale $(S_{n})$ | $P(\{\omega\})>0$ for all $\omega\in\Omega$. | (and not depending on the hidden aspects of the realized sample point $\omega\in\Omega$). | We say that a sequence $(Y_{n})$ of random variables in $(\Omega,P)$ is adapted | (i.e., not on $\theta$s, parameter values, but on the $x$s and $y$s, observables) | B |
The deep learning (DL) revolution impacts almost every branch of life and sciences [1]. The deep learning models are also adapted and developed for solving problems in physics [2]. The statistical analysis of the data is one of the popular and straightforward domains of application of neural networks in physics [3, 4].... | Differential equations are the essence of physics. Indeed, they describe the system and its evolution in time in classical mechanics, electrodynamics, fluid physics, quantum mechanics, etc. Some can be solved analytically, but a vast class of problems can be solved numerically only. | The deep learning (DL) revolution impacts almost every branch of life and sciences [1]. The deep learning models are also adapted and developed for solving problems in physics [2]. The statistical analysis of the data is one of the popular and straightforward domains of application of neural networks in physics [3, 4].... | Karniadakis et al. [25] provides a comprehensive review of the PINN approach. In particular, they point out the major difficulties of the approach, namely, the problem of tuning the hyperparameters, fixing relative weights between various terms in the loss function, and the convergence to the global minimum. The PINN i... | One of the exciting ideas is to adapt a neural network framework to solve PDEs numerically. The problem of numerical integration comes down to the optimization problem. Indeed, the approximate solution of a differential equation is parametrized by a feed-forward neural network that depends on the parameters (weights). ... | A |
We have so far assumed that the spectral density is available in closed form. However, we only need regularly spaced point evaluations of the spectral density, for which it suffices to evaluate the discrete Fourier transform of regularly spaced evaluations of the covariance function. This adds, at worst, $O(M^{2})$... | IFF can be used for faster learning for large datasets in low dimensions, which matches our target applications. Typically, it will perform poorly for $D\gtrapprox 4$, and both in this case and for low $N$, we expect SGPR to outperform all alternatives, inclu... | We seek to show that IFF gives a significant speedup for large datasets in low dimensions, with a particular focus on spatial modelling. Amongst other fast sparse methods, we compare against VFF and B-Spline features. For spherical harmonics, learning independent lengthscales for each dimension is incompatible with pre... | In Section 2 we review variational GP regression in the conjugate setting, and we review related work in Section 3. In Section 4 we present our IFF method, and the complexity analysis; the main convergence results and guidance for tunable parameter selection follows in Section 4.1. Finally in Section 5 we evaluate our ... | We exclude SKI in Figure 5 in order to zoom in on the curves for the variational methods. We are interested in the regime where $M\ll N$; as we move to the right and $M$ is similar to $N$, inducing points will become competitive with the faster methods, since the ... | A
In terms of variable selection, the Adaptive Lasso and the Adaptive Transfer Lasso outperformed the others, and the Adaptive Transfer Lasso was slightly superior to the Adaptive Lasso. | We provide the property of the Adaptive Lasso for an initial estimator with source data of size $m$. | We mainly considered two cases: one with a large amount of source data and the other with the same amount of source data as the target data. | The Transfer Lasso [16], in contrast, is performed on target data using the initial estimator without the need for source data. | These results imply the superiority of the Adaptive Transfer Lasso with initial estimators using large amounts of source data. | D
Every $\epsilon$-DP algorithm is $\rho$-zCDP with $\rho=\frac{1}{2}\epsilon^{2}$ (Proposition 1.4, [23]). Due to this observation, it is possible to provide ... | To prove regret lower bounds in bandits, we leverage the generic proof ideas in [2]. The main technical challenge in these proofs is to quantify the extra cost of “indistinguishability” due to DP. This cost is expressed in terms of an upper bound on KL-divergence of observations induced by two ‘confusing’ bandit enviro... | In order to prove the lower bounds, we deploy the KL upper bound of Theorem 7 in the classic proof scheme of regret lower bounds [2]. The high-level idea of proving bandit lower bounds is selecting two hard environments, which are hard to statistically distinguish but are conflicting, i.e. actions that may be optimal i... | Hardness of Preserving Privacy in Bandits as Lower Bounds. Addressing the open problem of [11, 8], we prove minimax lower bounds for finite-armed bandits and linear bandits with $\rho$-Interactive zCDP, that quantify the cost to ensure $\rho$-Interactive zCDP in these settings. To prove the lower boun... | In this section, we quantify the cost of $\rho$-Interactive zCDP for bandits by providing regret lower bounds for any $\rho$-Interactive zCDP policy. These lower bounds on regret provide valuable insight into the inherent hardness of the problem and establish a target for optimal algorithm design. We ... | A
Corollary 8 generalizes the results in Chaudhuri and Tewari (2017) that showed local observability fails only for $k=1$, and rules out the possibility of better regret for values of $k$ that are practically interesting. Also, there are efficient algorithms for $k=1,2,\ldots,m-2$... | We are interested in ranking measures that can be expressed in the form of $f(\sigma)\cdot R$ where $f:\mathbb{R}^{m}\to\mathbb{R}^{m}$... | The ranking loss measure $RL(\sigma,R)$ can be expressed in the form $f(\sigma)\cdot R$ where $f:\mathbb{R}^{m}\to\mathbb{R}^{m}$... | The negated P@n does not satisfy Assumption 1 because $f^{s}$ is not strictly increasing (see Eq. (7)), so Theorem 6 does not apply to negated P@n. | Similarly, since negated DCG also satisfies Assumption 1, we have the following corollary from Theorem 6. | C
Electroencephalogram (EEG) captures signals from electrodes, thereby recording the electrical activity during epileptic seizures. The EEG dataset encompasses both pre-ictal and ictal data, organized in a matrix format of channels over time, with a sampling rate of 256 points per second. Here, each electrode signal unde... | To construct the diffusion matrix, the data from all electrodes at each time point is viewed as a high-dimensional node in the diffusive graph. Notably, electrode signals display abnormal fluctuations preceding the onset of an epileptic seizure. Timely detection of abnormalities in electrical signals holds the potentia... | In summary, the goal of the present study is to model and analyze the brain activity data from epileptic patients to identify early warnings of epileptic seizures automatically, with stochastic dynamical systems tools. Our main contributions are | Early warning of epileptic seizures is of paramount importance for epileptic patients. The abrupt change is caught for early warning in the latent space, where normal state and ictal state can be viewed as two meta-stable states. The ability to identify transitions between meta-stable states plays a pivotal role in pre... | The diffusion matrix is constructed by the diffusion kernel and shows the transition among all high-dimensional nodes in the diffusive graph, which exhibits the transition probability from one point to another. | A |
There have been several recent results on computationally efficient learning of unbounded Gaussians \citep{kamath2022private, kothari2022private, ashtiani2022private}, with the method of \citet{ashtiani2022private} achieving a near-optimal sample complexity using a sample-and-aggregate-based technique. Another sample-and-aggr... | In density estimation, which is the main focus of this work, the goal is to find a distribution which is close to the underlying distribution w.r.t. $\operatorname{d_{\textsc{TV}}}$. Unlike parameter estimation, the sample complexity of density estimation can... | There have been several recent results on computationally efficient learning of unbounded Gaussians \citep{kamath2022private, kothari2022private, ashtiani2022private}, with the method of \citet{ashtiani2022private} achieving a near-optimal sample complexity using a sample-and-aggregate-based technique. Another sample-and-aggr... | The methods of \citet{ashtiani2022private, kothari2022private} also work in the robust setting achieving sub-optimal sample complexities. | Recently, \citet{alabi2023privately} improved this result in terms of dependence on the dimension. Finally, \citet{hopkins2023robustness} achieved a robust and efficient learner with near-optimal sample complexity for unbounded Gaussians. | C
Cooperative Diffusion Recovery Likelihood (CDRL), that jointly estimates a sequence of EBMs and MCMC initializers defined on data perturbed by a diffusion process. At each noise level, the initializer and EBM are updated by a cooperative training scheme (Xie et al., 2018a): The initializer model proposes initial sample... | We first showcase our model’s capabilities in unconditional image generation on CIFAR-10 and ImageNet datasets. The resolution of each image is $32\times 32$ pixels. FID scores (Heusel et al., 2017) on these two datasets are reported in Tables 1 and 4.3, respectively, with generated examples displayed in ... | Our main contributions are as follows: (1) We propose cooperative diffusion recovery likelihood (CDRL) that tractably and efficiently learns and samples from a sequence of EBMs and MCMC initializers; (2) We make several practical design choices related to noise scheduling, MCMC sampling, noise variance reduction for EB... | Cooperative Diffusion Recovery Likelihood (CDRL), that jointly estimates a sequence of EBMs and MCMC initializers defined on data perturbed by a diffusion process. At each noise level, the initializer and EBM are updated by a cooperative training scheme (Xie et al., 2018a): The initializer model proposes initial sample... | We propose CDRL, a novel energy-based generative learning framework employing cooperative diffusion recovery likelihood, which significantly enhances the generation performance of EBMs. We demonstrate that the CDRL excels in compositional generation, out-of-distribution detection, image inpainting, and compatibility wi... | B
$\hat{L}^{R}_{T}(M)-\hat{L}_{T}(M_{0})\leq\max\bigl\{\rho(1+\sqrt{\dots}),\dots\bigr\}.$ | Since the first term inside the maximum is always greater than $\rho$, this simplifies to our desired result. | To upper bound the absolute value in (51), we need to both lower and upper bound the quantity inside, with respect to $R$, and take the maximum of the two. There are two terms inside the maximum, which must be lower and upper bounded separately. | Examining the bound in Theorem 9, we can see it does not depend on the ambient dimension, but on the stable dimension of the data support, just like the bound in Theorem 6. This means that if the empirical error in the ambient space is small, the empirical error in the compressed space scales with the stable dimension,... | As an important ingredient of our analysis, we revisit a well-known result due to Gordon [11] that uniformly bounds the maximum norm of vectors in the compressed unit sphere under a Gaussian RP. We extend this result into a dimension-free version, for arbitrary domains, in Lemma 4, which may be of independent interest. | A
We apply our framework to yield novel results for three applications. In our first application, we study bounds on the APO with a treatment that may be selected on an unobserved confounder. | The result in Proposition 1 extends the analysis of Tan (2022); Frauen et al. (2023) to allow $\ell(X)$ to be arbitrarily small or equal to zero. As a result, it includes Masten and Poirier (2018)’s conditional c-dependence assumption so long as $\lambda(R)Y$... | Our framework nests unconfoundedness, Manski-type bounds that restrict only the support of the unobserved potential outcomes, and Tan (2006)’s Marginal Sensitivity Model as special cases. As a corollary, we obtain a simpler characterization of bounds under Masten and Poirier (2018)’s conditional c-dependence model. | Our work is related to the recent literature on sensitivity analysis for IPW estimators, which relates to our first application. A sensitivity analysis is an approach to partial identification that begins from assumptions that point-identify the causal estimand of interest and then considers increasing relaxations of t... | This family has several advantages. The restrictions on $\frac{d\mathbb{Q}}{d\mathbb{P}^{\textup{Obs}}}$ decouple across values of $R$, enabling tractab... | B
We build a comprehensive ECG dataset to evaluate various deep learning algorithms. The dataset consists of 220,251 recordings with 28 common ECG diagnoses annotated by medical experts and significantly surpasses the sample size of publicly available ECG datasets. | After pre-training, we fine-tune the pre-trained encoder on the same dataset. For fine-tuning, we also use the AdamW optimizer and the cosine learning rate schedule. The default hyperparameters include | 3. Strong Pre-training and Fine-tuning Recipe: We conduct comprehensive experiments to explore the training strategies on the proposed ECG dataset. The key components contributing to the proposed method are presented, including the masking ratio, | In the ablation study, we explore the properties of important components for the proposed method on the Fuwai dataset, and report the macro F1 score on the validation set. | We conduct experiments across three different settings, indicated as Fuwai, PTB-XL, and PCinC in Table 2. For the two-stage methods, including CLECG, MaeFE, CRT and MTECG-T, we develop algorithms as follows. In the first setting, we pre-train and fine-tune the models on the training set of the Fuwai dataset. In the seco... | B
$\Pi_{k}^{m-1}$, see the considerations in Subsection 5.3. | We now estimate the conditional expectation with respect to the given $u^{(m)}$, separately for the three terms in (22). Here and in the following we denote this conditional expectation by $\mathbb{E}^{\prime}$... | For the third term, by (10) and the definition (16) of the noise variance $\sigma_{H}^{2}$, we have | Michael Griebel and Peter Oswald were supported by the Hausdorff Center for Mathematics in Bonn, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research | Foundation) under Germany’s Excellence Strategy - EXC-2047/1 - 390685813 and the CRC 1060 The Mathematics of Emergent Effects of the Deutsche Forschungsgemeinschaft. | C
Sensitivity Analysis. We perform sensitivity analysis on Heckman-FA by testing the approach over different values for the number of epochs $T$, fixed initial value $c$, and number of Gumbel-Softmax samples $B$ drawn during assignment extraction. Table II gives the testing MSE of Heckman-FA ac... | Execution Time. We report the execution time after running Heckman-FA across different values of $T$ and $B$ in the left three columns of Table VII in the Appendix. For both datasets, Heckman-FA runs fast for each combination of $T$ and $B$. | Sensitivity Analysis. We perform sensitivity analysis on Heckman-FA by testing the approach over different values for the number of epochs $T$, fixed initial value $c$, and number of Gumbel-Softmax samples $B$ drawn during assignment extraction. Table II gives the testing MSE of Heckman-FA ac... | We also run a paired $t$-test on 10 different prediction feature assignments to analyze the significance of comparing Heckman-FA to the other baselines. Table VI in the Appendix shows results of the test. We find that the p-value is very small after running the hypothesis test on both datasets. Given that Heck... | We also consider the complexity of Heckman-FA*. Similar to Heckman-FA, we first see that $\psi$ is trained in $O(nKT)$ time when running Heckman-FA*. However, the complexity of extraction is different for Heckman-FA* than for Heckman-FA. Since the Heckman m... | A
Originally motivated for solving the variable selection problem in linear regression, spike-and-slab priors (or, “discrete spike-and-slab”, Tadesse and Vannucci 2021) have the marginal form of a two-component mixture for each parameter element: one component (spike) from a point mass at zero, and the other (slab) from ... | Since one could reparameterize a discrete spike-and-slab prior as a special case of the L1-ball prior by setting $\kappa$ according to a quantile of $\pi_{0}(\beta)$, we expect our geometric ergodicity result could be extended to an... | With the rich literature, there is a recent interest in structured sparsity (Hoff, 2017; Griffin and Hoff, 2023) that has inspired new extensions of sparsity priors. Specifically, the sparsity is “structured” in the sense that: (i) the occurrences of zeros could be dependent, according to some temporal, spatial, or gro... | For linear regression with Gaussian errors, very efficient Markov chain Monte Carlo (MCMC) algorithms have been developed. When the slab prior distribution follows a Gaussian, the Stochastic Search Variable Selection (SSVS) algorithm (George and McCulloch, 1995) exploits the posterior conjugacy and samples from the mar... | Focusing on the computational aspect, the soft-thresholding transform is differentiable almost everywhere with respect to $\pi_{0}^{\beta}$. This means we can use off-the-shelf gradient-based MCM... | C
McCulloch (1997), and Hoeting et al. (1999) for more details and references therein. From a frequentist perspective, several attractive strategies have been proposed to combine models, including boosting (Freund, 1995), bagging (Breiman, 1996), random forest (Amit and | Claeskens, 2003), adaptive regression by mixing (Yang, 2001; Yuan and Yang, 2005), exponentially weighted aggregation (Leung and | However, it has been increasingly recognized that choosing just one model inherently ignores possibly high uncertainty in the selection process (Chatfield, 1995; Draper, 1995; Yuan and Yang, 2005). Model averaging (MA), on the other hand, provides an alternative to reduce the variability in MS while offering a possibil... | In statistical modeling, multiple candidate models are usually considered to explore the data. Model selection (MS) guides us in search for the best model among candidates based on a traditional selection criterion, such as AIC (Akaike, 1973), $C_{p}$ ... | Condition 1 includes the case $\theta_{j}=j^{-\alpha_{1}}$ for $\alpha_{1}>1/2$... | A
$Y=\max(\mathbb{1}[X_{5}=3\text{ and }X_{4}=1],\ \mathbb{1}[X_{5}\neq 4\text{ and }X_{2}\neq 3])$ for the sam... | To identify which genes are stably important across good models, we evaluated this dataset using RID over the model class of sparse decision trees using subtractive model reliance. We selected 14,614 samples (all 7,307 high HIV load samples and 7,307 random low HIV load samples) from the overall dataset in order to ba... | We compare the ability of RID to identify extraneous variables with that of the following baseline methods, whose details are provided in Section D of the supplement: subtractive model reliance $\phi^{\text{sub}}$ of a random forest (RF) [6],... | Several methods for measuring the MR of a model from a specific model class exist, including the variable importance measure from random forest which uses out-of-bag samples [7] and Lasso regression coefficients [20]. Lundberg et al. [28] introduce a way of measuring MR in tree ensembles using SHAP [27]. Williamson et ... | To create the uncertainty interval on the training dataset and for each method, we first find the subtractive model reliance $\phi^{(sub)}$ across 500 bootstrap iterations of a given dataset for the four algo... | B
Since the inception of the field, a strong parallelism has been drawn between PQC-based QML models and kernel methods [15, 14, 21]. | Yet, unlike neural networks, kernel methods reach the solution by solving a linear optimization task on a larger feature space, onto which input data is mapped. | The data input is mapped onto the “quantum feature space” of quantum density operators via a quantum embedding. | Kernel methods solve ML tasks as linear optimization problems on large feature spaces, sometimes implicitly. | The ultimate goal is to, given the data distribution, find a map onto a feature space where the problem becomes solvable by a linear model. | A |
This paper establishes an explicit link between MA and shrinkage in a multiple model setting, which significantly enhances the previous understanding of the relationship between MA and shrinkage in the two-model settings. It is revealed that the MMA estimator can be viewed as a variant of the positive-part Stein estima... | Despite the extensive theoretical work and wide applications of MA, there is a commonly held viewpoint that MA is essentially a shrinkage estimator, and that other shrinkage methods can also achieve the objectives of MA. This view has been substantiated by several studies. For instance, the results in Section 5.1 of Kn... | This paper addresses the previously mentioned questions in a general linear model setting with multiple nested candidate models. The main contribution is twofold. First, we demonstrate that the optimal MA estimator is equivalent to the optimal linear estimator with monotonically non-increasing weights in a specific Gau... | This paper establishes an explicit link between MA and shrinkage in a multiple model setting, which significantly enhances the previous understanding of the relationship between MA and shrinkage in the two-model settings. It is revealed that the MMA estimator can be viewed as a variant of the positive-part Stein estima... | The unveiled connections between MA and shrinkage offer the possibility of novel methodological developments in the area of MA. The focus of this paper has been on a linear regression setting. It is of great interest to bridge the gap between MA and shrinkage in the generalized linear model setting, and then apply the ... | D |
Mathematically, consider a sample $\ddot{Y}_{1}(t_{i}),\ddot{Y}_{2}(t_{i}),\ldots,\ddot{Y}_{D}(t_{i})$... | Note that covXtreme also provides functionality to simulate data with known characteristics for checking of the performance of the statistical methodology. | In addition we require that the statistical model also describes the joint tail of all metocean variables in general. However the nature of extremal dependence between different metocean variables is generally unknown. The specification of the statistical model therefore needs to be sufficiently general to admit differ... | The objective of the current article is to provide motivation and description of the covXtreme software, and illustrations of its use in the development of design conditions for ocean engineering. The layout of the article is as follows. Section 2 provides an overview of the software and the statistical methodology on ... | The covXtreme methodology makes a number of simplifying assumptions, motivated by the authors’ experience of extreme value analysis applied to the ocean environment using a range of methodologies of different complexities. For example, covXtreme relies on sensible user-specified partitioning of the covariate domain in... | A
Under mild regularity condition on the density of the considered generative models, we prove the stability of iterative retraining of generative models under the condition that the initial generative model is close enough to the real data distribution and that the proportion of real data is sufficiently large (Theorems... | We empirically validate our theory through iterative retraining on CIFAR10 and FFHQ using powerful diffusion models in OTCFM, DDPM, and EDM. | We then prove in Theorem 2 that, with high probability, iterative retraining remains within a neighborhood of the optimal generative model in parameter space when working in the stable regime. Finally, we substantiate our theory on both synthetic datasets and high dimensional natural images on a broad category of model... | We perform experiments on synthetic toy data as found in Grathwohl et al. (2018), CIFAR-10 (Krizhevsky and Hinton, 2009), and FlickrFacesHQ $64\times 64$ (FFHQ-$64$) datasets (Karras et al., 2019). For deep generative models, we conduct experiments with continuous normalizing flows (Chen et al., 2018... | Our main contribution is showing that if the generative model initially trained on real data is good enough, and the iterative retraining is made on a mixture of synthetic and real data, then the retraining procedure Algorithm 1 is stable (Theorems 1 and 2). Additionally, we validate our theoretical findings (Theorems ... | A
The objective function in (1) linearly approximates electric losses. Eq. (2)-(5) describe the Linearized DistFlow model [1] which assumes lossless power balance (2)-(3), and approximates Ohm’s Law as a linear relationship between voltages and power (4)-(5). Eq. (5) accommodates switches in the model with a conditional ... | The GNN models the distribution grid topology as an undirected graph, with switch embeddings modeling the switches in the electrical grid. The GNN’s message passing layers incorporate these embeddings as gates, which enables GraPhyR to learn the representation of linearized Ohm’s law of (5) across multiple topologies i... | We propose GraPhyR, a physics-informed machine learning framework to solve (1)-(11). Our framework in Fig. 1 features four architectural components: (A) gated message passing to model switches, (B) local predictions to scale across nodes, (C) physics-informed rounding to handle binary variables, and (D) topology input ... | After the $\mathcal{L}$ message passing layers, the embeddings extracted from the input data are used to predict the switch open/close status and a subset of the power flow variables, denoted as independent variables. | Our local predictors exploit the full flexibility of GNNs. They are permutation invariant to the input graph data; are independent of the size of the graph (scale-free); and are smaller than the corresponding global predictor for the same grid. The first feature means our framework is robust to changes in input data. T... | B
Our second example considers a classical dataset of wind catastrophes taken from Hogg and Klugman, (1984, p. 64). It represents 40 losses (in million U.S. dollars) due to wind-related disasters. Data are reported to the nearest million, including only losses of 2 million or more. | Thus, there is no concentration of mass and the family of Pareto distributions takes over the role of the exponential distributions in the first scenario. | This is not a contradiction to the above remark: the fact that the goodness-of-fit tests do not reject the hypothesis of a Pareto distribution does not prove that the hypothesis holds. | Brazauskas and Serfling, (2003) and Rizzo, (2009) proposed goodness-of-fit tests for the Pareto model and applied them to the de-grouped wind catastrophes data, and concluded that there was no evidence against the model. | Table 3: Values of $\hat{t}_{n}(u)$ for the wildfire suppression cost data for specific values of the threshold $u$ and the corresponding shape parameter $\alpha$ under the assumption that ... | C
$y^{*}=\beta_{1}\cdot\mbox{version}_{i}+\beta_{2}\cdot\mbox{prompt}_{i}+\beta_{3}\cdot\mbox{shot}_{i}$ | This computational process requires priors to be placed on $R^{2}$, the | where $\zeta$ is a vector of cutpoints. The latent variable $y^{*}$ is | For the prior on $R^{2}$, we follow Gelman, Hill, and Vehtari (2020, | related to an underlying continuous latent variable, $y^{*}$, through a | A
This distinguishes GSBM from prior methods (e.g., Liu et al. (2022)) that learn approximate solutions to the same problem (3) but whose subsequent solutions only approach $(\mu,\nu)$ after final convergence. | This further results in a framework that relies solely on samples from $\mu,\nu$—without knowing their densities—and enjoys stable convergence, | This distinguishes GSBM from prior methods (e.g., Liu et al. (2022)) that learn approximate solutions to the same problem (3) but whose subsequent solutions only approach $(\mu,\nu)$ after final convergence. | such that $X_{0}\sim\mu$, $X_{1}\sim\nu$ follow the (unknown) laws of two distributions $\mu,\nu$. | By default, we use the explicit matching loss (5) without path integral resampling, mainly due to its scalability, but ablate their relative performances in Sec. 4.4. | A
CTM, a novel generative model, addresses issues in established models. With a unique training approach accessing intermediate PF ODE solutions, it enables unrestricted time traversal and seamless integration with prior models’ training advantages. A universal framework for Consistency and Diffusion Models, CTM excels i... | CTM poses a risk for generating harmful or inappropriate content, including deepfake images, graphic violence, or offensive material. Mitigating these risks involves the implementation of strong content filtering and moderation mechanisms to prevent the creation of unethical or harmful media content. | ImageNet. CTM surpasses any previous non-guided generative model in FID. Also, CTM most closely resembles the IS of validation data, which implies that StyleGAN-XL tends to generate samples with a higher likelihood of being classified for a specific class, even surpassing the probabilities of real-world validation dat... | CTM poses a risk for generating harmful or inappropriate content, including deepfake images, graphic violence, or offensive material. Mitigating these risks involves the implementation of strong content filtering and moderation mechanisms to prevent the creation of unethical or harmful media content. | CTM’s anytime-to-anytime jump along the PF ODE greatly enhances its training flexibility as well. It allows the combination of the distillation loss and auxiliary losses, such as denoising score matching (DSM) and adversarial losses. These auxiliary losses measure statistical divergences.¹ ¹The DSM loss is closely link... | A
$s^{2}(\bm{y}):=\begin{cases}\begin{aligned}\min_{\bm{x}}\quad&\|\bm{y}-\bm{K}\bm{x}\ldots\\ \mathop{\mathrm{subject\,\,to}}\quad&\bm{x}\geq\bm{0}.\end{aligned}\end{cases}$ | Comparison of (1.5) with (1.3) shows that Rust and Burrus proposed a “simultaneous-like” construction. | In Theorem 4.1, we leverage this novel interpretation to disprove the Burrus conjecture Rust and Burrus, (1972); Rust and O’Leary, (1994) in the general case, by refuting a previously proposed counterexample and providing a new, provably correct counterexample in Lemma 4.5. | In this setting, Burrus, (1965); Rust and Burrus, (1972) posed that the following interval construction yields valid $1-\alpha$ confidence intervals, a result now known as the Burrus conjecture Rust and O’Leary, (1994): | Rust and Burrus, (1972) and subsequently Rust and O’Leary, (1994) investigated the conjecture posed in Burrus, (1965). | A
In Tab. 3 we present further results on the challenging and widely-adopted ImageNet-1k dataset. The results are consistent with those found in the CIFAR100 case, strengthening the general applicability of our methods, and their scalability to larger models and more challenging datasets. We also stress the fact that, espe... | In Tab. 3 we present further results on the challenging and widely-adopted ImageNet-1k dataset. The results are consistent with those found in the CIFAR100 case, strengthening the general applicability of our methods, and their scalability to larger models and more challenging datasets. We also stress the fact that, espe... | We evaluate the quality of our approach with two prominent transformer-based architectures: the ViT (Dosovitskiy et al., 2020) and BERT (Devlin et al., 2018). Our focus is to assess the performance and robustness of our proposed fusion techniques in both image and NLP domains. These models offer a direct comparison as ... | We show the finetuning results on the widely adopted datasets CIFAR100, and ImageNet-1k (results on Tiny ImageNet in the Appendix). | In this work, we focused on the vision application of the Transformer architecture, but our method is agile to architectural changes, and we demonstrate its wide applicability to the BERT model. Although preliminary explorations of our fusion strategy on the BERT model show some differences with respect to the ViT case... | D
• Unified framework. The Ito chain equation 1 incorporates a variety of practical approaches and techniques – see Table 1. In particular, equation 1 can be used to describe: | The key and most popular MC is Langevin-based (Raginsky et al., 2017; Dalalyan, 2017; Cheng et al., 2018; Erdogdu et al., 2018; Durmus & Moulines, 2019; Orvieto & Lucchi, 2018; Cheng et al., 2020) (which corresponds to Langevin diffusion). Such a chain is found in most existing works. In this paper, we propose a more g... | Dynamics. Primarily, chain equation 1 is suitable for analyzing Langevin Dynamics, which have a wide range of applications. Here we can note the classical results in sampling (Ma et al., 2019; Chatterji et al., 2020; Dalalyan, 2017; Durmus et al., 2019; Durmus & Moulines, 2019), continuous optimization (Gelfand et al.,... | Non-normality of noise. The central and widely used assumption about noise in analyses of MC satisfying equation 1 (e.g., Langevin-based) is that it has a normal distribution (Raginsky et al., 2017; Dalalyan, 2017; Cheng et al., 2018; Durmus & Moulines, 2019; Ma et al., 2019; Feng et al., 2019; Orvieto & Lucchi, 2018; ... | Without convexity and dissipativity assumptions. Note also that often, when dealing with Langevin MC, the authors consider the convex/monotone setup (Dalalyan, 2017; Erdogdu et al., 2018; Durmus & Moulines, 2019; Li et al., 2019b; Chatterji et al., 2020; Xie et al., 2021), which is possible and relevant, but at the sam... | B
$K(\bm{x},\bm{x}^{\prime})=\bm{x}\cdot\bm{x}^{\prime}+\epsilon^{2}(\bm{x}\cdot\bm{x}^{\prime})^{2}$ | These $\bar{\bm{w}}$ and $\bm{M}$ can be plugged into the loss decomposition in the main text. At initialization, the kernel has the form | The value of $\alpha$ controls the scale of the output, and consequently the speed of feature learning. The value of $\epsilon$ alters how difficult the task is for the initial NTK. We consider training on a fixed dataset $\{(\bm{x}_{\mu},y_{\mu})\}_{\mu=1}^{P}$... | The Mercer eigenvalue problem for data distribution $p(\bm{x})$ has the form | $\int d\bm{x}\,p(\bm{x})\,K(\bm{x},\bm{x}^{\prime})\,\phi(\bm{x})=\lambda\phi(\bm{x}^{\prime})$ | C
In the dataset, all subjects are diagnosed as healthy, so we sampled 3-second crops in order to capture at least one complete heartbeat. Due to the imbalanced nature of the dataset, we opted for a 60/20/20 split between training, validation, and test sets at the subject level. Importantly, we maintained the age-group d... | Age-group distribution. Fig 1 shows the age distribution across the dataset in terms of 15 age groups, where the first age group contains subjects aged 18 to 19, whereas all following age groups but the last cover age intervals of 5 years. There is a clear imbalance in the age distribution, with the majority in age grou... | Predictive performance results of the models per age group in terms of AUC on the test set, where the yellow (left) age group represents the XGBoost and the right (blue) the XResNet. | Age-group distribution in terms of age groups provided in the Autonomic Aging dataset [11]. The age groups span a range from 18 to 92 years, where the majority of patients are between 20 to 50 years old. | Beat-level descriptive analysis. At first, we explore superimposed mean heartbeats for all age groups in Fig 4 as a plausibility test and to compare with literature statements. The amplitude of the T-wave decreases with age and shifts to the right, indicating an overall longer cardiac cycle, meaning a slower heart rate.... | A
In addition we also exhibit how we can use our results to conduct inference on the mode of the target distribution and how the permissible range for $\gamma$ changes when the preconditioning matrix is no longer spatially varying. This is shown in Corollary 2.1 and Propositions 2.1, 2.3. | We establish a fast sampling bound of the preconditioned LMC algorithm to the target distribution when the preconditioning is spatially invariant, in the Wasserstein distance. This may be viewed in Theorem 4. | We establish the convergence of the preconditioned LMC algorithm for general preconditioning matrices to a stationary distribution dependent on the step size in total variation. This is given in Theorem 1. | In our work, as mentioned previously, we consider the problem of inferential and approximate sampling guarantees using the preconditioned LMC algorithm. In this regard we establish a Central Limit Theorem for preconditioned LMC around the mode which may be used for the purposes of statistical inference. We also, in add... | There has been some recent work on the analysis of preconditioned algorithms [24, 11, 4]. These works mainly address the problem of establishing guarantees for fast sampling using preconditioned LMC in KL-divergence or in Wasserstein distance in the dissipative setting and also establishing geometric ergodicity conditi... | A
$-\frac{1}{2}\sum_{k=1}^{d}\Sigma_{jk}\frac{\partial^{2}f}{\partial x_{j}x_{k}}$. | Input features are often correlated with one another. When this is the case, it may be imprudent to falsely assume independence for the sake of computational convenience. This may produce Shapley values that misrepresent the relationship between inputs and predictions, and rely on the values that a machine learning fun... | While ControlSHAP is generally strong for neural networks, it tends to work better in the dependent features case. On the simulated, bank, and census datasets, variance reductions are typically below 25% assuming independence and above 50% otherwise. Presumably, this owes to the fact that neural networks are less ... | We seek to mitigate this issue by employing Monte Carlo variance reduction techniques. In particular, we use control variates, a method that adjusts one random estimator based on the known error of another. Here, the related estimator approximates the Shapley values of a first or second order Taylor expansion to the or... | ControlSHAP can be employed as a relatively “off-the-shelf” tool, in the sense that it stabilizes Shapley estimates with close to no extra computational or modeling work. The only model insight required is the gradient, as well as the hessian in the independent features case. Computationally, the single substantial cos... | A
When $T$ is sufficiently large, the term $\frac{1}{\sqrt{nT}}$ (or $\frac{1}{nT}$ for the strongly convex setting) will dominate the rate. In t... | Step 2. (Lemma 2) Based on this equivalent update of ProxSkip and by the $L$-smoothness of $f_{i}$, we establish the following descent inequality. | In addition, based on Theorem 2, we can even get a tighter rate by carefully selecting the stepsize to obtain the following result. | where $\alpha$ is the stepsize of ProxSkip, $\sigma^{2}$ denotes the variance of the stochastic gradient, $1-\lambda_{2}$ is a topology-dependent quantity that approaches $0$ for ... | Achieving linear speedup by $n$ and $\nicefrac{1}{p}$. We choose the regularizer $r(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|^{2}$... | B
$\hat{\mathbf{y}}=\operatorname*{argmax}_{\mathbf{y}\in\mathcal{Q}}\sum_{i=1}^{m}t_{1,(i)}\log\dots\sum_{i=1}^{m}\{p_{1,(i)}-p_{1,(i-1)}\}y_{i}=1.$ | To get the rate of convergence of the MLE based on the Hellinger distance (13), we use the bracketing entropy of $\bar{\mathcal{F}}^{1/2}$ employed with the metric $\|\cdot\|_{2}$. | The estimation of $f_{2}$ can be obtained by solving the optimization problem (11) in the same way, and we omit the details. | In this paper, we develop a robust and powerful empirical Bayes approach for high dimensional replicability analysis. We assume that the data are summarized in $p$-values for each study. We use $p$-values mainly for versatility. Without loss of generality, we use two studies to illustrate. To account ... | We first ignore the monotonic constraint $\mathcal{Q}$. By applying the Lagrangian multiplier, the objective function to maximize is | D
An example is the $f$-divergence subclass [21], commonly employed as an extension of Shannon entropy for various purposes in statistics, such as variational inference [45, 1], surrogate model design [52], PAC Bayesian learning [54] and Differential Privacy [46]. | The reference prior in the sense of Bernardo [8] is an asymptotic maximal point of the mutual information defined in equation (3). Using the formalization built in [10] for such a notion of asymptotic maximization as a reference, we suggest the following definition for what we call generalized reference priors, i.e. op... | A study of those divergences within our generalized mutual information is also a main contribution of this paper, with the goal of deriving what one is invited to call generalized reference priors. | The following definition results from the use of $f$-divergences as dissimilarity measures. They constitute the class of generalized mutual information we focus on within this paper. | In the next section, we formalize the usual Bayesian framework that we consider in our work. Our motivation supported by a Global Sensitivity Analysis viewpoint for an enrichment of the mutual information is elucidated in section 2. Afterwards, a sub-class of that generalized mutual information is studied in section 3 ... | B
Data-driven Koopman learning methods are founded on the assumption that a non-trivial finite-dimensional Koopman invariant subspace exists [20]. Even if this assumption holds true, it has proven to be exceedingly challenging to resolve this finite set of observables that completely closes the dynamics [5]. In order to ... | Data-driven Koopman learning methods are founded on the assumption that a non-trivial finite-dimensional Koopman invariant subspace exists [20]. Even if this assumption holds true, it has proven to be exceedingly challenging to resolve this finite set of observables that completely closes the dynamics [5]. In order to ... | It has been shown that a higher-order correction to the approximate Koopman operator can be obtained using the Mori-Zwanzig formalism by accounting for the residual dynamics through the non-Markovian term. Lin et al. [21] proposed a data-driven method for this purpose that recursively learns the memory kernels using Mo... | Observing these problems, this work proposes an interpretable data-driven reduced order model termed Mori-Zwanzig autoencoder (MZ-AE), which exploits the Mori-Zwanzig formalism and approximates the invariant Koopman subspace in the latent manifold of a nonlinear autoencoder. A higher-order non-Markovian correction is pr... | passed through a nonlinear autoencoder to produce a small set of observables enriched with the nonlinearities of the dynamical system. To ensure the observables lie in the linearly invariant subspace, an approximate Koopman operator is obtained through linear regression in time over these observables. The motivating id... | B
We now compare our contributions to [13], which is the closest related work. In the aforementioned reference, the authors derive a minimax sample-complexity lower bound of $\Omega\left(\frac{1}{\epsilon}\right)$ in a probabilistic sense for ... | In Section 3.1, we looked at the case where the cost function $f$ was deterministic. However, to get the $\Omega(\epsilon^{-2})$ sample complexity, we require that the cost $f_{1}(A)$... | Lower bounds: We derive a minimax sample complexity lower bound of $\Omega(1/\epsilon^{2})$ for risk estimation in two types of MCP problem instances: one with deterministic costs and the other with stochastic costs. In eith... | It now remains to derive the lower bounds for the CVaR case. The above proof works in more or less the same way, except for some minor modifications. In the CVaR case, consider the optimization problem that is analogous to (33). Due to (2), any $(p,q)$-pair that is feasible for (33) is als... | In contrast, the proofs of our lower bounds are more challenging owing to the lack of a closed form expression for the risk measures we consider. Moreover, our lower bounds, when specialized to mean estimation, lead to an improvement in comparison to [13]. | D
$dV_{t}=b(V_{t})\,dt,\qquad V_{0}=\frac{1}{n_{\text{in}}}[\langle x^{\alpha},x^{\beta}\rangle]_{\alpha,\beta=1}^{m}\,,$ | Starting with the Markov chain in Lemma 3.2, we will treat the random term of order $O(n^{-1/2})$ as part of the drift instead. | At the same time, since the higher order term in the Markov chain is at the desired order of $O(n^{-3p})$, which will vanish in the limit, we get the desired result. | We will start by deriving the precise Markov chain update up to a term of size $O(n^{-3p})$, which will be a slight modification of the Euler discretization we saw in Equation 2.4. | In view of the SDE convergence theorem Proposition A.7, if we eventually reach an SDE, we will only need to keep track of the expected drift $\mu_{r}$ instead of the random drift. | A
It is shown that a sequence of slowly decreasing step sizes $\gamma_{i}=Ai^{-\zeta}$ would lead to rate-optimal estimators, where $A$ is a positive... | The second line of the kernel-SGD update (8) also suggests a direct way to construct the update from basis expansions without specifying a kernel. | The performance of kernel SGD depends on the chosen kernel function $\mathcal{K}$ and the learning rate sequence $\gamma_{i}$. | In a stochastic approximation-type estimator (2), each new sample point is used to update the current estimates. Our method for tuning sequence selection is based on the idea of “rolling validation”, which, in addition to the estimate update, also uses the new sample point to update the online prediction accuracy of ea... | We illustrate this issue using the reproducing kernel stochastic gradient descent estimator (kernel-SGD). Let $\mathcal{K}(\cdot,\cdot):\mathbb{R}^{p}\times\mathbb{R}^{p}\rightarrow\mathbb{R}$... | A
Here, we mimic outbreak data collected from households (Walker et al., 2017), by generating $h$ observations with household sizes uniformly sampled between 2 and 7. We use values $h=100,~200$ and 500, assume that all households are independent, and that all outbreaks are gove... | Table 2 shows a comparison of the posterior statistics for the parameters $R_{0}$ and $\kappa$. The metrics for the other parameters are deferred to Appendix C. The bias in the mean value of $R_{0}$... | Our proof of concept example is a single observed outbreak in a population of size 50. The left of Figure 3 shows the posterior pairs plot of SNL compared to PMMH, which clearly shows that SNL provides an accurate approximation of the true posterior for this experiment; see Appendix C for a quantitative comparison. For... | Table 3 shows the comparison between the posterior statistics for the experiments with $r=0.9$, which suggests negligible differences between the posterior mean and standard deviations. It is also worth noting that SNL correctly reproduces a highly correlated posterior between $d_{1}$... | Table 1 shows the quantitative comparison between the two posteriors, indicating a negligible bias in the SNL means on average. The SNL variance appears to be slightly inflated on average, mainly for the $h=500$ experiments, though this is partly due to the PMMH variance shrinking with $h$,... | D
$(\mathscr{E}_{\ell_{h_{\mathscr{Y}}}}(h_{n+1})-\mathscr{E}^{*}_{\ell_{h_{\mathscr{Y}}}}(\mathscr{H}_{n+1}))$ | $\Gamma_{1}(\epsilon_{1})+\Gamma_{2}(\epsilon_{2})$... | constant, the score-based abstention estimation loss $(\mathscr{E}_{\mathsf{L}_{\mathrm{abs}}}(h)-\mathscr{E}^{*}_{\mathsf{L}_{\mathrm{abs}}}(\mathscr{H}))$... | $\epsilon_{2}$, then, modulo constant factors, the score-based abstention | calibration gap of the score-based abstention loss $\mathsf{L}_{\mathrm{abs}}$ and that | C
(Mozannar and Sontag, 2020; Cao et al., 2022; Mao et al., 2024b). Another problem closely related to | (Madras et al., 2018; Raghu et al., 2019a; Mozannar and Sontag, 2020; Okati et al., 2021; Wilder et al., 2021; Verma and Nalisnick, 2022; Narasimhan et al., 2022; Verma et al., 2023; Mao et al., 2023a; Cao et al., 2023; Mao et al., 2024a; Chen et al., 2024; Mao et al., 2024d). | (Cortes et al., 2016a, b, 2023; Cheng et al., 2023; Mohri et al., 2024; Li et al., 2024); and a more | Awasthi et al. (2021a, c, 2022a, 2022b, 2023, 2024); Mao et al. (2023c, d, e); Zheng et al. (2023); Mao et al. (2023b, f, 2024e, 2024c). | (Mozannar and Sontag, 2020; Cao et al., 2022; Mao et al., 2024b). Another problem closely related to | A |
$\mathscr{E}_{\ell_{2}}(h)-\mathscr{E}^{*}_{\ell_{2}}(\mathscr{H})+\mathscr{M}_{\ell_{2}}(\mathscr{H})).$ | $\mathscr{M}_{\ell_{2}}(\mathscr{H})=\mathscr{A}_{\ell_{2}}(\mathscr{H})$... | $\mathscr{M}_{\ell_{2}}(\mathscr{H})=\mathscr{A}_{\ell_{2}}(\mathscr{H})$... | For a target loss function $\ell_{2}$ with discrete outputs, such as the | target loss function $\ell_{2}$ and a surrogate loss function $\ell_{1}$, | C
2. We obtain fast rates for empirical risk minimization procedures under an additional classical assumption called a Bernstein condition. Namely we prove upper bounds on the excess risk scaling as $1/(np)$, which matches fast rate results in the standard, balanced case, up to replac... | The argument from the cited reference relies on a fixed point technique relative to a sub-root function upper bounding some local Rademacher complexity. Leveraging fine controls of the latter (Section D.1) we establish that the fixed point of the sub-root function is of order $O(\log(n)/n)$... | Outline. Some mathematical background about imbalanced classification and some motivating examples are given in Section 2. In Section 3, we state our first non-asymptotic bound on the estimation error over VC class of functions and consider application to $k$-nearest neighbor classification rules. In Section 4... | The previous result shows that whenever $np\to\infty$, learning from ERM based on a VC-type class of functions is consistent. Another application of our result pertains to $k$-nearest neighbor classification algorithms. In this case the sharpness of our bound is fully exploited by ... | Our purpose is to obtain upper bounds on the deviations of the empirical risk (and thus on the empirical risk minimizer) matching the state-of-the art, up to replacing the sample size $n$ with $np$, the mean size of the rare class. To our best knowledge, the theoretical results which come... | B
We have presented novel empirical evidence for the existence of grokking in non-neural architectures and discovered a data augmentation technique which induces the phenomenon. Relying upon these observations and analysis of training trajectories in a GP and BNN, we suggested a mechanism for grokking in models where sol… | For both GP learning scenarios we also completed experiments without the complexity term arising under the variational approximation. The results of these experiments, namely a lack of grokking, can be seen in Appendix L. This demonstrates that some form of regularisation is needed in this scenario and provides further… | All experiments can be found at this GitHub page. They have descriptive names and should reproduce the figures seen in this paper. For Figure 6, the relevant experiment is in the feat/info-theory-description branch. | To discover the relationship between concealment and grokking, we measured the “grokking gap” $\Delta_{k}$. In particular, we considered how an increase in the number of spurious dimensions relates to this gap. The algorithm used to run the experiment is… | Having been proposed to explain the empirical observation we have uncovered in this paper, Mechanism 1 should be congruent with these new findings – the first of which is the existence of grokking in non-neural models. Indeed, one corollary of our theory (Corollary 1) is that grokking should be model agnostic. This is … | B
Further, for any bivariate copula $C$ and for all univariate distribution functions $F_{1}$ and $F_{2}$, the right-hand side of (2) defines a bivariate distribution function. | Furthermore, the upper orthant order on $\mathcal{C}_{2}$ is defined by the pointwise comparison of survival functions of bivariate copulas, i.e., | To be more precise, consider for a bivariate copula $E$ the subclass $\mathcal{C}^{E}:=\{C\in\mathcal{C}_{2}\mid C\leq_{\partial_{1}S}E\}$… | Statement (i) is equivalent to $\mathcal{C}^{D}\subseteq\mathcal{C}^{E}$. | Denote by $\mathcal{C}_{2}$ the class of bivariate copulas. | D
Theorem 1 implies that our proposed weights, $\frac{\mathbb{E}\left[Z|X_{E}\right]}{p}$ and $\frac{1-\mathbb{E}\left[Z|X_{E}\right]}{1-p}$… | In this section, we construct a potential outcomes model (Imbens and Rubin, 2015) for A/B tests that incorporate the training | In this section, we present simulation results. In subsection 5.1, we specify the simulation setup and the implementation | Once again, our approach demonstrates the lowest bias and reasonable variance. However, it’s important to note that in this case with $p=0.2$, the data splitting method exhibits higher bias and variance compared to the simulation with $p=1/2$. | The rest of the paper is organized as follows: Section 2 discusses related literature on interference in A/B tests. Section 3 introduces a potential outcome framework modeling interference caused by data training loops. Section 4 presents our weighted training approach along with theoretical justification. Section 5 sh… | B
Each of the five factors is accompanied by a designated set of predefined levels of variation, which are listed in Table 5.1. These levels were determined to cover a range of values that would effectively capture the variability and impact of these factors on the desired coating properties. The chosen levels allow for ... | Figure 5.3: Photograph illustrating the experimental setup during the HVOF coating process, showing the robot, turning lathe, and coating stream in action. | Thermal spraying is a versatile and widely used surface engineering technique that involves the deposition of coatings on the surface of a substrate to enhance its functional properties, such as wear resistance, corrosion resistance, and thermal insulation. The thermal spray coating process typically involves the appli... | The HVOF coatings were produced using an Oerlikon Metco thermal spraying equipment, namely the DJ 2700 gas-fuel HVOF system with water-cooled gun assembly. The fuel gas used for these tests was propane, its amount and ratio defined by the two key factors TGF and Lambda. For the process preparation, steel plates of type... | The selected factors play a critical role in the HVOF coating process, exerting significant influence on the quality and performance of the resultant coatings. The PFR governs the amount of coating material supplied, while the SOD regulates the spacing between the spray gun and the substrate. The stoichiometric ratio o... | C |
$\operatorname*{Minimize}_{c\in\mathcal{C}}\;|c|,\ \text{subject to}\ \Delta_{c}\geq\varepsilon\cdot\mathbb{E}\left[f(x)\right]\,.$ | In essence, using Eq. (1), we attribute to any candidate the drop in prediction of samples where the candidate is perturbed. | This definition guarantees that, for a large number of samples, the empirical drop is a good estimate of Eq. (1), as expressed by the following: | Calculating the prediction drop for each candidate in closed form, as formulated in Eq. (1), necessitates an exhaustive search and evaluation of $2^{b}$ candidates—an impractical endeavor for large documents. | The optimal candidate, denoted as $c^{\star}$, is determined by minimizing the size of the candidate subset while ensuring that it causes the average prediction $\mathbb{E}\left[f(x)\right]$… | C
From 2016 to 2023, a noticeable shift in price dynamics emerges towards the end of 2021. As a result, we can observe three distinct phases: a period of stability, a subsequent phase characterized by increased volatility, and an intermediate transitory interval. | While the electricity market has been gaining attention over the years (Hong et al., 2020), and a rich literature related to Day-Ahead market price forecasting has been developed (Lago et al., 2021), most studies focus on older stable periods that do not reflect the peculiarities of the current market. | In this work, a new framework for a price prediction model in the Day-Ahead market based on the price dynamics has been proposed. This new approach has been thoroughly studied, demonstrating improved results across various metrics and showing statistical improvement in five different markets and two distinct market per… | EPF is an open field in which a wide variety of tasks are included, mainly depending on the market being dealt with: Day-Ahead market, Intra-Day markets or Balancing markets. Among these, the Day-Ahead market has garnered the most significant attention. While the regulatory framework of this market varies across countr… | Although probabilistic forecasting is beyond the scope of this study, it is worth noting that a significant portion of the EPF field is devoted to this area. Therefore, it is necessary to mention some of the main works within this particular trend. A satisfactory idea was introduced in Nowotarski and Weron (2015): appl… | A
For the nonmixing case, Config3 (Fig. 4, third row), the univariate estimator $\underline{\hat{H}}^{\rm U}$ naturally shows much smaller biases compared to the multivariate estimators $\underline{\hat{H}}^{\rm M}$… | The asymptotic estimation performance is studied theoretically in Section IV-B. The finite-size estimation performance is investigated numerically in Section V. | The estimation performance of the proposed multivariate estimator is compared, in terms of biases, variances, MSE, and covariance structures, to an earlier multivariate estimator defined in [25, 26] and also to the univariate estimator defined in [19]. | Importantly, this shows that even for data corresponding to nonmixing situations, there is no cost in estimation performance associated with the use of multivariate estimators. | For the nonmixing case with equal $H_{m}$, Config4 (Fig. 4, fourth row), the univariate estimator $\underline{\hat{H}}^{\rm U}$… | C
The section culminates in a result showing that the empirical variance is a distribution-uniform almost-surely consistent estimator for the true variance and that its convergence rate is polynomial in the sample size (Section 3.2), which, when combined with Eq. 16 from Section 2, yields our main result in Section 3.3. | We will now shift our focus to sequential conditional independence testing with anytime-valid type-I error guarantees. Before deriving an explicit test, we first demonstrate in Section 4.3 that the hardness of conditional independence testing highlighted in (42) has a similar analogue in the anytime-valid regime. | Section 4 applies the content of the previous sections to the problem of anytime-valid conditional independence testing. We first show that distribution-uniform anytime-valid tests of conditional independence are impossible to derive without imposing structural assumptions, a fact that can be viewed as a time-uniform a… | While Section 2 is a natural extension of distribution-uniform inference to the anytime-valid setting, it is deceptively challenging to derive procedures satisfying Section 2 even for the simplest of statistical problems, such as tests for the mean of independent and identically distributed random variables and the main… | The proof can be found in Section A.4. It should be noted that Section 4.3 is not an immediate consequence of S&P’s fixed-$n$ hardness result in (42) since, while it is true that the time-uniform type-I error in the right-hand side of (51) is always larger than its fixed-$n$ counterpart, the time-unifo… | B
By applying the loss function shown in Eq. 5, we obtain representations $q_{\phi_{u}}(r_{u}\mid x)$… | Thanks to the availability of the Reasoner dataset, we have both general feedback data and real user preference labels. We evaluate the performance of all baselines and SLFR on real user preference label data, trained with general feedback data. The results are shown in Table 4. The methods that fit the dat… | where $\gamma$ is a temperature hyperparameter used to control the debiasing strength of the model; higher values of $\gamma$ imply stronger debiasing. We discuss the effect of the value of $\gamma$ in the following experiments. The framework of SLFR is shown in Figure 3. | (1) We investigate the new problem of debiasing in recommender systems when incorporating the effects of former recommender systems and unmeasured confounders. (2) We state the assumption of independence of confounders and user preferences, the basis for separating them in the latent parameter space. (3) We propose a n… | In order to address the General Debiasing Recommendation Problem, we propose a novel debiasing framework named SLFR, which consists of two stages. | D
$Q_{\tau}(Y|\emptyset)=Q_{\tau}(Y)$, which is the $\tau$-th unconditional quantile of $Y$. | This condition is often used in the literature; see, for example, Huang et al. (2010), Fan et al. (2011), He et al. (2013), Zhong et al. (2020), and references therein. | also widely used in the variable screening literature; see, for example, Fan and Lv (2008), Fan et al. (2011), Li et al. (2012), He et al. (2013), Ma et al. (2017), | The B-spline approximation technique has been widely used to approximate unknown functions in nonparametric regression; see, for example, Sherwood and Wang (2016), Fan et al. (2011), He et al. (2013), | also widely used in the variable screening literature; see, for example, Fan and Lv (2008), Fan et al. (2011), Li et al. (2012), He et al. (2013), Ma et al. (2017), | C
$Q^{3}_{b,c}$: a locally defined index-set (see (3.81) or (4.25)) | $S_{\bullet}$: an operator on layouts (see (2.1)-(2.3) and (4.2)) | $J^{1}_{(k,q)}$: a locally defined operator on route sequences (see (2.24) or (3.21)) | $J^{2}_{(k,q)}$: a locally defined operator on route sequences (see (2.41) or (3.52)) | $\mathcal{S}_{\bullet}$: an operator on layouts (see (3.1)-(3.4)) | A
Additionally, one can notice that the rows of the generated matrix mirror those of the Hadamard matrix $H_{N}$, but in a different order. Essentially, there exists a permutation matrix $O_{ij}$… | Figure 5 illustrates the relative frequency distribution of the energies of discovered solutions in relation to the planted energies. The symbol $\Omega$ represents the entire probability space, which is partitioned based on the likelihood of events associated with finding a solution within specified energy rang… | One of the benchmarks for the initial testing of the CIM was the specific Möbius ladder graph instance [15]. However, it appeared that the minimisation of the Ising Hamiltonian on such graphs does not pose serious difficulty, and many optical optimization machines show a good performance using such instances. This problem … | Models with planted solutions are an old subject in information theory and statistical physics [55, 56, 57]. Addressing planted solution problems appeared in many other domains beyond optimization, e.g. inference-related tasks [25] or image-reconstruction [58, 59]. For instance, the Wishart planted ensemble was int… | However, many of the suggested benchmarking instances have specific drawbacks and cannot characterise the physical systems’ evolution. Some of them are specifically tailored to particular hardware to highlight its strengths (e.g. Möbius ladder instances) or inherently possess statistical properties that make it hard to… | C
For the first component $z_{1}$, we select the parameters $\boldsymbol{d}_{\sigma}=(1,1)$, $\alpha_{\sigma}=2$… | While we have theoretical results on identifiability when the variances of the latent components vary enough based on the auxiliary variable $\boldsymbol{u}$, these results apply only in the limit of infinite data. In real-life applications, with finite data, there is however no guarantee that the ide… | Settings 1 and 2 are considered the easiest settings for the iVAE model as variances (and means in Setting 2) of the latent fields explicitly change between the clusters. These settings are spatial variants of the time series settings of some previous simulation studies, such as [13, 16], where the latent components ha… | In Settings 1, 2, 3 and 6, where the variances of the latent fields were varying based on the spatial location, and thus fulfilling the identifiability conditions, iVAE showed superior performance as compared with SBSS, SNSS, and TCL. In Setting 1, where the latent fields are zero-mean Gaussian with varying variances b… | Based on the results of this paper, iVAE is a preferable method in settings where the variances of the latent fields are not stable across space. However, in stationary settings where the sample mean and sample variance did not change enough, SBSS still performed better. In practice, this means for example having smal… | B
Hence, since the comonotonic coupling is optimal for submodular cost functions [24], the result follows. | The optimal coupling of the MK minimisation problem induced by the score given in (11) is the comonotonic coupling. | The comonotonic coupling is also optimal when the cost function is a score that elicits the $\alpha$-expectile. | The optimal coupling of the MK minimisation problem induced by the scoring function given in (14) is the comonotonic coupling. | The optimal coupling of the MK minimisation problem induced by any consistent scoring function for the entropic risk measure is the comonotonic coupling. | B
$\|u-v\|^{2}_{K}/\|u\|^{2}_{K}\sim\log p(\xi(X_{u})=\cdots)/\log p(\xi(X_{u})=f(X_{u}))$ | Hence, the relative error in KFs is like a log-likelihood ratio. This fact allows the application of some tools from AIT, as explained below, to show that the relative error, and consequently KFs, can be viewed as a measure of data compression in AIT. | In Section 2 we show that the relative error used to learn the kernel in the original version of Kernel Flows can be viewed as a log-likelihood ratio. In Section 3, we give a brief introduction to AIT and introduce Kolmogorov Complexity (KC) and the Minimal Description/Message Length (MDL/MML) principle. In Section 4 w… | Now, let us consider the problem of learning the kernel from data. As introduced in [OY19], the method of KFs is based on the premise that a kernel is good if there is no significant loss in accuracy in the prediction error if the number of data points is halved. This led to the introduction of (footnote: A variant of KFs based… | In this paper, we look at the problem of learning kernels from data from an AIT point of view and show that it can be viewed as a problem of data compression. In particular, using the Minimal Description Length (MDL) principle, we show that Sparse Kernel Flows [YSH$^{+}$… | A
This work contains three sections along with the introductory Section 1. In the preliminary Section 2, we present all our $q$-definitions. | In the main Section 3, we state and prove our results concerning the $q$-order statistics and their distributional properties. | We have studied their main properties concerning the $q$-distribution functions and $q$-density functions of the relative $q$-ordered random variables. | Order statistics and their properties have been studied thoroughly over the last decades. The literature devoted to order statistics | The main objective of this work is to introduce $q$-order statistics, for $0<q<1$, arising from dependent and not identically distributed $q$-continuous random variables and to study their distributional properties. We introduce $q$-order statistics as $q$-anal… | A
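Given the schema declared at the top of the table (a context excerpt, four candidate continuations A-D, and a label column with four classes), rows like the ones above can be consumed directly with the Hugging Face `datasets` library. A minimal sketch, assuming a placeholder repository id since the actual one is not shown on this page:

```python
# Minimal sketch: iterate over rows of a multiple-choice continuation dataset
# with the Hugging Face `datasets` library.
# "user/paper-continuation-mcq" is a placeholder id, not the real repository name.
from datasets import load_dataset

ds = load_dataset("user/paper-continuation-mcq", split="train")

row = ds[0]
print(row["context"][:200], "...")
for option in ("A", "B", "C", "D"):
    mark = "*" if option == row["label"] else " "
    # Candidate cells are truncated paper excerpts; print a short preview.
    print(f"{mark} {option}: {row[option][:120]} ...")
```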
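One common way to baseline a multiple-choice continuation task like this one is to score each option by its average token log-likelihood under a causal language model and predict the argmax. The sketch below is a hypothetical baseline, not an evaluation protocol published with this dataset: the model choice (gpt2) is arbitrary, and slicing off the candidate tokens by the length of the context's tokenization is a simplification that can be off by a token at BPE boundaries.

```python
# Hedged baseline sketch: pick the candidate continuation with the highest
# average token log-likelihood under a small causal LM. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def continuation_score(context: str, candidate: str) -> float:
    """Average log-probability of `candidate` tokens given `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    # Very long rows may need truncation to the model's context window (1024 for gpt2).
    full_ids = tokenizer(context + " " + candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probabilities at each position for predicting the *next* token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.numel()), targets]
    # Keep only the candidate's tokens (positions after the context prefix);
    # assumes the context tokenization is an exact prefix of the concatenation.
    n_ctx = ctx_ids.shape[1]
    return token_lp[n_ctx - 1:].mean().item()

def predict(row: dict) -> str:
    scores = {opt: continuation_score(row["context"], row[opt]) for opt in "ABCD"}
    return max(scores, key=scores.get)

# Usage on a loaded row: accuracy is then the fraction of rows where
# predict(row) == row["label"].
```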