(14), empirical PSIS result (blue dots) for the average sample size to obtain fixed L1 deviation, minimum sample size (yellow line) required
(10), empirical IS result (green dots) for the average sample size to obtain fixed L1 deviation (from 10 000 repeated simulations). The required sample size grows more quickly for IS than PSIS, and for PSIS quickly grows infeasibly large when k>0.7.
(14), empirical PSIS result (blue dots) for the average sample size to obtain fixed L1 deviation, minimum sample size (yellow line) required
(11), and empirical PSIS result (blue dots) for the average sample size to obtain fixed RMSE (from 10 000 repeated simulations). The required sample size quickly grows infeasibly large when k>0.7.
Figure 4: Convergence rate as a function of k and S. Red dashed line shows the theoretical convergence rate based on the CLT and generalized CLT. Blue dots show the empirical convergence rate from the simulation with Pareto distributed ratios (from 10 000 repeated simulations). Empirical converg...
A
$\beta\sim N(0,\lambda^{-1}(M_{w}^{\top}M_{w})^{-})$
$Y_{v}\mid\beta\overset{\text{ind}}{\sim}\text{Po}[\exp(\beta(v))].$
$Y_{v}\overset{\text{ind}}{\sim}\text{Po}[\exp(x_{v}^{T}\beta)].$
The maximum a posteriori (MAP) estimate $\widehat{\beta}$ for $\beta$ is
$r_{v}=(Y_{v}-\widehat{\mu}_{v})/\sqrt{V(\widehat{\mu}_{v})}$
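The residual above is the usual Pearson residual. A minimal sketch of how it is computed, assuming a Poisson response so that the variance function is $V(\mu)=\mu$ (the specific variance function is not stated in this excerpt):

```python
import numpy as np

def pearson_residuals(y, mu_hat):
    """Pearson residuals r_v = (Y_v - mu_hat_v) / sqrt(V(mu_hat_v)).

    Assumes a Poisson model, for which the variance function is
    V(mu) = mu; substitute another variance function as needed."""
    y = np.asarray(y, dtype=float)
    mu_hat = np.asarray(mu_hat, dtype=float)
    return (y - mu_hat) / np.sqrt(mu_hat)

# Toy counts and fitted means (illustrative values only).
r = pearson_residuals([3, 0, 5], [4.0, 1.0, 4.0])
```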
C
Aside from the challenges above, many real-world biological and medical data sets are collected along with multiple response variables. Some of these responses are more closely related than others and may share common relevant covariates, thus forming trees or other kinds of structures [15, 16, 17, 18]. For insta...
Thus, to improve the performance of variable selection, we consider incorporating the complex correlation structure of the responses. In this paper, we extend the recent solutions of the sparse linear mixed model [8, 9] that can correct confounding factors and perform variable selection simultaneously fu...
Based on the sparsity of $\beta$, it is reasonable to assume that $\beta$ follows a Laplace shrinkage prior. Such an assumption leads to the sparse linear mixed model. However, the sparse LMM fails to consider the relatedness among response variables. This defect drives us to the tree-guided sparse linear mixed ...
To address these problems, we propose the tree-guided sparse linear mixed model for sparse variable selection. Apart from extending the recent solutions of the LMM that can correct confounding factors, we can perform variable selection simultaneously, further accounting for the relatedness between different responses. By conduc...
The linear mixed model (LMM) is an extension of the standard linear regression model that explicitly describes the relationship between response variables and explanatory variables, incorporating an extra random term to account for confounding factors. To introduce the sparse linear mixed model, we briefly revisit the c...
A
(d) Cumulative regret for SMC-based Bayesian policies in scenario F: known and unknown dynamic parameters.
Mean regret (standard deviation shown as shaded region) in contextual, linear Gaussian bandit Scenarios A and B
Mean regret (standard deviation shown as shaded region) in contextual, linear Gaussian bandit Scenarios A and B
Mean regret (standard deviation shown as shaded region) in contextual, non-stationary categorical bandit Scenarios E and F
Mean regret (standard deviation shown as shaded region) in contextual linear logistic dynamic bandit Scenarios C and D
C
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
The data collection study was conducted from the end of February to the beginning of April 2017 by Emperra and includes 10 patients who were given specially prepared smartphones. Measurements of carbohydrate consumption, blood glucose levels, and insulin intake were made with Emperra's Esysta system. Measurements of physical ac...
In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
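The active-interval definition above can be sketched with pandas: bin a step log into 10-minute windows and count those with at least 10 steps. The timestamps and step counts are hypothetical; the excerpt does not show the raw data format.

```python
import pandas as pd

# Hypothetical step log for one patient (timestamped step counts).
steps = pd.DataFrame({
    "time": pd.to_datetime([
        "2017-03-01 08:00", "2017-03-01 08:05",
        "2017-03-01 08:10", "2017-03-01 09:00",
    ]),
    "steps": [6, 7, 20, 3],
})

# Sum steps into 10-minute bins, then count "active" bins (>= 10 steps).
per_bin = steps.set_index("time")["steps"].resample("10min").sum()
active_intervals = int((per_bin >= 10).sum())
```

Here the 08:00 and 08:05 entries fall into the same 10-minute bin, which together with the 08:10 bin gives two active intervals.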
Patient 10, on the other hand, has a surprisingly low median of 0 active 10-minute intervals per day, indicating missing values due to, for instance, not carrying the smartphone at all times.
Table 2: Descriptive statistics for the number of patient data entries per day. Active intervals are 10-minute intervals with at least 10 steps taken.
B
We also assess bias and absolute bias of the outcomes of interest (for Simulations 1 and 2, $\hat{\lambda}_{2}$; for Simulation 3, $\hat{\beta}_{1}$, ...
Table 7 shows the power of the traditional Sargan’s Test and the BMA Sargan’s test to detect model misspecification.
Table 4: Simulation 2 Results: Sargan’s Test Power. For Invalid and Correct MIIVs, Sargan’s Test Power is for the traditional test. For MIIV-2SBMA, power is for the BMA Sargan’s Test.
Table 7: Simulation 3 Results: Sargan’s Test Power. For Invalid and Correct MIIVs, Sargan’s Test Power is for the traditional test. For MIIV-2SBMA, power is for the BMA Sargan’s Test.
Table 1: Simulation 1 Results: Sargan’s Test Power. For Invalid and Correct MIIVs, Sargan’s Test Power is for the traditional test. For MIIV-2SBMA, power is for the BMA Sargan’s Test.
D
Our results are poor with 20K interactions. For 50K they are already almost as good as with 100K interactions. From there the results improve until 500K samples – it is also the point at which they are on par with model-free PPO. Detailed per game results can be found in Appendix F.
Since the publication of the first preprint of this work, it has been shown in van Hasselt et al. (2019); Kielak (2020) that Rainbow can be tuned to have better results in low data regime. The results are on a par with SimPLe – both of the model-free methods are better in 13 games, while SimPLe is better in the other 1...
Such behavior, with fast growth at the beginning of training but lower asymptotic performance, is commonly observed when comparing model-based and model-free methods (Wang et al. (2019)). As observed in Section 6.4, assigning a bigger computational budget helps in the 100K setting. We suspect that gains would be ev...
This demonstrates that SimPLe excels in a low data regime, but its advantage disappears with a bigger amount of data.
In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in the low data regime of 100k samples, on more than half of the games, our method achieves a score...
C
$\log CP_{2}(l,t)=\log(n_{2}(l,t))+\log(1\,\ldots)+\log(W_{def}(l,t))$
$\dot{x}_{1}=(1.46)x_{1}^{.129}y_{1}^{.404}+(.906)x_{1}^{.138}y_{2}^{.136}$
$\dot{y}_{1}=(.704)y_{1}^{.129}x_{1}^{.404}+(.953)y_{1}^{.138}x_{2}^{.136}$
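A small sketch of the two recovered right-hand sides as Python functions, useful for checking the coefficients. The components $x_2$ and $y_2$ are the remaining state variables, whose own equations are not shown in this excerpt, so here they are passed in as plain arguments.

```python
def x1_dot(x1, y1, y2):
    # dx1/dt = 1.46 * x1^0.129 * y1^0.404 + 0.906 * x1^0.138 * y2^0.136
    return 1.46 * x1**0.129 * y1**0.404 + 0.906 * x1**0.138 * y2**0.136

def y1_dot(y1, x1, x2):
    # dy1/dt = 0.704 * y1^0.129 * x1^0.404 + 0.953 * y1^0.138 * x2^0.136
    return 0.704 * y1**0.129 * x1**0.404 + 0.953 * y1**0.138 * x2**0.136

# At x1 = y1 = x2 = y2 = 1 every power term equals 1, so each derivative
# reduces to the sum of its two coefficients.
v1 = x1_dot(1.0, 1.0, 1.0)   # 1.46 + 0.906
v2 = y1_dot(1.0, 1.0, 1.0)   # 0.704 + 0.953
```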
$CP(x^{1},y^{1},t),CP(x^{1},y^{1},t),\dots,CP(x^{n},y^{n},t^{n})\in F_{\phi}$
$KS=\max_{1\leq t^{*}\leq 30}\left[F(\hat{\dot{e}}_{t^{*}})-\frac{t^{*}-1}{30},\;\frac{t^{*}}{30}-F(\hat{\dot{e}}_{t^{*}})\right]$
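The statistic above is the classical Kolmogorov–Smirnov statistic computed over the 30 order statistics of the residuals. A minimal sketch, with the reference CDF passed in as a function (the excerpt does not fix which CDF $F$ is):

```python
import numpy as np

def ks_statistic(residuals, cdf):
    """KS = max_{1<=t*<=n} [ F(e_(t*)) - (t*-1)/n , t*/n - F(e_(t*)) ],
    where e_(t*) are the sorted residuals (n = 30 in the excerpt)."""
    e = np.sort(np.asarray(residuals, dtype=float))
    n = len(e)
    t = np.arange(1, n + 1)
    F = cdf(e)
    return float(max(np.max(F - (t - 1) / n), np.max(t / n - F)))

# Sanity check: an "ideal" uniform sample placed at bin midpoints,
# compared against the identity CDF, gives KS = 1/60.
ks = ks_statistic((np.arange(1, 31) - 0.5) / 30, lambda x: x)
```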
C
In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameters. Inspired by momentum and Nesterov’s accelerated gradient descent, momentum SGD (MSGD) (Polyak, 1964; Tseng, 1998; Lan, 2012; Kingma and Ba, 2015) has been proposed and widely used in machine learning. In pra...
In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameters. Inspired by momentum and Nesterov’s accelerated gradient descent, momentum SGD (MSGD) (Polyak, 1964; Tseng, 1998; Lan, 2012; Kingma and Ba, 2015) has been proposed and widely used in machine learning. In pra...
With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training.
Assume we have $K$ workers. The training data are distributed or partitioned across the $K$ workers. Let $\mathcal{D}_{k}$ denote the training data stored on worker $k$, and $F_{k}(\mathbf{w})=\frac{1}{|\mathcal{D}_{k}|}\sum_{\xi\in\mathcal{D}_{k}}f(\mathbf{w};\xi)$ ...
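The local objective $F_k$ can be sketched directly from its definition. The per-example loss $f$ below is a toy squared error, and the global objective is taken as the plain average of the local ones, which assumes equally sized shards (an assumption of this sketch, not stated in the excerpt):

```python
import numpy as np

# Toy per-example loss f(w; xi): squared error between scalar w and sample xi.
def f(w, xi):
    return (w - xi) ** 2

def local_objective(w, D_k):
    """F_k(w) = (1/|D_k|) * sum_{xi in D_k} f(w; xi)."""
    return sum(f(w, xi) for xi in D_k) / len(D_k)

# With equally sized shards, the global objective is the average of the
# K local objectives.
shards = [[1.0, 2.0], [3.0, 5.0]]
F = np.mean([local_objective(0.0, D_k) for D_k in shards])
```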
Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model trai...
B
For this case $d^{(i)}$ does not apply.
Although ReLU creates exact zeros (unlike its predecessors sigmoid and tanh), its activation map consists of sparsely separated but still dense areas (Fig. 1, ReLU subfigure) instead of sparse spikes.
Sparsely Activated Networks (SANs) (Fig. 2) in which spike-like sparsity is enforced in the activation map (Fig. 1, extrema-pool indices and extrema subfigures) through the use of a sparse activation function.
The ReLU activation function produces sparsely disconnected but internally dense areas, as shown in Fig. 1 (ReLU subfigure), instead of sparse spikes.
Recently, in $k$-Sparse Autoencoders [21], the authors used an activation function that applies thresholding until the $k$ most active activations remain; however, this non-linearity covers a limited area of the activation map by creating sparsely disconnected dense areas (Fig. 1, topkabs...
C
$\mathbb{E}(\phi_{n})\leq\alpha+O(1/m)+O(m\gamma_{0n})+O(m^{2}/|\mathcal{G}_{\ldots}|)+A(\gamma,m)\cdot O(n^{-1/3}).$
Indeed, Theorem 2 of this paper shows that the rate of convergence of Condition (C1) determines a finite-sample bound between the Type I error rates of $\phi_{n}$ and $\phi_{n}^{*}$...
The condition only stipulates that the variation in the error from the approximate randomization test using the proxy variables (numerator) is dominated by the variation in the spacings of the statistic values in the original randomization test using the true variables (denominator).
The key implication of this result is that the approximate randomization test ‘inherits’ the asymptotic properties of the original randomization test as long as
Next, we use Theorem 2 to establish a finite-sample bound for the Type II error of the approximate randomization test. This shows that the approximate test is consistent as long as the “signal” dominates the natural variation in the true randomization test.
D
Given the challenge to identification, empirical evidence on the effect of turning away volunteers on future behavior is scant. One context that has received some attention for identifying the effect of temporary rejections on future volunteering is blood donations (e.g., Custer et al., 2007; Bruhin et al., 2020). Understanding how a...
Not all attempted donations are successful.\footnote{In our data, for unsuccessful donations, we do not know what type of donation was attempted. We can safely assume that it was a whole blood donation for women. For men, we do not know whether the attempted donation was for whole blood, plasma, or red cell apheresis.} If the ...
Despite these costs, in cases where there is already excess supply, the prevailing view across blood banks is that the risk of deferrals reducing future donations is too high, and thus donors will not be deferred unless there is a medical concern for the donor or if the donor is unable to provide a safe blood donation....
Men with a reported h-level between 13 and 13.5, allowing a plasma donation but not a whole blood donation, are very different from other men. They are more experienced, with a lower propensity to be first-time donors and a higher number of donations previous to this one. They are heavier and taller. They are also less l...
The Abu Dhabi Blood Bank collects different types of blood donations: whole blood, plasma, and red-cell apheresis.\footnote{They also collect (i) samples for medical tests, which are not meant to be used for donations, and (ii) autologous donations, where a person donates blood for their own future use, typically before a schedu...}
B
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene...
Standard Dropout is the original Dropout method, introduced in 2012. It provides a simple technique for avoiding over-fitting in fully connected neural networks [12]. During each training phase, each neuron is excluded from the network with a probability p. Once trained, in the testing phase the full network is us...
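The mechanism described above can be sketched in a few lines: during training each unit is zeroed with probability p, and at test time all units are kept with activations scaled by (1 - p) so expected magnitudes match training.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p, train):
    """Standard Dropout: during training each unit is dropped with
    probability p; at test time the full network is used and the
    activations are scaled by (1 - p)."""
    if train:
        mask = rng.random(x.shape) >= p   # keep each unit with prob 1 - p
        return x * mask
    return x * (1.0 - p)
```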
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments. This is a totally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Reinf...
Deep neural networks are the state-of-the-art learning models used in artificial intelligence. The large number of parameters in neural networks makes them very good at modelling and approximating any arbitrary function. However, the large number of parameters also makes them particularly prone to over-fitting, requiring...
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation between the variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and afte...
C
As baselines, we consider the network used to generate the word embeddings (Dense) and two more advanced architectures.
Interestingly, the GNNs configured with GRACLUS and NDP always achieve better results than the Dense network, even if the latter generates the word embeddings used to build the graph on which the GNN operates. This can be explained by the fact that the Dense network immediately overfits the dataset, whereas the graph s...
Then, we train a simple classifier consisting of a word embedding layer [53] of size 200, followed by a dense layer with a ReLU activation, a dropout layer [54] with probability 0.5, and a dense layer with sigmoid activation.
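A minimal numpy sketch of a forward pass through the classifier described above (embedding of size 200, dense + ReLU, dropout with probability 0.5, dense + sigmoid). The vocabulary size, hidden width, mean-pooling of embeddings, and small-scale random initialization are all assumptions of this sketch, not stated in the excerpt:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, emb_dim, hidden = 1000, 200, 64   # vocab and hidden are assumptions

# Small random init keeps logits in a reasonable range for this demo.
E  = rng.normal(size=(vocab, emb_dim)) * 0.01    # word embedding layer
W1 = rng.normal(size=(emb_dim, hidden)) * 0.01
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 1)) * 0.01
b2 = np.zeros(1)

def forward(token_ids, train=False, p=0.5):
    x = E[token_ids].mean(axis=0)                # pool word embeddings
    h = np.maximum(0.0, x @ W1 + b1)             # dense + ReLU
    if train:                                    # dropout with prob 0.5
        h = h * (rng.random(h.shape) >= p)
    logit = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logit))          # sigmoid output

prob = forward(np.array([3, 17, 256]))
```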
The first (LSTM), is a network where the dense hidden layer is replaced by an LSTM layer [55], which allows capturing the temporal dependencies in the sequence of words in the review.
The LSTM baseline generally achieves a better accuracy than Dense, since it captures the sequential ordering of the words in the reviews, which also helps to prevent overfitting on training data.
C
Sethi, Welbl (ind-full), and Welbl (joint-full) generate networks with around 980 000 parameters on average.
In the first hidden layer, the number of neurons equals the number of split nodes in the decision tree. Each of these neurons implements the decision function of the split nodes and determines the routing to the left or right child node.
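One common way to realize such a split neuron is a steep sigmoid over the split's decision function; the steepness constant below is an illustrative choice, not a value given in the excerpt:

```python
import numpy as np

def split_neuron(x, feature, threshold, a=100.0):
    """One first-hidden-layer neuron encoding the split 'x[feature] <= threshold'.
    A steep sigmoid (steepness a) approximates the hard routing decision:
    output ~1 routes to the left child, ~0 to the right child."""
    z = a * (threshold - x[feature])
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, 0.9])
left = split_neuron(x, feature=0, threshold=0.5)   # 0.2 <= 0.5, so ~1
```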
Compared to state-of-the-art methods, the presented implicit transformation significantly reduces the number of parameters of the networks while achieving the same or even slightly improved accuracy due to better generalization.
Welbl (2014) and Biau et al. (2019) follow a similar strategy. The authors propose a method that maps random forests into neural networks as a smart initialization and then fine-tunes the networks by backpropagation. Two training modes are introduced: independent and joint. Independent training fits all networks one af...
Of the four variants proposed by Welbl, joint training has a slightly smaller number of parameters compared to independent training because of shared neurons in the output layer.
D
And if $X_{t}$ satisfies an SDE, then by the Itô–Tanaka formula $\max\{X_{t},0\}$ and $\min\{X_{t},0\}$ ...
Indeed, study of (quasi) maximum likelihood estimators (MLE) of drift coefficients from high frequency observations
Nevertheless, in Theorem 2, we prove the analogue of Theorem 1 for the well-known estimator of the normalized number of crossings when the process is a more general threshold diffusion.
Theorem 1 has been applied, in [42], to exhibit the asymptotic behavior in high frequency of (quasi) MLE of the drift parameters of a threshold diffusion which is a continuous-time SETAR model: a threshold Ornstein-Uhlenbeck process which follows two different Ornstein-Uhlenbeck dynamics above and below a fixed thresho...
Some models in financial mathematics and econometrics are threshold diffusions, for instance continuous-time versions of SETAR (self-exciting threshold auto-regressive) models, see e.g. [15, 41]. SBM and OBM and their local time have been recently investigated in the context of option pricing, as for instance in [20] a...
C
where $\Lambda^{k}_{h}=\sum_{\tau=1}^{k-1}\phi(x_{h}^{\tau},a_{h}^{\tau})\phi(x_{h}^{\tau},a_{h}^{\tau})^{\top}+\lambda\cdot I.$
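The matrix above is a regularized Gram matrix of the feature vectors accumulated over past episodes. A minimal sketch with toy features:

```python
import numpy as np

def gram_matrix(phis, lam):
    """Lambda = sum_tau phi_tau phi_tau^T + lam * I: the regularized
    Gram matrix of the feature vectors phi(x, a) seen so far.
    phis has one feature vector per row."""
    d = phis.shape[1]
    return phis.T @ phis + lam * np.eye(d)

phis = np.array([[1.0, 0.0], [0.0, 2.0]])   # toy feature vectors
Lam = gram_matrix(phis, lam=1.0)
```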
Here $\beta>0$ scales with $d$, $H$, and $K$, which is specified in Theorem 3.1.
with high probability, which is subsequently characterized in Lemma 4.3. Here the inequality holds uniformly for any $(h,k)\in[H]\times[K]$ and $(x,a)\in{\mathcal{S}}\times\mathcal{A}$ ...
We establish an upper bound of the regret of OPPO (Algorithm 1) in the following theorem. Recall that the regret is defined in (2.1) and $T=HK$ is the total number of steps taken by the agent, where $H$ is the length of each episode and $K$ is the total number of e...
in the order of $h=H,H-1,\ldots,1$. Here $\lambda>0$ is the regularization parameter, which is specified in Theorem 3.1. Also, $\Gamma^{k}_{h}:{\mathcal{S}}\times\mathcal{A}\rightarrow\mathbb{R}^{+}$ ...
A
This results in similar activation statistics throughout the network which facilitates gradient flow during backpropagation.
Since each batch normalization parameter $\gamma$ corresponds to a particular channel in the network, this results in channel pruning with minimal changes to existing training pipelines.
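A sketch of the selection step implied above: rank channels by the magnitude of their batch-norm scale $\gamma$ and keep the largest ones. The keep ratio is illustrative; the excerpt does not fix a pruning criterion or threshold.

```python
import numpy as np

def channels_to_keep(gamma, keep_ratio=0.5):
    """Channel pruning via batch-norm scales: keep the channels whose
    |gamma| is largest, prune the rest (keep_ratio is illustrative)."""
    gamma = np.asarray(gamma, dtype=float)
    k = max(1, int(round(keep_ratio * len(gamma))))
    order = np.argsort(-np.abs(gamma))        # largest |gamma| first
    return np.sort(order[:k])

keep = channels_to_keep([0.01, 0.8, -0.5, 0.02])
```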
Quantization approaches reduce the number of bits used to store the weights and the activations of DNNs.
Quantization in DNNs is concerned with reducing the number of bits used for the representation of the weights and the activations.
The linear transformation of the normalized activations with the parameters $\beta$ and $\gamma$ is mainly used to recover the DNN's ability to approximate any desired function—a feature that would be lost if only the normalization step were performed.
D
The analysis of density, however, is one example of an inherent characteristic of t-SNE, since it comes directly from its algorithm. A limitation that arises from building a tool that is tuned to tackle problems concerning a particular algorithm is the possibility of the algorithm becoming obsolete or being replaced by...
Although our proposed solution is inspired by the work of Wattenberg et al. [14] and touches on most of the points raised by the authors, not all of them are fully covered by t-viSNE. More specifically, t-viSNE addresses points (ii), (iii), (v), and (vi) described previously, partially covers (i), and leaves point (iv)...
Even in the improbable scenario that t-SNE becomes obsolete soon, the fact that most of our proposed views can be re-used or adapted to different DR methods means that our work is still relevant and largely future-proof.
Most of the related works described in Section 2 deal with the problem of assessing and interpreting DR in general, and aim to be applicable to a wide range of different scenarios, providing solutions that overlook the specific shortcomings of each DR method. While this approach has its merits, a gap remains regarding ...
Although our main design goal was to support the investigation of t-SNE projections, most of our views and interaction techniques are not strictly confined to the t-SNE algorithm. For example, the Dimension Correlation view could, in theory, be applied to any projection generated by any other algorithm. Its motivation,...
B
On text datasets (Text and 20news), most graph-based methods get a trivial result, as they group all samples into the same cluster such that NMIs approximate 0. Only $k$-means, MGAE, and AdaGAE obtain non-trivial assignments.
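Why a trivial all-in-one-cluster assignment scores NMI $\approx$ 0 can be seen directly from the definition: a single-cluster partition has zero entropy, hence zero mutual information with the ground truth. A self-contained sketch (sqrt normalization is one common convention; the excerpt does not say which variant is used):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * np.log(c / n) for c in Counter(labels).values())

def nmi(a, b):
    """Normalized mutual information with sqrt normalization.
    Returns 0.0 when either assignment is trivial (one cluster)."""
    n = len(a)
    h_a, h_b = entropy(a), entropy(b)
    if h_a == 0 or h_b == 0:
        return 0.0
    mi = 0.0
    for (x, y), c in Counter(zip(a, b)).items():
        p_xy = c / n
        p_x = sum(1 for v in a if v == x) / n
        p_y = sum(1 for v in b if v == y) / n
        mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi / np.sqrt(h_a * h_b)

# Grouping every sample into one cluster yields NMI = 0.
trivial = nmi([0, 0, 0, 0], [0, 1, 0, 1])
```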
(3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while the other deep clustering models usually fail when the training set is not large enough. Besides, it is insensitive to different initializations of parameters and needs no pretraining.
Classical clustering models work poorly on large-scale datasets. Instead, DEC and SpectralNet work better on large-scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph-type datasets, they fail on the general datasets, which is probably caused by the fact that the graph...
From the comparison of 3 extra experiments, we confirm that the adaptive graph update plays a positive role. Besides, the novel architecture with a weighted graph improves the performance on most datasets.
Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which $k$-means cannot handle. Therefore, they are widely used in practice. Due to the success of deep learning, how to combine neural networks and traditional clustering models has ...
B
=:T1+T2+T3.\displaystyle=:T_{1}+T_{2}+T_{3}.= : italic_T start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT + italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT + italic_T start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT .
$\mathbb{E}[\tilde{G}_{1}^{q}]^{\frac{1}{q}}$
For any $\tilde{q}\leq q/2$ in Assumption A.2, it holds
For $\tilde{t}=2$ and $\tilde{s}=O(1)$, we have to ensure that
$\sup_{Q}\log N(\varepsilon\|F_{0}\|_{Q,2},\mathcal{F}_{0},\|\cdot\|_{Q,2})$ ...
B
Models’ Space. For the visual exploration of the models shown in Figure 5, we use MDS projections (t-SNE or UMAP are also available).
A summary of the performance of each model according to all selected and user-weighted metrics is color-encoded using the Viridis colormap [26]. The boxplots below the projection show the performance of the models per metric.
There is a large solution space of different learning methods and concrete models which can be combined in a stack. Hence, the identification and selection of particular algorithms and instantiations over the time of exploration is crucial for the user. One way to manage this is to keep track of the history of each...
As in the data space, each point of the projection is an instance of the data set. However, instead of its original features, the instances are characterized as high-dimensional vectors where each dimension represents the prediction of one model. Thus, since there are currently 174 models in ...
Each point is one model from the stack, projected from an 8-dimensional space where each dimension of each model is the value of a user-weighted metric. Thus, groups of points represent clusters of models that perform similarly according to all the metrics.
D
However, $\mathcal{T}^{\pi}Q$ may not be representable by a given function class $\mathcal{F}$.
Hence, we turn to minimizing a surrogate of the MSBE over $Q\in\mathcal{F}$, namely the mean-squared projected Bellman error (MSPBE), which is defined as follows,
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represen...
Here $\mathcal{T}^{*}$ is the Bellman optimality operator, which is defined as follows,
We learn the Q-function by minimizing the mean-squared Bellman error (MSBE), which is defined as follows,
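For a tabular toy problem, the MSBE can be written out directly from its definition; a sketch, assuming a uniform weighting over state-action pairs (the excerpt does not specify the weighting distribution):

```python
import numpy as np

def msbe(Q, R, P, pi, gamma=0.9):
    """Mean-squared Bellman error for a tabular Q (states x actions):
    average of (Q(s,a) - [r(s,a) + gamma * E_{s'}[Q(s', pi(s'))]])^2,
    weighted uniformly here for illustration."""
    S, A = Q.shape
    err = 0.0
    for s in range(S):
        for a in range(A):
            target = R[s, a] + gamma * sum(
                P[s, a, s2] * Q[s2, pi[s2]] for s2 in range(S))
            err += (Q[s, a] - target) ** 2
    return err / (S * A)

# One-state, one-action chain with reward 1: the fixed point of the
# Bellman equation Q = 1 + 0.9 * Q is Q = 10, where the MSBE vanishes.
zero_err = msbe(np.array([[10.0]]), np.array([[1.0]]), np.ones((1, 1, 1)), [0])
```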
A
These interesting findings were rendered possible by an original extension of the current state of the art in the GSA literature, namely in the direction of defining sensitivity indices for complex data, and the statistical assessment of uncertainty on GSA indices. We prove the mathematical properties of such method, a...
A fundamental tool to understand and explore the complex dynamics that regulates this phenomenon is the use of computer models. In particular, the scientific community has oriented itself towards the use of coupled climate-energy-economy models, also known as Integrated Assessment Models (IAM). These are pieces of soft...
The testing effort provides even more interesting results, showing differences between the two contrasts analyzed in this paper, and, in general, defining a sparsity in effects: the only significant factors in determining $CO_{2}$ emissions seem t...
Apart from generating a significant simplification in estimation, restricting ourselves to discrete variations allows us, from a modelling perspective, to deal with scenarios that we want to explore, which may be represented by a multitude of different modelling choices and settings in a model, such as the different Sha...
Our findings provide a very strong signal to the climate-energy-economy modeling community that either the Shared Socio-economic Pathways are too refined to be actually significant inside a representative ensemble of models, or that, while preserving their own individuality and peculiarities in the modelling approach, ...
D
We impose two assumptions, respectively, on the smoothness of the loading functions and on tail behavior of the noise.
Given the identification condition (Assumption 2), we start with estimating the non-parametric component $\mathbf{G}_{m}(\mathbf{X}_{m})$.
The logarithmic factors in Lemma 2 emerge from the sub-exponential tail of the noise distribution, which has never been studied in the existing literature.
The smoothness assumption is standard in the non-parametric literature, while the tail condition is weaker than what is usually assumed in the tensor decomposition literature.
We impose two assumptions, respectively, on the smoothness of the loading functions and on tail behavior of the noise.
C
When $\beta=0$, SNGM will degenerate to stochastic normalized gradient descent (SNGD) [9, 39].
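The SNGD update referred to above divides the stochastic gradient by its norm, so only the gradient's direction is used and each step has a fixed length. A minimal sketch (the small epsilon guarding against division by zero is an implementation detail of this sketch):

```python
import numpy as np

def sngd_step(w, grad, lr=0.1, eps=1e-12):
    """Stochastic normalized gradient descent step:
    w <- w - lr * g / ||g||. With momentum beta = 0 this is the update
    SNGM reduces to."""
    g = np.asarray(grad, dtype=float)
    return w - lr * g / (np.linalg.norm(g) + eps)

# The step length equals lr regardless of the gradient's magnitude.
w = sngd_step(np.array([1.0, 1.0]), np.array([3.0, 4.0]))
```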
LARS also adopts the normalized gradient for large-batch training. Following the analysis in [34], we set $\beta=0$.\footnote{We find that there are two different versions of LARS. The first one [32]...}
In the following content, we will compare SNGM with MSGD and LARS [34], the two most related works in the literature on large-batch training.
We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD.
Figure 3 shows the validation perplexity of the three methods with a small batch size of 20 and a large batch size of 2000. In small-batch training, SNGM and LARS achieve validation perplexity comparable to that of MSGD. Meanwhile, in large-batch training, SNGM achieves better performance than MSGD and LARS.
B
$LG(\tilde{\nu},\bm{\theta},\tilde{\mathbf{B}}_{D+1})=L(\tilde{\nu},\bm{\theta},\tilde{\mathbf{B}}_{D+1})+G(\bm{\theta})$
Before proposing the algorithm, we first rearrange $\langle\cdot,\cdot\rangle$ in $LG$ by using the Khatri-Rao product and mode-$d$ matricization.
The Khatri-Rao product is defined as a columnwise Kronecker product for two matrices with the same number of columns (Smilde et al., 2005). More precisely, letting $\mathbf{B}=(\mathbf{b}_1,\dots,\mathbf{b}_L)\in\mathbb{R}^{I\times L}$ ...
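The columnwise Kronecker product just defined can be sketched in a few lines (an illustration of the definition, not the paper's implementation):

```python
import numpy as np

def khatri_rao(B, C):
    """Khatri-Rao product of B (I x L) and C (J x L): an (I*J x L) matrix
    whose l-th column is the Kronecker product of the l-th columns."""
    assert B.shape[1] == C.shape[1], "both matrices need the same number of columns"
    return np.stack([np.kron(B[:, l], C[:, l]) for l in range(B.shape[1])], axis=1)

B = np.arange(6.0).reshape(3, 2)   # I = 3, L = 2
C = np.arange(8.0).reshape(4, 2)   # J = 4, L = 2
P = khatri_rao(B, C)               # shape (12, 2)
```

SciPy also ships this operation as `scipy.linalg.khatri_rao`; the loop above just makes the columnwise structure explicit.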
where $\nu\in\mathbb{R}$, $\bm{\gamma}\in\mathbb{R}^{p_0}$, and $\mathbf{B}\in\mathbb{R}^{p_1\times p_2\times\cdots\times p_D}$ ...
$\tilde{\Phi}(\mathbf{X})_{(d)}$ is the mode-$d$ matricization of $\tilde{\Phi}(\mathbf{X})$.
A
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into the LSVI-UCB algorithm (Jin et al., 2020...
A number of future directions are of interest. An immediate step is to investigate whether the dependence on the dimension $d$ and planning horizon $H$ in our bounds can be improved, and whether the minimax regret lower bound can also be improved. It would also be interesting to investigate the settin...
The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and...
Reinforcement learning (RL) is a core control problem in which an agent sequentially interacts with an unknown environment to maximize its cumulative reward (Sutton & Barto, 2018). RL finds enormous applications in real-time bidding in advertisement auctions (Cai et al., 2017), autonomous driving (Shalev-Shwartz et al....
Motivated by empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhov...
A
A two-sample test is performed to decide whether to accept the null hypothesis $H_0:\ \mu=\nu$ or the general alternative hypothesis $H_1:\ \mu\neq\nu$ ...
Under $H_1$, we set the distribution $\mu$ to be the uniform distribution on $[-1,1]^d$, and $\nu$ to be the Gaussian distribution $\mathcal{N}(0,\sigma^2 I_d)$ ...
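As an illustration of this simulation setup, a generic permutation two-sample test on the same pair of distributions; the statistic below (a gap in mean Euclidean norms) is our own simple choice, not the test statistic studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 5, 200, 1.0
X = rng.uniform(-1, 1, size=(n, d))      # mu: uniform on [-1, 1]^d
Y = rng.normal(0, sigma, size=(n, d))    # nu: N(0, sigma^2 I_d)

def stat(A, B):
    # illustrative statistic: difference in mean Euclidean norms
    return abs(np.linalg.norm(A, axis=1).mean() - np.linalg.norm(B, axis=1).mean())

obs = stat(X, Y)
pooled = np.vstack([X, Y])
perms = []
for _ in range(500):
    idx = rng.permutation(2 * n)                      # relabel under H0
    perms.append(stat(pooled[idx[:n]], pooled[idx[n:]]))
p_value = (1 + sum(p >= obs for p in perms)) / (1 + len(perms))
```

For these two distributions the norms concentrate around quite different values, so the permutation p-value is small and $H_0$ is rejected.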
Similarly, in change-point detection [4, 5, 6], the post-change observations follow a different distribution from the pre-change one.
Given the function space $\mathcal{F}$ and a distribution $\mu$, define the Rademacher complexity as
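For intuition, the empirical Rademacher complexity $\mathbb{E}_\sigma \sup_{f\in\mathcal{F}} \frac{1}{n}\sum_i \sigma_i f(x_i)$ can be estimated by Monte Carlo when $\mathcal{F}$ is a small finite class (a toy class of our own; the paper's $\mathcal{F}$ is a general function space):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(-1, 1, size=n)                              # sample from mu
F = [np.sin, np.cos, np.tanh, lambda t: t, lambda t: t**2]  # toy finite class

vals = np.stack([f(x) for f in F])          # |F| x n matrix of f(x_i)
draws = []
for _ in range(2000):
    sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
    draws.append(np.max(vals @ sigma) / n)   # sup_f (1/n) sum_i sigma_i f(x_i)
rad = float(np.mean(draws))
```

Since every $f$ here is bounded by 1, the estimate scales roughly like $\sqrt{\log|\mathcal{F}|/n}$, matching the usual finite-class bound.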
For instance, in anomaly detection [1, 2, 3], the abnormal observations follow a different distribution from the typical distribution.
D
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$...
In this paper we propose a principled framework, DS-VAE, for correctly realizing the data generation hypothesis while avoiding the disentangled representation vs. reconstruction trade-off.
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
We introduce the DS-VAE framework for learning DR without compromising on the reconstruction quality. DS-VAE can be seamlessly applied to existing DGM-based DR learning methods, therefore, allowing them to learn a complete representation of the data.
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above mentioned VAEs (Footnote 1: In this exposition we use unsupervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervised...
C
For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that, given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu et al., 2012).
An example of the trade-off between sparsity and interpretability of the set of selected views occurs when different views, or combinations of views, contain the same information. If the primary concern is sparsity, a researcher may be satisfied with just one of these combinations being selected, preferably the smalles...
Another relevant factor is interpretability of the set of selected views. Although sparser models are typically considered more interpretable, a researcher may be interested in interpreting not only the model and its coefficients, but also the set of selected views. For example, one may wish to make decisions on which ...
Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expressi...
We apply multi-view stacking to each simulated training set, using logistic ridge regression as the base-learner. Once we obtain the matrix of cross-validated predictions $\bm{Z}$, we apply the seven different meta-learners. To assess classification performance, we generate a matching test set of 1000 ob...
A
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
choice model for capturing consumer purchase behavior in assortment selection models (see Flores et al. [2019] and Avadhanula [2019]). Recently, large-scale field experiments at Alibaba [Feldman et al., 2018] have demonstrated the efficacy of the MNL model in boosting revenues. Rusmevichientong et al. [2010] and Sauré ...
Our result is still $\mathrm{O}(\sqrt{d})$ away from the minimax lower bound of Chu et al. [2011] known for the linear contextual bandit. In the case of logistic bandits, Li et al. [2017] makes an i.i.d. assumption on the contexts to bridge the gap (however, they ...
where pessimism is the additive inverse of the optimism (the difference between the payoffs under the true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that $\theta_*\in C_t(\delta)$ ...
In summary, our work establishes strong worst-case regret guarantees by carefully accounting for local gradient information and using second-order function approximation for the estimation error.
D
G1: Analysis of predictions and validation metrics for the identification of effective hyperparameters.
The aforementioned works that use genetic algorithms contain mechanisms similar to those in VisEvol, but without VA support for (1) the exploration of the interconnected hyperparameters, and (2) the selection of the proper number of models that should crossover and mutate.
In this paper, we presented VisEvol, a VA tool with the aim to support hyperparameter search through evolutionary optimization. With the utilization of multiple coordinated views, we allow users to generate new hyperparameter sets and store the already robust hyperparameters in a majority-voting ensemble. Exploring the...
The study of the impact of particular hyperparameters is considered as a future direction for VisEvol. Also, E3 stated that we could allow the user to specify the hyperparameter ranges at every stage and test alternative mutation strategies [CK05]. E1 expressed his interest in checking combinations of evolutionary opti...
We aim to support the exploration of algorithms and models with various hyperparameters (R1) as follows:
D
Mixed-$\mathrm{SLIM}_{\tau\mathrm{appro}}$
In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from
In this paper, we extend the symmetric Laplacian inverse matrix (SLIM) method (SLIM) to mixed membership networks and call the proposed method Mixed-SLIM. As mentioned in SLIM, the idea of using the symmetric Laplacian inverse matrix to measure the closeness of nodes comes from the first hitting time in a random...
Table 2 records the error rates on the four real-world networks. The numerical results suggest that Mixed-SLIM methods enjoy satisfactory performance compared with SCORE, SLIM, OCCAM, Mixed-SCORE, and GeoNMF when detecting the four empirical datasets. In particular, the number error for Mixed-SLIM on the Polblogs network...
We report the averaged mixed Hamming error rates for our methods and the other three competitors in Table 4. Mixed-$\mathrm{SLIM}_{\tau\mathrm{appro}}$ outperforms the other three Mixed-SL...
C
That is, when the target functions under Assumption 4.7 belong to a smoother RKHS class, variational transport attains a smaller statistical error.
we first solve the inner variational problem associated with the objective functional using the particles.
We study the distributional optimization problem where the objective functional admits a variational form.
Our Contribution. Our contribution is two fold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation.
The following assumption characterizes the regularity of the solution to the inner optimization problem associated with the variational representation of the objective functional.
B
The name of the data column containing observation times is supplied to the times argument; the name of the column containing the unit names is supplied to the units argument.
The prediction step advances the Monte Carlo ensemble to the next observation time by using simulations from the postulated model
The t0 argument supplies the initial time from which the dynamic system is modeled, which should be no greater than the first observation time.
The neighborhood is supplied via the nbhd argument to abf as a function which takes a point in space-time, $(u,n)$, and returns a list of points in space-time which correspond to $B_{u,n}$.
The name of the data column containing observation times is supplied to the times argument; the name of the column containing the unit names is supplied to the units argument.
B
Teal encodes the current action's score, and brown the best result reached so far. The colors were chosen deliberately: they complement each other, and the brighter of the two, teal, denotes the current action.
The size of the circle encodes the order of the main actions, with larger radii for recent steps. The brown color is used only if the overall performance increases.
The brown circles in the punchcard in Fig. 1(e) enable us to acknowledge that the feature generation boosted the overall performance of the classifier.
To verify each of our interactions, we continuously monitor the process through the punchcard, as shown in Fig. 6(c). From this visualization, we acknowledge that when F16 was excluded, we reached a better result. The feature generation process (described previously) led to the best predictive result we managed to acco...
(a) presents another transformation of the second most impactful feature (according to Fig. 5(b)). F4_p4///F15///F18_l1p is the most important combination (see the darker green color in (b)). The punchcard visualization in (c) indicates that when we removed F16, the performance increased and that the new feature booste...
B
These techniques impair the ability of the representation learner to encode biases [69, 1, 52, 25]. Like ensembling methods, they also employ a two-branch setup, with the representation encoder in the main branch being penalized if the bias-only branch $f_b()$...
IRMv1 [5] is an efficient approximation of an otherwise computationally expensive bi-level IRM objective. It consists of a regularization constraint on the gradient norm with respect to a fixed scalar $\theta_c=1.0$:
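A sketch of this penalty for a squared loss, with the gradient with respect to the fixed scale $\theta_c=1.0$ written out in closed form (plain numpy stands in for an autodiff framework; the toy data are ours):

```python
import numpy as np

def irmv1_penalty(y_hat, y):
    """IRMv1-style penalty for squared loss: squared gradient of the risk
    w.r.t. a fixed classifier scale theta_c = 1.0.

    With loss(s) = mean((s * y_hat - y)^2), the derivative at s = 1 is
    mean(2 * (y_hat - y) * y_hat), which is computed here analytically.
    """
    grad_s = np.mean(2.0 * (y_hat - y) * y_hat)
    return grad_s ** 2

y = np.array([1.0, -1.0, 1.0, 1.0])
perfect = irmv1_penalty(y, y)         # calibrated predictor: zero penalty
scaled = irmv1_penalty(0.5 * y, y)    # miscalibrated predictor is penalized
```

The penalty vanishes exactly when rescaling the predictor cannot reduce the risk, which is the invariance condition IRMv1 enforces per environment.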
Learning Not to Learn (LNL) [37] uses an adversarial setup derived from minimization of mutual information between representation and bias. In addition to the gradient reversal, the mutual information formulation introduces an entropy regularization on the bias predictions.
It is unknown how well the methods scale up to multiple sources of bias and a large number of groups, even when they are explicitly annotated. To study this, we train the explicit methods with multiple explicit variables for Biased MNISTv1 and individual variables that lead to hundreds and thousands of groups for GQA a...
An interesting observation was that a weaker architecture, a CNN, was able to ignore position bias, whereas a more powerful architecture, CoordConv, resorted to exploiting this bias, resulting in worse performance. While the community has largely focused on training procedures for bias mitigation, an exciting avenue for...
B
It is worth mentioning that generating $\hat{Y}^{(s)}(\mathbf{x})$ incurs a constant cost of $\mathcal{O}(M^3)$ ...
The procedure to generate a sample of the GP posterior is outlined in Algorithm 1. Now, one can generate multiple such GP samples by drawing different realisations $\mathbf{w}^{(s)}$. This idea is used to emulate dynamical simulators where d...
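A generic sketch of drawing one approximate GP prior sample path with random Fourier features for an RBF kernel (the lengthscale, feature count, and input grid are arbitrary choices of ours, not the paper's settings, and the posterior conditioning step of Algorithm 1 is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
M, ell = 500, 0.5                                # feature count, RBF lengthscale

x = np.linspace(0.0, 1.0, 200)[:, None]
W = rng.normal(0.0, 1.0 / ell, size=(M, 1))      # spectral frequencies of the RBF kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=M)
Phi = np.sqrt(2.0 / M) * np.cos(x @ W.T + b)     # random Fourier features, shape (200, M)

w = rng.normal(size=M)                           # one realisation w^{(s)}
f_sample = Phi @ w                               # one approximate GP sample path
```

Drawing further realisations of `w` gives further sample paths, since $\Phi\Phi^\top$ approximates the kernel matrix and $f = \Phi w$ with $w \sim \mathcal{N}(0, I)$ is therefore approximately a GP draw.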
This paper presents a novel data-driven approach for emulating complex dynamical simulators, relying on emulating the numerical flow map over a short period of time. The flow map is a function that maps an initial condition to the solution of the system at a future time $t$. We emulate the numerical flow map of...
This work presents a novel approach for emulating dynamical simulators, where samples from the posterior GP are defined analytically. In order to do this, we approximate the kernel with random Fourier features (RFF), given that there is no known method to draw exact GP samples. The approximate sample paths are then employed to perform one-step ahe...
We proposed a novel data-driven approach for emulating deterministic complex dynamical systems implemented as computer codes. The output of such models is a time series and presents the evolving state of a physical phenomenon over time. Our method is based on emulating the short-time numerical flow map of the system an...
A
While the plug-in procedure displays an analytical solution, which depends on unknown quantities that need to be estimated, the double kernel is performed empirically. Notice also that one may use the maximum likelihood cross-validation method to determine the smoothing parameter; however, this procedure performs very ...
The ideal bandwidth selection for nonparametric testing differs from that for nonparametric estimation because we must balance the test’s size and power rather than the estimator’s bias and variance. There are no methods for calculating the appropriate bandwidth for our test, and it is difficult to formulate a theory t...
For testing a parametric model for conditional mean function against a nonparametric alternative, Horowitz and Spokoiny (2001) proposed an adaptive-rate-optimal rule. Gao and Gijbels (2008) proposed, utilizing the Edgeworth expansion of the asymptotic distribution of the test, to select the bandwidth such that the powe...
To the best of our knowledge, this is the first time that the general $L_1$-norm context for testing independence has appeared in the literature; this gives the main motivation of the present work, responding to the open problems mentioned in Gretton and Györfi (20...
tests are all model-free. Furthermore, no regularity requirements for the densities are necessary to demonstrate the asymptotic normality of our statistic, a desired attribute. We conduct simulations to determine the size and power of the test. We illustrate that the proposed test has superior power characteristics com...
A
We also show that the Away-step Frank-Wolfe [Wolfe, 1970; Lacoste-Julien & Jaggi, 2015] and the Blended Pairwise Conditional Gradient [Tsuji et al., 2022] can use the aforementioned line search to achieve linear rates over polytopes.
We also show that the Away-step Frank-Wolfe [Wolfe, 1970; Lacoste-Julien & Jaggi, 2015] and the Blended Pairwise Conditional Gradient [Tsuji et al., 2022] can use the aforementioned line search to achieve linear rates over polytopes.
We also show improved convergence rates for several variants in various cases of interest and prove that the AFW [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and BPCG Tsuji et al. [2022] algorithms coupled with the backtracking line search of Pedregosa et al. [2020] can achieve linear convergence rates over polytopes wh...
for $\mathcal{X}$, to obtain a linear convergence rate in primal gap over polytopes given in inequality description. The authors in Dvurechensky et al. [2022] also present an
For clarity we want to stress that any linear rate over polytopes has to depend also on the ambient dimension of the polytope; this applies to our linear rates and those in Table 1 established elsewhere (see Diakonikolas et al. [2020]).
D
We prove these theorems via a new notion, pairwise concentration (PC) (Definition 4.2), which captures the extent to which replacing one dataset by another would be “noticeable,” given a particular query-response sequence. This is thus a function of particular differing datasets (instead of worst-case over elements), a...
We measure the harm that past adaptivity causes to a future query by considering the query as evaluated on a posterior data distribution and comparing this with its value on a prior. The prior is the true data distribution, and the posterior is induced by observing the responses to past queries and updating the prior. ...
In order to leverage this more careful analysis of the information encoded in query-response sequences, we rely on a simple new characterization (Lemma 3.5) that
The PC notion allows for more careful analysis of the information encoded by the query-response sequence than differential privacy does.
These results extend to the case where the variance (or variance proxy) of each query $q_i$ is bounded by a unique value $\sigma_i^2$...
C
Construct a model with architecture $\mathcal{A}$, where the parameters are sampled from $p(\theta)$
Output: Predictive distribution $p(Y\,|\,X,\mathcal{D})$
In Bayesian inference one tries to model the distribution of interest by updating a prior estimate using a collection of observed data. The conditional distribution $p(Y\,|\,X,\mathcal{D})$ is inferred from a given parametric model or likelihood...
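Schematically, such a predictive distribution is often approximated by Monte Carlo, averaging the likelihood over posterior draws; below is a toy Gaussian-mean illustration (Eq. (5) itself is not reproduced here, and the posterior draws are assumed to come from some separate inference routine):

```python
import numpy as np

rng = np.random.default_rng(0)

# pretend these are S draws theta_s ~ p(theta | D) from an inference routine
theta_samples = rng.normal(loc=2.0, scale=0.1, size=1000)

def predictive_density(y, theta_samples, sigma=1.0):
    """Monte Carlo estimate of p(y | D) = (1/S) * sum_s p(y | theta_s),
    with a Gaussian likelihood N(y; theta_s, sigma^2)."""
    dens = np.exp(-0.5 * ((y - theta_samples) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(dens.mean())

p_near = predictive_density(2.0, theta_samples)  # near the posterior mean
p_far = predictive_density(6.0, theta_samples)   # deep in the tail
```

Averaging the likelihood over parameter draws, rather than plugging in a point estimate, is what propagates parameter uncertainty into the prediction.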
return $p(Y\,|\,X,\mathcal{D})$
Infer the predictive distribution $p(Y\,|\,X,\mathcal{D})$ using Eq. (5)
D
We focus on the estimates of five primary parameters. (Footnote 5: Both of the inertia terms are significantly negative, indicating a tendency of players to bias their actions toward those that they took in the previous round.) The first parameter, $\theta_1$, captures ...
The coefficients $\theta_3$ and $\theta_4$ are related to generalized reciprocity. The interpretation of generalized reciprocity is that it measures the tendency of players to share more when they...
The other four main coefficients concern the behavioral component of payoffs. The positive sign of $\theta_2$, which is the coefficient on the interaction between contribution cost and the treatment, indicates that having access to this new information makes...
Because there are only three trust questions, the first principal component summarizes most of the information from the trust questionnaire. It places positive weight on the question that involves trust and negative weights on two questions that suggest mistrust. Perhaps surprisingly, this measure of trust is associate...
On the other hand, the second component of reciprocity places positive weight on questions involving positive reciprocity and negative weight on questions involving negative reciprocity or punishment. Individuals who align with this characteristic place much lower weight on the actual cost of contributing, suggesting s...
B
On a larger scale, by exploiting the synergy between Bayesian modeling and formal verification methods, we also advocate for the development and use of explainable algorithms where properties relevant to decision-making are incorporated into the data analytic process flow.
We demonstrate our novel approach with spatio-temporal areal data, where measurements are collected over time at various areal units, and a neighboring matrix allows calculating the distance between the different units. In particular, we consider an urban mobility application, given that urban population density dynami...
In this paper, we propose a Bayesian Machine Learning approach that naturally deals with uncertainty propagation while simultaneously allowing the values of the parameters to be learned from the data. Our proposed approach extends the classical approach to SMC to a Bayesian framework by performing verification and monitori...
In this paper, we propose a framework for predictive model checking and comparison, where in addition to usual approaches, we advocate for the specification of concrete (spatio-temporal) properties that the predictions from a model should satisfy. Given trajectories from the Bayesian predictive distribution, the poster...
Therefore, the proposed approach has a clear potential in the area of sustainable cities and urban mobility, as these applications deal with complex systems, with a multitude of stakeholders and with a pressing need for transparency in the decision-making process. We hope for the illustration in the current paper to op...
D
More modern state-of-the-art methods such as Discriminative Deep Learning (DDL) [11] produce excellent results. This has been quantified in the recent benchmark paper [17].
Figure 4: Performance comparison for imputation of the total charge variable among Kriging/BLUP, KNN-Reg, KNN, GLS and DDL for different training/validation proportions of the data. On the horizontal axis we have the percentage proportion for the training dataset. The vertical axis corresponds to the rMSE, MAPE and mea...
Estimation: The coefficients $\hat{\bm{\theta}}$ of the covariance coefficients
In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from 10%/90% to 90%/10% of the data). The horizontal axis depic...
Unbiased Predictor (BLUP) [27]. We note that we refer to Kriging as both the estimation of the coefficients of the covariance function and BLUP, although we mostly use Kriging/BLUP for clarification
D
It follows that $([\underline{\epsilon}_k,\overline{\epsilon}_k])_{k=1,\ldots,M_\epsilon}$ ...
The first two steps consist of independent intermediate results. Their proofs are given in the Appendix. They are put together in the third and last step of the proof.
Step (iii): End of the proof. Define the process, for any function $f$ and $u\in[1/2,3/2]$,
Another way to obtain (1) is given in the next proposition. It requires the existence of a dominating measure for which a standard bracketing entropy condition is satisfied. The proof of the next proposition is deferred to the end of the Appendix, Section 10.
The weak convergence property of the $k$-NN process is obtained under the following metric entropy condition. For any $u>0$, define the probability measure
B
$\hat{\bm{a}}_{ik}^{\mathrm{cOALS}}=\hat{\bm{a}}_{ik}^{(m)},\quad i=1,\dots,r,\ \ k=1,\dots,K.$
The rest of the paper is organized as follows. After a brief introduction of the basic notations and preliminaries of tensor analysis in Section 1.1, we introduce a tensor factor model with CP low-rank structure in Section 2. The estimation procedures of the factors and the loading vectors are presented in Section 3. S...
In this section, we compare the empirical performance of different procedures of estimating the loading vectors of TFM-cp, under various simulation setups. We consider the cPCA initialization (Algorithm 1) alone, the iterative procedure HOPE, and the intermediate output from the iterative procedure when the number of i...
In this section, we focus on the estimation of the factors and loading vectors of model (1). The proposed procedure includes two steps: an initialization step using a new composite PCA (cPCA) procedure, presented in Algorithm 1, and an iterative refinement step using a new iterative simultaneous orthogonalization (ISO)...
In addition, ALS and cALS are always the worst in the cases $\delta\geq 0.1$. The hybrid methods cALS and cOALS improve the original randomly initialized ALS and OALS significantly, showing the advantages of the cPCA initialization. It is worth noting that cOALS has comparable performance wit...
B
CB estimator as $B\to\infty$ and $\alpha\to 0$, and prove that under the
$\alpha$, the CB estimator is unbiased for $\mathrm{Risk}_\alpha(g)$.
original risk $\mathrm{Risk}(g)$. For any estimator $\hat{R}(g)$ of
CB estimator when it is viewed as an estimator of $\mathrm{Risk}(g)$, the original
estimator is unbiased for $\mathrm{Risk}_\alpha(g)$, the risk of the given function $g$,
C
(2) discretize the two distributions of each feature into bins based on the Local Feature Ranking - Bins value set by the user (default is 10);
(1) break each feature into two disjoint distributions: the values inside the selected group vs. all the rest of the points;
Examining the Global Contribution of Features. After this new selection of models, Amy observes in Figure 1(b) that most features (except for the last two) are more important now than in the initial state. Ins_perc and Val_sa_st importances drop only by 0.01, implying these features are stable. She suggests Joe to keep...
(3) compute the cross-entropy (Mannor et al., 2005) between the two distributions of each feature: higher values of cross-entropy suggest more unique features (i.e., the within-selection distribution is very different from the rest), while lower values suggest more common, shared features; and
The order of the features is initially the global one, as described in Section Global Feature Ranking. When a group of points is selected using the lasso tool in the decisions space (DS) view, a contrastive analysis (Zou et al., 2013) is used to rank the features and highlight unique features that explain a cluster's ...
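The contrastive ranking just described (split into in-selection vs. rest, bin, compare by cross-entropy) can be sketched as follows; the 10-bin default matches the text, while the smoothing constant `eps` is our addition to avoid taking the log of an empty bin:

```python
import numpy as np

def contrastive_score(values, selected_mask, n_bins=10, eps=1e-9):
    """Cross-entropy between a feature's distribution inside a selected
    group of points and its distribution over the remaining points."""
    edges = np.histogram_bin_edges(values, bins=n_bins)
    p, _ = np.histogram(values[selected_mask], bins=edges)   # within selection
    q, _ = np.histogram(values[~selected_mask], bins=edges)  # all the rest
    p = p / max(p.sum(), 1) + eps
    q = q / max(q.sum(), 1) + eps
    return float(-np.sum(p * np.log(q)))  # higher => more unique feature

rng = np.random.default_rng(0)
n = 1000
mask = np.zeros(n, dtype=bool)
mask[:200] = True
unique = np.where(mask, rng.normal(5, 1, n), rng.normal(0, 1, n))  # shifts in-group
shared = rng.normal(0, 1, n)                                       # same everywhere
```

A feature whose in-selection distribution differs from the rest (`unique`) scores much higher than one that is identical everywhere (`shared`), so sorting by this score surfaces the features that explain the selection.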
C
Note that for $n\ll s$, which is the setting of our application, the matrix $A$ is sparse. Therefore, the minimization problem (21) can be efficiently solved by conjugate gradients, or its variations, e.g. LSQR \parencite{paige1982algorithm}, without requiring the explicit c...
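A minimal matrix-free sketch of conjugate gradients on the normal equations $A^\top A x = A^\top b$ (CGLS), the kind of solver mentioned here; LSQR is the better-conditioned choice in practice, and the small dense random $A$ below merely stands in for the sparse matrix of the application:

```python
import numpy as np

def cgls(A, b, n_iter=50):
    """Conjugate gradients on the normal equations A^T A x = A^T b,
    using only matvec products with A and A^T (so A may be sparse)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r                      # negative gradient of 0.5 * ||Ax - b||^2
    p, gamma = s.copy(), s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10))       # tall least-squares problem
b = rng.normal(size=100)
x = cgls(A, b)
```

Only the products `A @ p` and `A.T @ r` touch `A`, which is why such solvers avoid forming $A^\top A$ explicitly; SciPy's `scipy.sparse.linalg.lsqr` offers the same interface with better numerical behavior.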
As a result, the coefficients of the approximate solution of the model in equation (15) are given by
An approximate solution to the univariate model in equation (10) follows as a special case of the multivariate case considered here.
If the covariance structures of the two classes are believed to be different, the proposed functional linear discriminant model can be generalized to an approximate functional quadratic discriminant model, following the approach proposed by \textcite{gaynanova2019sparse}, as follows. We estimate the discriminant rule by ...
Similar to the univariate case, we assume that the population quantity $\bm{\beta}^0\in\mathcal{H}$ is well-defined and satisfies the equation
B
(3) We provide fuzzy weighted modularity to evaluate the quality of mixed membership community detection for overlapping weighted networks. We then provide a method to determine the number of communities for overlapping weighted networks by increasing the number of communities until the fuzzy weighted modularity does n...
In this paper, we have proposed a general, flexible, and identifiable mixed membership distribution-free (MMDF) model to capture community structures of overlapping weighted networks. An efficient spectral algorithm, DFSP, was used to conduct mixed membership community detection and shown to be consistent under mild co...
(4) We conduct extensive experiments to illustrate the advantages of MMDF and fuzzy weighted modularity.
This section conducts extensive experiments to demonstrate that DFSP is effective for mixed membership community detection and that our fuzzy weighted modularity can estimate the number of communities for mixed membership weighted networks generated from our MMDF model. We conducted all experiments on a s...
B
Another extension of this work might obtain theoretical guarantees about the identifiability of the CCA parameters in the submodel of our model for semiparametric CCA where the multivariate marginals P₁, P₂ ...
Fig. 4: Results of simulation study for p₁ = p₂ = 3. Sum of squares error for three simulation scenarios and four estimation methods: traditional CCA (CCA), Gaussian copula-based CCA ...
In the first part of Section 2 of this article, we describe a CCA parameterization of the multivariate normal model for variable sets, which separates the parameters describing between-set dependence from those determining the multivariate marginal distributions of the variable sets. We then introduce our model for sem...
Code to reproduce the figures and tables in this article, as well as software for inference with the semiparametric CCA model are available at https://github.com/j-g-b/cmcca.
Fig. 1: Sum of squares error for three simulation scenarios and four estimation methods: traditional CCA (CCA), Gaussian copula-based CCA (GCCCA), and our methods for semiparametric CCA using the pseudolikelihood strategy of Section 3.1 (CMCCA plugin) and the algorithm of Section 3.2 (CMCCA MCMC). (a) Estimation improv...
C
P h_ϑ = α ∫₀^∞ e^(−rt) ℙ(X_t ≥ ϑ, t ≤ τ^ϑ) dt.
where f : Θ → ℝ and ϑ is the parameter that controls the ‘loss’; see Feng [4] and Feng and Shimizu [5]. Hereafter, ϑ₀ given by (1.2) is an optimal parameter fo...
is interpreted as the aggregate dividends paid up to ruin x_t < ξ or maturity g(ϑ) depending on the parameter ϑ, where the dividend α is paid when t...
In the dividends problem, we can consider a case where U_ϑ in (5.2) is of the form
In this quantity, the probability of paying the dividends is small when ϑ is large, although large dividends are paid, and vice versa. Therefore, the expectation can be optimized to a suitable level.
D
3: Run Algorithm 1 (Pre-processing) to obtain the subset ℳ which achieves the maximal SNR.
𝔼 S₁₁′(u)
In Algorithm 2, we first randomly partition the vertex set V into two disjoint subsets Z and Y by assigning +1 and −1 to each vertex independently with equal probability. Let 𝐁 ∈ ℝ^(|Z|×|Y|) ...
1: Randomly label vertices in Y with +1 and −1 signs with equal probability, and partition Y into 2 disjoint subsets Y₁ and Y₂.
5: Randomly partition V into 2 disjoint subsets Y and Z by assigning +1 or −1 to each vertex with equal probability.
D
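The random ±1 vertex partition used in the algorithm steps above can be sketched as follows (illustrative helper, not the paper's code; the seed is only for reproducibility):

```python
import random

def random_partition(V, seed=None):
    """Split vertex set V into disjoint Z (+1) and Y (-1) by assigning
    each vertex an independent fair +1/-1 sign."""
    rng = random.Random(seed)
    Z, Y = set(), set()
    for v in V:
        (Z if rng.random() < 0.5 else Y).add(v)
    return Z, Y

V = set(range(10))
Z, Y = random_partition(V, seed=42)
print(Z.isdisjoint(Y) and Z | Y == V)  # → True
```

The same helper applied to Y again yields the sub-partition Y₁, Y₂ used in the inner step.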
ℙ(q∗(H_p) > q∗(H) − ε/4) > 1 − ε/4,
𝔼[q∗(H_Z) 𝟏_{X=H}] = 𝔼[q∗(H_p)] ℙ(X = H)
𝔼[q∗(H_p)] ≥ (1 − ε/4)(q∗(H) − ε/4) > q∗(H) − ε/2.
ℙ(H_p ∈ A) ≥ ℙ(H_{p₀} ∈ A) > 1 − ε/2.
C
Moreover, our overview highlights the role of the interconnectivity of studies in driving some main findings of the environmental migration literature.
This first step provides the most comprehensive sample of economic contributions on the relationship between climatic variations (and natural hazards) and human mobility, in all its different forms. We implement a systematic review aimed at mapping the body of literature and defining the boundaries of our focus. System...
The paper also offers an encompassing methodology for the empirical analysis of very heterogeneous outcomes of a research field. The sample collected through a systematic review of the literature, the bibliometric analysis, the construction of a co-citation network and the community detection on the structure of the ne...
The PRISMA flow diagram (Moher et al., 2009) in Figure 1 shows the process of identification, screening, eligibility, and inclusion of contributions in the final sample. It is important to note that there are two levels of inclusion: the first level identifies the sample of contributions included in our network anal...
Section 2 offers a systematic review of the literature and gives a detailed description of the data collection process; Section 3 analyses the structural characteristics of the network of the bibliographically coupled papers; Section 4 summarizes and discusses the results of the MA; finally, Section 5 concludes and offe...
D
‖α̂‖₀ denotes the number of nonzero elements in
to z_j, for j = 1, …, d. A solution α̂ in
α̂ (the number of active basis functions), because evaluating
takes O(‖α̂‖₀ (k+1)^d) operations, where
C
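The point of the O(‖α̂‖₀ (k+1)^d) cost above is that only active (nonzero) coefficients contribute to the fitted function, so evaluation skips the rest. A toy sketch with a hypothetical monomial basis standing in for the tensor-product basis functions:

```python
import numpy as np

def evaluate_sparse(alpha, basis, z):
    """Evaluate f(z) = sum_j alpha_j * B_j(z), skipping zero coefficients.

    The cost is proportional to ||alpha||_0 (the number of active basis
    functions) rather than to the size of the full basis."""
    active = np.flatnonzero(alpha)
    return sum(alpha[j] * basis[j](z) for j in active)

# Toy monomial 'basis' B_j(z) = z**j with only two active coefficients.
basis = [lambda z, p=p: z ** p for p in range(6)]
alpha = np.array([0.0, 2.0, 0.0, 0.0, 1.0, 0.0])
print(evaluate_sparse(alpha, basis, 2.0))  # 2*2 + 2**4 = 20.0
```

Here `np.flatnonzero(alpha)` is exactly the support whose size is ‖α̂‖₀.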
Karwa and Slavković (2016) derived the differentially private estimators of parameters in the β-model,
Yan (2021) developed differentially private inferences in the p₀ model for directed networks with a bi-degree sequence.
In this paper, we aim to establish a unified asymptotic theoretical framework for differentially private analysis in a class of directed networks.
Fan (2023) established the unified theoretical framework for directed graphs with bi-degree sequence.
We have established the asymptotic theory in a class of directed random graph models parameterized by the differentially private bi-sequence and illustrated an application to the Probit model. The result shows that statistical inference can be made using the noisy bi-sequence. We assume that the edges are mutually independ...
A
We have introduced Bayesian hierarchical models based on DAG constructions of latent spatial processes for large scale non-Gaussian multivariate multi-type data which may be misaligned, along with computational tools for adaptive posterior sampling. We illustrated our methods using applications with data sizes in the t...
We have applied our methodologies using practical cross-covariance choices such as models of coregionalization built on independent stationary covariances. However, nonstationary models are desirable in many applied settings. Recent work (Jin et al., 2021) highlights that DAG choice must be made carefully when consider...
Furthermore, our methods can be applied for posterior sampling of Bayesian hierarchies based on more complex conditional independence models of multivariate dependence (Dey et al., 2021).
Our work in this article will enable new research into nonstationary models of large scale non-Gaussian data.
Our methodologies rely on the ability to embed the assumed spatial DAG within the larger Bayesian hierarchy and lead to drastic reductions in wall clock time compared to models based on unrestricted GPs. Nevertheless, high posterior correlations of high dimensional model parameters may still negatively impact overall s...
A
N∗ ⇄ N^y, N∗ ← N^y ...
In this section we combine and further generalise the previous results. We wish to draw inference on the effect of a hypothetical intervention on a treatment or exposure process N^x on one or more outcome processes 𝒩₀ ...
A graph G = (𝒱, ℰ) is given by a set of vertices (or nodes) 𝒱 and directed edges ℰ; the nodes represent variables or processes; there can be up to two edges between nodes representing dynamic r...
However, in general it does not hold that the latent projection over eliminable nodes corresponds to the induced subgraph on the remaining nodes, as bi-directed edges could occur between nodes within 𝒩₀. Latent projections of causal graphs ...
Our result on eliminability is related to the marginalisation considered by Mogensen and Hansen (2020). The authors propose an extended class of local independence graphs, and corresponding μ-separation, which is closed under marginalisation. These more general graphs include bi-directed edges as a possibl...
C
L₁(𝑯^r_{∖3}(t−1), t) > L₂(𝑯^r_{∖3}(t−1), t) ...
A challenge in analyzing a sequential decision-making algorithm is its flexibility. The decision I(t) at round t depends on the results of the Bellman equation, which is difficult to compute exactly. Accordingly, we have introduced a quantity called the EBI, which represents the...
Figure 2 compares the regret in SR and ABO. Unlike the successive rejects algorithm, the regret of ABO remains large, even for large T, suggesting that the simple regret of the Bayes optimal algorithm is polynomial in T⁻¹ unlike SR tha...
We conducted a computer simulation to observe the polynomial rate of the simple regret. The code that replicates the results is available at https://github.com/jkomiyama/bayesoptimalalg/.
We show that the ABO algorithm has simple regret polynomial in T⁻¹.
C
The RELAX [20] estimator generalizes REBAR by noticing that their continuous relaxation can be replaced with a free-form CV.
We then apply it to generalize the linear CVs in Double CV to very flexible ones such as neural networks.
Although RELAX was often observed to have very strong performance in prior work [14, 60], our results in Figure 1 suggest that, for dynamically binarized datasets, much larger gains can be achieved by using the same number of function evaluations in other estimators.
However, in order to get strong performance, RELAX still includes the continuous relaxation in their CV and only adds a small deviation to it.
D
(T̂ₙ)_{n≥1} converges in probability to θ∗ ∈ (−α∗, ∞) ...
where Z is a r.v. related to S_{α₀,θ₀} via the relation S_{α₀,θ₀} = exp{ψ(Z/α₀ + 1) − α₀ ψ(Z + 1)} ...
Let α ∈ (0, 1) be arbitrary. Recall that ψ is the derivative of
We conclude with a representation of the EPSF in terms of compound Poisson sampling models [Charalambides, 2005, Chapter 7], thus providing an intuitive construction of the EPSF that sheds light on the sampling structure of the PYP prior. We consider a population of individuals with a random number K of dis...
Here, ψ is the digamma function; that is, ψ is the derivative of log Γ.
B
The Lorenz dominance ordering is a partial ordering of multivariate distributions. In many cases, α-Lorenz curves may cross. For a complete inequality ordering, we also propose an extension of the classical Gini index to compare inequality in multi-attribute allocations.
To visualize Lorenz dominance, we define an Inverse Lorenz Function at a given vector of resource shares as the fraction of the population that cumulatively holds those shares. It is characterized by the cumulative distribution function of the image of a uniform random vector by the Lorenz map. Hence, it is a cumulativ...
Figure 7. Top: Gini indices for income and for wealth, multivariate Gini index, and Kendall's τ (dashed) for US Income-Wealth across 1989-2022.
Gajdos and Weymark (2005) propose a multivariate Gini coefficient based on aggregation across individuals first, then across dimensions, which removes the effect of dependence across attributes. Decancq and Lugo (2012) propose to aggregate across dimensions first, then across individuals, in order to keep track of corr...
Multi-attribute inequality can vary substantially across population groups, as shown in Maasoumi and Racine (2016) within the information theoretic framework of Maasoumi (1986).
C
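For reference, the classical univariate Gini index that the multivariate proposals above extend can be computed directly from its mean-absolute-difference definition. A small sketch of that standard formulation (the helper name is ours):

```python
import numpy as np

def gini(x):
    """Classical Gini index: mean absolute difference over all ordered
    pairs, normalized by twice the mean."""
    x = np.asarray(x, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :])
    return diffs.mean() / (2 * x.mean())

print(gini([1, 1, 1, 1]))            # 0.0 — perfect equality
print(round(gini([0, 0, 0, 4]), 3))  # 0.75 — one holder owns everything
```

The multivariate indices discussed above differ precisely in whether aggregation happens first across individuals or first across attributes before such a scalar index is taken.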
Following the analytical tasks and the resulting design goals, we have developed HardVis, an interactive web-based VA system that allows users to identify areas where instance hardness occurs and to micromanage sampling algorithms. Section 7.2 contains further implementation details.
(i) explore various projections with alternative distributions of data types, leading to the division of training data into SBRO (cf. Figure 3(b));
G2: Application of undersampling and oversampling in specific data types only, with different parameter settings.
G1: Visual examination of several data types’ distributions and projections to choose a generic ‘number of neighbors’ parameter.
The system consists of 8 interactive visualization panels (Figure 1): (a) data types projections (→ G1) incl. data sets and sampling techniques (→ G2), (b) data overview, (c) data types distribution, (d) data details, (e) data space, (f) predicted probabilities (→ G3 and G4), (g) ...
D
Björkegren et al. (2020) propose a structural model for manipulation and use data from a field experiment to estimate the optimal policy. Frankel and Kartik (2019a) demonstrate that optimal predictors that account for strategic behavior will underweight manipulable data. Munro (2023) studies the optimal unconstrained a...
Our work is also related to strategic classification (Ahmadi et al., 2022; Brückner et al., 2012; Chen et al., 2020; Dalvi et al., 2004; Dong et al., 2018; Hardt et al., 2016; Jagadeesan et al., 2021; Kleinberg and Raghavan, 2020; Levanon and Rosenfeld, 2022) and performative prediction (Miller et al., 2021; Perdomo et...
We describe some of the extensions of our model and learning procedure. First, our model assumes that the decision maker’s policy is fixed over time. Dynamic treatment rules, where the policy is time-varying, would extend this work and would likely require new equilibrium definitions. Second, we consider linear policie...
The goal of maximizing the equilibrium policy value is motivated by prior works that estimate policy effects or treatment effects at equilibrium (Heckman et al., 1998; Munro et al., 2021; Wager and Xu, 2021). Heckman et al. (1998) estimate the effect of a tuition subsidy program on college enrollment by accounting for ...
The problem of estimating the effect of an intervention in a marketplace setting is also relevant to our work. Marketplace interventions can impact the resulting supply-demand equilibrium, introducing interference and complicating estimation of the intervention’s effect (Blake and Coey, 2014; Heckman et al., 1998; Joha...
A
In this section, we present certain desirable properties of the proposed filtration and substantiate our claims in the introduction. In Section 4.1, we discuss how the proposed filtration prolongs persistences of homology classes of high-density regions. Then we discuss, in Section 4.2, the proposed filtration’s scale ...
In this subsection, we illustrate how the proposed filtration prolongs persistences of homology classes of high-density regions with a numerical example, and we formalize the observations from the example with theorems. For the numerical examples in this and subsequent subsections, parameters are summarized in Table 3 ...
The rest of the paper is organized as follows. After reviewing the mathematical background in Section 2, we define the proposed filtration in Section 3 and discuss its properties in Section 4. We discuss bootstrapping in Section 5 and present numerical simulations in Section 6. A discussion and the conclusion are prese...
We illustrate the results above with corrupted versions of the “Antman” example in Figure 5. We compare the DAD filtration and the RDAD filtration in Figures 6 and 7. The persistence diagrams of RDAD for the corrupted datasets are affected to a lesser extent by the noise and outliers than those of DAD.
A
From this survey, we find that most papers do not explicitly discuss their parameter of interest, and that as many as a third of the experiments conduct analyses that, when paired with their corresponding sampling design, do not necessarily recover either of the parameters that we consider in this paper.
For each of the two parameters of interest we consider, we propose an estimator and develop the requisite distributional approximations to permit its use for inference about the parameter of interest when treatment is assigned using a covariate-adaptive stratified randomization procedure. In the case of the equally-wei...
Klar, 2000). In this paper, we consider the problem of inference about the effect of a binary treatment on an outcome of interest in such experiments in a super-population framework in which cluster sizes are permitted to be random and non-ignorable. By non-ignorable cluster sizes, we refer to the possibility that the ...
We refer to this quantity as the equally-weighted cluster-level average treatment effect. θ₁(Q_G) can be thought of as the average treatment effect where the clusters the...
et al., 2023) that differ in the way they aggregate, or average, the treatment effect across units. They differ, in particular, according to whether the units of interest are the clusters themselves or the individuals within the cluster. The first of these parameters takes the clusters themselves as the units of intere...
A
1: Input: number of iterations K ∈ ℕ, confidence level β > 0
and receives the reward 𝒓ₕ = r(𝒐ₕ, 𝒂ₕ). Any map...
π̄ₖ = mixing{π₀, …, π_{k−1}}.
2: Initialization: set π₀ as a deterministic policy
15: Output: policy set {π₁, …, π_K}
C
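The mixing operation over previous policies above can be sketched as a policy that draws one of its components uniformly at random on each call. This is illustrative only; per-episode mixing would instead fix the draw for a whole episode:

```python
import random

def mixing(policies, seed=None):
    """Return a policy that, on each call, picks one of the given
    policies uniformly at random and delegates to it."""
    rng = random.Random(seed)

    def mixed_policy(observation):
        pi = rng.choice(policies)   # uniform draw over {pi_0, ..., pi_{k-1}}
        return pi(observation)

    return mixed_policy

pi0 = lambda obs: 0
pi1 = lambda obs: 1
pi_bar = mixing([pi0, pi1], seed=0)
print(pi_bar("obs") in {0, 1})  # → True
```

Each action of `pi_bar` is thus distributed as the uniform mixture of the component policies' actions.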
Xⁿ = (X₁, …, Xₙ) are i.i.d. ∼ N(θ, 1) ...
This is like the previous example, but rather than always being able to choose one among four actions, the very set of choices that is presented to DM via setting B = b might depend on the data Y or on external situations.
[15] gives various suitable collections, but for simplicity we here stick to a single, simple choice, taken from Example 8 of [15], that, like the standard CI, is
But, assuming our p-value is strict so that it has a uniform distribution under the null, this gives a Type-I risk of
Note that B is allowed to be any function of, hence ‘conditional on’, data; but its performance is evaluated ‘unconditionally’, i.e. by means of (12), which is an unconditional expectation. This quasi-conditional stance, explained further in [15], provides a middle ground between fully Bayesian and traditional...
B
Holmström, 1987). As a form of evidence, betting scores avoid some of the pathologies of significance testing, and by incorporating statistical evidence in an economic contract, we can afford a great deal of flexibility to researchers without ignoring their incentives.
In this work, the agent and the principal will enter into a contract that caps the reward the agent can receive as a function of the statistical evidence for the quality of the product. We explore how different contracts change the incentive landscape of the agent, and develop optimal contracts in this setting.
We begin with a stylized example to highlight the interaction between an agent's incentives and the principal's statistical protocol. Suppose there are two types of pharmaceutical companies: companies with ineffective drugs (θ = 0) and companies with effective drugs (θ = 1). Fu...
We model this interaction as a game between two players. The first player is known as the principal (e.g., a regulator) and the second is called the agent (e.g., a pharmaceutical company).
The agent's profit from the contract is L − C (or zero if they opt out), and we model the agent as seeking to maximize this profit; see below.
C
1 − max[ sup_y {F₁(y + δ/2 ∣ x) − F₀(y − δ/2 ∣ x)}, 0 ].
Case 2: Now consider a uniform guarantee on the bound estimators across x ∈ 𝒳.
The contribution of the present work, relative to the contribution of Fan and Park (2010) who discuss inference for only the randomized experiment setting, is the concentration inequality for the pibt bound estimators. Under regularity conditions, Fan and Park (2010) show asymptotically that the plug-in bound estimat...
Correspondingly, we can obtain the bound estimators by plugging in the cdf estimators in analogy to Fan and Park (2010), who consider only the randomized experiment case:
Fay et al. (2018) also discuss the statistical inference technique of Fan and Park (2010) in conjunction with the quantity 1 − η(δ) in Definition 1.2. Interestingly, it has been established that the Makarov bounds for the marginal cdf of Yᵢ(1) − Yᵢ(0) ...
C
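The plug-in construction above — empirical cdfs substituted into the Makarov-type bound 1 − max[sup_y {F₁(y + δ/2) − F₀(y − δ/2)}, 0] — can be sketched as follows. This is an unconditional version with the sup taken over a grid of sample points; all names are illustrative:

```python
import numpy as np

def ecdf(sample):
    """Return the empirical cdf of a sample as a callable."""
    s = np.sort(np.asarray(sample, dtype=float))
    return lambda y: np.searchsorted(s, y, side="right") / len(s)

def makarov_lower_bound(y1, y0, delta):
    """Plug-in bound 1 - max[sup_y {F1(y+d/2) - F0(y-d/2)}, 0],
    with the sup approximated over shifted pooled sample points."""
    F1, F0 = ecdf(y1), ecdf(y0)
    pooled = np.concatenate([np.asarray(y1, float), np.asarray(y0, float)])
    grid = np.concatenate([pooled - delta / 2, pooled + delta / 2])
    sup = max(float(F1(y + delta / 2) - F0(y - delta / 2)) for y in grid)
    return 1 - max(sup, 0.0)

# Degenerate example: treated outcomes all 10, controls all 0.
print(makarov_lower_bound([10, 10], [0, 0], 1.0))  # → 1.0
```

A finer evaluation grid (or an exact sup over the ecdf jump points) tightens the approximation of the supremum.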
Limitation. A well-trained generator is critical in MEKD, and GANs are known to suffer from mode collapse, especially for challenging tasks.
For the training of teacher and student models, we adopt the same setting of hyperparameters, so as to verify the distillation effect of student models trained with different methods compared with the teacher model trained with vanilla supervised learning under the same conditions.
Although the parameter size and structural limitations of the model prevent the student from fully mimicking the function of the teacher, MEKD can still improve distillation performance compared with other B2KD methods.
The first two aim to derive the student to mimic the responses of the output layer or the feature maps of the hidden layers of the teacher, and the last approach uses the relationships between the teacher’s different layers to guide the training of the student model.
The effect of α is also reported in Tab. 4, which reflects that the utilization of ℒ_{IM} can improve the performance of model distillation.
B