| context (string, 100–2.74k chars) | A (string, 107–1.69k chars) | B (string, 105–1.85k chars) | C (string, 102–2.35k chars) | D (string, 104–2.11k chars) | label (string, 4 classes) |
|---|---|---|---|---|---|
For simplicity, we limit ourselves to ordinary PSIS, although consistency of self-normalized PSIS follows from Slutsky’s theorem | not generally possible. Furthermore, even if the variance would be finite, it is possible that the pre-asymptotic behavior is indistinguishable from the infinite variance case as discussed in Section 3. | the real pre-asymptotic convergence behavior. The $\hat{k}$ diagnostic correctly | Section 4 proves asymptotic consistency and finite variance. In this section we use various large sample results to characterize finite sample behavior of IS, TIS, and PSIS. There is no strict definition of large sample in each case, but the theory is able to explain many empirical results shown by our experiments. | Section 3 discusses pre-asymptotic behavior, and we demonstrated in Section 3.3 that reaching the asymptotic regime can require infeasible | D |
$\beta\sim N(0,\lambda^{-1}(M_{w}^{\top}M_{w})^{-})$ ... | The maximum a posteriori (MAP) estimate $\widehat{\beta}$ for $\beta$ is | ... $Y_{v}\mid\beta\overset{\text{ind}}{\sim}\text{Po}[\exp(\beta(v))]$. | ... $Y_{v}\overset{\text{ind}}{\sim}\text{Po}[\exp(x_{v}^{T}\beta)]$. | $r_{v}=(Y_{v}-\widehat{\mu}_{v})/\sqrt{V(\widehat{\mu}_{v})}$ ... | A |
By using the proposed method, we are able to detect weak signals and reveal clear groupings in the patterns of associations between explanatory variables and responses and apply our method to many applications, such as variable selection, effect sizes estimation, and response prediction. | In Figure 2(a) as $n$ increases, and in Figure 2(b) as $p$ decreases, the ratio of $\frac{p}{n}$ gets smaller and the performance gets better as expected. Compared to Tree-Lasso along with other methods, our method is more robust with big... | Having shown the capacity of TgSLMM in recovering explanatory variables of synthetic data sets, we now demonstrate how TgSLMM can be used in real-world genome data and discover meaningful information. To evaluate the method, we focus on some practical data sets, Arabidopsis thaliana, Heterogeneous Stock Mice and Human ... | Since we have access to a validated gold standard of the Arabidopsis thaliana data set, we compare the alternative algorithms in terms of their ability in recovering explanatory variables with a true association. Figure 5 illustrates the area under the ROC curve for each response variable for Arabidopsis thaliana. By a... | For Heterogeneous Stock Mice data set, ground truth is also available so that we could evaluate the methods based on the area under their ROC Curve as Figure 6. TgSLMM behaves as the best one on 22.2% of the traits and achieves the highest ROC area for the whole data set as 0.627. The second best model is MCP with the ... | B |
However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters, | Figure 2(e) is clear evidence of the SMC-based agents’ ability to recover from linear to no-regret regimes. | The regret loss associated with the uncertainty about $\sigma_{a}^{2}$ is minimal for SMC-based Bayesian agents, | —used by the SMC-based agents to propagate uncertainty to each bandit arm’s expected reward estimates— | However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters, | A |
Figures 1–5 show measurements of blood glucose, carbohydrates and insulin per hour of day for each patient. | Overall, the distributions of all three kinds of values throughout the day roughly correspond to each other. | In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the days. | Patient 10 on the other hand has a surprisingly low median of 0 active 10-minute intervals per day, indicating missing values due to, for instance, not carrying the smartphone at all times. | Figures 1–5 show measurements of blood glucose, carbohydrates and insulin per hour of day for each patient. | A |
Once the MIIVs for each equation are determined, they are used to compute intermediate estimates of the endogenous predictors (via OLS) within the equation (stage 1 of Two Stage Least Squares). Those intermediate estimates are then used to estimate the associations between the endogenous predictors and the dependent va... | Recall the requirement that an instrument must not correlate with the equation error. We term variables that violate this requirement but are still inappropriately used as instruments, invalid instruments. Importantly, invalid instruments in the context of MIIVs arise when the model is misspecified. Although the validi... | However, as was mentioned previously, Sargan’s Test lacks the ability to pinpoint sources of model misspecification beyond the set of MIIVs of a specific equation. The Sargan’s Test assesses if at least one instrument is invalid. Though this is a local (equation) test of overidentification, it does not reveal which of ... | While the MIIV-2SLS approach has several advantages over maximum likelihood estimation when model misspecification is present, there are a number of open questions in the MIIV literature regarding the relationship between model misspecification diagnostics and instrument quality. One consideration is that if the struct... | If all MIIVs in an overidentified equation are valid instruments, then each overidentified coefficient should lead to the same value in the population. Even if this is true, sampling fluctuations can lead to different values. Sargan’s Test of overidentification determines whether these different solutions are within sampl... | C |
The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good pol... | As for the length of rollouts from simulated $env'$, we use $N=50$ by default. We experimentally showed that $N=25$ performs roughly on par, while $N=100$ is sli... | Random starts. Using short rollouts is crucial to mitigate the compounding errors in the model. To ensure exploration, SimPLe starts rollouts from randomly selected states taken from the real data buffer D. Figure 9 compares the baseline with an experiment without random starts and rollouts of length 1000 o... | We will now describe the details of SimPLe, outlined in Algorithm 1. In step 6 we use the proximal policy optimization (PPO) algorithm (Schulman et al., 2017) with $\gamma=0.95$. The algorithm generates rollouts in the simulated environment $env'$ ... | Figure 1: Main loop of SimPLe. 1) the agent starts interacting with the real environment following the latest policy (initialized to random). 2) the collected observations will be used to train (update) the current world model. 3) the agent updates the policy by acting inside the world model. The new policy will be eva... | B |
This paper fits the generalized form of Heterogeneous Lanchester equations to the Battle of Kursk data using the method of Maximum Likelihood estimation and compares the performance of MLE with the techniques studied earlier such as the Sum of squared residuals (SSR), Linear regression and Newton-Raphson iteration. Diff... | The basic idea of using the GRG algorithm is to quickly find optimal parameters that maximize the log-likelihood. The objective is to find the parameters that maximize the log-likelihood or in other words provide the best fit. Given the values in Table 1, we investigate what values of the parameters best fit the data. Altho... | In the next section we have discussed in detail the mathematical formulations of homogeneous and heterogeneous situations. We have seen in Bracken [4], Fricker [16], Clemens [7], Turkes [38], Lucas [28] that the LSE method has been applied for evaluating the parameters for fitting the homogeneous Lanchester equations to ... | For implementing this expression from Table 1 we have taken zero as initial values for all the unknown parameters. Then we start running the GRG algorithm iteratively. The GRG algorithm is available with the Microsoft Office Excel (2007) Solver [15] and MATLAB [30]. The GRG solver uses an iterative numerical method. The der... | First, we applied the technique of Least Squares for estimating the parameters of the heterogeneous Lanchester model. The GRG algorithm [15, 30] is applied for maximizing the MLE and for minimizing the LSE. For implementing the Least Squares approach, the Sum of Squared Residuals (SSR) is minimized. The expression of SSR ... | D |
To the best of our knowledge, this is the first work to introduce global momentum into sparse communication methods. | Since RBGS introduces a larger compressed error compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge when using RBGS as the sparsification compressor. | Furthermore, to enhance the convergence performance when using more aggressive sparsification compressors (e.g., RBGS), we extend GMC to GMC+ by introducing global momentum to the detached error feedback technique. | Due to the larger compressed error introduced by RBGS compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge. Xu and Huang (2022) propose DEF-A to solve the convergence problem by using detached error fee... | In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mor... | B |
It is interesting to note that in some cases SANs reconstructions, such as for the Extrema-Pool indices, performed even better than the original data. | The cost of the description of the data could be seen as proportional to the number of weights and the number of non-zero activations, while the quality of the description is proportional to the reconstruction loss. | This suggests the overwhelming presence of redundant information that resides in the raw pixels of the original data and further indicates that SANs extract the most representative features of the data. | What are the implications of trading off the reconstruction error of the representations with their compression ratio w.r.t. the original data? | As shown in Table II, although we use a significantly reduced representation size, the classification accuracy differences (A$\pm$%) are retained, which suggests that SANs choose the most important features to represent the data. | B |
This assumption is generally mild and aims to preclude degenerate definitions of the test statistic. For example, the assumption holds true in regular settings where $t_{n}(G\varepsilon)$ converges in distribution, as in... | Moreover, under the limit hypothesis, Condition (C1) guarantees the asymptotic validity of the approximate test provided that the studentized test statistic based on the true variables satisfies Hoeffding’s condition. | The key implication of this result is that the approximate randomization test ‘inherits’ the asymptotic properties of the original randomization test as long as | With these three assumptions in place, Condition (C1) is key for the asymptotic performance of the approximate randomization test. | Indeed, Theorem 2 of this paper shows that the rate of convergence of Condition (C1) determines a finite-sample bound between the Type I error rates of $\phi_{n}$ and $\phi_{n}^{*}$ ... | C |
... $\hat{\tau}(c)=c^{\star T}\underbrace{(\widetilde{\mathbf{X}}_{r}^{T}\mathbf{M}\ldots)_{\ell}}_{\hat{\beta}_{\ell}}$. | The impact of the exclusions induced by our optimization problem can be seen most clearly in the right panel. The upper bound on the causal effect is obtained primarily by tagging as manipulators those women for whom the hemoglobin level is 12.5 and who did not attempt to donate again in one year. These women are then ... | where $Z_{r}$ is the vector of treatment indicators to the right of the cutoff and $\hat{\alpha}_{\ell}$ is the fitted coefficient corresponding to the regressi... | Our approach can be extended easily to the case of the fuzzy RDD. In this case, we suppose the estimate of the causal effect is obtained via an instrumental variable approach. The numerator is the difference of the mean treated outcomes just above and just below the cutoff, and the denominator is the difference of the ... | Per the under-bracketed quantities, these estimators separately calculate two coefficient vectors: one from a regression relating outcomes to the running variable below the cutoff, the other above the cutoff. The causal estimate is given by the difference in these two regression predictions at the cutoff $c$. | D |
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in gradient direction estimation of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes be... | In the experiments we detected variance using the standard deviation of the average score collected from many independent learning trials. | We detected the variance between DQN and Dropout-DQN visually and numerically, as Figure 3 and Table I show. | To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We have evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Classi... | Figure 3: Dropout DQN with different Dropout methods in the CARTPOLE environment. The bold lines represent the average scores obtained over 10 independent learning trials, while the shaded areas indicate the range of the standard deviation. | A |
In this task, different graph signals $\mathbf{X}_{i}$, defined on the same adjacency matrix $\mathbf{A}$, must be classified with a label $\mathbf{y}_{i}$. | In particular, we used a Temporal Convolution Network [57] with 7 residual blocks with dilations [1, 2, 4, 8, 16, 32, 64], kernel size 6, causal padding, and dropout probability 0.3. | In each experiment we adopt a fixed network architecture, MP(32)-P(2)-MP(32)-P(2)-MP(32)-AvgPool-Softmax, where MP(32) stands for a MP layer as described in (1) configured with 32 hidden units and ReLU activations, P(2) is a pooling operation with stride 2, AvgPool is a global average pooling operation on all the remai... | ... $=\mathrm{MP}(\mathbf{X}_{j},\mathbf{A};\boldsymbol{\Theta}_{\mathrm{MP}})$ | We use the same architecture adopted for graph classification, with the only difference that each pooling operation is now implemented with stride 4: MP(32)-P(4)-MP(32)-P(4)-MP(32)-AvgPool-Softmax. | D |
Another advantage is that the proposed method does not create a predefined architecture but enables arbitrary network architectures. | To study the sampling process, we analyze the variability of the generated data as well as different sampling modes in the next experiment. | Imitation learning performance (in accuracy [%]) of different data sampling modes on Soybean. NRFI achieves better results than random data generation. When optimizing the selection of the decision trees, the performance is improved due to more diverse sampling. | In the next step, the imitation learning performance of the sampling modes is evaluated. The results are shown in Table 3. | In the next experiment, we study the effects of training with original data, NRFI data, and combinations of both. For that, the | A |
Considering the convergence rate for $\widehat{\sigma}_{\pm}^{V}$ and further studying the joint behavior goes beyond the aims of this article. | To provide the ideas and steps of the proof, and for the reader's convenience, a proof of Proposition 1 for OBM is provided in Appendix B as an introduction to the proof of Proposition 4. | Appendix B is an introduction to Appendix C: some of the main ideas are already given in this section through a proof of the convergence (without rates) towards the local time of the statistics. | As already mentioned, in the case of SBM, Proposition 1 follows from [40, Proposition 2] (with $T=1$) and the scaling property (A.1) in Appendix A.1. | We first deal with the convergence in probability to the local time in Proposition 1, which was already known for SBM. Another proof of Proposition 1 is also given in Appendix B. | D |
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt... | A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient ... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... | Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; ... | step, which is commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), lacks such a notion of robustness. | A |
MobileNetV2 (Sandler et al., 2018a) extends this concept by introducing additive skip connections and bottleneck layers. | MobileNetV3 (Howard et al., 2019) extends this even further by also incorporating the neural architecture search (NAS) proposed in MnasNet (Tan et al., 2018). | Wu et al. (2018a) performed mixed-precision quantization using similar NAS concepts to those used by Liu et al. (2019a) and Cai et al. (2019). | Tan and Le (2019) proposed EfficientNet which employs NAS for finding a resource-efficient architecture as a key component. | In MnasNet (Tan et al., 2018), a RNN controller is trained by also considering the latency of the sampled DNN architecture measured on a real mobile device. | A |
One way to obtain an indication of a projection’s quality is to compute a single scalar value, equivalent to a final score. Examples are Normalized Stress [7], Trustworthiness and Continuity [24], and Distance Consistency (DSC) [25]. More recently, ClustMe [26] was proposed as a perception-based measure that ranks scat... | We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are quite different and ... | While this might be useful for quick overviews or automatic selection of projections, a single score fails to capture more intricate details, such as where and why a projection is good or bad [27]. In contrast, local measures such as the projection precision score (pps) [18] describe the quality for each individual poi... | After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ... | t-viSNE is similar to these works in its use of measures to guide the user’s exploration, but we use measures and mappings that are either specific to t-SNE’s algorithm or customized to be more useful in this scenario. | B |
From the comparison of 3 extra experiments, we confirm that the adaptive graph update plays a positive role. Besides, the novel architecture with weighted graph improves the performance on most of the datasets. | To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, $T$, is set as 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo... | Figure 2: Visualization of the learning process of AdaGAE on USPS. Figures 2(b)-2(i) show the embedding learned by AdaGAE at the $i$-th epoch, while the raw features and the final results are shown in Figures 2(a) and 2(j), respectively. An epoch corresponds to an update of the graph. | To study the impact of different parts of the loss in Eq. (12), the performance with different $\lambda$ is reported in Figure 4. | Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update t... | C |
Importantly, Gregory et al. (2021) do not explicitly focus on inference, and their analysis requires much stronger assumptions to obtain the oracle property. For example, these assumptions include normally distributed errors independent of $X$, as well as a bounded support of $X$. Similar to our frame... | In general, the performance of the estimator and the confidence bands depends on the specification of the cubic B-splines used to approximate the target functions (in terms of the number of knots). In our simulations, we observe that the quality of estimation and width of the confidence bands change only moderately whe... | The primary aim of our paper is to provide a method for constructing uniformly valid inference and confidence bands in sparse high-dimensional models in the sieve framework. In doing so, we contribute to the growing literature on high-dimensional inference in additive models, especially that on debiased/double machine ... | In a recent study based on the previously mentioned debiasing approach by Zhang and Zhang (2014), Gregory et al. (2021) propose an estimator for the first component $f_{1}$ in a high-dimensional additive model in which the number of additive components $p$ ... | A procedure explicitly addressing the construction of uniformly valid confidence bands for the components in high-dimensional additive models has been developed by Lu et al. (2020). The authors emphasize that achieving uniformly valid inference in these models is challenging due to the difficulty of directly generalizi... | D |
Figure 3(a) is a t-SNE projection [61] of the instances (MDS [22] and UMAP [31] are also available in order to empower the users with various perspectives for the same problem, based on the DR guidelines from Schneider et al. [47]). | (iii) during the data wrangling phase, we manipulate the instances and features with two different views for each of them; (iv) model exploration allows us to reduce the size of the stacking ensemble, discard any unnecessary models, and observe the predictions of the models collectively (StackGenVis: Alignment of Data,... | Figure 3: The data space projection with the importance of each instance measured by the accuracy achieved by the stack models (a). The parallel coordinates plot view for the exploration of the values of the features (b); a problematic case is highlighted in red with values being null (‘4’ has no meaning for Ca). (c.1)... | The point size is based on the predictive accuracy calculated using all the chosen models, with smaller size encoding higher accuracy value. | Figure 4: Our feature selection view that provides three different feature selection techniques. The y-axis of the table heatmap depicts the data set’s features, and the x-axis depicts the selected models in the current stored stack. Univariate-, permutation-, and accuracy-based feature selection is available as long w... | C |
Based on Theorem 4.3 and Lemma 4.4, we establish the following corollary, which characterizes the global optimality and convergence of the TD dynamics $\theta^{(m)}(k)$ in (3.3). | Under the same conditions of Theorem 4.3, it holds with probability at least $1-\delta$ that | where $C_{*}>0$ is a constant depending on $D_{\chi^{2}}(\underline{\nu}\,\Vert\,\nu_{0})$ ... | where $C_{*}>0$ is a constant depending on $D_{\chi^{2}}(\bar{\nu}\,\Vert\,\nu_{0})$ ... | Under Assumptions 4.1, 4.2, and 6.3, it holds for $\eta=\alpha^{-2}$ that | A |
While not having been formalised by the original authors, one could interpret the area under the curves as an intuitive notion of effect ‘density’, as opposed to sparsity: an input with a sparse (dense) effect will have a relatively high (low) area under the p-value curve. This is because many parts of the p-value funct... | Predicting a quantity for the long time scales which matter for the climate is a hard task, with a great degree of uncertainty involved. Many efforts have been undertaken to model and control this and other uncertainties, such as the development of standardized scenarios of future development, called Shared Socio-econo... | A fundamental tool to understand and explore the complex dynamics that regulates this phenomenon is the use of computer models. In particular, the scientific community has oriented itself towards the use of coupled climate-energy-economy models, also known as Integrated Assessment Models (IAM). These are pieces of soft... | Some fundamental pieces of knowledge are still missing: given a dynamic phenomenon such as the evolution of $\mathrm{CO}_{2}$ emissions in time, a policymaker is interested in whether the input of the factor varies across time, and how. Moreover, given the presence... | For this paper we focus on $\mathrm{CO}_{2}$ emissions as the main output of an ensemble of coupled climate-economy-energy models. Each model-scenario produces a vector of $\mathrm{CO}_{2}$ emissions defined from the year 2010 to 2090 at 10-year time intervals. This discretization of the output space is in any case arbitrary, since $\mathrm{CO}_{2}$ ... | D |
Second, we mentioned briefly that, when there are multiple observations, one can apply IP-SVD on the sample covariance tensor. | The STEFA model is related to a list of tensor response regression models (Raskutti et al., 2019) with a low-rank coefficient tensor. | Last but not least, there is a great need to develop new methods to use STEFA in tensor regression or other tensor-data-related applications. | The STEFA model is to the MMC tensor regression as the projected PCA is to the reduced-rank regression. | STEFA is a generalization of the semi-parametric vector factor model (Fan et al., 2016) to tensor data. | B |
Hence, with the same number of gradient computations, SNGM can adopt a larger batch size than MSGD to converge to the $\epsilon$-stationary point. | Empirical results on deep learning show that with the same large batch size, SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods. | Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. | Empirical results on deep learning further verify that SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods. | Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b... | C |
The second is vector-on-tensor regression in which we have a vector response (Miranda et al., 2018). | To evaluate the estimation performance on the regression function, we define the integrated squared error (ISE) of the regression function as | In this section, we propose an interpretable nonparametric model for the regression function $m$. | In this work, we focus on the scalar-on-tensor regression model, and we denote the regression function by $m$. | We propose an estimator of the regression function $m$ and a corresponding estimation algorithm. | C |
... $\in\mathcal{E}\times[H]$. $\big\Vert\sum_{l=\tau}^{k-1}\boldsymbol{\phi}_{h}^{l}[V_{h+1}$ ... | Next we proceed to derive the dynamic regret bounds for two cases: (1) local variations are known, and (2) local variations are unknown. | We develop the LSVI-UCB-Restart algorithm and analyze the dynamic regret bound for both cases that local variations are known or unknown, assuming the total variations are known. We define local variations (Eq. (2)) as the change in the environment between two consecutive epochs instead of the total changes over the en... | By applying a similar proof technique as Theorem 3, we can derive the dynamic regret within one epoch when local variations are unknown. | Now we derive the dynamic regret bounds for LSVI-UCB-Restart, first introducing additional notation for local variations. We let | A |
Existing works including [31, 32] also talk about the sample complexity bounds for the projected Wasserstein distance. | As suggested in [23], the power of the MMD test with the median heuristic decays quickly to zero when the dimension $d$ increases. | Except for the second-order moment term, the acceptance region does not depend on the dimension of the support of the distributions, but only on the sample size and the dimension of the projected spaces. | [31, 6] find the worst-case direction that maximizes the Wasserstein distance between projected sample points in one dimension. | However, the bound presented in [31] depends on the input dimension $d$ and focuses on the case $k=1$ only. | D |
I think I would make what these methods are doing clearer. They aren’t really separating into nuisance and independent only… they are also throwing away nuisance. | Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and corre... | The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (footnote 1: In this exposition we use unsupervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervised... | Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$... | While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i... | D |
Forward selection is a simple, greedy feature selection algorithm (Guyon & Elisseeff, 2003). It is a so-called wrapper method, which means it can be used in combination with any learner (Guyon & Elisseeff, 2003). The basic strategy is to start with a model with no features, and then add the sing... | Excluding the interpolating predictor, nonnegative ridge regression produced the least sparse models. This is not surprising considering it performs view selection only through its nonnegativity constraints. Its high FPR in view selection appeared to negatively influence its test accuracy, as there was generally at lea... | Consider the view corresponding to the largest reduction in AIC. If the coefficients (excluding the intercept) of the resulting model are all nonnegative, update the model and repeat starting at step 2. | If some of the coefficients (excluding the intercept) of the resulting model are negative, remove the view (from step 3) from the list of candidates and repeat starting at step 3. | In MVS, the meta-learner takes as input the matrix of cross-validated predictions $\bm{Z}$. To perform view selection, the meta-learner should be chosen such that it returns (potentially) sparse models. The matrix $\bm{Z}$ has a few special characteristics which can be exploited, and which... | B |
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a mu... | Comparison with Faury et al. [2020] Faury et al. [2020] use a bonus term for optimization in each round, and their algorithm performs non-trivial projections on the admissible log-odds. While we do reuse the Bernstein-style concentration inequality as proposed by them, their results do not seem to extend directly to th... | In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of... | In this work, we proposed an optimistic algorithm for learning under the MNL contextual bandit framework. Using techniques from Faury et al. [2020], we developed an improved technical analysis to deal with the non-linear nature of the MNL reward function. As a result, the leading term in our regret bound does not suffe... | CB-MNL enforces optimism via an optimistic parameter search (e.g. in Abbasi-Yadkori et al. [2011]), which is in contrast to the use of an exploration bonus as seen in Faury et al. [2020], Filippi et al. [2010]. Optimistic parameter search provides a cleaner description of the learning strategy. In non-linear reward mod... | A |
The analytical requirements (R1–R5) originate from the analysis of the related work in Section 2, including the three analytical needs from Park et al. [PNKC21], the three key decisions from Wang et al. [WMJ∗19], and the five sub-steps from Li et al. [LCW∗18]. | Also, our own experiences played a vital role, for instance, VA tools for ML such as t-viSNE [CMK20] and StackGenVis [CMKK21], and recently-conducted literature reviews [CMJK20, CMJ∗20]. | The use of parallel coordinates plots [ID87] is rather prominent for the visualization of automatic hyperparameter tuners such as HyperOpt [BKE∗15]. Most of the time, less interactive visualizations have been developed for monitoring automatic frameworks [ASY∗19, GSM∗17, KKP∗18, LLN∗18, LTKS19, TBCT∗18]. Visualizations... | Visualization tools have been implemented for sequential-based, bandit-based, and population-based approaches [PNKC21], and for more straightforward techniques such as grid and random search [LCW∗18]. Evolutionary optimization, however, has not experienced similar consideration by the InfoVis and VA communities, with t... | There are relevant works that involve the human in interpreting, debugging, refining, and comparing ensembles of models [DCCE19, LXL∗18, NP20, SJS∗18, XXM∗19, ZWLC19]. These papers use bagging [Bre01] and boosting [CG16, FSA99, KMF∗17] techniques for ranking and identifying the best combination of models in different a... | A |
The stochastic blockmodel (SBM) [SBM] is one of the most used models for community detection, in which all nodes in the same community are assumed to have equal expected degrees. Some recent developments of SBM can be found in [abbe2017community] and references therein. Since in empirical network data sets, the degr... | In this paper, we extend the symmetric Laplacian inverse matrix (SLIM) method [SLIM] to mixed membership networks and call this proposed method mixed-SLIM. As mentioned in [SLIM], the idea of using the symmetric Laplacian inverse matrix to measure the closeness of nodes comes from the first hitting time in a random... | In this section, first, we investigate the performances of Mixed-SLIM methods for the problem of mixed membership community detection via synthetic data. Then we apply some real-world networks with true label information to test Mixed-SLIM methods’ performances for community detection, and we apply the SNAP ego-network... | In this section, we first introduce the main algorithm mixed-SLIM, which can be taken as a natural extension of the SLIM [SLIM] to the mixed membership community detection problem. Then we discuss the choice of some tuning parameters in the proposed algorithm. | This paper makes one major contribution: modified SLIM methods for mixed membership community detection under the DCMM model. When dealing with large networks in practice, we apply Mixed-$\mathrm{SLIM}_{appro}$ ... | D |
Detommaso et al. (2018); Han and Liu (2018); Chen et al. (2018); Liu et al. (2019); Gong et al. (2019); Wang et al. (2019); Zhang et al. (2020); Ye et al. (2020) | we prove that variational transport constructs a sequence of probability distributions that converges linearly to the global minimizer of the objective functional up to a statistical error due to estimating the Wasserstein gradient with finite particles. Moreover, such a statistical error converges to zero as the numbe... | use the empirical distribution of the particles to approximate the probability measure and the iterates are updated via pushing the particles in directions specified by the solution to a variational problem. | Departing from MCMC where independent stochastic particles are used, it leverages interacting deterministic particles to approximate the probability measure of interest. In the mean-field limit where the number of particles goes to infinity, it can be viewed as the gradient flow of the KL-divergence with respect to a mod... | In other words, as the number of particles and the number of iterations both go to infinity, variational transport finds the global minimum of $F$. | C |
A unit-specific covariate process, $\mathbf{Z}(t)=Z_{1:U}(t)$, has a value, $Z_{u}(t)$, for each unit, $u$... | Slots in this object encode the components of the SpatPOMP model, and can be filled or changed using the constructor function spatPomp() and various other convenience functions. | If any of the variables in the covariates data.frame is common among all units, the user must supply the variable names as class ‘character’ vectors to the shared_covarnames argument of the spatPomp() constructor function. | Optionally, simulate can be made to return a class ‘data.frame’ object by supplying the argument format=‘data.frame’ in the call to simulate. | In spatPomp, covariate processes can be supplied as a class ‘data.frame’ object to the covar argument of the spatPomp() constructor function. | D |
(i) choose four suitable data space slices, which are then used for evaluating the impact of each feature on particular groups of instances (Fig. 1(a)); | After the feature selection phase, we use the graph view to transform the most contributing features (F4 in Fig. 5(e) and F18 in Fig. 6(a)). | (ii) in the exploration phase, choose subsets of features using diverse automatic feature selection techniques (see Fig. 1(b)); | Various visualization techniques have been proposed for the task of feature selection, including correlation matrices [42, 43], radial visualizations [44, 45, 46], scatterplots [47], scatterplot matrices [48], feature ranking [49, 50, 51, 52, 53, 54, 55, 56], feature clustering [57], and dimensionality reduction (DR) [... | There are several different techniques for computing feature importance that produce diverse outcomes per feature. The tool should facilitate the visual comparison of alternative feature selection techniques for each feature (T2). Another key point is that users should have the ability to include and exclude features d... | B |
We have pointed to issues with the existing bias mitigation approaches, which alter the loss or use resampling. An orthogonal avenue for attacking bias mitigation is to use alternative architectures. Neuro-symbolic and graph-based systems could be created that focus on learning and grounding predictions on structured c... | This work was supported in part by the DARPA/SRI Lifelong Learning Machines program [HR0011-18-C-0051], AFOSR grant [FA9550-18-1-0121], and NSF award #1909696. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any s... | In this set of experiments, we compare the resistance to explicit and implicit biases. We primarily focus on the Biased MNISTv1 dataset, reserving each individual variable as the explicit bias in separate runs of the explicit methods, while treating the remaining variables as implicit biases. To ease analysis, we compu... | In Fig. 3(a), we present the MMD boxplots for all bias variables, comparing cases when the label of the variable is either explicitly specified (explicit bias), or kept hidden (implicit bias) from the methods. Barring digit position, we observe that the MMD values are higher when the variables are not explicitly labele... | It is unknown how well the methods scale up to multiple sources of biases and large number of groups, even when they are explicitly annotated. To study this, we train the explicit methods with multiple explicit variables for Biased MNISTv1 and individual variables that lead to hundreds and thousands of groups for GQA a... | A |
The GP correlation function is the squared exponential kernel, as is recommended in [13, 38]. The trend function is a first-order regression model: $\mu(\mathbf{x}_{0})=q(\mathbf{x}_{0})^{\top}\boldsymbol{\beta}$ ... | The accuracy of the time series prediction is measured via the mean absolute error (MAE) and root mean square error (RMSE) criteria. They are defined as | We proposed a novel data-driven approach for emulating deterministic complex dynamical systems implemented as computer codes. The output of such models is a time series and presents the evolving state of a physical phenomenon over time. Our method is based on emulating the short-time numerical flow map of the system an... | We note that the Lorenz attractor cannot be predicted perfectly due to its chaotic behaviour. The vertical dashed blue lines indicate the “predictability horizon” defined as the time at which a change point occurs in the SD of prediction [38]. The predictability horizon is acquired by applying the cpt.mean function imp... | Following the above procedure renders only one prediction of the time series. However, we wish to have an estimation of the uncertainty associated with the prediction accuracy. This can be achieved by repeating the above steps with different draws from the emulated flow map to obtain a distribution over the time series. Th... | A |
One of the classical and important problems in statistics is testing the independence between two or more components of a random vector. Testing for mutual independence, which characterizes the structural relationships between random variables and is strictly stronger than pairwise independence, is a fundamental task i... | implemented in the R package dHSIC [Pfister and Peters (2017)]. The test based on ranks of distances introduced in Heller, Heller and Gorfine (2013) | Zhang, Gao and Ng (2023) proposed a new class of independence measures based on the maximum mean discrepancy in Reproducing Kernel Hilbert Space. In the literature, additional methods for testing the independence of two multidimensional random variables have emerged, including those based on the $L_{1}$ ... | [Bach and Jordan (2003)], [Chen and Bickel (2006)], [Samworth and Yuan (2012)] and [Matteson and Tsay (2017)]. Testing independence also has many applications, including causal inference ([Pearl (2009)], [Peters et al. (2014)], | [Pfister et al. (2018)], [Chakraborty and Zhang (2019)]), graphical modeling ([Lauritzen (1996)], [Gan, Narisetty and Liang (2019)]), linguistics ([Nguyen and Eisenstein (2017)]), clustering (Székely and Rizzo, 2005), dimension reduction (Fukumizu, Bach and Jordan, 2004; Sheng and Yin, 2016). The traditional approach f... | C |
$\mathbf{x}_{t+1}\leftarrow\mathbf{x}_{t}+\gamma_{t}(\mathbf{v}_{t}-\mathbf{x}_{t})$ | In Table 2 we provide a detailed complexity comparison between the Monotonic Frank-Wolfe (M-FW) algorithm (Algorithm 1), and other comparable algorithms in the literature. | In Table 3 we provide an oracle complexity breakdown for the Frank-Wolfe algorithm with Backtrack (B-FW), also referred to as LBTFW-GSC in Dvurechensky et al. [2022], when minimizing over a $(\kappa,q)$-uniformly convex set. | In Table 4 we provide a detailed complexity comparison between the Backtracking AFW (B-AFW) Algorithm 5, and other comparable algorithms in the literature. | We note that the LBTFW-GSC algorithm from Dvurechensky et al. [2022] is in essence the Frank-Wolfe algorithm with a modified version of the backtracking line search of Pedregosa et al. [2020]. In the next section, we provide improved convergence guarantees for various cases of interest for this algorithm, which we refe... | A |
Differential privacy essentially provides the optimal asymptotic generalization guarantees given adaptive queries (Hardt and Ullman, 2014; Steinke and Ullman, 2015). However, its optimality is for worst-case adaptive queries, and the guarantees that it offers only beat the naive intervention—of splitting a dataset so t... | One cluster of works that steps away from this worst-case perspective focuses on giving privacy guarantees that are tailored to the dataset at hand (Nissim et al., 2007; Ghosh and Roth, 2011; Ebadi et al., 2015; Wang, 2019). In Feldman and Zrnic (2021) in particular, the authors elegantly manage to track the individua... | In order to complete the triangle inequality, we have to define the stability of the mechanism. Bayes stability captures the concept that the results returned by a mechanism and the queries selected by the adaptive adversary are such that the queries behave similarly on the true data distribution and on the posterior d... | Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on intuition that average-case privacy can be viewed from a Bay... | Differential privacy (Dwork et al., 2006) is a privacy notion based on a bound on the max divergence between the output distributions induced by any two neighboring input datasets (datasets which differ in one element). One natural way to enforce differential privacy is by directly adding noise to the results of a nume... | A |
... $p(y^{*}\mid\mathbf{x}^{*},\mathcal{D})=\int p(y^{*}\mid\mathbf{x}^{*},\theta)\,p(\theta\mid\mathcal{D})\,\mathrm{d}\theta$. | Instead of computing the posterior distribution through Eq. (4), the problem is reformulated as a variational problem, i.e. the posterior distribution $p(\theta\mid\mathcal{D})$ is replaced by a parametric family of distributions $q(\theta;\lambda)$ ... | To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average ... | In Bayesian inference one tries to model the distribution of interest by updating a prior estimate using a collection of observed data. The conditional distribution $p(Y\mid X,\mathcal{D})$ is inferred from a given parametric model or likelihood... | This process is summarized in Algorithm 1. Note that the algorithm can simply be repeated in an on-line fashion when more data becomes available. One simply takes the “old” posterior distribution $p(\theta\mid\mathcal{D})$ as the new prior distribution. | D |
Elliott and Golub (2019) characterize outcomes in public goods games on exogenous networks by the spectrum of a matrix called the benefits matrix, in which each entry gives the marginal rate of substitution between decreasing own contribution and increased benefits from a neighbor in a fixed network. Their results tie ... | Our model departs from the existing literature on public goods in endogenous networks in a number of ways. Primarily, we model a situation in which individuals choose others with whom they would like to share the externalities generated by their resources. This is the reverse of the situations studied in the previous l... | Cross-sectional network formation estimators rely on assumptions about the meeting process and dynamics that guarantee convergence to a stochastically stable stationary distribution, also called a Quantal Response Equilibrium (QRE). While the QRE (McKelvey and Palfrey, 1995, 1998) is a fixed point stationary distributi... | In another relevant study by Rand et al. (2011), the authors conducted an experiment to gauge the effects of endogenous networks on cooperation in a repeated prisoner’s dilemma. By varying the opportunity for network updates, they showed that subjects are able to take advantage of their ability to change social ties in... | Finally, in the exogenous/fixed network case, Boosey (2017) uses data from a laboratory experiment to examine the mechanisms for cooperation in a repeated network public goods game. Experimental results showed a significant portion of subjects playing strategies of conditional cooperation, in which subjects play strate... | D |
For illustration purposes, we employ in this work open source data from the “Telecom Italia Big Data Challenge”, which contains telecommunications activity aggregated over a fixed spatial grid of the city of Milan during the months of November and December 2013. | Table 1 shows the posterior mean and standard deviation of the satisfaction accuracy, satisfaction F1 score and robustness RMSE for all four properties. We observe that the CAR-AR-BNP model is the best-performing one in terms of the measures inspected, however, the difference in performance for some properties is not l... | for the evaluation of the city in terms of safety and quality of life, it is interesting to look at how the city is performing with respect to the reachability of some key points of interest. For example, in an emergency scenario, a traffic monitoring body would be interested in the following requirement (assuming that... | Figure 6 presents the average value of the measures in Table 1 for all testing periods, together with 80% credible intervals. This figure can be used for deciding which model performs best in terms of specific interest in the verified properties. For example, it can be seen that the autoregressive models perform simila... | Our results provide a deeper understanding of urban dynamics in Milan in terms of the best-performing model which identifies clusters of areas with similar temporal patterns and in terms of when and how well the formulated properties are satisfied. | D |
In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from 10%/90% to 90%/10% of the data). The horizontal axis depic... | other columns. For example, for the total charge variable we use the data available for total charge, length of stay, number of procedures, number of diagnoses and age. | For the experiments of predicting length of stay, totchg, npr and ndx are used as predictors due to the high | age. For predicting total charge, we employ los, npr, ndx and age as predictors for datasets of size $N=2{,}000$ to $N=100{,}000$ with a 90% training and 10% validation | The operators $\mathbf{L}$ and $\mathbf{W}$ are constructed from a multilevel decomposition of the locations of predictors. This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details. However, for the exposition in this section it is sufficient to know what the prop... | B |
The first condition above is meaningful for small $\tau>0$ and hence deals with the boundary of the set $S_{X}$, while the second means that the measure $P_{X}$ charges ... | The proof of the next theorem is given in Section 5.3 using two lemmas, namely Lemma 3 and Lemma 4, that are proved in the Appendix. | ... $\rho_{\delta}(g_{1},g_{2})=\sup_{\lvert u\rvert\leq\delta}\lVert g_{1}-g_{2}$ ... | We now give the proofs of Theorems 3, 4 and 5 by relying on several technical lemmas, namely Lemmas 1, 2, 3 and 4, whose proofs are given in the Appendix. | Based on some technical lemmas, proofs of which are given in the Appendix, the proofs of the main results (Theorems 3, 4 and 5) are presented in Section 5. | A |
In this paper, we develop a new estimation procedure, named High-Order Projection Estimators (HOPE), for TFM-cp in (1). | The procedure includes a warm-start initialization using a newly developed composite principal component analysis (cPCA), and an iterative simultaneous orthogonalization scheme to refine the estimator. The procedure is designed to take advantage of the special structure of TFM-cp, whose autocovariance tensor has a s... | The estimation procedure takes advantage of the special structure of the model, resulting in a faster convergence rate and more accurate estimation compared to the standard procedures designed for the more general TFM-tucker, and the more general tensor CP decomposition. Numerical study illustrates the finite sample pr... | Although these methods can be used directly to obtain the low-rank CP components of the autocovariance tensors, they have been designed for general tensors and do not utilize the special structure embedded in the TFM-cp. | In this section, we focus on the estimation of the factors and loading vectors of model (1). The proposed procedure includes two steps: an initialization step using a new composite PCA (cPCA) procedure, presented in Algorithm 1, and an iterative refinement step using a new iterative simultaneous orthogonalization (ISO)... | A |
... $\mathbb{E}[\widehat{\mathrm{Cov}}_{i}^{*}\mid Y]=\mathrm{Cov}(Y_{i}^{*},g_{i}(Y^{*})\mid Y)$. | Here, for the bias term, we used the fact that $\mathrm{CB}_{\alpha}(g)$ is unbiased for | by the law of total covariance, and where we used $\mathrm{Cov}(\mathbb{E}[Y_{i}^{*}\mid Y],\mathbb{E}[g_{i}(Y^{*})\mid Y])=\mathrm{Cov}(Y_{i},g_{i}(Y^{*}))$ ... | The intuition here is that each pair $(Y^{*b},Y^{\dagger b})$ comprises two | Here we simply used the fact that an empirical covariance computed from i.i.d. samples of a pair of random variables is unbiased for their covariance | D |
In the following subsections, we explain VisRuler by describing a use case with the World Happiness Report 2019 [Helliwell2019World] data set obtained from the Kaggle repository [Kaggle2019]. This data set contains 156 countries (i.e., instances) ranked according to an index representing how happy the citizens of each co... | The exploration starts with an overview of how 10 RF and 10 AB models performed based on three validation metrics: accuracy, precision, and recall. The models are initially sorted according to the overall score, which is the average sum of the three metrics. This choice guides users to focus mostly on the right-hand si... | From the analyses and the overall score of the RF and AB models, we observe that the most performant models for RF consider only 2 features when splitting the nodes (i.e., max_features hyperparameter). The PCPs in Figure 7(d) enable us to scan the internal regions of the hyperparameters’ solution space for RF. As for A... | Exploration and Selection of Algorithms and Models. Following the workflow in Section System Overview and Use Case, Amy loads the data set and checks the score of each model based on the three validation metrics (Figure 1(a)). For the AB algorithm, in blue, all models have a relatively low value for the recall metric, ... | The green color in the center of a point indicates that a decision is from RF, while blue is for AB. The outline color reflects the training instances’ class based on a decision’s prediction. The size maps the number of training instances that are classified by a specific decision, and the opacity encodes the impurity ... | A |
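The rows above follow a simple multiple-choice schema: a context passage, four candidate continuations (A–D), and a label naming the candidate that actually follows. Below is a minimal sketch of how a dataset with this schema could be loaded and inspected with the Hugging Face `datasets` library; the repository ID is a hypothetical placeholder, not the actual dataset path.

```python
# Minimal sketch: load one row of a dataset with the schema shown above using
# the Hugging Face `datasets` library. "user/dataset-name" is a hypothetical
# placeholder ID; substitute the real repository path.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")

row = ds[0]
print(row["context"])                  # passage preceding the four candidates
for option in ("A", "B", "C", "D"):
    print(option, row[option][:100])   # each candidate continuation, shortened
print("label:", row["label"])          # the candidate that actually follows
```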