context stringlengths 250 3.95k | A stringlengths 250 5.12k | B stringlengths 250 3.78k | C stringlengths 250 5.56k | D stringlengths 250 4.12k | label stringclasses 4
values |
|---|---|---|---|---|---|
(x)\frac{f_{n-1}(x)}{f_{n}(x)}}.divide start_ARG italic_f start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ( italic_x ) end_ARG start_ARG italic_f start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_x ) end_ARG = divide start_ARG italic_g start_POSTSUBSCRIPT 2 end_POSTSUBSC... | \frac{f_{n-2}(x)}{f_{n-1}(x)}}.divide start_ARG italic_f start_POSTSUBSCRIPT italic_n - 1 end_POSTSUBSCRIPT ( italic_x ) end_ARG start_ARG italic_f start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ( italic_x ) end_ARG = divide start_ARG italic_a start_POSTSUBSCRIPT 1 , italic_n - 1 end_POSTSUBSCRIPT end_ARG start_ARG ita... |
g2(x)fn′(x)=g1(x)fn(x)+g0(x)fn−1(x);subscript𝑔2𝑥superscriptsubscript𝑓𝑛′𝑥subscript𝑔1𝑥subscript𝑓𝑛𝑥subscript𝑔0𝑥subscript𝑓𝑛1𝑥\displaystyle g_{2}(x)f_{n}^{\prime}(x)=g_{1}(x)f_{n}(x)+g_{0}(x)f_{n-1}(x);italic_g start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( italic_x ) italic_f start_POSTSUBSCRIPT italic_... | a1,n−1fn(x)=(a2,n−1+a3,n−1x)fn−1(x)−a4,n−1fn−2(x),subscript𝑎1𝑛1subscript𝑓𝑛𝑥subscript𝑎2𝑛1subscript𝑎3𝑛1𝑥subscript𝑓𝑛1𝑥subscript𝑎4𝑛1subscript𝑓𝑛2𝑥a_{1,n-1}f_{n}(x)=(a_{2,n-1}+a_{3,n-1}x)f_{n-1}(x)-a_{4,n-1}f_{n-2}(x),italic_a start_POSTSUBSCRIPT 1 , italic_n - 1 end_POSTSUBSCRIPT italic_f start_POST... | (x)\frac{f_{n-1}(x)}{f_{n}(x)}}.divide start_ARG italic_f start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ( italic_x ) end_ARG start_ARG italic_f start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_x ) end_ARG = divide start_ARG italic_g start_POSTSUBSCRIPT 2 end_POSTSUBSC... | C |
In other words, our algorithm initialises w:=gassign𝑤𝑔w:=gitalic_w := italic_g, u1:=1assignsubscript𝑢11u_{1}:=1italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT := 1 and u2:=1assignsubscript𝑢21u_{2}:=1italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT := 1 and multiplies w𝑤witalic_w, u1subscript𝑢1u_{1}italic_u start... |
For the purposes of determining the cost of Taylor’s algorithm in terms of matrix operations, namely determining the length of an MSLP for the algorithm, we assume that the field elements −gicgrc−1subscript𝑔𝑖𝑐superscriptsubscript𝑔𝑟𝑐1-g_{ic}g_{rc}^{-1}- italic_g start_POSTSUBSCRIPT italic_i italic_c end_POSTSU... | The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
| does not yield an upper bound for the memory requirement in a theoretical analysis.
Moreover, the result of SlotUsagePattern improves the memory usage but it is not necessarily optimized overall and, hence, the number of slots can still be greater than the number of slots of a carefully computed MSLP. It should also be... |
As for the simpler examples considered in the previous section, here to keep the presentation clear we do not write down explicit MSLP instructions, but instead determine the cost of Algorithm 3 while keeping track of the number of elements that an MSLP for this algorithm would need to keep in memory at any given time... | D |
It then follows from Lemma 1 that 1≤αiF≤α1superscriptsubscript𝛼𝑖𝐹𝛼1\leq\alpha_{i}^{F}\leq\alpha1 ≤ italic_α start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_F end_POSTSUPERSCRIPT ≤ italic_α for all the local eigenvalues. Thus, Λ~h△=Λ~hfsuperscriptsubscript~Λℎ△superscriptsubscript~Λℎ𝑓\ti... | The remainder of the this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its a discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis... |
The key to approximate (25) is the exponential decay of Pw𝑃𝑤Pwitalic_P italic_w, as long as w∈H1(𝒯H)𝑤superscript𝐻1subscript𝒯𝐻w\in{H^{1}({\mathcal{T}_{H}})}italic_w ∈ italic_H start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT ( caligraphic_T start_POSTSUBSCRIPT italic_H end_POSTSUBSCRIPT ) has local support. That al... | Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low-contrast, leading to sharper estimates. We remark that in this case, our method is similar to that of [MR3591945], with some differences. First we consider that T... |
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | C |
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]),
Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases. |
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on P𝑃Pitalic_P’s vertices and (2) searching the next candidate from a given one is much easier – the code length for this is 1:7 in Alg-A and in Alg-CM. |
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | The difference is mainly due to the degenerate case (where a chord of P𝑃Pitalic_P is parallel to an edge of P𝑃Pitalic_P) and the float issue of both programs.
Our implementations of Alg-K and Alg-CM have logical difference in handling degenerate cases. | D |
Early in an event, the related tweet volume is scanty and there are no clear propagation pattern yet. For the credibility model we, therefore, leverage the signals derived from tweet contents. Related work often uses aggregated content [18, 20, 32], since individual tweets are often too short and contain slender contex... |
Given a tweet, our task is to classify whether it is associated with either a news or rumor. Most of the previous work [6, 11] on tweet level only aims to measure the trustfulness based on human judgment (note that even if a tweet is trusted, it could anyway relate to a rumor). Our task is, to a point, a reverse engin... |
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure, when building our approach for early rumor detection with our extended dataset, and we provide a deep analys... | at an early stage. Our fully automatic, cascading rumor detection method follows
the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha... |
For the evaluation, we developed two kinds of classification models: traditional classifier with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 4.2... | A |
The convergence of the direction of gradient descent updates to the maximum L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT margin solution, however is very slow compared to the convergence of training loss, which explains why it is worthwhile
continuing to optimize long after we have zero training ... | We should not rely on plateauing of the training loss or on the loss (logistic or exp or cross-entropy) evaluated on a validation data, as measures to decide when to stop. Instead, we should look at the 00–1111 error on the validation dataset. We might improve the validation and test errors even when when the decrease ... | Let ℓℓ\ellroman_ℓ be the logistic loss, and 𝒱𝒱\mathcal{V}caligraphic_V be an independent validation set, for which ∃𝐱∈𝒱𝐱𝒱\exists\mathbf{x}\in\mathcal{V}∃ bold_x ∈ caligraphic_V such that 𝐱⊤𝐰^<0superscript𝐱top^𝐰0\mathbf{x}^{\top}\hat{\mathbf{w}}<0bold_x start_POSTSUPERSCRIPT ⊤ end_POSTSUPERSCRIPT over^ start_... | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very different on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail
part does not affect the bias. The bias is a... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | A |
+1\{y^{(i)}=y_{news}\}log(\tilde{y}_{news}^{(i)})sansserif_L ( italic_x start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT , italic_y start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT ) = 1 { italic_y start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT = italic_y start_POSTSUBSCRIPT italic_r italic_u italic... | The processing pipeline of our clasification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline,
we predict tweet credibilty with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi... | The effective cascaded model that engages both low and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402, ). The model uses time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to... | In the lower part of the pipeline, we extract features from tweets and combine them with the creditscore to construct the feature vector in a time series structure called Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification.
|
As observed in (madetecting, ; ma2015detect, ), rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in (ma2015detect, ). W... | D |
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at aspect-level, i.e., time and event type. Thus, we... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, SVMall𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT, will all features, gives a rather stable performance for both NDCG and Recall... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and types (Breaking and Anticipate) set of entity-bearing queries. This allows us to evaluate the feature performance i.e., salience and timeliness, with time and type ... | D |
In this case, the agent must sequentially learn both the underlying dynamics (La,Σa;∀asubscript𝐿𝑎subscriptΣ𝑎for-all𝑎L_{a},\Sigma_{a};\forall aitalic_L start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT , roman_Σ start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT ; ∀ italic_a)
and the conditional reward function’s variance ... | We observe noticeable (almost linear) regret increases when the dynamics of the parameters swap the identity of the optimal arm.
However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters, | If the support of q(⋅)𝑞⋅q(\cdot)italic_q ( ⋅ ) includes the support of the distribution of interest p(⋅)𝑝⋅p(\cdot)italic_p ( ⋅ ), one computes the IS estimator of a test function based on the normalized weights w(m)superscript𝑤𝑚w^{(m)}italic_w start_POSTSUPERSCRIPT ( italic_m ) end_POSTSUPERSCRIPT,
| For the more interesting case of unknown parameters,
we marginalize parameters Lasubscript𝐿𝑎L_{a}italic_L start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT and ΣasubscriptΣ𝑎\Sigma_{a}roman_Σ start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT of the transition distributions | We now describe in detail how to use the SMC-based posterior random measure pM(θt+1,a|ℋ1:t)subscript𝑝𝑀conditionalsubscript𝜃𝑡1𝑎subscriptℋ:1𝑡p_{M}(\theta_{t+1,a}|\mathcal{H}_{1:t})italic_p start_POSTSUBSCRIPT italic_M end_POSTSUBSCRIPT ( italic_θ start_POSTSUBSCRIPT italic_t + 1 , italic_a end_POSTSUBSCRIPT | cali... | A |
The data collection study was conducted from end of February to beginning of April 2017 by Emperra and includes 10 patients who were given specially prepared smartphones. Measurements on carbohydrate consumption, blood glucose levels, and insulin intake were made with Emperras Esysta system. Measurements on physical ac... |
Table 1 shows basic patient information. Half of the patients are female and ages range from 17 to 66, with a mean age of 41.8 years. Body weight, according to BMI, is normal for half of the patients, four are overweight and one is obese. The mean BMI value is 26.9. Only one of the patients suffers from diabetes type ... | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients.
For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measuremtents after the meals is within this range, while patient 12 has only t... | The insulin intakes tend to be more in the evening, when basal insulin is used by most of the patients. The only difference happens to patient 10 and 12 whose intakes are earlier at day.
Further, patient 12 takse approx. 3 times the average insulin dose of others in the morning. | Table 2 gives an overview of the number of different measurements that are available for each patient.111For patient 9, no data is available.
The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14. | A |
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met... |
Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity and the superscript ††{}^{\dagger}start_FLOATSUPERSCRIPT † end_FLOATSUPERSCRIPT represents pre-trai... |
Table 1: Quantitative results of our model for the MIT300 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript ††{}^{\dagger}start_FLOATSUPERSCRIPT † end_FLOATSUPERSCRIPT represents models with a VGG16 backbone)... |
Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript ††{}^{\dagger}start_FLOATSUPERSCRIPT † end_FLOATSUPERSCRIPT represents models with a VGG16 backbone... |
We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In case a reimplementation was not possible, we faithfully estimated a lowe... | A |
Pathwidth and cutwidth are classical graph parameters that play an important role for graph algorithms, independent from our application for computing the locality number. Therefore, it is the main purpose of this section to translate the reduction from MinCutwidth to MinPathwidth that takes MinLoc as an intermediate s... |
The relationship between cutwidth and pathwidth revealed by this direct reduction is best illustrated via a third graph parameter that we call second order cutwidth. To the best of our knowledge, this parameter has not explicitly been studied before. | One of the main results of this section is a reduction from the problem of computing the locality number of a word α𝛼\alphaitalic_α to the probem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr... | In this work, we have answered several open questions about the string parameter of the locality number. Our main tool was to relate the locality number to the graph parameters cutwidth and pathwidth via suitable reductions. As an additional result, our reductions also pointed out an interesting relationship between th... | A reason why this direct reduction from cutwidth to pathwidth has been overlooked might be that the literature on cutwidth and pathwidth approximation is focussed on more general approximation techniques (i. e., vertex and edge separators), which then yield approximation algorithms for these graph parameters. Another r... | A |
……\dots………\dots………\dots………\dots………\dots………\dots………\dots…y^^𝑦\hat{y}over^ start_ARG italic_y end_ARGJ𝐽Jitalic_Jy𝑦yitalic_yBackpropagationFeed-forward
Figure 2: A Convolutional Neural Network that calculates the LV area (y^^𝑦\hat{y}over^ start_ARG italic_y end_ARG) from an MRI image (x𝑥xitalic_x). | The pyramidoid structure on top denotes the flow of the feed-forward calculations starting from input image x𝑥xitalic_x through the set of feature maps depicted as 3D rectangulars to the output y^^𝑦\hat{y}over^ start_ARG italic_y end_ARG.
The height and width of the set of feature maps is proportional to the height a... | The arrows at the bottom denote the flow of the backpropagation starting after the calculation of the loss using the cost function J𝐽Jitalic_J, the original output y𝑦yitalic_y and the predicted output y^^𝑦\hat{y}over^ start_ARG italic_y end_ARG.
This loss is backpropagated through the filters of the network adjustin... | Dashed lines denote a 2D convolutional layer with ReLU and Max-Pooling (which also reduces the height and width of the feature maps), the dotted line denotes the fully connected layer and the dash dotted lines at the end denote the sigmoid layer.
For visualization purposes only a few of the feature maps and filters are... | Additionally, convolutional layers create feature maps using shared weights that have a fixed number of parameters in contrast with fully connected layers, making them much faster.
VGG[17] is a simple CNN architecture that utilizes small convolutional filters (3×3333\times 33 × 3) and performance is increased by increa... | A |
Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster... |
Although prior works have proposed training predictive models for next-frame, future-frame, as well as combined future-frame and reward predictions in Atari games (Oh et al. (2015); Chiappa et al. (2017); Leibfried et al. (2016)), no prior work has successfully demonstrated model-based control via predictive models th... | Notable exceptions are the works of
Oh et al. (2017), Sodhani et al. (2019), Ha & Schmidhuber (2018), Holland et al. (2018), Leibfried et al. (2018) and Azizzadenesheli et al. (2018). Oh et al. (2017) use a model of rewards to augment model-free learning with good results on a number of Atari games. However, this metho... | have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control.
Our video models of Atari en... | Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using... | A |
However more work needs to be done for full replacing non-trainable S2Is, not only from the scope of achieving higher accuracy results but also increasing the interpretability of the model.
Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level... | For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems.
An important property of a S2I is whether it consists of trainable para... | This is achieved with the use of multilayer networks, that consist of million parameters [1], trained with backpropagation [2] on large amount of data.
Although deep learning is mainly used in biomedical images there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ... | However more work needs to be done for full replacing non-trainable S2Is, not only from the scope of achieving higher accuracy results but also increasing the interpretability of the model.
Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level... | Future work could include testing this hypothesis by initializing a ‘base model’ using transfer learning or other initialization methods.
Moreover, trainable S2Is and 1D ‘base model’ variations could also be used for other physiological signals besides EEG such as Electrocardiography, Electromyography and Galvanic Skin... | D |
In the realm of mobile robotics research, the motion control of terrestrial robots across varied terrains is a complex endeavor. To enhance locomotion efficacy and elevate mobility, hybrid robots have been actively developed in the past decade [1]. These robots astutely choose the most suitable locomotion mode from a s... | There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ... | This section describes the primary locomotion modes, rolling and walking locomotion of our hybrid track-legged robot named Cricket shown in Fig. 2. It also introduces two proposed gaits designed specifically for step negotiation in quadrupedal wheel/track-legged robots.
|
In the literature review, Gorilla [2] is able to switch between bipedal and quadrupedal walking locomotion modes autonomously using criteria developed based on motion efficiency and stability margin. WorkPartner [8] demonstrated its capability to seamlessly transition between two locomotion modes: rolling and rolking.... | This paper presents a novel methodology for achieving autonomous locomotion mode transitions in quadruped wheel/track-legged hybrid robots, taking into account both internal states of the robot and external environmental conditions. Our emphasis is on the “articulated wheel/track robot” [15], where the wheels or tracks... | A |
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... |
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with larger number of ... | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... |
Second, our model considers the size of advice and its impact on the algorithm’s performance, which is the main focus of the advice complexity field. For all problems we study, we parameterize advice by its size, i.e., we allow advice of a certain size k𝑘kitalic_k. Specifically, the advice need not necessarily encode... |
In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size k𝑘kitalic_k, let η𝜂\etaitalic_η denote the number of erroneous bits (which may be not known to the algorithm). In this setting, the objective would... | D |
there were cases like this subject, in which SS3 failed to predict “depression” due to the accumulated positive value not being able to exceed the negative one even although, in some cases, it was able to get very close. Note that the positive value gets really close to the negative one at around the 100th writing2727... | In some cases, SS3 misclassified subjects as positive because, while it was true that the positive value changed at least 4 times more rapidly than the negative, the condition was mainly true only due to the negative change being very small.
For instance, if the change of the negative confidence value was 0.01, a reall... | This problem can be detected in this subject by seeing the blue dotted peek at around the 60th writing, indicating that “the positive slope changed around five times faster than the negative” there, and therefore misclassifying the subject as positive. However, note that this positive change was in fact really small (l... | the second one, denoted by SS3Δ, was more comprehensive and classified a subject as positive when the first case was met, or when the change of the positive slope was, at least, four times greater than the negative one, i.e. the positive value increased at least 4 times faster202020Those readers interested in the imple... |
there were cases like this subject, in which SS3 failed to predict “depression” due to the accumulated positive value not being able to exceed the negative one even although, in some cases, it was able to get very close. Note that the positive value gets really close to the negative one at around the 100th writing2727... | A |
Since RBGS introduces a larger compressed error compared with top-s𝑠sitalic_s when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge when using RBGS as the sparsification compressor.
To address this convergence issue, | GMC combines error feedback and momentum to achieve sparse communication in distributed learning. But different from existing sparse communication methods like DGC which adopt local momentum, GMC adopts global momentum.
To the best of our knowledge, this is the first work to introduce global momentum into sparse commun... |
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mo... | We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is ... | We can find that both local momentum and global momentum implementations of DMSGD are equivalent to the serial MSGD if no sparse communication is adopted. However, when it comes to adopting sparse communication, things become different. In the later sections, we will demonstrate that global momentum is better than loca... | A |
φ¯¯𝜑\bar{\varphi}over¯ start_ARG italic_φ end_ARG is non-differentiable due to the presence of the ℓ0subscriptℓ0\ell_{0}roman_ℓ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT pseudo-norm in Eq. 3.
A way to overcome this is using ℒℒ\mathcal{L}caligraphic_L as the differentiable optimization function during training and φ¯¯𝜑\... | We set med=m(i)𝑚𝑒𝑑superscript𝑚𝑖med=m^{(i)}italic_m italic_e italic_d = italic_m start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT for utilizing fair comparison between the sparse activation functions.
Specifically for Extrema activation function we introduce a ‘border tolerance’ parameter to allow neuron ac... |
We choose values for d(i)superscript𝑑𝑖d^{(i)}italic_d start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT for each activation function in such as way, to approximately have the same number of activations for fair comparison of the sparse activation functions. | We then pass 𝒔(i)superscript𝒔𝑖\bm{s}^{(i)}bold_italic_s start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT and a sparsity parameter d(i)superscript𝑑𝑖d^{(i)}italic_d start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT in the sparse activation function ϕitalic-ϕ\phiitalic_ϕ resulting in the activation map 𝜶(... | The Extrema-Pool indices activation function (defined at Algorithm 2) keeps only the index of the activation with the maximum absolute amplitude from each region outlined by a grid as granular as the kernel size m(i)superscript𝑚𝑖m^{(i)}italic_m start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT and zeros out the ... | B |
The essence of PBLLA is selecting an alternative UAV randomly in one iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of two strategies and τ𝜏\tauitalic_τ. UAV prefers to select the power and altitude which provide higher utility. Neve... |
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on current game state, and then another UAV ch... |
Since PBLLA only allows one single UAV to alter strategies in one iteration, such defect would cause computation time to grow exponentially in large-scale UAVs systems. In terms of large-scale UAVs ad-hoc networks with a number of UAVs denoted as M𝑀Mitalic_M, M2superscript𝑀2M^{2}italic_M start_POSTSUPERSCRIPT 2 end_... |
Compared with other algorithms, novel algorithm SPBLLA has more advantages in learning rate. Various algorithms have been employed in the UAV networks in search of the optimal channel selection [31][29], such as stochastic learning algorithm [30]. The most widely seen algorithm–LLA is an ideal method for NE approachin... | Fig. 15 presents the learning rate of PBLLA and SPBLLA when τ=0.01𝜏0.01\tau=0.01italic_τ = 0.01. As m𝑚mitalic_m increases the learning rate of SPBLLA decreases, which has been shown in Fig. 15. However, when m𝑚mitalic_m is small, SPBLLA’s learning rate is about 3 times that of PBLLA showing the great advantage of sy... | B |
=ΣejBese3absentsubscript𝑒𝑗absentΣsuperscript𝐵𝑒superscript𝑠𝑒3\displaystyle=\overset{e_{j}}{\underset{}{\Sigma}}\,B^{e}\frac{s^{e}}{3}= start_OVERACCENT italic_e start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_OVERACCENT start_ARG start_UNDERACCENT end_UNDERACCENT start_ARG roman_Σ end_ARG end_ARG italic_B st... | =S¯¯−1∗(M^¯T∗S^^∗Dr^¯)absentsuperscript¯¯𝑆1superscript¯^𝑀𝑇^^𝑆¯^𝐷𝑟\displaystyle=\overline{\overline{S}}^{-1}*\left(\overline{\widehat{M}}^{T}*%
\widehat{\widehat{S}}*\overline{\widehat{Dr}}\right)= over¯ start_ARG over¯ start_ARG italic_S end_ARG end_ARG start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ∗ ( over¯ sta... | U¯r′superscriptsubscript¯𝑈𝑟′\displaystyle\overline{U}_{r}^{\prime}over¯ start_ARG italic_U end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT
=(S¯¯−1∗(M^¯T∗S^^∗Dr^¯))∗U¯absentsuperscript¯¯𝑆1superscript¯^𝑀𝑇^^𝑆¯^𝐷𝑟¯𝑈\displaystyle=\left(\overline{\overline{S}}^{-1}... | U^r′superscriptsubscript^𝑈𝑟′\displaystyle\widehat{U}_{r}^{\prime}over^ start_ARG italic_U end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT
=Dr^¯∗U¯absent¯^𝐷𝑟¯𝑈\displaystyle=\overline{\widehat{Dr}}*\overline{U}= over¯ start_ARG over^ start_ARG italic_D italic_r end... | U¯r′superscriptsubscript¯𝑈𝑟′\displaystyle\overline{U}_{r}^{\prime}over¯ start_ARG italic_U end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT
=Dr¯¯∗U¯absent¯¯𝐷𝑟¯𝑈\displaystyle=\overline{\overline{Dr}}*\overline{U}= over¯ start_ARG over¯ start_ARG italic_D italic_r e... | B |
Let r𝑟ritalic_r be the relation on 𝒞Rsubscript𝒞𝑅\mathcal{C}_{R}caligraphic_C start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT given to the left of Figure 12.
Its abstract lattice ℒrsubscriptℒ𝑟\mathcal{L}_{r}caligraphic_L start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT is represented to the right. | For convenience we give in Table 7 the list of all possible realities
along with the abstract tuples which will be interpreted as counter-examples to A→B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B or B→A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A. | First, remark that both A→B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B and B→A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A are possible.
Indeed, if we set g=⟨b,a⟩𝑔𝑏𝑎g=\langle b,a\rangleitalic_g = ⟨ italic_b , italic_a ⟩ or g=⟨a,1⟩𝑔𝑎1g=\langle a,1\rangleitalic_g = ⟨ italic_a , 1 ⟩, then r⊧gA→... | If no confusion is possible, the subscript R𝑅Ritalic_R will be omitted, i.e., we will use
≤,∧,∨\leq,\operatorname{\land},\operatorname{\lor}≤ , ∧ , ∨ instead of ≤R,∧R,∨Rsubscript𝑅subscript𝑅subscript𝑅\leq_{R},\operatorname{\land}_{R},\operatorname{\lor}_{R}≤ start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT , ∧ start_P... | The tuples t1subscript𝑡1t_{1}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, t4subscript𝑡4t_{4}italic_t start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT represent a counter-example to BC→A𝐵𝐶→𝐴BC\operatorname{\rightarrow}Aitalic_B italic_C → italic_A for g1subscript𝑔1g_{1}italic_g start_POSTSUBSCRIPT 1 end_POSTSUBSCRI... | A |
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optim... |
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trails and averaged them. We have evaluated Dropout-DQN algorithm on CARTPOLE problem from the Class... |
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments, this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning namely the supervised learning and the unsupervised learning. Rein... |
The sources of DQN variance are Approximation Gradient Error(AGE)[23] and Target Approximation Error(TAE)[24]. In Approximation Gradient Error, the error in gradient direction estimation of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes b... | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optim... | A |
In medical image segmentation works, researchers have converged toward using classical cross-entropy loss functions along with a second distance or overlap based functions. Incorporating domain/prior knowledge (such as coding the location of different organs explicitly in a deep model) is more sensible in the medical d... | Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr... |
Going beyond pixel intensity-based scene understanding by incorporating prior knowledge, which have been an active area of research for the past several decades (Nosrati and Hamarneh, 2016; Xie et al., 2020). Encoding prior knowledge in medical image analysis models is generally more possible as compared to natural im... |
Exploring reinforcement learning approaches similar to Song et al. (2018) and Wang et al. (2018c) for semantic (medical) image segmentation to mimic the way humans delineate objects of interest. Deep CNNs are successful in extracting features of different classes of objects, but they lose the local spatial information... |
For image segmentation, sequenced models can be used to segment temporal data such as videos. These models have also been applied to 3D medical datasets, however the advantage of processing volumetric data using 3D convolutions versus the processing the volume slice by slice using 2D sequenced models. Ideally, seeing ... | B |
The nodes with the K𝐾Kitalic_K highest scores are retained, while the remaining ones are dropped.
Since the top-K𝐾Kitalic_K selection is not differentiable, the scores are also used as a gating for the node features, allowing gradients to flow through the projection vector during backpropagation. | In particular, experimental results showed that NDP is computationally cheaper (in terms of both time and memory) than feature-based methods, while it achieves competitive performance on all the downstream tasks taken into account.
An important finding in our results indicates that topological methods are the only viab... | We recall that when using NDP a stride of 4 is obtained by applying two decimation matrices in cascade, 𝐒(1)𝐒(0)superscript𝐒1superscript𝐒0{\mathbf{S}}^{(1)}{\mathbf{S}}^{(0)}bold_S start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT bold_S start_POSTSUPERSCRIPT ( 0 ) end_POSTSUPERSCRIPT and 𝐒(3)𝐒(2)superscript𝐒3su... | We consider two tasks on graph-structured data: graph classification and graph signal classification.
The code used in all experiments is based on the Spektral library [45], and the code to replicate all experiments of this paper is publicly available at GitHub.222github.com/danielegrattarola/decimation-pooling | In particular, experimental results showed that NDP is computationally cheaper (in terms of both time and memory) than feature-based methods, while it achieves competitive performance on all the downstream tasks taken into account.
An important finding in our results indicates that topological methods are the only viab... | C |
NRFI with and without the original data is shown for different network architectures. The smallest architecture has 2222 neurons in both hidden layers and the largest 128128128128. For NRFI (gen-ori), we can see that a network with 16161616 neurons in both hidden layers (NN-16-16) is already sufficient to learn the dec... | Current state-of-the-art methods directly map random forests into neural networks. The number of parameters of the resulting network is evaluated on all datasets with different numbers of training examples. The overall performance is shown in the last column.
Due to the stochastic process when training the random fores... | NRFI introduces imitation instead of direct mapping. In the following, a network architecture with 32323232 neurons in both hidden layers is selected.
The previous analysis has shown that this architecture is capable of imitating the random forests (see Figure 4 for details) across all datasets and different numbers of... | Here, we additionally include decision trees, support vector machines, random forests, and neural networks in the comparison. The evaluation is performed on all nine datasets, and results for different numbers of training examples are shown (increasing from left to right). The overall performance of each method is summ... | First, we analyze the performance of state-of-the-art methods for mapping random forests into neural networks and neural random forest imitation. The results are shown in Figure 4 for different numbers of training examples per class.
For each method, the average number of parameters of the generated networks across all... | C |
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ... | step with α→∞→𝛼\alpha\rightarrow\inftyitalic_α → ∞ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy π∗superscript𝜋\pi^{*}italic_π start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT within K=H𝐾𝐻K=Hitalic_K = italic_H episodes and hence equivalently induces... |
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed as OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po... | The policy improvement step defined in (3.2) corresponds to one iteration of NPG (Kakade, 2002), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017). In particular, PPO solves the same KL-regularized policy optimization subproblem as in (3.2) at each iteration, while TRPO solves an equivalent KL-constrained s... | To answer this question, we propose the first policy optimization algorithm that incorporates exploration in a principled manner. In detail, we develop an Optimistic variant of the PPO algorithm, namely OPPO. Our algorithm is also closely related to NPG and TRPO. At each update, OPPO solves a Kullback-Leibler (KL)-regu... | D |
This paper is dedicated to giving an extensive overview of the current directions of research of these approaches, all of which are concerned with reducing the model size and/or improving inference efficiency while at the same time maintaining accuracy levels close to state-of-the-art models.
We have identified three m... | In this section, we provide a comprehensive overview of methods that enhance the efficiency of DNNs regarding memory footprint, computation time, and energy requirements.
We have identified three different major approaches that aim to reduce the computational complexity of DNNs, i.e., (i) weight and activation quantiza... | Quantization in DNNs is concerned with reducing the number of bits used for the representation of the weights and the activations.
The reduction in memory requirements are obvious: Using fewer bits for the weights results in a lower memory overhead for storing the corresponding model, and using fewer bits for the activ... | Quantization approaches reduce the number of bits used to store the weights and the activations of DNNs.
While quantization approaches obviously reduce the memory footprint of a DNN, the selected weight representation potentially also facilitates faster inference using cheaper arithmetic operations. | Lin et al. (2016) consider fixed-point quantization of pre-trained full-precision DNNs.
They formulate a convex optimization problem to minimize the total number of bits required to store the weights and the activations under the constraint that the total output signal-to-quantization noise ratio is larger than a certa... | C |
Despite its widespread use in applications, little is known in terms of relationships between Vietoris-Rips barcodes and other metric invariants. For instance, whereas it is obvious that the right endpoint of any interval I𝐼Iitalic_I in barc∗VR(X)subscriptsuperscriptbarcVR∗𝑋\mathrm{barc}^{\mathrm{VR}}_{\ast}(X)roma... |
In particular, one can apply the homology functor to the Vietoris-Rips filtration of a metric space X𝑋Xitalic_X. This induces a persistence module (with T=ℝ>0𝑇subscriptℝabsent0T=\mathbb{R}_{>0}italic_T = blackboard_R start_POSTSUBSCRIPT > 0 end_POSTSUBSCRIPT) where the morphisms are those induced by inclusions. As a... | One main contribution of this paper is establishing a precise relationship (i.e. a filtered homotopy equivalence) between the Vietoris-Rips simplicial filtration of a metric space and a more geometric (or extrinsic) way of assigning a persistence module to a metric space, which consists of first isometrically embedding... | One of the insights leading to the notion of persistent homology associated to metric spaces was considering neighborhoods of a metric space in a nice (for example Euclidean) embedding [71]. In this section we formalize this idea in a categorical way.
| The persistent homology of the Vietoris-Rips filtration of a metric space provides a functorial way111Where for metric spaces X𝑋Xitalic_X and Y𝑌Yitalic_Y morphisms are given by 1111-Lipschitz maps ϕ:X→Y:italic-ϕ→𝑋𝑌\phi:X\rightarrow Yitalic_ϕ : italic_X → italic_Y, and for persistence modules V∗subscript𝑉V_{*}itali... | B |
C1: Remaining Cost
Looking at the main view (Figure 7(c), \raisebox{-.9pt} {1}⃝), we detect an area on the top of cluster C1 with slightly increased size for a few points (in comparison to the other points in the same cluster), which means there are high values of remaining cost in this small area. | C1: Remaining Cost
Looking at the main view (Figure 7(c), \raisebox{-.9pt} {1}⃝), we detect an area on the top of cluster C1 with slightly increased size for a few points (in comparison to the other points in the same cluster), which means there are high values of remaining cost in this small area. | Overall Accuracy
We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q... | The black bars are always fixed, showing the average preservation for all points of the projection. For example, in Figure 4(c), the relatively tall black bars starting from the point k=20𝑘20k=20italic_k = 20 mean that, on average, neighborhoods of 20 points or more are well preserved. The same rationale applies to th... | This is usually a sign of a badly-optimized area that should not be trusted. To confirm that, we look at the KLD distribution (Figure 7(d)): the vast majority of points are located between 0.10.10.10.1 to 0.60.60.60.6 on the x𝑥xitalic_x-axis. This means that those were very well optimized (notice that the y𝑦yitalic_y... | D |
The complete list of reviewed algorithms in this category is provided in Tables 9 and 10 (physics-based algorithms) and Table 11 (chemistry-based methods). In this category we can find some well-known algorithms reported in the last century such as Simulated Annealing [79], or one of the most important algorithms in ph... | The complete list of reviewed algorithms in this category is provided in Tables 9 and 10 (physics-based algorithms) and Table 11 (chemistry-based methods). In this category we can find some well-known algorithms reported in the last century such as Simulated Annealing [79], or one of the most important algorithms in ph... |
Algorithms falling in this category are inspired by human social concepts, such as decision-making and ideas related to the expansion/competition of ideologies inside the society as ideology (Ideology Algorithm, IA, [466]), or political concepts such as the Imperialist Colony Algorithm (ICA, [467]). This category also... |
In this same line of reasoning, the largest subcategory of the second taxonomy (Differential Vector Movements guided by representative solutions) not only contains more than half of the reviewed algorithms (almost 60%), but it also comprises algorithms from all the different categories in the first taxonomy: Social Hu... | Tables 18, 19, 20, 21, 22, 23 and 24 show the different algorithms in this subcategory. An exemplary algorithm of this category that has been a major meta-heuristic solver in the history of the field is PSO [80]. In this solver, each solution or particle is guided by the global current best solution and the best soluti... | B |
Network embedding is a fundamental task for graph type data such as recommendation systems, social networks, etc.
The goal is to map nodes of a given graph into latent features (namely embedding) such that the learned embedding can be utilized on node classification, node clustering, and link prediction. | (1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspires us to devise a novel architecture for dec... | As well as the well-known k𝑘kitalic_k-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method.
Graph-based clustering methods can capture manifold information so that they are available for the non-Euclidean type data, which is not provided by k𝑘kitalic_k-means. Therefore,... | Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node while the latter learns to distinguish whether an edge exists between two nodes directly.
In recent y... |
In recent years, GCNs have been studied a lot to extend neural networks to graph type data. How to design a graph convolution operator is a key issue and has attracted a mass of attention. Most of them can be classified into 2 categories, spectral methods [24] and spatial methods[25]. | C |
Each IP packet contains an IP Identifier (IPID) field, which allows the recipient to identify fragments of the same original IP packet. The IPID field is 16 bits in IPv4, and for each packet the Operating System (OS) at the sender assigns a new IPID value. There are different IPID assignment algorithms which can be ca... | A range of studies analysed network traces for ingress filtering using IP address characteristics (Moore et al., 2006; Barford et al., 2006; Chen et al., 2008; Czyz et al., 2014; Dainotti et al., 2013), or by inspecting on-path network equipment reaction to unwanted traffic, (Yao et al., 2014). In addition to a limited... | How widespread is the ability to spoof? There are significant research and operational efforts to understand the extent and the scope of (ingress and egress)-filtering enforcement and to characterise the networks which do not filter spoofed packets; we discuss these in Related Work, Section 2. Although the existing stu... | Recent work showed that even TCP traffic gets fragmented under certain conditions (Dai et al., 2021b). Fragmentation has long history of attacks which affect both the UDP and TCP traffic (Kent and Mogul, 1987; Herzberg and Shulman, 2013; Shulman and Waidner, 2014).
| Source IP address spoofing allows attackers to generate and send packets with a false source IP address impersonating other Internet hosts, e.g., to avoid detection and filtering of attack sources, to reflect traffic during Distributed Denial of Service (DDoS) attacks, to launch DNS cache poisoning, for spoofed managem... | C |
While context did introduce more parameters to the model (7,57575757{,}5757 , 575 parameters without context versus 14,3151431514{,}31514 , 315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer ... | The estimation of context by learned temporal patterns should be most effective when the environment results in recurring or cyclical patterns, such as in cyclical variations of temperature and humidity and regular patterns of human behavior generating interferents. In such cases, the recurrent pathway can identify use... | This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ... |
One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regio... |
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design... | A |
For the second change, we need to take another look at how we place the separators tisubscript𝑡𝑖t_{i}italic_t start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT.
We previously placed these separators in every second nonempty drum σi:=[iδ,(i+1)δ]×Balld−1(δ/2)assignsubscript𝜎𝑖𝑖𝛿𝑖1𝛿superscriptBall𝑑1𝛿2\sigma_{i}:=... | We generalize the case of integer x𝑥xitalic_x-coordinates to the case where the drum [x,x+1]×Balld−1(δ/2)𝑥𝑥1superscriptBall𝑑1𝛿2[x,x+1]\times\mathrm{Ball}^{d-1}(\delta/2)[ italic_x , italic_x + 1 ] × roman_Ball start_POSTSUPERSCRIPT italic_d - 1 end_POSTSUPERSCRIPT ( italic_δ / 2 ) contains O(1)𝑂1O(1)italic_O ( ... | Finally, we will show that the requirements for Lemma 5.7 hold, where we take 𝒜𝒜\mathcal{A}caligraphic_A to be the algorithm described above.
The only nontrivial requirement is that T𝒜(Pλ)⩽T𝒜(P)subscript𝑇𝒜subscript𝑃𝜆subscript𝑇𝒜𝑃T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)italic_T start_POSTSUBS... | It would be interesting to see whether a direct proof can be given for this fundamental result.
We note that the proof of Theorem 2.1 can easily be adapted to point sets of which the x𝑥xitalic_x-coordinates of the points need not be integer, as long as the difference between x𝑥xitalic_x-coordinates of any two consecu... | However, in order for our algorithm to meet the requirements of Lemma 5.7, we would like to avoid having a point enter a drum after the x𝑥xitalic_x-coordinates are multiplied by some factor λ>1𝜆1\lambda>1italic_λ > 1.
Furthermore, since the proof of Lemma 4.3 requires every drum to be at least δ𝛿\deltaitalic_δ wide,... | D |
Note that there is a difference between the free product in the category of semigroups and the free product in the category of monoids or groups.
In particular, in the semigroup free product (which we are exclusively concerned with in this paper) there is no amalgamation over the identity element of two monoids. Thus, ... | In the theory of automaton semigroups, the definition of automata used is often more restrictive than this, with Q𝑄Qitalic_Q required to be finite,
and δ𝛿\deltaitalic_δ required to be a total function. (Recall that the alphabet A𝐴Aitalic_A is, by definition, finite.) |
In more automata-theoretic settings, a finite automaton would be called a deterministic finite state, letter-to-letter (or synchronous) transducer (see for example [12, 13] for introductions on standard automata theory). However, the term automaton is standard in our algebraic setting (although often only complete aut... | from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups111Note that the c... | The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem... | A |
In our next experiment we studied how random visual cues performed with HINT and SCR. We assign random importance scores to the visual regions: 𝒮rand∼uniform(0,1)similar-tosubscript𝒮𝑟𝑎𝑛𝑑uniform01\mathcal{S}_{rand}\sim\textit{uniform}(0,1)caligraphic_S start_POSTSUBSCRIPT italic_r italic_a italic_n italic_d en... |
To test if the changes in results were statistically significant, we performed Welch’s t-tests Welch (1938) on the predictions of the variants trained on relevant, irrelevant and random cues. We pick Welch’s t-test over the Student’s t-test, because the latter assumes equal variances for predictions from different var... |
Percentage of Overlaps: To further check if the variants trained on irrelevant or random regions gain performance in a manner similar to the models trained on relevant regions, we compute the overlap between their predictions on VQA-CPv2’s test set. The percentage of overlap is defined as: | To perform the tests, we first randomly sample 5000500050005000 subsets of non-overlapping test instances. We then average the accuracy of each subset across 5555 runs, obtaining 5000500050005000 values. Next, we run the t-tests for HINT and SCR separately on the subset accuracies. As shown in Table 2, the p𝑝pitalic_p... | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... | A |
We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page. We trained three random forest models and fine-tuned a transformer based pretrained language model, namely RoBERTa (Liu et al., 2019). The three random forest mod... | We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page. We trained three random forest models and fine-tuned a transformer based pretrained language model, namely RoBERTa (Liu et al., 2019). The three random forest mod... |
For the URL model, the words in the URL path were extracted and the tf-idf of each term was recorded to create the features (Baykan et al., 2009). As privacy policy URLs tend to be shorter and have fewer path segments than typical URLs, length and the number of path segments were added as features. Since the classes w... |
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)... | To train the RoBERTa model on the privacy policy classification task, we used the sequence classification head of the pretrained language model from HuggingFace (Wolf et al., 2019). We used the pretrained RoBERTa tokenizer to tokenize text extracted from the documents. Since Roberta accepts a maximum of 512 tokens as i... | B |
Weighted-average calculates the metrics for each label and finds their average weighted by support (the number of true instances for each label). The data set is a binary classification problem and contains 165 diseased and 138 healthy patients.
Hence, we choose micro-average to weight the importance of the largest cla... | Figure 2(a.1, a.2) presents the initial views of the 11 algorithms (and their models) currently implemented in StackGenVis.
Figure 2(a.1) uses boxplots to represent the performance of the currently unselected algorithms/models based on the metrics combination discussed previously. This compact visual representation pro... | We normalize the importance from 0 to 1 and use a two-hue color encoding from dark red to dark green to highlight the least to the most important features for our current stored stack, see Figure 4(b). The panel in Figure 4(c) uses a table heatmap view where data features are mapped to the y-axis (13 attributes, only 7... |
Figure 2: The exploration process of ML algorithms. View (a.1) summarizes the performance of all available algorithms, and (a.2) the per-class performance based on precision, recall, and f1-score for each algorithm. (b) presents a selection of parameters for KNN in order to boost the per-class performance shown in (c.... | (ii) in the next algorithm exploration phase, we compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models;
(iii) during the data wrangling phase, we manipulate the instances and features with two different views for each of them; (iv) model explo... | A |
We thus have 3333 cases, depending on the value of the tuple
(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))𝑝𝑣delimited-[]010𝑝𝑣delimited-[]323𝑝𝑣delimited-[]313𝑝𝑣delimited-[]003(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))( italic_p ( italic_v , [ 010 ] ) , italic_p ( italic_v , [ 323 ] ) , italic_p ( italic_v... | By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and
(v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these | p(v,[013])=p(v,[313])=p(v,[113])=1𝑝𝑣delimited-[]013𝑝𝑣delimited-[]313𝑝𝑣delimited-[]1131p(v,[013])=p(v,[313])=p(v,[113])=1italic_p ( italic_v , [ 013 ] ) = italic_p ( italic_v , [ 313 ] ) = italic_p ( italic_v , [ 113 ] ) = 1.
Similarly, when f=[112]𝑓delimited-[]112f=[112]italic_f = [ 112 ], | Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of
(v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that | {0¯,1¯,2¯,3¯,[013],[010],[323],[313],[112],[003],[113]}.¯0¯1¯2¯3delimited-[]013delimited-[]010delimited-[]323delimited-[]313delimited-[]112delimited-[]003delimited-[]113\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],%
[112],[003],[113]\}.{ over¯ start_ARG 0 end_ARG , over¯ start_ARG 1 end... | A |
In text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... | In text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparative poorly in task-specific adaptation.
Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the languag... | C |
For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac... |
When considering UAV communications with UPA or ULA, a UAV is typically modeled as a point in space without considering its size and shape. Actually, the size and shape can be utilized to support more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduce... |
Note that there exist some mobile mmWave beam tracking schemes exploiting the position or motion state information (MSI) based on conventional ULA/UPA recently. For example, the beam tracking is achieved by directly predicting the AOD/AOA through the improved Kalman filtering [26], however, the work of [26] only targe... | In this paper, we consider a dynamic mission-driven UAV network with UAV-to-UAV mmWave communications, wherein multiple transmitting UAVs (t-UAVs) simultaneously transmit to a receiving UAV (r-UAV). In such a scenario, we focus on inter-UAV communications in UAV networks, and the UAV-to-ground communications are not in... | Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-base... | C |
There are other logics, incomparable
in expressiveness with 𝖥𝖮Pres2subscriptsuperscript𝖥𝖮2Pres\mathsf{FO}^{2}_{\textup{Pres}}sansserif_FO start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT Pres end_POSTSUBSCRIPT, where periodicity of the spectrum has been proven [17]. The | In addition, to make the main line of argument clearer, we consider only the finite graph case in the body of the paper,
which already implies decidability of the finite satisfiability of 𝖥𝖮Pres2subscriptsuperscript𝖥𝖮2Pres\mathsf{FO}^{2}_{\textup{Pres}}sansserif_FO start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_... | There are other logics, incomparable
in expressiveness with 𝖥𝖮Pres2subscriptsuperscript𝖥𝖮2Pres\mathsf{FO}^{2}_{\textup{Pres}}sansserif_FO start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT Pres end_POSTSUBSCRIPT, where periodicity of the spectrum has been proven [17]. The | The paper [4] shows decidability for a logic with incomparable expressiveness: the quantification allows a more powerful
quantitative comparison, but must be guarded – restricting the counts only of sets of elements that are adjacent to a given element. | Related one-variable fragments in which we have only a
unary relational vocabulary and the main quantification is ∃Sxϕ(x)superscript𝑆𝑥italic-ϕ𝑥\exists^{S}x~{}\phi(x)∃ start_POSTSUPERSCRIPT italic_S end_POSTSUPERSCRIPT italic_x italic_ϕ ( italic_x ) are known to be decidable (see, e.g. [2]), and their decidability ... | C |
We first introduce the assumptions for our analysis. In §4.1, we establish the global optimality and convergence of the PDE solution ρtsubscript𝜌𝑡\rho_{t}italic_ρ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT in (3.4). In §4.2, we further invoke Proposition 3.1 to establish the global optimality and convergence of ... | Although Assumption 6.1 is strong, we are not aware of any weaker regularity condition in the literature, even in the linear setting (Melo et al., 2008; Zou et al., 2019; Chen et al., 2019b) and the NTK regime (Cai et al., 2019). Let the initial distribution ν0subscript𝜈0\nu_{0}italic_ν start_POSTSUBSCRIPT 0 end_POSTS... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | Assumption 4.1 can be ensured by normalizing all state-action pairs. Such an assumption is commonly used in the mean-field analysis of neural networks (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Araújo et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). We remark that our analysis straightforwardly generalize... | Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che... | C |
We used a beam size of 4444 for decoding, and evaluated tokenized case-sensitive BLEU with the averaged model of the last 5555 checkpoints for the Transformer Base setting and 20202020 checkpoints for the Transformer Big setting saved at intervals of 1,50015001,5001 , 500 training steps. We also conducted significance ... | Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the ne... |
In our approach (“with depth-wise LSTM”), we used the 2-layer neural network for the computation of the LSTM hidden state (Equation 6) and shared LSTM parameters across stacked encoder layers and different shared parameters across decoder layers for computing the LSTM gates (Equations 2, 3, 4). Details are provided in... |
Table 5 shows that: 1) Sharing parameters for the computation (Equation 6) of the depth-wise LSTM hidden state significantly hampers performance, which is consistent with our conjecture. 2) Sharing parameters for the computation of gates (Equations 2, 3, 4) leads to slightly higher BLEU with fewer parameters introduce... | As the number of Transformer layers is pre-specified, the parameters of the depth-wise LSTM can either be shared across layers or be independent. Table 3 documents the importance of the capacity of the module for the hidden state computation, and sharing the module is likely to hurt its capacity. We additionally study ... | B |
a compact open of Xisubscript𝑋𝑖X_{i}italic_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT because fi,ijsubscript𝑓𝑖subscript𝑖𝑗f_{i,i_{j}}italic_f start_POSTSUBSCRIPT italic_i , italic_i start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_POSTSUBSCRIPT is
spectral. Notice that fi,ij∘fi=fijsubscript𝑓𝑖subscript𝑖�... | open sets of the form fij−1(Kj)superscriptsubscript𝑓subscript𝑖𝑗1subscript𝐾𝑗f_{i_{j}}^{-1}(K_{j})italic_f start_POSTSUBSCRIPT italic_i start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_POSTSUBSCRIPT start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ( italic_K start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) for 1≤j≤n... | I𝐼Iitalic_I is directed, there exists k∈I𝑘𝐼k\in Iitalic_k ∈ italic_I such that k≥i,j𝑘𝑖𝑗k\geq i,jitalic_k ≥ italic_i , italic_j.
Because fj∘fk,j=fksubscript𝑓𝑗subscript𝑓𝑘𝑗subscript𝑓𝑘f_{j}\circ f_{k,j}=f_{k}italic_f start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ∘ italic_f start_POSTSUBSCRIPT italic_k , itali... | there is an index i𝑖iitalic_i above all ijsubscript𝑖𝑗i_{j}italic_i start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT for 1≤j≤n1𝑗𝑛1\leq j\leq n1 ≤ italic_j ≤ italic_n. Let
Kj′≜fi,ij−1(Kj)≜superscriptsubscript𝐾𝑗′superscriptsubscript𝑓𝑖subscript𝑖𝑗1subscript𝐾𝑗K_{j}^{\prime}\triangleq f_{i,i_{j}}^{-1}(K_{j})italic... | fi−1(Kj′)=fij−1(Kj)superscriptsubscript𝑓𝑖1superscriptsubscript𝐾𝑗′superscriptsubscript𝑓subscript𝑖𝑗1subscript𝐾𝑗f_{i}^{-1}(K_{j}^{\prime})=f_{i_{j}}^{-1}(K_{j})italic_f start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ( italic_K start_POSTSUBSCRIPT italic_j end_POSTSU... | D |
Second, the ordinal distortion is homogeneous as all its elements share a similar magnitude and description. Therefore, the imbalanced optimization problem no longer exists during the training process, and we do not need to focus on the cumbersome factor-balancing task anymore. Compared to the distortion parameters wi... | (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed o... | Third, the ordinal distortion can be estimated using only a part of a distorted image. Unlike the semantic information, the distortion information is redundant in images, showing the central symmetry and mirror symmetry to the principal point. Consequently, the efficiency of rectification algorithms can be significantl... | In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl... |
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify... | B |
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b... | We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset 777https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/.
We set aside 20% of the samples as the test set and divide the rema... |
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use warm-up and polynomial learning rate strategy. | First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being B/8𝐵8B/8italic_B / 8. If B/8≥128𝐵8128B/8\geq 128italic_B / 8 ≥ 128, we will use the gradient accumulation [28]
with the batch size being 128. ... | The momentum coefficient is set as 0.9 and the weight decay is set as 0.001. The initial learning rate is selected from {0.001,0.01,0.1}0.0010.010.1\{0.001,0.01,0.1\}{ 0.001 , 0.01 , 0.1 } according to the performance on the validation set. We do not adopt any learning rate decay or warm-up strategies.
The model is tra... | A |
5555-approximation for homogeneous 2S-MuSup-Poly, with |𝒮|≤2m𝒮superscript2𝑚|\mathcal{S}|\leq 2^{m}| caligraphic_S | ≤ 2 start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT and runtime poly(n,m,Λ)poly𝑛𝑚Λ\operatorname*{poly}(n,m,\Lambda)roman_poly ( italic_n , italic_m , roman_Λ ).
| The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any ρ𝜌\rhoitalic_ρ-approximation algorithm for the robust outlier problem into a (ρ+2)𝜌2(\rho+2)( italic_ρ + 2 )-approximation algorithm for the corresponding two-stage sto... |
We follow up with 3333-approximations for the homogeneous robust outlier MatSup and MuSup problems, which are slight variations on algorithms of [6] (specifically, our approach in Section 4.1 is a variation on their solve-or-cut methods). In Section 5, we describe a 9-approximation algorithm for an inhomogeneous MatSu... | We now describe a generic method of transforming a given 𝒫𝒫\mathcal{P}caligraphic_P-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it obtain our 11-a... | If we have a ρ𝜌\rhoitalic_ρ-approximation algorithm for AlgRW for given 𝒞,ℱ,ℳ,R𝒞ℱℳ𝑅\mathcal{C},\mathcal{F},\mathcal{M},Rcaligraphic_C , caligraphic_F , caligraphic_M , italic_R, then we can get an efficiently-generalizable (ρ+2)𝜌2(\rho+2)( italic_ρ + 2 )-approximation algorithm for the corresponding problem 𝒫𝒫\m... | A |
In addition to uncertainties in information exchange, different assumptions on the cost functions have been discussed.
In the most of existing works on the distributed convex optimization, it is assumed that the subgradients are bounded if the local cost | Besides, the network graphs may change randomly with spatial and temporal dependency (i.e. Both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent.) rather than i.i.d. graph sequences as in [12]-[15],
and additive and... | However, a variety of random factors may co-exist in practical environment.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d... | Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d.
In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments... |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... | A |
Although the generalization for k𝑘kitalic_k-anonymity provides enough protection for identities, it is vulnerable to the attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by mat... | However, despite protecting against both identity disclosure and attribute disclosure, the information loss of generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, causing that the equivalence groups only preserv... |
For instance, suppose that we add another QI attribute of gender as shown in Figure 4, the mutual cover strategy first divides the records into groups in which the records in the same group cover for each other by perturbing their QI values. Then, the mutual cover strategy calculates a random output table on each QI a... | Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ... | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i... | A |
Table 2: PointRend’s step-by-step performance on our own validation set (splitted from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
| As shown in Figure 2, we compare HTC, SOLOv2 and PointRend by visualizing their predictions. It can be seen that PointRend generates much finer and smoother segmentation boundaries than HTC and SOLOv2, it also handles overlapped instances gradely (see top-left corner in Figure 2). Meanwhile, PointRend succeeds in disti... | PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | B |
I(f)<1,andH(|f^|2)>nn+1logn.formulae-sequence𝐼𝑓1and𝐻superscript^𝑓2𝑛𝑛1𝑛I(f)<1,\ \ {\mbox{and}}\ \ H(|\hat{f}|^{2})>\frac{n}{n+1}\log n.italic_I ( italic_f ) < 1 , and italic_H ( | over^ start_ARG italic_f end_ARG | start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) > divide start_ARG italic_n end_ARG start_ARG ita... |
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| (0log0:=0assign0000\log 0:=00 roman_log 0 := 0). The base of the log\logroman_log does not really matter here. For concreteness we take the log\logroman_log to base 2222. Note that if f𝑓fitalic_f has L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT norm 1111 then the sequence {|f^(A)|2}A⊆[n]subsc... |
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma s... | A |
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. They are much less compared with MASTER, OPT-WLSVI, LSVI-UCB, Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th... |
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202... | We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ... | We develop the LSVI-UCB-Restart algorithm and analyze the dynamic regret bound for both cases that local variations are known or unknown, assuming the total variations are known. We define local variations (Eq. (2)) as the change in the environment between two consecutive epochs instead of the total changes over the en... | In this section, we perform empirical experiments on synthetic datasets to illustrate the effectiveness of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart. We compare the cumulative rewards of the proposed algorithms with five baseline algorithms: Epsilon-Greedy (Watkins, 1989), Random-Exploration, LSVI-UCB (Jin et al., 2020... | A |
A series of 1-5 Likert scale questions (1: strongly disagree, 5: strongly agree) were presented to the respondents (in SeenFake-57) to further gain insights into their views on fake news. Respondents feel that the issue of fake news will remain for a long time (M=4.33,SD=0.831formulae-sequence𝑀4.33𝑆𝐷0.831M=4.33,SD=... |
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,... | Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | B |
These methods [1, 15, 16, 17, 18, 45, 46, 47] integrate image and attribute information to generate embeddings for unseen entities in KG embedding.
Their relational encoding modules, however, remain transductive and thus are not the primary focus of our study. |
To evaluate the performance of different methods in an open-world scenario, we restructure the entity alignment and entity prediction benchmarks. This involves sampling 20% of examples from each original testing set to create new entities. Subsequently, we transfer all triplets associated with these new entities to an... | Few-shot entity prediction methods, including MetaR [49] and its successor GEN [50], adopt a meta-learning approach. Unlike inductive relation prediction, these methods address unseen relations. The task setting differs from conventional entity prediction, where a support triplet set specific to a relation r𝑟ritalic_r... | In real-world KGs, the number of entities is not constant and new entities emerge frequently. To address this practical setting, we proposed open-world entity alignment, where a proportion of entities in the testing set are unseen to the model. We remove the relevant triplets from 𝒯1subscript𝒯1\mathcal{T}_{1}caligrap... | Out-of-KG entity prediction methods, such as MEAN [19], VN Network [20], and LAN [21], leverage logic rules to infer the missing relationships but do not generate unconditioned entity embeddings for other tasks. These methods share a similar task setting with ours, where all relations are known during training. The new... | B |
DKL[q(z|s,a,s′)∥p(z|s,a)]≤DKL[Q(z|s,a,s′)∥N(0,1)].D_{\rm KL}[q(z|s,a,s^{\prime})\|p(z|s,a)]\leq D_{\rm KL}[Q(z|s,a,s^{\prime})\|%
N(0,1)].italic_D start_POSTSUBSCRIPT roman_KL end_POSTSUBSCRIPT [ italic_q ( italic_z | italic_s , italic_a , italic_s start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ) ∥ italic_p ( italic_z | i... |
We implement a CVAE-based exploration algorithm by modifying the prior of VDM to a standard Gaussian444The code is released at https://github.com/Baichenjia/CAVE_NoisyMinist (for Noisy-Mnist) and https://github.com/Baichenjia/CVAE_exploration (for other tasks) for reproducibility and further improvement.. For Noisy-Mn... | The reason is that the prior p(z|s,a)𝑝conditional𝑧𝑠𝑎p(z|s,a)italic_p ( italic_z | italic_s , italic_a ) of VDM encodes the information of underlying MDP in training, while the prior of CVAE is fixed and does not contain any information. Thus the KL-divergence between the posterior and VDM-prior is easier to minimi... |
(ii) The sampling distributions of 𝔼z[logp(s′|s,a,z)]subscript𝔼𝑧delimited-[]𝑝conditionalsuperscript𝑠′𝑠𝑎𝑧\mathbb{E}_{z}[\log p(s^{\prime}|s,a,z)]blackboard_E start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT [ roman_log italic_p ( italic_s start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT | italic_s , italic_a , ital... |
To illustrate the difference between CVAE and VDM, we simplify the model details and compare the architecture of CVAE and VDM in Fig. 14. We find the encoder in CVAE is similar to the posterior network in VDM, and the decoder in CVAE is similar to the generative network in VDM. The CVAE architecture does not include a... | B |
The number of coefficients |Am,n,1|=(m+nn)∈𝒪(mn)subscript𝐴𝑚𝑛1binomial𝑚𝑛𝑛𝒪superscript𝑚𝑛|A_{m,n,1}|=\binom{m+n}{n}\in\mathcal{O}(m^{n})| italic_A start_POSTSUBSCRIPT italic_m , italic_n , 1 end_POSTSUBSCRIPT | = ( FRACOP start_ARG italic_m + italic_n end_ARG start_ARG italic_n end_ARG ) ∈ caligraphic_O ( itali... | Whatsoever, any answer to Questions 2 that is to be of practical relevance
must provide a recipe to construct interpolation nodes PAsubscript𝑃𝐴P_{A}italic_P start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT that allow efficient approximation while resisting the curse of dimensionality in terms of Question 1. | convergence rates for the Runge function, as a prominet example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality.... | Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to
scale sub-exponential with space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are abl... | Thus, combining sub-exponential node numbers with exponential approximation rates, interpolation with respect to l2subscript𝑙2l_{2}italic_l start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-degree polynomials might yield a way of lifting the curse of dimensionality and answering Question 1.
| D |
},{\nu})].| IPM ( italic_μ , italic_ν ) - IPM ( over^ start_ARG italic_μ end_ARG start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT , over^ start_ARG italic_ν end_ARG start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ) | < italic_ϵ + 2 [ fraktur_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ( caligraphic_F , italic_μ ) + f... | The finite-sample convergence of general IPMs between two empirical distributions was established.
Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance has a minor dependence on the dimension of target distributions, which alleviates the curse of dimensionality. | In this section, we first discuss the finite-sample guarantee for general IPMs, then a two-sample test can be designed based on this statistical property. Finally, we design a two-sample test based on the projected Wasserstein distance.
Omitted proofs can be found in Appendix A. | A two-sample test is designed based on this theoretical result, and numerical experiments show that this test outperforms the existing benchmark.
In future work, we will study tighter performance guarantees for the projected Wasserstein distance and develop the optimal choice of k𝑘kitalic_k to improve the performance ... | The proof of Proposition 1 essentially follows the one-sample generalization bound mentioned in [41, Theorem 3.1].
However, by following the similar proof procedure discussed in [20], we can improve this two-sample finite-sample convergence result when extra assumptions hold, but existing works about IPMs haven’t inves... | D |
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise... | Specifically, we apply a DGM to learn the nuisance variables Z𝑍Zitalic_Z, conditioned on the output image of the first part, and use Z𝑍Zitalic_Z in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the details information captured in Z𝑍Z... | Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and corre... |
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise... | While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i... | A |
Optical logic aggregates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the number shown in Fig. 5 can be applied to the ... |
This paper presents the NOT gate implementation of structural computers and the Reverse-Logic pair and double pair-based logic operation techniques of digital signals that can solve the problem of heating and aging of existing semiconductor computers. | If a pair of lines of the same color is connected, 1, if broken, the sequence pair of states of the red line (α𝛼\alphaitalic_α) and blue line (β𝛽\betaitalic_β) determines the transmitted digital signal. Thus, signal cables require one transistor for switching action at the end. When introducing the concept of an inve... | The structure-based computer mentioned in this paper are based on Boolean Algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic 1 and 0, and mathematically describes digital electrical si... | Optical logic aggregates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the number shown in Fig. 5 can be applied to the ... | A |
where x∈𝔽n𝑥superscript𝔽𝑛x\in\mathbb{F}^{n}italic_x ∈ blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT is the state and A∈𝔽n×n𝐴superscript𝔽𝑛𝑛A\in\mathbb{F}^{n\times n}italic_A ∈ blackboard_F start_POSTSUPERSCRIPT italic_n × italic_n end_POSTSUPERSCRIPT is the state transition map represented as ... | When the dynamics is non-linear, the computation of the cycle set is a computationally hard problem. Apart from brute force computations, the work [26] gives an algorithmic procedure to estimate the cycle set of a non-linear dynamical system over finite fields by using the Koopman operator and constructing a reduced Ko... | The first statement of Theorem 3 does not imply an equivalence between the cycle structure of the permutation polynomial and the cycle set of the linear dynamics (19), and the former is a subset of the latter. This is because the linear dynamics evolve over a larger set 𝔽Nsuperscript𝔽𝑁\mathbb{F}^{N}blackboard_F star... | Initially, the Koopman operator framework was used extensively for dynamics over reals (or complex) state space, and the function space is infinite-dimensional, which leads to resorting to finite-dimensional numerical approximations of the Koopman operator [28, 29] for practical computations. In our setting of dynamica... | Irrespective of whether the dynamics (2) being linear or not, the Koopman operator 𝐊𝐊\mathbf{K}bold_K is a linear operator over the function space ℱ(𝔽n)ℱsuperscript𝔽𝑛\mathcal{F}(\mathbb{F}^{n})caligraphic_F ( blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ). This linearity of the Koopman operator... | A |
Figure 1: Boxplots of test accuracy for the different meta-learners, with 300 views and 25 features per view. The results are shown for all combinations of the correlation between features within the same view (ρwsubscript𝜌𝑤\rho_{w}italic_ρ start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT), the correlation between fea... |
The false discovery rate in view selection for each of the meta-learners can be observed in Figure 4. Note that the FDR is particularly sensitive to variability since its denominator is the number of selected views, which itself is a variable quantity. In particular, when the number of selected views is small, the add... | The false positive rate in view selection for each of the meta-learners can be observed in Figure 3. Again ignoring the interpolating predictor for now, the ranking of the different meta-learners is similar to their ranking by TPR. Nonnegative ridge regression has the highest FPR, followed by the elastic net, lasso, ad... | In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of vi... |
The true positive rate in view selection for each of the meta-learners can be observed in Figure 2. Ignoring the interpolating predictor for now, nonnegative ridge regression has the highest TPR, which is unsurprising seeing as it performs feature selection only through its nonnegativity constraints. Nonnegative ridge... | D |
The DepAD framework offers a significant advantage in providing meaningful explanations for identified anomalies, which plays a crucial role in understanding both the reported anomalies and the underlying data. In this section, we outline the process of interpreting an anomaly detected by a DepAD algorithm and demonstr... |
The interpretations of the top-3 anomalies identified by FBED-CART-PS are presented in Table 12. For scorpion, the three variables backbone, eggs and milk contribute most to the anomalousness. For variable backbone, 73% of the animals in the dataset follow the normal dependency; that is, if an animal has a tail, it wo... | A common way of examining dependency deviations in the dependency-based approach is to check the difference between the observed value and the expected value of an object, where the expected value is estimated based on the underlying dependency between variables [7, 4, 5]. Thus, dependency-based approach naturally lead... |
To interpret an anomaly detected by DepAD, we begin by identifying variables with substantial dependency deviations. This is achieved by comparing the observed values of variables with their corresponding expected values. A larger deviation indicates a higher contribution of that variable to the anomaly. Furthermore, ... | For example, as shown in Figure 12, in the dataset used in Example 1, a person a𝑎aitalic_a has been identified as an anomaly by a DepAD method. From the dependency learned by the DepAD method, given a height of 160cm, the expected weight is 64kg. The normal pattern here is height=160cm →→\rightarrow→ weight=64kg. Pers... | C |
Comparison with Oh & Iyengar [2021] While the authors in Oh & Iyengar [2021] provide sharper bounds by a factor of O~(d)~O𝑑\tilde{\mathrm{O}}(\sqrt{d})over~ start_ARG roman_O end_ARG ( square-root start_ARG italic_d end_ARG ), they still retain the κ𝜅\kappaitalic_κ multiplicative factor in their regret bounds. Thei... |
A confidence set similar to Et(δ)subscript𝐸𝑡𝛿E_{t}(\delta)italic_E start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ) in Eq (7) was recently proposed in Abeille et al. [2021] for the simpler logisitic bandit setting. Here, we extend its construction to the MNL setting. The set Et(δ)subscript𝐸𝑡𝛿E_{t}(\... | In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of... | Comparison with Filippi et al. [2010] Our setting is different from the standard generalized linear bandit of Filippi et al. [2010]. In our setting, the reward due to an action (assortment) can be dependent on up to K𝐾Kitalic_K variables (θ∗⋅xt,i,i∈𝒬t⋅subscript𝜃subscript𝑥𝑡𝑖𝑖subscript𝒬𝑡\theta_{*}\cdot x_{t,i},\... | Comparison with Abeille et al. [2021] Abeille et al. [2021] recently proposed the idea of convex relaxation of the confidence set for the more straightforward logistic bandit setting. Our work can be viewed as an extension of their construction to the MNL setting.
| D |
2) We propose a novel temporal action localization framework VSGN, which features two key components: video self-stitching (VSS); cross-scale graph pyramid network (xGPN). For effective feature aggregation, we design a cross-scale graph network for each level in xGPN with a hybrid module of a temporal branch and a gra... | 3) VSGN shows obvious improvement on short actions over other concurrent methods, and also achieves new state-of-the-art overall performance. On THUMOS-14, VSGN reaches 52.4% mAP@0.5, compared to previous best score 40.4% under the same features. On ActivityNet-v1.3, VSGN reaches an average mAP of 35.07%, compared to t... | Table 2: Action localization results on validation set of ActivityNet-v1.3, measured by mAPs (%) at different tIoU thresholds and the average mAP. Our VSGN achieves the state-of-the-art average mAP and the highest mAP for short actions. Note that our VSGN, which uses pre-extracted features without further finetuning, s... | We compare the performance of our proposed VSGN to recent representative methods in the literature on the two datasets in Table 1 and Table 2, respectively. On both datasets, VSGN achieves state-of-the-art performance, reaching mAP 52.4% at tIoU 0.5 on THUMOS and average mAP 35.07% on ActivityNet. It significantly outp... |
Besides evaluating all actions in general, we also provide average mAPs of short actions for VSGN as well as other methods that have detection results available. Here, we refer to action instances that are shorter than 30 seconds as short actions. On ActivityNet, there are 54.4% short actions, whereas on THUMOS, there... | A |
G4: Comparison of multi-stage generated hyperparameter sets in various granularities. An addition to G1 is that the positive or negative impact of performance should be measured during the creation of models through the multi-stage crossover and mutation procedure.
VisEvol should thus display both successful and underp... | (iv) control the evolutionary process by setting the number of models that will be used for crossover and mutation in each algorithm (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(b)); and
(v) compare the performances of the best so far identified ensemble against the acti... | Li et al. [LCW∗18] found that once the ML expert has acquired all the results from an execution stage, he/she should analyze them with various perspectives and decide if the previously explored models’ performance match his/her needs. If not, then more stages should be involved in the process until his/her expectations... | G5: Extraction of an ultimate model or a voting ensemble with a side-by-side performance comparison. A comparison between the currently active ensemble against the optimal solution found until that point in time should be established in our tool to assist the extraction of a competitive and effective ensemble (R5).
|
From Figure 4(a), right, we see that only a few KNN, LR, and MLP models were better than the previous stages. Thus, we conclude that there is no further improvement, and it is hard to find better hyperparameter tuples. We skip the addition of models from S2subscript𝑆2S_{2}italic_S start_POSTSUBSCRIPT 2 end_POSTSUBSCR... | C |
In the context of addressing the guidance problem for a large number of agents, considering the spatial distribution of swarm agents and directing it towards a desired steady-state distribution offers a computationally efficient approach. In this regard, both probabilistic and deterministic swarm guidance algorithms ar... | This algorithm treats the spatial distribution of swarm agents, called the density distribution, as a probability distribution and employs the Metropolis-Hastings (M-H) algorithm to synthesize a Markov chain that guides the density distribution toward a desired state.
The probabilistic guidance algorithm led to the dev... | The current literature covers a broad spectrum of methodologies for Markov chain synthesis, incorporating both heuristic approaches and optimization-based techniques [4, 5, 6]. Each method provides specialized algorithms tailored to the synthesis of Markov chains in alignment with specific objectives or constraints.
Ma... | Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transi... | Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen... | A |
The functional mapping is represented as a low-dimensional matrix for suitably chosen basis functions. The classic choice are the eigenfunctions of the LBO, which are invariant under isometries and predestined for this setting. Moreover, for general non-rigid settings learning these basis functions has also been propos... | A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisati... |
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both ... | Similar to the previous section, we want to impose cycle consistency on the pairwise functional maps 𝒞ijsubscript𝒞𝑖𝑗\mathcal{C}_{ij}caligraphic_C start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT.
We do so by defining a shape-to-universe functional map 𝒞isubscript𝒞𝑖\mathcal{C}_{i}caligraphic_C start_POSTS... | However, extracting a point-wise correspondence from a functional map matrix is not trivial [17, 57]. This is mainly because of the low-dimensionality of the functional map, and the fact that not every functional map matrix is a representation of a point-wise correspondence [51].
In [44], the authors simultaneously sol... | D |
Convert the coloring f:ΓC/∼→{0,1}f:\Gamma_{C}/\sim\rightarrow\{0,1\}italic_f : roman_Γ start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT / ∼ → { 0 , 1 } in a directed clique path tree of ΓCsubscriptΓ𝐶\Gamma_{C}roman_Γ start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT. | We presented the first recognition algorithm for both path graphs and directed path graphs. Both graph classes are characterized very similarly in [18], and we extended the simpler characterization of path graphs in [1] to include directed path graphs as well; this result can be of interest itself. Thus, now these two ... |
On the side of directed path graphs, prior to this paper, it was necessary to implement two algorithms to recognize them: a recognition algorithm for path graphs as in [3, 22], and the algorithm in [4] that in linear time is able to determining whether a path graph is also a directed path graph. Our algorithm directly... | On the side of directed path graphs, at the state of art, our algorithm is the only one that does not use the results in [4], in which it is given a linear time algorithm able to establish whether a path graph is a directed path graph too (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ... | Directed path graphs are characterized by Gavril [9], in the same article he also gives the first recognition algorithms that has O(n4)𝑂superscript𝑛4O(n^{4})italic_O ( italic_n start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT ) time complexity. In the above cited article, Monma and Wei [18] give the second characterizati... | A |
In experiments 1(c) and 1(d), we study how the connectivity (i.e., ρ𝜌\rhoitalic_ρ, the off-diagonal entries of P𝑃Pitalic_P) across communities under different settings affects the performances of these methods. Fix (x,n0)=(0.4,100)𝑥subscript𝑛00.4100(x,n_{0})=(0.4,100)( italic_x , italic_n start_POSTSUBSCRIPT 0 end_... |
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. the proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting. |
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both two methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha... |
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as ρ𝜌\rhoitalic_ρ increases they all perform poorer. Under the DCMM model, the mixed Humming ... |
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both two approaches perform better than OCCAM and GeoNMF under the MMSB setting.... | C |
Compared with existing methods, variational transport features a unified algorithmic framework that enjoys the following advantages. First, by considering functionals with a variational form, the algorithm can be applied to a broad class of objective functionals. | See, e.g., Udriste (1994); Ferreira and Oliveira (2002); Absil et al. (2009); Ring and Wirth (2012); Bonnabel (2013); Zhang and Sra (2016); Zhang et al. (2016); Liu et al. (2017); Agarwal et al. (2018); Zhang et al. (2018); Tripuraneni et al. (2018); Boumal et al. (2018); Bécigneul and Ganea (2018); Zhang and Sra (2018... | See, e.g., Welling and Teh (2011); Chen et al. (2014); Ma et al. (2015); Chen et al. (2015); Dubey et al. (2016); Vollmer et al. (2016); Chen et al. (2016); Dalalyan (2017); Chen et al. (2017); Raginsky et al. (2017); Brosse et al. (2018); Xu et al. (2018); Cheng and Bartlett (2018); Chatterji et al. (2018); Wibisono (... | variational inference (Gershman and Blei, 2012; Kingma and Welling, 2019), policy optimization (Sutton et al., 2000; Schulman et al., 2015; Haarnoja et al., 2018), and GAN (Goodfellow et al., 2014; Arjovsky et al., 2017), and has achieved tremendous empirical successes.
However, | Second, the functional optimization problem associated with the variational representation of F𝐹Fitalic_F can be solved by any supervised learning methods such as deep learning (LeCun et al., 2015; Goodfellow et al., 2016; Fan et al., 2019) and kernel methods
(Friedman et al., 2001; Shawe-Taylor et al., 2004), which o... | D |
Phase is a controller timing unit associated with the control of one or more movements, representing the permutation and combination of different traffic flows. At each phase, vehicles in the specific lanes can continue to drive. The 4-phase setting is the most common configuration in reality, but the number of phases ... | Definition 3 (Average Travel Time)
The travel time of a vehicle is the time discrepancy between entering and leaving a particular area. A vehicle from the origin to the destination (OD) is regarded as a travel. Average travel time of all vehicles in a road network is the most frequently used measure to evaluate the per... | Most conventional traffic signal control methods are designed based on fixed-time signal control [21], actuated control [22] or self-organizing traffic signal control [23]. These approaches rely on expert knowledge and often perform unsatisfactorily in complicated real-world situations. To solve this problem, several o... |
Reward. We define the reward for agent i𝑖iitalic_i as the negative of the queue length on incoming lanes. Note that optimizing queue length has been proved to be equivalent to optimizing average travel time in [38] under certain assumptions. Average travel time is a global criteria which cannot be optimized directly ... |
Following existing studies [46, 13, 40, 41, 14], we use the average travel time to evaluate the performance of different methods for traffic signal control. The average travel time indicates the overall traffic situation in an area over a period of time. For a detailed definition of average travel time, see Section 3.... | A |
Then it is a straightforward verification using Lemma 7.1 that
𝐳∗=Q0(I−UU𝖧)𝐲∗subscript𝐳subscript𝑄0𝐼𝑈superscript𝑈𝖧subscript𝐲\mathbf{z}_{*}\,=\,Q_{0}\,(I-U\,U^{{\mbox{\tiny$\mathsf{H}$}}})\,\mathbf{y}_{*}bold_z start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT = italic_Q start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( itali... | m−r𝑚𝑟m-ritalic_m - italic_r additional vectors besides columns of Q~~𝑄\tilde{Q}over~ start_ARG italic_Q end_ARG.
The vectors Q0𝐮1,…,Q0𝐮m−rsubscript𝑄0subscript𝐮1…subscript𝑄0subscript𝐮𝑚𝑟Q_{0}\,\mathbf{u}_{1},\ldots,Q_{0}\,\mathbf{u}_{m-r}italic_Q start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT bold_u start_POSTSUBSC... | set 𝐳∗=Q0(I−UU𝖧)𝐲∗subscript𝐳subscript𝑄0𝐼𝑈superscript𝑈𝖧subscript𝐲\mathbf{z}_{*}\,=\,Q_{0}\,(I-U\,U^{{\mbox{\tiny$\mathsf{H}$}}})\,\mathbf{y}_{*}bold_z start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT = italic_Q start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( italic_I - italic_U italic_U start_POSTSUPERSCRIPT sansserif_H e... | Arank-r𝐳=𝐛subscript𝐴rank-r𝐳𝐛A_{\mbox{\scriptsize rank-$r$}}\,\mathbf{z}\,=\,\mathbf{b}italic_A start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT bold_z = bold_b that is orthogonal to columns of
[Q0U,Q~]subscript𝑄0𝑈~𝑄[Q_{0}\,U,\,\tilde{Q}][ italic_Q start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT italic_U , over~ s... | Then it is a straightforward verification using Lemma 7.1 that
𝐳∗=Q0(I−UU𝖧)𝐲∗subscript𝐳subscript𝑄0𝐼𝑈superscript𝑈𝖧subscript𝐲\mathbf{z}_{*}\,=\,Q_{0}\,(I-U\,U^{{\mbox{\tiny$\mathsf{H}$}}})\,\mathbf{y}_{*}bold_z start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT = italic_Q start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( itali... | C |
Online bin packing has a long history of study. The simplest algorithm is NextFit, which places an item into its single open bin when possible; otherwise, it closes the bin (does not use it anymore) and opens a new bin for the item. FirstFit is another simple heuristic that places an item into the first bin of suffici... |
Online bin packing was recently studied under an extension of the advice complexity model, in which the advice may be untrusted (?). Here, the algorithm’s performance is evaluated only at the extreme cases in which the advice is either error-free or adversarially generated, namely with respect to its consistency and i... | In this setting, the objective is to minimize the expected loss, defined as the difference between the number of bins opened by the algorithm, and the total size of all items normalized by the bin capacity.
Ideally, one aims for a loss that is as small as o(n)𝑜𝑛o(n)italic_o ( italic_n ), where n𝑛nitalic_n is the nu... | These algorithms are variants of the classic Harmonic algorithm (?), which places items of approximately equal sizes, according to a harmonic sequence, in the same bin.
The currently best algorithm is the Advanced Harmonic (AH) algorithm, which has a competitive ratio of 1.57829 (?), whereas the best-known lower bound ... | To obtain the best theoretical performance, we can choose A𝐴Aitalic_A as the algorithm of the best known competitive ratio, that is Advanced Harmonic algorithm (?). However, as discussed in Section 2, such algorithms belong to a class that is tailored to worst-case competitive analysis, and do not tend to perform well... | C |
𝒞𝒜(S,P)=(ϕ(⋅,pi),pi)i=1,…,k.𝒞𝒜𝑆𝑃subscriptitalic-ϕ⋅subscript𝑝𝑖subscript𝑝𝑖𝑖1…𝑘\mathcal{CA}(S,P)=(\phi(\cdot,p_{i}),p_{i})_{i=1,\ldots,k}.caligraphic_C caligraphic_A ( italic_S , italic_P ) = ( italic_ϕ ( ⋅ , italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) , italic_p start_POSTSUBSCRIPT italic_i e... | The main idea of mapping points to the neighborhood is to model a local manifold of data. One intuitive solution to determine neighbors is to use the K-nearest neighbors algorithm. The neighborhood of size k𝑘kitalic_k of p𝑝pitalic_p is defined as the k𝑘kitalic_k closest elements of p𝑝pitalic_p in X𝑋Xitalic_X. Ther... | Practically speaking, our approach transforms the embedding of point cloud obtained from the base model to parametrize the bijective function represented by the MLP network. This function aims to find a mapping between a canonical 2D patch to the 3D patch on the surface of the target mesh. We condition the positioning ... |
The proposed framework overcomes the limitations of previous methods. First, we theoretically solve the problem of stitching partial meshes since every chart is informed about its local neighborhood. Second, our method can easily fill the missing spaces in the final mesh by adding a new mapping for the region of inter... |
The transformation ϕitalic-ϕ\phiitalic_ϕ is modeled as a target network represented as MLP with weights Wϕsubscript𝑊italic-ϕW_{\phi}italic_W start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT produced by the hypernetwork Tϕsubscript𝑇italic-ϕT_{\phi}italic_T start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT. Therefore, we c... | C |
in the sense that for any saddle point (𝐱*,𝐩*,𝐲*,𝐪*,𝐬*,𝐳*)superscript𝐱superscript𝐩superscript𝐲superscript𝐪superscript𝐬superscript𝐳({\bf x}^{*},{\bf p}^{*},{\bf y}^{*},{\bf q}^{*},{\bf s}^{*},{\bf z}^{*})( bold_x start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT , bold_p start_POSTSUPERSCRIPT * end_POSTSUPERSCRIP... | To prove Theorem 3.5 we first show that the iterates of Algorithm 1 naturally correspond to the iterates of a general Mirror-Prox algorithm applied to problem (54). Then we extend the standard analysis of the general Mirror-Prox algorithm to account for unbounded feasible sets.
| The main idea is to use reformulation (54) and apply mirror prox algorithm [45] for its solution. This requires careful analysis in two aspects. First, the Lagrange multipliers 𝐳,𝐬𝐳𝐬{\bf z},{\bf s}bold_z , bold_s are not constrained, while the convergence rate result for the classical Mirror-Prox algorithm [45] is ... | As it was noted above, the standard analysis of Mirror-Prox requires the feasible sets to be compact. Although we run Mirror-Prox algorithm on problem (54) with unconstrained variables 𝐬𝐬{\bf s}bold_s and 𝐳𝐳{\bf z}bold_z, we still can bound these variables according to Theorem 2.4.
|
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ... | B |
The remainder of this section is dedicated to express the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describe an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i... |
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric... |
The study of cycles of graphs has attracted attention for many years. To mention just three well known results consider Veblen’s theorem [2] that characterizes graphs whose edges can be written as a disjoint union of cycles, Maclane’s planarity criterion [3] which states that planar graphs are the only to admit a 2-ba... | In this section we present some experimental results to reinforce
Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g... |
The set of cycles of a graph has a vector space structure over ℤ2subscriptℤ2\mathbb{Z}_{2}blackboard_Z start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, in the case of undirected graphs, and over ℚℚ\mathbb{Q}blackboard_Q, in the case of directed graphs [5]. A basis of such a vector space is denoted cycle basis and its dimensio... | B |
In this respect, the case of convex lattice sets, that is, sets of the form C∩ℤd𝐶superscriptℤ𝑑C\cap\mathbb{Z}^{d}italic_C ∩ blackboard_Z start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT where C𝐶Citalic_C is a convex set in ℝdsuperscriptℝ𝑑\mathbb{R}^{d}blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIP... | Theorem 1.1
depends on p𝑝pitalic_p, q𝑞qitalic_q, K𝐾Kitalic_K and b𝑏bitalic_b (but, as usual, is independent of the size of the cover). Moreover, while the Helly number of a (K,b)𝐾𝑏(K,b)( italic_K , italic_b )-free cover can grow with b𝑏bitalic_b (it is at least (b−1)(μ(K)+2)𝑏1𝜇𝐾2(b-1)(\mu(K)+2)( italic_b - ... | In this paper, we show that the gap observed for convex lattice sets occurs in the broad topological setting of triangulable spaces with a forbidden homological minor, a notion introduced by Wagner [37] as a higher-dimensional analogue of the familiar notion of graph minors [34].
| The support of a chain σ𝜎\sigmaitalic_σ, denoted supp(σ)supp𝜎\operatorname{supp}(\sigma)roman_supp ( italic_σ ), in a simplicial complex is the set of simplices with nonzero coefficients in σ𝜎\sigmaitalic_σ. We say that two chains σ𝜎\sigmaitalic_σ and τ𝜏\tauitalic_τ have overlapping supports if there exists a sim... | We first prove, in Section 3, that complexes with a forbidden simplicial homological minor also have a forbidden grid-like homological minor.
The proof uses the stair convexity of Bukh et al. [8] to build, in a systematic way, chain maps from simplicial complexes to cubical complexes. We then adapt, in Section 4, the m... | B |
After the initial removal of features, as described in Section 4.2, we take a look into the radial tree that presents statistical information about the impact of the currently included features.
The core aim of this view is to examine the impact on various subspaces, since removing a feature might appear the right choi... |
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train the XGBoost [29] on the original pool of features, and then divide the data space into four groups automati... | Figure 3: Exploration of features with FeatureEnVi. The default slicing thresholds for the data space separate the instances into four quadrants that represent intervals of 25% predicted probability (see (a.1–a.4)). View (b) presents a table heatmap with five different feature selection techniques and their average val... | This hierarchical visualization exploits the connections of these features (see Fig. 3(d.1–d.4)) with the four subspaces we defined in Section 4.1, which is the inner layer. The top part highlighted in a rectangular red box is the whole data space with all the slices (text in bold), and it is currently active (black st... | Similar to the workflow described above, we start by choosing the appropriate thresholds for slicing the data space. As we want to concentrate more on the instances that are close to being predicted correctly, we move the left gray line from 25% to 35% (see Fig. 5(a.1 and a.2)). This makes the Bad slice much shorter. S... | C |
‖e^c‖∞subscriptnormsubscript^𝑒𝑐\|\hat{e}_{c}\|_{\infty}∥ over^ start_ARG italic_e end_ARG start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT ∥ start_POSTSUBSCRIPT ∞ end_POSTSUBSCRIPT
‖e^c‖2subscriptnormsubscript^𝑒𝑐2\|\hat{e}_{c}\|_{2}∥ over^ start_ARG italic_e end_ARG start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT ∥ st... | For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using Latin hypercube design of experiments. The BO progress is shown in Figure 5, right pannel, for the optimization with constraints on the jerk and on the tracking error. Af... |
Figure 5: Position, velocity, acceleration, and maximal contour error resulting from optimization of the MPC parameters, comparing unconstrained BO optimization (solid lines) to BO optimization with additional constraint on the maximal tracking error, for infinity (left) and octagon(center) geometries. The right panel... | To reduce the number of times this experimental “oracle” is invoked, we employ Bayesian optimization (BO) [16, 17], which is an effective method for controller tuning [13, 18, 19] and optimization of industrial processes [20]. The constrained Bayesian optimization samples and learns both the objective function and the ... | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi... | B |
To properly study bias mitigation, it is necessary to provide a definition of biased data and biased behavior in a model. We study bias in supervised classification i.e., the goal is to learn a function f:X→Y:𝑓→𝑋𝑌f:X\rightarrow Yitalic_f : italic_X → italic_Y which outputs a categorical target y∈Y𝑦𝑌y\in Yitalic_y... | We select hyperparameters based on the best unbiased validation set accuracy on each dataset, which is reflective of the unbiased test distribution. For all datasets and methods, we first perform a grid search over the learning rates ∈\in∈ {1e-3, 1e-4, 1e-5} and weight decays ∈\in∈ {0, 0.1, 1e-3, 1e-5}, and then tune t... | In this set of experiments, we compare the resistance to explicit and implicit biases. We primarily focus on the Biased MNISTv1 dataset, reserving each individual variable as the explicit bias in separate runs of the explicit methods, while treating the remaining variables as implicit biases. To ease analysis, we compu... | Assuming access to the test distribution for model selection is unrealistic and can result in models being right for the wrong reasons [64]. Rather, it is ideal if the methods can generalize without being tuned on the test distribution and we study this ability by comparing models selected through varying tuning distri... | We can measure the robustness to such tendencies by intentionally introducing covariate shift e.g., with a test dataset distribution that differs from training or a metric that balances performance across groups. For our study, we use the mean per group accuracy/unbiased accuracy, which weighs all the groups equally. F... | D |
Figure 2: From intrusive skin electrodes [16] to off-shelf web cameras [17], gaze estimation is more flexible. Gaze estimation methods are also updated with the change of devices. We illustrate five kinds of gaze estimation methods. (1). Attached sensor-based methods. The method samples the electrical signal of skin e... |
Figure 2: From intrusive skin electrodes [16] to off-shelf web cameras [17], gaze estimation is more flexible. Gaze estimation methods are also updated with the change of devices. We illustrate five kinds of gaze estimation methods. (1). Attached sensor-based methods. The method samples the electrical signal of skin e... | Different from previous methods, appearance-based methods do not require dedicated devices for detecting geometric features.
They use image features such as image pixel [19] or deep features [17] to regress gaze. Various regression models have been used, e.g., neural networks [32], gaussian process regression [33], ada... | They require a time-consuming data collection for the specific subject. To reduce the number of training samples, Williams et al. introduce semi-supervised gaussian process regression methods [33].
Sugano et al. propose a method that combines gaze estimation with saliency [35]. | 2) A robust regression function to learn the mappings from appearance feature to human gaze. It is non-trivial to map the high-dimensional eye appearance to the low-dimensional gaze. Many regression functions have been used to regress gaze from appearance, e.g., local linear interpolation [21] and adaptive linear regre... | D |
Despite the recent breakthroughs of deep learning architectures in pattern recognition tasks, they need to estimate millions of parameters in the fully connected layers that require powerful hardware with high processing capacity and memory. To address this problem, we present in this paper an efficient quantization b... | This deep quantization technique presents many advantages. It ensures a lightweight representation that makes the real-world masked face recognition process a feasible task. Moreover, the masked regions vary from one face to another, which leads to informative images of different sizes. The proposed deep quantization a... | As presented in Fig. 1, the size of the extracted feature map defines the number of the feature vectors that will be used in the BoF layer. Here we refer by Visubscript𝑉𝑖V_{i}italic_V start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT to the number of feature vectors extracted from the ithsuperscript𝑖𝑡ℎi^{t}hitalic_i ... |
The quantization is then applied to extract the histogram of a number of bins as presented in Section 4.3. Finally, MLP is applied to classify faces as presented in Section 4.4. In this experiment, the 10-fold cross-validation strategy is used to evaluate the recognition performance. The experiments are repeated ten t... | The basic idea of the classical BoF paradigm is to represent images as orderless sets of local features hariri2021deep . To get these sets, the first step is to extract local features from the training images, each feature represents a region from the image. Next, the whole features are quantized to compute a codebook.... | D |
)}_{i,j,j>0\vdash(i,j-1)<(i,j)\text{ checked}}\}\}under⏟ start_ARG roman_tail italic_t end_ARG start_POSTSUBSCRIPT italic_j > 0 assumed end_POSTSUBSCRIPT ⇒ italic_p start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ← under⏟ start_ARG italic_o start_POSTSUPERSCRIPT roman_R end_POSTSUPERSCRIPT . roman_rest italic_p start_POSTS... | If the processor issues a “get,” then the head of the input stream is consumed, recursing on its tail. Otherwise, the output stream is constructed recursively, first issuing the element received from the processor. It is clear that the program terminates by lexicographic induction on (i,j)𝑖𝑗(i,j)( italic_i , italic_j... |
The even-indexed substream retains the head of the input, but its tail is the odd-indexed substream of the input’s tail. The odd-indexed substream, on the other hand, is simply the even-indexed substream of the input’s tail. Operationally, the heads and tails of both substreams are computed on demand similar to a lazy... |
For space, we omit the process terms. Of importance is the instance of the call rule for the recursive call to eat: the check i−1<i𝑖1𝑖i-1<iitalic_i - 1 < italic_i verifies that the process terminates and the loop [(i−1)/i][z/x]Ddelimited-[]𝑖1𝑖delimited-[]𝑧𝑥𝐷[(i-1)/i][z/x]D[ ( italic_i - 1 ) / italic_i ] [ ita... | Such functions may consume finitely many elements of type A𝐴Aitalic_A from the input stream (the inductive part spA,Bμ[i]subscriptsuperscriptsp𝜇𝐴𝐵𝑖\operatorname{sp}^{\mu}_{A,B}[i]roman_sp start_POSTSUPERSCRIPT italic_μ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_A , italic_B end_POSTSUBSCRIPT [ italic_i ]) bef... | A |
Rial et al. [13] proposed a provably secure anonymous AFP scheme based on the ideal-world/real-world paradigm.
Poh et al. [25] designed an innovative user-side AFP scheme based on the symmetric Chameleon encryption technique, which achieves significant gains in owner-side computing and communication efficiency. | Afterwards, Bianchi et al. [10] proposed a LUT-based AFP scheme without involving a Trusted Third Party (TTP) based on homomorphic encryption, which also implements AFP within the user-side framework.
Despite the fact that Problems 2 and 3 are solved in these works, Problem 1 is not mentioned. |
In this paper, facing these problems and challenges, we set out to solve them. First, to achieve data protection and access control, we adopt the lifted-ElGamal based PRE scheme, as discussed in [16, 17, 18, 19, 20], whose most prominent characteristic is that it satisfies the property of additive homomorphism. Then t... | Thirdly, there are also studies that deal with both privacy-protected access control and traitor tracing. Xia et al. [26] introduced the watermarking technique to privacy-protected content-based ciphertext image retrieval in the cloud, which can prevent the user from illegally distributing the retrieved images. However... | Rial et al. [13] proposed a provably secure anonymous AFP scheme based on the ideal-world/real-world paradigm.
Poh et al. [25] designed an innovative user-side AFP scheme based on the symmetric Chameleon encryption technique, which achieves significant gains in owner-side computing and communication efficiency. | A |
The attention coefficient αijsubscript𝛼𝑖𝑗\alpha_{ij}italic_α start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT is calculated by the soft attention mechanism, while the pijsubscript𝑝𝑖𝑗p_{ij}italic_p start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT is calculated by the hard attention mechanism. By mu... | Due to the strength in modeling relations on graph-structured data, GNN has been widely applied to various applications like neural machine translation Beck et al. (2018), semantic segmentation Qi et al. (2017), image classification Marino et al. (2017), situation recognition Li et al. (2017), recommendation Wu et al. ... |
GraphFM(-M): in the interaction aggregation component, we use a multi-head attention mechanism to learn the diversified polysemy of feature interactions in different semantic subspaces. To check its rationality, we use only one attention head when aggregating. |
To capture the diversified polysemy of feature interactions in different semantic subspaces Li et al. (2020) and also stabilize the learning process Vaswani et al. (2017); Veličković et al. (2018), we extend our mechanism to employ multi-head attention. | Specifically, to accommodate the polysemy of feature interactions in different semantic spaces, we utilize a multi-head attention mechanism Vaswani et al. (2017); Veličković et al. (2018).
Each layer of our proposed model produces higher-order interactions based on the existing ones and thus the highest-order of intera... | C |
The FOO and LMO oracles are standard in the FW literature. The ZOO oracle is often implicitly assumed to be included with the FOO oracle; we make this explicit here for clarity. Finally, the DO oracle is motivated by the properties of generalized self-concordant functions. It is reasonable to assume the availability o... |
Requiring access to a zeroth-order and domain oracle are mild assumptions, that were also implicitly assumed in one of the three FW-variants presented in Dvurechensky et al. [2022] when computing the step size according to the strategy from Pedregosa et al. [2020]; see 5 in Algorithm 4. The remaining two variants ensu... | We note that the LBTFW-GSC algorithm from Dvurechensky et al. [2022] is in essence the Frank-Wolfe algorithm with a modified version of the backtracking line search of Pedregosa et al. [2020]. In the next section, we provide improved convergence guarantees for various cases of interest for this algorithm, which we refe... | We show that a small variation of the original Frank-Wolfe algorithm [Frank & Wolfe, 1956] with an open-loop step size of the form γt=2/(t+2)subscript𝛾𝑡2𝑡2\gamma_{t}=2/(t+2)italic_γ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT = 2 / ( italic_t + 2 ), where t𝑡titalic_t is the iteration count is all that is needed ... | the second-order step size and the LLOO algorithm from Dvurechensky et al. [2022] (denoted by GSC-FW and LLOO in the figures) and the Frank-Wolfe and the Away-step Frank-Wolfe algorithm with the backtracking stepsize of Pedregosa et al. [2020],
denoted by B-FW and B-AFW respectively. | A |
However, to be considered an efficient approximation algorithm in theory, ideally the dependence on all relevant parameters should be polynomial. Indeed, this has been a key property in the qualification of efficiency in parametrized complexity. The question whether there is a (1+ε)1𝜀(1+\varepsilon)( 1 + italic_ε )-ap... |
In a distributed/parallel setting, the aforementioned “time” should be understood as the number of rounds. All the times listed above are a function of G𝐺Gitalic_G and ε𝜀\varepsilonitalic_ε, but for the sake of brevity we drop these parameters in the rest of this section. |
Instantiating our framework with state-of-the-art results for computing an O(1)𝑂1O(1)italic_O ( 1 )-approximate maximum matching in CONGEST and MPC, we obtain the results outlined in Table 1. In particular, our framework exponentially improves the dependence on 1/ε1𝜀1/\varepsilon1 / italic_ε in these models, hence ... |
It is known that finding an exact matching requires linear space in the size of the graph and hence it is not possible to find an exact maximum matching in the semi-streaming model [FKM+04], at least for sufficiently dense graphs. Nevertheless, this result does not apply to computing a good approximation to the maximu... | Table 1: A summary of the running times in several different models, compared to the previous state-of-the-art, for computing a (1+ε)1𝜀(1+\varepsilon)( 1 + italic_ε )-approximate maximum matching. In the distributed setting, “running time” refers to the round complexity, while in the streaming setting it refers to th... | D |
Subsequently, decentralized optimization methods for undirected networks, or more generally, with doubly stochastic mixing matrices, have been extensively studied in the literature; see, e.g., [11, 12, 13, 14, 15, 16].
Among these works, EXTRA [14] was the first method that achieves linear convergence for strongly conv... | For directed networks, however, constructing a doubly stochastic mixing matrix usually requires a weight-balancing step, which could be costly when carried out in a distributed manner.
Therefore, the push-sum technique [17] was utilized to overcome this issue. | Specifically, the push-sum based subgradient method in [18] can be implemented over time-varying directed graphs, and linear convergence rates were achieved in [19, 20] for minimizing strongly convex and smooth objective functions by applying the push-sum technique to EXTRA.
| Subsequently, decentralized optimization methods for undirected networks, or more generally, with doubly stochastic mixing matrices, have been extensively studied in the literature; see, e.g., [11, 12, 13, 14, 15, 16].
Among these works, EXTRA [14] was the first method that achieves linear convergence for strongly conv... | The Push-Pull/𝒜ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B method introduced in [24, 25] modified the gradient tracking methods to deal with directed network topologies without the push-sum technique.
The algorithm uses a row stochastic matrix to mix the local decision variables and a column stochastic matr... | A |
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low... | Possible interesting areas for further research are related to the practical features that arise in the federated learning setup, such as asynchronous transmissions and information compression to minimize communication costs, among other issues. It is worth considering the use of the variance reduction technique in acc... |
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low... | Discussions. We compare algorithms based on the balance of the local and global models, i.e. if the algorithm is able to train well both local and global models, then we find the FL balance by this algorithm. The results show that the Local SGD technique (Algorithm 3) outperformed the Algorithm 1 only with a fairly fre... | Certainly, we want to reduce the number of communications (or calls the regularizer gradient) as much as possible.
This is especially important when the problem (1) is a fairly personalized (λ≪Lmuch-less-than𝜆𝐿\lambda\ll Litalic_λ ≪ italic_L) and information from other nodes is not significant. To solve this problem ... | A |
PSRO has proved to be a formidable learning algorithm in two-player, constant-sum games, and JPSRO, with (C)CE MSs, is showing promising results on n-player, general-sum games. The secret to the success of these methods seems to lie in (C)CEs ability to compress the search space of opponent policies to an expressive an... |
There is a rich polytope of possible equilibria to choose from, however, an MS must pick one at each time step. There are three competing properties which are important in this regard, exploitation, robustness, and exploration. For exploitation, maximum welfare equilibria appear to be useful. However, to prevent JPSRO... | We propose that (C)CEs are good candidates as meta-solvers (MSs). They are more tractable than NEs and can enable coordination to maximize payoff between cooperative agents. In particular we propose three flavours of equilibrium MSs. Firstly, greedy (such as MW(C)CE), which select highest payoff equilibria, and attempt... |
Measuring convergence to NE (NE Gap, Lanctot et al. (2017)) is suitable in two-player, constant-sum games. However, it is not rich enough in cooperative settings. We propose to measure convergence to (C)CE ((C)CE Gap in Section E.4) in the full extensive form game. A gap, ΔΔ\Deltaroman_Δ, of zero implies convergence t... |
We compare against common MS including uniform, α𝛼\alphaitalic_α-Rank (Omidshafiei et al., 2019; Muller et al., 2020), Projected Replicator Dynamics (PRD) (Lanctot et al., 2017) which is an NE approximator, and random vertex (coarse) correlated equilibrium (RV(C)CE) which randomly selects a solution on the vertices o... | A |
The dependence of our PC notion on the actual adaptively chosen queries places it in the so-called fully-adaptive setting (Rogers et al., 2016; Whitehouse et al., 2023), which requires a fairly subtle analysis involving a set of tools and concepts that may be of independent interest. In particular, we establish a seri... | Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on intuition that average-case privacy can be viewed from a Bay... | The similarity function serves as a measure of the local sensitivity of the issued queries with respect to the replacement of the two datasets, by quantifying the extent to which they differ from each other with respect to the query q𝑞qitalic_q. The case of noise addition mechanisms provides a natural intuitive interp... | recently established a formal framework for understanding and analyzing adaptivity in data analysis, and introduced a general toolkit for provably preventing the harms of choosing queries adaptively—that is, as a function of the results of previous queries. This line of work has established that enforcing that computat... | Differential privacy (Dwork et al., 2006) is a privacy notion based on a bound on the max divergence between the output distributions induced by any two neighboring input datasets (datasets which differ in one element). One natural way to enforce differential privacy is by directly adding noise to the results of a nume... | D |
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter k𝑘kitali... |
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized... |
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter k𝑘kitali... | We therefore propose the following novel research direction: to investigate how preprocessing algorithms can decrease the parameter value (and hence search space) of FPT algorithms, in a theoretically sound way. It is nontrivial to phrase meaningful formal questions in this direction. To illustrate this difficulty, not... | We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni... | C |
Early works [72, 15] attempted to match each foreground with the background using hand-crafted features, but their performance is limited by the representation ability of hand-crafted features. Specifically, Lalonde et al. [72] estimated the object information (e.g., size, orientation, lighting condition) and designed ... | To produce multiple reasonable placements, Zhang et al. [197] combined the foreground feature, background feature, and a random vector to predict the object placement. Moreover, they ensure the diversity of object placement by enforcing the pairwise distances between predicted placements to approach those between corre... | Zhu et al. [209] trained a composite image discriminator to predict the realism of composites by compositing each foreground with the background. This method is effective by using the realism of composite image to measure the foreground-background compatibility, but computing the realism of all composite images is very... | The existing deep image blending works [172, 198, 194] adopt the following evaluation metrics: 1) calculating realism score using the pretrained model [209] which reflects the realism of a composite image; 2) conducting user study by asking engaged users to select the most realistic images; 3) Zhang et al. [194] deem t... | Early deep learning based image harmonization methods target at making the harmonized images indistinguishable from real images. For instance, Zhu et al. [209] explored predicting the realism of an image using a CNN classifier. With such realism predictor, they learn the color transformation for the foreground to achie... | B |
where 𝐱cssuperscriptsubscript𝐱𝑐𝑠\mathbf{x}_{c}^{s}bold_x start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT is an abbreviation for the traffic speed sub-dataset 𝐱cspeedsubscriptsuperscript𝐱𝑠𝑝𝑒𝑒𝑑𝑐\mathbf{x}^{speed}_{c}bold_x start_POSTSUPERSCRIPT italic_s i... |
Inter-city correlations. Our results demonstrate that transfer learning leads to error reductions in all source-target pairs, as compared to using target data only. Notably, the largest reduction of approximately 15% is observed in the case of Shenzhen and Chongqing. These findings suggest that there exist sufficient ... | To address this problem, we utilize LSTM as the base model, which is similar to ST-net in MetaST [5], and adopt a multi-task learning approach. We select Beijing and Shanghai as the source cities for transfer learning tasks in cities with large map sizes, and Xi’an as the source city for the transfer learning tasks in ... | Table VII presents the results of our inter-city transfer learning experiments. Specifically, we report the results obtained by training our models using both full and 3-day target data, which correspond to the lower and upper bounds of errors, respectively. Furthermore, we also include the results of fine-tuning and R... | TABLE VII: The results of inter-city transfer learning from source domains (Beijing, Shanghai, and Xi’an) to target domains (Shenzhen, Chongqing, and Chengdu). The lowest RMSE/MAE using limited target data is highlighted in bold. The results under full data and 3-day data represent the lower and upper bounds for the er... | D |
In this and the following section some of the models introduced above are experimentally investigated. They are evaluated and compared based on some general performance measures. Moreover, some general conclusions that can be used in future applications or research are derived.
| In Fig. 1, both the coverage degree, average width and R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient are shown. For each model, the data sets are sorted according to increasing R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient (averaged over th... | For each of the selected models, Fig. 4 shows the best five models in terms of average width, excluding those that do not (approximately) satisfy the coverage constraint (2). This figure shows that there is quite some variation in the models. There is not a clear best choice. Because on most data sets the models produc... | An optimal interval estimator should satisfy some conditions. To assess the quality of the models, the HQ principle from Section 3.3 is adopted. First of all a model ought to be valid (or calibrated) in the sense of Eq. (2). The more a model deviates from being well calibrated, the less reliable it becomes since the re... | To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average ... | C |
Table 3: Testing metrics (in %) of “our model (performance) +CP” and other baseline methods for the two-class “melody versus non-melody” classification task using POP909, viewing vocal melody and instrumental melody as “melody” and accompaniment as “non-melody”.
|
Figure 4: The melody/non-melody classification result for “POP909-596.mid” by (b) “skyline” \parencitechia01skyline, (c) Simonetta et al.’s CNN \parencitesimonettaCNW19 and (d) our model (performance) + CP. Directing attention to the red circled region within the pianoroll representation, it is evident that the CNN ba... | We provide three versions of the melody MIDI file for each original song, generated respectively by the skyline algorithm, Simonetta et al.’s CNN and “our model (performance) + CP”.
Taking “Clayderman_Yesterday_Once_More.mid” as an example, the melody generated by the skyline algorithm exhibits stiffness and lacks intr... | POP909 comprises piano covers of 909 pop songs compiled by \textcitepop909.555https://github.com/music-x-lab/POP909-Dataset It is the only dataset among the five that provides melody, non-melody labels for each note. Specifically, each note is labelled with one of the following three classes: vocal melody (piano notes ... | Table 3: Testing metrics (in %) of “our model (performance) +CP” and other baseline methods for the two-class “melody versus non-melody” classification task using POP909, viewing vocal melody and instrumental melody as “melody” and accompaniment as “non-melody”.
| A |
Otherwise, F𝐹Fitalic_F has a leaf v∈A𝑣𝐴v\in Aitalic_v ∈ italic_A with a neighbor u∈B𝑢𝐵u\in Bitalic_u ∈ italic_B. We can assign c(v)=a2𝑐𝑣subscript𝑎2c(v)=a_{2}italic_c ( italic_v ) = italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, c(u)=b2𝑐𝑢subscript𝑏2c(u)=b_{2}italic_c ( italic_u ) = italic_b start_POSTSU... |
Now, observe that if the block to the left is also of type A, then a respective block from Z(S)𝑍𝑆Z(S)italic_Z ( italic_S ) is (0,1,0)010(0,1,0)( 0 , 1 , 0 ) – and when we add the backward carry (0,0,1)001(0,0,1)( 0 , 0 , 1 ) to it, we obtain the forward carry to the rightmost block. And regardless of the value of t... | Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As it was stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the... | To obtain the total running time we first note that each of the initial steps – obtaining (R,B,Y)𝑅𝐵𝑌(R,B,Y)( italic_R , italic_B , italic_Y ) from Corollary 2.11 (e.g. using Algorithm 1), contraction of F𝐹Fitalic_F into F′superscript𝐹normal-′F^{\prime}italic_F start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT, and findi... | The linear running time follows directly from the fact that we compute c𝑐citalic_c only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen... | D |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.