| context | A | B | C | D | label |
|---|---|---|---|---|---|
| …$(x)\frac{f_{n-1}(x)}{f_{n}(x)}$. $\frac{f_{n}(x)}{f_{n}^{\prime}(x)} = \frac{g_{2}…$ | $\frac{f_{n-2}(x)}{f_{n-1}(x)}$. $\frac{f_{n-1}(x)}{f_{n}(x)} = \frac{a_{1,n-1}}{…}$ | $g_{2}(x)f_{n}^{\prime}(x) = g_{1}(x)f_{n}(x) + g_{0}(x)f_{n-1}(x);$ … | $a_{1,n-1}f_{n}(x) = (a_{2,n-1} + a_{3,n-1}x)f_{n-1}(x) - a_{4,n-1}f_{n-2}(x),$ … | …$(x)\frac{f_{n-1}(x)}{f_{n}(x)}$. $\frac{f_{n}(x)}{f_{n}^{\prime}(x)} = \frac{g_{2}…$ | C |
| In other words, our algorithm initialises $w := g$, $u_{1} := 1$ and $u_{2} := 1$ and multiplies $w$, $u_{1}$… | For the purposes of determining the cost of Taylor's algorithm in terms of matrix operations, namely determining the length of an MSLP for the algorithm, we assume that the field elements $-g_{ic}g_{rc}^{-1}$… | The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input. | …does not yield an upper bound for the memory requirement in a theoretical analysis. Moreover, the result of SlotUsagePattern improves the memory usage, but it is not necessarily optimized overall and, hence, the number of slots can still be greater than the number of slots of a carefully computed MSLP. It should also be… | As for the simpler examples considered in the previous section, here, to keep the presentation clear, we do not write down explicit MSLP instructions, but instead determine the cost of Algorithm 3 while keeping track of the number of elements that an MSLP for this algorithm would need to keep in memory at any given time… | D |
| It then follows from Lemma 1 that $1 \leq \alpha_{i}^{F} \leq \alpha$ for all the local eigenvalues. Thus, $\tilde{\Lambda}_{h}^{\triangle} = \tilde{\Lambda}_{h}^{f}$… | The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis… | The key to approximating (25) is the exponential decay of $Pw$, as long as $w \in H^{1}(\mathcal{T}_{H})$ has local support. That al… | Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case, our method is similar to that of [MR3591945], with some differences. First we consider that T… | As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput… | C |
| Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as "involved" by its authors, as it contains complicated subroutines for handling many subcases. | Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$'s vertices and (2) searching for the next candidate from a given one is much easier; the code-length ratio for this part is 1:7 between Alg-A and Alg-CM. | Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and the floating-point issues of both programs. Our implementations of Alg-K and Alg-CM differ logically in handling degenerate cases. | D |
| Early in an event, the related tweet volume is scant and there is no clear propagation pattern yet. For the credibility model we, therefore, leverage the signals derived from tweet contents. Related work often uses aggregated content [18, 20, 32], since individual tweets are often too short and contain slender contex… | Given a tweet, our task is to classify whether it is associated with news or a rumor. Most of the previous work [6, 11] at the tweet level only aims to measure trustworthiness based on human judgment (note that even if a tweet is trusted, it could still relate to a rumor). Our task is, to a point, a reverse engin… | Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys… | …at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks tha… | For the evaluation, we developed two kinds of classification models: a traditional classifier with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 4.2… | A |
| The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile continuing to optimize long after we have zero training … | We should not rely on plateauing of the training loss or on the loss (logistic or exp or cross-entropy) evaluated on validation data as measures to decide when to stop. Instead, we should look at the 0–1 error on the validation dataset. We might improve the validation and test errors even when the decrease … | Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set, for which $\exists \mathbf{x} \in \mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}} < 0$… | …decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a… | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz… | A |
| …$+\,1\{y^{(i)} = y_{news}\}\log(\tilde{y}_{news}^{(i)})$. $\mathsf{L}(x^{(i)}, y^{(i)}) = 1\{y^{(i)} = y_{rumor}\}$… | The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi… | The effective cascaded model that engages both low- and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402). The model uses the time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to… | In the lower part of the pipeline, we extract features from tweets and combine them with the creditscore to construct the feature vector in a time series structure called the Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification. | As observed in (madetecting; ma2015detect), rumor features are very prone to change during an event's development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in (ma2015detect). W… | D |
| We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res… | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res… | Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we… | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall… | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type … | D |
| In this case, the agent must sequentially learn both the underlying dynamics ($L_{a}, \Sigma_{a}; \forall a$) and the conditional reward function's variance … | We observe noticeable (almost linear) regret increases when the dynamics of the parameters swap the identity of the optimal arm. However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters, | If the support of $q(\cdot)$ includes the support of the distribution of interest $p(\cdot)$, one computes the IS estimator of a test function based on the normalized weights $w^{(m)}$, | For the more interesting case of unknown parameters, we marginalize parameters $L_{a}$ and $\Sigma_{a}$ of the transition distributions | We now describe in detail how to use the SMC-based posterior random measure $p_{M}(\theta_{t+1,a} \mid \mathcal{H}_{1:t})$… | A |
| The data collection study was conducted from the end of February to the beginning of April 2017 by Emperra and includes 10 patients who were given specially prepared smartphones. Measurements on carbohydrate consumption, blood glucose levels, and insulin intake were made with Emperra's Esysta system. Measurements on physical ac… | Table 1 shows basic patient information. Half of the patients are female and ages range from 17 to 66, with a mean age of 41.8 years. Body weight, according to BMI, is normal for half of the patients, four are overweight and one is obese. The mean BMI value is 26.9. Only one of the patients suffers from diabetes type … | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients. For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t… | The insulin intakes tend to be more in the evening, when basal insulin is used by most of the patients. The only difference is for patients 10 and 12, whose intakes are earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning. | Table 2 gives an overview of the number of different measurements that are available for each patient (for patient 9, no data is available). The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14. | A |
| To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark (Bylinskii et al., 2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted to a public leaderboard to allow fair model ranking on eight evaluation met… | Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity and the superscript † represents pre-trai… | Table 1: Quantitative results of our model for the MIT300 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone)… | Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone… | We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In case a reimplementation was not possible, we faithfully estimated a lowe… | A |
| Pathwidth and cutwidth are classical graph parameters that play an important role for graph algorithms, independently of our application of computing the locality number. Therefore, it is the main purpose of this section to translate the reduction from MinCutwidth to MinPathwidth that takes MinLoc as an intermediate s… | The relationship between cutwidth and pathwidth revealed by this direct reduction is best illustrated via a third graph parameter that we call second-order cutwidth. To the best of our knowledge, this parameter has not explicitly been studied before. | One of the main results of this section is a reduction from the problem of computing the locality number of a word $\alpha$ to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr… | In this work, we have answered several open questions about the string parameter of the locality number. Our main tool was to relate the locality number to the graph parameters cutwidth and pathwidth via suitable reductions. As an additional result, our reductions also pointed out an interesting relationship between th… | A reason why this direct reduction from cutwidth to pathwidth has been overlooked might be that the literature on cutwidth and pathwidth approximation is focussed on more general approximation techniques (i.e., vertex and edge separators), which then yield approximation algorithms for these graph parameters. Another r… | A |
| [Figure 2 diagram: feed-forward and backpropagation arrows with labels $\hat{y}$, $J$, $y$.] Figure 2: A Convolutional Neural Network that calculates the LV area ($\hat{y}$) from an MRI image ($x$). | The pyramidoid structure on top denotes the flow of the feed-forward calculations, starting from the input image $x$ through the set of feature maps, depicted as 3D rectangles, to the output $\hat{y}$. The height and width of the set of feature maps is proportional to the height a… | The arrows at the bottom denote the flow of the backpropagation, starting after the calculation of the loss using the cost function $J$, the original output $y$ and the predicted output $\hat{y}$. This loss is backpropagated through the filters of the network, adjustin… | Dashed lines denote a 2D convolutional layer with ReLU and Max-Pooling (which also reduces the height and width of the feature maps), the dotted line denotes the fully connected layer and the dash-dotted lines at the end denote the sigmoid layer. For visualization purposes only a few of the feature maps and filters are… | Additionally, convolutional layers create feature maps using shared weights that have a fixed number of parameters, in contrast with fully connected layers, making them much faster. VGG [17] is a simple CNN architecture that utilizes small convolutional filters ($3 \times 3$) and performance is increased by increa… | A |
| Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps, the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster… | Although prior works have proposed training predictive models for next-frame, future-frame, as well as combined future-frame and reward predictions in Atari games (Oh et al. (2015); Chiappa et al. (2017); Leibfried et al. (2016)), no prior work has successfully demonstrated model-based control via predictive models th… | Notable exceptions are the works of Oh et al. (2017), Sodhani et al. (2019), Ha & Schmidhuber (2018), Holland et al. (2018), Leibfried et al. (2018) and Azizzadenesheli et al. (2018). Oh et al. (2017) use a model of rewards to augment model-free learning with good results on a number of Atari games. However, this metho… | …have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control. Our video models of Atari en… | Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) (Bellemare et al., 2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using… | A |
| However, more work needs to be done to fully replace non-trainable S2Is, not only in the scope of achieving higher accuracy results but also of increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch, based on the hypothesis that pretrained low-level… | For the purposes of this paper, and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a 'base model', which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para… | This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for … | However, more work needs to be done to fully replace non-trainable S2Is, not only in the scope of achieving higher accuracy results but also of increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch, based on the hypothesis that pretrained low-level… | Future work could include testing this hypothesis by initializing a 'base model' using transfer learning or other initialization methods. Moreover, trainable S2Is and 1D 'base model' variations could also be used for other physiological signals besides EEG, such as Electrocardiography, Electromyography and Galvanic Skin… | D |
| In the realm of mobile robotics research, the motion control of terrestrial robots across varied terrains is a complex endeavor. To enhance locomotion efficacy and elevate mobility, hybrid robots have been actively developed in the past decade [1]. These robots astutely choose the most suitable locomotion mode from a s… | There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there is a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and to effectively handle the transitions between them [6]. Second, it is essential to develop decision-making frameworks that … | This section describes the primary locomotion modes, rolling and walking, of our hybrid track-legged robot named Cricket, shown in Fig. 2. It also introduces two proposed gaits designed specifically for step negotiation in quadrupedal wheel/track-legged robots. | In the literature review, Gorilla [2] is able to switch between bipedal and quadrupedal walking locomotion modes autonomously, using criteria developed based on motion efficiency and stability margin. WorkPartner [8] demonstrated its capability to seamlessly transition between two locomotion modes: rolling and rolking… | This paper presents a novel methodology for achieving autonomous locomotion mode transitions in quadruped wheel/track-legged hybrid robots, taking into account both internal states of the robot and external environmental conditions. Our emphasis is on the "articulated wheel/track robot" [15], where the wheels or tracks… | A |
| We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as … | Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the "right" information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with a larger number of … | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as … | Second, our model considers the size of the advice and its impact on the algorithm's performance, which is the main focus of the advice complexity field. For all problems we study, we parameterize advice by its size, i.e., we allow advice of a certain size $k$. Specifically, the advice need not necessarily encode… | In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size $k$, let $\eta$ denote the number of erroneous bits (which may not be known to the algorithm). In this setting, the objective would… | D |
| …there were cases like this subject, in which SS3 failed to predict "depression" due to the accumulated positive value not being able to exceed the negative one, even though, in some cases, it was able to get very close. Note that the positive value gets really close to the negative one at around the 100th writing… | In some cases, SS3 misclassified subjects as positive because, while it was true that the positive value changed at least 4 times more rapidly than the negative, the condition was mainly true only due to the negative change being very small. For instance, if the change of the negative confidence value was 0.01, a reall… | This problem can be detected in this subject by seeing the blue dotted peak at around the 60th writing, indicating that "the positive slope changed around five times faster than the negative" there, and therefore misclassifying the subject as positive. However, note that this positive change was in fact really small (l… | …the second one, denoted by SS3Δ, was more comprehensive and classified a subject as positive when the first case was met, or when the change of the positive slope was, at least, four times greater than the negative one, i.e. the positive value increased at least 4 times faster (footnote: Those readers interested in the imple… | …there were cases like this subject, in which SS3 failed to predict "depression" due to the accumulated positive value not being able to exceed the negative one, even though, in some cases, it was able to get very close. Note that the positive value gets really close to the negative one at around the 100th writing… | A |
| Since RBGS introduces a larger compression error compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge when using RBGS as the sparsification compressor. To address this convergence issue, | GMC combines error feedback and momentum to achieve sparse communication in distributed learning. But different from existing sparse communication methods like DGC, which adopt local momentum, GMC adopts global momentum. To the best of our knowledge, this is the first work to introduce global momentum into sparse commun… | In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mo… | We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is … | We can find that both local momentum and global momentum implementations of DMSGD are equivalent to the serial MSGD if no sparse communication is adopted. However, when it comes to adopting sparse communication, things become different. In the later sections, we will demonstrate that global momentum is better than loca… | A |
| $\bar{\varphi}$ is non-differentiable due to the presence of the $\ell_{0}$ pseudo-norm in Eq. 3. A way to overcome this is using $\mathcal{L}$ as the differentiable optimization function during training and $\bar{\varphi}$… | We set $med = m^{(i)}$ to enable a fair comparison between the sparse activation functions. Specifically, for the Extrema activation function we introduce a 'border tolerance' parameter to allow neuron ac… | We choose values of $d^{(i)}$ for each activation function in such a way as to have approximately the same number of activations, for fair comparison of the sparse activation functions. | We then pass $\bm{s}^{(i)}$ and a sparsity parameter $d^{(i)}$ into the sparse activation function $\phi$, resulting in the activation map $\bm{\alpha}($… | The Extrema-Pool indices activation function (defined in Algorithm 2) keeps only the index of the activation with the maximum absolute amplitude from each region outlined by a grid as granular as the kernel size $m^{(i)}$ and zeros out the … | B |
| The essence of PBLLA is selecting an alternative UAV randomly in one iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of the two strategies and $\tau$. A UAV prefers to select the power and altitude which provide higher utility. Neve… | The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on the current game state, and then another UAV ch… | Since PBLLA only allows one single UAV to alter strategies in one iteration, such a defect would cause computation time to grow exponentially in large-scale UAV systems. In terms of large-scale UAV ad-hoc networks with a number of UAVs denoted as $M$, $M^{2}$… | Compared with other algorithms, the novel SPBLLA algorithm has advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely seen algorithm, LLA, is an ideal method for NE approachin… | Fig. 15 presents the learning rate of PBLLA and SPBLLA when $\tau = 0.01$. As $m$ increases, the learning rate of SPBLLA decreases, as shown in Fig. 15. However, when $m$ is small, SPBLLA's learning rate is about 3 times that of PBLLA, showing the great advantage of sy… | B |
| …$= \overset{e_{j}}{\Sigma}\, B^{e}\frac{s^{e}}{3}$… | …$= \overline{\overline{S}}^{-1} * \left(\overline{\widehat{M}}^{T} * \widehat{\widehat{S}} * \overline{\widehat{Dr}}\right)$… | $\overline{U}_{r}^{\prime} = \left(\overline{\overline{S}}^{-1} * \left(\overline{\widehat{M}}^{T} * \widehat{\widehat{S}} * \overline{\widehat{Dr}}\right)\right) * \overline{U}$… | $\widehat{U}_{r}^{\prime} = \overline{\widehat{Dr}} * \overline{U}$… | $\overline{U}_{r}^{\prime} = \overline{\overline{Dr}} * \overline{U}$… | B |
| Let $r$ be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12. Its abstract lattice $\mathcal{L}_{r}$ is represented to the right. | For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to $A \rightarrow B$ or $B \rightarrow A$. | First, remark that both $A \rightarrow B$ and $B \rightarrow A$ are possible. Indeed, if we set $g = \langle b, a\rangle$ or $g = \langle a, 1\rangle$, then $r \models_{g} A \rightarrow$… | If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use $\leq, \wedge, \vee$ instead of $\leq_{R}, \wedge_{R}, \vee_{R}$. | The tuples $t_{1}$, $t_{4}$ represent a counter-example to $BC \rightarrow A$ for $g_{1}$… | A |
| To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on the variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout, because in such an environment the optim… | To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We have evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Class… | Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein… | The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in gradient direction estimation of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes b… | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on the variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout, because in such an environment the optim… | A |
| In medical image segmentation works, researchers have converged toward using a classical cross-entropy loss function along with a second distance- or overlap-based function. Incorporating domain/prior knowledge (such as coding the location of different organs explicitly in a deep model) is more sensible in the medical d… | Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr… | Going beyond pixel intensity-based scene understanding by incorporating prior knowledge has been an active area of research for the past several decades (Nosrati and Hamarneh, 2016; Xie et al., 2020). Encoding prior knowledge in medical image analysis models is generally more feasible than for natural im… | Exploring reinforcement learning approaches, similar to Song et al. (2018) and Wang et al. (2018c), for semantic (medical) image segmentation to mimic the way humans delineate objects of interest. Deep CNNs are successful in extracting features of different classes of objects, but they lose the local spatial information… | For image segmentation, sequenced models can be used to segment temporal data such as videos. These models have also been applied to 3D medical datasets; however, the advantage of processing volumetric data using 3D convolutions versus processing the volume slice by slice using 2D sequenced models remains unclear. Ideally, seeing … | B |
| The nodes with the $K$ highest scores are retained, while the remaining ones are dropped. Since the top-$K$ selection is not differentiable, the scores are also used as a gating for the node features, allowing gradients to flow through the projection vector during backpropagation. | In particular, experimental results showed that NDP is computationally cheaper (in terms of both time and memory) than feature-based methods, while it achieves competitive performance on all the downstream tasks taken into account. An important finding in our results indicates that topological methods are the only viab… | We recall that when using NDP a stride of 4 is obtained by applying two decimation matrices in cascade, $\mathbf{S}^{(1)}\mathbf{S}^{(0)}$ and $\mathbf{S}^{(3)}\mathbf{S}^{(2)}$… | We consider two tasks on graph-structured data: graph classification and graph signal classification. The code used in all experiments is based on the Spektral library [45], and the code to replicate all experiments of this paper is publicly available on GitHub (github.com/danielegrattarola/decimation-pooling). | In particular, experimental results showed that NDP is computationally cheaper (in terms of both time and memory) than feature-based methods, while it achieves competitive performance on all the downstream tasks taken into account. An important finding in our results indicates that topological methods are the only viab… | C |
| NRFI with and without the original data is shown for different network architectures. The smallest architecture has 2 neurons in both hidden layers and the largest 128. For NRFI (gen-ori), we can see that a network with 16 neurons in both hidden layers (NN-16-16) is already sufficient to learn the dec… | Current state-of-the-art methods directly map random forests into neural networks. The number of parameters of the resulting network is evaluated on all datasets with different numbers of training examples. The overall performance is shown in the last column. Due to the stochastic process when training the random fores… | NRFI introduces imitation instead of direct mapping. In the following, a network architecture with 32 neurons in both hidden layers is selected. The previous analysis has shown that this architecture is capable of imitating the random forests (see Figure 4 for details) across all datasets and different numbers of… | Here, we additionally include decision trees, support vector machines, random forests, and neural networks in the comparison. The evaluation is performed on all nine datasets, and results for different numbers of training examples are shown (increasing from left to right). The overall performance of each method is summ… | First, we analyze the performance of state-of-the-art methods for mapping random forests into neural networks and neural random forest imitation. The results are shown in Figure 4 for different numbers of training examples per class. For each method, the average number of parameters of the generated networks across all… | C |
| In a more practical setting, the agent sequentially explores the state space and, meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or … | …step with $\alpha \rightarrow \infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K = H$ episodes and hence equivalently induces… | We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed OPPO, which incorporates the principle of "optimism in the face of uncertainty" into po… | The policy improvement step defined in (3.2) corresponds to one iteration of NPG (Kakade, 2002), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017). In particular, PPO solves the same KL-regularized policy optimization subproblem as in (3.2) at each iteration, while TRPO solves an equivalent KL-constrained s… | To answer this question, we propose the first policy optimization algorithm that incorporates exploration in a principled manner. In detail, we develop an Optimistic variant of the PPO algorithm, namely OPPO. Our algorithm is also closely related to NPG and TRPO. At each update, OPPO solves a Kullback-Leibler (KL)-regu… | D |
| This paper is dedicated to giving an extensive overview of the current directions of research of these approaches, all of which are concerned with reducing the model size and/or improving inference efficiency while at the same time maintaining accuracy levels close to state-of-the-art models. We have identified three m… | In this section, we provide a comprehensive overview of methods that enhance the efficiency of DNNs regarding memory footprint, computation time, and energy requirements. We have identified three different major approaches that aim to reduce the computational complexity of DNNs, i.e., (i) weight and activation quantiza… | Quantization in DNNs is concerned with reducing the number of bits used for the representation of the weights and the activations. The reduction in memory requirements is obvious: using fewer bits for the weights results in a lower memory overhead for storing the corresponding model, and using fewer bits for the activ… | Quantization approaches reduce the number of bits used to store the weights and the activations of DNNs. While quantization approaches obviously reduce the memory footprint of a DNN, the selected weight representation potentially also facilitates faster inference using cheaper arithmetic operations. | Lin et al. (2016) consider fixed-point quantization of pre-trained full-precision DNNs. They formulate a convex optimization problem to minimize the total number of bits required to store the weights and the activations under the constraint that the total output signal-to-quantization-noise ratio is larger than a certa… | C |
| Despite its widespread use in applications, little is known in terms of relationships between Vietoris-Rips barcodes and other metric invariants. For instance, whereas it is obvious that the right endpoint of any interval $I$ in $\mathrm{barc}^{\mathrm{VR}}_{\ast}(X)$… | In particular, one can apply the homology functor to the Vietoris-Rips filtration of a metric space $X$. This induces a persistence module (with $T = \mathbb{R}_{>0}$) where the morphisms are those induced by inclusions. As a… | One main contribution of this paper is establishing a precise relationship (i.e. a filtered homotopy equivalence) between the Vietoris-Rips simplicial filtration of a metric space and a more geometric (or extrinsic) way of assigning a persistence module to a metric space, which consists of first isometrically embedding… | One of the insights leading to the notion of persistent homology associated to metric spaces was considering neighborhoods of a metric space in a nice (for example Euclidean) embedding [71]. In this section we formalize this idea in a categorical way. | The persistent homology of the Vietoris-Rips filtration of a metric space provides a functorial way (footnote: where for metric spaces $X$ and $Y$ morphisms are given by $1$-Lipschitz maps $\phi: X \rightarrow Y$, and for persistence modules $V_{*}$… | B |
| C1: Remaining Cost. Looking at the main view (Figure 7(c), ①), we detect an area on the top of cluster C1 with slightly increased size for a few points (in comparison to the other points in the same cluster), which means there are high values of remaining cost in this small area. | C1: Remaining Cost. Looking at the main view (Figure 7(c), ①), we detect an area on the top of cluster C1 with slightly increased size for a few points (in comparison to the other points in the same cluster), which means there are high values of remaining cost in this small area. | Overall Accuracy. We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q… | The black bars are always fixed, showing the average preservation for all points of the projection. For example, in Figure 4(c), the relatively tall black bars starting from the point $k = 20$ mean that, on average, neighborhoods of 20 points or more are well preserved. The same rationale applies to th… | This is usually a sign of a badly-optimized area that should not be trusted. To confirm that, we look at the KLD distribution (Figure 7(d)): the vast majority of points are located between 0.1 and 0.6 on the $x$-axis. This means that those were very well optimized (notice that the $y$… | D |
| The complete list of reviewed algorithms in this category is provided in Tables 9 and 10 (physics-based algorithms) and Table 11 (chemistry-based methods). In this category we can find some well-known algorithms reported in the last century, such as Simulated Annealing [79], or one of the most important algorithms in ph… | The complete list of reviewed algorithms in this category is provided in Tables 9 and 10 (physics-based algorithms) and Table 11 (chemistry-based methods). In this category we can find some well-known algorithms reported in the last century, such as Simulated Annealing [79], or one of the most important algorithms in ph… | Algorithms falling in this category are inspired by human social concepts, such as decision-making and ideas related to the expansion/competition of ideologies inside the society, as in the Ideology Algorithm (IA, [466]), or political concepts such as the Imperialist Colony Algorithm (ICA, [467]). This category also… | In this same line of reasoning, the largest subcategory of the second taxonomy (Differential Vector Movements guided by representative solutions) not only contains more than half of the reviewed algorithms (almost 60%), but it also comprises algorithms from all the different categories in the first taxonomy: Social Hu… | Tables 18, 19, 20, 21, 22, 23 and 24 show the different algorithms in this subcategory. An exemplary algorithm of this category, which has been a major meta-heuristic solver in the history of the field, is PSO [80]. In this solver, each solution or particle is guided by the global current best solution and the best soluti… | B |
| Network embedding is a fundamental task for graph-type data, arising in recommendation systems, social networks, etc. The goal is to map nodes of a given graph into latent features (namely embeddings) such that the learned embeddings can be utilized for node classification, node clustering, and link prediction. | (1) Via extending the generative graph models to general-type data, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec… | As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so that they are applicable to non-Euclidean-type data, which is not provided by $k$-means. Therefore,… | Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node, while the latter learns to distinguish whether an edge exists between two nodes directly. In recent y… | In recent years, GCNs have been studied a lot to extend neural networks to graph-type data. How to design a graph convolution operator is a key issue and has attracted a mass of attention. Most of these operators can be classified into 2 categories: spectral methods [24] and spatial methods [25]. | C |
| Each IP packet contains an IP Identifier (IPID) field, which allows the recipient to identify fragments of the same original IP packet. The IPID field is 16 bits in IPv4, and for each packet the Operating System (OS) at the sender assigns a new IPID value. There are different IPID assignment algorithms, which can be ca… | A range of studies analysed network traces for ingress filtering using IP address characteristics (Moore et al., 2006; Barford et al., 2006; Chen et al., 2008; Czyz et al., 2014; Dainotti et al., 2013), or by inspecting on-path network equipment reaction to unwanted traffic (Yao et al., 2014). In addition to a limited… | How widespread is the ability to spoof? There are significant research and operational efforts to understand the extent and the scope of (ingress and egress) filtering enforcement and to characterise the networks which do not filter spoofed packets; we discuss these in Related Work, Section 2. Although the existing stu… | Recent work showed that even TCP traffic gets fragmented under certain conditions (Dai et al., 2021b). Fragmentation has a long history of attacks which affect both UDP and TCP traffic (Kent and Mogul, 1987; Herzberg and Shulman, 2013; Shulman and Waidner, 2014). | Source IP address spoofing allows attackers to generate and send packets with a false source IP address impersonating other Internet hosts, e.g., to avoid detection and filtering of attack sources, to reflect traffic during Distributed Denial of Service (DDoS) attacks, to launch DNS cache poisoning, or for spoofed managem… | C |
| While context did introduce more parameters to the model (7,575 parameters without context versus 14,315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the "skill" layer … | The estimation of context by learned temporal patterns should be most effective when the environment results in recurring or cyclical patterns, such as in cyclical variations of temperature and humidity and regular patterns of human behavior generating interferents. In such cases, the recurrent pathway can identify use… | This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The … | One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regio… | The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design… | A |
| For the second change, we need to take another look at how we place the separators $t_{i}$. We previously placed these separators in every second nonempty drum $\sigma_{i} := [i\delta, (i+1)\delta] \times \mathrm{Ball}^{d-1}(\delta/2)$… | We generalize the case of integer $x$-coordinates to the case where the drum $[x, x+1] \times \mathrm{Ball}^{d-1}(\delta/2)$ contains $O(1)$… | Finally, we will show that the requirements for Lemma 5.7 hold, where we take $\mathcal{A}$ to be the algorithm described above. The only nontrivial requirement is that $T_{\mathcal{A}}(P_{\lambda}) \leqslant T_{\mathcal{A}}(P)$… | It would be interesting to see whether a direct proof can be given for this fundamental result. We note that the proof of Theorem 2.1 can easily be adapted to point sets for which the $x$-coordinates of the points need not be integer, as long as the difference between the $x$-coordinates of any two consecu… | However, in order for our algorithm to meet the requirements of Lemma 5.7, we would like to avoid having a point enter a drum after the $x$-coordinates are multiplied by some factor $\lambda > 1$. Furthermore, since the proof of Lemma 4.3 requires every drum to be at least $\delta$ wide,… | D |
Note that there is a difference between the free product in the category of semigroups and the free product in the category of monoids or groups.
In particular, in the semigroup free product (which we are exclusively concerned with in this paper) there is no amalgamation over the identity element of two monoids. Thus, ... | In the theory of automaton semigroups, the definition of automata used is often more restrictive than this, with $Q$ required to be finite,
and $\delta$ required to be a total function. (Recall that the alphabet $A$ is, by definition, finite.) |
In more automata-theoretic settings, a finite automaton would be called a deterministic finite state, letter-to-letter (or synchronous) transducer (see for example [12, 13] for introductions on standard automata theory). However, the term automaton is standard in our algebraic setting (although often only complete aut... | from one to the other, then their free product $S \star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups¹ (footnote: Note that the c...) | The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem... | A
In our next experiment we studied how random visual cues performed with HINT and SCR. We assign random importance scores to the visual regions: $\mathcal{S}_{rand}\sim\textit{uniform}(0,1)$...
To test if the changes in results were statistically significant, we performed Welch’s t-tests Welch (1938) on the predictions of the variants trained on relevant, irrelevant and random cues. We pick Welch’s t-test over the Student’s t-test, because the latter assumes equal variances for predictions from different var... |
Percentage of Overlaps: To further check if the variants trained on irrelevant or random regions gain performance in a manner similar to the models trained on relevant regions, we compute the overlap between their predictions on VQA-CPv2’s test set. The percentage of overlap is defined as: | To perform the tests, we first randomly sample 5000 subsets of non-overlapping test instances. We then average the accuracy of each subset across 5 runs, obtaining 5000 values. Next, we run the t-tests for HINT and SCR separately on the subset accuracies. As shown in Table 2, the $p$... | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... | A
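A minimal sketch of the subset-level comparison described above, using SciPy's Welch's t-test on placeholder accuracy arrays (the arrays and their values are assumptions, not the study's data):

```python
# 5000 subset accuracies per variant (already averaged across the 5 runs
# the excerpt describes), compared with Welch's t-test. Synthetic data
# stands in for the real model predictions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acc_relevant_cues = rng.normal(0.40, 0.02, size=5000)  # placeholder
acc_random_cues = rng.normal(0.40, 0.02, size=5000)    # placeholder

# equal_var=False selects Welch's t-test, which, unlike Student's t-test,
# does not assume the two samples share a common variance.
t_stat, p_value = stats.ttest_ind(acc_relevant_cues, acc_random_cues,
                                  equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```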
We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page. We trained three random forest models and fine-tuned a transformer based pretrained language model, namely RoBERTa (Liu et al., 2019). The three random forest mod... | We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page. We trained three random forest models and fine-tuned a transformer based pretrained language model, namely RoBERTa (Liu et al., 2019). The three random forest mod... |
For the URL model, the words in the URL path were extracted and the tf-idf of each term was recorded to create the features (Baykan et al., 2009). As privacy policy URLs tend to be shorter and have fewer path segments than typical URLs, length and the number of path segments were added as features. Since the classes w... |
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)... | To train the RoBERTa model on the privacy policy classification task, we used the sequence classification head of the pretrained language model from HuggingFace (Wolf et al., 2019). We used the pretrained RoBERTa tokenizer to tokenize text extracted from the documents. Since RoBERTa accepts a maximum of 512 tokens as i... | B
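A small sketch of the URL feature pipeline outlined above — tf-idf over URL path words plus URL length and path-segment count feeding a random forest — with toy URLs, labels, and a tokenization regex standing in for the real corpus:

```python
# Toy version of the URL-based classifier: tf-idf over words in the URL
# path, augmented with URL length and path-segment count. URLs, labels,
# and the splitting regex are assumptions made for illustration.
import re
from urllib.parse import urlparse

from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

urls = [
    "https://example.com/legal/privacy-policy",
    "https://example.com/about/privacy",
    "https://example.com/blog/2020/cooking-tips",
    "https://example.com/shop/cart",
]
labels = [1, 1, 0, 0]  # 1 = privacy policy, 0 = other (toy labels)

paths = [urlparse(u).path.strip("/") for u in urls]
docs = [" ".join(re.split(r"[/\-_.]+", p)) for p in paths]

X_words = TfidfVectorizer().fit_transform(docs)
# Extra features named in the excerpt: URL length and path-segment count.
X_extra = csr_matrix(
    [[len(u), len(p.split("/"))] for u, p in zip(urls, paths)], dtype=float)
X = hstack([X_words, X_extra]).tocsr()

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X))
```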
Weighted-average calculates the metrics for each label and finds their average weighted by support (the number of true instances for each label). The data set is a binary classification problem and contains 165 diseased and 138 healthy patients.
Hence, we choose micro-average to weight the importance of the largest cla... | Figure 2(a.1, a.2) presents the initial views of the 11 algorithms (and their models) currently implemented in StackGenVis.
Figure 2(a.1) uses boxplots to represent the performance of the currently unselected algorithms/models based on the metrics combination discussed previously. This compact visual representation pro... | We normalize the importance from 0 to 1 and use a two-hue color encoding from dark red to dark green to highlight the least to the most important features for our current stored stack, see Figure 4(b). The panel in Figure 4(c) uses a table heatmap view where data features are mapped to the y-axis (13 attributes, only 7... |
Figure 2: The exploration process of ML algorithms. View (a.1) summarizes the performance of all available algorithms, and (a.2) the per-class performance based on precision, recall, and f1-score for each algorithm. (b) presents a selection of parameters for KNN in order to boost the per-class performance shown in (c.... | (ii) in the next algorithm exploration phase, we compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models;
(iii) during the data wrangling phase, we manipulate the instances and features with two different views for each of them; (iv) model explo... | A |
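A short sketch contrasting the micro and weighted averaging discussed above, using scikit-learn and the stated 165 diseased / 138 healthy split; the predictions are placeholders invented for illustration:

```python
# Micro vs. weighted averaging of precision/recall/F1 on an imbalanced
# binary task with the class sizes from the excerpt (165 vs. 138).
from sklearn.metrics import precision_recall_fscore_support

y_true = [1] * 165 + [0] * 138
y_pred = [1] * 150 + [0] * 15 + [0] * 120 + [1] * 18  # placeholder preds

for avg in ("micro", "weighted"):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred,
                                                  average=avg)
    print(f"{avg:>8}: precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```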
We thus have 3 cases, depending on the value of the tuple
$(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))$... | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | $p(v,[013])=p(v,[313])=p(v,[113])=1$.
Similarly, when $f=[112]$, | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | $\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}$. | A
In text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... | In text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation.
Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the languag... | C |
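A minimal sketch of the two generation metrics named above — BLEU (Papineni et al., 2002) via NLTK, and perplexity as the exponential of average negative log-likelihood — on placeholder sentences and values:

```python
# BLEU via NLTK, plus PPL = exp(mean per-token NLL). The sentences and
# the NLL value are placeholders, not the experiments' data.
import math

from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = [["i", "like", "reading", "science", "fiction"]]
candidate = ["i", "like", "science", "fiction"]

# Smoothing avoids zero scores when higher-order n-grams have no overlap.
bleu = sentence_bleu(reference, candidate,
                     smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {bleu:.3f}")

mean_nll = 2.1  # placeholder per-token negative log-likelihood
print(f"PPL = {math.exp(mean_nll):.2f}")
```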
For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac... |
When considering UAV communications with UPA or ULA, a UAV is typically modeled as a point in space without considering its size and shape. Actually, the size and shape can be utilized to support more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduce... |
Note that there exist some mobile mmWave beam tracking schemes exploiting the position or motion state information (MSI) based on conventional ULA/UPA recently. For example, the beam tracking is achieved by directly predicting the AOD/AOA through the improved Kalman filtering [26], however, the work of [26] only targe... | In this paper, we consider a dynamic mission-driven UAV network with UAV-to-UAV mmWave communications, wherein multiple transmitting UAVs (t-UAVs) simultaneously transmit to a receiving UAV (r-UAV). In such a scenario, we focus on inter-UAV communications in UAV networks, and the UAV-to-ground communications are not in... | Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-base... | C |
There are other logics, incomparable
in expressiveness with $\mathsf{FO}^{2}_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The | In addition, to make the main line of argument clearer, we consider only the finite graph case in the body of the paper,
which already implies decidability of the finite satisfiability of $\mathsf{FO}^{2}_{\textup{Pres}}$... | There are other logics, incomparable
in expressiveness with $\mathsf{FO}^{2}_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The | The paper [4] shows decidability for a logic with incomparable expressiveness: the quantification allows a more powerful
quantitative comparison, but must be guarded – restricting the counts only of sets of elements that are adjacent to a given element. | Related one-variable fragments in which we have only a
unary relational vocabulary and the main quantification is $\exists^{S}x\,\phi(x)$ are known to be decidable (see, e.g. [2]), and their decidability ... | C
We first introduce the assumptions for our analysis. In §4.1, we establish the global optimality and convergence of the PDE solution $\rho_{t}$ in (3.4). In §4.2, we further invoke Proposition 3.1 to establish the global optimality and convergence of ... | Although Assumption 6.1 is strong, we are not aware of any weaker regularity condition in the literature, even in the linear setting (Melo et al., 2008; Zou et al., 2019; Chen et al., 2019b) and the NTK regime (Cai et al., 2019). Let the initial distribution $\nu_{0}$... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | Assumption 4.1 can be ensured by normalizing all state-action pairs. Such an assumption is commonly used in the mean-field analysis of neural networks (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Araújo et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). We remark that our analysis straightforwardly generalize... | Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che... | C
We used a beam size of 4 for decoding, and evaluated tokenized case-sensitive BLEU with the averaged model of the last 5 checkpoints for the Transformer Base setting and 20 checkpoints for the Transformer Big setting saved at intervals of 1,500 training steps. We also conducted significance ... | Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the ne...
In our approach (“with depth-wise LSTM”), we used the 2-layer neural network for the computation of the LSTM hidden state (Equation 6) and shared LSTM parameters across stacked encoder layers and different shared parameters across decoder layers for computing the LSTM gates (Equations 2, 3, 4). Details are provided in... |
Table 5 shows that: 1) Sharing parameters for the computation (Equation 6) of the depth-wise LSTM hidden state significantly hampers performance, which is consistent with our conjecture. 2) Sharing parameters for the computation of gates (Equations 2, 3, 4) leads to slightly higher BLEU with fewer parameters introduce... | As the number of Transformer layers is pre-specified, the parameters of the depth-wise LSTM can either be shared across layers or be independent. Table 3 documents the importance of the capacity of the module for the hidden state computation, and sharing the module is likely to hurt its capacity. We additionally study ... | B |
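A rough PyTorch sketch of one depth-wise LSTM step as discussed above, with the gate block (cf. Equations 2-4) kept separate from a 2-layer candidate-state network (cf. Equation 6) so that either can be shared across layers; the gate layout, dimensions, and update rule are assumptions, not the cited paper's exact design:

```python
# Hypothetical depth-wise LSTM step: the current layer's representation
# plays the LSTM input, and hidden/cell states flow through depth rather
# than time. All structural details here are guesses for illustration.
import torch
import torch.nn as nn

class DepthWiseLSTMStep(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.gates = nn.Linear(2 * d_model, 3 * d_model)  # i, f, o gates
        self.candidate = nn.Sequential(                   # 2-layer network
            nn.Linear(2 * d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model),
        )

    def forward(self, x, h, c):
        z = torch.cat([x, h], dim=-1)
        i, f, o = torch.sigmoid(self.gates(z)).chunk(3, dim=-1)
        c = f * c + i * torch.tanh(self.candidate(z))
        h = o * torch.tanh(c)
        return h, c

step = DepthWiseLSTMStep(d_model=8)
x = h = c = torch.zeros(2, 5, 8)  # (batch, seq, d_model) placeholders
h, c = step(x, h, c)
print(h.shape, c.shape)
```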
a compact open of $X_i$ because $f_{i,i_j}$ is
spectral. Notice that $f_{i,i_j}\circ f_i=f_{i_j}$... | open sets of the form $f_{i_j}^{-1}(K_j)$ for $1\leq j\leq n$... | $I$ is directed, there exists $k\in I$ such that $k\geq i,j$.
Because $f_j\circ f_{k,j}=f_k$... | there is an index $i$ above all $i_j$ for $1\leq j\leq n$. Let
$K_j^{\prime}\triangleq f_{i,i_j}^{-1}(K_j)$... | $f_i^{-1}(K_j^{\prime})=f_{i_j}^{-1}(K_j)$... | D
Second, the ordinal distortion is homogeneous as all its elements share a similar magnitude and description. Therefore, the imbalanced optimization problem no longer exists during the training process, and we do not need to focus on the cumbersome factor-balancing task anymore. Compared to the distortion parameters wi... | (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed o... | Third, the ordinal distortion can be estimated using only a part of a distorted image. Unlike the semantic information, the distortion information is redundant in images, showing the central symmetry and mirror symmetry to the principal point. Consequently, the efficiency of rectification algorithms can be significantl... | In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl... |
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify... | B |
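A tiny sketch of the quantization baseline attributed to Rong et al. above — mapping a one-parameter distortion coefficient to one of 401 classes; the coefficient range and sample value are assumptions made only for illustration:

```python
# Quantizing a scalar distortion coefficient into 401 categories. The
# value range [-1.0, 0.0] and the sample coefficient are assumptions.
import numpy as np

k_min, k_max, n_classes = -1.0, 0.0, 401
bin_edges = np.linspace(k_min, k_max, n_classes + 1)

k = -0.37  # placeholder distortion coefficient
label = int(np.clip(np.digitize(k, bin_edges) - 1, 0, n_classes - 1))
print(label)  # class index in [0, 400]
```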
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b... | We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset (https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/).
We set aside 20% of the samples as the test set and divide the rema... |
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use warm-up and polynomial learning rate strategy. | First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being $B/8$. If $B/8 \geq 128$, we will use the gradient accumulation [28]
with the batch size being 128. ... | The momentum coefficient is set as 0.9 and the weight decay is set as 0.001. The initial learning rate is selected from $\{0.001, 0.01, 0.1\}$ according to the performance on the validation set. We do not adopt any learning rate decay or warm-up strategies.
The model is tra... | A |
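A short PyTorch sketch of the gradient-accumulation fallback mentioned above (micro-batches of 128 accumulated into a single step), reusing the stated momentum 0.9 and weight decay 0.001; the model and data are placeholders:

```python
# A per-GPU batch larger than 128 is processed in micro-batches of 128
# whose gradients are summed before one optimizer step. Model/data are
# placeholders; momentum and weight decay follow the excerpt.
import torch

model = torch.nn.Linear(32, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=0.001)
loss_fn = torch.nn.CrossEntropyLoss()

per_gpu_batch, micro = 512, 128
x = torch.randn(per_gpu_batch, 32)
y = torch.randint(0, 10, (per_gpu_batch,))

optimizer.zero_grad()
for i in range(0, per_gpu_batch, micro):
    loss = loss_fn(model(x[i:i + micro]), y[i:i + micro])
    # Scale so the summed micro-batch gradients match the full-batch one.
    (loss * micro / per_gpu_batch).backward()
optimizer.step()
```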
5-approximation for homogeneous 2S-MuSup-Poly, with $|\mathcal{S}|\leq 2^{m}$ and runtime $\operatorname{poly}(n,m,\Lambda)$.
| The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any $\rho$-approximation algorithm for the robust outlier problem into a $(\rho+2)$-approximation algorithm for the corresponding two-stage sto...
We follow up with 3-approximations for the homogeneous robust outlier MatSup and MuSup problems, which are slight variations on algorithms of [6] (specifically, our approach in Section 4.1 is a variation on their solve-or-cut methods). In Section 5, we describe a 9-approximation algorithm for an inhomogeneous MatSu... | We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a... | If we have a $\rho$-approximation algorithm for AlgRW for given $\mathcal{C},\mathcal{F},\mathcal{M},R$, then we can get an efficiently-generalizable $(\rho+2)$-approximation algorithm for the corresponding problem $\mathcal{P}$... | A
In addition to uncertainties in information exchange, different assumptions on the cost functions have been discussed.
In most of the existing works on distributed convex optimization, it is assumed that the subgradients are bounded if the local cost | Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than i.i.d. graph sequences as in [12]-[15],
and additive and... | However, a variety of random factors may co-exist in practical environments.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d... | Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d.
In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments... |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... | A |
Although the generalization for $k$-anonymity provides enough protection for identities, it is vulnerable to the attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by mat... | However, despite protecting against both identity disclosure and attribute disclosure, the information loss of generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, causing that the equivalence groups only preserve...
For instance, suppose that we add another QI attribute of gender as shown in Figure 4, the mutual cover strategy first divides the records into groups in which the records in the same group cover for each other by perturbing their QI values. Then, the mutual cover strategy calculates a random output table on each QI a... | Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ... | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i... | A |
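A tiny sketch of the homogeneity check implied above: a k-anonymous equivalence group still discloses its sensitive attribute when all of its sensitive values coincide. The records are toy data.

```python
# Flag equivalence groups whose sensitive values are all identical
# (e.g. all "pneumonia"), the attribute-disclosure pattern described
# above. Group ids and diseases are invented for illustration.
from collections import defaultdict

records = [
    ("group-1", "flu"), ("group-1", "pneumonia"),
    ("group-3", "pneumonia"), ("group-3", "pneumonia"),
]

sensitive_by_group = defaultdict(set)
for group_id, disease in records:
    sensitive_by_group[group_id].add(disease)

for group_id, values in sorted(sensitive_by_group.items()):
    if len(values) == 1:
        print(f"{group_id}: homogeneous sensitive value -> attribute disclosure")
```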
Table 2: PointRend’s step-by-step performance on our own validation set (split from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
| As shown in Figure 2, we compare HTC, SOLOv2 and PointRend by visualizing their predictions. It can be seen that PointRend generates much finer and smoother segmentation boundaries than HTC and SOLOv2; it also handles overlapped instances gradely (see top-left corner in Figure 2). Meanwhile, PointRend succeeds in disti... | PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | B
$I(f)<1$, and $H(|\hat{f}|^{2})>\frac{n}{n+1}\log n$.
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| ($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s... | A
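Spelling out the entropy appearing in the row above, under the excerpt's own conventions (log base 2, $0\log 0:=0$, and Fourier weights $|\hat{f}(A)|^{2}$ forming a probability distribution when $\|f\|_{2}=1$):

```latex
% Shannon entropy of the Fourier weight distribution used above
% (log base 2; 0 log 0 := 0):
H\bigl(|\hat{f}|^{2}\bigr)
  = -\sum_{A \subseteq [n]} |\hat{f}(A)|^{2} \, \log_{2} |\hat{f}(A)|^{2}
```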
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. They are much lower than those of MASTER, OPT-WLSVI, LSVI-UCB, and Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202... | We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ... | We develop the LSVI-UCB-Restart algorithm and analyze the dynamic regret bound for both cases that local variations are known or unknown, assuming the total variations are known. We define local variations (Eq. (2)) as the change in the environment between two consecutive epochs instead of the total changes over the en... | In this section, we perform empirical experiments on synthetic datasets to illustrate the effectiveness of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart. We compare the cumulative rewards of the proposed algorithms with five baseline algorithms: Epsilon-Greedy (Watkins, 1989), Random-Exploration, LSVI-UCB (Jin et al., 2020... | A |