context
stringlengths
250
5.97k
A
stringlengths
250
5.11k
B
stringlengths
250
3.9k
C
stringlengths
250
8.2k
D
stringlengths
250
4.03k
label
stringclasses
4 values
(−1)a(b−1−a)[d3d⁢x3xmF(a,b;c;z)+3d2d⁢x2xmdd⁢xF(a,b;c;z)\displaystyle(-1)^{a}{b-1\choose-a}\Big{[}\frac{d^{3}}{dx^{3}}x^{m}F(a,b;c;z)+% 3\frac{d^{2}}{dx^{2}}x^{m}\frac{d}{dx}F(a,b;c;z)( - 1 ) start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT ( binomial start_ARG italic_b - 1 end_ARG start_ARG - italic_a end_ARG ) [ div...
+3dd⁢xxmd2d⁢x2F(a,b;c;z)+xmd3d⁢x3F(a,b;c;z)].\displaystyle\quad\quad+3\frac{d}{dx}x^{m}\frac{d^{2}}{dx^{2}}F(a,b;c;z)+x^{m}% \frac{d^{3}}{dx^{3}}F(a,b;c;z)\Big{]}.+ 3 divide start_ARG italic_d end_ARG start_ARG italic_d italic_x end_ARG italic_x start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT divide start_ARG italic...
2\frac{d}{dx}x^{m}\frac{d}{dx}F(a,b;c;z)( - 1 ) start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT ( binomial start_ARG italic_b - 1 end_ARG start_ARG - italic_a end_ARG ) [ divide start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_d italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCR...
(−1)a(b−1−a)[d3d⁢x3xmF(a,b;c;z)+3d2d⁢x2xmdd⁢xF(a,b;c;z)\displaystyle(-1)^{a}{b-1\choose-a}\Big{[}\frac{d^{3}}{dx^{3}}x^{m}F(a,b;c;z)+% 3\frac{d^{2}}{dx^{2}}x^{m}\frac{d}{dx}F(a,b;c;z)( - 1 ) start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT ( binomial start_ARG italic_b - 1 end_ARG start_ARG - italic_a end_ARG ) [ div...
+xmd2d⁢x2F(a,b;c;z)];\displaystyle\quad\quad+x^{m}\frac{d^{2}}{dx^{2}}F(a,b;c;z)\Big{]};+ italic_x start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT divide start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_d italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG italic_F ( ...
A
The sets T2subscript𝑇2T_{2}italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and T3subscript𝑇3T_{3}italic_T start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT are computed as described above in preparation for the first column clearing stage, but are subsequently computed via the recursion (3) (with increased memory quota relati...
To aid the exposition and analysis, Algorithm 3 refers to several subroutines, namely Algorithms 4–7. In an implementation the code for the Algorithms 4–7 would be inserted into Algorithm 3 in the lines where they are called. We present them as subroutines here to improve the readability of Algorithm 3. However, we ass...
Let us now explain the changes required when d𝑑ditalic_d is even. The main issue is that the formula (3) used to compute the sets of transvections Tisubscript𝑇𝑖T_{i}italic_T start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT recursively throughout our implementation of the algorithm described by Taylor looks two steps b...
Although the described modifications are not complicated in and of themselves, they would introduce noticeable complications into our pseudocode and hence we have chosen to separate the d𝑑ditalic_d even case for the sake of clearer exposition, opting to simply point out and explain the changes instead of writing them...
The case where d𝑑ditalic_d is even is very similar, but requires a few changes that would complicate the pseudocode. So, for the clarity of our exposition, we analyse the case d𝑑ditalic_d odd here and then explain the differences for the case d𝑑ditalic_d even in the next subsection.
C
It is hard to approximate such problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergent proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85...
Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low-contrast, leading to sharper estimates. We remark that in this case, our method is similar to that of [MR3591945], with some differences. First we consider that T...
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions with support weakly dependent of the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov...
The remainder of the this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its a discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis...
B
On the contrary, we may need to use a function θ𝜃\thetaitalic_θ of variable (b,c)𝑏𝑐(b,c)( italic_b , italic_c ); see the description of 𝖪𝗂𝗅𝗅Fsubscript𝖪𝗂𝗅𝗅𝐹\mathsf{Kill}_{F}sansserif_Kill start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT in subsection 3.1 for an example. As such, the flow of Rotate-and-Kill is ...
We think Alg-A is better in almost every aspect. This is because it is essentially simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity O⁢(n)𝑂𝑛O(n)italic_O ( italic_n ) than the others:
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases.
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on P𝑃Pitalic_P’s vertices and (2) searching the next candidate from a given one is much easier – the code length for this is 1:7 in Alg-A and in Alg-CM.
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
A
For analyzing the employed features, we rank them by importances using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated to rumors and the sentiment associated to real events in relevant tweets. In specific, the average polarity score of new...
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet le...
CrowdWisdom: Similar to [18], the core idea is to leverage the public’s common sense for rumor detection: If there are more people denying or doubting the truth of an event, this event is more likely to be a rumor. For this purpose,  [18] use an extensive list of bipolar sentiments with a set of combinational rules. In...
It has to be noted here that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable, only when the user gains more knowledge about the event.
at an early stage. Our fully automatic, cascading rumor detection method follows the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha...
C
In a follow-up work Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate, for tails for which ℓ′⁢(u)superscriptℓ′𝑢\ell^{\prime}(u)roman_ℓ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_u ) is of the form exp⁡(−uν)superscript𝑢𝜈...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
The convergence of the direction of gradient descent updates to the maximum L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT margin solution, however is very slow compared to the convergence of training loss, which explains why it is worthwhile continuing to optimize long after we have zero training ...
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very different on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a...
Perhaps most similar to our study is the line of work on understanding AdaBoost in terms its implicit bias toward large L1subscript𝐿1L_{1}italic_L start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on th...
A
We consider two types of Ensemble Features: features accumulating crowd wisdom and averaging feature for the Tweet credit Scores. The former are extracted from the surface level while the latter comes from the low dimensional level of tweet embeddings; that in a way augments the sparse crowd at early stage.
Text feature set contains totally 16 features. The feature ranking are shown in Table 7. The best one is NumOfChar which is the average number of different characters in tweets. PolarityScores is the best feature when we tested the single tweets model, but its performance in time series model is not ideal. It is true ...
CrowdWisdom. Similar to (liu2015real, ), the core idea is to leverage the public’s common sense for rumor detection: If there are more people denying or doubting the truth of an event, this event is more likely to be a rumor. For this purpose,  (liu2015real, ) use an extensive list of bipolar sentiments with a set of c...
the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, that could capture more hidden meaningful signal than only enquiries to debunk rumor...
For analysing the employed features, we rank them by importances using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated to rumors and the sentiment associated to real events in relevant tweets. In specific, the average polarity score of news even...
B
Results. The baseline and the best results of our 1s⁢tsuperscript1𝑠𝑡1^{st}1 start_POSTSUPERSCRIPT italic_s italic_t end_POSTSUPERSCRIPT stage event-type classification is shown in Table 3-top. The accuracy for basic majority vote is high for imbalanced classes, yet it is lower at weighted F1. Our learned model achie...
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and types (Breaking and Anticipate) set of entity-bearing queries. This allows us to evaluate the feature performance i.e., salience and timeliness, with time and type ...
Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming the independent loss function, that does not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is learning a linear...
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, S⁢V⁢Ma⁢l⁢l𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT, will all features, gives a rather stable performance for both NDCG and Recall...
A
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular.
with Bernoulli and contextual linear Gaussian reward functions [Kaufmann et al., 2012; Garivier and Cappé, 2011; Korda et al., 2013; Agrawal and Goyal, 2013b], as well as for context-dependent binary rewards modeled with the logistic reward function Chapelle and Li [2011]; Scott [2015] —Appendix A.3.
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015].
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular.
C
This very low threshold for now serves to measure very basic movements and to check for validity of the data. Patients 11 and 14 are the most active, both having a median of more than 50 active intervals per day (corresponding to more than 8 hours of activity).
Overall, the distribution of all three kinds of values throughout the day roughly correspond to each other. In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the days.
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements match the number carbohydrate entries – it i...
Table 2 gives an overview of the number of different measurements that are available for each patient.111For patient 9, no data is available. The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14.
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries vary between 2 per day for patient 10 and 5 per day for patient 14.
A
Further improvements of benchmark results could potentially be achieved by a number of additions to the processing pipeline. Our model demonstrates a learned preference for predicting fixations in central regions of images, but we expect performance gains from modeling the central bias in scene viewing explicitly Kümme...
The spatial allocation of attention when viewing natural images is commonly represented in the form of topographic saliency maps that depict which parts of a scene attract fixations reliably. Identifying the underlying properties of these regions would allow us to predict human fixation patterns and gain a deeper under...
Overcoming these issues requires a higher-level scene understanding that models object interactions and predicts implicit gaze and motion cues from static images. Robust object recognition could however be achieved through more recent classification networks as feature extractors Oyama and Yamanaka (2018) at the cost ...
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer...
Figure 1: A visualization of four natural images with the corresponding empirical fixation maps, our model predictions, and estimated maps based on the work by Itti et al. (1998). The network proposed in this study was not trained on the stimuli shown here and thus exhibits its generalization ability to unseen instanc...
B
There is a (polynomial-time) O⁡(log⁡(opt)⁢log⁡(h))Ooptℎ\operatorname{O}(\sqrt{\log(\operatorname{\textsf{opt}})}\log(h))roman_O ( square-root start_ARG roman_log ( opt ) end_ARG roman_log ( italic_h ) )-approximation algorithm and an O⁡(log⁡(opt)⁢opt)Ooptopt\operatorname{O}(\sqrt{\log(\operatorname{\textsf{opt}})}\oper...
In this paper, we investigate the problem of computing the locality number (in the exact sense as well as fixed-parameter algorithms and approximations) and, by doing so, we establish an interesting connection to the graph parameters cutwidth and pathwidth with algorithmic implications for approximating cutwidth. In th...
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection....
In this work, we have answered several open questions about the string parameter of the locality number. Our main tool was to relate the locality number to the graph parameters cutwidth and pathwidth via suitable reductions. As an additional result, our reductions also pointed out an interesting relationship between th...
In this section, we introduce polynomial-time reductions from the problem of computing the locality number of a word to the problem of computing the cutwidth of a graph, and vice versa. This establishes a close relationship between these two problems (and their corresponding parameters), which lets us derive several u...
C
Each spectrogram was classified into normal or abnormal using a two layer CNN which had a modified loss function that maximizes sensitivity and specificity, along with a regularization parameter. The final classification of the signal was the average probability of all segment probabilities.
They used multi-scale discrete WT to facilitate the extraction of MI features at specific frequency resolutions and softmax regression to build a multi-class classifier based on the learned features. Their validation experiments show that their method performed better than previous methods in terms of sensitivity and s...
They introduced a task formulation that segments ECG into heartbeats to reduce the number of time steps per sequence. They also extended the RNNs with an attention mechanism that enables them to reason which heartbeats the RNNs focus on to make their decisions and achieved comparable to state-of-the-art performance usi...
Their method achieved 99.1% sensitivity and 91.6% specificity which are comparable to state-of-the-art methods on the task. Dominguez et al.[110] segmented the signals and preprocessed them using the neuromorphic auditory sensor[120] to decompose the audio information into frequency bands.
In their article Acharya et al.[85] trained a four layer CNN on AFDB, MITDB and CREI, to classify between normal, AF, atrial flutter and ventricular fibrillation. Without detecting the QRS they achieved comparable performance with previous state-of-the-art methods that were based on R-peak detection and feature enginee...
C
An example of such behavior can be observed in the game Kung Fu Master – after eliminating the current set of opponents, the game screen always looks the same (it contains only player’s character and the background). The game dispatches diverse sets of new opponents, which cannot be inferred from the visual observation...
Figure 3: Comparison with Rainbow and PPO. Each bar illustrates the number of interactions with environment required by Rainbow (left) or PPO (right) to achieve the same score as our method (SimPLe). The red line indicates the 100100100100K interactions threshold which is used by the our method.
Given the stochasticity of the proposed model, SimPLe can be used with truly stochastic environments. To demonstrate this, we ran an experiment where the full pipeline (both the world model and the policy) was trained in the presence of sticky actions, as recommended in (Machado et al., 2018, Section 5). Our world mod...
The graphs are in the same format as Figure 3: each bar illustrates the number of interactions with environment required by Rainbow to achieve the same score as SimPLe (with stochastic discrete world model) using 100k steps in an environment with and without sticky actions.
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ...
C
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data making them scalable and genera...
For the purposes of this paper we use a variation of the database111https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition in which the EEG signals are split into segments with 178178178178 samples each, resulting in a balanced dataset that consists of 11500115001150011500 EEG signals.
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data making them scalable and genera...
This is achieved with the use of multilayer networks, that consist of million parameters [1], trained with backpropagation [2] on large amount of data. Although deep learning is mainly used in biomedical images there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ...
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of a S2I is whether it consists of trainable para...
C
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or the autonomous locomotion mode transition approach, where robots autonomously swap the modes predicated on pre-set criteria [8]. However, the execution of supervised con...
A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measure...
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or the autonomous locomotion mode transition approach, where robots autonomously swap the modes predicated on pre-set criteria [8]. However, the execution of supervised con...
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ...
The Cricket robot, as referenced in [20], forms the basis of this study, being a fully autonomous track-legged quadruped robot. Its design specificity lies in embodying fully autonomous behaviors, and its locomotion system showcases a unique combination of four rotational joints in each leg, which can be seen in Fig. 3...
A
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ...
In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size k𝑘kitalic_k, let η𝜂\etaitalic_η denote the number of erroneous bits (which may be not known to the algorithm). In this setting, the objective would...
Second, our model considers the size of advice and its impact on the algorithm’s performance, which is the main focus of the advice complexity field. For all problems we study, we parameterize advice by its size, i.e., we allow advice of a certain size k𝑘kitalic_k. Specifically, the advice need not necessarily encode...
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with larger number of ...
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ...
A
On the other hand, graphs of accumulated confidence values over time (chunk-by-chunk or writing-by-writing) shown in Figures 6, 7 and 8 are intended to show how lexical evidence (learned from the training data and given by g⁢v𝑔𝑣gvitalic_g italic_v) is accumulated over time, for each class, and how it is used to decid...
However, EDD poses really challenging aspects to the “standard” machine learning field. The same as with any other ERD task, we can identify at least three of these key aspects: incremental classification of sequential data, support for early classification and, explainability111Having the ability to explain its ration...
At this point, it should be clear that any attempt to address ERD problems, in a realistic fashion, should take into account 3 key requirements: incremental classification, support for early classification, and explainability. Unfortunately, to the best of our knowledge, there is no text classifier able to support thes...
In this context, this work introduces a machine learning framework, based on a novel white-box text classifier, for developing intelligent systems to deal with early risk detection (ERD) problems. In order to evaluate and analyze our classifier’s performance, we will focus on a relevant ERD task: early depression detec...
In this article, we proposed SS3, a novel text classifier that can be used as a framework to build systems for early risk detection (ERD). The SS3’s design aims at dealing, in an integrated manner, with three key challenging aspects of ERD: incremental classification of sequential data, support for early classification...
D
We have only considered a simple quadratic function optimization problem here. When it comes to deep model training, the objective functions are typically high-dimensional, non-convex, and characterized by numerous local minima and saddle points, which are much more complex than the above example.
With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training. These methods can be implemented on distributed frameworks like parameter server and al...
Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model trai...
We can find that both local momentum and global momentum implementations of DMSGD are equivalent to the serial MSGD if no sparse communication is adopted. However, when it comes to adopting sparse communication, things become different. In the later sections, we will demonstrate that global momentum is better than loca...
We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is ...
B
φ¯¯𝜑\bar{\varphi}over¯ start_ARG italic_φ end_ARG is non-differentiable due to the presence of the ℓ0subscriptℓ0\ell_{0}roman_ℓ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT pseudo-norm in Eq. 3. A way to overcome this is using ℒℒ\mathcal{L}caligraphic_L as the differentiable optimization function during training and φ¯¯𝜑\...
We set m⁢e⁢d=m(i)𝑚𝑒𝑑superscript𝑚𝑖med=m^{(i)}italic_m italic_e italic_d = italic_m start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT for utilizing fair comparison between the sparse activation functions. Specifically for Extrema activation function we introduce a ‘border tolerance’ parameter to allow neuron ac...
We choose values for d(i)superscript𝑑𝑖d^{(i)}italic_d start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT for each activation function in such as way, to approximately have the same number of activations for fair comparison of the sparse activation functions.
We then pass 𝒔(i)superscript𝒔𝑖\bm{s}^{(i)}bold_italic_s start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT and a sparsity parameter d(i)superscript𝑑𝑖d^{(i)}italic_d start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT in the sparse activation function ϕitalic-ϕ\phiitalic_ϕ resulting in the activation map 𝜶(...
The Extrema-Pool indices activation function (defined at Algorithm 2) keeps only the index of the activation with the maximum absolute amplitude from each region outlined by a grid as granular as the kernel size m(i)superscript𝑚𝑖m^{(i)}italic_m start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT and zeros out the ...
B
When UAVs need communications, and the signal to noise rate (SNR) mainly determines the quality of service. UAVs’ power and inherent noise are interferences for each other. Since there are hundreds of UAVs in the system, each UAV is unable to sense all the other UAVs’ power explicitly, but only sense and measure aggreg...
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR and coverage size are key points to decide the performance of a UAV network [6]. Respectively, power control determines the signal to energy consumption and noise ratio (SNR) ...
Supposing that a UAV covers a round area below it with a field angle θ𝜃\thetaitalic_θ as shown in Fig. 1 (b). Thus the coverage of UAVisubscriptUAV𝑖{\rm UAV}_{i}roman_UAV start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is Di=π⁢(hi⁢tan⁢θ)2subscript𝐷𝑖𝜋superscriptsubscriptℎ𝑖tan𝜃2D_{i}=\pi(h_{i}{\rm tan}\theta)^{2}it...
In order to support as many users as possible, UAVs are required to enlarge coverage size, which is equal to enlarge the coverage proportion in the mission area. Higher altitude indicates larger coverage size as shown in Fig. 1 (c). The utility of coverage size is denoted as
Coverage is another factor which determines the performance of each UAV. As presented in Fig. 1 (c), the altitude of UAV plays an important role in coverage adjusting. The higher altitude it is, the larger coverage size a UAV has. A large coverage size means a substantial opportunity of supporting more users, but a hi...
D
value for f𝑓fitalic_f at the beginning of each timestep, so that the natural toroidal flux conserving boundary condition ((∇⟂f)|Γ=0evaluated-atsubscript∇perpendicular-to𝑓Γ0(\nabla_{\perp}f)|_{\Gamma}=0( ∇ start_POSTSUBSCRIPT ⟂ end_POSTSUBSCRIPT italic_f ) | start_POSTSUBSCRIPT roman_Γ end_POSTSUBSCRIPT = 0)
𝐯˙=−𝐯⋅∇𝐯+1ρ⁢(−∇p−∇⋅𝝅¯+𝐉×𝐁)˙𝐯⋅𝐯∇𝐯1𝜌∇𝑝⋅∇¯𝝅𝐉𝐁\dot{\mathbf{v}}=-\mathbf{v}\cdot\nabla\mathbf{v}+\frac{1}{\rho}\left(-\nabla p% -\nabla\cdot\underline{\boldsymbol{\pi}}+\mathbf{J\times}\mathbf{B}\right)over˙ start_ARG bold_v end_ARG = - bold_v ⋅ ∇ bold_v + divide start_ARG 1 end_ARG start_ARG italic_ρ end_ARG ...
∫∇×𝐄θ⁢(𝐫,t)⋅𝑑𝐒=−∫𝐁˙ϕ⁢(𝐫,t)⋅𝑑𝐒⋅∇subscript𝐄𝜃𝐫𝑡differential-d𝐒⋅subscript˙𝐁italic-ϕ𝐫𝑡differential-d𝐒\displaystyle\int\,\nabla\times\mathbf{E}_{\theta}(\mathbf{r},t)\cdot d\mathbf% {S}=-\int\,\dot{\mathbf{B}}_{\phi}(\mathbf{r},t)\cdot d\mathbf{S}∫ ∇ × bold_E start_POSTSUBSCRIPT italic_θ end_POSTSUBSCRIPT ( ...
𝐁˙=∇×(𝐯×𝐁)−∇×(η⁢∇×𝐁),∇⋅𝐁=0formulae-sequence˙𝐁∇𝐯𝐁∇𝜂∇𝐁⋅∇𝐁0\dot{\mathbf{B}}=\nabla\times\left(\mathbf{v}\times\mathbf{B}\right)-\nabla% \times\left(\eta\nabla\times\mathbf{B}\right),\,\,\nabla\cdot\mathbf{B}=0over˙ start_ARG bold_B end_ARG = ∇ × ( bold_v × bold_B ) - ∇ × ( italic_η ∇ × bold_B ) , ∇ ⋅ bold_B = 0
⇒∫𝐄θ⁢(𝐫,t)⋅𝑑𝐥=V⁢(t)=−Φ˙f⁢o⁢r⁢m⁢(t)⇒absent⋅subscript𝐄𝜃𝐫𝑡differential-d𝐥𝑉𝑡subscript˙Φ𝑓𝑜𝑟𝑚𝑡\displaystyle\Rightarrow\int\,\mathbf{E}_{\theta}(\mathbf{r},t)\cdot d\mathbf{% l}=V(t)=-\dot{\Phi}_{form}(t)⇒ ∫ bold_E start_POSTSUBSCRIPT italic_θ end_POSTSUBSCRIPT ( bold_r , italic_t ) ⋅ italic_d bold_l = italic_...
B
Let r𝑟ritalic_r be the relation on 𝒞Rsubscript𝒞𝑅\mathcal{C}_{R}caligraphic_C start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT given to the left of Figure 12. Its abstract lattice ℒrsubscriptℒ𝑟\mathcal{L}_{r}caligraphic_L start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT is represented to the right.
First, remark that both A⁢→⁡B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B and B⁢→⁡A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A are possible. Indeed, if we set g=⟨b,a⟩𝑔𝑏𝑎g=\langle b,a\rangleitalic_g = ⟨ italic_b , italic_a ⟩ or g=⟨a,1⟩𝑔𝑎1g=\langle a,1\rangleitalic_g = ⟨ italic_a , 1 ⟩, then r⊧gA⁢→⁡...
The tuples t1subscript𝑡1t_{1}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, t4subscript𝑡4t_{4}italic_t start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT represent a counter-example to B⁢C⁢→⁡A𝐵𝐶→𝐴BC\operatorname{\rightarrow}Aitalic_B italic_C → italic_A for g1subscript𝑔1g_{1}italic_g start_POSTSUBSCRIPT 1 end_POSTSUBSCRI...
For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to A⁢→⁡B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B or B⁢→⁡A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A.
If no confusion is possible, the subscript R𝑅Ritalic_R will be omitted, i.e., we will use ≤,∧,∨\leq,\operatorname{\land},\operatorname{\lor}≤ , ∧ , ∨ instead of ≤R,∧R,∨Rsubscript𝑅subscript𝑅subscript𝑅\leq_{R},\operatorname{\land}_{R},\operatorname{\lor}_{R}≤ start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT , ∧ start_P...
C
This phenomenon introduces a positive bias that may lead to asymptotically sub-optimal policies, distorting the cumulative rewards. The majority of analytical and empirical studies suggest that overestimation typically stems from the max operator used in the Q-learning value function. Additionally, the noise from appro...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments, this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning namely the supervised learning and the unsupervised learning. Rein...
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optim...
Figure 6 shows the loss metrics of the three algorithms in CARTPOLE environment, this implies that using Dropout-DQN methods introduce more accurate gradient estimation of policies through iterations of different learning trails than DQN. The rate of convergence of one of Dropout-DQN methods has done more iterations t...
The sources of DQN variance are Approximation Gradient Error(AGE)[23] and Target Approximation Error(TAE)[24]. In Approximation Gradient Error, the error in gradient direction estimation of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes b...
D
\left|\mathcal{A}\cap\mathcal{B}\right|}{\left|\mathcal{A}\right|+\left|% \mathcal{B}\right|},\ \ \ \ \textrm{and,}Dice coefficient , Dice ( caligraphic_A , caligraphic_B ) = 2 divide start_ARG | caligraphic_A ∩ caligraphic_B | end_ARG start_ARG | caligraphic_A | + | caligraphic_B | end_ARG , and,
Figure 14: A 5×5555\times 55 × 5 overlap scenario with (a) the ground truth, (b) the predicted binary masks, and (c) the overlap. In (a) and (b), black and white pixels denote the foreground and the background respectively. In (c), green, grey, blue, and red pixels denote TP, TN, FP, and FN pixels respectively.
The quantitative evaluation of segmentation models can be performed using pixel-wise and overlap based measures. For binary segmentation, pixel-wise measures involve the construction of a confusion matrix to calculate the number of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) pix...
where 𝜽ssubscript𝜽𝑠\bm{\theta}_{s}bold_italic_θ start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT and 𝜽asubscript𝜽𝑎\bm{\theta}_{a}bold_italic_θ start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT denote the parameters of the segmentation and adversarial model, respectively. lb⁢c⁢esubscript𝑙𝑏𝑐𝑒l_{bce}italic_l start_PO...
Figure 13: Comparison of cross entropy and Dice losses for segmenting small and large objects. The red pixels show the ground truth and the predicted foregrounds in the left and right columns respectively. The striped and the pink pixels indicate false negative and false positive, respectively. For the top row (i.e., ...
A
Similarly to pooling operations in Convolutional Neural Networks (CNNs) that compute local summaries of neighboring pixels, we propose a pooling procedure that provides an effective coverage of the whole graph and reduces the number of nodes approximately by a factor of 2. This can be achieved by partitioning nodes in ...
This is similar to pooling in CNNs, where the maximum or the average is extracted from a small patch of neighboring pixels, which are assumed to be highly correlated and contain similar information. In the following, we formalize the problem of finding the optimal subset of vertices that can be used to represent the wh...
The rationale is that strongly connected nodes exchange a lot of information after a MP operation and, as a result, they are highly dependent and their features become similar. Therefore, one set alone can represent the whole graph sufficiently well.
Similarly to pooling operations in Convolutional Neural Networks (CNNs) that compute local summaries of neighboring pixels, we propose a pooling procedure that provides an effective coverage of the whole graph and reduces the number of nodes approximately by a factor of 2. This can be achieved by partitioning nodes in ...
As a result the graph collapses, becoming densely connected and losing its original structure. On the other hand, topological pooling methods can preserve the graph structure by operating on the whole adjacency matrix at once to compute the coarsened graphs and are not affected by uninformative node features.
B
We now compare the proposed method to state-of-the-art methods for mapping random forests into neural networks and classical machine learning classifiers such as random forests and support vector machines with a radial basis function kernel that have shown to be the best two classifiers across all UCI datasets (Fernán...
Fernández-Delgado et al. (2014) conduct extensive experiments comparing 179 classifiers on 121 UCI datasets (Dua & Graff, 2017). The authors show that random forests perform best, followed by support vector machines with a radial basis function kernel. Therefore, random forests are often considered as a reference for n...
SVM: Support vector machine (Chang & Lin, 2011) is a popular classifier that tries to find the best hyperplane that maximizes the margin between the classes. As evaluated by Fernández-Delgado et al. (2014), the best performance is achieved with a radial basis function kernel.
We now compare the proposed method to state-of-the-art methods for mapping random forests into neural networks and classical machine learning classifiers such as random forests and support vector machines with a radial basis function kernel that have shown to be the best two classifiers across all UCI datasets (Fernán...
The generalization performance has been widely studied. Zhang et al. (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data. Bornschein et al. (2020) analyze the performance across different dataset sizes. Olson et al. (2018) evaluate the performance of modern...
B
In the latter two settings with unknown transition dynamics, all the existing algorithms (Neu et al., 2012; Rosenberg and Mansour, 2019a, b) follow the gradient direction with respect to the visitation measure, and thus, differ from most practical policy optimization algorithms. In comparison, OPPO is not restricted to...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
for any function f:𝒮→ℝ:𝑓→𝒮ℝf:{\mathcal{S}}\rightarrow\mathbb{R}italic_f : caligraphic_S → blackboard_R. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al....
B
While domain-specific accelerators, such as Google’s TPU, excel in their specific performance, they are usually limited to a set of specific operations and are neither flexible in terms of data types nor sparse calculations. Furthermore, in particular for the TPU, experimentation is often hindered due to limitations in...
As this paper is mainly dedicated to giving a comprehensive literature overview of the current state of the art, an extensive evaluation of the many presented methods in Section 3 would be infeasible and it is also not within the scope of this paper.
We provide a comparison of various quantization approaches for DNNs using the CIFAR-100 data set in Section 5.1.1, followed by an evaluation of prediction quality for different types of pruned structures on the CIFAR-10 data set in Section 5.1.2. We evaluate the inference throughput of the compressed models on an ARM C...
In this section, we provide a comprehensive overview of methods that enhance the efficiency of DNNs regarding memory footprint, computation time, and energy requirements. We have identified three different major approaches that aim to reduce the computational complexity of DNNs, i.e., (i) weight and activation quantiza...
This paper is dedicated to giving an extensive overview of the current directions of research of these approaches, all of which are concerned with reducing the model size and/or improving inference efficiency while at the same time maintaining accuracy levels close to state-of-the-art models. We have identified three m...
A
}\}+\{v_{4},v_{5}\}+\{v_{5},v_{0}\},{ italic_v start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_v start_POST...
ω1⁢ is the degree-1 homology class induced bysubscript𝜔1 is the degree-1 homology class induced by\displaystyle\omega_{1}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT is the degree-1 homology class induced by
and seeks the infimal r>0𝑟0r>0italic_r > 0 such that the map induced by ιrsubscript𝜄𝑟\iota_{r}italic_ι start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at n𝑛nitalic_n-th homology level annihilates the fundamental class [M]delimited-[]𝑀[M][ italic_M ] of M𝑀Mitalic_M. This infimal value defines FillRad⁢(M)FillRad𝑀\m...
ω0⁢ is the degree-1 homology class induced bysubscript𝜔0 is the degree-1 homology class induced by\displaystyle\omega_{0}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the degree-1 homology class induced by
ω2⁢ is the degree-1 homology class induced bysubscript𝜔2 is the degree-1 homology class induced by\displaystyle\omega_{2}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT is the degree-1 homology class induced by
D
In this section, we demonstrate how our tool can support users to better understand the general behavior of t-SNE and to validate the quality of t-SNE results by presenting a typical usage scenario and a more detailed use case, both based on data sets from the medical domain. This section follows the methodology from ...
Anna is a medical student who is enthusiastic about becoming a specialist in identifying and treating breast cancer. She heard about a DR algorithm called t-SNE, and she is eager to know if it can help her to identify cancer cells accurately. Personally, Anna does not completely trust the decisions made from automatic...
Next by looking back at the t-SNE overview, she identifies a red-colored instance positioned far away from the rest of the malignant points, which grabs her attention (Figure 6(a), bottom). She thinks it might be an error in the projection, and decides to examine it closer by selecting a few points around the potential...
Anna loads the data into t-viSNE and starts the hyper-parameter exploration with a grid search. After the execution, she sees several projections that accurately separate the two classes. As she does not have any special preference, she selects the top-left projection, because the projections are sorted from best to wo...
She decides, then, to use t-SNE to explore the Breast Cancer Wisconsin data set which she downloaded from the UCI machine learning repository [58]. The data set contains measurements for 699 breast cancer cases, labeled into benign or malignant cancer. The nine dimensions included in this data set are cytological chara...
A
Since the initial version of this paper in 2020, the field of nature and bio-inspired optimization algorithms has continuously evolved. During these last years, the lack of novelty, and bad comparisons, among others, are described as problems that have to be solved to keep the field in progress. As a result, in Subsec...
The constant evolution of the field leads to a significant issue: the lack of novelty in metaheuristics. However, researchers recognize the need to address this problem and have proposed methods to evaluate the novelty of new algorithms. This section shows different studies and guidelines to measure novelty, to design...
Lastly, Section 9 presents an analysis of metaheuristics based on studies, guidelines, and other works of a more theoretical nature that help to solve the problems of the field. We perform a brief review of recent studies that address good practices for designing metaheuristics and discussions from this perspective, a...
The rest of this paper is organized as follows. In Section 2, we examine previous surveys, taxonomies, and reviews of nature- and bio-inspired algorithms reported so far in the literature. Section 3 delves into the taxonomy based on the inspiration of the algorithms. In Section 4, we present and populate the taxonomy b...
Good practices for designing metaheuristics: It gathers several works that are guidelines for good practices related to research orientation to measure novelty [26], to measure similarity in metaheuristics [27], Metaheuristics “In the Large” (to support the development, analysis, and comparison of new approaches) [28],...
A
To study the impact of different parts of the loss in Eq. (12), the performance with different λ𝜆\lambdaitalic_λ is reported in Figure 4. From it, we find that the second term (corresponding to problem (7)) plays an important role especially on UMIST. If λ𝜆\lambdaitalic_λ is set as a large value, we may get the trivi...
Figure 1: Framework of AdaGAE. k0subscript𝑘0k_{0}italic_k start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update ...
It should be emphasized that a large k0subscript𝑘0k_{0}italic_k start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT frequently leads to capture the wrong information. After the transformation of GAE, the nearest neighbors are more likely to belong with the same cluster
(1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspires us to devise a novel architecture for dec...
To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the i𝑖iitalic_i-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, T𝑇Titalic_T, is set as 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo...
B
∙∙\bullet∙ Limited stability. Current measurement studies use unstable infrastructures: volunteers running agents can reinstall computers or move to other networks (Mauch, 2013); misconfigured servers (Lone et al., 2018) (e.g., with open resolution or with faulty network stack) can be updated – all causing the network ...
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, we call the Spoofing Mapper (SMap). We apply SMap for scanning ingress-filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that ...
What SMap improves. The infrastructure of SMap is more stable than those used in previous studies, e.g., we do not risk volunteers moving to other networks. Our measurements do not rely on misconfigurations in services which can be patched, blocking the measurements. The higher stability also allows for more accurate ...
The results of the ingress filtering measurements with SMap are summarised in Table 3. The techniques that we integrated into SMap (IPID, PMTUD, DNS lookup) were found applicable to more than 92% of the measured ASes. Using SMap we identified 80% of the ASes that do not enforce ingress filtering. In what follows we com...
Traceroute Active Measurements. We analyse the datasets from the traceroute measurements performed by the CAIDA Spoofer Project within the last year 2019, (Lone et al., 2017). The measurements identified 2,500 unique loops, of these 703 were provider ASes, and 1,780 customer ASes. The dataset found 688 ASes that do no...
B
More specifically, natural odors consist of complex and variable mixtures of molecules present at variable concentrations [4]. Sensor variance arises from environmental dynamics of temperature, humidity, and background chemicals, all contributing to concept drift [5], as well as sensor drift arising from modification ...
While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape...
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first T−1𝑇1T-1italic_T - 1 batches are used for training, while the next unseen batch T𝑇Titalic_T is used for evaluation. When training the context network, subsequences of the training data a...
The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer....
This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ...
D
The goal would be to obtain an algorithm with running time 2O⁢(f⁢(δ)⁢n)superscript2𝑂𝑓𝛿𝑛2^{O(f(\delta)\sqrt{n})}2 start_POSTSUPERSCRIPT italic_O ( italic_f ( italic_δ ) square-root start_ARG italic_n end_ARG ) end_POSTSUPERSCRIPT, where f⁢(n)=O⁢(n1/6)𝑓𝑛𝑂superscript𝑛16f(n)=O(n^{1/6})italic_f ( italic_n ) = italic...
We believe that our algorithm can serve as the basis of an algorithm solving such a problem, under the assumption that the point sets are dense enough to ensure that the solution will generally follow these curves / segments. Making this precise, and investigating how the running time depends on the number of line segm...
First of all, the ΔisubscriptΔ𝑖\Delta_{i}roman_Δ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT are now independent. Second, as we will prove next, the expected running time of an algorithm on a uniformly distributed point set can be bounded by the expected running time of that algorithm on a point set generated this ...
In the second step, we therefore describe a method to generate the random point set in a different way, and we show how to relate the expected running times in these two settings. In the third step, we will explain which changes are made to the algorithm.
It would be interesting to see whether a direct proof can be given for this fundamental result. We note that the proof of Theorem 2.1 can easily be adapted to point sets of which the x𝑥xitalic_x-coordinates of the points need not be integer, as long as the difference between x𝑥xitalic_x-coordinates of any two consecu...
A
Note that it is not known whether the class of automaton semigroups is closed under taking the opposite semigroup [3, Question 13]. In defining automaton semigroups, we make a choice as to whether states act on strings on the right (as in this paper) or the left,
idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup. For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups:
from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups111Note that the c...
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
During the research and writing for this paper, the second author was previously affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment...
C
We probe the reasons behind the performance improvements of HINT and SCR. We first analyze if the results improve even when the visual cues are irrelevant (Sec. 4.2) or random (Sec. 4.3) and examine if their differences are statistically significant (Sec. 4.4). Then, we analyze the regularization effects by evaluating ...
Following Selvaraju et al. (2019), we report Spearman’s rank correlation between network’s sensitivity scores and human-based scores in Table A3. For HINT and our zero-out regularizer, we use human-based attention maps. For SCR, we use textual explanation-based scores. We find that HINT trained on human attention maps...
We compare four different variants of HINT and SCR to study the causes behind the improvements including the models that are fine-tuned on: 1) relevant regions (state-of-the-art methods) 2) irrelevant regions 3) fixed random regions and 4) variable random regions. For all variants, we fine-tune a pre-trained UpDn, whi...
We compare the baseline UpDn model with HINT and SCR-variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5555 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Fu...
As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the p...
C
Topic Modelling. Topic modelling is an unsupervised machine learning method that extracts the most probable distribution of words into topics through an iterative process (Wallach, 2006). We used topic modelling to explore the distribution of themes of text in our corpus. Topic modelling using a large corpus such as P...
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da...
Prior collections of privacy policy corpora have led to progress in privacy research. Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categorie...
For each topic, we identified a corresponding entry from the OPP-115 annotation scheme (Wilson et al., 2016), which was created by legal experts to label the contents of privacy policies. While Wilson et al. (2016) followed a bottom-up approach and identified different categories from analysis of data practices in priv...
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ...
C
The history manager saves the aforementioned manipulations or restores the previous saved step on demand. For our problematic point, we decide to remove it, and the metamodel performance increases as seen in Step 1 of StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performanc...
We normalize the importance from 0 to 1 and use a two-hue color encoding from dark red to dark green to highlight the least to the most important features for our current stored stack, see Figure 4(b). The panel in Figure 4(c) uses a table heatmap view where data features are mapped to the y-axis (13 attributes, only 7...
Figure 4: Our feature selection view that provides three different feature selection techniques. The y-axis of the table heatmap depicts the data set’s features, and the x-axis depicts the selected models in the current stored stack. Univariate-, permutation-, and accuracy-based feature selection is available as long ...
Data Features. For the next stage of the workflow, we focus on the data features. Three different feature selection approaches can be used to compute the importance of each feature for each model in the stack. Univariate feature importance is identical for all models, but different for each feature.
Permutation feature importance is measured by observing how random re-shuffling of each predictor influences model performance. Accuracy feature importance removes features one by one, similar to permutation, but then retrains each model by receiving only the accuracy as feedback. These last two approaches are very res...
C
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
cannot be adjacent to 2¯¯2\overline{2}over¯ start_ARG 2 end_ARG nor 3¯¯3\overline{3}over¯ start_ARG 3 end_ARG, and so f′superscript𝑓′f^{\prime}italic_f start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT is [013]delimited-[]013[013][ 013 ] or [010]delimited-[]010[010][ 010 ].
(E𝐂,(2¯,(u2,[013])))superscript𝐸𝐂¯2subscript𝑢2delimited-[]013(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))( italic_E start_POSTSUPERSCRIPT bold_C end_POSTSUPERSCRIPT , ( over¯ start_ARG 2 end_ARG , ( italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , [ 013 ] ) ) ), (E𝐂,((u1,[112]),(u2,[010])))superscript𝐸𝐂subscr...
Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of (v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that
D
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as ”tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla ...
The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparative poorly in task-specific adaptation. Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the languag...
In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy: RQ1. Since the parameter initialization lear...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as ”tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla ...
B
Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-base...
Activated Subarray with Limited DREs: As shown in Fig. 1, given a certain azimuth angle, there are limited DREs that can be activated. Due to the directivity, the DREs of the CCA subarray at different positions are anisotropic, and this phenomenon is different from the UPA. If an inappropriate subarray is activated, t...
After the discussion on the characteristics of CCA, in this subsection, we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the ...
According to Theorem 1, only a subarray of CCA can be activated at a certain beam angle. Next, the relationship between the subarray and the beam angles is studied. The number and position of the activated elements determine the subarray. Assuming that the elements in the activated subarray are adjacent to each other ...
and the CCA scheme achieves higher SE than the UPA scheme obviously with different t-UAV number K𝐾Kitalic_K. The main reason is that the UPA with DREs can only receive/transmit the signal within a limited angular range at a certain time slot while the CCA does not have such limitation. It is also shown that the gap be...
A
The case of 1111-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
The case of 1111-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
After the merging the total degree of each vertex increases by t⁢δ⁢(A0,B0)2𝑡𝛿superscriptsubscript𝐴0subscript𝐵02t\delta(A_{0},B_{0})^{2}italic_t italic_δ ( italic_A start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT , italic_B start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. We perform the...
To conclude this section, we stress that although the 1111-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
B
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
C
We show that the 6-layer Transformer using depth-wise LSTM can bring significant improvements in both WMT tasks and the challenging OPUS-100 multilingual NMT task. We show that depth-wise LSTM also has the ability to support deep Transformers with up to 24242424 layers, and that the 12-layer Transformer using depth-wis...
We suggest that selectively aggregating different layer representations of the Transformer may improve the performance, and propose to use depth-wise LSTMs to connect stacked (sub-) layers of Transformers. We show how Transformer layer normalization and feed-forward sub-layers can be absorbed by depth-wise LSTMs, while...
Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t...
The computation of depth-wise LSTM is the same as the conventional LSTM except that depth-wise LSTM connects stacked Transformer layers instead of tokens in a token sequence as in conventional LSTMs. The gate mechanisms in the original LSTM are to enhance its ability in capturing long-distance relations and to address ...
We explore the use of LSTMs to connect layers in stacked deep architectures for Transformers: we show how residual connections can be replaced by LSTMs connecting self-, cross- and masked self-attention layers. In contrast to standard LSTMs that process token sequences, we refer to the use of LSTMs in connecting stacke...
C
Alexandroff topology of the quasi-order ⊆isubscript𝑖\subseteq_{i}⊆ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT) and the fragment 𝖥≜𝖤𝖥𝖮⁢[σ]≜𝖥𝖤𝖥𝖮delimited-[]σ\mathsf{F}\triangleq\mathsf{EFO}[\upsigma]sansserif_F ≜ sansserif_EFO [ roman_σ ]. The Łoś-Tarski Theorem corresponds
\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\right\rangle⟨ ⟦ sansserif_EFO [ roman_σ ] ⟧ start_POSTSUBSCRIPT roman_Struct ( roman_σ ) end_POSTSUBSCRIPT ⟩ ⊆ ⟨ roman_τ start_POSTSUBSCRIPT ⊆ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT ∩ ⟦ sansserif_FO [ roman_σ ] ⟧ start...
\rrbracket_{\operatorname{Struct}(\upsigma)}\right\rangle=\left\langle% \llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\right\rangle⟨ roman_τ start_POSTSUBSCRIPT ⊆ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT ∩ ⟦ sansserif_FO [ roman_σ ] ⟧ start_POSTSUBSCRIPT roman_Struc...
\operatorname{Struct}(\upsigma)}\;.roman_τ start_POSTSUBSCRIPT ⊆ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT ∩ ⟦ sansserif_FO [ roman_σ ] ⟧ start_POSTSUBSCRIPT roman_Struct ( roman_σ ) end_POSTSUBSCRIPT = ⟦ sansserif_EFO [ roman_σ ] ⟧ start_POSTSUBSCRIPT roman_Struct ( roman_σ ) end_POSTSUBSCRIPT ....
\operatorname{Struct}(\upsigma)\right)roman_τ start_POSTSUBSCRIPT ⊆ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT ∩ ⟦ sansserif_FO [ roman_σ ] ⟧ start_POSTSUBSCRIPT roman_Struct ( roman_σ ) end_POSTSUBSCRIPT ⊆ caligraphic_K start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT ( roman_Struct ( roman_σ ) ).
C
Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and causes the insufficient distortion perception. To bridge the gap between image feature and calibration objective...
Figure 5: Comparison of two learning representations for distortion estimation, distortion parameter (left) and ordinal distortion (right). In contrast to the ambiguous relationship between the distortion distribution and distortion parameter, the proposed ordinal distortion displays an evident positive correlation to ...
To exhibit the performance fairly, we employ three common network architectures VGG16, ResNet50, and InceptionV3 as the backbone networks of the learning model. The proposed MDLD metric is used to express the distortion estimation error due to its unique and fair measurement for distortion distribution. To be specific...
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimate...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed o...
C
We use a pre-trained ViT 555https://huggingface.co/google/vit-base-patch16-224-in21k [4] model and fine-tune it on the CIFAR-10/CIFAR-100 datasets. The experiments are implemented based on the Transformers 666https://github.com/huggingface/transformers framework. We fine-tune the model with 20 epochs.
We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33] proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11]
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
A
When the algorithm terminates with Cs=∅subscript𝐶𝑠C_{s}=\emptysetitalic_C start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = ∅, Lemma 5.2 ensure the solution zfinalsuperscript𝑧finalz^{\text{final}}italic_z start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT is integral. By Lemma 5.5, any client j𝑗jitalic_j with d⁢(j,S)>...
  FAs¯←{ijA|j∈HA⁢ and ⁢FI∩GπI⁢j=∅}←subscriptsuperscript𝐹¯𝑠𝐴conditional-setsubscriptsuperscript𝑖𝐴𝑗𝑗subscript𝐻𝐴 and subscript𝐹𝐼subscript𝐺superscript𝜋𝐼𝑗F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}~{}|~{}j\in H_{A}\text{ and }F_{I}\cap G_{% \pi^{I}j}=\emptyset\}italic_F start_POSTSUPERSCRIPT over¯ start_ARG italic_s...
For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here, ℱℱ\mathcal{F}caligraphic_F and 𝒞𝒞\mathcal{C...
Brian Brubach was supported in part by NSF awards CCF-1422569 and CCF-1749864, and by research awards from Adobe. Nathaniel Grammel and Leonidas Tsepenekas were supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. Aravind Srinivasan was supported in part by NSF awa...
        do FA←{ijA|j∈HA⁢ and ⁢FI∩GπI⁢j=∅}←subscript𝐹𝐴conditional-setsubscriptsuperscript𝑖𝐴𝑗𝑗subscript𝐻𝐴 and subscript𝐹𝐼subscript𝐺superscript𝜋𝐼𝑗F_{A}\leftarrow\{i^{A}_{j}~{}|~{}j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}italic_F start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ← { italic_i star...
C
In real networked systems, the information exchange among nodes is often affected by communication noises, and the structure of the network often changes randomly due to packet dropouts, link/node failures and recreations, which are studied in [8]-[10].
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e. Both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent.) rather than i.i.d. graph sequences as in [12]-[15], and additive and...
such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), et al. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function...
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
However, a variety of random factors may co-exist in practical environment. In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d...
D
Observing from Figure 7(a), the information loss of MuCo increases with the decrease of parameter δ𝛿\deltaitalic_δ. According to Corollary 3.2, each QI value in the released table corresponds to more records with the reduction of δ𝛿\deltaitalic_δ, causing that more records have to be involved for covering on the QI ...
This experiment measures the information loss of MuCo. Note that, the mechanism of MuCo is much more different from that of generalization. Thus, for the sake of fairness, we compare the information loss of MuCo and Mondrian when they provide the same level of protections. Then, the experiment measures the effectivene...
In this experiment, we use the approach of aggregate query answering [37] to check the information utility of MuCo. We randomly generate 1,000 queries and calculate the average relative error rate for comparison. The sequence of the query is expressed in the following form SELECT SUM(salary) FROM Microdata
Results from Figure 10 show that the increase of l𝑙litalic_l lowers the information loss but raises the relative error rate. It is mainly because the number of tuples in each group increases with the growth of l𝑙litalic_l. On the one hand, in random output tables, the probabilities that tuples have to cover on the Q...
We observe that the results of MuCo are much better than that of Mondrian and Anatomy. The primary reason is that MuCo retains the most distributions of the original QI values and the results of queries are specific records rather than groups. Consequently, the accuracy of query answering of MuCo is much better and mo...
B
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
Table 2: PointRend’s step-by-step performance on our own validation set (splitted from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “...
A
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma s...
We denote by εi:{−1,1}n→{−1,1}:subscript𝜀𝑖→superscript11𝑛11\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT : { - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → { - 1 , 1 } the projection onto the i𝑖iitalic_i-s coordinate: εi⁢(δ1,…,δn)=δisubscript𝜀𝑖subsc...
C
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. They are much less compared with MASTER, OPT-WLSVI, LSVI-UCB, Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th...
In this section, we perform empirical experiments on synthetic datasets to illustrate the effectiveness of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart. We compare the cumulative rewards of the proposed algorithms with five baseline algorithms: Epsilon-Greedy (Watkins, 1989), Random-Exploration, LSVI-UCB (Jin et al., 2020...
We develop the LSVI-UCB-Restart algorithm and analyze the dynamic regret bound for both cases that local variations are known or unknown, assuming the total variations are known. We define local variations (Eq. (2)) as the change in the environment between two consecutive epochs instead of the total changes over the en...
We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202...
D
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures  1 and  2) which is statistically significant (r⁢(9)=−0.81𝑟90.81r(9)=-0.81italic_r ( 9 ) = - 0.81, p<.005𝑝.005p<.005italic_p < .005). Trust is built on transparency and truthfulness, and t...
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures  1 and  2) which is statistically significant (r⁢(9)=−0.81𝑟90.81r(9)=-0.81italic_r ( 9 ) = - 0.81, p<.005𝑝.005p<.005italic_p < .005). Trust is built on transparency and truthfulness, and t...
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
B
We further evaluate decentRL alongside the open-world KG embedding method LAN [21] and the best-performing GNN-based method CompGCN [13] on FB15K-237 with new entities. As shown in Figure 5, decentRL significantly outperforms CompGCN in this setting. LAN, specifically designed for new entities, experiences minimal perf...
Table 4 presents the results of conventional entity alignment. decentRL achieves state-of-the-art performance, surpassing all others in Hits@1 and MRR. AliNet [39], a hybrid method combining GCN and GAT, performs better than the methods solely based on GAT or GCN on many metrics. Nonetheless, across most metrics and da...
In Table 8, we present more detailed entity prediction results on open-world FB15K-237, considering the influence of different decoders. Our observations indicate that decentRL consistently outperforms the other methods across most metrics when using TransE and DistMult as decoders. Furthermore, we provide results on ...
Table 6 and Table 7 present the results for conventional entity prediction. decentRL demonstrates competitive or even superior performance when compared to state-of-the-art methods on the FB15K and WN18 benchmarks, showcasing its efficacy in entity prediction. While on the FB15K-237 and WN18RR datasets, the performanc...
Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg...
B
We illustrate the results in Fig. 9. We observe that the episode length becomes longer over training time with the intrinsic reward estimated from VDM, as anticipated. We observe that our method reaches the episode length of 104superscript10410^{4}10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT with the minimum iterati...
Finally, to evaluate our proposed method in real-world tasks, we conduct experiments on the real-world robot arm to train a self-supervised exploration policy. We highlight that policy learning in a real robot arm needs to consider both the stochasticity in the robot system and the different dynamics corresponding to d...
At the beginning of each episode, we put three objects in the workspace. Using fewer objects makes the robot arm harder to interact with the objects by taking actions randomly. We use a set of 10 different objects for training and 5555 objects for testing. We follow [13] and use the Object-Interaction Frequency (OIF) ...
Upon fitting VDM, we propose an intrinsic reward by an upper bound of the negative log-likelihood, and conduct self-supervised exploration based on the proposed intrinsic reward. We evaluate the proposed method on several challenging image-based tasks, including 1) Atari games, 2) Atari games with sticky actions, which...
We demonstrate the setup of the experiment in Fig. 10. The equipment mainly includes an RGB-D camera that provides the image-based observations, a UR5 robot arm that interacts with the environment, and different objects in front of the robot arm. An example of the RGB-D image is shown in Fig. 11. We develop a robot en...
A
Finally, we observe that Floater-Hormann interpolation performs better than multivariate cubic splines. It is comparable to 5t⁢hsuperscript5𝑡ℎ5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT-order splines, but reaches an accuracy of 10−7superscript10710^{-7}10 start_POSTSUPERSCRIPT - 7 end_POSTSUPER...
Finally, we observe that Floater-Hormann interpolation performs better than multivariate cubic splines. It is comparable to 5t⁢hsuperscript5𝑡ℎ5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT-order splines, but reaches an accuracy of 10−7superscript10710^{-7}10 start_POSTSUPERSCRIPT - 7 end_POSTSUPER...
Several improvements have been presented, including Floatman–Hormann interpolation [16, 38], that reach better approximation quality than splines. However, all of them share the above weaknesses (A,B,C), as we demonstrate in the numerical experiments of Section 8.
In contrast to previous approaches, such as Chebfun [32], multivariate splines [26], and Floater-Hormann interpolation [38], the present MIP algorithm achieves exponential approximation rates for the Runge function using only sub-exponentially many interpolation nodes.
The observations made in 2D remain valid. However, Floater-Hormann becomes indistinguishable from 5t⁢hsuperscript5𝑡ℎ5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT-order splines. Further, when considering the amount of coefficients/nodes required to determine the interpolant, plotted in the right p...
D
As a result, the sample complexity for estimating the Wasserstein distance W⁢(μ,ν)𝑊𝜇𝜈W(\mu,\nu)italic_W ( italic_μ , italic_ν ) up to ϵitalic-ϵ\epsilonitalic_ϵ sub-optimality gap is of order 𝒪~⁢(ϵd∨2)~𝒪superscriptitalic-ϵ𝑑2\tilde{\mathcal{O}}(\epsilon^{d\lor 2})over~ start_ARG caligraphic_O end_ARG ( italic_ϵ st...
Motivated by Example 1, we propose the projected Wasserstein distance in Definition 2 to improve the sample complexity. This distance can be viewed as a special IPM with the function space defined in (1), a collection of 1111-Lipschitz functions in composition with an orthogonal k𝑘kitalic_k-dimensional linear mapping.
The orthogonal constraint on the projection mapping A𝐴Aitalic_A is for normalization, such that any two different projection mappings have distinct projection directions. The projected Wasserstein distance can also be viewed as a special case of integral probability metric with the function space
The 1111-Wasserstein distance can be viewed as a special IPM with ℱ=Lip1ℱsubscriptLip1\mathcal{F}=\text{Lip}_{1}caligraphic_F = Lip start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, where the Rademacher complexity of ℱℱ\mathcal{F}caligraphic_F is given by [42, Example 4]:
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized. The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by ...
A
Learning disentangled factors h∼qϕ⁢(H|x)similar-toℎsubscript𝑞italic-ϕconditional𝐻𝑥h\sim q_{\phi}(H|x)italic_h ∼ italic_q start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ( italic_H | italic_x ) that are semantically meaningful representations of the observation x𝑥xitalic_x is highly desirable because such interpreta...
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and corre...
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise...
Specifically, we apply a DGM to learn the nuisance variables Z𝑍Zitalic_Z, conditioned on the output image of the first part, and use Z𝑍Zitalic_Z in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the details information captured in Z𝑍Z...
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
A
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations as a series and parallel connection were required. However, one can think about whether the four pin designs are the...
Optical logic aggregates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the number shown in Fig. 5 can be applied to the ...
Fig. 3 is AND and/or gate consisting of 3-pin based logics, Fig. 3 also shows the connection status of the output pin when A=0, B=1 is entered in the AND gate. when A=0, B=1, or A is connected, and B is connected, output C is connected only to the following two pins, and this is the correct result for AND operation.
The NOT gate can be operated in a logic-negative operation through one ‘twisting’ as in a 4-pin. To be exact, the position of the middle ground pin is fixed and is a structural transformation that changes the position of the remaining two true and false pins.
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations as a series and parallel connection were required. However, one can think about whether the four pin designs are the...
C
Given a group G𝐺Gitalic_G of permutations over a finite set, the (group) representation represents the group action in terms of invertible matrices over a finite-dimensional vector space, and the group operation is replaced by matrix multiplication. Such representations are imperative in studying abstract groups as it...
Given a group G𝐺Gitalic_G of permutations over a finite set, the (group) representation represents the group action in terms of invertible matrices over a finite-dimensional vector space, and the group operation is replaced by matrix multiplication. Such representations are imperative in studying abstract groups as it...
A finite group, GFsubscript𝐺𝐹G_{F}italic_G start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT, can be generated from Fisubscript𝐹𝑖F_{i}italic_F start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT using composition as the group operation. In this section, we devise a procedure to compute the linear representation of the gro...
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over 𝔽𝔽\mathbb{F}b...
A finite field, by definition, is a finite set, and the set of all permutation polynomials over the finite field forms a group under composition. Given a finite subset of such permutations, we can compute a group generated by this set. In this paper, we propose a representation of such a group using the concept of lin...
D
Stacked penalized logistic regression (StaPLR) (Van Loon \BOthers., \APACyear2020) is a method specifically developed to tackle the joint classification and view selection problem. Compared with a variant of the lasso for selecting groups of features (the so-called group lasso (M. Yuan \BBA Lin, \APACyear2007)), StaPLR...
In high-dimensional biomedical studies, a common goal is to create an accurate classification model using only a subset of the features (Y. Li \BOthers., \APACyear2018). A popular approach to this type of joint classification and feature selection problem is to apply penalized methods such as the lasso (Tibshirani, \AP...
For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu \BOthers., \APACyear2012). An exam...
In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of vi...
A particular challenge of the aforementioned joint classification and view selection problem is its inherent trade-off between accuracy and sparsity. For example, the most accurate model may not perform the best in terms of view selection. In fact, the prediction-optimal amount of regularization causes the lasso to sel...
D
In this paper, we introduce DepAD, a versatile framework for dependency-based anomaly detection. DepAD offers a general approach to construct effective, scalable, and flexible anomaly detection algorithms by leveraging off-the-shelf feature selection techniques and supervised prediction models for various data types a...
Effectiveness: The two DepAD algorithms, FBED-CART-PS, and FBED-CART-Sum, demonstrate superior performance over nine state-of-the-art anomaly detection methods in the majority of cases. The two DepAD methods do not outperform wkNN. However, they show advantages over wkNN in higher dimensional datasets in terms of both...
We systematically and empirically study the performance of representative off-the-shelf techniques and their combinations in the DepAD framework. We identify two well-performing dependency-based methods. The two DepAD algorithms consistently outperform nine benchmark algorithms on 32 datasets.
In the subsection, we answer the question, i.e., compared with state-of-the-art anomaly detection methods, how is the performance of the instantiated DepAD algorithms? We choose the two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, to compare them with the nine state-of-the-art anomaly detection methods shown in Ta...
We compare two high-performing instantiations of DepAD, FBED-CART-PS and FBED-CART-Sum, against nine state-of-the-art anomaly detection methods across 32 commonly used datasets. The results demonstrate that DepAD algorithms consistently outperform existing methods in most cases. Moreover, the DepAD framework’s high int...
D
At the start of the interaction, when no contexts have been observed, θ^tsubscript^𝜃𝑡\hat{\theta}_{t}over^ start_ARG italic_θ end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT is well-defined by Eq (5) when λt>0subscript𝜆𝑡0\lambda_{t}>0italic_λ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT > 0. Therefore, th...
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a m...
Algorithm 1 follows the template of in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. Technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
where pessimism is the additive inverse of the optimism (difference between the payoffs under true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that θ∗∈Ct⁢(δ)subscript𝜃subscript𝐶𝑡𝛿\theta_{*}\in C_{t}(\delta)italic_θ start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ∈ italic_C star...
B
Multi-scale input. The magnification process may inevitably impair the information in the clip, thus the original video clip, which contains the original intact information, is also necessary. To take advantage of the complementary properties of both scales, we design a video stitching technique to piece them together...
Cross-scale correlations. The original clip and the magnified clip, albeit different, are highly correlated since they contain the same video content. If we can utilize their correlations and draw connections between their features, then the impaired information in the magnified clip can be rectified by the original cl...
Specifically, we propose a Video self-Stitching Graph Network (VSGN) for improving performance of short actions in the TAL problem. Our VSGN is a multi-level cross-scale framework that contains two major components: video self-stitching (VSS); cross-scale graph pyramid network (xGPN). In VSS, we focus on a short period...
Multi-scale input. The magnification process may inevitably impair the information in the clip, thus the original video clip, which contains the original intact information, is also necessary. To take advantage of the complementary properties of both scales, we design a video stitching technique to piece them together...
Clip O and Clip U. In Table 5, we compare the performance when generating predictions only from Clip O, only from Clip U, and from both with the same well-trained VSGN model. We can see that the two clips still result in different performance even after their features are aggregated throughout the network. Clip O is be...
A
The user interface of VisEvol is structured as follows: (1) two projection-based views, referred to as Projections 1 and 2, occupy the central UI area (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d and e));
After another hyperparameter space search (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d)) with the help of supporter views (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c, f, and g)), out of the 290 models generated in...
(2) active views relevant for both projections are positioned on the top (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(b and c)); and (3) commonly-shared views that update on the exploration of either Projection 1 or 2 are placed at the bottom (see VisEvol: Visual Ana...
The user interface of VisEvol is structured as follows: (1) two projection-based views, referred to as Projections 1 and 2, occupy the central UI area (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d and e));
(ii) in the next exploration phase, compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c–e)); (iii) during the detailed examination phase, zoo...
B
A comprehensive review of the broader category of multi-agent algorithms is presented in [33], while a survey specifically focusing on aerial swarm robotics is provided in [34]. Additionally, [35] offers an overview of existing swarm robotic applications. For swarm guidance purposes, certain deterministic algorithms ha...
the performance of the algorithm drops significantly if the current density distribution of the swarm cannot be estimated accurately. The time-inhomogeneous Markov chain approach to the probabilistic swarm guidance problem (PSG-IMC algorithm) is developed in [14] to minimize the number of state transitions. This algori...
In the context of addressing the guidance problem for a large number of agents, considering the spatial distribution of swarm agents and directing it towards a desired steady-state distribution offers a computationally efficient approach. In this regard, both probabilistic and deterministic swarm guidance algorithms ar...
and a complex communication architecture is not required for the estimation of the distribution. By presenting numerical evidence within the context of the probabilistic swarm guidance problem, we demonstrate that the convergence rate of the swarm distribution to the desired steady-state distribution is substantially f...
This algorithm treats the spatial distribution of swarm agents, called the density distribution, as a probability distribution and employs the Metropolis-Hastings (M-H) algorithm to synthesize a Markov chain that guides the density distribution toward a desired state. The probabilistic guidance algorithm led to the dev...
B
For example, it has been demonstrated that explicitly modelling the low-dimensional structure of shape matching problems often allows to find global optima for a wide range of shape matching formulations [5]. It was also shown that learning suitable feature representations from shapes improves the matching performance...
While (near)-isometric shape matching has been studied extensively for the case of matching a pair of shapes, the isometric multi-shape matching problem, where an entire collection of (near-isometric) shapes is to be matched, is less explored. Important applications of isometric multi-shape matching include learning lo...
Moreover, when assuming (near)-isometries between shapes, efficient and powerful spectral approaches can be leveraged for shape matching [51]. Isometries describe classes of deformable shapes of the same type but in different poses, \eghumans or animals who are able to adopt a variety of poses. Potential applications f...
Alternatively, one could solve pairwise shape matching problems between all pairs of shapes in the shape collection. Although this way there is no bias, in general the resulting correspondences are not cycle-consistent. As such, matching shape A via shape B to shape C, may lead to a different correspondence than matchi...
In principle, any pairwise shape matching method can be used for matching a shape collection. To do so, one can select one of the shapes as reference, and then solve a sequence of pairwise shape matching problems between each of the remaining shapes and the reference. However, a major disadvantage is that such an appr...
B
On the side of path graphs, we believe that, compared to algorithms in [3, 22], our algorithm is simpler for several reasons: the overall treatment is shorter, the algorithm does not require complex data structures, its correctness is a consequence of the characterization in [1], and there are a few implementation deta...
Directed path graphs are characterized by Gavril [9], in the same article he also gives the first recognition algorithms that has O⁢(n4)𝑂superscript𝑛4O(n^{4})italic_O ( italic_n start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT ) time complexity. In the above cited article, Monma and Wei [18] give the second characterizati...
On the side of directed path graphs, prior to this paper, it was necessary to implement two algorithms to recognize them: a recognition algorithm for path graphs as in [3, 22], and the algorithm in [4] that in linear time is able to determining whether a path graph is also a directed path graph. Our algorithm directly...
We presented the first recognition algorithm for both path graphs and directed path graphs. Both graph classes are characterized very similarly in [18], and we extended the simpler characterization of path graphs in [1] to include directed path graphs as well; this result can be of interest itself. Thus, now these two ...
On the side of directed path graphs, at the state of art, our algorithm is the only one that does not use the results in [4], in which it is given a linear time algorithm able to establish whether a path graph is a directed path graph too (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ...
B
In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the origi...
Dolphins: this network consists of frequent associations between 62 dolphins in a community living off Doubtful Sound. In the Dolphins network, node denotes a dolphin, and edge stands for companionship dolphins0 ; dolphins1 ; dolphins2 . The network splits naturally into two large groups females and males dolphins1 ; ...
The development of the Internet not only changes people’s lifestyles but also produces and records a large number of network structure data. Therefore, networks are often associated with our life, such as friendship networks and social networks, and they are also essential in science, such as biological networks (2002F...
In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the origi...
The ego-networks dataset contains more than 1000 ego-networks from Facebook, Twitter, and GooglePlus. In an ego-network, all the nodes are friends of one central user and the friendship groups or circles (depending on the platform) set by this user can be used as ground truth communities. The SNAP ego-networks are ope...
A
See, e.g., Cheng et al. (2017); Cheng and Bartlett (2018); Xu et al. (2018); Durmus et al. (2019) and the references therein for the analysis of the Langevin MCMC algorithm. Besides, it is shown that (discrete-time) Langevin MCMC can be viewed as (a discretization of) the Wasserstein gradient flow of KL⁢[p⁢(z),p⁢(z|x))...
To circumvent such intractability, variational inference turns to minimize the KL divergence between a variational posterior p𝑝pitalic_p and the true posterior p⁢(z|x)𝑝conditional𝑧𝑥p(z{\,|\,}x)italic_p ( italic_z | italic_x ) in (3.8) (Wainwright and Jordan, 2008; Blei et al., 2017), yielding the following distribu...
When ℳℳ\mathcal{M}caligraphic_M is specified by the level set of KL divergence, for any fixed θ𝜃\thetaitalic_θ, using Lagrangian duality, we can transform the inner problem in (3.7) into a KL divergence regularized distributional optimization problem as in (3.1) with g𝑔gitalic_g is replaced by ℓ⁢(⋅;θ)ℓ⋅𝜃\ell(\cdot;\...
The goal of GAN (Goodfellow et al., 2014) is to learn a generative model p𝑝pitalic_p that is close to a target distribution q𝑞qitalic_q, where p𝑝pitalic_p is defined by transforming a low dimensional noise via a neural network. Since the objective in (3.1) includes f𝑓fitalic_f-divergences as special cases, our dis...
In other words, posterior sampling with Langevin MCMC can be posed as a distributional optimization method. Furthermore, in addition to the KL divergence, F⁢(p)𝐹𝑝F(p)italic_F ( italic_p ) in (3.1) also incorporates other f𝑓fitalic_f-divergences (Csiszár, 1967).
D
Mixedh. The mixedh is a mixed high traffic flow with a total flow of 4770 in one hour, in order to simulate a heavy peak. The difference from the mixedl setting is that the arrival rate of vehicles during 1200-1800s increased from 0.33 vehicles/s to 4.0 vehicles/s. The data statistics are listed in Tab. II.
Following existing studies [46, 13, 40, 41, 14], we use the average travel time to evaluate the performance of different methods for traffic signal control. The average travel time indicates the overall traffic situation in an area over a period of time. For a detailed definition of average travel time, see Section 3....
Most conventional traffic signal control methods are designed based on fixed-time signal control [21], actuated control [22] or self-organizing traffic signal control [23]. These approaches rely on expert knowledge and often perform unsatisfactorily in complicated real-world situations. To solve this problem, several o...
Definition 3 (Average Travel Time) The travel time of a vehicle is the time discrepancy between entering and leaving a particular area. A vehicle from the origin to the destination (OD) is regarded as a travel. Average travel time of all vehicles in a road network is the most frequently used measure to evaluate the per...
Reward. We define the reward for agent i𝑖iitalic_i as the negative of the queue length on incoming lanes. Note that optimizing queue length has been proved to be equivalent to optimizing average travel time in [38] under certain assumptions. Average travel time is a global criteria which cannot be optimized directly ...
A
~{}~{}>~{}~{}\|\mathbf{x}_{j}-\mathbf{x}_{*}\|_{2}∥ overroman_ˇ start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT - bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ∥ ≥ ∥ overroman_ˇ start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT - bold_x start_POSTSUBSCRIPT ∗ end_POS...
}\,\mathbf{f}(\mathbf{x}_{j})\,=\,\mathbf{0}bold_f start_POSTSUBSCRIPT bold_x end_POSTSUBSCRIPT ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT start_POSTSUPERSCRIPT † end_POSTSUPERSCRIPT bold_f ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) = bo...
Since 𝐟⁢(𝐱j)≠ 0𝐟subscript𝐱𝑗 0\mathbf{f}(\mathbf{x}_{j})\,\neq\,\mathbf{0}bold_f ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) ≠ bold_0, we have 𝐱j≠𝐱ˇjsubscript𝐱𝑗subscriptˇ𝐱𝑗\mathbf{x}_{j}\,\neq\,\check{\mathbf{x}}_{j}bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ≠ overroman_ˇ start_ARG b...
we can assume (𝐱j−𝐱ˇj)/‖𝐱j−𝐱ˇj‖2subscript𝐱𝑗subscriptˇ𝐱𝑗subscriptnormsubscript𝐱𝑗subscriptˇ𝐱𝑗2(\mathbf{x}_{j}-\check{\mathbf{x}}_{j})/\|\mathbf{x}_{j}-\check{\mathbf{x}}_{j% }\|_{2}( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT - overroman_ˇ start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_j end_P...
𝐟⁢(𝐱j)+J⁢(𝐱j)⁢(𝐱−𝐱j)=𝟎.𝐟subscript𝐱𝑗𝐽subscript𝐱𝑗𝐱subscript𝐱𝑗0\mathbf{f}(\mathbf{x}_{j})+J(\mathbf{x}_{j})\,(\mathbf{x}-\mathbf{x}_{j})~{}~{% }=~{}~{}\mathbf{0}.bold_f ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) + italic_J ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) ( bold_x - b...
B
\cdot 6+\frac{n}{20}\cdot 6.1}=\frac{6.1}{101.1}<0.061over^ start_ARG italic_d end_ARG ( italic_σ ) = divide start_ARG divide start_ARG italic_n end_ARG start_ARG 20 end_ARG ⋅ 6.1 end_ARG start_ARG divide start_ARG italic_n end_ARG start_ARG 2 end_ARG ⋅ 4 + divide start_ARG 9 italic_n end_ARG start_ARG 20 end_ARG ⋅ 6 +...
Our analysis of ProfilePacking, as stated in Theorem 3, in conjunction with the PAC-learnability of frequency predictions, can help obtain a sampling-based algorithm with an efficient tradeoff between the number of sampled items and its attained competitive ratio. More precisely, consider the setting in which the onlin...
In order to analyze the performance of an online algorithm, we will rely on the well-established framework of competitive analysis, which provides strict, theoretical performance guarantees against worst-case scenarios. In fact, as stated in (?), bin packing has served as “an early proving ground for this type of analy...
Last, we show that our algorithms can be applicable in other settings. Specifically, we show an application of our algorithms in the context of Virtual Machine (VM) placement in large data centers (?): here, we obtain a more refined competitive analysis in terms of the consolidation ratio, which reflects the maximum n...
Following the influential work (?), we refer to the competitive ratio of an algorithm with an error-free prediction as the consistency of the algorithm, and to the competitive ratio with an adversarial prediction as its robustness. Several online optimization problems have been studied in this learning-augmented settin...
A
Practically speaking, our approach transforms the embedding of point cloud obtained from the base model to parametrize the bijective function represented by the MLP network. This function aims to find a mapping between a canonical 2D patch to the 3D patch on the surface of the target mesh. We condition the positioning ...
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperClou...
Patch-based approaches (Yang et al., 2018b; Groueix et al., 2018; Bednarik et al., 2020; Deng et al., 2020b) are much more flexible and enable modeling virtually any surfaces, including those with a non-disk topology. It is achieved using parametric mappings to transform 2D patches into a set of 3D shapes. The first d...
In literature, there exist a huge variety of 3D shape reconstruction models. The most popular ones are dense, pixel-wise depth maps, or normal maps (Eigen et al., 2014; Bansal et al., 2016; Bednarik et al., 2018; Tsoli et al., 2019; Zeng et al., 2019), point clouds (Fan et al., 2017; Qi et al., 2017b; Yang et al., 2018...
Recently proposed object representations address this pitfall of point clouds by modeling object surfaces with polygonal meshes (Wang et al., 2018; Groueix et al., 2018; Yang et al., 2018b; Spurek et al., 2020a, b). They define a mesh as a set of vertices that are joined with edges in triangles. These triangles create...
C
For non-strongly convex-concave case, distributed SPP with local and global variables were studied in [41], where the authors proposed a subgradient-based algorithm for non-smooth problems with O⁢(1/N)𝑂1𝑁O(1/\sqrt{N})italic_O ( 1 / square-root start_ARG italic_N end_ARG ) convergence guarantee (N𝑁Nitalic_N is the n...
For non-strongly convex-concave case, distributed SPP with local and global variables were studied in [41], where the authors proposed a subgradient-based algorithm for non-smooth problems with O⁢(1/N)𝑂1𝑁O(1/\sqrt{N})italic_O ( 1 / square-root start_ARG italic_N end_ARG ) convergence guarantee (N𝑁Nitalic_N is the n...
Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem and solve it by the DMP algorithm. Similarly to Section (3), we consider a SPP in proximal setup and introduce Lagrangian multipliers for the common variables. However, in t...
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ...
Paper [61] introduced an Extra-gradient algorithm for distributed multi-block SPP with affine constraints. Their method covers the Euclidean case and the algorithm has O⁢(1/N)𝑂1𝑁O(1/N)italic_O ( 1 / italic_N ) convergence rate. Our paper proposes an algorithm based on adding Lagrangian multipliers to consensus constr...
D
And from the bijection we can deduce that ∩(Tw)<∩(Gw∧Ts)subscript𝑇𝑤subscript𝐺𝑤subscript𝑇𝑠\cap(T_{w})<\cap(G_{w}\wedge T_{s})∩ ( italic_T start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT ) < ∩ ( italic_G start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT ∧ italic_T start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ) for so...
In this section we present some experimental results to reinforce Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g...
The study of cycles of graphs has attracted attention for many years. To mention just three well known results consider Veblen’s theorem [2] that characterizes graphs whose edges can be written as a disjoint union of cycles, Maclane’s planarity criterion [3] which states that planar graphs are the only to admit a 2-ba...
necessarily complete) G=(V,E)𝐺𝑉𝐸G=(V,E)italic_G = ( italic_V , italic_E ) that admits a star spanning tree Tssubscript𝑇𝑠T_{s}italic_T start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT. In the first part we present a formula to calculate ∩(Ts)subscript𝑇𝑠\cap(T_{s})∩ ( italic_T start_POSTSUBSCRIPT italic_s end_POSTSU...
The remainder of this section is dedicated to express the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describe an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
A
In this respect, the case of convex lattice sets, that is, sets of the form C∩ℤd𝐶superscriptℤ𝑑C\cap\mathbb{Z}^{d}italic_C ∩ blackboard_Z start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT where C𝐶Citalic_C is a convex set in ℝdsuperscriptℝ𝑑\mathbb{R}^{d}blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIP...
The support of a chain σ𝜎\sigmaitalic_σ, denoted supp⁡(σ)supp𝜎\operatorname{supp}(\sigma)roman_supp ( italic_σ ), in a simplicial complex is the set of simplices with nonzero coefficients in σ𝜎\sigmaitalic_σ. We say that two chains σ𝜎\sigmaitalic_σ and τ𝜏\tauitalic_τ have overlapping supports if there exists a sim...
Theorem 1.1 depends on p𝑝pitalic_p, q𝑞qitalic_q, K𝐾Kitalic_K and b𝑏bitalic_b (but, as usual, is independent of the size of the cover). Moreover, while the Helly number of a (K,b)𝐾𝑏(K,b)( italic_K , italic_b )-free cover can grow with b𝑏bitalic_b (it is at least (b−1)⁢(μ⁢(K)+2)𝑏1𝜇𝐾2(b-1)(\mu(K)+2)( italic_b - ...
In this paper, we show that the gap observed for convex lattice sets occurs in the broad topological setting of triangulable spaces with a forbidden homological minor, a notion introduced by Wagner [37] as a higher-dimensional analogue of the familiar notion of graph minors [34].
We first prove, in Section 3, that complexes with a forbidden simplicial homological minor also have a forbidden grid-like homological minor. The proof uses the stair convexity of Bukh et al. [8] to build, in a systematic way, chain maps from simplicial complexes to cubical complexes. We then adapt, in Section 4, the m...
C
The radial tree representation and the graph visualization (see below) use a layout panel visible at the bottom of Fig. 1(c). It supports several interactions such as zooming, rotating, returning to the initial view, and expanding/retracting particular slices with the Toggle options. Zooming enables users to compare e...
The radial tree representation and the graph visualization (see below) use a layout panel visible at the bottom of Fig. 1(c). It supports several interactions such as zooming, rotating, returning to the initial view, and expanding/retracting particular slices with the Toggle options. Zooming enables users to compare e...
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train the XGBoost [29] on the original pool of features, and then divide the data space into four groups automati...
Returning to the initial view can also be a helpful shortcut during the interactive exploration. The toggle options assist users in collapsing unimportant data subspaces in particular cases or vice versa. Furthermore, this functionality facilitates FeatureEnVi to scale for data sets with many more features (see Section...
The radial tree had three collapsed data subspaces (a.2–a.4) except for All and Worst subspaces. We performed this action because there are too many features to be explored at once, and FeatureEnVi provides this capability to alter the layouts in order to scale for high-dimensional data sets. Basically, the core statis...
C
0\text{m/s}^{2},\,|u_{y}|\leq 20\text{m/s}^{2}\}italic_u ∈ caligraphic_U := { [ italic_u start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT , italic_u start_POSTSUBSCRIPT italic_y end_POSTSUBSCRIPT ] start_POSTSUPERSCRIPT sansserif_T end_POSTSUPERSCRIPT | | italic_u start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT | ≤ 20 m/s...
We first optimize the performance of the simulated positioning system by adding a receding horizon MPCC stage where we pre-optimize the position and velocity references provided to the low level controller. This is enabled by the high repeatability of the system which results in run-to-run deviations of 3⁢μ⁢m3𝜇𝑚3\mu ...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi...
The goal is to tune the parameters of the MPC-based planning unit without introducing any modification in the structure of the underlying control system. We leverage the repeatability of the system, which is higher than the integrated encoder error of 3⁢μ⁢m3𝜇𝑚3\mu m3 italic_μ italic_m,
For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using Latin hypercube design of experiments. The BO progress is shown in Figure 5, right pannel, for the optimization with constraints on the jerk and on the tracking error. Af...
C
Results. We find that implicit methods either improve or are comparable with StdM, but most explicit methods fail when asked to generalize to multiple bias variables and a large number of groups, even when the bias variables are explicitly provided. As shown in Fig. 4, all explicit methods are below StdM on Biased MNI...
Results. In Fig. 3(a), we present the MMD boxplots for all bias variables, comparing cases when the label of the variable is either explicitly specified (explicit bias), or kept hidden (implicit bias) from the methods. Barring digit position, we observe that the MMD values are higher when the variables are not explicit...
Results for GQA-OOD are similar, with explicit methods failing to scale up to a large number of groups, while implicit methods showing some improvements over StdM. As shown in Table 2, when the number of groups is small, i.e., when using a head/tail binary indicator as the explicit bias, explicit methods remain compara...
Results. We find that implicit methods either improve or are comparable with StdM, but most explicit methods fail when asked to generalize to multiple bias variables and a large number of groups, even when the bias variables are explicitly provided. As shown in Fig. 4, all explicit methods are below StdM on Biased MNI...
where, |ai|subscript𝑎𝑖|a_{i}|| italic_a start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT | is the number of instances for answer aisubscript𝑎𝑖a_{i}italic_a start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT in the given group, μ⁢(a)𝜇𝑎\mu(a)italic_μ ( italic_a ) is the mean number of answers in the group and β𝛽\betait...
B
They use the labeled data to supervise the gaze estimation network and design an adversarial module for semi-supervised learning. Given these features used for gaze estimation, the adversarial module tries to distinguish their source and the gaze estimation network aims to extract subject/dataset-invariant features to ...
Recasens et al. present an approach for following gaze in video by predicting where a person (in the video) is looking, even when the object is in a different frame [124]. They build a CNN to predict the gaze location in each frame and the probability containing the gazed object of each frame. Also, visual saliency sho...
Kothari et al.  [110] found the strong gaze-related geometric constraints when people ”look at each other” (LAEO). They estimate 3D and 2D landmarks in the images of LAEO dataset [113], and generate pseudo gaze annotation for gaze estimation. While it cannot bring competitive performance, therefore, they further integr...
Semi-supervised CNNs require both labeled and unlabeled images for optimizing networks. Wang et al. propose an adversarial learning approach to improve the model performance on the target subject/dataset [59]. As shown in Fig. 6, it requires labeled images in the training set as well as unlabeled images of the target s...
It is the most popular dataset for appearance-based gaze estimation methods. It contains a total of 213,659 images collected from 15 subjects. The images are collected in daily life over several months and there is no constraint for the head pose. MPIIGaze dataset provides both 2D and 3D gaze annotation. It also provid...
B
To tackle these problems, we distinguish two different tasks namely: face mask recognition and masked face recognition. The first one checks whether the person is wearing a mask or no. This can be applied in public places where the mask is compulsory. Masked face recognition, on the other hand, aims to recognize a face...
To evaluate the proposed method, we carried out experiments on very challenging masked face datasets. In the following, we present the datasets’ content and variations, the experimental results using the quantization of deep features obtained from three pre-trained models, and a comparative study with other state-of-t...
The quantization is then applied to extract the histogram of a number of bins as presented in Section 4.3. Finally, MLP is applied to classify faces as presented in Section 4.4. In this experiment, the 10-fold cross-validation strategy is used to evaluate the recognition performance. The experiments are repeated ten t...
This deep quantization technique presents many advantages. It ensures a lightweight representation that makes the real-world masked face recognition process a feasible task. Moreover, the masked regions vary from one face to another, which leads to informative images of different sizes. The proposed deep quantization a...
The rest of this paper is organized as follows: Section 2 presents the related works. In Section 3 we present the motivation and contribution of the paper. The proposed method is detailed in Section 4. Experimental results are presented in Section 5. Conclusion ends the paper.
D
\leftarrow\operatorname{partition}k\,(p,x)::(y:\exists m,n.\,k=m+n\land\mathrm% {list}[m]\otimes\mathrm{list}[n])italic_k ; ⋅ ; italic_p : roman_nat , italic_x : roman_list [ italic_k ] ⊢ italic_y ← roman_partition italic_k ( italic_p , italic_x ) : : ( italic_y : ∃ italic_m , italic_n . italic_k = italic_m + italic_n ...
Our system is closely related to the sequential functional language of Lepigre and Raffalli [LR19], which utilizes circular typing derivations for a sized type system with mixed inductive-coinductive types, also avoiding continuity checking. In particular, their well-foundedness criterion on circular proofs seems to c...
First, we define head and tail observations on streams of arbitrary depth. Since they are not recursive, we do not bother tracking the size superscript of the typing judgment, since they can be inlined. Moreover, we take the liberty to nest values (boxed and highlighted yellow), which can be expanded into SAX [PP20].
That is, we assume that we have definitions that (1) append two lists together and (2) partitions one by a pivot. Then, at a high level, quicksort is a size-preserving definition with the input list length as its termination measure. For brevity, we nest patterns (boxed and highlighted yellow), which can be expanded i...
In this section, we extend SAX [DPP20] with recursion and arithmetic refinements in the style of Das and Pfenning [DP20b]. SAX is a logic-based formalism and subsuming paradigm [Lev04] for concurrent functional programming that conceives call-by-need and call-by-value strategies as particular concurrent schedules [PP2...
C
Implement privacy-preserving access control. On the one hand, the cloud should be prevented from obtaining the private plaintext of the data it encounters, including the owner’s media content, the users’ fingerprints, and the LUTs. On the other hand, only users authorized by the owner can access the media content.
The whole FairCMS-II scheme is summarized as follows. First, suppose an owner rents the cloud’s resources for media sharing, the owner and the cloud execute Part 1 as shown in Fig. 5. Then, suppose the k𝑘kitalic_k-th user makes a request indicating that he/she wants to access one of the owner’s media content 𝐦𝐦\math...
First, the owner requires that the cloud not be able to obtain the plaintext about the media content and the LUTs, and that access to the media content is controlled by his/her authorization. Second, the owner asks for significant overhead savings from cloud media sharing. Third, the owner demands traitor tracing of us...
Protect the owner’s copyright. We need to embed the user’s fingerprint in the owner’s media content to enable traitor tracing. As long as an unfaithful user makes an unauthorized redistribution, he/she can be detected by the embedded fingerprint in the media content.
The whole FairCMS-I scheme is summarized as follows. First, suppose an owner rents the cloud’s resources for media sharing, the owner and the cloud execute Part 1 as shown in Fig. 2. Then, suppose the k𝑘kitalic_k-th user makes a request indicating that he/she wants to access one of the owner’s media content 𝐦𝐦\mathb...
C
Our experiments are conducted on three real-world datasets, two CTR benchmark datasets, and one recommender system dataset. Details of these datasets are illustrated in Table 1. The data preparation follows the strategy in Tian et al. (2023). We randomly split all the instances in 8:1:1 for training, validation, and te...
We compare GraphFM with four classes of state-of-the-art methods:(A) the linear approach that only uses individual features; (B) FM-based methods that consider second-order feature interactions; (C) DNN-based methods that model high-order feature interactions; (D) aggregation-based methods that update features’ repres...
We observe that GraphFM outperforms all the ablative methods, which proves the necessity of all these components in our model. The performance of GraphFM(-M) suffers from a sharp drop compared with GraphFM, proving that it is necessary to transform and aggregate the feature interactions in multiple semantic subspaces t...
We find that in the first layer, which models the second order feature interactions, these feature fields are hard to distinguish when selecting the beneficial interactions. This suggests that almost all the second-order feature interactions are useful, which also why we sample all of them in the first layer, i.e., m1=...
Our proposed GraphFM achieves best performance among all these four classes of methods on three datasets. The performance improvement of GraphFM compared with the three classes of methods (A, B, C) is especially significant, above 0.010.01\mathbf{0.01}bold_0.01-level. The aggregation-based methods including InterHAt, A...
A
For clarity we want to stress that any linear rate over polytopes has to depend also on the ambient dimension of the polytope; this applies to our linear rates and those in Table 1 established elsewhere (see Diakonikolas et al. [2020]). In contrast, the 𝒪⁢(1/ε)𝒪1𝜀\mathcal{O}(1/\varepsilon)caligraphic_O ( 1 / italic_...
the second-order step size and the LLOO algorithm from Dvurechensky et al. [2022] (denoted by GSC-FW and LLOO in the figures) and the Frank-Wolfe and the Away-step Frank-Wolfe algorithm with the backtracking stepsize of Pedregosa et al. [2020], denoted by B-FW and B-AFW respectively.
After publication of our initial draft, in a revision of their original work, Dvurechensky et al. [2022] added an analysis of the Away-step Frank-Wolfe algorithm which is complementary to ours (considering a slightly different setup and regimes) and was conducted independently; we have updated the tables to include th...
We show that a small variation of the original Frank-Wolfe algorithm [Frank & Wolfe, 1956] with an open-loop step size of the form γt=2/(t+2)subscript𝛾𝑡2𝑡2\gamma_{t}=2/(t+2)italic_γ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT = 2 / ( italic_t + 2 ), where t𝑡titalic_t is the iteration count is all that is needed ...
We note that the LBTFW-GSC algorithm from Dvurechensky et al. [2022] is in essence the Frank-Wolfe algorithm with a modified version of the backtracking line search of Pedregosa et al. [2020]. In the next section, we provide improved convergence guarantees for various cases of interest for this algorithm, which we refe...
B
Informally speaking, the key observations are that in the former case, by Lemma 4.8, (a suffix of) the active path must form an odd cycle. A very convenient property of odd cycles is that as soon as they are discovered by the algorithm, their arcs can never belong to two distinct structures of the free vertices.
The rough idea of the proof is as follows. First, we observe that having a small number of short augmenting paths is a certificate for a good approximation, as formalized in Lemma 5.9. We use this observation to show in Lemma 5.10 that whenever we do not have a good approximation yet, we must find many augmenting paths...
Otherwise, we will find an augmentation and we have that an augmenting path satisfying one of the two desired properties has been found. This property is formalized in Observation 4.2 and the process for finding these odd cycles is formalized in Definition 4.3 and Lemma 4.4.
From this, we can inductively derive that eventually, either all {a1,…,ak}subscript𝑎1…subscript𝑎𝑘\{a_{1},\ldots,a_{k}\}{ italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , italic_a start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT } form an odd cycle or an augmentation has been found involving some of these arcs. O...
Then, we argue that eventually, the odd cycle formed by {a1,…,aj}subscript𝑎1…subscript𝑎𝑗\{a_{1},\ldots,a_{j}\}{ italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , italic_a start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT } can be used to extend a short active path to ajsubscript𝑎𝑗a_{j}italic_a start_POSTSUBSCRIP...
B
In decentralized optimization, efficient communication is critical for enhancing algorithm performance and system scalability. One major approach to reduce communication costs is considering communication compression, which is essential especially under limited communication bandwidth.
To reduce the error from compression, some works [48, 49, 50] increase compression accuracy as the iteration grows to guarantee the convergence. However, they still need high communication costs to get highly accurate solutions. Techniques to remedy this increased communication costs include gradient difference compres...
Recently, several compression methods have been proposed for distributed and federated learning, including [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40]. Recent works have tried to combine the communication compression methods with decentralized optimization.
Many methods have been proposed to solve the problem (1) under various settings on the optimization objectives, network topologies, and communication protocols. The paper [10] developed a decentralized subgradient descent method (DGD) with diminishing stepsizes to reach the optimum for convex objective functions over a...
Subsequently, decentralized optimization methods for undirected networks, or more generally, with doubly stochastic mixing matrices, have been extensively studied in the literature; see, e.g., [11, 12, 13, 14, 15, 16]. Among these works, EXTRA [14] was the first method that achieves linear convergence for strongly conv...
B
We develop multiple novel algorithms to solve decentralized personalized federated saddle-point problems. These methods (Algorithm 1 and Algorithm 2) are based on recent sliding technique [27, 28, 29] adapted to SPPs in a decentralized PFL. In addition, we present Algorithm 3 which used the randomized local method fro...
We divided our experiments into two parts: 1) toy experiments on strongly convex – strongly concave bilinear saddle point problems to verify the theoretical results and 2) adversarial training of neural networks to compare deterministic (Algorithm 1) and stochastic (Algorithm 3) approaches.
We adapt the proposed algorithm for training neural networks. We compare our algorithms: type of sliding (Algorithm 1) and type of local method (Algorithm 3). To the best of our knowledge, this is the first work that compares these approaches in the scope of neural networks, as previous studies were limited to simpler...
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms and derives the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile...
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low...
B
Kuhn Poker (Kuhn, 1950; Southey et al., 2009; Lanctot, 2014) is a zero-sum poker game with only two actions per player. The two-player variant is solvable with PSRO, however the three-player version benefits from JPSRO. The results in Figure 2(a) show rapid convergence to equilibrium.
Measuring convergence to NE (NE Gap, Lanctot et al. (2017)) is suitable in two-player, constant-sum games. However, it is not rich enough in cooperative settings. We propose to measure convergence to (C)CE ((C)CE Gap in Section E.4) in the full extensive form game. A gap, ΔΔ\Deltaroman_Δ, of zero implies convergence t...
We propose that (C)CEs are good candidates as meta-solvers (MSs). They are more tractable than NEs and can enable coordination to maximize payoff between cooperative agents. In particular we propose three flavours of equilibrium MSs. Firstly, greedy (such as MW(C)CE), which select highest payoff equilibria, and attempt...
Trade Comm is a two-player, common-payoff trading game, where players attempt to coordinate on a compatible trade. This game is difficult because it requires searching over a large number of policies to find a compatible mapping, and can easily fall into a sub-optimal equilibrium. Figure 2(b) shows a remarkable domina...
PSRO has proved to be a formidable learning algorithm in two-player, constant-sum games, and JPSRO, with (C)CE MSs, is showing promising results on n-player, general-sum games. The secret to the success of these methods seems to lie in (C)CEs ability to compress the search space of opponent policies to an expressive an...
C
\right]}}=\underset{X\sim D}{\text{Cov}}\left({q}\left(X\right),{K}\left(X,v% \right)\right).italic_q ( italic_D start_POSTSUPERSCRIPT italic_v end_POSTSUPERSCRIPT ) - italic_q ( italic_D ) = start_UNDERACCENT italic_X ∼ italic_D end_UNDERACCENT start_ARG blackboard_E end_ARG [ italic_K ( italic_X , italic_v ) italic_q...
\frac{{D}\left(v\,|\,x\right)}{{D}\left(v\right)}italic_K ( italic_x , italic_v ) ≔ divide start_ARG italic_D ( italic_x | italic_v ) end_ARG start_ARG italic_D ( italic_x ) end_ARG = divide start_ARG italic_D ( italic_v | italic_x ) end_ARG start_ARG italic_D ( italic_v ) end_ARG is the Bayes factor of x𝑥xitalic_x gi...
The second part is a direct result of the known variational representation of total variation distance and χ2superscript𝜒2\chi^{2}italic_χ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT divergence, which are both f𝑓fitalic_f-divergences (see Equations 7.88 and 7.91 in Polyanskiy and Wu (2022) for more details).
We note that the first part of this definition can be viewed as a refined version of zCDP (Definition B.18), where the bound on the Rényi divergence (Definition B.5) is a function of the sample sets and the query. As for the second part, since the bound depends on the queries, which themselves are random variables, it...
Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific q𝑞qitalic_q and K⁢(⋅,v)𝐾⋅𝑣{K}\left(\cdot,v\right)italic_K ( ⋅ , italic_v ) as discussed in Section 6. The second part of this Lemma implies that bounding the appropriate divergence is necessary and sufficient...
B
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized...
We therefore propose the following novel research direction: to investigate how preprocessing algorithms can decrease the parameter value (and hence search space) of FPT algorithms, in a theoretically sound way. It is nontrivial to phrase meaningful formal questions in this direction. To illustrate this difficulty, not...
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
A substantial theoretical framework has been built around the definition of kernelization [17, 22, 27, 29, 31]. It includes deep techniques for obtaining kernelization algorithms [10, 28, 39, 43], as well as tools for ruling out the existence of small kernelizations [11, 19, 23, 30, 32] under complexity-theoretic hypot...
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized...
C
Although it is feasible to generate paired data using rendering technique, the rendered images have large domain gap with real images. When applying the model trained on rendered images to real images, the performances are usually significantly degraded. To overcome this drawback, Hong et al. [52] constructed paired d...
Figure 15: The visualization results of different shadow generation methods on DESOBA dataset [52]. From left to right in each row, we show the input composite image, the composite foreground mask, the generated results of ShadowGAN [203], MaskShadowGAN [54], ARShadowGAN [92], SGRNet [52], SGDiffusion [96], and the gro...
Some examples in DESOBA dataset are exhibited in the second row in Fig. 14, in which we show the composite image without foreground shadow, foreground object mask, and ground-truth image with foreground shadow. As mentioned in [52], manual shadow removal is extremely expensive.
ARShadowGAN [92] released a rendered dataset named Shadow-AR by inserting a foreground object into real background image and generating its corresponding shadow with rendering technique. Shadow-AR dataset contains 3,00030003,0003 , 000 quintuples, in which each quintuple consists of a composite image without foreground...
Figure 14: In the first row, we show two examples from Shadow-AR dataset [92], which is constructed based on rendered images. In the second row, we show two examples from DESOBA dataset [52], which is constructed based on real images. From left to right in each example, we show the composite image without foreground sh...
B
In order to address above challenges, this paper introduces CityNet, a multi-modal dataset comprising data from various cities and sources for smart city applications. Drawing inspiration from [13], we use the term “multi-modal” to reflect the diverse range of cities and sources from which CityNet is derived. In compa...
Comprehensiveness: Fig. 1(a), illustrates that CityNet comprises three types of raw data (mobility data, geographical data, and meteorological data) collected from seven different cities. Furthermore, we have processed the raw data into several sub-datasets (as shown in Fig. 1(b)) to to capture a wider range of urban p...
Figure 1: Architecture of CityNet.Left: Three raw data sources of CityNet. Middle: Schematic description of all 8 sub-datasets, whose sources are distinguished by color as shown in Fig. 1(a) and 1(b). Right: Decomposition of the data dimensions into cities and tasks. Directed curves indicate correlations to be discover...
Mobility data: The mobility data in CityNet primarily consists of taxi movements, which provide valuable insights into citizen activities and the state of the transportation network. For instance, region-wise taxi flows can reveal urban crowd movement patterns, while taxi pickup and idle driving data can serve as proxi...
Interrelationship: We have classified the sub-datasets into two categories: service data and context data, as depicted in Fig. 1(c). Service data pertains to the status of urban service providers (e.g. taxi companies), while context data refers to the urban environment (e.g. weather). Based on this categorization, we h...
A
where the samples yisubscript𝑦𝑖y_{i}italic_y start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT are drawn from the posterior distribution p⁢(y∗|𝐱∗,𝒟)𝑝conditionalsuperscript𝑦superscript𝐱𝒟p(y^{*}\,|\,\mathbf{x}^{*},\mathcal{D})italic_p ( italic_y start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT | bold_x start_POSTSUPERSC...
Without the adversarial training, this model is similar to the one introduced by Khosravi et al. khosravi2014constructing . However, instead of training an ensemble of mean-variance estimators, an ensemble of point estimators is trained to predict y𝑦yitalic_y and in a second step a separate estimator σ^^𝜎\hat{\sigma}...
When making predictions, the conditional mean is again approximated by MC integration (12), i.e. one takes the average of multiple forward passes. The total predictive variance is given by the sum of the empirical variance of the ensemble and the variance predicted by the model itself:
Ensemble learning is a popular approach to enhance predictions by training multiple machine learning models and aggregating the individual predictions, for example by taking the mean krogh1996learning . In general one can consider ensemble methods as an intermediate step between Bayesian methods
Every ensemble allows for a naive construction of a prediction interval heskes1997practical when the aggregation strategy in Algorithm 2 is given by the arithmetic mean. By treating the predictions of the individual models in the ensemble as elements of a data sample, one can calculate the empirical mean and variance ...
C
MIDI files that specify 4/4 metre.222We note that the metre can be wrong due to errors in automatic music transcription, leading to noise in the data. Future work can be done to improve this. Moreover, future work can be done to use a more complicated token representation such as that proposed by \textciteashis19ismir ...
The Pop1K7 dataset developed by \textcitehsiao21aaai333https://github.com/YatingMusic/compound-word-transformer is composed of machine transcriptions of 1,747 audio recordings of piano covers (i.e., a new recording by someone other than the original artist or composer of a commercially released song) of Japanese anime,...
EMOPIA is a dataset of pop piano music collected recently by \textciteemopia from YouTube for research on emotion-related tasks.888https://annahung31.github.io/EMOPIA/ It has 1,087 clips (each around 30 seconds) segmented from 387 songs, covering Japanese anime, Korean & Western pop song covers, movie soundtracks and p...
POP909 comprises piano covers of 909 pop songs compiled by \textcitepop909.555https://github.com/music-x-lab/POP909-Dataset It is the only dataset among the five that provides melody, non-melody labels for each note. Specifically, each note is labelled with one of the following three classes: vocal melody (piano notes ...
We provide three versions of the melody MIDI file for each original song, generated respectively by the skyline algorithm, Simonetta et al.’s CNN and “our model (performance) + CP”. Taking “Clayderman_Yesterday_Once_More.mid” as an example, the melody generated by the skyline algorithm exhibits stiffness and lacks intr...
A
Otherwise, F𝐹Fitalic_F has a leaf v∈A𝑣𝐴v\in Aitalic_v ∈ italic_A with a neighbor u∈B𝑢𝐵u\in Bitalic_u ∈ italic_B. We can assign c⁢(v)=a2𝑐𝑣subscript𝑎2c(v)=a_{2}italic_c ( italic_v ) = italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, c⁢(u)=b2𝑐𝑢subscript𝑏2c(u)=b_{2}italic_c ( italic_u ) = italic_b start_POSTSU...
Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As it was stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the...
The linear running time follows directly from the fact that we compute c𝑐citalic_c only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen...
Now, observe that if the block to the left is also of type A, then a respective block from Z⁢(S)𝑍𝑆Z(S)italic_Z ( italic_S ) is (0,1,0)010(0,1,0)( 0 , 1 , 0 ) – and when we add the backward carry (0,0,1)001(0,0,1)( 0 , 0 , 1 ) to it, we obtain the forward carry to the rightmost block. And regardless of the value of t...
To obtain the total running time we first note that each of the initial steps – obtaining (R,B,Y)𝑅𝐵𝑌(R,B,Y)( italic_R , italic_B , italic_Y ) from Corollary 2.11 (e.g. using Algorithm 1), contraction of F𝐹Fitalic_F into F′superscript𝐹normal-′F^{\prime}italic_F start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT, and findi...
B