Columns — context: string (lengths 250–6.9k); A: string (250–3.69k); B: string (250–3.63k); C: string (250–5.11k); D: string (250–4.12k); label: string (4 classes).
$$\Delta x=-\frac{f(x)}{f^{\prime}(x)}\bigg/\left[1+\frac{1}{2h_{2}(x)f^{\prime}(x)}\left(h_{0}(x)\frac{f(x)}{f^{\prime}(x)}+h_{1}(x)\right)\right].$$
$$\frac{f_{n}(x)}{f_{n}^{\prime}(x)}=\frac{g_{2}(x)}{g_{1}(x)+g_{0}(x)\frac{f_{n-1}(x)}{f_{n}(x)}}.$$
$$g_{2}(x)f_{n}^{\prime}(x)=g_{1}(x)f_{n}(x)+g_{0}(x)f_{n-1}(x);$$
$$a_{1,n-1}f_{n}(x)=(a_{2,n-1}+a_{3,n-1}x)f_{n-1}(x)-a_{4,n-1}f_{n-2}(x),$$
$$\frac{f_{n-1}(x)}{f_{n}(x)}=\frac{a_{1,n-1}}{(a_{2,n-1}+a_{3,n-1}x)-a_{4,n-1}\frac{f_{n-2}(x)}{f_{n-1}(x)}}.$$
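To make the ratio recursion concrete, here is a small sketch (ours, not from the source) that evaluates $r_n(x)=f_{n-1}(x)/f_n(x)$ for one assumed instance of the three-term recurrence, the Chebyshev polynomials $T_n$ (i.e. $a_{1,n-1}=1$, $a_{2,n-1}=0$, $a_{3,n-1}=2$, $a_{4,n-1}=1$):

```python
# Sketch (illustrative, not the source's code): evaluate the ratio
# r_n(x) = f_{n-1}(x)/f_n(x) from the three-term recurrence above,
# specialized to Chebyshev polynomials T_n (a1 = 1, a2 = 0, a3 = 2, a4 = 1).
def ratio(n, x):
    r = 1.0 / x                  # r_1 = T_0(x)/T_1(x) = 1/x
    for _ in range(2, n + 1):
        r = 1.0 / (2.0 * x - r)  # r_k = a1 / ((a2 + a3*x) - a4 * r_{k-1})
    return r
```

A Newton correction $\Delta x=-f_n(x)/f_n^{\prime}(x)$ can then be assembled from such ratios without ever forming $f_n$ itself, which is the point of the identities above.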
B
On the other hand, if the instruction $I_{t}$ was $\operatorname{Show}(A)$ then $\operatorname{Eval}(S,M,s,t)$ is defined to be the list ...
Instruction type (i) above simply copies an element already in memory to a different memory slot. These instructions can arguably be disregarded for the purpose of determining the length of an MSLP, because in a practical implementation they could be handled via relabelling.
This adds only one extra MSLP instruction, in order to form and store the element $xv^{-1}$ needed in the conjugate on the right-hand side of (2) (this element can later be overwritten and so does not add to the overall maximum memory quo...
does not yield an upper bound for the memory requirement in a theoretical analysis. Moreover, SlotUsagePattern improves the memory usage, but the result is not necessarily optimal overall and, hence, the number of slots can still be greater than that of a carefully computed MSLP. It should also be...
For the purposes of determining the cost of Taylor’s algorithm in terms of matrix operations, namely determining the length of an MSLP for the algorithm, we assume that the field elements $-g_{ic}g_{rc}^{-1}$...
A
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
The idea of using exponential decay to localize global problems was already considered in the approach developed under the name of Localized Orthogonal Decomposition (LOD) [MR2831590, MR3591945, MR3246801, MR3552482], which is related to ideas of Variational Multiscale Methods [MR1660141, MR2300286]. In the...
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
It is essential for the performance of the method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it is reasonable to solve them locally using patches of elements. We note that the ide...
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ...
A
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its authors, as it contains complicated subroutines for handling many subcases.
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$’s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code length for this step is 1:7 between Alg-A and Alg-CM.
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and floating-point issues in both programs. Our implementations of Alg-K and Alg-CM differ logically in how they handle degenerate cases.
D
It has to be noted here that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable only when the user gains more knowledge about the event.
story descriptions we manually constructed queries to retrieve the relevant tweets for 270 rumors with high impact. Our approach to query construction mainly follows [11]. For the news event instances (non-rumor examples), we make use of the manually constructed corpus from Mcminn et al. [21], which covers 500 real-wor...
We use the same dataset described in Section 5.1. In total – after cutting off 180 events for pre-training the single tweet model – our dataset contains 360 events and 180 of them are labeled as rumors. Those rumors and news fall comparatively evenly into 8 different categories, namely Politics, Science, Attacks, Disaster, A...
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We address this trade-off by debunking at the single tweet le...
Training data for single tweet classification. Here we follow our assumption that an event might include sub-events for which relevant tweets are rumorous. To deal with this complexity, we train our single-tweet learning model only with manually selected breaking and subless³ (³the terminology subless indicates an eve...
B
$\lim_{u\to\infty}\ell(u)=\lim_{u\to\infty}\ell^{\prime}(u)=0$), a $\beta$-smooth function, i.e. its derivative is $\beta$-Lipsh...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
Assumption 1 includes many common loss functions, including the logistic, exp-loss² (²The exp-loss does not have a global $\beta$ smoothness parameter. However, if we initialize with $\eta<1/\mathcal{L}(\mathbf{w}(0))$ then it is straightforward to...
loss function (Assumption 1) with an exponential tail (Assumption 3), any stepsize $\eta<2\beta^{-1}\sigma_{\max}^{-2}(\mathbf{X})$ ...
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a...
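To illustrate the separable setting discussed above, the following minimal sketch (toy data and step size are our assumptions, not the paper's experiments) runs gradient descent on the logistic loss for a linearly separable problem; the loss tends to zero while the iterate grows in norm along a separating direction:

```python
import numpy as np

# Toy separable problem (assumed data, for illustration only):
# gradient descent on the average logistic loss. All margins become
# positive while ||w|| keeps growing along a separating direction.
X = np.array([[1.0, 0.5], [-1.0, -0.5], [2.0, 0.0], [-2.0, 0.0]])
y = np.array([1.0, -1.0, 1.0, -1.0])

w = np.zeros(2)
eta = 0.5
for _ in range(5000):
    margins = y * (X @ w)
    # grad of (1/n) sum log(1 + exp(-m_i)) = -(1/n) sum y_i x_i / (1 + exp(m_i))
    grad = -(X.T * y) @ (1.0 / (1.0 + np.exp(margins))) / len(y)
    w -= eta * grad
```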
B
Early in an event, the related tweet volume is scanty and there are no clear propagation patterns yet. For the credibility model we, therefore, leverage the signals derived from tweet contents. Related work often uses aggregated content [liu2015real; ma2015detect; zhao2015enquiring], since individual tweets are of...
For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a clear difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even...
the idea of focusing on early rumor signals in text contents, which is the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, which can capture more hidden meaningful signals than enquiries alone to debunk rumor...
Given a tweet, our task is to classify whether it is associated with a news event or a rumor. Most of the previous work [castillo2011information; gupta2014tweetcred] on the tweet level only aims to measure trustworthiness based on human judgment (note that even if a tweet is trusted, it could nevertheless relate to a rumor)...
For this task, we developed two kinds of classification models: traditional classifier with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 3.2). Fo...
C
$$\mathsf{f}^{*}=\arg\min_{f}\sum_{\forall a}\mathcal{L}\Bigl(\sum_{k}P(\mathcal{C}_{k}\mid a,t)\sum_{l=1}^{m}P(\mathcal{T}_{l}\mid a,t,\mathcal{C}_{k})\,\hat{y}_{a},\;y_{a}\Bigr)$$
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event, which is driven by a great variety of factors. We address two major factors that are assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we...
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ...
Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming independent loss functions, which do not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is to learn a linear...
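The pairwise idea behind RankSVM can be sketched as follows (a standard textbook formulation with assumed toy data and plain subgradient steps, not necessarily the exact adaptation used here): a linear scorer $w\cdot x$ is trained so that the preferred item of each pair outranks its counterpart by a margin.

```python
import numpy as np

# Standard pairwise RankSVM sketch (toy setup assumed): hinge loss on the
# score difference of each preference pair, plus L2 regularization on w.
def ranksvm_subgrad(w, pairs, C=1.0):
    g = w.copy()                        # gradient of the L2 regularizer
    for xp, xn in pairs:                # xp is preferred over xn
        if 1.0 - w @ (xp - xn) > 0.0:   # margin violated
            g -= C * (xp - xn)
    return g

# a few subgradient steps on one toy preference pair
pairs = [(np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
w = np.zeros(2)
for _ in range(200):
    w -= 0.1 * ranksvm_subgrad(w, pairs)
```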
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather annotations for the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with a non-cascaded logistic regression. The res...
C
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
The techniques used in these success stories are grounded on statistical advances on sequential decision processes and multi-armed bandits. The MAB crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making.
we propagate forward the sequential random measure $p_{M}(\theta_{t,a}\mid\mathcal{H}_{1:t})$...
SMC weights are updated based on the likelihood of the observed rewards: $w_{t,a}^{(m)}\propto p_{a}(y_{t}\mid x_{t},\theta_{t,a}^{(m)})$...
the fundamental operation in the proposed SMC-based MAB Algorithm 1 is to sequentially update the random measure $p_{M}(\theta_{t,a}\mid\mathcal{H}_{1:t})$...
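As a minimal sketch of one such SMC step (the Gaussian linear reward model below is our assumption for illustration; the paper's per-arm reward models may differ):

```python
import numpy as np

# Sketch of one SMC step: reweight particles θ^(m) by the likelihood of
# the observed reward y, normalize, and resample to refresh the particle
# approximation of p_M(θ_{t,a} | H_{1:t}). Gaussian linear model assumed.
def smc_update(particles, y, x, sigma=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    logw = -0.5 * ((y - particles @ x) / sigma) ** 2   # log p_a(y | x, θ^(m))
    w = np.exp(logw - logw.max())                      # stabilized weights
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]                              # resampled particle set
```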
A
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i...
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i...
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal...
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t...
For example, the correlation between blood glucose and carbohydrate for patient 14 was highest (0.47) at no lagging time step (ref. 23(c)), whereas the correlation between blood glucose and insulin was highest (0.28) with lagging time = 4 (ref. 24(d)).
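Such lagged correlations can be computed in a few lines; the sketch below is illustrative (function name and setup are ours, not the study's code):

```python
import numpy as np

# Pearson correlation between a target series x (e.g. blood glucose) and
# an input series y (e.g. insulin) shifted back by `lag` time steps.
def lagged_corr(x, y, lag):
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag > 0:
        x, y = x[lag:], y[:-lag]
    return np.corrcoef(x, y)[0, 1]
```

Scanning `lag` over a small range and keeping the maximizer recovers the kind of per-patient optimal lag reported above.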
B
Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which...
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer...
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met...
Further improvements of benchmark results could potentially be achieved by a number of additions to the processing pipeline. Our model demonstrates a learned preference for predicting fixations in central regions of images, but we expect performance gains from modeling the central bias in scene viewing explicitly Kümme...
For related visual tasks such as semantic segmentation, information distributed over convolutional layers at different levels of the hierarchy can aid the preservation of fine spatial details Hariharan et al. (2015); Long et al. (2015). The prediction of fixation density maps does not require accurate class boundaries ...
D
We next formally define the computational problems of computing the parameters defined above. By Loc, Cutwidth and Pathwidth, we denote the problems to check, for a given word $\alpha$ or graph $G$ and integer $k\in\mathbb{N}$, whether $\operatorname{loc}(\alpha)\leq k$...
The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local...
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection....
In this section, we discuss some examples that illustrate the concepts of marking sequences and the locality number, and we also discuss some word combinatorial properties related to the locality number. Note that for illustration purposes, the example words considered in this section are not necessarily condensed.
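For illustration (our own sketch, using the standard definition of the locality number as the minimum, over all marking sequences of the distinct letters, of the maximum number of marked blocks at any stage), the parameter can be computed by brute force for short example words:

```python
from itertools import permutations

# Brute-force loc(α): try every order of marking the distinct letters and
# minimize the maximum number of maximal marked blocks over all stages.
# Exponential in the alphabet size, so only suitable for short examples.
def loc(word):
    best = len(word)
    for order in permutations(set(word)):
        marked, worst = set(), 0
        for ch in order:
            marked.add(ch)
            blocks, prev = 0, False
            for c in word:          # count maximal marked blocks
                cur = c in marked
                if cur and not prev:
                    blocks += 1
                prev = cur
            worst = max(worst, blocks)
        best = min(best, worst)
    return best
```

For instance, `loc("aba")` is 1 (mark `b` first, then `a`), while `loc("abab")` is 2.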
In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). Next, in Section 3, we discuss the concept of the locality number with some examples and some word combinatorial considerations. The purpose of this section is to develop a better under...
C
The same authors have also trained the previous CNN architecture to identify shockable and non-shockable ventricular arrhythmias [104], to identify CAD patients with FAN and INDB [105], and to classify CHF with CHFDB, NSTDB and FAN [106], and have also tested its noise resistance with WT denoising [107].
They introduced a task formulation that segments ECG into heartbeats to reduce the number of time steps per sequence. They also extended the RNNs with an attention mechanism that enables them to reason about which heartbeats the RNNs focus on to make their decisions, and achieved performance comparable to the state of the art usi...
Zubair et al.[75] detected the R-peak using a non-linear transformation and formed a beat segment around it. Then, they used the segments to train a three layer 1D CNN with variable learning rate depending on the mean square error and achieved better results than previous state-of-the-art.
In their article Acharya et al.[85] trained a four layer CNN on AFDB, MITDB and CREI, to classify between normal, AF, atrial flutter and ventricular fibrillation. Without detecting the QRS they achieved comparable performance with previous state-of-the-art methods that were based on R-peak detection and feature enginee...
Their method achieved 99.1% sensitivity and 91.6% specificity which are comparable to state-of-the-art methods on the task. Dominguez et al.[110] segmented the signals and preprocessed them using the neuromorphic auditory sensor[120] to decompose the audio information into frequency bands.
A
Using models of environments, or informally giving the agent ability to predict its future, has a fundamental appeal for reinforcement learning. The spectrum of possible applications is vast, including learning policies from the model (Watter et al., 2015; Finn et al., 2016; Finn & Levine, 2017; Ebert et al., 2017; Haf...
We presented SimPLe, a model-based reinforcement learning approach that operates directly on raw pixel observations and learns effective policies to play games in the Atari Learning Environment. Our experiments demonstrate that SimPLe learns to play many of the games with just 100K interactions with the envir...
The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good pol...
Our work advances the state-of-the-art in model-based reinforcement learning by introducing a system that, to our knowledge, is the first to successfully handle a variety of challenging games in the ALE benchmark. To that end, we experiment with several stochastic video prediction techniques, including a novel model b...
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ...
C
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data making them scalable and genera...
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals.
This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ...
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data making them scalable and genera...
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para...
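A toy example of an S2I module with no trainable parameters (hypothetical, ours; the paper's actual modules may differ, e.g. spectrogram-based or learned ones) simply reshapes and normalizes the 1D signal into a 2D array:

```python
import numpy as np

# Hypothetical parameter-free S2I module: reshape a 1D signal of n samples
# into a rows x (n // rows) array and min-max normalize it, yielding a
# crude 2D "image" that a base model for imaging problems could consume.
def signal2image(sig, rows=16):
    sig = np.asarray(sig, float)
    win = len(sig) // rows
    img = sig[: rows * win].reshape(rows, win)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)
```

For a 178-sample EEG segment and `rows=16` this produces a 16x11 array with values in [0, 1].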
B
A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measure...
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ...
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal...
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ...
In the literature review, Gorilla [2] is able to switch between bipedal and quadrupedal walking locomotion modes autonomously using criteria developed based on motion efficiency and stability margin. WorkPartner [8] demonstrated its capability to seamlessly transition between two locomotion modes: rolling and rolking....
D
Johnson [18] proved that the competitive ratio of First-Fit and Best-Fit is 1.7. Many other algorithms with improved competitive ratios have been studied. The best known algorithm was introduced by Balogh et al. [6] and has a competitive ratio of at most 1.5783. Moreover, it is known that no online algorithm can achiev...
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-augmented online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ...
maintains bins in the same order that they have been opened, and places an item into the first bin with enough free space; if no such bin exists, it opens a new bin. Best-Fit works similarly, except that it maintains bins in the non-increasing order of their level, where level of a bin is the total size of its items.
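The First-Fit rule described above can be sketched in a few lines (illustrative code, not from the source):

```python
# First-Fit: scan open bins in order of opening; place the item into the
# first bin with enough free space, otherwise open a new bin.
# Returns the list of bin levels (total size packed per bin).
def first_fit(items, capacity=1.0):
    bins = []                              # bins[i] = current level of bin i
    for size in items:
        for i, level in enumerate(bins):
            if level + size <= capacity + 1e-12:
                bins[i] = level + size
                break
        else:                              # no open bin had room
            bins.append(size)
    return bins
```

For example, `first_fit([0.5, 0.7, 0.5, 0.3])` packs the items into two bins.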
An instance of the online bin packing problem consists of a sequence of items with different sizes in the range $(0,1]$, and the objective is to pack these items into a minimum number of bins, each with a capacity of 1. For each arriving item, the algorithm must place it in one of the current bins or ope...
Online bin packing has also been studied in the advice setting [12, 30, 2]. In particular, it is possible to achieve a competitive ratio of 1.4702 with only a constant number of (trusted) advice bits [2]. A restricted version of the bin packing problem, where items take sizes from a discrete set $\{1/k,2/k,\ldots,1\}$...
D
Besides the limitations described in Subsection 5.2, e.g. those caused by not using other information than text for classification, another limitation in the present work is that we used words as the basic building blocks (i.e. each writing was processed as a Bag of Words) on which our approach begins to process other ...
In order to get a better understanding of the rationale behind the good behavior of our framework, it is important to go into more details on the mechanisms used to weight words. In Figure 4 we can empirically corroborate that the global value correctly captures the significance and discriminating power of words since,...
Since the dataset was highly unbalanced we optimized the penalty parameter $C$ ($C>0$) and the class weight parameter $w$ ($w\geq 1$) for SVM and LOGREG; for MNB only the class weight $w$ was varied, while for $K$NN the $K$ param...
In the section “Analysis and Discussion” we could observe that the global value was a good estimator of word relevance for each category. We believe that this ability of global value to weight words could also play an important role as a feature selection method and, therefore, we will compare it against well-known fea...
That is, when $gv$ is applied to a word alone, it outputs a vector in which each component is the global value of that word for each category $c_{i}$. For instance, following the above example, we have:
C
$$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{K}\sum_{k\in[K]}\mathcal{C}(\mathbf{e}_{t+\frac{1}{2},k})$$
We improve DEF-A by changing its local momentum to global momentum, obtaining a new method called GMC+. The details of GMC+ are shown in Algorithm 2. We also adopt the parameter server architecture for illustration. GMC+ can also be easily implemented on all-reduce frameworks.
Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-red...
The details of GMC implemented on the parameter server framework are shown in Algorithm 1. After updating $\mathbf{w}_{t+1}$, the server in GMC will send $\mathbf{w}_{t+1}-\mathbf{w}_{t}$...
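A minimal sketch of this server-side step (the averaging rule and step size below are our assumptions, not taken from Algorithm 1):

```python
# Server-side step sketch: apply the averaged worker updates to the model,
# then broadcast the model difference w_{t+1} - w_t for workers to add to
# their local copies. Update rule and eta are illustrative assumptions.
def server_step(w, worker_updates, eta=0.1):
    avg = [sum(us) / len(worker_updates) for us in zip(*worker_updates)]
    w_next = [wi - eta * ui for wi, ui in zip(w, avg)]
    diff = [a - b for a, b in zip(w_next, w)]   # what workers receive
    return w_next, diff
```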
Since the server is typically the busiest node in the parameter server architecture, we consider the communication cost on the server in our experiments. For DMSGD, which doesn’t use any communication compression techniques, the communication cost on the server includes receiving vectors from the $K$ workers and se...
B
Although ReLU creates exact zeros (unlike its predecessors sigmoid and $\tanh$), its activation map consists of sparsely separated but still dense areas (Fig. 1, ReLU panel) instead of sparse spikes. The same a...
$\phi=ReLU(s)$. The ReLU activation function produces sparsely disconnected but internally dense areas as shown in Fig. 1 (ReLU panel) instead of sparse spikes.
Recently, in $k$-Sparse Autoencoders [21] the authors used an activation function that applies thresholding until the $k$ most active activations remain; however, this non-linearity covers a limited area of the activation map by creating sparsely disconnected dense areas (Fig. 1, top-$k$ panel)...
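The contrast between the two nonlinearities can be sketched as follows (illustrative toy code, not the cited implementation; `k_sparse` here keeps the $k$ largest-magnitude activations, in the spirit of [21]):

```python
import numpy as np

# ReLU zeroes negatives but keeps dense positive regions; a k-sparse
# nonlinearity keeps only the k largest-magnitude activations, producing
# spike-like sparsity.
def relu(s):
    return np.maximum(s, 0.0)

def k_sparse(s, k):
    out = np.zeros_like(s)
    idx = np.argsort(np.abs(s))[-k:]   # indices of the k largest |s|
    out[idx] = s[idx]
    return out
```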
Although ReLU creates exact zeros (unlike its predecessors s⁢i⁢g⁢m⁢o⁢i⁢d𝑠𝑖𝑔𝑚𝑜𝑖𝑑sigmoiditalic_s italic_i italic_g italic_m italic_o italic_i italic_d and tanh\tanhroman_tanh), its activation map consists of sparsely separated but still dense areas (Fig. 1LABEL:sub@subfig:relu) instead of sparse spikes. The same a...
The three separate clusters which are depicted in Fig. 3 and the aggregated density plot in Fig. 4 between the Identity activation function, the ReLU and the rest show the effect of a sparser activation function on the representation.
B
With the rapid commercialization of UAVs, a lot of research has emerged in this field [16]. To deploy UAVs efficiently, studies have been made to determine the UAV distribution on a network graph [9], and a graphical model has been proposed for channel reuse [17]. The resource allocation of channels and time is also a hot are...
Typical wireless protocols 802.11b/g only provide limited channels for users, which is far from enough for high-quality communication services [18]. To reduce the load on the central system, making use of distributed available resources in networks turns out to be an ideal solution. Underlay Device-to-Device (D2D) co...
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR, and coverage size are key factors deciding the performance of a UAV network [6]. Power control determines the energy consumption and the signal-to-noise ratio (SNR) ...
Catastrophic natural and man-made disasters, such as earthquakes, typhoons, and wars, usually involve great loss of life, property, and historical interest across vast areas. Though sometimes unavoidable, the loss of life and property can be effectively reduced if proper disaster management is implemented. Sinc...
To support the communication mission, all UAVs are required to cooperate and support the user communication in need. UAVs work above the post-disaster area $D$. If a user ($\mathrm{User}_1$) needs to communicate with another user ($\mathrm{User}_2$...
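How transmit power and altitude jointly shape SNR and coverage can be sketched with a free-space path-loss model; every numeric value below (frequency, noise floor, thresholds) is an illustrative assumption, not taken from the paper.

```python
import math

def snr_db(p_tx_dbm, h_m, r_m, freq_hz=2.4e9, noise_dbm=-96.0):
    # Free-space path loss over the slant range from a UAV at altitude h_m
    # to a ground user at horizontal distance r_m (illustrative model).
    d = math.hypot(h_m, r_m)
    fspl_db = 20 * math.log10(4 * math.pi * d * freq_hz / 3e8)
    return p_tx_dbm - fspl_db - noise_dbm

def coverage_radius(p_tx_dbm, h_m, snr_min_db, step=1.0):
    # Largest horizontal distance at which the SNR threshold is still met.
    r = 0.0
    while snr_db(p_tx_dbm, h_m, r + step) >= snr_min_db:
        r += step
    return r
```

Raising the required SNR shrinks the coverage disk, which is the power/coverage trade-off the model has to balance.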
A
$\cdots\,\nabla f+\mathbf{q}_{i}+\mathbf{q}_{e}+\underline{\boldsymbol{\pi}}\cdot\mathbf{v}\bigr)+\frac{f^{2}}{\mu_{0}r^{2}}\,\mathbf{v}-\cdots$
integral over the final expression for $\dot{u}_{\mathrm{total}}$ over the system volume, and applying Gauss's theorem, it can be seen how total
3.1.1, the continuous form of $\dot{u}_{\mathrm{Total}}$. The poloidal magnetic energy is expressed in terms of the element-centered gradient
In the expression for $\dot{\overline{p}}_{i}$, $\overline{Q}_{\pi}$
$\widehat{\mathbf{P}}=\widehat{\mu}\,\widehat{r}^{2}\bigl(\overline{\widehat{\nabla}}\,\overline{\omega}\bigr)$, and the terms in the final set obv...
A
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$...
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$...
Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality) by $h_{A}$...
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
$f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\ v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$
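The case-defined comparability function above translates directly into code; representing null as Python's `None` and fixing the truth degrees `a` and `b` at 0.5 are assumptions made only for illustration.

```python
NULL = None  # stand-in for the database null value

def comparability(u, v, a=0.5, b=0.5):
    # f_A = f_B from the case definition: a and b are the truth degrees
    # for "distinct non-null values" and "both values null" (illustrative).
    if u is not NULL and u == v:
        return 1
    if u is not NULL and v is not NULL and u != v:
        return a
    if u is NULL and v is NULL:
        return b
    return 0  # one value null, the other not
```

Note that `comparability(NULL, NULL)` returns `b` rather than 1, which is exactly the relaxed reflexivity on null discussed above.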
C
Reinforcement Learning (RL) is a learning paradigm that addresses learning through interaction with an environment. This is a fundamentally different approach from the other learning paradigms studied in Machine Learning, namely supervised and unsupervised learning. Rein...
In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e...
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to assess the effect of Dropout on the variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to assess the effect of Dropout, because in such an environment the optim...
Reinforcement Learning (RL) is a learning paradigm that addresses learning through interaction with an environment. This is a fundamentally different approach from the other learning paradigms studied in Machine Learning, namely supervised and unsupervised learning. Rein...
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene...
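A minimal numpy sketch of the kind of forward pass such an extension implies, a Q-network with inverted dropout on the hidden layer; the layer sizes, dropout rate, and state dimension are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

W1 = rng.normal(scale=0.1, size=(4, 32))  # state dim 4 -> 32 hidden units
W2 = rng.normal(scale=0.1, size=(32, 2))  # hidden -> 2 actions

def q_values(state, drop_p=0.2, train=True):
    h = np.maximum(state @ W1, 0.0)        # ReLU hidden layer
    if train and drop_p > 0.0:
        mask = rng.random(h.shape) >= drop_p
        h = h * mask / (1.0 - drop_p)      # inverted dropout: rescale kept units
    return h @ W2                          # one Q-value per action

state = rng.normal(size=(1, 4))
q_train = q_values(state, train=True)      # stochastic during training
q_eval = q_values(state, train=False)      # deterministic at evaluation
```

During training the random masks perturb the Q-estimates (an implicit ensemble), while evaluation uses the full deterministic network.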
D
Vorontsov et al. (2019), using a dataset defined in Cohen et al. (2018), proposed an image-to-image framework to transform an input image with an object of interest (presence domain), such as a tumor, into an image without the tumor (absence domain), i.e., translating a diseased image into a healthy one; next, their model learns to add ...
Several modified versions (e.g. deeper/shallower, adding extra attention blocks) of encoder-decoder networks have been applied to semantic segmentation (Amirul Islam et al., 2017; Fu et al., 2019b; Lin et al., 2017a; Peng et al., 2017; Pohlen et al., 2017; Wojna et al., 2017; Zhang et al., 2018d). Recently in 2018, De...
V-Net (Milletari et al., 2016) and FCN (Long et al., 2015). Sinha and Dolz (2019) proposed a multi-level attention based architecture for abdominal organ segmentation from MRI images.  Qin et al. (2018) proposed a dilated convolution base block to preserve more detailed attention in 3D medical image segmentation. Simil...
Khosravan et al. (2019) proposed an adversarial training framework for pancreas segmentation from CT scans. Son et al. (2017) applied GANs for retinal image segmentation. Xue et al. (2018) used a fully convolutional network as a segmenter in the generative adversarial framework to segment brain tumors from MRI images....
The standard CE loss function and its weighted versions, as discussed in Section 4, have been applied to numerous medical image segmentation problems (Isensee et al., 2019; Li et al., 2019b; Lian et al., 2018; Ni et al., 2019; Nie et al., 2018; Oktay et al., 2018; Schlemper et al., 2019). However, Milletari et al. (20...
B
Interestingly, the Dense architecture achieves the best performance on MUTAG, indicating that in this case the connectivity of the graphs does not carry useful information for the classification task. The performance of the Flat baseline indicates that in Enzymes and COLLAB pooling operations are not necessary to impro...
When compared to other methods for graph pooling, NDP performs significantly better than other techniques that pre-compute the topology of the coarsened graphs, while it achieves a comparable performance with respect to state-of-the-art feature-based pooling methods.
In contrast to graph classification, DiffPool and Top$K$ fail to solve this task, achieving an accuracy comparable to random guessing. The topological pooling methods, by contrast, obtain an accuracy close to that of a classical CNN, with NDP significantly outperforming the other two techniques.
Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d) the edges of the Laplacians at coarsening level 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP.
In Fig. 7, we report the training time for the five different pooling methods. As expected, GNNs configured with GRACLUS, NMF, and NDP are much faster to train than those based on DiffPool and Top$K$, with NDP being slightly faster than the other two topological methods.
D
Mapping random forests into neural networks is already used in many applications such as network initialization (Humbird et al., 2019), camera localization (Massiceti et al., 2017), object detection (Reinders et al., 2018, 2019), or semantic segmentation (Richmond et al., 2016). State-of-the-art methods (Massiceti et a...
These techniques, however, are only applicable to trees of limited depth. As the number of nodes grows exponentially with the increasing depth of the trees, inefficient representations are created, causing extremely high memory consumption. In this work, we address this issue by proposing an imitation learning-based me...
First, we analyze the performance of state-of-the-art methods for mapping random forests into neural networks and neural random forest imitation. The results are shown in Figure 4 for different numbers of training examples per class. For each method, the average number of parameters of the generated networks across all...
Additionally, the experiment shows that the training is very robust to overfitting even when the number of parameters in the network increases. When combining the generated data and original data, the accuracy on Car and Covertype improves with an increasing number of training examples.
The number of parameters of the networks becomes enormous as the number of nodes grows exponentially with the increasing depth of the decision trees. Additionally, many weights are set to zero so that an inefficient representation is created. Due to both reasons, the mappings do not scale and are only applicable to sim...
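The exponential blow-up can be made concrete with a rough parameter count for the classic mapping that spends one hidden neuron per split node and one per leaf; the feature and class counts below are arbitrary, and the count ignores biases.

```python
def mapped_network_size(depth, n_features, n_classes):
    # A complete binary tree of depth d has 2**d - 1 split nodes and
    # 2**d leaves; the naive mapping uses one hidden neuron per split
    # (layer 1) and one per leaf (layer 2), all densely connected.
    splits = 2**depth - 1
    leaves = 2**depth
    return (n_features * splits) + (splits * leaves) + (leaves * n_classes)

sizes = [mapped_network_size(d, n_features=10, n_classes=3) for d in (5, 10, 20)]
```

Already at depth 20 the split-to-leaf layer alone exceeds $10^{12}$ weights, which is why such mappings only scale to shallow trees.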
D
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ...
step with $\alpha\rightarrow\infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K=H$ episodes and hence equivalently induces...
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt...
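The $\alpha\rightarrow\infty$ limit mentioned above, i.e. exact policy iteration, can be illustrated on a toy discounted MDP; the 3-state, 2-action transition and reward tables below are made up purely for the sketch.

```python
import numpy as np

P = np.array([  # P[a, s, s']: transition probabilities for each action
    [[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.1, 0.0, 0.9]],
    [[0.1, 0.9, 0.0], [0.1, 0.0, 0.9], [0.0, 0.1, 0.9]],
])
R = np.array([[0.0, 1.0], [0.5, 0.0], [1.0, 0.2]])  # R[s, a]
gamma = 0.9

def policy_iteration():
    pi = np.zeros(3, dtype=int)
    for it in range(100):
        # Exact policy evaluation: solve (I - gamma * P_pi) v = r_pi.
        P_pi = P[pi, np.arange(3), :]
        r_pi = R[np.arange(3), pi]
        v = np.linalg.solve(np.eye(3) - gamma * P_pi, r_pi)
        # Greedy improvement step.
        q = R + gamma * np.einsum('ast,t->sa', P, v)
        new_pi = np.argmax(q, axis=1)
        if np.array_equal(new_pi, pi):
            return pi, v, it + 1
        pi = new_pi

pi_star, v_star, iters = policy_iteration()  # converges in a few iterations
```

With exact evaluation, each improvement step is monotone, so the loop terminates after at most $|A|^{|S|}$ policies, here within a handful of iterations.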
B
The authors hypothesize that identity mappings play an important role. They argue that it is easier to model identity mappings in ResNets by simply setting all the weights of the residual path to zero instead of simulating them by adapting the weights of several consecutive layers in an intertwined way.
InceptionNet (or, equivalently, GoogLeNet) (Szegedy et al., 2015) won the ILSVRC14 challenge with 6.7% Top-5 error with an even deeper architecture consisting of 22 layers. The main feature of this architecture is the inception module, which combines the outputs of $1\times 1$, $3\times 3$, and $5\times 5$...
This controller RNN is trained with reinforcement learning to generate well performing architectures using the validation error on a held-out validation set as a reward signal. However, the training effort is enormous since more than 10,000 training runs are required to achieve state-of-the-art performance on CIFAR-10.
In any case, the skip connections reduce the vanishing gradient problem during training and enable extremely deep architectures of up to 152 layers on ImageNet and even up to 1,000 layers on CIFAR-10. ResNet won the ILSVRC15 challenge with 3.6% Top-5 error.
Inspired by ResNets, whose skip connections have been shown to reduce the vanishing gradient problem, densely connected CNNs (DenseNets), introduced by Huang et al. (2017), drive this idea even further by connecting each layer to all previous layers. DenseNets are conceptually very similar to ResNets: instead of adding the outp...
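The structural difference described here, residual addition versus dense concatenation, can be sketched with plain numpy linear layers (untrained, with illustrative shapes standing in for conv blocks):

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, out_dim):
    # An untrained ReLU "layer" standing in for a conv block.
    w = rng.normal(scale=0.1, size=(x.shape[-1], out_dim))
    return np.maximum(x @ w, 0.0)

x = rng.normal(size=(1, 8))

# ResNet-style: the block's output is *added*, so the width stays at 8.
res_out = x + layer(x, 8)

# DenseNet-style: each layer sees *all* previous feature maps, and its
# output is concatenated, so the width grows by the growth rate (here 4).
features = [x]
for _ in range(3):
    h = layer(np.concatenate(features, axis=-1), 4)
    features.append(h)
dense_out = np.concatenate(features, axis=-1)
```

Addition preserves the feature width, while concatenation grows it by the growth rate at every layer, which is the defining DenseNet trait.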
C
so that $\gamma_{K}(g,f,t)\in B_{s+\delta}(X,L^{\infty}(X))$...
In this section, we recall the notions of spread and filling radius, as well as their relationship. In particular, we prove a number of statements about the filling radius of a closed connected manifold. Moreover, we consider a generalization of the filling radius and also define a strong notion of filling radius whic...
By invoking the relationship between the Vietoris-Rips persistent homology and the strong filling radius, one can verify that the strong filling radii of two $n$-dimensional metric manifolds $M$ and $N$ are close if these two manifolds are similar in the Gromov-Hausdorff distance sense.
The goal of this section is to provide some partial results regarding the structure of $\mathrm{barc}^{\mathrm{VR}}_{\ast}(\cdot)$ for non-smooth spaces; see Figure 12. In ord...
Now, we recall the notion of filling radius, an invariant for closed connected manifolds introduced by Gromov [46, pg. 8] in the course of proving the systolic inequality (see also [58] for a comprehensive treatment). It turns out that this notion can be a bridge between topological data analysis and differential...
D
Figure 2: Hyper-parameter exploration (presented in a dialog at the beginning of an analytical session), with 25 representative projections from a pool of 500 alternatives obtained through a grid search. Five quality metrics, plus their Quality Metrics Average (QMA), are also displayed to support the visual analysis. ...
The main view of the tool (Figure 1(f)) presents the t-SNE results as an interactive scatterplot, with specific mappings on the points’ colors and sizes (see Subsection 4.3 for details). There are four Interaction Modes (Figure 1(h)) for this view, as described next. The first (and default) mode—t-SNE Points Explorati...
The implemented views are a mix of adapted and improved classic techniques (e.g., our Shepard Heatmap and Adaptive Parallel Coordinates Plot (PCP)), new proposals (e.g., the Dimension Correlation view), and standard visual mappings with information that is usually hidden or lost after the projection is created (e.g., D...
After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ...
Figure 1: Visual inspection of t-SNE results with t-viSNE: (a) a panel for uploading data sets, choosing between two execution modes (grid search or a single set of parameters), and storing new (or loading previous) executions; (b) overview of the results with data-specific labels encoded with categorical colors; (c) t...
A
Considering the classifications obtained in our study, we have critically examined the reviewed literature classification in the different taxonomies proposed in this work. The goal is to analyze if there is a relationship between the algorithms classified in the same category in one taxonomy and their classification ...
The first analysis focuses on taxonomies. Specifically, we provide several recommendations to improve research practices in this area. The growing number of nature-inspired proposals could be seen as a symptom of the active status of this field; however, its sharp evolution suggests that research efforts should be als...
Both taxonomies and the analysis provide a full overview of the situation of the bio-inspired optimization field. However, Figure 1 reflects the research interest in this field, as the number of papers continues to grow. We believe that it is essential to highlight and reflect on what is expected ...
The role of bio-inspired algorithms in competitions: Finally, we also stress the fact that the metaheuristic algorithms that have scored best in many competitions are far from being biologically inspired, although some of them retain their nature-inspired roots (mostly DE) [44]. This fact was expected given the lack of g...
We should pause and reflect on which research directions should be pursued in the future in regard to bio-inspired optimization and related areas, as there are other remarkable fields to be noted as direct applications for bio-inspired optimization. In [3], the authors show a full discussion of the status of the field ...
A
To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, $T$, is set to 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo...
(1) By extending generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec...
Classical clustering models work poorly on large-scale datasets. By contrast, DEC and SpectralNet work better on large-scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph-type datasets, they fail on the general datasets, which is probably caused by the fact that the graph...
(3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while the other deep clustering models usually fail when the training set is not large enough. Besides, it is insensitive to the initialization of parameters and needs no pretraining.
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which is not provided by $k$-means. Therefore,...
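The graph that such methods build to capture manifold structure is typically a $k$-nearest-neighbour adjacency; a minimal sketch on toy two-cluster data (the value of $k$ and the data are arbitrary assumptions):

```python
import numpy as np

def knn_adjacency(X, k):
    # Symmetric k-nearest-neighbour adjacency matrix (no self-loops).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]
    A = np.zeros((len(X), len(X)))
    rows = np.arange(len(X))[:, None]
    A[rows, nn] = 1.0
    return np.maximum(A, A.T)  # keep an edge if either endpoint picked it

rng = np.random.default_rng(3)
X = np.vstack([np.zeros((5, 2)), 10 + np.zeros((5, 2))])
X = X + rng.normal(scale=0.1, size=(10, 2))  # two well-separated clusters
A = knn_adjacency(X, k=3)
```

On well-separated clusters the adjacency contains no cross-cluster edges, so the graph already encodes the cluster structure that $k$-means would have to rediscover in Euclidean space.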
B
This method detects lack of ingress filtering only on provider ASes (i.e., spoofable customer ASes cannot be detected). The study in (Lone et al., 2017) identified loops in 1,780 ASes, which is 3.2% of all the ASes, and 703 of the ASes were found spoofable. Although a valuable complementary technique for active probes ...
(Lichtblau et al., 2017) developed a methodology to passively detect spoofed packets in traces recorded at a European IXP connecting 700 networks. The limitation of this approach is that it requires cooperation of the IXP to perform the analysis over the traffic and applies only to networks connected to the IXP. Allow...
• Consent of the scanned. It is often impossible to request permission from the owners of all the tested networks in advance; this challenge similarly applies to other Internet-wide studies (Lyon, 2009; Durumeric et al., 2013, 2014; Kührer et al., 2014). Like the other studies (Durumeric et al., 2013, 2014), we ...
The measurement methodology underlying SMap uses active probes, sent from both spoofed and real source IP addresses to popular services on the tested networks. The spoofed source IP addresses belong to the tested networks (similarly to the Spoofer Project (Beverly and Bauer, 2005)). The idea behind our met...
Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 20...
A
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal...
Second, skill NN and context+skill NN models were compared. The context-based network extracts features from preceding batches in sequence in order to model how the sensors drift over time. When added to the feedforward NN representation, such contextual information resulted in improved ability to compensate for senso...
The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer....
For each batch $T$ from 3 through 10, the batches $1,2,\ldots,T-1$ were used to train skill NN and context+skill NN models for 30 random initializations of the starting weights. The accuracy was measured by classifying examples from batch $T$ (Fig. 3A, Table 1, Skill...
While context did introduce more parameters to the model (7,575 parameters without context versus 14,315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer ...
A
Now we can define the tables $A^{(1)}$, $A^{(2)}$ and $A^{(3)}$ that our algorithm uses. Recall that for...
$A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A^{(1)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}^{(1)}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A^{(2)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}^{(2)}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M$.
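A toy version of such a table entry can be computed by brute force: enumerate all perfect matchings on a small boundary set $B$ and record a cost per matching. The four-vertex instance and its pair costs are made up, and a real representative set would additionally prune dominated matchings.

```python
def perfect_matchings(points):
    # Recursively enumerate all perfect matchings of an even-sized list.
    if not points:
        yield frozenset()
        return
    first, rest = points[0], points[1:]
    for i, other in enumerate(rest):
        pair = frozenset({first, other})
        remaining = rest[:i] + rest[i + 1:]
        for m in perfect_matchings(remaining):
            yield m | {pair}

def matching_table(points, cost):
    # Map each perfect matching M to its minimum total cost x
    # (a brute-force stand-in for the table entries A[i, B]).
    table = {}
    for m in perfect_matchings(points):
        x = sum(cost[tuple(sorted(p))] for p in m)
        table[m] = min(x, table.get(m, x))
    return table

# 4 boundary vertices admit exactly 3 perfect matchings.
cost = {(0, 1): 1.0, (0, 2): 2.0, (0, 3): 2.5,
        (1, 2): 1.5, (1, 3): 3.0, (2, 3): 0.5}
table = matching_table([0, 1, 2, 3], cost)
```

The dynamic program keeps only one cheapest value $x$ per matching $M$, which is what allows the tables to stay small across the levels $i$.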
B
While we define the congruence over $Q^{*}$, we are only interested in the generated semigroup and let $\Sigma(\mathcal{A})=Q^{+}/{=}_{\mathcal{A}}$...
A semigroup arising in this way is called self-similar. Furthermore, if the generating automaton is finite, it is an automaton semigroup. If the generating automaton is additionally complete, we speak of a completely self-similar semigroup or of a complete automaton semigroup.
Let $S$ be a (completely) self-similar semigroup. Then $S\star t^{+}$ is (completely) self-similar. Furthermore, if $S$ is a (complete) automaton semigroup, then so is $S\star t^{+}$.
Let $S$ be a (completely) self-similar semigroup and let $T$ be a finite or free semigroup. Then $S\star T$ is (completely) self-similar. If, furthermore, $S$ is a (complete) automaton semigroup, then so is $S\star T$.
from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups (note that the c...
A
Visual Question Answering (VQA) Antol et al. (2015), the task of answering questions about visual content, was proposed to facilitate the development of models with human-like visual and linguistic understanding. However, existing VQA models often exploit superficial statistical biases to produce responses, instead of ...
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea...
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende...
Following Selvaraju et al. (2019), we train HINT on the subset with human-based attention maps Das et al. (2017), which are available for 9% of the VQA-CPv2 train and test sets. The same subset is used for VQAv2 too. The learning rate is set to $2\times 10^{-5}$...
Without additional regularization, existing VQA models, such as the baseline model used in this work, UpDn Anderson et al. (2018), tend to rely on the linguistic priors $P(a\mid\mathcal{Q})$ to answer questions. Such models fail on VQA-CP because the priors in ...
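The prior-exploitation failure mode can be illustrated with a toy counter-based stand-in for $P(a\mid\mathcal{Q})$; the question types, answers, and train/test split below are entirely made up.

```python
from collections import Counter, defaultdict

train = [("what color", "red"), ("what color", "red"), ("what color", "blue"),
         ("how many", "2"), ("how many", "2"), ("how many", "3")]
test = [("what color", "blue"), ("what color", "green"),
        ("how many", "1"), ("how many", "3")]  # shifted answer distribution

def priors(pairs):
    # Most frequent answer per question type: argmax of P(a | Q-type).
    by_type = defaultdict(Counter)
    for qtype, ans in pairs:
        by_type[qtype][ans] += 1
    return {q: c.most_common(1)[0][0] for q, c in by_type.items()}

prior = priors(train)
train_acc = sum(prior[q] == a for q, a in train) / len(train)
test_acc = sum(prior[q] == a for q, a in test) / len(test)
```

The prior-only predictor looks reasonable on the training split but collapses once the answer distribution changes, which is exactly the VQA-CP setting.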
B
For the URL model, the words in the URL path were extracted and the tf-idf of each term was recorded to create the features (Baykan et al., 2009). As privacy policy URLs tend to be shorter and have fewer path segments than typical URLs, length and the number of path segments were added as features. Since the classes w...
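A minimal sketch of such URL features: tf-idf over path terms plus the two extra length features. The term-splitting regex and the exact weighting are assumptions for illustration, not the cited paper's setup.

```python
import math
import re
from collections import Counter
from urllib.parse import urlparse

def path_terms(url):
    # Split the URL path into lower-cased terms (assumed delimiters).
    return [t for t in re.split(r"[/\-_.]+", urlparse(url).path.lower()) if t]

def url_features(urls):
    docs = [path_terms(u) for u in urls]
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    feats = []
    for u, d in zip(urls, docs):
        tf = Counter(d)
        vec = {t: tf[t] * math.log(n / df[t]) for t in tf}  # tf-idf weights
        vec["_length"] = len(u)        # extra feature: URL length
        vec["_segments"] = len(d)      # extra feature: path segment count
        feats.append(vec)
    return feats

urls = ["https://example.com/legal/privacy-policy",
        "https://example.com/blog/2021/cookies"]
feats = url_features(urls)
```

Terms like "privacy" that occur in few URLs get positive tf-idf weight, while the two auxiliary features capture the observation that policy URLs tend to be short.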
Table 2 shows the results for the data practice classification task comparing the performance between RoBERTa, PrivBERT and Polisis (Harkous et al., 2018), a CNN based classification model. We report reproduced results for Polisis since the original paper takes into account both the presence and absence of a label whil...
In order to address the requirement of a language model for the privacy domain, we created PrivBERT. BERT is a contextualized word representation model that is pretrained using bidirectional transformers (Devlin et al., 2019). It was pretrained on the masked language modelling and the next sentence prediction tasks an...
To train the RoBERTa model on the privacy policy classification task, we used the sequence classification head of the pretrained language model from HuggingFace (Wolf et al., 2019). We used the pretrained RoBERTa tokenizer to tokenize the text extracted from the documents. Since RoBERTa accepts a maximum of 512 tokens as i...
We use the byte pair encoding tokenization technique utilized in RoBERTa and retain its cased vocabulary. We did not create a new vocabulary since the two vocabularies are not significantly different and any out-of-vocabulary words can be represented and tuned for the privacy domain using the byte pair encoding vocabu...
C
E2 added that, after some initial training period (because the system could be a bit overwhelming in the beginning), the power of visualization in StackGenVis for supporting the analytical process is impressive. E3 raised the question: “why not select the best, or a set of the best models of an algorithm, according to ...
We answered that the per-class performance is also a very important component, and exploratory visualization can assist in the selection process, as seen in Figure 2(b and c.1). The expert understood the importance of visualization in that situation, compared to not using it.
Figure 2: The exploration process of ML algorithms. View (a.1) summarizes the performance of all available algorithms, and (a.2) the per-class performance based on precision, recall, and f1-score for each algorithm. (b) presents a selection of parameters for KNN in order to boost the per-class performance shown in (c....
Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis. (a) presents the selection of appropriate validation metrics for the specification of the data set. (b) aggregates the information after the exploration of different models and shows the active ones which will be used for th...
Selection of Algorithms and Models. Similar to the workflow described in section 4, we start by setting the most appropriate parameters for the problem (see Figure 6(a)). As the data set is very imbalanced, we emphasize g-mean over accuracy, and ROC AUC over precision and recall. Log loss is disabled because the inves...
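Why g-mean is emphasized over accuracy on a heavily imbalanced set can be shown in a few lines; the labels below are synthetic, and the degenerate classifier simply predicts the majority class.

```python
import math

y_true = [0] * 95 + [1] * 5   # 95:5 class imbalance (synthetic)
y_pred = [0] * 100            # degenerate majority-class classifier

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def g_mean(y_true, y_pred):
    # Geometric mean of the per-class recalls (sensitivity, specificity).
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    pos = sum(t == 1 for t in y_true)
    neg = len(y_true) - pos
    return math.sqrt((tp / pos) * (tn / neg))

acc = accuracy(y_true, y_pred)   # high despite learning nothing
gm = g_mean(y_true, y_pred)      # zero: the minority class is never found
```

Accuracy rewards the degenerate classifier with 95%, while g-mean correctly scores it at zero, which motivates the metric choices above.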
A
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
cannot be adjacent to $\overline{2}$ nor $\overline{3}$, and so $f^{\prime}$ is $[013]$ or $[010]$.
$(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))$, $(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))$...
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as “tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla ...
In this paper, we take an empirical approach to systematically investigate these impacting factors and find when MAML works best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy: RQ1. Since the parameter initialization lear...
The finding suggests that the parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation. Although in the early training stage the performance improves, benefiting from the pre-trained general language model, if the languag...
To answer RQ1, we compare the changing trends of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off (Figure 1). We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
The CCA codebook-based multi-UAV beam tracking scheme with TE awareness. Based on the designed codebook, and by exploiting the Gaussian process (GP) tool, both the position and attitude of UAVs can be rapidly tracked, enabling fast multiuser beam tracking along with dynamic TE estimation. Moreover, the estimated TE is leveraged to...
Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing a new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-base...
For both static and mobile mmWave networks, codebook design is of vital importance for enabling feasible beam tracking and driving the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac...
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da...
Note that some mobile mmWave beam tracking schemes exploiting the position or motion state information (MSI) based on conventional ULA/UPA have been proposed recently. For example, beam tracking is achieved by directly predicting the AOD/AOA through improved Kalman filtering [26]; however, the work of [26] only targe...
The sentences $\textsf{PRES}_{\phi}^{\infty}$ and $\textsf{PRES}_{\phi}$ are as required by Theorem 3.7.
Note that we assume that the number of behavior functions of column $j$ in $A$ is the same as the number of behavior functions of column $j^{\prime}$ in $B$ for every $j\in[m]$ and ever...
a Type-Behavior Partitioned Graph Vector associated to a graph representation $G_{\mathcal{A}}$ for a model $\mathcal{A}$ of $\phi$. The sentence $\textsf{PRES}_{\phi}$...
Note that in a Type-Behavior Partitioned Graph Vector, information about 2-types is coded in both the edge relation and in the partition, since the partition is defined via behavior functions. Thus there are additional dependencies on sizes for a Type-Behavior Partitioned Graph Vector of a model of $\phi$...
We can then consider the vector of subgraphs $G_{\mathcal{A},\pi}$ and $G_{\mathcal{A},\pi,\pi^{\prime}}$...
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe...
The key to our analysis is a mean-field perspective, which allows us to associate the evolution of a finite-dimensional parameter with its limiting counterpart over an infinite-dimensional Wasserstein space (Villani, 2003, 2008; Ambrosio et al., 2008; Ambrosio and Gigli, 2013). Specifically, by exploiting the permutati...
at the mean-field limit with $\epsilon\rightarrow 0^{+}$ and $m\rightarrow\infty$. Such a correspondence allows us to use the PDE solution $\rho_{t}$ in (3....
The proof of Proposition 3.1 is based on the propagation of chaos (Sznitman, 1991; Mei et al., 2018, 2019). In contrast to Mei et al. (2018, 2019), the PDE in (3.4) cannot be cast as a gradient flow, since there does not exist a corresponding energy functional. Thus, their analysis is not directly applicable to our se...
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
We implemented our approach based on the Neutron implementation of the Transformer Xu and Liu (2019). To show the effects of depth-wise LSTMs on the 6-layer Transformer, we first conducted experiments on the WMT 14 English to German and English to French news translation tasks to compare with the Transformer baseline ...
We applied joint Byte-Pair Encoding Sennrich et al. (2016) with 32k merging operations on all data sets to address the unknown word issue. We only kept sentences with a maximum of 256 subword tokens for training. For fair comparison, we did not tune any hyperparameters but followed Vaswani e...
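The 256-subword length filter described above can be sketched as follows; the whitespace tokenizer is a hypothetical stand-in for the trained BPE model:

```python
def filter_by_subword_length(corpus, tokenize, max_tokens=256):
    """Keep only sentence pairs whose tokenization fits the length cap on both sides."""
    return [pair for pair in corpus
            if all(len(tokenize(side)) <= max_tokens for side in pair)]

# Hypothetical whitespace "tokenizer" standing in for a trained BPE model.
toy_tokenize = str.split
corpus = [("a short source", "a short target"),
          ("w " * 300, "long target")]  # 300 tokens on the source side: dropped
kept = filter_by_subword_length(corpus, toy_tokenize, max_tokens=256)
print(len(kept))  # 1
```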
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ...
To test the effectiveness of depth-wise LSTMs in the multilingual setting, we conducted experiments on the challenging massively many-to-many translation task on the OPUS-100 corpus Tiedemann (2012); Aharoni et al. (2019); Zhang et al. (2020). We tested the performance of 6-layer models following the experiment settin...
For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et...
(thus $\llbracket\psi_{\supseteq C_{n}}\rrbracket_{\mathcal{C}}=\{C_{n}\}$) and let
using Claim 4.3. For each $n$, let $\psi_{\supseteq C_{n}}\in\mathsf{EFO}[\upsigma_{\mathcal{G}}]$...
open set in $\uptau_{n}$ for some $n$ that is definable in $\mathsf{EFO}[\upsigma_{\mathcal{G}}]$. Thus the set of finite cycles...
the $(\uptau_{\subseteq_{i}},\mathsf{EFO}[\upsigma_{\mathcal{G}}])$ preserv...
$\psi_{\supseteq P_{n}}\in\mathsf{EFO}[\upsigma_{\mathcal{G}}]$...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even when the amount of training data is only 20% of that used to train the learning model. Note that we use only 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
(2) The layer depths of the backbone networks VGG16, InceptionV3, and ResNet50 are 23, 159, and 168, respectively. These architectures represent different abilities to extract image features. As illustrated in Fig. 6, the distortion parameter estimation achieves the lowest error (0.15) using InceptionV3 as...
Global Perception Module: For the global perception module, its architecture can be divided into two sub-networks, a backbone network, and a header network. Specifically, the general representation of the global distortion context is extracted using the backbone network composed of convolutional layers. This represent...
To evaluate the performance fairly, we employ three common network architectures, VGG16, ResNet50, and InceptionV3, as the backbone networks of the learning model. The proposed MDLD metric is used to express the distortion estimation error due to its unique and fair measurement of the distortion distribution. To be specific...
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
The momentum coefficient is set as 0.9 and the weight decay is set as 0.001. The initial learning rate is selected from $\{0.001, 0.01, 0.1\}$ according to the performance on the validation set. We do not adopt any learning rate decay or warm-up strategies. The model is tra...
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use warm-up and polynomial learning rate strategy.
First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being $B/8$. If $B/8\geq 128$, we will use the gradient accumulation [28] with the batch size being 128. ...
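The gradient-accumulation step mentioned above can be sketched roughly as follows (the toy loss and function names are illustrative, not the actual SNGM/MSGD update):

```python
def accumulated_step(params, micro_batches, grad_fn, lr):
    """One optimizer step whose gradient is averaged over several micro-batches,
    emulating a large effective batch that does not fit in one device's memory."""
    total = [0.0] * len(params)
    for batch in micro_batches:
        g = grad_fn(params, batch)                      # gradient on one micro-batch
        total = [t + gi for t, gi in zip(total, g)]     # accumulate, no update yet
    n = len(micro_batches)
    return [p - lr * t / n for p, t in zip(params, total)]  # single averaged update

# Toy quadratic loss 0.5*(p - x)^2 per sample: its gradient is (p - mean(batch)).
grad_fn = lambda params, batch: [params[0] - sum(batch) / len(batch)]
params = [0.0]
params = accumulated_step(params, [[1.0, 3.0], [5.0, 7.0]], grad_fn, lr=0.5)
print(params)  # [2.0]: half a step toward the overall mean 4.0
```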
We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset (https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/). We set aside 20% of the samples as the test set and divide the rema...
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, we convert any $\rho$-approximation algorithm for the robust outlier problem into a $(\rho+2)$-approximation algorithm for the corresponding two-stage sto...
We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a...
We follow up with 3-approximations for the homogeneous robust outlier MatSup and MuSup problems, which are slight variations on algorithms of [6] (specifically, our approach in Section 4.1 is a variation on their solve-or-cut methods). In Section 5, we describe a 9-approximation algorithm for an inhomogeneous MatSu...
In this section we tackle the simplest problem setting, designing an efficiently-generalizable 3-approximation algorithm for homogeneous 2S-Sup-Poly. To begin, we are given a list of scenarios $Q$ together with their probabilities $p_{A}$,...
The approaches for convex cost functions with bounded or Lipschitz-continuous (sub)gradients rely on the boundedness or Lipschitz continuity of the (sub)gradients, respectively ([4], [7], [13]-[17]). In [13], the gradients of the local cost functions satisfy Lipschitz continuity, in which the key step of analyzing the...
That is, the mean square error at the next time can be controlled by that at the previous time and the consensus error. However, this cannot be obtained for the case with linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded and inequality (28) in [15] does n...
As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between the local optimizers' states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error, which leads the nonnegative supermartingale converg...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi...
(Lemma 3.1). To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (...
Typically, the attributes in microdata can be divided into three categories: (1) Explicit-Identifier (EI, also known as Personally-Identifiable Information), such as name and social security number, which can uniquely or mostly identify the record owner; (2) Quasi-Identifier (QI), such as age, gender and zip code, whi...
Generalization [8, 26] is one of the most widely used privacy-preserving techniques. It transforms the values on QI attributes into general forms, and the tuples with equally generalized values constitute an equivalence group. In this way, records in the same equivalence group are indistinguishable. $k$-Anonym...
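A minimal sketch of equivalence groups and the $k$-anonymity check described above (the generalized table below is hypothetical, modeled loosely on the paper's example, and is not the paper's MuCo method):

```python
from collections import defaultdict

def equivalence_groups(records, qi):
    """Group records by their (generalized) quasi-identifier values."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[a] for a in qi)].append(r)
    return groups

def is_k_anonymous(records, qi, k):
    """Every equivalence group must contain at least k indistinguishable records."""
    return all(len(g) >= k for g in equivalence_groups(records, qi).values())

# Hypothetical generalized microdata: age generalized to a range, zip truncated.
table = [
    {"age": "20-30", "zip": "479**", "disease": "flu"},
    {"age": "20-30", "zip": "479**", "disease": "pneumonia"},
    {"age": "30-40", "zip": "478**", "disease": "flu"},
    {"age": "30-40", "zip": "478**", "disease": "bronchitis"},
]
print(is_k_anonymous(table, ["age", "zip"], k=2))  # True
```

Note that the first group would still leak its sensitive value if both of its disease entries were identical, which is exactly the attribute-disclosure weakness discussed below.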
Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ...
Although the generalization for $k$-anonymity provides enough protection for identities, it is vulnerable to attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by mat...
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and minimum QI values in the equivalence groups, so the equivalence groups only preserv...
We implement PointRend using MMDetection Chen et al. (2019b) and adopt the modifications and tricks mentioned in Section 3.3. Both X101-64x4d and Res2Net101 Gao et al. (2019) are used as our backbones, pretrained on ImageNet only. SGD with momentum 0.9 and weight decay 1e-4 is adopted. The initial learning rate is set...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
As shown in Table 3, all PointRend models achieve promising performance. Even without ensembling, our PointRend baseline, which yields 77.38 mAP, has already achieved 1st place on the test leaderboard. Note that several attempts, like BFP Pang et al. (2019) and EnrichFeat, give no improvement over the PointRend baseline,...
Table 3: PointRend’s performance on the testing set (track B). “EnrichFeat” means enhancing the feature representation of the coarse mask head and point head by increasing the number of fully-connected layers or their hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvement; we guess...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
$$I(f)<1,\quad\text{and}\quad H(|\hat{f}|^{2})>\frac{n}{n+1}\log n.$$
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the arXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base 2. Note that if $f$ has $L_{2}$ norm 1 then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$...
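Under these conventions, the entropy of the Fourier weights can be computed as in this small sketch (the function name is ours):

```python
import math

def spectral_entropy(weights):
    """Shannon entropy (base 2) of Fourier weights, with the 0*log(0) := 0 convention."""
    return -sum(w * math.log2(w) for w in weights if w > 0)

# A function with all of its L2 mass on one coefficient has entropy 0 ...
print(spectral_entropy([1.0]))               # 0.0
# ... while mass spread evenly over 2^n coefficients gives the maximal entropy n.
n = 4
print(spectral_entropy([1 / 2**n] * 2**n))   # 4.0
```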
Figure 1: Comparisons of different methods on cumulative reward under two different environments. The results are averaged over 10 trials and the error bars show the standard deviations. The environment changes abruptly in the left subfigure, whereas the environment changes gradually in the right subfigure.
For the case when the environment changes abruptly $L$ times, our algorithm enjoys an $\tilde{O}(L^{1/3}T^{2/3})$ dy...
From Figure 1, we see LSVI-UCB-Restart with the knowledge of global variation drastically outperforms all other methods designed for stationary environments, in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the $Q$ function with knowledge of the total variatio...
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same, and much lower than those of MASTER, OPT-WLSVI, LSVI-UCB, and Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th...
From Figure 1, we find that the restart strategy works better under abrupt changes than under gradual changes, since the gap between our algorithms and the baseline algorithms designed for stationary environments is larger in this setting. The reason is that the algorithms designed to explore in stationary MDPs are gen...
A series of 1-5 Likert scale questions (1: strongly disagree, 5: strongly agree) were presented to the respondents (in SeenFake-57) to further gain insights into their views on fake news. Respondents feel that the issue of fake news will remain for a long time ($M=4.33$, $SD=0.831$...
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,...
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Out-of-KG entity prediction methods, such as MEAN [19], VN Network [20], and LAN [21], leverage logic rules to infer the missing relationships but do not generate unconditioned entity embeddings for other tasks. These methods share a similar task setting with ours, where all relations are known during training. The new...
Moreover, DAN introduces a distinctive attention mechanism that employs the neighbors of the central entity to evaluate the neighbors themselves. This collective voting mechanism helps mitigate bias and contributes to improved performance, even on traditional tasks. It also distinguishes DAN from other existing inducti...
We present the training procedure of decentRL for entity alignment in Algorithm 1. It is worth noting that decentRL does not rely on additional data such as pretrained KG embeddings or word embeddings. The algorithm first randomly initializes the DAN model, entity embeddings, and relation embeddings. The training proc...
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attem...
Unlike many inductive methods that are solely evaluated on datasets with unseen entities, our method aims to produce high-quality embeddings for both seen and unseen entities across various downstream tasks. To our knowledge, decentRL is the first method capable of generating high-quality embeddings for different down...
To evaluate the robustness of exploration methods, we conduct experiments on sticky Atari games, which introduce stochasticity in Atari games. Following [49], we use a parameter $\tau$ to control the stickiness in Atari games. Specifically, in time step $t$, the environment repeats the agent’s previou...
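The sticky-action mechanism can be sketched as a small wrapper; the class name and interface are hypothetical, and $\tau = 0.5$ here is for illustration only:

```python
import random

class StickyActionEnv:
    """Wrapper sketch: with probability tau the environment repeats the
    agent's previous action instead of the one just issued."""
    def __init__(self, tau, rng=None):
        self.tau = tau
        self.rng = rng or random.Random()
        self.prev_action = None

    def effective_action(self, action):
        if self.prev_action is not None and self.rng.random() < self.tau:
            action = self.prev_action  # the previous action "sticks"
        self.prev_action = action
        return action

env = StickyActionEnv(tau=0.5, rng=random.Random(0))
trace = [env.effective_action(a) for a in [0, 1, 2, 3, 4]]
print(trace)  # [0, 1, 2, 2, 2] with this seed: the last two actions stuck
```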
One reason to perform self-supervised exploration is to adapt the trained explorative agent in similar environments for exploration. To evaluate such adaptability, we conduct experiments on Super Mario. Super Mario has several levels of different scenarios. We take 5 screenshots at each level when playing games, as...
Upon fitting VDM, we propose an intrinsic reward based on an upper bound of the negative log-likelihood, and conduct self-supervised exploration based on the proposed intrinsic reward. We evaluate the proposed method on several challenging image-based tasks, including 1) Atari games, 2) Atari games with sticky actions, which...
To evaluate the adaptability, we further adapt the policies learned at Level 1 to other levels. More specifically, for each method, we first save the last policy when training in Level 1, and then fine-tune this policy in Levels 2 and 3. Since the VDM and RFM methods perform the best in the ...
We first evaluate our method on standard Atari games. Since different methods utilize different intrinsic rewards, the intrinsic rewards are not applicable for measuring the performance of the trained purely exploratory agents. Instead, we follow [11, 13] and use the extrinsic rewards given by the environment to ...
The number of coefficients $|A_{m,n,1}|=\binom{m+n}{n}\in\mathcal{O}(m^{n})$...
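The coefficient count can be checked directly (a sketch; `coeff_count` is our name for it):

```python
from math import comb

# |A_{m,n,1}| = C(m+n, n): the number of multi-indices of total degree <= m
# in n variables, which grows like m^n / n! for fixed dimension n.
def coeff_count(m, n):
    return comb(m + n, n)

print(coeff_count(5, 2))    # 21 coefficients for degree-5 polynomials in 2 variables
print(coeff_count(100, 3))  # 176851
```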
Thus, combining sub-exponential node numbers with exponential approximation rates, interpolation with respect to $l_{2}$-degree polynomials might yield a way of lifting the curse of dimensionality and answering Question 1.
convergence rates for the Runge function, as a prominent example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality....
In any case, any answer to Question 2 that is to be of practical relevance must provide a recipe to construct interpolation nodes $P_{A}$ that allow efficient approximation while resisting the curse of dimensionality in terms of Question 1.
Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to scale sub-exponentially with space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are abl...
$$|\operatorname{IPM}(\mu,\nu)-\operatorname{IPM}(\hat{\mu}_{n},\hat{\nu}_{m})|<\epsilon+2\left[\mathfrak{R}_{n}(\mathcal{F},\mu)+\mathfrak{R}_{m}(\mathcal{F},\nu)\right].$$
The finite-sample convergence of general IPMs between two empirical distributions was established. Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance has a minor dependence on the dimension of target distributions, which alleviates the curse of dimensionality.
In this section, we first discuss the finite-sample guarantee for general IPMs, then a two-sample test can be designed based on this statistical property. Finally, we design a two-sample test based on the projected Wasserstein distance. Omitted proofs can be found in Appendix A.
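A generic two-sample test of the kind described above can be sketched with a permutation calibration; for simplicity the IPM below uses the trivial function class $\{f(t)=t\}$ (a difference of means), not the projected Wasserstein distance of the paper:

```python
import random

def mean_ipm(x, y):
    """IPM with the singleton function class {f(t) = t}: a difference of means."""
    return abs(sum(x) / len(x) - sum(y) / len(y))

def permutation_pvalue(x, y, stat=mean_ipm, trials=1000, rng=None):
    """Two-sample permutation test: the p-value is the fraction of random
    relabelings of the pooled sample whose statistic reaches the observed one."""
    rng = rng or random.Random(0)
    observed = stat(x, y)
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        if stat(pooled[:len(x)], pooled[len(x):]) >= observed:
            hits += 1
    return hits / trials

same = [0.0, 1.0, 2.0, 3.0]
shifted = [10.0, 11.0, 12.0, 13.0]
print(permutation_pvalue(same, shifted) < 0.05)  # True: the shift is detected
```

Any IPM-like statistic can be plugged in for `stat`; the permutation scheme itself is what calibrates the test under the null hypothesis of equal distributions.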
The proof of Proposition 1 essentially follows the one-sample generalization bound in [41, Theorem 3.1]. However, by following the similar proof procedure discussed in [20], we can improve this two-sample finite-sample convergence result when extra assumptions hold, but existing works on IPMs have not inves...
A two-sample test is designed based on this theoretical result, and numerical experiments show that this test outperforms the existing benchmark. In future work, we will study tighter performance guarantees for the projected Wasserstein distance and develop the optimal choice of $k$ to improve the performance ...
VAE-type DGMs use amortized variational inference to learn an approximate posterior $q_{\phi}(H|x)$ by maximizing an evidence lower bound (ELBO) on the log-marginal likelihood of the data under the mod...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs.\footnote{In this exposition we use unsupervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise...}
Amortization of the inference is achieved by parameterising the variational posterior with another deep neural network (called the encoder or the inference network) that outputs the variational posterior parameters as a function of $X$. Thus, after jointly training the encoder and decoder, a VAE model can perf...
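The amortized-inference idea can be sketched as follows; the one-weight "encoder" is a hypothetical stand-in for a deep inference network, and sampling uses the standard reparameterization trick:

```python
import math, random

def encoder(x, w_mu=0.5, w_logvar=-1.0):
    """Hypothetical one-weight 'inference network': maps an input to the
    parameters (mu, log-variance) of the variational posterior q(z | x)."""
    return w_mu * x, w_logvar * abs(x) / 2.0

def sample_posterior(x, rng):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
    so the sampling step stays differentiable in the encoder parameters."""
    mu, logvar = encoder(x)
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * logvar) * eps

rng = random.Random(0)
zs = [sample_posterior(2.0, rng) for _ in range(2000)]
print(round(sum(zs) / len(zs), 2))  # close to mu = 1.0
```

The point of amortization is visible here: one shared function serves every input $x$, rather than optimizing separate posterior parameters per data point.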
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$...
Deep generative models (DGMs) such as variational autoencoders (VAEs) [dayan1995helmholtz, vae, rezende2014stochastic] and generative adversarial networks (GANs) [gan] have enjoyed great success at modeling high dimensional data such as natural images. As the name suggests, DGMs leverage deep learning to model a data g...
B
This window operator calculates the connection between the pie and alpha, or beta, at A and B and transfers it to the right side (A AND B). For output, one can measure by firing a laser onto the pie pin on the resulting side and checking whether it returns to either alpha or beta. The picture shows the c...
The structure-based computer discussed in this paper is based on Boolean algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the logical values True and False as 1 and 0, and mathematically describes digital electrical si...
The structural computer used an inverted signal pair to implement the reversal of a signal (the NOT operation) as a structural transformation, i.e. a twist, and four pins were used for the AND and OR operations, as series and parallel connections were required. However, one can think about whether the four-pin designs are the...
The NOT gate performs logical negation through a single ‘twist’, as in the 4-pin design. To be exact, the position of the middle ground pin is fixed, and the structural transformation swaps the positions of the remaining two true and false pins.
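The twist can be modeled as a pure pin permutation. The following toy sketch (our own illustrative encoding, not the paper's hardware) fixes the middle ground pin and swaps the outer true/false pins:

```python
def twist(pins):
    """Structural NOT on a 3-pin connector (true, ground, false):
    the middle ground pin stays fixed; the twist swaps the outer pins."""
    t, g, f = pins
    return (f, g, t)

# Encode a logic level by which outer pin carries the signal (illustrative).
HIGH = ("signal", "ground", "none")   # logical True
LOW  = ("none", "ground", "signal")   # logical False
```

Applying the twist twice returns the original pin layout, mirroring double negation.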
Optical logic gates can be designed in the same way as in “Implementation of Structural Computer Using Mirrors and Translucent Mirrors”, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the numbering shown in Fig. 5 can be applied to the ...
D
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Although there is a vast literature on the invertibility of polynomials and the construction of inverses of permutation polynomials over $\mathbb{F}$...
The second statement of the theorem gives a necessary and sufficient condition for an element of the set $\Sigma_M$ to be in $\Sigma_f$. If the choice of basis is as in (6), once the s...
The work [19] also provides a computational framework to compute the cycle structure of the permutation polynomial $f$ by constructing a matrix $A(f)$, of dimension $q\times q$, through the coefficients of the (algebraic) powers of $f^k$...
Let the matrix representation of $K_F=\mathbf{K}|_{W}$ in $\mathcal{B}$ be denoted as $M$. (The notation for matrix representation is explained in (8).) Analogous to the univariate case, the...
The first author would like to thank the Department of Electrical Engineering, Indian Institute of Technology - Bombay, as the work was done in full during his tenure as an Institute Post-Doctoral Fellow. The authors would also like to thank the reviewers for their suggestions in the proofs of Lemma 1, Proposition 1 and...
D
For each experimental condition, we simulate 100 multi-view data training sets. For each such data set, we randomly select 10 views. In 5 of those views, we determine all of the features to have a relationship with the outcome. In the other 5 views, we randomly determine 50% of the features to have a relationship with...
For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu \BOthers., \APACyear2012). An exam...
In MVS, the meta-learner takes as input the matrix of cross-validated predictions $\bm{Z}$. To perform view selection, the meta-learner should be chosen such that it returns (potentially) sparse models. The matrix $\bm{Z}$ has a few special characteristics which can be exploited, and which...
Table 2: Results of applying MVS with different meta-learners to the colitis data. ANSV denotes the average number of selected views. H denotes the H measure (Hand, \APACyear2009). In computing the H measure we assume that the misclassification cost is the same for each class. $\hat{\Phi}$...
We apply multi-view stacking to each simulated training set, using logistic ridge regression as the base-learner. Once we obtain the matrix of cross-validated predictions $\bm{Z}$, we apply the seven different meta-learners. To assess classification performance, we generate a matching test set of 1000 ob...
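The pipeline above can be sketched in a few lines of numpy. For brevity this illustration uses closed-form linear ridge in place of logistic ridge, and all function names are our own:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge solution (linear ridge stands in for the
    logistic ridge base-learner of the paper, for brevity)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cross_val_predictions(views, y, n_folds=5, lam=1.0):
    """Build the meta-matrix Z: column v holds the cross-validated
    predictions of the base-learner trained on view v alone."""
    n = len(y)
    idx = np.arange(n)
    folds = np.array_split(idx, n_folds)
    Z = np.zeros((n, len(views)))
    for v, X in enumerate(views):
        for te in folds:
            tr = np.setdiff1d(idx, te)
            w = ridge_fit(X[tr], y[tr], lam)
            Z[te, v] = X[te] @ w
    return Z

# Meta-learner: another (potentially sparse) model fitted on Z; views whose
# meta-coefficients shrink to (near) zero are the candidates for removal.
```

With a sparse meta-learner such as the lasso, view selection amounts to reading off which columns of $\bm{Z}$ receive nonzero coefficients.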
D
In Phase 2, a set of prediction models is trained, one for each variable, using its relevant variables as predictors and itself as the target variable. The goal is to build a set of prediction models $\mathbf{G}=\{g_1,\cdots,g_m\}$...
This phase offers several advantages to DepAD. Firstly, relevant variable selection can eliminate redundant and irrelevant variables from the prediction models, reducing the risk of overfitting and enhancing prediction reliability. Secondly, it speeds up model training and enhances scalability, especially for high-dime...
This phase can utilize off-the-shelf feature selection methods [29, 30] to identify the relevant variables. When choosing a feature selection method, the following factors should be considered: (1) The prediction models used in the prediction model training phase; (2) The interpretability of the selected variables; an...
In this paper, we introduce DepAD, a versatile framework for dependency-based anomaly detection. DepAD offers a general approach to construct effective, scalable, and flexible anomaly detection algorithms by leveraging off-the-shelf feature selection techniques and supervised prediction models for various data types a...
When selecting a predictive model for DepAD, two important aspects need to be considered. Firstly, the method should be versatile and accurate. Versatility means that the method can be used for various data types and deal with different relationship types. Secondly, since the training set may contain anomalies, the prediction mode...
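Under these considerations, a minimal sketch of the DepAD scoring step with ridge models as the per-variable predictors (an illustrative choice, not the framework's prescribed learner) looks as follows:

```python
import numpy as np

def depad_scores(X, relevant, lam=1.0):
    """DepAD-style anomaly scores (illustrative sketch): for each variable j,
    fit a model g_j predicting column j from its relevant variables, then
    score each instance by its average squared prediction error."""
    n, m = X.shape
    errors = np.zeros((n, m))
    for j in range(m):
        R = relevant[j]                       # indices of j's relevant variables
        A, y = X[:, R], X[:, j]
        w = np.linalg.solve(A.T @ A + lam * np.eye(len(R)), A.T @ y)
        errors[:, j] = (y - A @ w) ** 2       # dependency deviation per instance
    return errors.mean(axis=1)                # higher score = more anomalous
```

An instance that violates the learned dependencies receives large residuals and therefore a high score.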
D
Comparison with Filippi et al. [2010] Our setting is different from the standard generalized linear bandit of Filippi et al. [2010]. In our setting, the reward due to an action (assortment) can be dependent on up to $K$ variables ($\theta_*\cdot x_{t,i}$, $i\in\mathcal{Q}_t$...
Algorithm 1 follows the template of optimism in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. Technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
In this section we compare the empirical performance of our proposed algorithm CB-MNL with the previous state of the art in the MNL contextual bandit literature: UCB-MNL [Oh & Iyengar, 2021] and TS-MNL [Oh & Iyengar, 2019] on artificial data. We focus on performance comparison for varying values of the parameter $\kappa$...
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a m...
D
Datasets and evaluation metrics. We present our experimental results on two representative datasets THUMOS-14 (THUMOS for short) [15] and ActivityNet-v1.3 (ActivityNet for short) [7]. THUMOS-14 contains 413 temporally annotated untrimmed videos with 20 action categories, in which 200 videos are for training and 213 vid...
Implementation Details. In order to achieve higher performance, some works directly process video frames and learn features for the task of temporal action localization (TAL) in an end-to-end fashion [24, 42]. However, this places enormous demands on GPU memory and computational capability. Instead, we follow the ...
Table 2: Action localization results on validation set of ActivityNet-v1.3, measured by mAPs (%) at different tIoU thresholds and the average mAP. Our VSGN achieves the state-of-the-art average mAP and the highest mAP for short actions. Note that our VSGN, which uses pre-extracted features without further finetuning, s...
We compare the inference time of different methods on the ActivityNet validation set on a 1080ti GPU in Table 8. Compared to end-to-end frameworks such as PBRNet, the methods using pre-extracted features such as BMN, G-TAD and VSGN can re-use the features extracted for other tasks, and these methods do not introduce c...
We compare the performance of our proposed VSGN to recent representative methods in the literature on the two datasets in Table 1 and Table 2, respectively. On both datasets, VSGN achieves state-of-the-art performance, reaching mAP 52.4% at tIoU 0.5 on THUMOS and average mAP 35.07% on ActivityNet. It significantly outp...
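For reference, the tIoU criterion underlying these mAP numbers is just intersection-over-union applied to time intervals; a minimal implementation:

```python
def temporal_iou(seg_a, seg_b):
    """Temporal IoU between two segments given as (start, end) pairs,
    the overlap criterion behind mAP@tIoU in action localization."""
    (s1, e1), (s2, e2) = seg_a, seg_b
    inter = max(0.0, min(e1, e2) - max(s1, s2))
    union = (e1 - s1) + (e2 - s2) - inter
    return inter / union if union > 0 else 0.0
```

A detection is counted as correct at threshold $t$ if its tIoU with a ground-truth segment is at least $t$.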
A
Thereafter in Section 5, we demonstrate the applicability and usefulness of VisEvol with another real-world data set focusing on biodegradation of molecules. Next, in Section 6, we review the feedback our VA tool obtained during the interview sessions by summing up the experts’ opinions and the limitations that guide u...
In this paper, we presented VisEvol, a VA tool with the aim to support hyperparameter search through evolutionary optimization. With the utilization of multiple coordinated views, we allow users to generate new hyperparameter sets and store the already robust hyperparameters in a majority-voting ensemble. Exploring th...
One common focus of related work is the hyperparameter search for deep learning models. HyperTuner [LCW∗18] is an interactive VA system that enables hyperparameter search by using a multi-class confusion matrix for summarizing the predictions and setting user-defined ranges for multiple validation metrics to filter out...
Numerous techniques exist that try to solve this challenge, such as the well-known grid search, random search [BB12], and Bayesian optimization that belong to the generic type of sequential-based methods [BBBK11, SSW∗16]. Other proposed methods include bandit-based approaches [FKH18, LJD∗17], population-based methods [...
Visualization tools have been implemented for sequential-based, bandit-based, and population-based approaches [PNKC21], and for more straightforward techniques such as grid and random search [LCW∗18]. Evolutionary optimization, however, has not experienced similar consideration by the InfoVis and VA communities, with t...
D
Markov chains and consensus protocols share a close relationship. The rich theory of Markov chains has proven to be valuable in analyzing specific consensus protocols. Notable works such as [23, 24, 25, 26] have leveraged Markov chain theory to provide insights and analysis for consensus protocols.
Consensus protocols, in contrast to Markov chains, operate without the limitations of non-negative nodes and edges or the requirement for the sum of nodes to equal one [18]. This broader scope enables consensus protocols to address a significantly wider range of problem spaces. Therefore, there is a significant interes...
There are comprehensive survey papers that review the research on consensus protocols [19, 20, 21, 22]. In many scenarios, the network topology of the consensus protocol is a switching topology due to failures, formation reconfiguration, or state-dependence. There is a large number of papers that propose consensus prot...
Consensus protocols form an important field of research that has a strong connection with Markov chains [18]. Consensus protocols are a set of rules used in distributed systems to achieve agreement among a group of agents on the value of a variable [19, 20, 21, 22].
B
A disadvantage of synchronisation-based multi-shape matching is that it is a two-stage procedure, where pairwise matchings are obtained in the first stage, and synchronization is assured in the second. With that, the matching results are often suboptimal – even if one reverts to an alternating procedure using a so...
Although multi-matchings obtained by synchronisation procedures are cycle-consistent, the matchings are often spatially non-smooth and noisy, as we illustrate in Sec. 5. From a theoretical point of view, the most appropriate approach for addressing multi-shape matching is based on a unified formulation, where cycle con...
There are various works that particularly target the matching of multiple shapes. In [30, 32], semidefinite programming relaxations are proposed for the multi-shape matching problem. However, due to the employed lifting strategy, which drastically increases the number of variables, these methods are not scalable to lar...
A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisati...
A
On the side of path graphs, we believe that compared to [3, 22], our algorithm provides a simpler and much shorter treatment (the whole explanation is in Section 4). Moreover, it does not need complex data structures, while the algorithm in [3] is based on PQR-trees and the algorithm in [22] is a complex backtracking algorithm...
Directed path graphs are characterized by Gavril [9]; in the same article he also gives the first recognition algorithm, which has $O(n^4)$ time complexity. In the above cited article, Monma and Wei [18] give the second characterizati...
On the side of directed path graphs, at the state of the art, our algorithm is the only one that does not use the results in [4], in which a linear time algorithm is given that is able to establish whether a path graph is a directed path graph too (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ...
On the side of directed path graphs, prior to this paper, it was necessary to implement two algorithms to recognize them: a recognition algorithm for path graphs as in [3, 22], and the algorithm in [4] that in linear time is able to determine whether a path graph is also a directed path graph. Our algorithm directly...
The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prov...
B
In experiments 1(c) and 1(d), we study how the connectivity (i.e., $\rho$, the off-diagonal entries of $P$) across communities under different settings affects the performances of these methods. Fix $(x, n_0) = (0.4, 100)$...
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha...
The numerical results are given in the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly, and both approaches perform better than OCCAM and GeoNMF under the MMSB setting....
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as $\rho$ increases they all perform worse. Under the DCMM model, the mixed Hamming ...
D
For instance, $\mathcal{X}$ can be a torus $\mathbb{T}^d$, which can be viewed as the $d$-dimensional hypercube $[0,1)^d$ where the op...
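Concretely, distances on such a torus take the shorter of the direct and wrap-around gaps per coordinate. A small numpy sketch, illustrative and assuming the $[0,1)^d$ identification above:

```python
import numpy as np

def torus_distance(x, y):
    """Euclidean-style distance on the d-dimensional torus [0,1)^d with
    periodic boundary: per coordinate, take the shorter of the direct
    gap and the wrap-around gap."""
    delta = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    delta = np.minimum(delta, 1.0 - delta)   # wrap-around choice per coordinate
    return float(np.linalg.norm(delta))
```

Points such as $0.1$ and $0.9$ are close under this metric, which is exactly the effect of the periodic boundary condition.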
We specialize to such a structure only for rigorous theoretical analysis, which also appears in other works involving the Wasserstein space (Gräf and Hielscher, 2015). Our results can be readily generalized to a general $\mathcal{X}$ with extra technical care.
In other words, posterior sampling with Langevin MCMC can be posed as a distributional optimization method. Furthermore, in addition to the KL divergence, $F(p)$ in (3.1) also incorporates other $f$-divergences (Csiszár, 1967).
is a geodesic space equipped with a Riemannian metric. To simplify the presentation, in the sequel, we specialize to the case where $\mathcal{X}$ is a compact subset of $\mathbb{R}^d$ with a periodic boundary condition.
artifacts adopted only for theoretical analysis. We present the details of such a modified algorithm in Algorithm 2 in §A. Without these modifications, Algorithm 2 reduces to the general method proposed in Algorithm 1, a deterministic particle-based algorithm, which is more advisable for
A
$\approx \mathcal{R}_i\big(o_{i,t+1},\, o_{i,t},\, a_i,\, \mathbf{a}^{-i}\big),$
Observation. Each agent has its own local observation, including the number of vehicles on each incoming lane and the current phase of the intersection, where a phase is the part of the signal cycle allocated to any combination of traffic movements, as explained in Section 3.1. The observation of agent $i$ is define...
Secondly, even for a specific task, the received rewards and observations are uncertain to the agent, as illustrated in Fig. 1, which makes policy learning unstable and non-convergent. Even if the agent performs the same action on the same observation at different timesteps, the agent may receive different rewards a...
Action. At time $t$, each agent $i$ chooses a phase $\mathtt{p}$ as its action $a_i$, indicating that the traffic signal should be set to phase $\mathtt{p}$. Note that the phases may organize in a sequential ...
For an intersection, the incoming lanes refer to the lanes where the vehicles are about to enter the intersection. In the real world, most intersections are equipped with 4-way entering approaches, but some are 3-way or 5-way intersections. A standard 4-way intersection is shown in Fig. 2, which consists of four approaches...
A
$\mathbf{x}_{k+1} = \mathbf{x}_k - J(\mathbf{x}_k)^{-1}\,\mathbf{f}(\mathbf{x}_k) \quad \text{for} \quad k = 0,1,\ldots$
$J_{\text{rank-}r}(\mathbf{x}_k)^{\dagger}\,\mathbf{f}(\mathbf{x}_k)$
$\mathbf{x}_{k+1} = \mathbf{x}_k - \mathbf{f}_{\mathbf{x}}\big(\mathbf{x}_k,\tilde{t}\,\big)_{\text{rank-}3}^{\dagger}\,\mathbf{f}\big(\mathbf{x}_k,\tilde{t}\,\big), \quad k = 0,1,\ldots$
$\mathbf{x}_{k+1} = \mathbf{x}_k - J_{\text{rank-}r}(\mathbf{x}_k)^{\dagger}\,\mathbf{f}(\mathbf{x}_k) \quad \text{for} \quad k = 0,1,\ldots$
$\mathbf{x}_{k+1} = \mathbf{x}_k - \mathbf{f}_{\mathbf{x}}(\mathbf{x}_k)^{\dagger}\big(\mathbf{f}(\mathbf{x}_k)-\tilde{\mathbf{b}}\big) \quad \text{for } k = 0,1,\cdots$
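A sketch of the rank-$r$ iteration, assuming the rank-$r$ pseudoinverse is obtained by truncating an SVD (the function names are ours; with $r$ equal to the full rank this reduces to the ordinary Newton iteration):

```python
import numpy as np

def rank_r_pinv(J, r):
    """Rank-r pseudoinverse via truncated SVD: keep only the r largest
    singular values and invert those."""
    U, s, Vt = np.linalg.svd(J)
    return Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T

def newton_rank_r(f, jac, x0, r, iters=20):
    """Newton-type iteration x_{k+1} = x_k - J_rank-r(x_k)^+ f(x_k),
    a sketch of the low-rank variant written above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - rank_r_pinv(jac(x), r) @ f(x)
    return x
```

Truncating the SVD regularizes the step when the Jacobian is (nearly) rank-deficient, at the cost of ignoring directions associated with the smallest singular values.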
C
In this work, we focus on the online variant of bin packing, in which the set of items is not known in advance but is rather revealed in the form of a sequence. Upon the arrival of a new item, the online algorithm must either place it into one of the currently open bins, as long as this action does not violate the bin...
We will now use Lemma 2 to prove a more general result that incorporates the prediction error into the analysis. To this end, we will relate the cost of the packing of ProfilePacking to the packing that the algorithm would output if the prediction were error-free, which will allow us to apply the result of Lemma 2. Spe...
We first present and analyze an algorithm called ProfilePacking, that achieves optimal consistency, and is also efficient if the prediction error is relatively small. The algorithm builds on the concept of a profile set, which serves as an approximation of the items that are expected to appear in the sequence, given t...
In order to analyze the performance of an online algorithm, we will rely on the well-established framework of competitive analysis, which provides strict, theoretical performance guarantees against worst-case scenarios. In fact, as stated in (?), bin packing has served as “an early proving ground for this type of analy...
In this setting, the objective is to minimize the expected loss, defined as the difference between the number of bins opened by the algorithm, and the total size of all items normalized by the bin capacity. Ideally, one aims for a loss that is as small as $o(n)$, where $n$ is the nu...
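For contrast with the learning-augmented algorithms discussed above, the classical First-Fit baseline (not ProfilePacking itself) can be stated in a few lines:

```python
def first_fit(items, capacity=1.0):
    """Classical First-Fit for online bin packing: place each arriving
    item into the first open bin with enough room, else open a new bin.
    Returns the list of bins, each a list of item sizes."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity + 1e-12:  # tolerance for float sums
                b.append(item)
                break
        else:
            bins.append([item])  # no open bin fits: open a new one
    return bins
```

First-Fit is a natural benchmark here because it makes each placement irrevocably upon arrival, exactly the online constraint described above.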
C
We examine the generative capabilities of the provided LoCondA model compared to the existing reference approaches. In this experiment, we follow the evaluation protocol provided in (Yang et al., 2019). We use standard measures for this task like Jensen-Shannon Divergence (JSD), coverage (COV), and minimum matching dis...
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperClou...
In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar...
Recently proposed object representations address this pitfall of point clouds by modeling object surfaces with polygonal meshes (Wang et al., 2018; Groueix et al., 2018; Yang et al., 2018b; Spurek et al., 2020a, b). They define a mesh as a set of vertices that are joined with edges in triangles. These triangles create...
In the literature, there exists a huge variety of 3D shape reconstruction models. The most popular ones are dense, pixel-wise depth maps or normal maps (Eigen et al., 2014; Bansal et al., 2016; Bednarik et al., 2018; Tsoli et al., 2019; Zeng et al., 2019), point clouds (Fan et al., 2017; Qi et al., 2017b; Yang et al., 2018...
A
Finally, we show how the proposed method can be applied to the prominent problem of computing Wasserstein barycenters, tackling the instability of regularization-based approaches under small values of the regularization parameter. The idea is based on the saddle point reformulation of the Wasserstein barycenter probl...
Paper organization. This paper is organized as follows. Section 2 presents a saddle point problem of interest along with its decentralized reformulation. In Section 3, we provide the main algorithm of the paper to solve such kind of problems. In Section 4, we present the lower complexity bounds for saddle point problem...
Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem, which we solve by the DMP algorithm. Similarly to Section 3, we consider a SPP in the proximal setup and introduce Lagrangian multipliers for the common variables. However, in t...
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ...
Our technique can be generalized to non-smooth problems by using another variant of the sliding procedure [34, 15, 23]. By using a batching technique, the results can be generalized to stochastic saddle-point problems [15, 23]. Instead of the smooth convex-concave saddle-point problem we can consider general sum-type s...
A
The remainder of this section is dedicated to expressing the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describing an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
The study of cycles of graphs has attracted attention for many years. To mention just three well known results, consider Veblen’s theorem [2], which characterizes graphs whose edges can be written as a disjoint union of cycles; Maclane’s planarity criterion [3], which states that planar graphs are the only ones to admit a 2-ba...
In this section we present some experimental results to reinforce Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric...
The set of cycles of a graph has a vector space structure over $\mathbb{Z}_2$, in the case of undirected graphs, and over $\mathbb{Q}$, in the case of directed graphs [5]. A basis of such a vector space is denoted cycle basis and its dimensio...
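Assuming the standard fact that this dimension equals $m - n + c$ for a graph with $n$ vertices, $m$ edges, and $c$ connected components, a short sketch computes it with union-find:

```python
def cycle_space_dimension(n_vertices, edges):
    """Dimension of the cycle space (cyclomatic number): m - n + c,
    where c is the number of connected components, found via union-find."""
    parent = list(range(n_vertices))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    for u, v in edges:
        parent[find(u)] = find(v)
    c = len({find(u) for u in range(n_vertices)})
    return len(edges) - n_vertices + c
```

Every edge beyond a spanning forest contributes one independent cycle, which is exactly what the formula counts.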
A
$(m+1)$-tuples of $\mathcal{F}$ with nonempty intersection. In other words, $\pi_{m+1}(\mathcal{F})$ is at least $\delta' \stackrel{\mathrm{def}}{=} \rho/\binom{mt}{m+1}$...
If we use Lemma 4.8 in place of Lemma 4.6 in the proof of Theorem 2.1, the hypothesis on the $m$-colored family $\mathcal{F}$ can be weakened. This “improved” Theorem 2.1 can in turn be applied in the proof of Theorem 1.2, yielding the following:
Lemma 4.6 assumes that the $m$-colored family $\mathcal{F}$ has the property that for $0\le j<\dim K$ and for every colorful subfamily $\mathcal{G}$ of $\mathcal{F}$, the $j$th reduced Betti number $\tilde{\beta}_j\big(\bigcap_{F\in\mathcal{G}}F\big)$...
The rest of Section 4.1 is devoted to the proof of Lemma 4.2. The proof first handles the case $k=m$, and then uses it to prove the case $k<m$. Note that for $k>m$ the lemma is trivial, as the chain group contains only a trivial chain and we can ta...
a positive fraction of the $m$-tuples to have a nonempty intersection, where for $\dim K>1$, $m$ is some hypergraph Ramsey number depending on $b$ and $K$. So in order to prove Corollary 1.3 it suffices to show that if a positive fraction of the ...
A
In machine learning (ML), classification is a type of supervised learning where the primary goal is to predict the dependent variable—also known as the target or class label—of every data instance (e.g., rows in a table) given independent features of the data (e.g., columns in a table). Feature engineering is the proce...
The complex nature of feature engineering, occasionally declared as “black art” [2, 28], motivated us to concentrate our effort on addressing the three research questions mentioned above. In this paper, we present a visual analytics (VA) system, called FeatureEnVi (Feature Engineering Visualization, as seen in Fig. 1),...
The rest of this paper is organized as follows. In Section 2, we review automatic feature generation methods, then we continue with feature transformations, and finally, automated and visually-assisted selection of subsets of features. Afterwards, in Section 3, we describe the analytical tasks and design goals for app...
In general, feature engineering can be subdivided into four major processes: (a) feature ideation, (b) feature generation, (c) feature transformation, and (d) feature selection [7, 8]. Feature ideation is the process of coming up with entirely new features from the “raw” data. It is heavily subjective for most applicat...
D
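The four feature engineering processes named above (ideation aside) can be sketched as a toy pipeline. The helper `engineer_features` and all of its concrete choices — pairwise products for generation, z-scoring for transformation, correlation ranking for selection — are illustrative assumptions, not the system described in the text:

```python
import numpy as np

def engineer_features(X, y, top_k=2):
    """Toy pipeline illustrating three of the four processes:
    generation (pairwise products), transformation (z-scoring),
    and selection (keep the top_k features most correlated with y)."""
    n, d = X.shape
    # Feature generation: append pairwise interaction terms.
    inter = [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
    X_gen = np.column_stack([X] + inter) if inter else X
    # Feature transformation: standardize each column.
    X_std = (X_gen - X_gen.mean(axis=0)) / (X_gen.std(axis=0) + 1e-12)
    # Feature selection: rank by absolute Pearson correlation with y.
    corr = np.abs([np.corrcoef(X_std[:, j], y)[0, 1] for j in range(X_std.shape[1])])
    keep = np.argsort(corr)[::-1][:top_k]
    return X_std[:, keep], keep
```

A visually-assisted system such as the one described would let the analyst inspect and override each of these automated steps.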
As expected, adding the global tracking error constraint increases the traversal time, but maintains the maximal deviation within the bounds (see the table in 5). This tracking error constraint results in a dramatic 5-fold decrease of the maximum deviation ∥ê_c∥_∞ ove...
MPC accounts for the real behavior of the machine and the axis drive dynamics can be excited to compensate for the contour error to a big extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following variou...
To reduce the number of times this experimental “oracle” is invoked, we employ Bayesian optimization (BO) [16, 17], which is an effective method for controller tuning [13, 18, 19] and optimization of industrial processes [20]. The constrained Bayesian optimization samples and learns both the objective function and the ...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi...
For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using a Latin hypercube design of experiments. The BO progress is shown in Figure 5, right panel, for the optimization with constraints on the jerk and on the tracking error. Af...
D
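The Latin hypercube initialization mentioned above can be sketched as follows. The helper `latin_hypercube` is a hypothetical stand-in for the authors' design-of-experiments tool, assuming a simple box of parameter bounds: it stratifies each dimension into one interval per sample, draws one point per interval, and shuffles the strata independently per dimension.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Draw an n_samples Latin hypercube over the box 'bounds' (list of (lo, hi))."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # One stratified point per interval in each dimension...
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    # ...then shuffle each dimension's strata independently.
    for j in range(d):
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)
```

Compared with plain random sampling, this guarantees every parameter range is covered evenly even with only 20 initial evaluations.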
Group Upweighting (Up Wt) [55] attempts to mitigate the correlations between y and b_expl. by upweighting the minority patterns. Specifically, each sample (x, y) is assig...
Distributionally Robust Optimization (DRO): DRO [22] minimizes the worst-case expected loss over potential test distributions. Often, such distributions are approximated by sampling from a uniform divergence ball around the train distribution [10, 23, 47]. However, this lacks structured priors about the potential shif...
Group DRO (GDRO) [55] provides DRO with the necessary prior that it must generalize to all groups. Similar to Up Wt, GDRO also uses y and b_expl. to create groups and has been shown to work well ...
Assuming access to the test distribution for model selection is unrealistic and can result in models being right for the wrong reasons [64]. Rather, it is ideal if the methods can generalize without being tuned on the test distribution and we study this ability by comparing models selected through varying tuning distri...
Hyperparameters for each method were chosen using a grid search with unbiased accuracy on each dataset’s validation set. To make this tractable, we first ran a grid search for the learning rate over {10⁻³, 10⁻⁴, 10⁻⁵}...
A
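The two-stage grid search described above — first tuning the learning rate alone, then the remaining hyperparameters — can be sketched with a generic exhaustive search. The helper `grid_search` and the toy `evaluate` function are assumptions for illustration, not the authors' tuning code:

```python
import itertools

def grid_search(evaluate, grid):
    """Exhaustive search over the cartesian product of 'grid' values;
    'evaluate' returns validation accuracy for a config dict."""
    best_cfg, best_acc = None, float("-inf")
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        acc = evaluate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc

# Stage 1: learning rate only; Stage 2 would fix the chosen lr and
# search the remaining hyperparameters on a smaller grid.
```

Splitting the search into stages keeps the number of training runs linear in the grid sizes rather than multiplicative.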
There is no protocol to regulate the cropping procedure. We provide a common cropping procedure here as an example. We let x_i ∈ ℝ² be the x, y-coordinates of th...
Sugano et al. propose to rectify the eye image by rotating the virtual camera to point at the same reference point in the human face [37]. Since they assume that the captured eye image is a plane in 3D space, the rotation of the virtual camera can be performed as a perspective transformation on the image.
The z-axis z_c of the rotated camera coordinate system is defined as the line from the camera to the reference point, where the reference point is usually set as the face center or eye center. It means that the rotated...
Figure 13: A data rectification method [37]. The virtual camera is rotated so that the z-axis points at the reference point and the x-axis is parallel with the x′-axis of the head coordinate system (HCS).
The geometric feature includes the angles between the pupil center as the reference point and the facial landmarks of the eyes and the tip of the nose. The detected facial landmarks can also be used for unsupervised gaze representation learning. Dubey et al.  [83] collect the face images from the web and annotate their...
A
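Under the planarity assumption described above, rotating the virtual camera amounts to warping the image with the homography H = K R K⁻¹, where K is the camera intrinsic matrix and R the rotation. A minimal sketch follows; the helper `rectification_homography` and its choice of x-axis (world-up cross z) are simplifying assumptions for illustration — [37] instead aligns the x-axis with the head coordinate system:

```python
import numpy as np

def rectification_homography(K, z_new):
    """Homography that warps the image as if the camera were rotated so its
    z-axis points along 'z_new' (e.g., toward the face center): H = K R K^-1.
    Degenerate if z_new is parallel to the up vector [0, 1, 0]."""
    z = z_new / np.linalg.norm(z_new)
    # Build an orthonormal basis whose third row is the new z-axis.
    x = np.cross(np.array([0.0, 1.0, 0.0]), z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])  # rows: new camera axes in old camera coordinates
    return K @ R @ np.linalg.inv(K)
```

Applying H with any perspective-warp routine then yields the rectified eye image.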
Once the global histogram is computed, we pass to the classification stage to assign each test image to its identity. To do so, we apply a multilayer perceptron (MLP) classifier, where each face is represented by a term vector. The deep BoF network can be trained using back-propagation and gradient descent. Note that the ...
The efficiency of each pre-trained model depends on its architecture and the abstraction level of the extracted features. When dealing with real masked faces, VGG-16 has achieved the best recognition rate, while ResNet-50 outperformed both VGG-16 and AlexNet on the simulated masked faces. This behavior can be explaine...
To evaluate the proposed method, we carried out experiments on very challenging masked face datasets. In the following, we present the datasets’ content and variations, the experimental results using the quantization of deep features obtained from three pre-trained models, and a comparative study with other state-of-t...
We have tested the face recognizer presented in luttrell2018deep, which achieved a good recognition accuracy on two subsets of the FERET database phillips1998feret. This technique is based on transfer learning (TL), which employs pre-trained models and fine-tunes them to recognize masked faces from RMFRD and SMFRD dat...
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieved a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner. It consists of applying a TL technique to fine-tune the ...
B
Γ′ ⊢ C′ :: Δ
─────────────
Γ ⊢ C, C′ :: Δ
The first rule for → corresponds to the identity rule and copies the contents of one cell into another. The second rule, which is for cut, models computing with futures [Hal85]: it allocates a new cell to be populated by the newly spawned P. Concurrently, Q may read from said new cell, which...
To review SAX, let us make observations about proof-theoretic polarity. In the sequent calculus, inference rules are either invertible—they can be applied at any point in the proof search process, like the right rule for implication—or noninvertible, meaning they can only be applied when the sequent “contains enough information,” ...
Configuration reduction → is given as multiset rewriting rules [CS09] in Figure 4, which replace any subset of a configuration matching the left-hand side with the right-hand side. However, ! indicates objects that persist across reductions. Principal cuts encountered in a configuration are resolved by passing ...
Now, let Γ and Δ be contexts that associate cell addresses to types. The configuration typing judgment given in Figure 3, Γ ⊢ C :: Δ, means that the objects in C are well-typed with sources in Γ and destinations in Δ...
C
We start by analyzing the situation where the owner attempts to passively attack with information gathered from the cloud. There are two ways for the owner to frame the k-th user: one is to embed the k-th user’s fingerprint into any media content, which requires the knowledge of b_k...
We then proceed to analyze the situation of active attack. In fact, the owner cannot launch any effective active attacks (the cloud does not cooperate due to the assumption honest-but-curious), because the direct interaction between the owner and the user in both schemes is strictly limited (there is only one round in ...
The threats considered in this paper come from three entities: users, the owner, and the cloud. First, users are assumed to be malicious, who could illegally redistribute the owner’s media content with the hope that this behavior will not be detected. Second, the owner is also assumed to be malicious, who may try to o...
We start by analyzing the situation where the owner attempts to passively attack with information gathered from the cloud. There are two ways for the owner to frame the k-th user: one is to embed the k-th user’s fingerprint into any media content, which requires the knowledge of b_k...
On the other hand, since the owner has no knowledge of the k-th user’s fingerprint b_k (the user does not transmit any information to the owner other than to request authorization), the embedded G̅w_k...
A
Practically speaking, we found that our approach could achieve high performance when k = 3 and m_1 equals the number of feature fields, which means that in the first layer, we model all pairs of feature interactions.
Note that although we did not present the statistics here, we also tested the influence of the number of attention heads H. The performance of using only one head, i.e., GraphFM(-M), is worse than that of using two, and more attention heads do not lead to improvement of performance but introduce much higher tim...
Figure 1: The overview of GraphFM. The input features are modeled as a graph, where nodes are feature fields, and edges are interactions. At each layer of GraphFM, the edges (beneficial interactions) are first selected by the interaction selection component.
It is worth mentioning that we have also tried to set a threshold to select the edges in the graph, i.e., setting a minimum value for the edge probability of cutting edges off. But the performance is not as good as using a fixed-degree graph. This is reasonable as the edge weights of different nodes’ neighbors are at d...
At their core, GNNs learn node embeddings by iteratively aggregating features from the neighboring nodes, layer by layer. This allows them to explicitly encode high-order relationships between nodes in the embeddings. GNNs have shown great potential for modeling high-order feature interactions for click-through rate pr...
C
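The fixed-degree selection discussed above — keeping each node's k highest-weighted neighbors rather than thresholding edge probabilities — can be sketched as follows. The helper `select_topk_neighbors`, operating on a matrix of attention scores, is a generic illustration rather than the GraphFM implementation:

```python
import numpy as np

def select_topk_neighbors(scores, k):
    """For each node (row) keep only its k highest-scoring neighbors.
    Returns a 0/1 adjacency mask; self-scores on the diagonal are ignored."""
    n = scores.shape[0]
    s = scores.astype(float).copy()
    np.fill_diagonal(s, -np.inf)          # a node is not its own neighbor
    mask = np.zeros_like(s)
    idx = np.argsort(s, axis=1)[:, ::-1][:, :k]  # top-k columns per row
    rows = np.repeat(np.arange(n), k)
    mask[rows, idx.ravel()] = 1.0
    return mask
```

Because every row ends up with exactly k selected edges, all nodes aggregate from the same number of neighbors — the property that a global threshold cannot guarantee.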
h(\mathbf{x}_t) \le h(\mathbf{x}_0)\left(1 - \frac{\mu_f^{\mathcal{L}_0}\,\delta^2}{4\tilde{L}D^2}\right)^{\lceil (t-1)/2 \rceil}.
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of 𝒪(1/t). The idea of the proof is...
When the domain 𝒳 is a polytope, one can obtain linear convergence in primal gap for a generalized self-concordant function using the well-known Away-step Frank-Wolfe (AFW) algorithm [Guélat & Marcotte, 1986, Lacoste-Julien & Jaggi, 2015] shown in Algorithm 5.
We can make use of the proof of convergence in primal gap to prove linear convergence in Frank-Wolfe gap. In order to do so, we recall a quantity formally defined in Kerdreux et al. [2019] but already implicitly used earlier in Lacoste-Julien & Jaggi [2015] as:
We also show improved convergence rates for several variants in various cases of interest and prove that the AFW [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and BPCG Tsuji et al. [2022] algorithms coupled with the backtracking line search of Pedregosa et al. [2020] can achieve linear convergence rates over polytopes wh...
C
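As a minimal illustration of the Frank-Wolfe gap g_t = ⟨∇f(x_t), x_t − v_t⟩ tracked in the convergence statements above, here is vanilla Frank-Wolfe with the agnostic step size 2/(t+2) — a generic sketch, not the AFW or BPCG variants discussed in the text:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=400):
    """Vanilla Frank-Wolfe. 'lmo' is the linear minimization oracle returning
    argmin over the feasible set of <g, v>. Returns the final iterate and the
    smallest Frank-Wolfe gap g_t = <grad(x_t), x_t - v_t> seen during the run."""
    x = x0.astype(float)
    best_gap = np.inf
    for t in range(steps):
        g = grad(x)
        v = lmo(g)
        best_gap = min(best_gap, g @ (x - v))  # FW gap upper-bounds primal gap
        x = x + 2.0 / (t + 2.0) * (v - x)      # agnostic step size
    return x, best_gap
```

For a polytope, the LMO only needs to compare vertices, which is what makes the method projection-free.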
Let limit ≝ 1/ε⁴. If a structure 𝒮_α...
Extend-Active-Paths scans over unmatched edges; note that matched edges are in the algorithm’s memory, so there is no new information to be gained by seeing a matched edge on the stream. Let {u, v} be the current unmatched edge on the stream. Then, the algorithm considers separately g = ...
Informal description: This operation finds an augmenting path containing a given unmatched arc g (without seeing any new edge on the stream) and removes the vertices contained in the structures affected by this augmentation.
Figure 3: In this example, α is a free node, black (full) single-segments are unmatched and black (full) double-segments are matched edges. Assume that the algorithm first explores the red (dashed) path from α to a_4. This ...
In a new pass, for each edge e = {u, v} in the stream, the algorithm checks whether the structure containing u and the structure containing v, if such structures exist, can augment over e. If it is possible, via Augment-and-Clean the algori...
B
Figure 1: Linear convergence of Push-Pull/𝒜ℬ, CPP, and B-CPP with b-bit quantization (b = 2, 4, 6) and Rand-k (k = 5, 10, 20) compressors.
This is reasonable as the compression operator induces additional errors compared to the exact method, and these additional errors could slow down the convergence. Meanwhile, as the value of b or k increases, both CPP and B-CPP speed up since the compression errors decrease.
The existence of compression errors may result in inferior convergence performance compared to uncompressed or centralized algorithms. For example, the methods considered by [41, 42, 43, 44, 45, 46] can only guarantee to reach a neighborhood of the desired solutions when the compression errors exist. QDGD [47] achieves...
Although the additional compression errors slow down the convergence, our design for CPP guarantees that the impact on the convergence rate is relatively small. Therefore, CPP is much more communication-efficient than the exact Push-Pull/𝒜ℬ method.
To reduce the error from compression, some works [48, 49, 50] increase compression accuracy as the iteration count grows to guarantee convergence. However, they still need high communication costs to get highly accurate solutions. Techniques to remedy these increased communication costs include gradient difference compres...
A
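The Rand-k compressor used in the experiments above can be sketched as follows; the d/k rescaling makes the operator unbiased, which is the property compression-error analyses typically rely on. This is a generic sketch, not the CPP implementation:

```python
import numpy as np

def rand_k(x, k, rng):
    """Rand-k compressor: keep k uniformly random coordinates of x, scaled
    by d/k so the operator is unbiased (E[C(x)] = x)."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out
```

Only k floats (plus indices) need to be communicated per vector, at the cost of extra variance that shrinks as k grows — matching the observed speed-up for larger k.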
SPPs cover a wider range of problems than minimization ones and have numerous important practical applications [6]. These include well-known examples from game theory and optimal control [7]. In recent years, saddle point problems have become popular in several other respects.
Furthermore, there are a lot of personalized federated learning problems that utilize a saddle point formulation, in particular Personalized Search Generative Adversarial Networks (PSGANs) [22]. As mentioned in the examples above, saddle point problems often arise as an auxiliary tool for the minimization problem. It turns out ...
One can note a branch of recent work devoted to solving non-smooth problems by reformulating them as saddle point problems [8, 9], as well as applying such approaches to image processing [10, 11]. Recently, significant attention was devoted to saddle problems in machine learning. For example, Generative Adversarial Net...
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms, and derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile...
We adapt the proposed algorithm for training neural networks. We compare our algorithms: the sliding-type method (Algorithm 1) and the local-type method (Algorithm 3). To the best of our knowledge, this is the first work that compares these approaches in the scope of neural networks, as previous studies were limited to simpler...
B
Correlation is achieved via a trusted external entity (correlation device) which samples a joint action from a public CE joint distribution. Each player is given their action in secret. The properties of the CE mean that no individual player is motivated to deviate from the suggested action. If there are deviation ac...
The set of (C)CEs forms a convex polytope, and therefore any strictly convex function could uniquely select amongst this set. The literature only provides one such example: MECE (Ortiz et al., 2007) which has a number of appealing properties, but was found to be slow to solve large games. There is a gap in the literatu...
There are two important solution concepts in the space of CEs. The first is Maximum Welfare Correlated Equilibrium (MWCE), which is defined as the CE that maximises the sum of all players’ payoffs. An MWCE can be obtained by solving a linear program; however, the MWCE may not be unique and therefore does not fully solve ...
Figure 1: The solution landscape for the traffic lights game. The solid polytope shows the space of CE joint strategies, and the dotted surface shows factorizable joint strategies. NEs are where the surface and polytope intersect. There are three unsatisfying NEs: mixed spends most of its time waiting and does not avoi...
This highlights the main drawback of MW(C)CE which does not select for unique solutions (for example, in constant-sum games all solutions have maximum welfare). One selection criterion for NEs is maximum entropy Nash equilibrium (MENE) (Balduzzi et al., 2018), however outside of the two-player constant-sum setting, th...
B
Given η > 0 and a query q, the Gaussian mechanism with noise parameter η returns its empirical mean q(s) after adding a random value, sampled from an unbiased Gaussian distribution with variance η²...
Since achieving posterior accuracy is relatively straightforward, guaranteeing Bayes stability is the main challenge in leveraging this theorem to achieve distribution accuracy with respect to adaptively chosen queries. The following lemma gives a useful and intuitive characterization of the quantity that the Bayes sta...
Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific q and K(·, v) as discussed in Section 6. The second part of this lemma implies that bounding the appropriate divergence is necessary and sufficient...
In order to leverage Lemma 3.5, we need a stability notion that implies Bayes stability of query responses in a manner that depends on the actual datasets and the actual queries (not just the worst case). In this section we propose such a notion and prove several key properties of it. Missing proofs from this section ...
In this section, we give a clean, new characterization of the harms of adaptivity. Our goal is to bound the distribution error of a mechanism that responds to queries generated by an adaptive analyst. This bound will be achieved via a triangle inequality, by bounding both the posterior accuracy and the Bayes stability ...
D
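The Gaussian mechanism described above admits a short sketch (assuming real-valued queries, with q(s) denoting the empirical mean of q over the sample):

```python
import numpy as np

def gaussian_mechanism(sample, query, eta, rng):
    """Return the empirical mean of 'query' over 'sample' plus N(0, eta^2) noise."""
    vals = np.array([query(s) for s in sample], dtype=float)
    return vals.mean() + rng.normal(0.0, eta)
```

The noise masks the contribution of any single element, which is what lets the stability arguments bound how much an adaptive analyst can learn from each response.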
For each u ∈ χ⁻¹(Ċ) we perform a number of 𝒪(n + m)-time operations and run the dynamic programming algo...
Note that the condition |N_G(F)| ≤ |C| + 1 trivially holds for any single-tree FVC. We will show that, given a reducible FVC (C, F), we can efficiently reduce to a s...
Similar to the algorithm from Lemma 5.8, we can use two (n + m, 𝒪(k⁵z²))-universal sets to create a set of c...
Given a multigraph G and a coloring χ of G that properly colors some simple reducible FVC (C, F), a reducible FVC (C′, F′)...
Using the previous lemmas the problem of finding a reducible single-tree FVC reduces to finding a coloring that properly colors a simple reducible FVC. We generate a set of colorings that is guaranteed to contain at least one such coloring. To generate this set we use the concept of a universal set.
D
Some other shadow generation methods are not designed for our task, i.e., generating shadow for the foreground object in a composite image, but they can be somehow adapted to our task. Mask-ShadowGAN [54] explored conducting shadow removal and shadow generation with unpaired data at the same time, which satisfies cycli...
SGRNet [52] designed a two-stage shadow generation network. In the first stage, foreground features and background features are interacted using cross-attention to predict a shadow mask. In the second stage, they predict shadow parameters which are used to darken the input composite image. Then, the darkened image is c...
Inoue et al. [57] developed a multi-task framework with two decoders accounting for depth map prediction and ambient occlusion map prediction respectively. ARShadowGAN [92] proposed an attention-guided residual network. The network predicts two attention maps for background shadow and occluder respectively, which are c...
Sheng et al. [134] designed a shadow generation network to generate soft shadow for foreground object with user control. They first predict ambient occlusion map, which is jointly used with user-provided light map to produce soft shadow mask. When adapted to our task, an environment light map needs to be inferred from ...
Some other shadow generation methods are not designed for our task, i.e., generating shadow for the foreground object in a composite image, but they can be somehow adapted to our task. Mask-ShadowGAN [54] explored conducting shadow removal and shadow generation with unpaired data at the same time, which satisfies cycli...
C
The Greedy algorithm, which does not consider any global optimization targets, performs the worst compared to LLD and LPA. Taking global optimization targets into consideration leads to a significant improvement in performance, with completion rates improving by 5%∼20% and revenue increasing by 2%∼...
The LPA algorithm is a reinforcement learning-based approach [6]. We first adopt SARSA [6] to learn the expected long-term revenue of each grid in each period. Based on these expected revenues, we dispatch taxis to passengers using the same optimization formulation as Eqn. (13), with the exception that we replace A(i, j)...
Problem Statement. To address the taxi dispatching task, we learn a real-time dispatching policy based on historical passenger requests. At every timestamp τ, we use this policy to dispatch available taxis to current passengers, with the aim of maximizing the total revenue of all taxis in the long run. To...
Efficient taxi allocation is crucial for the passenger transportation services in smart cities. To address this challenge, we leverage the data available in CityNet and present benchmarks for the taxi dispatching task. In this task, operators are responsible for dispatching available taxis to waiting passengers in rea...
Our experimental results demonstrate that LPA outperforms LLD in most cases. This can be attributed to the fact that LPA optimizes the expected long-term revenues at each dispatching round, while LLD only focuses on the immediate reward. As a result, LPA is better suited for maximizing the total revenue of the system ...
D
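The SARSA value estimates used by LPA above follow the standard on-policy temporal-difference update. This sketch treats a (grid, period) pair as the state; it is a generic illustration with assumed step-size and discount values, not the paper's code:

```python
from collections import defaultdict

def sarsa_update(Q, state, action, reward, next_state, next_action,
                 alpha=0.1, gamma=0.95):
    """One SARSA step: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a)).
    Here Q(s,a) estimates the expected long-term revenue of dispatching
    action a from state s (e.g., a grid cell in a given time period)."""
    td_target = reward + gamma * Q[(next_state, next_action)]
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
    return Q
```

After training, the learned values can replace the immediate rewards in the dispatching optimization, which is exactly how LPA differs from the myopic LLD baseline.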
One can immediately expect that, analogous to general mean-variance estimators with a Gaussian prediction interval, this procedure does not give optimal intervals for data sets that do not follow a normal distribution. One of the consequences is that this model might suffer from the validity problems discussed in Secti...
The idea behind deep ensembles lakshminarayanan2017simple is the same as for any ensemble technique: training multiple models to obtain a better and more robust prediction. The loss functions of most (deep) models have multiple local minima and by aggregating multiple models one hopes to take into account all these mi...
The choice of data sets in this comparative study was very broad and no specific properties were taken into account a priori. After comparing the results of the different models, it did become apparent that certain assumptions or properties can have a major influence on the performance of the models. The main examples ...
For each of the selected models, Fig. 4 shows the best five models in terms of average width, excluding those that do not (approximately) satisfy the coverage constraint (2). This figure shows that there is quite some variation in the models. There is not a clear best choice. Because on most data sets the models produc...
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat...
A
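For mean-variance deep ensembles as in lakshminarayanan2017simple, the per-model Gaussian predictions are typically combined as a uniform mixture. A minimal sketch of that aggregation (assuming each model outputs a predictive mean and variance):

```python
import numpy as np

def aggregate_ensemble(means, variances):
    """Combine per-model Gaussian predictions (mean_i, var_i) into a single
    mixture mean and variance: var = E[var_i] + Var over i of mean_i."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    mu = means.mean(axis=0)
    # Law of total variance: average aleatoric term plus disagreement term.
    var = variances.mean(axis=0) + (means ** 2).mean(axis=0) - mu ** 2
    return mu, var
```

The disagreement term is what lets the ensemble widen its intervals where the individual models land in different local minima.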
In particular, inspired by the growing trend of treating MIDI music as a “language” in deep generative models for symbolic music \parencitehuang2018music,payne2019musenet,huang2020pop,musemorphose,musecoco, we employ a Transformer-based network pre-trained by a self-supervised training strategy called “masked language ...
\textcite musicbert presented MusicBERT, a PTM tailored for symbolic MIDI data. MusicBERT was trained on a non-public dataset of over one million multi-track MIDI pieces. The authors showcased the efficacy of MusicBERT by applying it to two generative music tasks, melody completion and accompaniment suggestion, and two ...
Despite the fame of BERT, we are aware of only two publications that employ BERT-like PTMs for symbolic music classification \parencitetsai20ismir,musicbert. The first work \parencitetsai20ismir deals with optically scanned sheet music, while we use MIDI inputs.
To our best knowledge, the work of \textcitetsai20ismir represents the first attempt to use PTMs for symbolic-domain music classification. They showed that either a RoBERTa-based Transformer encoder PTM \parenciteroberta or a GPT2-based Transformer encoder PTM \parencitegpt2 outperform non-pre-trained baselines for a ...
Moreover, we consider two types of MIDI data and compare the performance of the resulting PTMs. Specifically, following \textciteoore2018time, we differentiate two types of MIDI files, MIDI scores, which are musical scoresheets rendered directly into MIDI with no dynamics and exactly according to the written metrical g...
B
Otherwise, F has a leaf v ∈ A with a neighbor u ∈ B. We can assign c(v) = a_2, c(u) = b_2...
The linear running time follows directly from the fact that we compute c𝑐citalic_c only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen...
To obtain the total running time we first note that each of the initial steps – obtaining (R, B, Y) from Corollary 2.11 (e.g. using Algorithm 1), contraction of F into F′, and findi...
Now, observe that if the block to the left is also of type A, then a respective block from Z(S) is (0, 1, 0) – and when we add the backward carry (0, 0, 1) to it, we obtain the forward carry to the rightmost block. And regardless of the value of t...
Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As it was stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the...
A