context
stringlengths
250
5.66k
A
stringlengths
250
6.9k
B
stringlengths
250
8.2k
C
stringlengths
250
7.79k
D
stringlengths
250
4.85k
label
stringclasses
4 values
(−1)a(b−1−a)[d3d⁢x3xmF(a,b;c;z)+3d2d⁢x2xmdd⁢xF(a,b;c;z)\displaystyle(-1)^{a}{b-1\choose-a}\Big{[}\frac{d^{3}}{dx^{3}}x^{m}F(a,b;c;z)+% 3\frac{d^{2}}{dx^{2}}x^{m}\frac{d}{dx}F(a,b;c;z)( - 1 ) start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT ( binomial start_ARG italic_b - 1 end_ARG start_ARG - italic_a end_ARG ) [ div...
+xmd2d⁢x2F(a,b;c;z)];\displaystyle\quad\quad+x^{m}\frac{d^{2}}{dx^{2}}F(a,b;c;z)\Big{]};+ italic_x start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT divide start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_d italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG italic_F ( ...
2\frac{d}{dx}x^{m}\frac{d}{dx}F(a,b;c;z)( - 1 ) start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT ( binomial start_ARG italic_b - 1 end_ARG start_ARG - italic_a end_ARG ) [ divide start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_d italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCR...
+3dd⁢xxmd2d⁢x2F(a,b;c;z)+xmd3d⁢x3F(a,b;c;z)].\displaystyle\quad\quad+3\frac{d}{dx}x^{m}\frac{d^{2}}{dx^{2}}F(a,b;c;z)+x^{m}% \frac{d^{3}}{dx^{3}}F(a,b;c;z)\Big{]}.+ 3 divide start_ARG italic_d end_ARG start_ARG italic_d italic_x end_ARG italic_x start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT divide start_ARG italic...
(−1)a(b−1−a)[d3d⁢x3xmF(a,b;c;z)+3d2d⁢x2xmdd⁢xF(a,b;c;z)\displaystyle(-1)^{a}{b-1\choose-a}\Big{[}\frac{d^{3}}{dx^{3}}x^{m}F(a,b;c;z)+% 3\frac{d^{2}}{dx^{2}}x^{m}\frac{d}{dx}F(a,b;c;z)( - 1 ) start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT ( binomial start_ARG italic_b - 1 end_ARG start_ARG - italic_a end_ARG ) [ div...
C
19S:=assign𝑆absentS:=italic_S := Composition of S𝑆Sitalic_S and the MSLP of hℓ−1⁢hr−1superscriptsubscriptℎℓ1superscriptsubscriptℎ𝑟1h_{\ell}^{-1}h_{r}^{-1}italic_h start_POSTSUBSCRIPT roman_ℓ end_POSTSUBSCRIPT start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT italic_h start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT start...
The following lemma shows how to compute the matrices of the preprocessing step. Recall that ω𝜔\omegaitalic_ω is a primitive element of 𝔽q=𝔽pfsubscript𝔽𝑞subscript𝔽superscript𝑝𝑓\mathbb{F}_{q}=\mathbb{F}_{p^{f}}blackboard_F start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT = blackboard_F start_POSTSUBSCRIPT italic_...
The first step of the algorithm is the one-off computation of T2subscript𝑇2T_{2}italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT from the LGO standard generators of SL⁢(d,q)SL𝑑𝑞\textnormal{SL}(d,q)SL ( italic_d , italic_q ). The length and memory requirement of an MSLP for this step is as follows.
First we describe the preprocessing phase during which we initialize the memory of the MSLP to encode particular matrices which will be useful for expressing diagonal matrices as words independently of the given diagonal matrix. The constructed matrices can be reused for all diagonal matrices, and so further diagonal m...
We now compute upper bounds for the length and memory quota of an MSLP for expressing an arbitrary diagonal matrix h∈SL⁢(d,q)ℎSL𝑑𝑞h\in\textnormal{SL}(d,q)italic_h ∈ SL ( italic_d , italic_q ) as a word in the LGO generators, i.e. the computation phase of the algorithm.
C
The key to approximate (25) is the exponential decay of P⁢w𝑃𝑤Pwitalic_P italic_w, as long as w∈H1⁢(𝒯H)𝑤superscript𝐻1subscript𝒯𝐻w\in{H^{1}({\mathcal{T}_{H}})}italic_w ∈ italic_H start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT ( caligraphic_T start_POSTSUBSCRIPT italic_H end_POSTSUBSCRIPT ) has local support. That al...
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions with support weakly dependent of the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov...
Solving (22) efficiently is crucial for the good performance of the method, since it is the only large dimensional system of (21), in the sense that its size grows with order of h−dsuperscriptℎ𝑑h^{-d}italic_h start_POSTSUPERSCRIPT - italic_d end_POSTSUPERSCRIPT.
It is essential for the performing method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if w𝑤witalic_w has local support, so instead of solving the problems in the whole domain it would be reasonable to solve it locally using patches of elements. We note that the ide...
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ...
C
We think Alg-A is better in almost every aspect. This is because it is essentially simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity O⁢(n)𝑂𝑛O(n)italic_O ( italic_n ) than the others:
Alg-A computes at most n𝑛nitalic_n candidate triangles (proof is trivial and omitted) whereas Alg-CM computes at most 5⁢n5𝑛5n5 italic_n triangles (proved in [8]) and so as Alg-K. (by experiment, Alg-CM and Alg-K have to compute roughly 4.66⁢n4.66𝑛4.66n4.66 italic_n candidate triangles.)
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on P𝑃Pitalic_P’s vertices and (2) searching the next candidate from a given one is much easier – the code length for this is 1:7 in Alg-A and in Alg-CM.
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases.
A
Single Tweet Model Settings. For the evaluation, we shuffle the 180 selected events and split them into 10 subsets which are used for 10-fold cross-validation (we make sure to include near-balanced folds in our shuffle). We implement the 3 non-neural network models with Scikit-learn444scikit-learn.org/. Furthermore, ne...
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure, when building our approach for early rumor detection with our extended dataset, and we provide a deep analys...
We tested all models by using 10-fold cross validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; T⁢S−S⁢V⁢M𝑇𝑆𝑆𝑉𝑀TS-SVMitalic_T italic_S - italic_S italic_V it...
. As shown in Table 5, CreditScore is the best feature in overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit afte...
Single Tweet Classification Results. The experimental results of are shown in Table 2. The best performance is achieved by the CNN+LSTM model with a good accuracy of 81.19%. The non-neural network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy and the other two non-neural models are eve...
D
\prime}\left(u\right)=0roman_lim start_POSTSUBSCRIPT italic_u → ∞ end_POSTSUBSCRIPT roman_ℓ ( italic_u ) = roman_lim start_POSTSUBSCRIPT italic_u → ∞ end_POSTSUBSCRIPT roman_ℓ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_u ) = 0), a β𝛽\betaitalic_β-smooth function, i.e. its derivative is β𝛽\betaitalic_β-Lipsh...
loss function (Assumption 1) with an exponential tail (Assumption 3), any stepsize η<2⁢β−1⁢σmax−2⁢(𝐗 )𝜂2superscript𝛽1superscriptsubscript𝜎2𝐗 \eta<2\beta^{-1}\sigma_{\max}^{-2}\left(\text{$\mathbf{X}$ }\right)italic_η < 2 italic_β start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT italic_σ start_POSTSUBSCRIPT roman_max ...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
Assumption 1 includes many common loss functions, including the logistic, exp-loss222The exp-loss does not have a global β𝛽\betaitalic_β smoothness parameter. However, if we initialize with η<1/ℒ⁢(𝐰⁢(0))𝜂1ℒ𝐰0\eta<1/\mathcal{L}(\mathbf{w}(0))italic_η < 1 / caligraphic_L ( bold_w ( 0 ) ) then it is straightforward to...
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very different on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a...
C
To overcome this issue, we set up a threshold 72 hours. We only consider the first candidate within 72 hours before or after the beginning time of the event as timestamp of human confirming rumors. On average the human editors of Snopes need 25.49 hours to verify the rumors and post it. Our system already achieves 87% ...
The time period of a rumor event is sometimes fuzzy and hard to define. One reason is a rumor may have been triggered for a long time and kept existing, but it did not attract public attention. However it can be triggered by other events after a uncertain time and suddenly spreads as a bursty event. E.g., a rumor999htt...
At 18:22 CEST, the first tweet was posted. There might be some certain delay, as we retrieve only tweets in English and the very first tweets were probably in German. The tweet is ”Sadly, i think there’s something terrible happening in #Munich #Munchen. Another Active Shooter in a mall. #SMH”.
the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, that could capture more hidden meaningful signal than only enquiries to debunk rumor...
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet le...
B
\mathcal{C}_{k})\mathsf{f^{*}}_{m}(\bar{a})italic_s italic_c italic_o italic_r italic_e ( over¯ start_ARG italic_a end_ARG ) = ∑ start_POSTSUBSCRIPT italic_m ∈ italic_M end_POSTSUBSCRIPT italic_P ( caligraphic_C start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT | italic_e , italic_t ) italic_P ( caligraphic_T start_POSTSU...
We propose two sets of features, namely, (1) salience features (taking into account the general importance of candidate aspects) that mainly mined from Wikipedia and (2) short-term interest features (capturing a trend or timely change) that mined from the query logs. In addition, we also leverage click-flow relatednes...
to add additional features from ℳ1superscriptℳ1\mathcal{M}^{1}caligraphic_M start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT. The feature vector of ℳL⁢R2superscriptsubscriptℳ𝐿𝑅2\mathcal{M}_{LR}^{2}caligraphic_M start_POSTSUBSCRIPT italic_L italic_R end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT consists of ...
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, S⁢V⁢Ma⁢l⁢l𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT, will all features, gives a rather stable performance for both NDCG and Recall...
A
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular.
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
with Bernoulli and contextual linear Gaussian reward functions [Kaufmann et al., 2012; Garivier and Cappé, 2011; Korda et al., 2013; Agrawal and Goyal, 2013b], as well as for context-dependent binary rewards modeled with the logistic reward function Chapelle and Li [2011]; Scott [2015] —Appendix A.3.
The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015].
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular.
C
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measuremtents after the meals is within this range, while patient 12 has only t...
Table 2 gives an overview of the number of different measurements that are available for each patient.111For patient 9, no data is available. The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14.
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal...
The insulin intakes tend to be more in the evening, when basal insulin is used by most of the patients. The only difference happens to patient 10 and 12 whose intakes are earlier at day. Further, patient 12 takse approx. 3 times the average insulin dose of others in the morning.
In order to have a broad overview of different patients’ patterns over the one month period, we first show the figures illustrating measurements aggregated by days-in-week. For consistency, we only consider the data recorded from 01/03/17 to 31/03/17 where the observations are most stable.
D
Table 5: Details regarding the hardware and software specifications used throughout our evaluation of computational efficiency. The system ran under the Debian 9 operating system and we minimized usage of the computer during the experiments to avoid interference with measurements of inference speed.
Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity and the superscript ††{}^{\dagger}start_FLOATSUPERSCRIPT † end_FLOATSUPERSCRIPT represents pre-trai...
Table 5: Details regarding the hardware and software specifications used throughout our evaluation of computational efficiency. The system ran under the Debian 9 operating system and we minimized usage of the computer during the experiments to avoid interference with measurements of inference speed.
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba...
We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In case a reimplementation was not possible, we faithfully estimated a lowe...
D
For example, the path decomposition ({u,w,x},{u,v,x},{v,y,z})𝑢𝑤𝑥𝑢𝑣𝑥𝑣𝑦𝑧(\{u,w,x\},\{u,v,x\},\{v,y,z\})( { italic_u , italic_w , italic_x } , { italic_u , italic_v , italic_x } , { italic_v , italic_y , italic_z } ) for graph H𝐻Hitalic_H can be represented as a pd-marking scheme as illustrated in Figure 3 (for...
The locality number is rather new and we shall discuss it in more detail. A word is k𝑘kitalic_k-local if there exists an order of its symbols such that, if we mark the symbols in the respective order (which is called a marking sequence), at each stage there are at most k𝑘kitalic_k contiguous blocks of marked symbols ...
In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into grap...
We use Gαsubscript𝐺𝛼G_{\alpha}italic_G start_POSTSUBSCRIPT italic_α end_POSTSUBSCRIPT as a unique graph representation for words and whenever we talk about a path decomposition for α𝛼\alphaitalic_α, we actually refer to a path decomposition of Gαsubscript𝐺𝛼G_{\alpha}italic_G start_POSTSUBSCRIPT italic_α end_POSTS...
Both the locality number of a word and the pathwidth of a graph is defined via markings. In order to avoid confusion, we therefore use different terminology to distinguish between these two concepts (see also the terminology defined in Section 2.2): The markings for words are called marking sequences, while the marking...
D
In their article Luo et al.[79] utilized quality assessment to remove low quality heartbeats, two median filters for removing power line noise, high-frequency noise and baseline drift. Then, they used a derivative-based algorithm to detect R-peaks and time windows to segment each heartbeat.
Most of the methods convert PCGs to images using spectrogram techniques. Rubin et al.[108] used a logistic regression hidden semi-markov model for segmenting the start of each heartbeat which then were transformed into spectrograms using Mel-Frequency Cepstral Coefficients (MFCCs).
Modified frequency slice WT was used to calculate the spectrogram of each heartbeat and a SDAE for extracting features from the spectrogram. Then, they created a classifier for four arrhythmias from the encoder of the SDAE and a softmax, achieving an overall accuracy of 97.5%.
Each spectrogram was classified into normal or abnormal using a two layer CNN which had a modified loss function that maximizes sensitivity and specificity, along with a regularization parameter. The final classification of the signal was the average probability of all segment probabilities.
In[111] the authors used Adaboost which was fed with spectrogram features from PCG and a CNN which was trained using cardiac cycles decomposed into four frequency bands. Finally, the outputs of the Adaboost and the CNN were combined to produce the final classification result using a simple decision rule.
B
Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using...
Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster...
Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using...
have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control. Our video models of Atari en...
Oh et al. (2015) and Chiappa et al. (2017) show that learning predictive models of Atari 2600 environments is possible using appropriately chosen deep learning architectures. Impressively, in some cases the predictions maintain low L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error over timespans...
D
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data making them scalable and genera...
This is achieved with the use of multilayer networks, that consist of million parameters [1], trained with backpropagation [2] on large amount of data. Although deep learning is mainly used in biomedical images there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ...
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data making them scalable and genera...
For the purposes of this paper we use a variation of the database111https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition in which the EEG signals are split into segments with 178178178178 samples each, resulting in a balanced dataset that consists of 11500115001150011500 EEG signals.
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of a S2I is whether it consists of trainable para...
A
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or the autonomous locomotion mode transition approach, where robots autonomously swap the modes predicated on pre-set criteria [8]. However, the execution of supervised con...
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ...
The Cricket robot, as referenced in [20], forms the basis of this study, being a fully autonomous track-legged quadruped robot. Its design specificity lies in embodying fully autonomous behaviors, and its locomotion system showcases a unique combination of four rotational joints in each leg, which can be seen in Fig. 3...
A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measure...
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or the autonomous locomotion mode transition approach, where robots autonomously swap the modes predicated on pre-set criteria [8]. However, the execution of supervised con...
C
We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem. Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size.
We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem. Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size.
All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat...
Second, our model considers the size of advice and its impact on the algorithm’s performance, which is the main focus of the advice complexity field. For all problems we study, we parameterize advice by its size, i.e., we allow advice of a certain size k𝑘kitalic_k. Specifically, the advice need not necessarily encode...
We first show how to find a Pareto-optimal strategy, when the advice encodes the hidden value, and thus can have unbounded size. Moreover, we study the competitiveness of the problem with only k𝑘kitalic_k bits of advice, for some fixed k𝑘kitalic_k, and
D
Note that this algorithm can be massively parallelized since it naturally follows the Big Data programming model MapReduce [Dean & Ghemawat, 2008], giving the framework the capability of effectively processing very large volumes of data. In Algorithm 2 is shown the training process described earlier. Note that the line...
Otherwise, it can be omitted since, during classification, g⁢v𝑔𝑣gvitalic_g italic_v can be dynamically computed based on the frequencies stored in the dictionaries. It is worth mentioning that this algorithm could be easily parallelized by following the MapReduce model as well —for instance, all training documents co...
It is worth mentioning that with this simple mechanism it would be fairly straightforward to justify when needed, the reasons of the classification by using the values of confidence vectors in the hierarchy, as will be illustrated with a visual example at the end of Section 5. Additionally, the classification is also i...
Note that with this simple training method there is no need neither to store all documents nor to re-train from scratch every time a new training document is added, making the training incremental101010Even new categories could be dynamically added.. Additionally, there is no need to compute the document-term matrix be...
This brief subsection describes the training process, which is trivial. Only a dictionary of term-frequency pairs is needed for each category. Then, during training, dictionaries are updated as new documents are processed —i.e. unseen terms are added and frequencies of already seen terms are updated.
A
Stochastic gradient descent (SGD) and its variants (Robbins and Monro, 1951; Bottou, 2010; Johnson and Zhang, 2013; Zhao et al., 2018, 2020, 2021) have been the dominating optimization methods for solving (1). In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameter...
Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model trai...
GMC can be easily implemented on the all-reduce distributed framework in which each worker sends the sparsified vector 𝒞⁢(𝐞t+12,k)𝒞subscript𝐞𝑡12𝑘\mathcal{C}({\bf e}_{t+\frac{1}{2},k})caligraphic_C ( bold_e start_POSTSUBSCRIPT italic_t + divide start_ARG 1 end_ARG start_ARG 2 end_ARG , italic_k end_POSTSUBSCRIPT )...
With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training. These methods can be implemented on distributed frameworks like parameter server and al...
Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-red...
C
Using backpropagation [2] the gradient of each weight w.r.t. the error of the output is efficiently calculated and passed to an optimization function such as Stochastic Gradient Descent or Adam [3] which updates the weights making the output of the network converge to the desired output. DNNs were successful in utilizi...
Previous literature has also demonstrated the increased biological plausibility of sparseness in artificial neural networks [24]. Spike-like sparsity on activation maps has been thoroughly researched on the more biologically plausible rate-based network models [25], but it has not been thoroughly explored as a design o...
Previous literature addressing this problem has focused on weight pruning from trained DNNs [11] and weight pruning during training [12]. Pruning minimizes the model capacity for use in environments with low computational capabilities, or low inference time requirements and helps reducing co-adaptation between neurons,...
In neural networks sparseness can be applied on the connections between neurons, or in the activation maps [14]. Although sparseness in the activation maps is usually enforced in the loss function by adding a L1,2subscript𝐿12L_{1,2}italic_L start_POSTSUBSCRIPT 1 , 2 end_POSTSUBSCRIPT regularization or Kullback-Leibler...
After training, we consider 𝜶(i)superscript𝜶𝑖\bm{\alpha}^{(i)}bold_italic_α start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT (which is calculated during the feed-forward pass from Eq. 11) and 𝒘(i)superscript𝒘𝑖\bm{w}^{(i)}bold_italic_w start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT (which is calculat...
B
When UAVs need communications, and the signal to noise rate (SNR) mainly determines the quality of service. UAVs’ power and inherent noise are interferences for each other. Since there are hundreds of UAVs in the system, each UAV is unable to sense all the other UAVs’ power explicitly, but only sense and measure aggreg...
Supposing that a UAV covers a round area below it with a field angle θ𝜃\thetaitalic_θ as shown in Fig. 1 (b). Thus the coverage of UAVisubscriptUAV𝑖{\rm UAV}_{i}roman_UAV start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is Di=π⁢(hi⁢tan⁢θ)2subscript𝐷𝑖𝜋superscriptsubscriptℎ𝑖tan𝜃2D_{i}=\pi(h_{i}{\rm tan}\theta)^{2}it...
Coverage is another factor which determines the performance of each UAV. As presented in Fig. 1 (c), the altitude of UAV plays an important role in coverage adjusting. The higher altitude it is, the larger coverage size a UAV has. A large coverage size means a substantial opportunity of supporting more users, but a hi...
In order to support as many users as possible, UAVs are required to enlarge coverage size, which is equal to enlarge the coverage proportion in the mission area. Higher altitude indicates larger coverage size as shown in Fig. 1 (c). The utility of coverage size is denoted as
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR and coverage size are key points to decide the performance of a UAV network [6]. Respectively, power control determines the signal to energy consumption and noise ratio (SNR) ...
B
{z\phi}\hat{\mathbf{z}})+\frac{1}{\mu_{0}}\mathbf{B}\cdot\nabla f- [ italic_m start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT italic_r ∇ ⋅ ( italic_n bold_v ) + italic_ρ bold_v ⋅ ∇ ( italic_r italic_v start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ) ] - ∇ ⋅ ( itali...
\mathbf{r}}+r\pi_{z\phi}\hat{\mathbf{z}}-\frac{f}{\mu_{0}}\mathbf{B}\bigg{)}- ∇ ⋅ ( italic_ρ italic_v start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT italic_r bold_v + italic_r italic_π start_POSTSUBSCRIPT italic_r italic_ϕ end_POSTSUBSCRIPT over^ start_ARG bold_r end_ARG + italic_r italic_π start_POSTSUBSCRIPT italic_z...
1}{r^{2}}\frac{\partial}{\partial z}(r^{2}\pi_{z\phi})=\frac{1}{r}\nabla\cdot(% r\pi_{r\phi}\hat{\mathbf{r}}+r\pi_{z\phi}\hat{\mathbf{z}})( ∇ ⋅ under¯ start_ARG bold_italic_π end_ARG ) start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT = divide start_ARG 1 end_ARG start_ARG italic_r end_ARG divide start_ARG ∂ end_ARG start...
\boldsymbol{\Gamma}over˙ start_ARG italic_P end_ARG start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT = divide start_ARG ∂ end_ARG start_ARG ∂ italic_t end_ARG ( ∫ ( italic_ρ italic_r italic_v start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ) italic_d italic_V ) = - ∫ ( italic_ρ italic_v start_POSTSUBSCRIPT italic_ϕ end_PO...
{z\phi}\hat{\mathbf{z}})+\frac{1}{\mu_{0}}\mathbf{B}\cdot\nabla f- [ italic_m start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT italic_r ∇ ⋅ ( italic_n bold_v ) + italic_ρ bold_v ⋅ ∇ ( italic_r italic_v start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ) ] - ∇ ⋅ ( itali...
A
Let r𝑟ritalic_r be the relation on 𝒞Rsubscript𝒞𝑅\mathcal{C}_{R}caligraphic_C start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT given to the left of Figure 12. Its abstract lattice ℒrsubscriptℒ𝑟\mathcal{L}_{r}caligraphic_L start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT is represented to the right.
For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to A⁢→⁡B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B or B⁢→⁡A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A.
The tuples t1subscript𝑡1t_{1}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, t4subscript𝑡4t_{4}italic_t start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT represent a counter-example to B⁢C⁢→⁡A𝐵𝐶→𝐴BC\operatorname{\rightarrow}Aitalic_B italic_C → italic_A for g1subscript𝑔1g_{1}italic_g start_POSTSUBSCRIPT 1 end_POSTSUBSCRI...
If no confusion is possible, the subscript R𝑅Ritalic_R will be omitted, i.e., we will use ≤,∧,∨\leq,\operatorname{\land},\operatorname{\lor}≤ , ∧ , ∨ instead of ≤R,∧R,∨Rsubscript𝑅subscript𝑅subscript𝑅\leq_{R},\operatorname{\land}_{R},\operatorname{\lor}_{R}≤ start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT , ∧ start_P...
First, remark that both A⁢→⁡B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B and B⁢→⁡A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A are possible. Indeed, if we set g=⟨b,a⟩𝑔𝑏𝑎g=\langle b,a\rangleitalic_g = ⟨ italic_b , italic_a ⟩ or g=⟨a,1⟩𝑔𝑎1g=\langle a,1\rangleitalic_g = ⟨ italic_a , 1 ⟩, then r⊧gA⁢→⁡...
A
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trails and averaged them. We have evaluated Dropout-DQN algorithm on CARTPOLE problem from the Class...
A fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers between the input layer and the first hidden layer and between the two hidden layers. ADAM optimizer for the minimization[25].
The results in Figure 3 show that using DQN with different Dropout methods result in better-preforming policies and less variability as the reduced standard deviation between the variants indicate to. In table 1, Wilcoxon Sign-Ranked test was used to analyze the effect of Variance before applying Dropout (DQN) and aft...
It’s the original Dropout method. It was introduced in 2012. Standard Dropout provides a simple technique for avoiding over-fitting in fully connected neural networks[12]. During each training phase, each neuron is excluded from the network with a probability p. Once trained, in the testing phase the full network is u...
For the experiments, fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers between the input layer and the first hidden layer and between the two hidden layers. To minimize the DQN loss, ADAM optimizer was used[25].
D
\log(\hat{p})\right.\left.+(1-\alpha)\hat{p}^{\gamma}(1-p)\log(1-\hat{p})% \right).\end{split}start_ROW start_CELL roman_FL ( italic_p , over^ start_ARG italic_p end_ARG ) = - ( italic_α ( 1 - over^ start_ARG italic_p end_ARG ) start_POSTSUPERSCRIPT italic_γ end_POSTSUPERSCRIPT italic_p roman_log ( over^ start_ARG ital...
Two popular overlap-based measures used to evaluate segmentation performance are the Sørensen–Dice coefficient (also known as the Dice coefficient) and the Jaccard index (also known as the intersection over union or IoU). Given two sets 𝒜𝒜\mathcal{A}caligraphic_A and ℬℬ\mathcal{B}caligraphic_B, these metrics are def...
In medical image segmentation works, researchers have converged toward using classical cross-entropy loss functions along with a second distance or overlap based functions. Incorporating domain/prior knowledge (such as coding the location of different organs explicitly in a deep model) is more sensible in the medical d...
A significant problem in image segmentation (particularly in medical images) is to overcome class imbalance for which overlap measure based methods have shown reasonably good performance in overcoming the imbalance. In Section 5, we summarize the approaches which use new loss functions, particularly for medical image s...
Another popular loss function for image segmentation tasks is based on the Dice coefficient, which is essentially a measure of overlap between two samples and is equivalent to the F1 score. This measure ranges from 0 to 1, where a Dice coefficient of 1 denotes perfect and complete overlap. The Dice coefficient (DC) is ...
D
At each level l𝑙litalic_l, two vertices i𝑖iitalic_i and j𝑗jitalic_j are clustered together in a new vertex k𝑘kitalic_k. Then, a standard pooling operation (average or max pool) is applied to compute the node feature 𝐱k(l+1)subscriptsuperscript𝐱𝑙1𝑘{\mathbf{x}}^{(l+1)}_{k}bold_x start_POSTSUPERSCRIPT ( italic_l +...
Similarly to pooling operations in Convolutional Neural Networks (CNNs) that compute local summaries of neighboring pixels, we propose a pooling procedure that provides an effective coverage of the whole graph and reduces the number of nodes approximately by a factor of 2. This can be achieved by partitioning nodes in ...
Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d) the edges of the Laplacians at coarsening level 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP.
Second, GRACLUS pooling adds “fake” nodes so that they can be exactly halved at each pooling step; this not only injects noisy information in the graph signal, but also increases the computational complexity in the GNN. Finally, clustering depends on the initial ordering of the nodes, which hampers stability and reprod...
From Fig. 9(b) we notice that the graphs 𝐀(1)superscript𝐀1{\mathbf{A}}^{(1)}bold_A start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT and 𝐀(2)superscript𝐀2{\mathbf{A}}^{(2)}bold_A start_POSTSUPERSCRIPT ( 2 ) end_POSTSUPERSCRIPT in GRACLUS have additional nodes that are disconnected. As discussed in Sect. V, these are ...
C
Random forests and neural networks share some similar characteristics, such as the ability to learn arbitrary decision boundaries; however, both methods have different advantages. Random forests are based on decision trees. Various tree models have been presented – the most well-known are C4.5 (Quinlan, 1993) and CART ...
In contrast to neural networks, random forests are very robust to overfitting due to their ensemble of multiple decision trees. Each decision tree is trained on randomly selected features and samples. Random forests have demonstrated remarkable performance in many domains (Fernández-Delgado et al., 2014).
RF: Random forest (Breiman, 2001) is an ensemble-based method consisting of multiple decision trees. Each decision tree is trained on a different randomly selected subset of features and samples. The classifier follows the same overall setup, i.e., 500500500500 decision trees and a maximum depth of ten.
Decision trees learn rules by splitting the data. The rules are easy to interpret and additionally provide an importance score of the features. Random forests (Breiman, 2001) are an ensemble method consisting of multiple decision trees, with each decision tree being trained using a random subset of samples and features...
Random forests are trained with 500500500500 decision trees, which are commonly used in practice (Fernández-Delgado et al., 2014; Olson et al., 2018). The decision trees are constructed up to a maximum depth of ten. For splitting, the Gini Impurity is used and N𝑁\sqrt{N}square-root start_ARG italic_N end_ARG features ...
C
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ...
step with α→∞→𝛼\alpha\rightarrow\inftyitalic_α → ∞ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy π∗superscript𝜋\pi^{*}italic_π start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT within K=H𝐾𝐻K=Hitalic_K = italic_H episodes and hence equivalently induces...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt...
A
Machine learning is a key technology in the 21st century and the main contributing factor for many recent performance boosts in computer vision, natural language processing, speech recognition and signal processing. Today, the main application domain and comfort zone of machine learning applications is the “virtual wor...
However, in real-world applications the computing infrastructure during the operation phase is typically limited, which effectively rules out most of the current resource-hungry machine learning approaches. There are several key challenges—illustrated in Figure 1—which have to be jointly considered to facilitate machin...
Machine learning is a key technology in the 21st century and the main contributing factor for many recent performance boosts in computer vision, natural language processing, speech recognition and signal processing. Today, the main application domain and comfort zone of machine learning applications is the “virtual wor...
We furthermore point out that hardware properties and the corresponding computational efficiency form a large fraction of resource efficiency. This highlights the need to consider particular hardware targets when searching for resource-efficient machine learning models.
However, we are currently witnessing a transition of machine learning moving into “the wild”, where most prominent examples are autonomous navigation for personal transport and delivery services, and the Internet of Things (IoT). Evidently, this trend opens several real-world challenges for machine learning engineers.
D
}\}+\{v_{4},v_{5}\}+\{v_{5},v_{0}\},{ italic_v start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_v start_POST...
ω0⁢ is the degree-1 homology class induced bysubscript𝜔0 is the degree-1 homology class induced by\displaystyle\omega_{0}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the degree-1 homology class induced by
ω2⁢ is the degree-1 homology class induced bysubscript𝜔2 is the degree-1 homology class induced by\displaystyle\omega_{2}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT is the degree-1 homology class induced by
and seeks the infimal r>0𝑟0r>0italic_r > 0 such that the map induced by ιrsubscript𝜄𝑟\iota_{r}italic_ι start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at n𝑛nitalic_n-th homology level annihilates the fundamental class [M]delimited-[]𝑀[M][ italic_M ] of M𝑀Mitalic_M. This infimal value defines FillRad⁢(M)FillRad𝑀\m...
ω1⁢ is the degree-1 homology class induced bysubscript𝜔1 is the degree-1 homology class induced by\displaystyle\omega_{1}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT is the degree-1 homology class induced by
B
After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ...
After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ...
We give extra support to the user by providing the results of 5 quality measures for each representative projection: neighborhood hit (NH), trustworthiness (T), continuity (C), normalized stress (S), and Shepard diagram correlation (SDC), accompanied by the quality metrics average (QMA). They are shown as a grayscale h...
Significantly-different t-SNE projections can be generated from the same data set, due to its well-known sensitivity to hyper-parameter settings [14]. We propose to support users in finding a good t-SNE projection for their data by using visual exploration, as follows. A Grid Search mode (Figure 1(a)) initiates a syste...
Figure 2: Hyper-parameter exploration (presented in a dialog at the beginning of an analytical session), with 25 representative projections from a pool of 500 alternatives obtained through a grid search. Five quality metrics, plus their Quality Metrics Average (QMA), are also displayed to support the visual analysis. ...
D
Another popular option of creating new solutions relies on stigmergy, namely, an indirect communication and coordination between the different solutions or agents used to create new solutions. This communication is usually done using an intermediate structure, with information obtained from the different solutions, us...
Table 31 lists the reviewed algorithms that employ stigmergy when creating new solutions. This is a reduced list when comparing with preceding categories, with the majority of the algorithms relying on Swarm Intelligence among insects (similarly to ACO). However, other algorithms inspired in physics have also a stigmer...
It has not been until relatively recent times that the community has embraced the need for arranging the myriad of existing bio-inspired algorithms and classifying them under principled, coherent criteria. In 2013, [74] presented a classification of meta-heuristic algorithms as per their biological inspiration that di...
The combining method can be specific for the problem to be solved or instead, be conceived for a more general family of problems. In fact, combining methods are usually devised to be adaptable to many different solution representations. As mentioned before, the most popular algorithm in this category is GA [98]. Howeve...
When inspecting the influential approaches from a higher perspective, two are the categories whose algorithms have been more frequently used to create new nature-based algorithms. The first one is Swarm Intelligence: about 14% of all studied nature-inspired algorithms are variations of SI algorithms (PSO, ACO, and ABC...
A
where φ⁢(⋅)𝜑⋅\varphi(\cdot)italic_φ ( ⋅ ) is certain activation function, A^=D~−12⁢A~⁢D~−12^𝐴superscript~𝐷12~𝐴superscript~𝐷12\hat{A}=\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}over^ start_ARG italic_A end_ARG = over~ start_ARG italic_D end_ARG start_POSTSUPERSCRIPT - divide start_ARG 1 e...
Figure 1: Framework of AdaGAE. k0subscript𝑘0k_{0}italic_k start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update ...
To apply graph convolution on unsupervised learning, GAE is proposed [20]. GAE firstly transforms each node into latent representation (i.e., embedding) via GCN, and then aims to reconstruct some part of the input. GAEs proposed in [20, 29, 22] intend to reconstruct the adjacency via decoder while GAEs developed in [21...
Network embedding is a fundamental task for graph type data such as recommendation systems, social networks, etc. The goal is to map nodes of a given graph into latent features (namely embedding) such that the learned embedding can be utilized on node classification, node clustering, and link prediction.
(1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspires us to devise a novel architecture for dec...
B
The challenge here is to accurately probe the increments rate of the IPID value (caused by the packets from other sources not controlled by us), in order to be able to extrapolate the value that will have been assigned to our second probe from a real source IP. This allows us to infer if the spoofed packets incremente...
Measuring IPID increment rate. The traffic to the servers is stable and hence can be predicted, (Wessels et al., 2003). We validate this by sampling the IPID value at the servers which we use for running the test. One example evaluation of IPID sampling on one of the busiest servers is plotted in Figure 3. In this eva...
Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping and using requests/responses to Name servers and we apply the suitable test depending on the server that we identify on the tested networ...
There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th...
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the...
B
Two processing steps were applied to the data used by all models included in this paper. The first preprocessing step was to remove all samples taken for gas 6, toluene, because there were no toluene samples in batches 3, 4, and 5. Data was too incomplete for drawing meaningful conclusions. Also, with such data missin...
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first T−1𝑇1T-1italic_T - 1 batches are used for training, while the next unseen batch T𝑇Titalic_T is used for evaluation. When training the context network, subsequences of the training data a...
The first model in this domain [7] employed SVMs with one-vs-one comparisons between all classes. SVM classifiers project the data into a higher dimensional space using a kernel function and then find a linear separator in that space that gives the largest distance between the two classes compared while minimizing the ...
While SVMs are standard machine learning, NNs have recently turned out more powerful, so the first step is to use them on this task instead of SVMs. In the classification task, the networks are evaluated by the similarity between the odor class label (1-5) and the network’s output class label prediction given the unlab...
This paper also presents the NN ensemble created in the same way as with SVMs. In the NN ensemble, T−1𝑇1T-1italic_T - 1 skill networks are trained using one batch each for training. Each model is assigned a weight βisubscript𝛽𝑖\beta_{i}italic_β start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT equal to its accuracy on...
B
Now we can define the tables A(1)superscript𝐴1A^{(1)}italic_A start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT, A(2)superscript𝐴2A^{(2)}italic_A start_POSTSUPERSCRIPT ( 2 ) end_POSTSUPERSCRIPT and A(3)superscript𝐴3A^{(3)}italic_A start_POSTSUPERSCRIPT ( 3 ) end_POSTSUPERSCRIPT that our algorithm uses. Recall that for...
A⁢[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching M.assign𝐴𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi and x is a re...
A(1)⁢[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi(1) and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching Massignsuperscript𝐴1𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B∈...
A(2)⁢[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi(2) and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching M.assignsuperscript𝐴2𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B...
A⁢[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching M.assign𝐴𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B and x is a real num...
A
Let S𝑆Sitalic_S be a (completely) self-similar semigroup and let T𝑇Titalic_T be a finite or free semigroup. Then S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is (completely) self-similar. If furthermore S𝑆Sitalic_S is a (complete) automaton semigroup, then so is S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T.
By Corollaries 10 and 11, we have to look into idempotent-free automaton semigroups without length functions in order to find a pair of self-similar (or automaton) semigroups not satisfying the hypothesis of Theorem 6 (or 8), which would be required in order to either relax the hypothesis even further (possibly with a ...
from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups111Note that the c...
While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there ...
The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing ...
A
Our regularization method, which is a binary cross entropy loss between the model predictions and a zero vector, does not use additional cues or sensitivities and yet achieves near state-of-the-art performance on VQA-CPv2. We set the learning rate to: 2×10−6r2superscript106𝑟\frac{2\times 10^{-6}}{r}divide start_ARG 2 ...
We compare the baseline UpDn model with HINT and SCR-variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5555 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Fu...
In order to truly assess if existing methods are using relevant regions to produce correct answers, we use our proposed metric: Correctly Predicted but Improperly Grounded (CPIG). If the CPIG values are large, then it implies that large portion of correctly predicted samples were not properly grounded. Fig. A4 shows %...
Following Selvaraju et al. (2019), we report Spearman’s rank correlation between network’s sensitivity scores and human-based scores in Table A3. For HINT and our zero-out regularizer, we use human-based attention maps. For SCR, we use textual explanation-based scores. We find that HINT trained on human attention maps...
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible...
C
Prior work in privacy and human-computer interaction establishes the motivation for studying these documents. Although most internet users are concerned about privacy (Madden, 2017), Rudolph et al. (2018) reports that a significant number do not make the effort to read privacy notices because they perceive them to be ...
To build the PrivaSeer corpus, we create a pipeline concentrating on focused crawling Chakrabarti et al. (1999); Diligenti et al. (2000) of privacy policy documents. We used Common Crawl,222https://commoncrawl.org/ described below, to gather seed URLs to privacy policies on the web. We filtered the Common Crawl URLs to...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)...
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus PrivaSeer Corpus on privacy policies ...
We selected those URLs which had the word “privacy” or the words “data” and “protection” from the Common Crawl URL archive. We were able to extract 3.9 million URLs that fit this selection criterion. Informal experiments suggested that this selection of keywords was optimal for retrieving the most privacy policies with...
A
Figure 3(a) is a t-SNE projection [61] of the instances (MDS [22] and UMAP [31] are also available in order to empower the users with various perspectives for the same problem, based on the DR guidelines from Schneider et al. [47]). The point size is based on the predictive accuracy calculated using all the chosen mode...
Figure 3: The data space projection with the importance of each instance measured by the accuracy achieved by the stack models (a). The parallel coordinates plot view for the exploration of the values of the features (b); a problematic case is highlighted in red with values being null (‘4’ has no meaning for Ca). (c.1)...
Selection of Algorithms and Models. Similar to the workflow described in section 4, we start by setting the most appropriate parameters for the problem (see Figure 6(a)). As the data set is very imbalanced, we emphasize g-mean over accuracy, and ROC AUC over precision and recall. Log loss is disabled because the inves...
The Ca attribute, for example, has a range of 0–3, but by selection we can see five points with Ca values of ‘4’, see Figure 3(b). These values can be considered as unknown and should be further examined. One of these points belongs to the healthy class (due to the olive color) but is very small in Figure 3(c.1)—meani...
As in the data space, each point of the projection is an instance of the data set. However, instead of its original features, the instances are characterized as high-dimensional vectors where each dimension represents the prediction of one model. Thus, since there are currently 174 models in \raisebox{-.0pt} {\tiny\bfS...
C
We thus have 3333 cases, depending on the value of the tuple (p⁢(v,[010]),p⁢(v,[323]),p⁢(v,[313]),p⁢(v,[003]))𝑝𝑣delimited-[]010𝑝𝑣delimited-[]323𝑝𝑣delimited-[]313𝑝𝑣delimited-[]003(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))( italic_p ( italic_v , [ 010 ] ) , italic_p ( italic_v , [ 323 ] ) , italic_p ( italic_v...
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
{0¯,1¯,2¯,3¯,[013],[010],[323],[313],[112],[003],[113]}.¯0¯1¯2¯3delimited-[]013delimited-[]010delimited-[]323delimited-[]313delimited-[]112delimited-[]003delimited-[]113\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],% [112],[003],[113]\}.{ over¯ start_ARG 0 end_ARG , over¯ start_ARG 1 end...
p⁢(v,[013])=p⁢(v,[313])=p⁢(v,[113])=1𝑝𝑣delimited-[]013𝑝𝑣delimited-[]313𝑝𝑣delimited-[]1131p(v,[013])=p(v,[313])=p(v,[113])=1italic_p ( italic_v , [ 013 ] ) = italic_p ( italic_v , [ 313 ] ) = italic_p ( italic_v , [ 113 ] ) = 1. Similarly, when f=[112]𝑓delimited-[]112f=[112]italic_f = [ 112 ],
Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of (v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that
A
To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML : Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor...
Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2). When the data quantity i...
To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML : Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor...
Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct the setting that tasks are similar to each other. For a fair comparison, each task on this setting also has 120 and 1200 utterances o...
Model-Agnostic Meta-Learning (MAML) [Finn et al., 2017] is one of the most popular meta-learning methods. It is trained on plenty of tasks (i.e. small data sets) to get a parameter initialization which is easy to adapt to target tasks with a few samples. As a model-agnostic framework, MAML is successfully employed in d...
A
The CCA codebook-based multi-UAV beam tracking scheme with TE awareness. Based on the designed codebook, by exploiting the Gaussian process (GP) tool, both the position and attitude of UAVs can be fast tracked for fast multiuser beam tracking along with dynamic TE estimation. Moreover, the estimated TE is leveraged to...
Note that there exist some mobile mmWave beam tracking schemes exploiting the position or motion state information (MSI) based on conventional ULA/UPA recently. For example, the beam tracking is achieved by directly predicting the AOD/AOA through the improved Kalman filtering [26], however, the work of [26] only targe...
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da...
For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac...
Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-base...
A
There are other logics, incomparable in expressiveness with 𝖥𝖮Pres2subscriptsuperscript𝖥𝖮2Pres\mathsf{FO}^{2}_{\textup{Pres}}sansserif_FO start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT Pres end_POSTSUBSCRIPT, where periodicity of the spectrum has been proven [17]. The
Related one-variable fragments in which we have only a unary relational vocabulary and the main quantification is ∃Sx⁢ϕ⁢(x)superscript𝑆𝑥italic-ϕ𝑥\exists^{S}x~{}\phi(x)∃ start_POSTSUPERSCRIPT italic_S end_POSTSUPERSCRIPT italic_x italic_ϕ ( italic_x ) are known to be decidable (see, e.g. [2]), and their decidability ...
There are other logics, incomparable in expressiveness with 𝖥𝖮Pres2subscriptsuperscript𝖥𝖮2Pres\mathsf{FO}^{2}_{\textup{Pres}}sansserif_FO start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT Pres end_POSTSUBSCRIPT, where periodicity of the spectrum has been proven [17]. The
In addition, to make the main line of argument clearer, we consider only the finite graph case in the body of the paper, which already implies decidability of the finite satisfiability of 𝖥𝖮Pres2subscriptsuperscript𝖥𝖮2Pres\mathsf{FO}^{2}_{\textup{Pres}}sansserif_FO start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_...
The paper [4] shows decidability for a logic with incomparable expressiveness: the quantification allows a more powerful quantitative comparison, but must be guarded – restricting the counts only of sets of elements that are adjacent to a given element.
D
Related Work. When the value function approximator is linear, the convergence of TD is extensively studied in both continuous-time (Jaakkola et al., 1994; Tsitsiklis and Van Roy, 1997; Borkar and Meyn, 2000; Kushner and Yin, 2003; Borkar, 2009) and discrete-time (Bhandari et al., 2018; Lakshminarayanan and
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che...
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et...
A
Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the ne...
The encoder layer with the depth-wise LSTM unit, as shown in Figure 2, first performs the self-attention computation, then the depth-wise LSTM unit takes the self-attention results and the output and the cell state of the previous layer to compute the output and the cell state of the current layer.
We also study the merging operations, concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers, to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4.
Different from encoder layers, decoder layers involve two multi-head attention sub-layers: a masked self-attention sub-layer to attend the decoding history and a cross-attention sub-layer to attend information from the source side. Given that the depth-wise LSTM unit only takes one input, we introduce a merging layer ...
Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t...
A
φ𝜑\varphiitalic_φ is closed under homomorphisms. Therefore B~⊧φmodels~𝐵𝜑\widetilde{B}\models\varphiover~ start_ARG italic_B end_ARG ⊧ italic_φ because A~~𝐴\widetilde{A}over~ start_ARG italic_A end_ARG and B~~𝐵\widetilde{B}over~ start_ARG italic_B end_ARG are n𝑛nitalic_n-elementary equivalent. Finally, B~→B→C→~𝐵�...
the corresponding Alexandroff topologies: X≜⟨X,τ→,𝖥𝖮⁢[σ]⟩≜𝑋𝑋subscriptτ→𝖥𝖮delimited-[]σX\triangleq\left\langle X,\uptau_{\to},\mathsf{FO}[\upsigma]\right\rangleitalic_X ≜ ⟨ italic_X , roman_τ start_POSTSUBSCRIPT → end_POSTSUBSCRIPT , sansserif_FO [ roman_σ ] ⟩ and for n∈ℕ𝑛ℕn\in\mathbb{N}italic_n ∈ blackboard_N, l...
Let (⟨Xi,τi,𝖥𝖮⁢[σi]⟩)i∈Isubscriptsubscript𝑋𝑖subscriptτ𝑖𝖥𝖮delimited-[]subscriptσ𝑖𝑖𝐼(\left\langle X_{i},\uptau_{i},\mathsf{FO}[\upsigma_{i}]\right\rangle)_{i\in I}( ⟨ italic_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , roman_τ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , sansserif_FO [ roman_σ start_P...
that ⟦𝖥⟧X\llbracket\mathsf{F}\rrbracket_{X}⟦ sansserif_F ⟧ start_POSTSUBSCRIPT italic_X end_POSTSUBSCRIPT is a base of ⟨τ≤∩⟦𝖥𝖮[σ]⟧X⟩\left\langle\uptau_{\leq}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{X}\right\rangle⟨ roman_τ start_POSTSUBSCRIPT ≤ end_POSTSUBSCRIPT ∩ ⟦ sansserif_FO [ roman_σ ] ⟧ start_POSTSUBSCRI...
of the topological spaces Yn≜⟨X,τ→,𝖥𝖮n⁢[σ]⟩≜subscript𝑌𝑛𝑋subscriptτ→subscript𝖥𝖮𝑛delimited-[]σY_{n}\triangleq\left\langle X,\uptau_{\to},\mathsf{FO}_{n}[\upsigma]\right\rangleitalic_Y start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ≜ ⟨ italic_X , roman_τ start_POSTSUBSCRIPT → end_POSTSUBSCRIPT , sansserif_FO start...
D
Accurately estimating the distortion parameters derived from a specific camera, is a crucial step in distortion rectification. However, two main limitations that make the distortion parameters learning challenging. (i) The distortion parameters are not observable and hard to learn from a single distorted image, such as...
To overcome the above limitations, previous methods exploit more guided features such as the semantic information and distorted lines [9, 10], or introduce the pixel-wise reconstruction loss [11, 12, 13]. However, the extra features and supervisions impose increased memory/computation cost. In this work, we would like...
The proposed learning representation offers three unique advantages. First, the ordinal distortion is directly perceivable from a distorted image, and it solves a more straightforward estimation problem than the implicit metric regression. As we can observe, the farther the pixel is away from the principal point, the l...
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify...
Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and causes the insufficient distortion perception. To bridge the gap between image feature and calibration objective...
A
We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset 777https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/. We set aside 20% of the samples as the test set and divide the rema...
If we avoid these tricks, these methods may suffer from severe performance degradation. For LARS and its variants, the proposal of the layer-wise update strategy is primarily based on empirical observations. Its reasonability and necessity remain doubtful from an optimization perspective.
We compare SNGM with four baselines: MSGD, LARS [34], EXTRAP-SGD [19] and CLARS [12]. For LARS, EXTRAP-SGD and CLARS, we adopt the open source code 222https://github.com/NUS-HPC-AI-Lab/LARS-ImageNet-PyTorch 333http://proceedings.mlr.press/v119/lin20b.html 444https://github.com/slowbull/largebatch
We use a pre-trained ViT 555https://huggingface.co/google/vit-base-patch16-224-in21k [4] model and fine-tune it on the CIFAR-10/CIFAR-100 datasets. The experiments are implemented based on the Transformers 666https://github.com/huggingface/transformers framework. We fine-tune the model with 20 epochs.
We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD. The experiments are implemented based on the DeepCTR 888https://github.com/shenweichen/DeepCTR-Torch framework.
D
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any ρ𝜌\rhoitalic_ρ-approximation algorithm for the robust outlier problem into a (ρ+2)𝜌2(\rho+2)( italic_ρ + 2 )-approximation algorithm for the corresponding two-stage sto...
We now describe a generic method of transforming a given 𝒫𝒫\mathcal{P}caligraphic_P-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it obtain our 11-a...
In this section we tackle the simplest problem setting, designing an efficiently-generalizable 3333-approximation algorithm for homogeneous 2S-Sup-Poly. To begin, we are given a list of scenarios Q𝑄Qitalic_Q together with their probabilities pAsubscript𝑝𝐴p_{A}italic_p start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT,...
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any ρ𝜌\rhoitalic_ρ-approximation algorithm for the robust outlier problem into a (ρ+2)𝜌2(\rho+2)( italic_ρ + 2 )-approximation algorithm for the corresponding two-stage sto...
We follow up with 3333-approximations for the homogeneous robust outlier MatSup and MuSup problems, which are slight variations on algorithms of [6] (specifically, our approach in Section 4.1 is a variation on their solve-or-cut methods). In Section 5, we describe a 9-approximation algorithm for an inhomogeneous MatSu...
D
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error, which leads the nonegative supermartingale converg...
That is, the mean square error at the next time can be controlled by that at the previous time and the consensus error. However, this can not be obtained for the case with the linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded and the inequality (28) in [15] does n...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi...
(Lemma 3.1). To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (...
C
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i...
Note that, the application scenarios of differential privacy and the models of k𝑘kitalic_k-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of k𝑘kitalic_k-anonymity family sanitize the origi...
D
Table 2: PointRend’s step-by-step performance on our own validation set (splitted from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “...
As shown in Figure 2, we compare HTC, SOLOv2 and PointRend by visualizing their predictions. It can be seen that PointRend generates much finer and smoother segmentation boundaries than HTC and SOLOv2, it also handles overlapped instances gradely (see top-left corner in Figure 2). Meanwhile, PointRend succeeds in disti...
In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
B
(0⁢log⁡0:=0assign0000\log 0:=00 roman_log 0 := 0). The base of the log\logroman_log does not really matter here. For concreteness we take the log\logroman_log to base 2222. Note that if f𝑓fitalic_f has L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT norm 1111 then the sequence {|f^⁢(A)|2}A⊆[n]subsc...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma s...
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
where for A⊆[n]𝐴delimited-[]𝑛A\subseteq[n]italic_A ⊆ [ italic_n ], |A|𝐴|A|| italic_A | denotes the cardinality of A𝐴Aitalic_A. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info...
B
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. They are much less compared with MASTER, OPT-WLSVI, LSVI-UCB, Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202...
In this section, we perform empirical experiments on synthetic datasets to illustrate the effectiveness of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart. We compare the cumulative rewards of the proposed algorithms with five baseline algorithms: Epsilon-Greedy (Watkins, 1989), Random-Exploration, LSVI-UCB (Jin et al., 2020...
We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ...
We develop the LSVI-UCB-Restart algorithm and analyze the dynamic regret bound for both cases that local variations are known or unknown, assuming the total variations are known. We define local variations (Eq. (2)) as the change in the environment between two consecutive epochs instead of the total changes over the en...
A
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures  1 and  2) which is statistically significant (r⁢(9)=−0.81𝑟90.81r(9)=-0.81italic_r ( 9 ) = - 0.81, p<.005𝑝.005p<.005italic_p < .005). Trust is built on transparency and truthfulness, and t...
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
75 of the 104 responses fulfilled the criterion of having respondents who were currently based in Singapore. This set was extracted for further analysis and will be henceforth referred to as ‘SG-75’. The details on the participant demographics of SG-75 are shown in Table 1. From SG-75, two more subsets were formed via ...
B
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attem...
Unlike many inductive methods that are solely evaluated on datasets with unseen entities, our method aims to produce high-quality embeddings for both seen and unseen entities across various downstream tasks. To our knowledge, decentRL is the first method capable of generating high-quality embeddings for different down...
We conduct experiments to explore the impact of the numbers of unseen entities on the performance in open-world entity alignment. We present the results on the ZH-EN datasets in Figure 6. Clearly, the performance gain achieved by leveraging our method significantly increases when there are more unseen entities. For ex...
In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct compr...
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attem...
A
Figure 13: Result comparison of different settings of k𝑘kitalic_k when calculating the intrinsic reward rkisubscriptsuperscript𝑟𝑖𝑘r^{i}_{k}italic_r start_POSTSUPERSCRIPT italic_i end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT in sticky Atari games.
One may curious about whether other variational models can be used in exploration. In this section, we discuss the basic latent-variable models, i.e., variational auto-encoder (VAE), and applying Conditional VAE (CVAE) in modeling the multimodality and stochasticity of dynamics. Considering a typical VAE in modeling a...
In this section, we introduce VDM for exploration. In section III-A, we introduce the theory of VDM based on conditional variational inference. In section III-B, we present the detail of the optimizing process. In section III-C, we analyze the result of VDM used in ‘Noisy-Mnist’ that models the multimodality and stoch...
In Introduction, the two-roads MDP and Noisy-Mnist illustrate what latent variables can be in specific MDPs. The next-state relies on the latent variables contained in the underlying dynamics. However, we do not need to choose or describe the latent variables manually in practice. Considering the learning objective of ...
In this paper, we propose the Variational Dynamic Model (VDM), which models the multimodality and stochasticity of the dynamics explicitly based on conditional variational inference. VDM considers the environmental state-action transition as a conditional generative process by generating the next-state prediction unde...
A
The number of coefficients |Am,n,1|=(m+nn)∈𝒪⁢(mn)subscript𝐴𝑚𝑛1binomial𝑚𝑛𝑛𝒪superscript𝑚𝑛|A_{m,n,1}|=\binom{m+n}{n}\in\mathcal{O}(m^{n})| italic_A start_POSTSUBSCRIPT italic_m , italic_n , 1 end_POSTSUBSCRIPT | = ( FRACOP start_ARG italic_m + italic_n end_ARG start_ARG italic_n end_ARG ) ∈ caligraphic_O ( itali...
Whatsoever, any answer to Questions 2 that is to be of practical relevance must provide a recipe to construct interpolation nodes PAsubscript𝑃𝐴P_{A}italic_P start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT that allow efficient approximation while resisting the curse of dimensionality in terms of Question 1.
Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to scale sub-exponential with space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are abl...
Thus, combining sub-exponential node numbers with exponential approximation rates, interpolation with respect to l2subscript𝑙2l_{2}italic_l start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-degree polynomials might yield a way of lifting the curse of dimensionality and answering Question 1.
convergence rates for the Runge function, as a prominet example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality....
C
^{\prime})\leq B_{\nu}.≤ italic_B start_POSTSUBSCRIPT italic_μ end_POSTSUBSCRIPT , roman_sup start_POSTSUBSCRIPT italic_y , italic_y start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ∈ supp ( italic_ν ) end_POSTSUBSCRIPT d ( italic_y , italic_y start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ) ≤ italic_B start_POSTSUBSCRIPT ital...
The supports of μ𝜇\muitalic_μ and ν𝜈\nuitalic_ν are denoted as supp⁢(μ)supp𝜇\text{supp}(\mu)supp ( italic_μ ) and supp⁢(ν)supp𝜈\text{supp}(\nu)supp ( italic_ν ), respectively. We assume that both μ𝜇\muitalic_μ and ν𝜈\nuitalic_ν are unknown distributions, and the supports of them belong to the metric space (ℝd,d)s...
Assumption 1(II) does not hold when distributions μ𝜇\muitalic_μ and ν𝜈\nuitalic_ν have unbounded supports. In that case, we restrict the target distribution in a bounded support such that the probability of locating in such support is relatively large.
However, the diameter cannot be chosen arbitrarily large since otherwise the sample complexity bound will become too conservative. For instance, when the distribution μ𝜇\muitalic_μ is known to be sub-Gaussian with parameter σ𝜎\sigmaitalic_σ, we restrict the support to be (𝔼μ⁢[X]−2⁢log⁡(1/η)⁢σ,𝔼μ⁢[X]+2⁢log⁡(1/η)⁢σ)s...
The supports of target distributions μ𝜇\muitalic_μ and ν𝜈\nuitalic_ν have finite diameters, Bμsubscript𝐵𝜇B_{\mu}italic_B start_POSTSUBSCRIPT italic_μ end_POSTSUBSCRIPT and Bνsubscript𝐵𝜈B_{\nu}italic_B start_POSTSUBSCRIPT italic_ν end_POSTSUBSCRIPT, respectively:
B
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and corre...
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise...
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and corre...
Specifically, we apply a DGM to learn the nuisance variables Z𝑍Zitalic_Z, conditioned on the output image of the first part, and use Z𝑍Zitalic_Z in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the details information captured in Z𝑍Z...
B
DFS (Depth First Search) verifies that the output is possible for the actual Pin connection state. As described above, the output is determined by the 3-pin input, so we will enter 1 with the A2 and A1 connections, the B2 and B1 connections (the reverse is treated as 0), and the corresponding output will be recognized...
Fig. 3 is AND and/or gate consisting of 3-pin based logics, Fig. 3 also shows the connection status of the output pin when A=0, B=1 is entered in the AND gate. when A=0, B=1, or A is connected, and B is connected, output C is connected only to the following two pins, and this is the correct result for AND operation.
We will look at the inputs through 18 test cases to see if the circuit is acceptable. Next, it verifies with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
The structure-based computer mentioned in this paper are based on Boolean Algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic 1 and 0, and mathematically describes digital electrical si...
DFS (Depth First Search) verifies that the output is possible for the actual Pin connection state. As described above, the output is determined by the 3-pin input, so we will enter 1 with the A2 and A1 connections, the B2 and B1 connections (the reverse is treated as 0), and the corresponding output will be recognized...
B
Any permutation polynomial f⁢(x)𝑓𝑥f(x)italic_f ( italic_x ) decomposes the finite field 𝔽qsubscript𝔽𝑞\mathbb{F}_{q}blackboard_F start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT into sets containing mutually exclusive orbits, with the cardinality of each set being equal to the cycle length of the elements in that se...
Univariate polynomials f⁢(x):𝔽→𝔽:𝑓𝑥→𝔽𝔽f(x):\mathbb{F}\to\mathbb{F}italic_f ( italic_x ) : blackboard_F → blackboard_F that induces a bijection over the field 𝔽𝔽\mathbb{F}blackboard_F are called permutation polynomials (in short, PP) and have been studied extensively in the literature. For instance, given a gene...
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over 𝔽𝔽\mathbb{F}b...
There has been extensive study about a family of polynomial maps defined through a parameter a∈𝔽𝑎𝔽a\in\mathbb{F}italic_a ∈ blackboard_F over finite fields. Some well-studied families of polynomials include the Dickson polynomials and reverse Dickson polynomials, to name a few. Conditions for such families of maps to...
Given an n𝑛nitalic_n-dimensional vector space 𝔽nsuperscript𝔽𝑛\mathbb{F}^{n}blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT over finite field 𝔽𝔽\mathbb{F}blackboard_F, maps F:𝔽n→𝔽n:𝐹→superscript𝔽𝑛superscript𝔽𝑛F:\mathbb{F}^{n}\to\mathbb{F}^{n}italic_F : blackboard_F start_POSTSUPERSCRIPT ita...
C
In this study, we evaluated the performance of the different meta-learners across a variety of settings, including high-dimensional and highly correlated settings. Most of these settings were not easy problems, as evident by the absolute accuracy values obtained by the meta-learners. Additionally we considered two rea...
For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu \BOthers., \APACyear2012). An exam...
The results of applying MVS with the seven different meta-learners to the colitis data can be observed in Table 2. In terms of raw test accuracy the nonnegative lasso is the best performing meta-learner, followed by the nonnegative elastic net and the nonnegative adaptive lasso. In terms of AUC and H, the best performi...
In this article we investigate how the choice of meta-learner affects the view selection and classification performance of MVS. We compare the following meta-learners: (1) the interpolating predictor of Breiman (\APACyear1996), (2) nonnegative ridge regression (Hoerl \BBA Kennard, \APACyear1970; Le Cessie \BBA Van Hou...
The nonnegative elastic net is particularly suitable if it is important to the research that, out of a set of correlated features, more than one should be selected. If this is not of particular importance, the nonnegative lasso and nonnegative adaptive lasso can provide even sparser models.
D
We propose a dependency-based anomaly detection framework, DepAD, to provide a general approach to dependency-based anomaly detection. DepAD offers a holistic approach to guide the development of dependency-based anomaly detection methods. DepAD is effective and adaptable, utilizing off-the-shelf techniques for diverse...
We compare two high-performing instantiations of DepAD, FBED-CART-PS and FBED-CART-Sum, against nine state-of-the-art anomaly detection methods across 32 commonly used datasets. The results demonstrate that DepAD algorithms consistently outperform existing methods in most cases. Moreover, the DepAD framework’s high int...
We systematically and empirically study the performance of representative off-the-shelf techniques and their combinations in the DepAD framework. We identify two well-performing dependency-based methods. The two DepAD algorithms consistently outperform nine benchmark algorithms on 32 datasets.
The overall running time of the two DepAD algorithms and the nine benchmark methods are presented in Table 11. In general, the two DepAD algorithms have high efficiency. In the nine benchmark methods, FastABOD, ALSO, SOD and COMBN could not finish in four hours on some datasets.
To address these gaps, this paper introduces a Dependency-based Anomaly Detection framework (DepAD) to provide a general approach to dependency-based anomaly detection. For each phase of the DepAD framework, this paper analyzes what and how to utilize the off-the-shelf techniques in the context of anomaly detection. We...
B
At the start of the interaction, when no contexts have been observed, θ^tsubscript^𝜃𝑡\hat{\theta}_{t}over^ start_ARG italic_θ end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT is well-defined by Eq (5) when λt>0subscript𝜆𝑡0\lambda_{t}>0italic_λ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT > 0. Therefore, th...
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a m...
where pessimism is the additive inverse of the optimism (difference between the payoffs under true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that θ∗∈Ct⁢(δ)subscript𝜃subscript𝐶𝑡𝛿\theta_{*}\in C_{t}(\delta)italic_θ start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ∈ italic_C star...
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
Algorithm 1 follows the template of in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. Technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
D
Figure 3: Video self-stitching (VSS). a) Snippet-level features are extracted for the entire video. b) Long video is cut into multiple short clips. c) Each video clip is up-scaled along the temporal dimension. d) Original clip (green dots) and up-scaled clip (orange dots) are stitched into one feature sequence with a ...
Specifically, we propose a Video self-Stitching Graph Network (VSGN) for improving performance of short actions in the TAL problem. Our VSGN is a multi-level cross-scale framework that contains two major components: video self-stitching (VSS); cross-scale graph pyramid network (xGPN). In VSS, we focus on a short period...
In this paper, to tackle the challenging problem of large action scale variation in the temporal action localization (TAL) problem, we target short actions and propose a multi-level cross-scale solution called video self-stitching graph network (VSGN). It contains a video self-stitching (VSS) component that generates ...
The video self-stitching (VSS) component transforms a video into multi-scale input for the network. As illustrated in Fig. 3, it takes a video sequence, extracts snippet-level features, cuts into multiple short clips if it is long, up-scales each short clip along the temporal dimension, and stitches together each pair ...
Figure 3: Video self-stitching (VSS). a) Snippet-level features are extracted for the entire video. b) Long video is cut into multiple short clips. c) Each video clip is up-scaled along the temporal dimension. d) Original clip (green dots) and up-scaled clip (orange dots) are stitched into one feature sequence with a ...
C
Support for (1) selecting proper validation metrics for balanced and imbalanced data sets and (2) directing the experts’ attention to different classes for the given problem constitute two of the critical open challenges in ML. For instance, accuracy is preferred to the g-mean metric for a balanced data set [BDA13].
In the Sankey diagram (see Figure 3(a)), the user tracks the progress of the evolutionary process and is able to limit the number of models that will be generated through crossover and mutation for each algorithm (Step 4 in Figure 1). The default here is defined as user-selected random search value / 2222 for each algo...
In another example, a medical expert might focus more on eliminating false-negative predictions than false-positives (e.g., a patient being actually ill but predicted as healthy) with a bad impact on the latter. However, this trade-off is necessary when considering a person’s life.
Another open issue is the avoidance of hyperparameter tuning per se, as noted by E3. The goal of the tool is not to explore or bring insights about the individual sets of hyperparameters of the models or algorithms, but instead we focus on the search for new powerful models and implicitly store their hyperparameters. T...
However, \raisebox{0.15pt}{\resizebox{!}{0.8ex}{\textbf{\textsf{C3}}}}⃝ achieves better results for the precision metric. In the grid-based view (d.1), LR, RF, and GradB algorithms appear more powerful than other algorithms that are more diverse due to the good predictions of hard-to-classify instances.
B
Another algorithm is proposed in [28] that assumes the underlying switching network topology is ultimately connected. This assumption means that the union of graphs over an infinite interval is strongly connected. In [29], previous works are extended to solve the consensus problem on networks under limited and unreliab...
Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transi...
We then present a decentralized Markov-chain synthesis (DSMC) algorithm based on the proposed consensus protocol and we prove that the resulting DSMC algorithm satisfies these mild conditions. This result is employed to prove that the resulting Markov chain has a desired steady-state distribution and that all initial d...
we propose the decentralized state-dependent Markov chain synthesis (DSMC) algorithm that achieves convergence to the desired distribution with an exponential rate and minimal state transitions. Additionally, we present a shortest path algorithm that can be integrated with the DSMC algorithm, as utilized in [7, 14, 15]...
Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen...
B
𝐞⁢(xi)=distg⁢e⁢o⁢(xj,xj∗)diam⁢(𝒳j),𝐞subscript𝑥𝑖subscriptdist𝑔𝑒𝑜subscript𝑥𝑗superscriptsubscript𝑥𝑗diamsubscript𝒳𝑗\displaystyle\mathbf{e}(x_{i})=\frac{\text{dist}_{geo}(x_{j},x_{j}^{*})}{\text% {diam}(\mathcal{X}_{j})}\,,bold_e ( italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) = divide start_ARG di...
Our method shows state-of-the-art results on this dataset, see Fig. 2 and Tab. 2. While the PCK curves between ours, ZoomOut+Sync and HiPPI in Fig. 2 are close, the AUC in Tab. 2 shows that our performance is still superior by a small margin. Qualitative results can be found in the supplementary material.
We compare our method against several recent state-of-the-art methods, including the pairwise matching approach ZoomOut [47], the two-stage approach ZoomOut+Sync that performs synchronisation to achieve cycle consistency in the results produced by ZoomOut, as well as the multi-matching methods HiPPI [9] and ConsistentZ...
In contrast, HiPPI and our method require shape-to-universe representations. To obtain these, we use synchronisation to extract the shape-to-universe representation from the pairwise transformations. By doing so, we obtain the initial U𝑈Uitalic_U and Q𝑄Qitalic_Q. We refer to this method of synchronising the ZoomOut r...
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both ...
B
The main goal of our paper is: given a graph G𝐺Gitalic_G, find a (directed) clique path tree of G𝐺Gitalic_G or say that G𝐺Gitalic_G is not a (directed) path graph. To reach our purpose, we follow the same way in [18], by decomposing recursively G𝐺Gitalic_G by clique separators.
A chordal graph G𝐺Gitalic_G is a directed path graph if and only if G𝐺Gitalic_G is an atom or for a clique separator C𝐶Citalic_C each graph γ∈ΓC𝛾subscriptnormal-Γ𝐶\gamma\in\Gamma_{C}italic_γ ∈ roman_Γ start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT is a directed path graph, 𝑈𝑝𝑝𝑒𝑟C=(u1,u2,…,ur)subscript𝑈𝑝𝑝�...
A clique is a clique separator if its removal disconnects the graph in at least two connected components. A graph with no clique separator is called atom. For example, every cycle has no clique separator, and the butterfly/hourglass graph has two cliques and it is an atom. In [18] it is proved that an atom is a path g...
A chordal graph G𝐺Gitalic_G is a path graph if and only if G𝐺Gitalic_G is an atom or for a clique separator C𝐶Citalic_C each graph γ∈ΓC𝛾subscriptnormal-Γ𝐶\gamma\in\Gamma_{C}italic_γ ∈ roman_Γ start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT is a path graph and there exists f:ΓC→[s]normal-:𝑓normal-→subscriptnormal-Γ...
A chordal graph G𝐺Gitalic_G is a directed path graph if and only if G𝐺Gitalic_G is an atom or for a clique separator C𝐶Citalic_C each graph γ∈ΓC𝛾subscriptnormal-Γ𝐶\gamma\in\Gamma_{C}italic_γ ∈ roman_Γ start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT is a path graph and the γisubscript𝛾𝑖\gamma_{i}italic_γ start_PO...
B
Given (n,P,Θ,Π)𝑛𝑃ΘΠ(n,P,\Theta,\Pi)( italic_n , italic_P , roman_Θ , roman_Π ), we can generate a random adjacency matrix A𝐴Aitalic_A under DCMM. For convenience, we denote the DCMM model as D⁢C⁢M⁢M⁢(n,P,Θ,Π)𝐷𝐶𝑀𝑀𝑛𝑃ΘΠDCMM(n,P,\Theta,\Pi)italic_D italic_C italic_M italic_M ( italic_n , italic_P , roman_Θ , roman...
In this section, first, we investigate the performances of Mixed-SLIM methods for the problem of mixed membership community detection via synthetic data. Then we apply some real-world networks with true label information to test Mixed-SLIM methods’ performances for community detection, and we apply the SNAP ego-network...
This paper makes one major contribution: modified SLIM methods to mixed membership community detection under the DCMM model. When dealing with large networks in practice, we apply Mixed-SLIMa⁢p⁢p⁢r⁢osubscriptSLIM𝑎𝑝𝑝𝑟𝑜\mathrm{SLIM}_{appro}roman_SLIM start_POSTSUBSCRIPT italic_a italic_p italic_p italic_r italic_o ...
In this paper, we extend the symmetric Laplacian inverse matrix (SLIM) method (SLIM, ) to mixed membership networks and call this proposed method as mixed-SLIM. As mentioned in SLIM , the idea of using the symmetric Laplacian inverse matrix to measure the closeness of nodes comes from the first hitting time in a random...
In this section, we first introduce the main algorithm mixed-SLIM which can be taken as a natural extension of the SLIM (SLIM, ) to the mixed membership community detection problem. Then we discuss the choice of some tuning parameters in the proposed algorithm.
D
Our Contribution. Our contribution is two fold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation. In each iteration, variati...
Here the statistical error is incurred in estimating the Wasserstein gradient by solving the dual maximization problem using functions in a reproducing kernel Hilbert space (RKHS) with finite data, which converges sublinearly to zero as the number of particles goes to infinity. Therefore, in this scenario, variational ...
To showcase these advantages, we consider an instantiation of variational transport where the objective functional F𝐹Fitalic_F satisfies the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) with respect to the Wasserstein distance and the variational problem associated with F𝐹Fitalic_F is solved via kernel methods. I...
we prove that variational transport constructs a sequence of probability distributions that converges linearly to the global minimizer of the objective functional up to a statistical error due to estimating the Wasserstein gradient with finite particles. Moreover, such a statistical error converges to zero as the numbe...
Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to certain statistical...
D
Mixedh. The mixedh is a mixed high traffic flow with a total flow of 4770 in one hour, in order to simulate a heavy peak. The difference from the mixedl setting is that the arrival rate of vehicles during 1200-1800s increased from 0.33 vehicles/s to 4.0 vehicles/s. The data statistics are listed in Tab. II.
Reward. We define the reward for agent i𝑖iitalic_i as the negative of the queue length on incoming lanes. Note that optimizing queue length has been proved to be equivalent to optimizing average travel time in [38] under certain assumptions. Average travel time is a global criteria which cannot be optimized directly ...
Definition 3 (Average Travel Time) The travel time of a vehicle is the time discrepancy between entering and leaving a particular area. A vehicle from the origin to the destination (OD) is regarded as a travel. Average travel time of all vehicles in a road network is the most frequently used measure to evaluate the per...
Most conventional traffic signal control methods are designed based on fixed-time signal control [21], actuated control [22] or self-organizing traffic signal control [23]. These approaches rely on expert knowledge and often perform unsatisfactorily in complicated real-world situations. To solve this problem, several o...
Following existing studies [46, 13, 40, 41, 14], we use the average travel time to evaluate the performance of different methods for traffic signal control. The average travel time indicates the overall traffic situation in an area over a period of time. For a detailed definition of average travel time, see Section 3....
D
+1}-\mathbf{x}_{k}\|_{2}≤ ( italic_h start_POSTSUPERSCRIPT italic_j - 1 end_POSTSUPERSCRIPT + ⋯ + italic_h + 1 ) ∥ bold_x start_POSTSUBSCRIPT italic_k + 1 end_POSTSUBSCRIPT - bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ∥ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT
<11−h∥⁢𝐱k+1−𝐱k∥2bra11ℎsubscript𝐱𝑘1evaluated-atsubscript𝐱𝑘2\displaystyle~{}~{}<~{}~{}\mbox{$\frac{1}{1-h}$}\,\|\mathbf{x}_{k+1}-\mathbf{x% }_{k}\|_{2}< divide start_ARG 1 end_ARG start_ARG 1 - italic_h end_ARG ∥ bold_x start_POSTSUBSCRIPT italic_k + 1 end_POSTSUBSCRIPT - bold_x start_POSTSUBSCRIPT italic_k end_POS...
≤‖𝐱k+j−𝐱k+j−1‖2+⋯+‖𝐱k+1−𝐱k‖2absentsubscriptnormsubscript𝐱𝑘𝑗subscript𝐱𝑘𝑗12⋯subscriptnormsubscript𝐱𝑘1subscript𝐱𝑘2\displaystyle~{}~{}\leq~{}~{}\|\mathbf{x}_{k+j}-\mathbf{x}_{k+j-1}\|_{2}+% \cdots+\|\mathbf{x}_{k+1}-\mathbf{x}_{k}\|_{2}≤ ∥ bold_x start_POSTSUBSCRIPT italic_k + italic_j end_POSTSUBSCRIPT - bol...
\mathbf{x}_{k}\|_{2}~{}~{}\leq~{}~{}\mbox{$\frac{1}{1-h}$}\,h^{2^{k}}≤ divide start_ARG 1 end_ARG start_ARG 1 - italic_h end_ARG ∥ bold_x start_POSTSUBSCRIPT italic_k + 1 end_POSTSUBSCRIPT - bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ∥ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ≤ divide start_ARG 1 end_ARG star...
\,\delta\,h^{k}∥ bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT - over^ start_ARG bold_x end_ARG ∥ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ≤ divide start_ARG 1 end_ARG start_ARG 1 - italic_h end_ARG ∥ bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT - bold_x start_POSTSUBSCRIPT italic_k + 1 end_POSTSUBSCRIP...
A
In order to analyze the performance of an online algorithm, we will rely on the well-established framework of competitive analysis, which provides strict, theoretical performance guarantees against worst-case scenarios. In fact, as stated in (?), bin packing has served as “an early proving ground for this type of analy...
In this setting, the objective is to minimize the expected loss, defined as the difference between the number of bins opened by the algorithm, and the total size of all items normalized by the bin capacity. Ideally, one aims for a loss that is as small as o⁢(n)𝑜𝑛o(n)italic_o ( italic_n ), where n𝑛nitalic_n is the nu...
While the standard online framework assumes that the algorithm has no information on the input sequence, a recently emerged and very active direction in Machine Learning seeks to leverage predictions on the input. More precisely, the algorithm has access to some machine-learned information on the input, which, however...
Online bin packing was recently studied under an extension of the advice complexity model, in which the advice may be untrusted (?). Here, the algorithm’s performance is evaluated only at the extreme cases in which the advice is either error-free or adversarially generated, namely with respect to its consistency and i...
Online bin packing has also been studied under the advice complexity model (?, ?, ?), in which the online algorithm has access to some error-free information on the input called advice. The objective is to quantify the tradeoffs between the competitive ratio and the size of the advice (i.e., the number of bits in the b...
B
Recently proposed object representations address this pitfall of point clouds by modeling object surfaces with polygonal meshes (Wang et al., 2018; Groueix et al., 2018; Yang et al., 2018b; Spurek et al., 2020a, b). They define a mesh as a set of vertices that are joined with edges in triangles. These triangles create...
Recently proposed object representations address this pitfall of point clouds by modeling object surfaces with polygonal meshes (Wang et al., 2018; Groueix et al., 2018; Yang et al., 2018b; Spurek et al., 2020a, b). They define a mesh as a set of vertices that are joined with edges in triangles. These triangles create...
To address the problem mentioned above, most of the methods extend the Chamfer loss function of basic AtlasNet with additional terms. Bednarik et al. (2020) added terms to prevent patch collapse, reduce patch overlap and calculate the exact surface properties analytically rather than approximating them. Deng et al. (20...
Practically speaking, our approach transforms the embedding of point cloud obtained from the base model to parametrize the bijective function represented by the MLP network. This function aims to find a mapping between a canonical 2D patch to the 3D patch on the surface of the target mesh. We condition the positioning ...
Although modifications proposed by Bednarik et al. (2020) and Deng et al. (2020b) improve the quality of results, their objective is to fix deformations caused by the stitching of individual mappings. We postulate that by enforcing the local consistency of patch vertices within the objective function of a model, we ca...
D
The Mirror-prox algorithm can be performed in a decentralized manner, however, it is not known whether its optimality is preserved. In this paper, we prove that Mirror-prox remains optimal even in a decentralized case w.r.t. the dependence on the desired accuracy ε𝜀\varepsilonitalic_ε and condition number χ𝜒\chiitali...
Finally, we show how the proposed method can be applied to prominent problem of computing Wasserstein barycenters to tackle the problem of instability of regularization-based approaches under a small value of regularizing parameter. The idea is based on the saddle point reformulation of the Wasserstein barycenter probl...
We demonstrate the performance of the DMP algorithm on different network architectures with different conditional number χ𝜒\chiitalic_χ: complete graph, star graph, cycle graph and the Erdős-Rényi random graphs with the probability of edge creation p=0.5𝑝0.5p=0.5italic_p = 0.5 and p=0.4𝑝0.4p=0.4italic_p = 0.4 under...
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ...
Paper organization. This paper is organized as follows. Section 2 presents a saddle point problem of interest along with its decentralized reformulation. In Section 3, we provide the main algorithm of the paper to solve such kind of problems. In Section 4, we present the lower complexity bounds for saddle point problem...
A
The set of cycles of a graph has a vector space structure over ℤ2subscriptℤ2\mathbb{Z}_{2}blackboard_Z start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, in the case of undirected graphs, and over ℚℚ\mathbb{Q}blackboard_Q, in the case of directed graphs [5]. A basis of such a vector space is denoted cycle basis and its dimensio...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric...
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
In the case that we can find some non-star spanning tree T𝑇Titalic_T of G𝐺Gitalic_G such that ∩(T)<∩(Ts)𝑇subscript𝑇𝑠\cap(T)<\cap(T_{s})∩ ( italic_T ) < ∩ ( italic_T start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ) then, we can “simplify” the instance by removing the interbranch cycle-edges with respect to T𝑇Tital...
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class.
D
Fix a simplicial complex K𝐾Kitalic_K, a value δ∈(0,1]𝛿01\delta\in(0,1]italic_δ ∈ ( 0 , 1 ], and integers b≥1𝑏1b\geq 1italic_b ≥ 1 and m>μ⁢(K)𝑚𝜇𝐾m>\mu(K)italic_m > italic_μ ( italic_K ). If ℱℱ\mathcal{F}caligraphic_F is a sufficiently large (K,b)𝐾𝑏(K,b)( italic_K , italic_b )-free cover such that πm⁢(ℱ)≥δ⁢(|ℱ|m)...
One immediate application of Theorem 1.2 is the reduction of fractional Helly numbers. For instance, it easily improves a theorem444[35, Theorem 2.3] was not phrased in terms of (K,b)𝐾𝑏(K,b)( italic_K , italic_b )-free covers but readily generalizes to that setting, see Section 1.4.1. of Patáková [35, Theorem 2.3] in...
It is known that the Helly number of a (K,b)𝐾𝑏(K,b)( italic_K , italic_b )-free cover is bounded from above in terms of K𝐾Kitalic_K and b𝑏bitalic_b [18] 222The bound on Helly number of (K,b)-free cover directly follows from a combination of Proposition 30 and Lemma 26 in [18]., as is the Radon number [35, Proposit...
Note that the constant number of points given by the (p,q)𝑝𝑞(p,q)( italic_p , italic_q )-theorem in this case depends not only on p𝑝pitalic_p, q𝑞qitalic_q, and d𝑑ditalic_d, but also on b𝑏bitalic_b. For the setting of (1,b)1𝑏(1,b)( 1 , italic_b )-covers in surfaces555By a surface we mean a compact 2-dimensional ...
Through a series of papers [18, 35, 22], the Helly numbers, Radon numbers, and fractional Helly numbers for (⌈d/2⌉,b)𝑑2𝑏(\lceil d/2\rceil,b)( ⌈ italic_d / 2 ⌉ , italic_b )-covers in ℝdsuperscriptℝ𝑑\mathbb{R}^{d}blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT were bounded in terms of d𝑑ditalic_d and...
A
A customized beeswarm plot could facilitate selecting groups of instances and then explaining why some instances migrated. DR methods could also be helpful here, as noted by E3. Also, he proposed to include additional filtering options for all metrics.
Perception and cognition problems could emerge as the number of slices increases. We believe that four slices are already a good start to explore the vast majority of the data space because users will often focus on particular areas of interest. The two interactive thresholds are a key component here because they allow...
Visualization and interaction. E1 and E2 were surprised by the promising results we managed to achieve with the assistance of our VA system in the red wine quality use case of Section 4. Initially, E1 was slightly overwhelmed by the number of statistical measures mapped in the system’s glyphs. However, after the interv...
Workflow. All experts commented that the workflow of FeatureEnVi is straightforward, because it is mainly linear despite involving optional iterative steps. E2 stated that feature engineering is usually very time consuming, especially without the support of a system like ours. E3 also agreed with us that the features h...
The punchcard visualization could allow users to return at any step and follow another path (E2). Finally, E1 mentioned that it could be useful if our system supported custom transformation and generation of features for users to experiment with. We intend to implement the above functionalities.
D
0\text{m/s}^{2},\,|u_{y}|\leq 20\text{m/s}^{2}\}italic_u ∈ caligraphic_U := { [ italic_u start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT , italic_u start_POSTSUBSCRIPT italic_y end_POSTSUBSCRIPT ] start_POSTSUPERSCRIPT sansserif_T end_POSTSUPERSCRIPT | | italic_u start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT | ≤ 20 m/s...
We first optimize the performance of the simulated positioning system by adding a receding horizon MPCC stage where we pre-optimize the position and velocity references provided to the low level controller. This is enabled by the high repeatability of the system which results in run-to-run deviations of 3⁢μ⁢m3𝜇𝑚3\mu ...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi...
For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using Latin hypercube design of experiments. The BO progress is shown in Figure 5, right pannel, for the optimization with constraints on the jerk and on the tracking error. Af...
The goal is to tune the parameters of the MPC-based planning unit without introducing any modification in the structure of the underlying control system. We leverage the repeatability of the system, which is higher than the integrated encoder error of 3⁢μ⁢m3𝜇𝑚3\mu m3 italic_μ italic_m,
D
Methods are typically highly sensitive to hyperparameter choices, and papers report numbers on systems in which the hyperparameters were tuned using the test set distribution [18, 50, 64]. In the real world, biases may stem from multiple factors and may change in different environments, making this setup unrealistic. ...
It is unknown how well the methods scale up to multiple sources of biases and large number of groups, even when they are explicitly annotated. To study this, we train the explicit methods with multiple explicit variables for Biased MNISTv1 and individual variables that lead to hundreds and thousands of groups for GQA ...
Figure 1: Current bias mitigation systems are tested on simple datasets that are easy to analyze, but do not offer challenges present in realistic cases. Addressing this, we propose the Biased MNISTv1 dataset which is easy to analyze, yet is reflective of real world challenges since it contains multiple sources of bias...
We use the GQA visual question answering dataset [33] to highlight the challenges of using bias mitigation methods on real-world tasks. It has multiple sources of biases including imbalances in answer distribution, visual concept co-occurrences, question word correlations, and question type/answer distribution. It is u...
In addition, we posit that the commonly used benchmarks are not challenging enough to test generalization to realistic scenarios. For example CelebA and Colored MNIST, two of the most widely used benchmarks, contain a single bias variable to mitigate: gender and color respectively. It is unclear how well methods would ...
D
Wu et al. collect the MagicEyes dataset using IR cameras [123]. They propose EyeNet, a neural network that solves multiple heterogeneous tasks related to eye gaze estimation for an off-axis camera setting. They use the CNN to model 3333D cornea and 3333D pupil and estimate the gaze from these two 3333D models. Lemley e...
Figure 2: From intrusive skin electrodes [16] to off-shelf web cameras [17], gaze estimation is more flexible. Gaze estimation methods are also updated with the change of devices. We illustrate five kinds of gaze estimation methods. (1). Attached sensor-based methods. The method samples the electrical signal of skin e...
Deep learning have been used in many computer vision tasks and demonstrated outstanding performance. Zhang et al. propose the first CNN-based gaze estimation method to regress gaze directions from eye images [17]. They use a simple CNN and the performance surpasses most of the conventional appearance-based approaches. ...
Convolutional neural networks have been widely used in many compute vision tasks [88]. They also demonstrate superior performance in the field of gaze estimation. In this section, we first review the existing gaze estimation methods from the learning strategy perspective, i.e., the supervised CNNs and the semi-/self-/u...
Tab. I summarizes the existing CNN-based gaze estimation methods. Note that many methods do not specify a platform [17, 56]. Thus, we categorize these methods into the platform of ”computer”. In general, there is an increasing trend in developing supervised or semi-/self-/un-supervised CNN structures to estimate gaze....
D
Experimental results are carried out on Real-world Masked Face Recognition Dataset (RMFRD) and Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (...
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieved a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner. It consists of applying a TL technique to fine-tune the ...
simonyan2014very is trained on the ImageNet dataset which has over 14 million images and 1000 classes. Its name VGG-16 comes from the fact that it has 16 layers. It contains different layers including convolutional layers, Max Pooling layers, Activation layers, and Fully Connected (fc) layers. There are 13 convolution...
Despite the recent breakthroughs of deep learning architectures in pattern recognition tasks, they need to estimate millions of parameters in the fully connected layers that require powerful hardware with high processing capacity and memory. To address this problem, we present in this paper an efficient quantization b...
has been successfully employed for image classification tasks krizhevsky2017imagenet . This deep model is pre-trained on a few millions of images from the ImageNet database through eight learned layers, five convolutional layers and three fully-connected layers. The last fully-connected layer allows to classify one tho...
C
Assuming F∈[𝚪]𝐹delimited-[]𝚪F\in[\bm{\Gamma}]italic_F ∈ [ bold_Γ ], we want to show F,C,C′∈⟦𝚫⟧F,C,C^{\prime}\in\llbracket\bm{\Delta}\rrbracketitalic_F , italic_C , italic_C start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ∈ ⟦ bold_Δ ⟧. By induction on the first premise, F,C∈⟦𝚪′⟧F,C\in\llbracket\bm{\Gamma}^{\prime}\rrbr...
By induction on the configuration typing derivation D𝐷Ditalic_D, the empty and join cases are discharged by Lemma 7. The object typing cases are covered by Lemma 6, noting that ⦇Γ⦈delimited-⦇⦈Γ\llparenthesis\Gamma\rrparenthesis⦇ roman_Γ ⦈ persists across the semantic sequent due to memory cell persistence and monotoni...
Now, let F∈⦇A⦈≜F∈⦇A⦈nF\in\llparenthesis A\rrparenthesis\triangleq F\in\llparenthesis A% \rrparenthesis_{n}italic_F ∈ ⦇ italic_A ⦈ ≜ italic_F ∈ ⦇ italic_A ⦈ start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT for some n𝑛nitalic_n—intuitively, all of the (syntactic) types we have considered so far are defined by a lexicograp...
\operatorname{!cell}a\,Kroman_proc italic_a ( bold_case italic_a start_POSTSUPERSCRIPT roman_W end_POSTSUPERSCRIPT italic_K ) → start_OPFUNCTION ! roman_cell end_OPFUNCTION italic_a italic_K, so we invoke part 2 on SAXωsuperscriptSAX𝜔\text{SAX}^{\omega}SAX start_POSTSUPERSCRIPT italic_ω end_POSTSUPERSCRIPT derivation ...
We prove parts 2 and 3 simultaneously by lexicographic induction on SAXωsuperscriptSAX𝜔\text{SAX}^{\omega}SAX start_POSTSUPERSCRIPT italic_ω end_POSTSUPERSCRIPT derivation D𝐷Ditalic_D then the part number, yielding induction hypotheses I⁢H2⁢(derivation)𝐼subscript𝐻2derivationIH_{2}(\text{derivation})italic_I italic...
A
Second, we compare the cloud-side efficiency of FairCMS-I and FairCMS-II, and the results are presented in Fig. 13. As shown therein, the cloud-side efficiency of FairCMS-I is significantly higher than that of FairCMS-II, thus validating our analysis in Section VII. The main reason for the cloud-side efficiency gain of...
Finally, the comparison between the two proposed schemes and the existing relevant schemes is summarized in Table I. As can be seen therein, the two proposed schemes FairCMS-I and FairCMS-II have advantages over the existing works. In addition, the two proposed schemes offer owners the flexibility to choose. If the sec...
Finally, we conduct a comparative experiment to evaluate the proposed schemes against their relevant existing counterparts, and the results are displayed in Fig. 15. For FairCMS-I and FairCMS-II, we measure the time overhead of Part 2 as it is executed once for each user. For the other schemes, we evaluate their prima...
The owner-side efficiency and scalability performance of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Comparing to FairCMS-I, it is easy to see that in FairCMS-II the cloud’s overhead is increased considerably due to the ado...
Second, we compare the cloud-side efficiency of FairCMS-I and FairCMS-II, and the results are presented in Fig. 13. As shown therein, the cloud-side efficiency of FairCMS-I is significantly higher than that of FairCMS-II, thus validating our analysis in Section VII. The main reason for the cloud-side efficiency gain of...
B
Table 2: Performance comparison of different methods on three datasets. The four model classes (A, B, C, D) are defined in Section 5.1.1. The last two columns are average improvements of our proposed model GraphFM compared with corresponding base models (“+”: increase, “-”: decrease).
We observe that GraphFM outperforms all the ablative methods, which proves the necessity of all these components in our model. The performance of GraphFM(-M) suffers from a sharp drop compared with GraphFM, proving that it is necessary to transform and aggregate the feature interactions in multiple semantic subspaces t...
Our proposed GraphFM achieves best performance among all these four classes of methods on three datasets. The performance improvement of GraphFM compared with the three classes of methods (A, B, C) is especially significant, above 0.010.01\mathbf{0.01}bold_0.01-level. The aggregation-based methods including InterHAt, A...
This section presents an empirical investigation of the performance of GraphFM on two CTR benchmark datasets and a recommender system dataset. The experimental settings are described, followed by comparisons with other state-of-the-art methods. An ablation study is also conducted to verify the importance of each compo...
Table 2: Performance comparison of different methods on three datasets. The four model classes (A, B, C, D) are defined in Section 5.1.1. The last two columns are average improvements of our proposed model GraphFM compared with corresponding base models (“+”: increase, “-”: decrease).
B
Note that there are no formal convergence guarantees for this algorithm when applied to Problem (1.1). All figures show the evolution of the h⁢(𝐱t)ℎsubscript𝐱𝑡h(\mathbf{x}_{t})italic_h ( bold_x start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) and g⁢(𝐱t)𝑔subscript𝐱𝑡g(\mathbf{x}_{t})italic_g ( bold_x start_POSTSUB...
The only other algorithm that is sometimes faster is the Away-step Frank-Wolfe variant, which however depends on an active set and can therefore induce up a quadratic time and memory overhead, potentially rendering the method inattractive for very large-scale settings.
For AFW, we can see that the algorithm either chooses to perform what is known as a Frank-Wolfe step in Line 7 of Algorithm 5 if the Frank-Wolfe gap g⁢(𝐱)𝑔𝐱g(\mathbf{x})italic_g ( bold_x ) is greater than the away gap ⟨∇f⁢(𝐱t),𝐚t−𝐱t⟩∇𝑓subscript𝐱𝑡subscript𝐚𝑡subscript𝐱𝑡\left\langle\nabla f(\mathbf{x}_{t}),\m...
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of 𝒪⁢(1/t)𝒪1𝑡\mathcal{O}(1/t)caligraphic_O ( 1 / italic_t ). The idea of the proof is...
In the classical analysis of Newton’s method, when the Hessian of f𝑓fitalic_f is assumed to be Lipschitz continuous and the function is strongly convex, one arrives at a convergence rate for the algorithm that depends on the Euclidean structure of ℝnsuperscriptℝ𝑛\mathbb{R}^{n}blackboard_R start_POSTSUPERSCRIPT italic...
A
In an ideal situation, our algorithm visits a1subscript𝑎1a_{1}italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT from α𝛼\alphaitalic_α, then a2subscript𝑎2a_{2}italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT along (α,a1)𝛼subscript𝑎1(\alpha,a_{1})( italic_α , italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) and so ...
In our algorithm descriptions, we mainly consider directed edges, i.e., arcs. The structures are used to keep track of the directed alternating paths found by the algorithm. Intuitively, upon discovering an odd cycle along a certain direction, we implicitly learn of the existence of alternating paths along both directi...
Informally speaking, the key observations are that in the former case, by Lemma 4.8, (a suffix of) the active path must form an odd cycle. A very convenient property of odd cycles is that as soon as they are discovered by the algorithm, their arcs can never belong to two distinct structures of the free vertices.
Informally speaking, we can think that these arcs were already “dealt with earlier” and can be ignored in the future steps of the algorithm444This idea is somewhat similar to contracting blossoms but we need to keep track of the lengths of the paths..
The rough idea of the proof is as follows. First, we observe that having a small number of short augmenting paths is a certificate for a good approximation, as formalized in Lemma 5.9. We use this observation to show in Lemma 5.10 that whenever we do not have a good approximation yet, we must find many augmenting paths...
C
We propose CPP – a novel decentralized optimization method with communication compression. The method works under a general class of compression operators and is shown to achieve linear convergence for strongly convex and smooth objective functions over general directed graphs. To the best of our knowledge, CPP is the...
We consider an asynchronous broadcast version of CPP (B-CPP). B-CPP further reduces the communicated data per iteration and is also provably linearly convergent over directed graphs for minimizing strongly convex and smooth objective functions. Numerical experiments demonstrate the advantages of B-CPP in saving commun...
In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP...
In the second part of this paper, we propose a broadcast-like CPP algorithm (B-CPP) that allows for asynchronous updates of the agents: at every iteration of the algorithm, only a subset of the agents wake up to perform prescribed updates. Thus, B-CPP is more flexible, and due to its broadcast nature, it can further sa...
In this section, we compare the numerical performance of CPP and B-CPP with the Push-Pull/𝒜⁢ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B method [24, 25]. In the experiments, we equip CPP and B-CPP with different compression operators and consider different graph topologies.
A
We develop multiple novel algorithms to solve decentralized personalized federated saddle-point problems. These methods (Algorithm 1 and Algorithm 2) are based on recent sliding technique [27, 28, 29] adapted to SPPs in a decentralized PFL. In addition, we present Algorithm 3 which used the randomized local method fro...
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms and derives the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile...
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low...
We divided our experiments into two parts: 1) toy experiments on strongly convex – strongly concave bilinear saddle point problems to verify the theoretical results and 2) adversarial training of neural networks to compare deterministic (Algorithm 1) and stochastic (Algorithm 3) approaches.
We adapt the proposed algorithm for training neural networks. We compare our algorithms: type of sliding (Algorithm 1) and type of local method (Algorithm 3). To the best of our knowledge, this is the first work that compares these approaches in the scope of neural networks, as previous studies were limited to simpler...
D
The solution concepts discussed so far apply to normal form (NF) games, and therefore are sometimes prefixed as such in the literature (NFCE and NFCCE) to disambiguate them from their extensive form (EF) counterparts (EFCE (von Stengel & Forges, 2008) and EFCCE (Farina et al., 2019a)). This distinction is important bec...
Policy-Space Response Oracles (PSRO) (Lanctot et al., 2017) (Algorithm 1) is an iterative population based training method for multi-agent learning that generalizes other well known algorithms such as fictitious play (FP) (Brown, 1951), fictitious self play (FSP) (Heinrich et al., 2015) and double oracle (DO) (McMahan...
Recent success in tackling two-player, constant-sum games (Silver et al., 2016; Vinyals et al., 2019) has outpaced progress in n-player, general-sum games despite a lot of interest (Jaderberg et al., 2019; Berner et al., 2019; Brown & Sandholm, 2019; Lockhart et al., 2020; Gray et al., 2020; Anthony et al., 2020). One ...
Outside of normal form (NF) games, this problem setting arises in multi-agent training when dealing with empirical games (also called meta-games), where a game payoff tensor is populated with expected outcomes between agents playing an extensive form (EF) game, for example the StarCraft League (Vinyals et al., 2019) a...
JPSRO (Algorithm 2) is a novel extension to Policy-Space Response Oracles (PSRO) (Lanctot et al., 2017) (Algorithm 1) with full mixed joint policies to enable coordination among policies. Although a conceptually straightforward extension, careful attention is needed to a) develop suitable best response (BR) operators, ...
A
\right]}}=\underset{X\sim D}{\text{Cov}}\left({q}\left(X\right),{K}\left(X,v% \right)\right).italic_q ( italic_D start_POSTSUPERSCRIPT italic_v end_POSTSUPERSCRIPT ) - italic_q ( italic_D ) = start_UNDERACCENT italic_X ∼ italic_D end_UNDERACCENT start_ARG blackboard_E end_ARG [ italic_K ( italic_X , italic_v ) italic_q...
The second part is a direct result of the known variational representation of total variation distance and χ2superscript𝜒2\chi^{2}italic_χ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT divergence, which are both f𝑓fitalic_f-divergences (see Equations 7.88 and 7.91 in Polyanskiy and Wu (2022) for more details).
Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific q𝑞qitalic_q and K⁢(⋅,v)𝐾⋅𝑣{K}\left(\cdot,v\right)italic_K ( ⋅ , italic_v ) as discussed in Section 6. The second part of this Lemma implies that bounding the appropriate divergence is necessary and sufficient...
We note that the first part of this definition can be viewed as a refined version of zCDP (Definition B.18), where the bound on the Rényi divergence (Definition B.5) is a function of the sample sets and the query. As for the second part, since the bound depends on the queries, which themselves are random variables, it...
\frac{{D}\left(v\,|\,x\right)}{{D}\left(v\right)}italic_K ( italic_x , italic_v ) ≔ divide start_ARG italic_D ( italic_x | italic_v ) end_ARG start_ARG italic_D ( italic_x ) end_ARG = divide start_ARG italic_D ( italic_v | italic_x ) end_ARG start_ARG italic_D ( italic_v ) end_ARG is the Bayes factor of x𝑥xitalic_x gi...
A
In fact, we prove a slightly stronger statement. If a graph G𝐺Gitalic_G can be reduced to a graph G′superscript𝐺′G^{\prime}italic_G start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT by iteratively removing z𝑧zitalic_z-antlers, each of width at most k𝑘kitalic_k, and the sum of the widths of this sequence of antlers is t𝑡...
As the first step of our proposed research program into parameter reduction (and thereby, search space reduction) by a preprocessing phase, we present a graph decomposition for Feedback Vertex Set which can identify vertices S𝑆Sitalic_S that belong to an optimal solution; and which therefore facilitate a reduction fr...
The remainder of the paper is organized as follows. After presenting preliminaries on graphs and sets in Section 2, we prove the mentioned hardness results in Section 3. We present structural properties of antlers and how they combine in Section 4. In Section 5 we show how color coding can be used to find a large feedb...
Our algorithmic results are based on a combination of graph reduction and color coding [6] (more precisely, its derandomization via the notion of universal sets). We use reduction steps inspired by the kernelization algorithms [12, 46] for Feedback Vertex Set to bound the size of 𝖺𝗇𝗍𝗅𝖾𝗋𝖺𝗇𝗍𝗅𝖾𝗋\mathsf{antler...
As described in Section 1, our algorithm aims to identify vertices in antlers using color coding. To allow a relatively small family of colorings to identify an entire antler structure (C,F)𝐶𝐹(C,F)( italic_C , italic_F ) with |C|≤k𝐶𝑘|C|\leq k| italic_C | ≤ italic_k, we need to bound |F|𝐹|F|| italic_F | in terms of...
C
In S-FOSD dataset, Zhang et al. [193] segment one foreground object from a real image and fill its bounding box with image mean values to get the background. For each background image, the foreground object from the same image is deemed as ground-truth. In R-FOSD dataset, Zhang et al. [193] collect images from Internet...
In S-FOSD dataset, Zhang et al. [193] segment one foreground object from a real image and fill its bounding box with image mean values to get the background. For each background image, the foreground object from the same image is deemed as ground-truth. In R-FOSD dataset, Zhang et al. [193] collect images from Internet...
In comparison, S-FOSD dataset is low-cost and highly scalable, but has neither complete background nor ground-truth negative samples. R-FOSD dataset has complete background image with accurately annotated positive and negative foregrounds, but is unscalable due to the high annotation cost.
We evaluate different methods on S-FOSD dataset and R-FOSD dataset [193]. Specifically, we train on S-FOSD training set, while testing on S-FOSD test set and R-FOSD test set. The retrieval results of CFO [206], UFO [207], GALA [210], FFR [175], and DiscoFOS [193] are shown in Fig. 18. The results show that DiscoFOS can...
Figure 18: The visualization results of different foreground object search methods CFO [206], UFO [207], GALA [210], FFR [175], DiscoFOS [193] on S-FOSD (top) and R-FOSD (bottom) datasets. On R-FOSD test set, green (resp., red) box is used to indicate the foreground with compatible (resp., incompatible) label.
B
The Greedy algorithm, which does not consider any global optimization targets, performs the worst compared to LLD and LPA. Taking global optimization targets into consideration leads to a significant improvement in performance, with completion rates improving by 5%∼similar-to\sim∼20% and revenue increasing by 2%∼simil...
Our experimental results demonstrate that LPA outperforms LLD in most cases. This can be attributed to the fact that LPA optimizes the expected long-term revenues at each dispatching round, while LLD only focuses on the immediate reward. As a result, LPA is better suited for maximizing the total revenue of the system ...
Efficient taxi allocation is crucial for the passenger transportation services in smart cities. To address this challenge, we leverage the data available in CityNet and present benchmarks for the taxi dispatching task. In this task, operators are responsible for dispatching available taxis to waiting passengers in rea...
Problem Statement. To address the taxi dispatching task, we learn a real-time dispatching policy based on historical passenger requests. At every timestamp τ𝜏\tauitalic_τ, we use this policy to dispatch available taxis to current passengers, with the aim of maximizing the total revenue of all taxis in the long run. To...
LPA algorithm is a reinforcement learning-based approach [6]. We first adopt SARSA [6] to learn the expected long-term revenue of each grid in each period. Based on these expected revenues, we dispatch taxis to passengers using the same optimization formulation as Eqn. (13), with the exception that we replace A⁢(i,j)�...
A
There are two main factors that contribute to this problem. First of all, as mentioned before, due to the complexity of present-day data sets, common assumptions can be violated. A simple example is the case of empirical averaging (16), where the standard deviation will only give rise to calibrated intervals if the re...
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat...
Methods for uncertainty quantification in classification and regression problems usually differ substantially. Many traditional classification methods produce probability estimates, which are used as a starting point for uncertainty quantification, out-of-distribution detection and open-set recognition 9040673 . In re...
In the context of classification problems, where especially the former issue plays a role guo2017calibration , a wide variety of calibration methods is available: Platt scaling, temperature scaling, isotonic regression, etc. In general these methods take the output distribution of the trained predictor and modify it su...
In this study several types of prediction interval estimators for regression problems were reviewed and compared. Two main properties were taken into account: the coverage degree and the average width of the prediction intervals. It was found that without post-hoc calibration the methods derived from a probabilistic mo...
C
We use this dataset for the emotion classification task. As Tab. 1 shows, the average length of the pieces in the EMOPIA dataset is the shortest among the five, since they are actually clips manually selected by dedicated annotators \parenciteemopia to ensure that each performance expresses a single emotion.
Similar to text, a piece of music in MIDI can be considered as a sequence of musical events or “tokens”. However, what makes music different is that musical notes are associated with a temporal length (i.e., note duration) and multiple notes can be played at the same time. Therefore, to represent music, we need note-r...
These constitute the main ideas of the CP representation \parencitehsiao21aaai, which has at least the following two advantages over its REMI counterpart: 1) the number of time steps needed to represent a MIDI piece is much reduced, since the tokens are merged into a “super token” (a.k.a. a “compound word” \parencitehs...
In the literature, a variety of token representations for MIDI have been proposed, differing in many aspects such as the MIDI data being considered (e.g., melody \parenciteMagenta, lead sheet \parencitejazzTransformer20ismir, piano \parencitehuang2018music and multi-track music \parencitepayne2019musenet,multitrackmusi...
While each time step corresponds to a single token in REMI, each time step would correspond to a super token that assembles four tokens in total in CP. Without such a token grouping, the sequence length (in terms of the number of time steps) of REMI is longer than that of CP (in this example, 16 versus 4). Please note ...
A
The λ𝜆\lambdaitalic_λ-backbone coloring of G𝐺Gitalic_G with backbone H𝐻Hitalic_H is defined as a function c:V⁢(G)→ℕ+normal-:𝑐normal-→𝑉𝐺subscriptℕc\colon V(G)\to\mathbb{N}_{+}italic_c : italic_V ( italic_G ) → blackboard_N start_POSTSUBSCRIPT + end_POSTSUBSCRIPT such that
The lemma above can be applied to the λ𝜆\lambdaitalic_λ-backbone coloring problem directly. Note that here we can extend the notation C1subscript𝐶1C_{1}italic_C start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, C2subscript𝐶2C_{2}italic_C start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT used before only for trees – however this time i...
This, in turn, combined with another much simpler algorithm allows us to show that we can find in polynomial time a λ𝜆\lambdaitalic_λ-backbone coloring for G𝐺Gitalic_G with backbone forest F𝐹Fitalic_F that uses at most Δ2⁢(F)⁢⌈log⁡n⌉superscriptΔ2𝐹𝑛\Delta^{2}(F)\lceil\log{n}\rceilroman_Δ start_POSTSUPERSCRIPT 2 en...
Note that it differs from the vertex coloring problem in important ways: in an optimal λ𝜆\lambdaitalic_λ-backbone coloring (i.e. using a minimum number of colors) the ordering of the colors matters, therefore we might observe that some smaller colors are not used while the larger ones are in use.
We will color F𝐹Fitalic_F by assigning colors to Y1subscript𝑌1Y_{1}italic_Y start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, B1subscript𝐵1B_{1}italic_B start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and R1subscript𝑅1R_{1}italic_R start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT first, and then to Y2subscript𝑌2Y_{2}italic_Y start_POSTSUBS...
C