Dataset schema: columns context, A, B, C, D (strings, 250 to ~8.2k characters each) and label (4 classes: A, B, C, D). Each row below gives the five passages followed by the row's label.
The two ratios of derivatives are obtained by setting $R_n^m(x)=0$ in (29) and (30), then dividing both equations through ${R_n^m}'(x)$ …
computed from $R_n^m(x)/{R_n^m}'(x)=f(x)/f'(x)$ …
to the weight such that a Gauss–Legendre integration for moments $x^{D+m-1}$ is engaged and the wiggly remainder of $R_n^m$ …
Since $R_n^m(x)$ is a polynomial of order $n$, the $(n+1)$st derivatives
A Newton's method of third-order convergence is implemented for the Zernike polynomials $R_n^m$ by computing the ratios
D
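The third-order iteration built from these ratios can be sketched compactly. Below is a minimal illustration assuming a Halley-type update (the usual third-order Newton variant expressed through $f/f'$ and $f''/f'$); the test polynomial is a stand-in, not the Zernike recurrences (29)–(30).

```python
import numpy as np

def halley_step(x, f, fp, fpp):
    """One step of the cubically convergent (third-order) Newton/Halley
    iteration, written in terms of the ratios f/f' and f''/f'."""
    u = f(x) / fp(x)                                  # ratio f/f'
    return x - u / (1.0 - 0.5 * u * fpp(x) / fp(x))   # uses ratio f''/f'

# Toy usage on a Legendre polynomial; the Zernike radial polynomial R_n^m
# and its derivatives would be plugged in instead.
f   = np.polynomial.Polynomial([0, -1.5, 0, 2.5])     # P_3(x) = 2.5x^3 - 1.5x
fp  = f.deriv(1)
fpp = f.deriv(2)

x = 0.7                                               # initial guess
for _ in range(5):
    x = halley_step(x, f, fp, fpp)
print(x)  # ~0.774597, the positive root sqrt(3/5)
```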
$T_i:=\{t_{i(i-1)}(\omega^\ell)\mid \ell=0,\ldots,f-1\}$ for $i=2,\ldots,d$.
The first step of the algorithm is the one-off computation of $T_2$ from the LGO standard generators of $\mathrm{SL}(d,q)$. The length and memory requirement of an MSLP for this step is as follows.
The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
A total of four MSLP instructions (group multiplications or inversions) are required, and only one memory slot is needed in addition to the two memory slots used to permanently store the input elements $g,h$. In other words, there exists an MSLP $S$ with memory quota $b=3$ …
This adds only one extra MSLP instruction, in order to form and store the element $xv^{-1}$ needed in the conjugate on the right-hand side of (2) (this element can later be overwritten and so does not add to the overall maximum memory quo…
B
where $\Omega\subset\mathbb{R}^d$, with $d=2$ or $3$ for simplicity, is an open bounded domain with polyhedral boundary $\partial\Omega$, and the symmetric tensor $\mathcal{A}\in[L^\infty(\Omega)]^{d\times d}_{\mathrm{sym}}$ …
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ...
In [MR2718268] it is shown that the number of very large eigenvalues is related to the number of connected sub-regions of $\bar{\tau}\cup\bar{\tau}'$ with large coefficien…
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85…
D
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases.
The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and floating-point issues in both programs. Our implementations of Alg-K and Alg-CM differ logically in how they handle degenerate cases.
Alg-A has simpler primitives because (1) the candidate triangles it considers have all corners lying on $P$'s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code length for this step is 1:7 between Alg-A and Alg-CM.
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
B
For analyzing the employed features, we rank them by importance using RF (see 3). The best feature is related to sentiment polarity scores. There is a large difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of new…
CrowdWisdom: Similar to [18], the core idea is to leverage the public’s common sense for rumor detection: If there are more people denying or doubting the truth of an event, this event is more likely to be a rumor. For this purpose,  [18] use an extensive list of bipolar sentiments with a set of combinational rules. In...
It has to be noted that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable only as the user gains more knowledge about the event.
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text content, which is the most reliable source before a rumor spreads widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha…
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesired at critical moments. We trade this off by debunking at the single-tweet le…
B
The convergence of the direction of gradient descent updates to the maximum $L_2$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile to continue optimizing long after we have zero training …
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set for which there exists $\mathbf{x}\in\mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}}<0$…
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a…
We should not rely on plateauing of the training loss, or of the loss (logistic, exp, or cross-entropy) evaluated on validation data, as a measure for deciding when to stop. Instead, we should look at the 0–1 error on the validation dataset. We might improve the validation and test errors even when the decrease …
D
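A toy illustration of this slow directional convergence (my own sketch, not the paper's experiment): on separable data, full-batch gradient descent on the logistic loss drives the loss down by orders of magnitude while the normalized weight direction is still drifting toward the max-margin solution.

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

# Linearly separable toy data, labels y in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+3, 0.5, (50, 2)), rng.normal(-3, 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

w, lr = np.zeros(2), 0.5
for t in range(1, 100_001):
    m = y * (X @ w)                                   # signed margins
    grad = -(expit(-m)[:, None] * y[:, None] * X).mean(axis=0)
    w -= lr * grad
    if t in (10, 100, 1_000, 10_000, 100_000):
        loss = np.logaddexp(0.0, -m).mean()           # stable log(1+e^{-m})
        # loss collapses quickly; the direction keeps moving slowly
        print(f"t={t:>6}  loss={loss:.2e}  dir={w / np.linalg.norm(w)}")
```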
As we can see in Figure 9, the best result on average over 48 hours is BestSet, followed by All features. Apart from those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we …
For analyzing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a large difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even…
As shown in Table 11, CreditScore is the best feature overall. Figure 10 shows the results of models learned with the full feature set, with and without CreditScore. Overall, adding CreditScore improves the performance, significantly so for the first 8–10 hours. The performance of all-but-CreditScore jiggles a bit afte…
The text feature set contains 16 features in total. The feature ranking is shown in Table 7. The best one is NumOfChar, which is the average number of different characters in tweets. PolarityScores is the best feature when we tested the single-tweet model, but its performance in the time-series model is not ideal. It is true …
As we can see in Figure 9, the best result on average over 48 hours is BestSet, followed by All features. Apart from those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we …
C
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with respect to the event times mentioned previously. We compare the result of the cascaded model with a non-cascaded logistic regression. The res…
RQ3. We present the results of the single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall…
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with respect to the event times mentioned previously. We compare the result of the cascaded model with a non-cascaded logistic regression. The res…
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event, which is driven by a great variety of factors. We address the two major factors that are assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we…
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type …
D
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular.
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] to hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular.
The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015].
with Bernoulli and contextual linear Gaussian reward functions [Kaufmann et al., 2012; Garivier and Cappé, 2011; Korda et al., 2013; Agrawal and Goyal, 2013b], as well as for context-dependent binary rewards modeled with the logistic reward function Chapelle and Li [2011]; Scott [2015] —Appendix A.3.
C
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
The insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes occur earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning.
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients. For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t…
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
D
In this work, we adopted KLD as an objective function and produced fixation density maps as output from our proposed network. This training setup is particularly sensitive to false negative predictions and thus the appropriate choice for applications aimed at salient target detection Bylinskii et al. (2018). Defining ...
A prerequisite for the successful application of deep learning techniques is a wealth of annotated data. Fortunately, the growing interest in developing and evaluating fixation models has led to the release of large-scale eye tracking datasets such as MIT1003 Judd et al. (2009), CAT2000 Borji and Itti (2015), DUT-OMRO…
A quantitative comparison of results on independent test datasets was carried out to characterize how well our proposed network generalizes to unseen images. Here, we were mainly interested in estimating human eye movements and regarded mouse tracking measurements merely as a substitute for attention. The final outcome...
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba...
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met...
B
For example, the path decomposition $(\{u,w,x\},\{u,v,x\},\{v,y,z\})$ for graph $H$ can be represented as a pd-marking scheme as illustrated in Figure 3 (for…
The locality number is rather new and we shall discuss it in more detail. A word is $k$-local if there exists an order of its symbols such that, if we mark the symbols in the respective order (which is called a marking sequence), at each stage there are at most $k$ contiguous blocks of marked symbols …
We use $G_\alpha$ as a unique graph representation for words, and whenever we talk about a path decomposition for $\alpha$, we actually refer to a path decomposition of $G_\alpha$ …
Both the locality number of a word and the pathwidth of a graph are defined via markings. In order to avoid confusion, we therefore use different terminology to distinguish between these two concepts (see also the terminology defined in Section 2.2): the markings for words are called marking sequences, while the marking…
In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into grap...
C
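To make the marking-sequence definition concrete, here is a brute-force computation of the locality number (my own illustrative sketch; it enumerates all orders of the distinct symbols, so it is exponential in the alphabet size and unrelated to the approximation algorithm discussed above).

```python
from itertools import permutations

def locality_number(word: str) -> int:
    """Minimum over all marking sequences (orders of distinct symbols) of
    the maximum number of contiguous marked blocks seen at any stage."""
    best = len(word)
    for order in permutations(set(word)):
        marked = [False] * len(word)
        worst = 0
        for sym in order:
            for i, c in enumerate(word):
                if c == sym:
                    marked[i] = True
            # count contiguous blocks of marked positions
            blocks = sum(1 for i, m in enumerate(marked)
                         if m and (i == 0 or not marked[i - 1]))
            worst = max(worst, blocks)
        best = min(best, worst)
    return best

print(locality_number("abab"))  # 2: either marking order yields <= 2 blocks
```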
A graph was then constructed from the retinal vascular network where the nodes are defined as the vessel branches and each edge gets associated to a cost that evaluates whether the two branches should have the same label. The CNN classification was propagated through the minimum spanning tree of the graph.
Lekadir et al.[224] used a patched-based four layer CNN for characterization of plaque composition in carotid ultrasound images. Experiments done by the authors showed that the model achieved better pixel-based accuracy than single-scale and multi-scale SVMs.
They applied the mean of a series of Gabor filters with varying frequencies and sigma values to the output of the network to determine whether a pixel represents a vessel or not. Besides finding that the optimal filters vary between channels, the authors also state the ‘need’ to enforce the networks to align with hum…
In [175] the authors used a CNN to learn the features, and a PCA-based nearest neighbor search was utilized to estimate the local structure distribution. Besides demonstrating good results, they argue that it is important for the CNN to incorporate information regarding the tree structure in terms of accuracy.
Amongst their experiments they found that rotational and scaling data augmentations did not help increase accuracy, attributing it to interpolation altering pixel intensities which is problematic due to the sensitivity of CNN to pixel distribution patterns.
D
While SimPLe is able to learn more quickly than model-free methods, it does have limitations. First, the final scores are on the whole lower than the best state-of-the-art model-free methods. This can be improved with better dynamics models and, while generally common with model-based RL algorithms, suggests an import...
Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a...
Oh et al. (2015) and Chiappa et al. (2017) show that learning predictive models of Atari 2600 environments is possible using appropriately chosen deep learning architectures. Impressively, in some cases the predictions maintain low $L_2$ error over timespans…
Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster...
In this paper our focus was to demonstrate the capability and generality of SimPLe only across a suite of Atari games, however, we believe similar methods can be applied to other environments and tasks which is one of our main directions for future work. As a long-term challenge, we believe that model-based reinforcem...
D
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of a S2I is whether it consists of trainable para...
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11,500 EEG signals.
A high level overview of these combined methods is shown in Fig. 1. Although we choose the EEG epileptic seizure recognition dataset from University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could be generalized in any kind of signal classification problem.
The architectures of all $b_d$ remained the same, except for the number of output nodes of the last linear layer, which was set to five to correspond to the number of classes of $D$. An example of the respective outputs of some of the $m$…
Figure 1: High-level overview of a feed-forward pass of the combined methods. $x_i$ is the input, $m$ is the Signal2Image module, $b_d$ is the 1D or 2D architecture ‘base …
B
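As a concrete example of a non-trainable Signal2Image module in the sense defined above, the following sketch maps a batch of 1D signals to one-channel 2D images via an STFT magnitude; the n_fft and hop_length values are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SpectrogramS2I(nn.Module):
    """Minimal non-trainable Signal2Image module: (batch, samples) 1D
    signals -> (batch, 1, freq, time) images for a 2D 'base model'."""
    def __init__(self, n_fft: int = 64, hop: int = 2):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop,
                          window=torch.hann_window(self.n_fft),
                          return_complex=True).abs()
        return spec.unsqueeze(1)  # add the channel dimension

s2i = SpectrogramS2I()
signal = torch.randn(8, 178)   # batch of EEG segments, 178 samples each
image = s2i(signal)
print(image.shape)             # torch.Size([8, 1, 33, 90])
```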
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ...
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or by an autonomous locomotion mode transition approach, where robots autonomously swap modes based on pre-set criteria [8]. However, the execution of supervised con…
In the realm of mobile robotics research, the motion control of terrestrial robots across varied terrains is a complex endeavor. To enhance locomotion efficacy and elevate mobility, hybrid robots have been actively developed in the past decade [1]. These robots astutely choose the most suitable locomotion mode from a s...
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ...
A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measure...
A
Our solution uses an algorithm introduced by Boyar et al. [12] which achieves a competitive ratio of 1.5 using $O(\log n)$ bits of advice. We refer to this algorithm as Reserve-Critical in this paper and describe it briefly. See Figure 2 for an illustration.
Intuitively, Rrc works similarly to Reserve-Critical except that it might not open as many critical bins as suggested by the advice. The algorithm is more “conservative” in the sense that it does not keep two thirds of many (critical) bins open for critical items that might never arrive. The smaller the value of $\alpha$…
Formally, on the arrival of a critical item, the algorithm places it in a critical bin, opening a new one if necessary. Each arriving tiny item $x$ is packed in the first critical bin which has enough space, with the restriction that the tiny items do not exceed a fraction 1/3 in these bins. If this fails, the…
bins include two items of weight 1/2 (except possibly the last one), which gives a total weight of 1 for the bin. Critical bins all include a critical item of weight 1. So, if $w_\ell$, $w_s$…
The algorithm classifies items according to their size. Tiny items have their size in the range $(0,1/3]$, small items in $(1/3,1/2]$, critical items in $(1/2,2/3]$, and large items in $(2/3,1]$. In addition, the algorithm…
D
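The size classification and the tiny-item rule translate directly from the description above; here is a partial sketch of just those stated rules (not a full implementation of Reserve-Critical or Rrc).

```python
def classify(size: float) -> str:
    """Item classes used by Reserve-Critical (sizes lie in (0, 1])."""
    if size <= 1/3:
        return "tiny"
    if size <= 1/2:
        return "small"
    if size <= 2/3:
        return "critical"
    return "large"

def place_tiny(bins: list[dict], x: float) -> bool:
    """Tiny items go to the first critical bin with room, but the tiny
    load of a critical bin may not exceed 1/3 (the stated restriction)."""
    for b in bins:
        if b["kind"] == "critical" and b["tiny"] + x <= 1/3:
            b["tiny"] += x
            return True
    return False  # fall through to the algorithm's other bin types

bins = [{"kind": "critical", "tiny": 0.2}, {"kind": "critical", "tiny": 0.0}]
print(classify(0.6), place_tiny(bins, 0.25), bins)  # critical True ...
```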
In Section 4 the proposed framework is compared to state-of-the-art methods used in a recent early depression detection task. Section 5 goes into details of the main contributions of our approach by analyzing quantitative and qualitative aspects of the proposed framework. Finally, Section 6 summarizes the main conclusi...
A scenario that is gaining increasing interest in the classification of sequential data is the one referred to as “early classification”, in which the problem is to classify the data stream as early as possible without a significant loss in terms of accuracy.
The analysis of sequential data is a very active research area that addresses problems where data is processed naturally as sequences or can be better modeled that way, such as sentiment analysis, machine translation, video analytics, speech recognition, and time series processing.
To put the previous points in context, it is important to note that ERD is essentially a problem of analysis of sequential data. That is, unlike traditional supervised learning problems where learning and classification are done on “complete” objects, here classification (or both) must be done on “partial” objects whic…
Note that this algorithm can be massively parallelized, since it naturally follows the Big Data programming model MapReduce [Dean & Ghemawat, 2008], giving the framework the capability of effectively processing very large volumes of data. Algorithm 2 shows the training process described earlier. Note that the line…
B
Due to the presence of compression error, naively compressing the communicated vectors in DSGD or DMSGD will damage convergence, especially when the compression ratio is high. The most representative technique designed to tackle this issue is error feedback (Stich et al., 2018; Karimireddy et al., 2019), also called…
In existing error feedback based sparse communication methods, most are for vanilla DSGD (Aji and Heafield, 2017; Alistarh et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al., 2019). There has appeared one error feedback based sparse communication method for DMSGD, called Deep Gradient Compression (...
The error feedback technique accumulates the compression error in an error residual on each worker and incorporates the error residual into the next update. Error feedback based sparse communication methods have been widely adopted by recent communication compression methods and have achieved better performance than quantizatio…
GMC combines error feedback and momentum to achieve sparse communication in distributed learning. But different from existing sparse communication methods like DGC which adopt local momentum, GMC adopts global momentum. To the best of our knowledge, this is the first work to introduce global momentum into sparse commun...
Sparsification methods, which are also called sparse communication methods, select only a few components of the vector for communicating with the server or the other workers. The most widely used sparsification compressor adopted in sparse communication methods is top-$s$, where each worker selects $s$…
B
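A minimal sketch of the generic error feedback mechanism with a top-$s$ compressor, as described above (a generic scheme for illustration, not GMC's global-momentum update rule).

```python
import numpy as np

def top_s(v: np.ndarray, s: int) -> np.ndarray:
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -s)[-s:]
    out[idx] = v[idx]
    return out

class ErrorFeedbackWorker:
    """The compression error is kept in a residual and re-added before
    the next compression, so no gradient mass is permanently lost."""
    def __init__(self, dim: int, s: int):
        self.residual = np.zeros(dim)
        self.s = s

    def compress(self, grad: np.ndarray) -> np.ndarray:
        corrected = grad + self.residual
        msg = top_s(corrected, self.s)   # what gets communicated
        self.residual = corrected - msg  # error fed back next round
        return msg

w = ErrorFeedbackWorker(dim=10, s=2)
print(w.compress(np.arange(10.0)))       # only the two largest entries sent
```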
We then defined SANs which have minimal structure and with the use of sparse activation functions learn to compress data without losing important information. Using Physionet datasets and MNIST we demonstrated that SANs are able to create high quality representations with interpretable kernels.
We then defined SANs which have minimal structure and with the use of sparse activation functions learn to compress data without losing important information. Using Physionet datasets and MNIST we demonstrated that SANs are able to create high quality representations with interpretable kernels.
During supervised learning the weights of the kernels are frozen and a one-layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs. The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as…
Applying dropout at the activations could correct weights that have overshot, especially when they are initialized with high values. However, the effect of dropout on SANs would generally be negative, since SANs have far fewer weights than DNNs and thus need less regularization.
From the point of view of Sparse Dictionary Learning, SANs kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input the kernels of SANs are ECG beats) and the sparse activation map as the representation. The fact that SANs are wide...
C
where $\kappa$ is the index which decides the influence of overlap. Since $\bar{D}_i$ must satisfy $\bar{D}_i>0$…
Coverage is another factor which determines the performance of each UAV. As presented in Fig. 1(c), the altitude of a UAV plays an important role in adjusting coverage: the higher the altitude, the larger the coverage area of the UAV. A large coverage area means a substantial opportunity to support more users, but a hi…
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR and coverage size are the key factors that decide the performance of a UAV network [6]. Respectively, power control determines the energy consumption and the signal-to-noise ratio (SNR)…
When there are numbers of UAVs in the network, it is possible for the coverage areas of different UAVs to overlap. When a UAV overlaps with another, they will not each support all users but will share the mission. The users in the overlaps will be served randomly with equal probability by each UAV. Fig. 2 presents the overlaps b…
In order to support as many users as possible, UAVs are required to enlarge their coverage, which is equivalent to enlarging the coverage proportion in the mission area. Higher altitude means larger coverage, as shown in Fig. 1(c). The utility of coverage size is denoted as
D
are fired at $t=t_{\mathrm{comp}}=45\,\mu$s, and the total compression current in the external coils rises over ${\sim}20\,\mu$s to its peak, for
is evaluated along the same chords, producing a reasonable match to the experimental data. For this shot (and simulation), with $V_{\mathrm{comp}}=12$ kV,
$V_{\mathrm{comp}}=12$ kV, of around 850 kA, so that the total combined levitation and compression current is around 1 MA at the time of peak compression,
and $\psi_{\mathrm{comp}}(\mathbf{r},t)$, which pertain to the peak levitation/compression currents, are scaled over time according to the experimentally measured
is shown in figure 20. For this shot (and simulation), $V_{\mathrm{comp}}=12$ kV and $t_{\mathrm{comp}}=45\,\mu$s…
B
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_A,x_A)=1_A$…
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_A,x_A)=1_A$…
Intuitively, if an abstract value $x_A$ of $\mathcal{L}_A$ is interpreted as $1$ (i.e., equality) by $h_A$…
$$f_A(u,v)=f_B(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\a&\text{if }u\neq\texttt{null},\ v\neq\texttt{null}\text{ and }u\neq v\\b&\text{if }u=v=\texttt{null}\\0&\text{otherwise.}\end{cases}$$
A
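The displayed case analysis for $f_A=f_B$ translates directly into code; the following sketch uses None as a stand-in for null and symbolic placeholders for the lattice values $a$ and $b$.

```python
NULL = None  # stand-in for the null value

def f(u, v, a="a", b="b"):
    """Comparability function from the case analysis above: 1 for equal
    non-null values, a for distinct non-null values, b for two nulls,
    0 otherwise (one null, one non-null)."""
    if u == v and u is not NULL:
        return 1
    if u is not NULL and v is not NULL and u != v:
        return a
    if u is NULL and v is NULL:
        return b
    return 0

print(f(3, 3), f(3, 4), f(NULL, NULL), f(3, NULL))  # 1 a b 0
```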
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to gauge the effect of Dropout on variance and on the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to gauge the effect of Dropout, because in such an environment the optim…
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein…
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CartPole problem from the Class…
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and wildly differing predictions on the learning trajectory across episodes b…
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to gauge the effect of Dropout on variance and on the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to gauge the effect of Dropout, because in such an environment the optim…
B
Creating large 2D and 3D publicly available medical benchmark datasets for semantic image segmentation such as the Medical Segmentation Decathlon (Simpson et al., 2019). Medical imaging datasets are typically much smaller in size than natural image datasets (Jin et al., 2020), and the curation of larger public dataset...
A possible solution to address the paucity of sufficient annotated medical data is the development and use of physics based imaging simulators, the outputs of which can be used to train segmentation models and augment existing segmentation datasets. Several platforms (Marion et al., 2011; Glatard et al., 2013) as well...
Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr...
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic...
Because of the large number of imaging modalities, the significant signal noise present in imaging modalities such as PET and ultrasound, and the limited amount of medical imaging data mainly because of high acquisition cost compounded by legal, ethical, and privacy issues, it is difficult to develop universal solutio...
A
From Fig. 9(b) we notice that the graphs $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ in GRACLUS have additional nodes that are disconnected. As discussed in Sect. V, these are …
From Fig. 9(b) we notice that the graphs $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ in GRACLUS have additional nodes that are disconnected. As discussed in Sect. V, these are …
Fig. 12 shows the result of the NDP coarsening procedure on the 6 types of graphs. The first column shows the subset of nodes of the original graph that are selected ($\mathcal{V}^{+}$, in red) and discarded ($\mathcal{V}^{-}$…
Fig. 9(c) shows that NMF produces graphs that are very dense, as a consequence of the multiplication with the dense soft-assignment matrix to construct the coarsened graph. Finally, Fig. 9(d) shows that NDP produces coarsened graphs that are sparse and preserve well the topology of the original graph.
Fig. 12 shows the result of the NDP coarsening procedure on the 6 types of graphs. The first column shows the subset of nodes of the original graph that are selected ($\mathcal{V}^{+}$, in red) and discarded ($\mathcal{V}^{-}$…
C
SVM: Support vector machine (Chang & Lin, 2011) is a popular classifier that tries to find the best hyperplane that maximizes the margin between the classes. As evaluated by Fernández-Delgado et al. (2014), the best performance is achieved with a radial basis function kernel.
In contrast to neural networks, random forests are very robust to overfitting due to their ensemble of multiple decision trees. Each decision tree is trained on randomly selected features and samples. Random forests have demonstrated remarkable performance in many domains (Fernández-Delgado et al., 2014).
Decision trees learn rules by splitting the data. The rules are easy to interpret and additionally provide an importance score of the features. Random forests (Breiman, 2001) are an ensemble method consisting of multiple decision trees, with each decision tree being trained using a random subset of samples and features...
RF: Random forest (Breiman, 2001) is an ensemble-based method consisting of multiple decision trees. Each decision tree is trained on a different randomly selected subset of features and samples. The classifier follows the same overall setup, i.e., 500 decision trees and a maximum depth of ten.
Random forests are trained with 500 decision trees, which is a common choice in practice (Fernández-Delgado et al., 2014; Olson et al., 2018). The decision trees are constructed up to a maximum depth of ten. For splitting, the Gini impurity is used and $\sqrt{N}$ features …
C
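The stated random-forest configuration (500 trees, depth at most ten, Gini impurity, $\sqrt{N}$ features per split) maps one-to-one onto scikit-learn; the dataset below is a synthetic stand-in for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hyperparameters exactly as stated in the text above.
rf = RandomForestClassifier(n_estimators=500,   # 500 decision trees
                            max_depth=10,       # maximum depth of ten
                            criterion="gini",   # Gini impurity for splits
                            max_features="sqrt",  # sqrt(N) features per split
                            random_state=0)

X, y = make_classification(n_samples=200, n_features=25, random_state=0)
rf.fit(X, y)
print(rf.score(X, y))
```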
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
Assuming the transition dynamics are known but only bandit feedback of the received rewards is available, the work of Neu et al. (2010a, b); Zimin and Neu (2013) establishes an $H^2\sqrt{|\mathcal{A}|T}/\beta$…
Our work is closely related to another line of work (Even-Dar et al., 2009; Yu et al., 2009; Neu et al., 2010a, b; Zimin and Neu, 2013; Neu et al., 2012; Rosenberg and Mansour, 2019a, b) on online MDPs with adversarially chosen reward functions, which mostly focuses on the tabular setting.
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
C
They formulate an expected loss with respect to the distribution over the stochastic binary gates. By incorporating an expected $\ell^0$-norm regularizer over the weights, the probability parameters associated with these gates are encouraged to be close …
To enable the use of the reparameterization trick, a continuous relaxation of the binary gates using a modified binary Gumbel-softmax distribution is used (Jang et al., 2017). They show that their approach can be used for structured sparsity by associating the stochastic gates to entire structures such as channels.
Wu et al. (2018a) performed mixed-precision quantization using similar NAS concepts to those used by Liu et al. (2019a) and Cai et al. (2019). They introduce gates for every layer that determine the number of bits used for quantization, and they perform continuous stochastic optimization of probability parameters assoc...
By injecting additive noise to the deterministic weights before rounding, one can compute probabilities of the weights being rounded to specific values in a predefined discrete set. Subsequently, these probabilities are used to differentiably round the weights using the Gumbel-softmax approximation (Jang et al., 2017).
They formulate an expected loss with respect to the distribution over the stochastic binary gates. By incorporating an expected $\ell^0$-norm regularizer over the weights, the probability parameters associated with these gates are encouraged to be close …
A
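A minimal sketch of stochastic binary gates relaxed with the Gumbel-softmax trick, in the spirit of the approach described above (an illustrative relaxation with an expected-$\ell^0$-style penalty, not the exact hard-concrete distribution of the cited work).

```python
import torch
import torch.nn.functional as F

# 8 gates, each with logits over the two states {off, on}; sampling is
# differentiable w.r.t. the logits via the straight-through estimator.
logits = torch.zeros(8, 2, requires_grad=True)

gates = F.gumbel_softmax(logits, tau=0.5, hard=True)[:, 1]  # values in {0, 1}
l0_penalty = torch.softmax(logits, dim=-1)[:, 1].sum()      # expected #on-gates

# In practice the sampled gates multiply weights/channels and the task
# loss is added; here we only show the differentiable sparsity penalty.
loss = 1e-2 * l0_penalty
loss.backward()
print(gates, logits.grad.shape)
```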
Take any embedding of $\mathbb{S}^1$ into $\mathbb{R}^4$ and let $\epsilon>0$ be small. Consider the boundary $C_\epsilon$ …
The reader familiar with concepts from applied algebraic topology will have noticed that the definition of the strong filling radius of an $n$-dimensional metric manifold coincides with (one half of) the maximal persistence of its associated Vietoris–Rips persistence module. In fact, for each nonnegative integer $k$…
Given a closed connected $n$-dimensional metric manifold $M$ and a field $\mathbb{F}$, we define the strong filling radius $\mathrm{sFillRad}(M;\mathbb{F})$ as half the length of the largest interval in the $n$-t…
In this section, we recall the notions of spread and filling radius, as well as their relationship. In particular, we prove a number of statements about the filling radius of a closed connected manifold. Moreover, we consider a generalization of the filling radius and also define a strong notion of filling radius whic...
By invoking the relationship between Vietoris–Rips persistent homology and the strong filling radius, one can verify that the strong filling radii of two $n$-dimensional metric manifolds $M$ and $N$ are close if these two manifolds are similar in the Gromov–Hausdorff distance sense.
D
Some attempts to enrich scatterplots with automatically-derived statistical descriptions of patterns [38, 39, 40] have shown that static mappings may be useful in simple scenarios, but the complex relations between low- and high-dimensional space in non-linear projections cannot be well represented.
Other than the ones discussed so far, some interactive tools have been designed with either specific DR methods in mind, such as SIRIUS [49], and FocusChanger [50], or for specific domains, such as Cytosplore [11]. t-SNE can also be used to explore and judge different clustering partitions of the same data set, as in ...
Cytosplore [11] is an example of tools that use t-SNE for visual data exploration within a specific domain: single-cell analysis with mass cytometry data. Apart from showing a t-SNE projection of the data, Cytosplore is also supported by a domain-specific clustering technique which serves as the base for the rest of th...
In such cases, interactive visual interfaces are necessary, as noted by Sacha et al. [15] in their survey on interaction techniques for DR. Interactive solutions for specific domains such as text [19, 20] and images [41, 7] use inherent characteristics of the data in order to explain layouts, however, they are not easi...
Labels   In order to better explain the contribution of t-viSNE, the data sets used in our use cases contain predefined labels, which is not the case in general when using unsupervised learning techniques, such as t-SNE. There is no restriction, however, to having labels when using t-viSNE; one might use the results of...
C
The combining method can be specific for the problem to be solved or instead, be conceived for a more general family of problems. In fact, combining methods are usually devised to be adaptable to many different solution representations. As mentioned before, the most popular algorithm in this category is GA [98]. Howeve...
The second and third most influential algorithms are GA, a very classic algorithm, and DE, a well-known algorithm whose natural inspiration resides only in the evolution of a population. Both have been used by around 5% of all reviewed nature-inspired algorithms, and they are the most representative approach in the Evo...
Another popular option of creating new solutions relies on stigmergy, namely, an indirect communication and coordination between the different solutions or agents used to create new solutions. This communication is usually done using an intermediate structure, with information obtained from the different solutions, us...
Differential Vector Movement, in which new solutions are produced by a shift or a mutation performed on a previous solution. The newly generated solution can compete against previous ones, or against other solutions in the population, to win a slot and remain therein in subsequent search iterations. This soluti…
Solution creation, in which new solutions are not generated by mutation/movement of a single reference solution, but instead by combining several solutions (so there is not only a single parent solution), or other similar mechanism. Two approaches can be utilized for creating new solutions. The first one is by combinat...
B
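The "Differential Vector Movement" category is easiest to see in DE's classic rand/1 rule, where a new solution is a shifted copy of one population member; below is a minimal sketch with an assumed scale factor $F$.

```python
import numpy as np

def de_candidate(pop: np.ndarray, i: int, F: float = 0.8,
                 rng=np.random.default_rng(0)) -> np.ndarray:
    """DE/rand/1 mutation: shift one population member by the scaled
    difference of two others; the result then competes with pop[i]
    for its slot in the population."""
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i],
                            size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

pop = np.random.default_rng(1).normal(size=(10, 3))  # 10 solutions in R^3
print(de_candidate(pop, i=0))
```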
In this paper, matrices and vectors are represented by uppercase and lowercase letters, respectively. A graph is represented as $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{W})$ and $|\cdot|$ is the size of a set. Vectors whose …
In recent years, GCNs have been studied extensively to extend neural networks to graph-structured data. How to design a graph convolution operator is a key issue and has attracted a great deal of attention. Most methods can be classified into 2 categories: spectral methods [24] and spatial methods [25].
However, the existing methods are limited to graph type data while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods. In this paper, we propo...
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which $k$-means cannot handle. Therefore,…
Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node while the latter learns to distinguish whether an edge exists between two nodes directly. In recent y...
A
Path Maximum Transmission Unit Discovery (PMTUD) determines the MTU size on the network path between two IP hosts. The process starts by setting the Don’t Fragment (DF) bit in IP headers. Any router along the path whose MTU is smaller than the packet will drop the packet, and send back an ICMP Fragmentation Needed / P...
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the...
Methodology. The core idea of the Path MTU Discovery (PMTUD) based tool is to send an ICMP Packet Too Big (PTB) message from a spoofed source IP address belonging to the tested network, and to insert into the 8-byte payload of the ICMP message the real IP address belonging to the prober. If the network does not enforce ingres…
Path Maximum Transmission Unit Discovery (PMTUD) determines the MTU size on the network path between two IP hosts. The process starts by setting the Don’t Fragment (DF) bit in IP headers. Any router along the path whose MTU is smaller than the packet will drop the packet, and send back an ICMP Fragmentation Needed / P...
Methodology. We send a DNS request to the tested network from a spoofed IP address belonging to the tested network. If the network does not enforce ingress filtering, the request will arrive at the DNS resolver on that network. A query from a spoofed source IP address will cause the response to be sent to the IP addres...
B
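A rough sketch of the IPID-sampling idea (assuming the scapy package, a reachable test server known to use a single global IPID counter, and permission to probe it; the host below is a placeholder from the documentation address range).

```python
import time
from scapy.all import IP, ICMP, sr1

def sample_ipid(host: str) -> int:
    # One probe; sr1 returns None on timeout, which a real tool must handle.
    resp = sr1(IP(dst=host) / ICMP(), timeout=2, verbose=False)
    return resp[IP].id

host = "192.0.2.1"                         # placeholder address (TEST-NET-1)
first = sample_ipid(host)
time.sleep(5)
second = sample_ipid(host)
rate = ((second - first) % 2**16) / 5.0    # global IPID wraps at 2**16
print(f"~{rate:.0f} packets/s arriving at the server")
```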
Two processing steps were applied to the data used by all models included in this paper. The first preprocessing step was to remove all samples taken for gas 6, toluene, because there were no toluene samples in batches 3, 4, and 5. Data was too incomplete for drawing meaningful conclusions. Also, with such data missin...
The first model in this domain [7] employed SVMs with one-vs-one comparisons between all classes. SVM classifiers project the data into a higher dimensional space using a kernel function and then find a linear separator in that space that gives the largest distance between the two classes compared while minimizing the ...
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data a…
This paper also presents the NN ensemble created in the same way as with SVMs. In the NN ensemble, $T-1$ skill networks are trained using one batch each for training. Each model is assigned a weight $\beta_i$ equal to its accuracy on…
While SVMs are standard machine learning, NNs have recently proven more powerful, so the first step is to use them on this task instead of SVMs. In the classification task, the networks are evaluated by the similarity between the odor class label (1–5) and the network's output class label prediction given the unlab…
A
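The accuracy-weighted ensemble described above (for both the SVM and NN variants) reduces to a weighted vote; a minimal sketch, with DummyClassifier standing in for the per-batch skill models and made-up weights $\beta_i$.

```python
import numpy as np
from sklearn.dummy import DummyClassifier

def ensemble_predict(models, weights, X):
    """Weighted vote: each per-batch model casts a vote scaled by its
    weight beta_i; the class with the largest total wins."""
    n_classes = 5                           # odor classes 1-5, coded 0-4
    votes = np.zeros((len(X), n_classes))
    for model, beta in zip(models, weights):
        preds = model.predict(X)            # array of class indices
        votes[np.arange(len(X)), preds] += beta
    return votes.argmax(axis=1)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(30, 4)), rng.integers(0, 5, 30)
models = [DummyClassifier(strategy="most_frequent").fit(X, y) for _ in range(3)]
print(ensemble_predict(models, weights=[0.9, 0.8, 0.7], X=X)[:5])
```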
The goal would be to obtain an algorithm with running time $2^{O(f(\delta)\sqrt{n})}$, where $f(n)=O(n^{1/6})$…
It would be interesting to see whether a direct proof can be given for this fundamental result. We note that the proof of Theorem 2.1 can easily be adapted to point sets in which the $x$-coordinates of the points need not be integer, as long as the difference between the $x$-coordinates of any two consecu…
We believe that our algorithm can serve as the basis of an algorithm solving such a problem, under the assumption that the point sets are dense enough to ensure that the solution will generally follow these curves / segments. Making this precise, and investigating how the running time depends on the number of line segm...
In the second step, we therefore describe a method to generate the random point set in a different way, and we show how to relate the expected running times in these two settings. In the third step, we will explain which changes are made to the algorithm.
First of all, the $\Delta_i$ are now independent. Second, as we will prove next, the expected running time of an algorithm on a uniformly distributed point set can be bounded by the expected running time of that algorithm on a point set generated this …
B
Note that it is not known whether the class of automaton semigroups is closed under taking the opposite semigroup [3, Question 13]. In defining automaton semigroups, we make a choice as to whether states act on strings on the right (as in this paper) or the left,
During the research and writing for this paper, the second author was previously affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment...
from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups (footnote: note that the c…
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then $S \star T$ is an automaton semigroup. For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups:
C
Table A4 shows VQA accuracy for each answer type on VQACPv2's test set. HINT/SCR and our regularizer show large gains on ‘Yes/No’ questions. We hypothesize that these methods help the model forget linguistic priors, which improves test accuracy on such questions. In the train set of VQACPv2, the answer ‘no’ is more frequent than t...
We test our regularization method on random subsets of varying sizes. Fig. A6 shows the results when we apply our loss to 1-100% of the training instances. Clearly, the ability to regularize the model does not vary much with respect to the size of the train subset, with the best performance o...
Our regularization method, which is a binary cross entropy loss between the model predictions and a zero vector, does not use additional cues or sensitivities and yet achieves near state-of-the-art performance on VQA-CPv2. We set the learning rate to $\frac{2 \times 10^{-6}}{r}$...
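As a concrete reference, a loss of this shape can be written in a few lines of PyTorch. This is a hedged sketch: it assumes the model outputs answer logits (the paper's exact placement of the sigmoid may differ), and the answer-vocabulary size below is only a typical VQA value.

```python
import torch
import torch.nn.functional as F

def zero_target_bce(logits):
    """Binary cross entropy between predictions and an all-zero target.

    Pushing every answer score toward zero penalizes confident answers
    produced from linguistic priors alone, acting as a regularizer.
    """
    target = torch.zeros_like(logits)
    return F.binary_cross_entropy_with_logits(logits, target)

# Example on a random batch of answer logits (3129 is a typical VQA vocab size).
loss = zero_target_bce(torch.randn(32, 3129))
```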
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible...
As shown in Table 1, we present results when this loss is used on: a) a fixed subset covering 1% of the dataset, b) a varying subset covering 1% of the dataset, where a new random subset is sampled every epoch, and c) 100% of the dataset. Confirming our hypothesis, all varian...
A
We crawled the 3.9 million selected URLs using Scrapy (https://scrapy.org/) for about 48 hours between the 4th and 10th of August 2019, for a few hours each day. 3.2 million URLs were successfully crawled, henceforth referred to as candidate privacy policies, while 0.4 million led to error pages and 0.3 million URLs w...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)...
In order to address the requirement of a language model for the privacy domain, we created PrivBERT. BERT is a contextualized word representation model that is pretrained using bidirectional transformers (Devlin et al., 2019). It was pretrained on the masked language modelling and the next sentence prediction tasks an...
The complete set of documents was classified into 97 languages plus an unknown-language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.
Language Detection. We focused on privacy policies written in the English language, to enable comparisons with prior corpora of privacy policies. To identify the natural language of each candidate document, we used the open-source Python package Langid (Lui and Baldwin, 2012). Langid is a Naive Bayes-based classifier ...
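For reference, Langid exposes a one-call classification interface, which is enough to reproduce the filtering step described here; the sample text is made up.

```python
import langid  # pip install langid

text = "We collect and process your personal information ..."
lang, score = langid.classify(text)  # returns (language code, score)
print(lang, score)                   # e.g. 'en' for English candidates
is_english = lang == "en"            # non-English candidates are set aside
```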
D
Another positive opinion from E3 was that, with a few adaptations to the performance metrics, StackGenVis could work with regression or even ranking problems. E3 also mentioned that supporting feature generation in the feature selection phase might be helpful. Finally, E1 suggested that the circular barcharts could onl...
As in the data space, each point of the projection is an instance of the data set. However, instead of its original features, the instances are characterized as high-dimensional vectors where each dimension represents the prediction of one model. Thus, since there are currently 174 models in S...
In numerous Kaggle competitions [20], stacking ensembles led to award-winning results. But when studying such ensembles, it is very hard to understand why specific instances, features, algorithms, and models were selected instead of others. Indeed, one of the major challenges in stacking is to select the best combinat...
Limitations. Efficiency and scalability were the major concerns raised by all the experts. The inherent computational burden of stacking multiple models still remains, as such complex ensemble learning methods need sufficient resources. Also, the use of VA in between levels makes this even worse. We believe that, with ...
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model led us to stable design goals and analytical tasks that were realized by StackGenVis. With the c...
C
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
cannot be adjacent to $\overline{2}$ nor $\overline{3}$, and so $f'$ is $[013]$ or $[010]$.
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
$(E^{\mathbf{C}}, (\overline{2}, (u_{2},[013])))$, $(E^{\mathbf{C}}, ((u_{1},[112]), (u_{2},[010])))$...
C
where $\mathcal{L}_{D_{i}^{train}}(\theta)$ and $\mathcal{L}_{D_{i}^{v\dots}}$...
Model-Agnostic Meta-Learning (MAML) [Finn et al., 2017] is one of the most popular meta-learning methods. It is trained on many tasks (i.e., small datasets) to obtain a parameter initialization that is easy to adapt to target tasks with a few samples. As a model-agnostic framework, MAML has been successfully employed in d...
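To make the inner/outer structure explicit, here is a minimal second-order MAML sketch in PyTorch (it assumes torch >= 2.0 for torch.func.functional_call); the task format and all names are our own, not from the MAML reference code.

```python
import torch
from torch.func import functional_call

def maml_meta_loss(model, tasks, inner_lr=0.01, inner_steps=1):
    """Second-order MAML meta-loss (illustrative sketch).

    tasks: iterable of (support_x, support_y, query_x, query_y) tensors.
    Backpropagating the returned loss and stepping an outer optimizer
    updates the shared parameter initialization.
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    params = dict(model.named_parameters())
    meta_loss = 0.0
    for sx, sy, qx, qy in tasks:
        fast = dict(params)  # task-specific fast weights, start at theta
        for _ in range(inner_steps):
            # Inner loop: adapt on the task's small support set.
            loss = loss_fn(functional_call(model, fast, (sx,)), sy)
            grads = torch.autograd.grad(loss, list(fast.values()),
                                        create_graph=True)
            fast = {name: p - inner_lr * g
                    for (name, p), g in zip(fast.items(), grads)}
        # Outer objective: adapted weights evaluated on the query set.
        meta_loss = meta_loss + loss_fn(functional_call(model, fast, (qx,)), qy)
    return meta_loss / max(len(tasks), 1)
```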
In Experiment I: Text Classification, we use FewRel [Han et al., 2018] and Amazon [He and McAuley, 2016]. They are datasets for 5-way 5-shot classification, which means 5 classes are randomly sampled from the full dataset for each task, and each class has 5 samples. FewRel is a relation classification dataset with 65/...
Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct a setting in which tasks are similar to each other. For a fair comparison, each task in this setting also has 120 and 1200 utterances o...
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personali...
B
$$\begin{aligned}
&\text{subject to} && \|\boldsymbol{f}_{k}\| = 1,\\
&&& \|\boldsymbol{w}_{k}\| = 1.
\end{aligned}$$
Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing a new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-base...
For both static and mobile mmWave networks, codebook design is of vital importance to enable feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include codebook-based beam trac...
Note that there exist some mobile mmWave beam tracking schemes exploiting the position or motion state information (MSI) based on conventional ULA/UPA. For example, beam tracking is achieved by directly predicting the AOD/AOA through improved Kalman filtering [26]; however, the work of [26] only targe...
In addition, the AOAs and AODs should be tracked in the highly dynamic UAV mmWave network. To this end, in Section IV we will further propose a novel predictive AOA/AOD tracking scheme in conjunction with tracking error treatment to address the high mobility challenge, then we integrate these operations into the codebo...
A
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and may also benefit from a reduction that allows one to restrict
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
After the merging, the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$. We perform the...
C
Deep reinforcement learning achieves phenomenal empirical successes, especially in challenging applications where an agent acts upon rich observations, e.g., images and texts. Examples include video gaming (Mnih et al., 2015), visuomotor manipulation (Levine et al., 2016), and language generation (He et al., 2015). Suc...
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Moreover, soft Q-learning is equivalent to a variant of policy gradient (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). Hence, Proposition 6.4 also characterizes the global optimality and convergence of such a variant of policy gradient.
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
A
Table 6 shows that though the BLEU improvements start saturating with deep depth-wise LSTM Transformers of more than 12 layers, depth-wise LSTM is able to ensure convergence of up to 24-layer Transformers. The experiments also show that the size differences between these datasets did not lead to differences...
Our approach with the Transformer base setting brings about more improvements on the English-German task than on the English-French task. We conjecture that this may be because the performance on the English-French task, which uses a large dataset (~36M sentence pairs), may rely more on the capacity of th...
We show that the 6-layer Transformer using depth-wise LSTM can bring significant improvements in both WMT tasks and the challenging OPUS-100 multilingual NMT task. We show that depth-wise LSTM also has the ability to support deep Transformers with up to 24 layers, and that the 12-layer Transformer using depth-wis...
Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de...
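The following PyTorch sketch shows one plausible reading of the depth-wise LSTM idea, with an LSTM cell stepping over the depth dimension in place of residual connections; it illustrates the mechanism only and is not the authors' implementation.

```python
import torch
import torch.nn as nn

class DepthWiseLSTMStack(nn.Module):
    """Sketch: connect Transformer layers through an LSTM over depth.

    Each layer's output is fed to one LSTMCell step, and the cell's hidden
    state becomes the next layer's input (replacing residual connections).
    """
    def __init__(self, d_model, n_layers, n_heads=8):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers))
        self.cell = nn.LSTMCell(d_model, d_model)

    def forward(self, x):                       # x: (batch, seq, d_model)
        b, s, d = x.shape
        h = x.reshape(b * s, d)                 # one LSTM state per position
        c = torch.zeros_like(h)
        for layer in self.layers:
            out = layer(h.reshape(b, s, d))     # one Transformer layer
            h, c = self.cell(out.reshape(b * s, d), (h, c))
        return h.reshape(b, s, d)
```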
Notably, on the En-De task, the 12-layer Transformer with depth-wise LSTM already outperforms the 24-layer vanilla Transformer, suggesting efficient use of layer parameters. On the Cs-En task, the 12-layer model with depth-wise LSTM performs on a par with the 24-layer baseline. Unlike in the En-De task, increasing dep...
D
$\tau_{\subseteq_i} \cap \llbracket \mathsf{FO}[\sigma] \rrbracket_{\operatorname{Struct}(\sigma)} \subseteq \mathcal{K}^{\circ}(\operatorname{Struct}(\sigma))$.
Consider a monotone sentence $\varphi \in \mathsf{FO}[\sigma]_{\operatorname{Struct}(\sigma)}$. Let $(U_i)_{i \in I}$...
Because $\llbracket \varphi \rrbracket_{\operatorname{Struct}(\sigma)}$ is an open set of $\operatorname{Struct}(\sigma)$ for $\tau_{\mathsf{FO}}$...
$\llbracket \mathsf{FO}[\sigma] \rrbracket_{\operatorname{Struct}(\sigma)}$ and $\llbracket \mathsf{EFO}[\sigma] \rrbracket_{\operatorname{Struct}(\sigma)}$ ...
$\tau_{\subseteq_i} \cap \llbracket \mathsf{FO}[\sigma] \rrbracket_{\operatorname{Struct}(\sigma)}$
A
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimate...
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimate...
Distortion Learning Evaluation: Then, we introduce three key elements for evaluating the learning representation: training data, convergence, and error. Supposing that settings such as the network architecture and optimizer are the same, a better learning representation can be identified by the less training da...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we use only 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
To exhibit the performance fairly, we employ three common network architectures, VGG16, ResNet50, and InceptionV3, as the backbone networks of the learning model. The proposed MDLD metric is used to express the distortion estimation error due to its unique and fair measurement of the distortion distribution. To be specific...
B
The momentum coefficient is set as 0.9 and the weight decay is set as 0.001. The initial learning rate is selected from $\{0.001, 0.01, 0.1\}$ according to the performance on the validation set. We do not adopt any learning rate decay or warm-up strategies. The model is tra...
Hence, with the same number of gradient computations, SNGM can adopt a larger batch size than MSGD to converge to the $\epsilon$-stationary point. Empirical results on deep learning further verify that SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods...
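A plausible minimal form of such a normalized-gradient-with-momentum update is sketched below as a PyTorch optimizer; this is our reading of the update rule, not the authors' reference code, and the hyperparameter values are illustrative.

```python
import torch

class SNGMLike(torch.optim.Optimizer):
    """Sketch of a normalized-gradient-with-momentum update rule."""

    def __init__(self, params, lr=0.1, beta=0.9):
        super().__init__(params, dict(lr=lr, beta=beta))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            grads = [p.grad for p in group["params"] if p.grad is not None]
            # Global gradient norm: normalizing by it makes the step size
            # insensitive to the gradient scale, the property invoked above
            # to justify large batch sizes.
            norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
            for p in group["params"]:
                if p.grad is None:
                    continue
                buf = self.state[p].setdefault("momentum",
                                               torch.zeros_like(p))
                buf.mul_(group["beta"]).add_(p.grad / norm)
                p.add_(buf, alpha=-group["lr"])
```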
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
showed that existing SGD methods with a large batch size will lead to a drop in the generalization accuracy of deep learning models. Figure 1 shows a comparison of training loss and test accuracy between MSGD with a small batch size and MSGD with a large batch size. We can find that large-batch training indeed
A
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions. To continue this example, there may be further constraints on $F_{I}$...
There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality $|\mathcal{S}|$. Our polynomial-scenarios algorithms are carefully designed to make $|\mathcal{S}|$...
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ...
The most general way to represent the scenario distribution $\mathcal{D}$ is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios $A$ according to $\mathcal{D}$. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the ...
Our main goal is to develop algorithms for the black-box setting. As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms ...
D
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than as i.i.d. graph sequences as in [12]-[15], and additive and...
Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic convex optimization by networked local optimizers that cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
I. The local cost functions in this paper are not required to be differentiable, and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between the local optimizers’ states and the global optimal solution inevitably appears in the recursive inequality of the conditi...
We have studied the distributed stochastic subgradient algorithm for the stochastic optimization by networked nodes to cooperatively minimize a sum of convex cost functions. We have proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditio...
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independence with identical distribution, Markovian...
A
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and minimum QI values in the equivalence groups, so that the equivalence groups only preserv...
Although the generalization for $k$-anonymity provides enough protection for identities, it is vulnerable to attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by mat...
Observing from Figure 7(a), the information loss of MuCo increases with the decrease of the parameter $\delta$. According to Corollary 3.2, each QI value in the released table corresponds to more records with the reduction of $\delta$, so that more records have to be involved for covering on the QI ...
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and minimum QI values in the equivalence groups, so that the equivalence groups only preserv...
Moreover, the level of protection for identities may be forced to increase to meet the condition of $l$-diversity. For example, the generalized table in Figure 1(c) must comply with at least 5-anonymity to satisfy 5-diversity, even if the demand for protecting identities is not that high. ...
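The $l$-diversity condition referenced here is easy to state as a check over equivalence groups; the toy table below is hypothetical and only illustrates the definition.

```python
import pandas as pd

def satisfies_l_diversity(df, qi_cols, sensitive_col, l):
    """l-diversity: every equivalence group (records sharing the same
    generalized QI values) must contain at least l distinct sensitive values."""
    group_diversity = df.groupby(qi_cols)[sensitive_col].nunique()
    return bool((group_diversity >= l).all())

table = pd.DataFrame({
    "age":     ["2*", "2*", "3*", "3*"],
    "zip":     ["130**", "130**", "148**", "148**"],
    "disease": ["flu", "pneumonia", "pneumonia", "pneumonia"],
})
# The second group has only one distinct disease, enabling attribute disclosure.
print(satisfies_l_diversity(table, ["age", "zip"], "disease", l=2))  # False
```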
D
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. A mask scoring head (Huang et al., 2019) adopted on the third stage gains an...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
Table 2: PointRend's step-by-step performance on our own validation set (split from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to the default P2-P5 levels of FPN for both the coarse prediction head and the fine-grained point head. “...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
D
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma s...
We denote by $\varepsilon_{i} : \{-1,1\}^{n} \to \{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_{i}(\delta_{1},\dots,\delta_{n}) = \delta_{i}$...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
A
Figure 1: Comparisons of different methods on cumulative reward under two different environments. The results are averaged over 10 trials and the error bars show the standard deviations. The environment changes abruptly in the left subfigure, whereas the environment changes gradually in the right subfigure.
For the case when the environment changes abruptly $L$ times, our algorithm enjoys an $\tilde{O}(L^{1/3}T^{2/3})$ dy...
From Figure 1, we find that the restart strategy works better under abrupt changes than under gradual changes, since the gap between our algorithms and the baseline algorithms designed for stationary environments is larger in this setting. The reason is that the algorithms designed to explore in stationary MDPs are gen...
From Figure 1, we see that LSVI-UCB-Restart with the knowledge of global variation drastically outperforms all other methods designed for stationary environments, in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the $Q$ function with knowledge of the total variatio...
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same, and much lower than those of MASTER, OPT-WLSVI, LSVI-UCB, and Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th...
B
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,...
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a...
B
Out-of-KG entity prediction methods, such as MEAN [19], VN Network [20], and LAN [21], leverage logic rules to infer the missing relationships but do not generate unconditioned entity embeddings for other tasks. These methods share a similar task setting with ours, where all relations are known during training. The new...
We present the training procedure of decentRL for entity alignment in Algorithm 1. It is worth noting that decentRL does not rely on additional data such as pretrained KG embeddings or word embeddings. The algorithm first randomly initializes the DAN model, entity embeddings, and relation embeddings. The training proc...
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attem...
Moreover, DAN introduces a distinctive attention mechanism that employs the neighbors of the central entity to evaluate the neighbors themselves. This collective voting mechanism helps mitigate bias and contributes to improved performance, even on traditional tasks. It also distinguishes DAN from other existing inducti...
Unlike many inductive methods that are solely evaluated on datasets with unseen entities, our method aims to produce high-quality embeddings for both seen and unseen entities across various downstream tasks. To our knowledge, decentRL is the first method capable of generating high-quality embeddings for different down...
B
One reason to perform self-supervised exploration is to adapt the trained explorative agent in similar environments for exploration. To evaluate such adaptability, we conduct experiments on Super Mario. Super Mario has several levels of different scenarios. We take 5 screenshots at each level when playing games, as...
We observe that our method performs the best in most of the games, in both the sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are ...
We illustrate the results in Fig. 9. We observe that the episode length becomes longer over training time with the intrinsic reward estimated from VDM, as anticipated. We observe that our method reaches the episode length of $10^{4}$ with the minimum iterati...
One reason to perform self-supervised exploration is to adapt the trained explorative agent in similar environments for exploration. To evaluate such adaptability, we conduct experiments on Super Mario. Super Mario has several levels of different scenarios. We take 5 screenshots at each level when playing games, as...
To evaluate the adaptability, we further adapt the policies learned from Level 1 to other levels. More specifically, for each method, we first save the last policy when training in Level 1, and then fine-tune such a policy in Levels 2 and 3. Since the VDM and RFM methods perform the best in the ...
D
However, even if $P$ is unisolvent, as is well known and shown in our previous work [51], the inversion of the matrix $V$ becomes numerically ill-conditioned when represented in the canonical basis $q_{\alpha}(x) = x^{\alpha}$...
where the Chebyshev extremes $\mathrm{Cheb}_{n}^{0}$ defined in Eq. (7.1) are Leja ordered [61]. Since these $P_{A}$ for...
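For concreteness, the node construction can be sketched in a few lines of numpy under the usual definitions of Chebyshev extremes and greedy Leja ordering; this is illustrative only and does not reproduce Eq. (7.1) or the exact ordering of [61].

```python
import numpy as np

def cheb_extremes(n):
    """Chebyshev extreme points cos(k*pi/n), k = 0..n, on [-1, 1]."""
    return np.cos(np.arange(n + 1) * np.pi / n)

def leja_order(pts):
    """Greedy Leja ordering: each next point maximizes the product of
    distances to the points already chosen, which improves the numerical
    conditioning of Newton-type interpolation."""
    pts = np.asarray(pts, dtype=float)
    order = [int(np.argmax(np.abs(pts)))]
    remaining = set(range(len(pts))) - set(order)
    while remaining:
        idx = max(remaining,
                  key=lambda j: np.prod(np.abs(pts[j] - pts[order])))
        order.append(idx)
        remaining.remove(idx)
    return pts[order]

print(leja_order(cheb_extremes(4)))
```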
Though approximations of lower accuracy might be reached faster than by polynomial interpolation, this makes these approaches incapable of answering Question 1 when higher-precision approximations are required. The multivariate polynomial interpolation method presented here reaches this goal.
Therefore, alternative interpolation schemes with better numerical condition and lower computational complexity are desirable. While previous approaches to addressing this problem relied on tensorial interpolation schemes [33, 48, 59, 75], we here propose a different approach.
This allowed us to extend the classic 1D Newton and Lagrange interpolation methods to multivariate schemes in a numerically stable and efficient way, resulting in a practically implemented algorithm with $\mathcal{O}(|A|^{2})$...
C
It is shown in [39] that its empirical estimate decays to zero at rate $O(n^{-1/2})$ under mild conditions, and a two-sample test can be constructed based on this nice statistical behavior. However, it is costly to comput...
While the Wasserstein distance has wide applications in machine learning, the finite-sample convergence rate of the Wasserstein distance between empirical distributions is slow in high-dimensional settings. We propose the projected Wasserstein distance to address this issue.
Some projection-based variants of the Wasserstein distance are also discussed to address the computational complexity issue, including the sliced [37] and the max-sliced [38] Wasserstein distances. Sliced Wasserstein distance is based on the average Wasserstein distance between two projected distributions along infinit...
Recently, [32, 33, 34] naturally extend this idea by projecting data points into a $k$-dimensional linear subspace with $k>1$ such that the 2-Wasserstein distance after projection is maximized. Our proposed projected Wasserstein distance is similar to this framework, but we use 1-Wasserst...
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized. The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by ...
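A small numpy/scipy sketch of the sliced and (crudely approximated) max-sliced 1-Wasserstein distances follows; real max-sliced implementations optimize the projection direction rather than sampling it, so this only illustrates the definitions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_w1(X, Y, n_proj=100, seed=0):
    """Monte-Carlo sliced 1-Wasserstein distance between sample sets X, Y.

    Each random unit direction gives a 1-D projection, where the
    Wasserstein distance reduces to a cheap sorted-sample computation.
    """
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)
        vals.append(wasserstein_distance(X @ theta, Y @ theta))
    return float(np.mean(vals))

def max_sliced_w1(X, Y, n_proj=1000, seed=0):
    """Crude max-sliced variant: take the max over random candidate directions."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for v in rng.normal(size=(n_proj, X.shape[1])):
        v = v / np.linalg.norm(v)
        best = max(best, wasserstein_distance(X @ v, Y @ v))
    return best
```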
D
The framework is general and can utilize any DGM. Furthermore, even though it involves two stages, the end result is a single model which does not rely on any auxiliary models, additional hyper-parameters, or hand-crafted loss functions, as opposed to previous works addressing the problem (see the related-work section...
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (footnote: In this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervise...
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e., the disentangled factors) and corre...
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
B
We examine the inputs through 18 test cases to see whether the circuit is acceptable. Next, DFS verifies that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
We examine the inputs through 18 test cases to see whether the circuit is acceptable. Next, DFS verifies that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
DFS (Depth First Search) verifies that the output is possible for the actual Pin connection state. As described above, the output is determined by the 3-pin input, so we will enter 1 with the A2 and A1 connections, the B2 and B1 connections (the reverse is treated as 0), and the corresponding output will be recognized...
The structure-based computers mentioned in this paper are based on Boolean algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic as 1 and 0, and mathematically describes digital electrical si...
Exploration based on previous experiments and graph theory found errors in structural computers that use electricity as a medium. The cause of these errors is a basic property of electric charge: it flows from high potential to low. In short, the direction of current, which is the flow of electricity, is determined only...
D
$F^{(k+N)}(x) + \alpha_{N-1}F^{(k+N-1)}(x) + \dots + \alpha_{1}F^{(k+1)}(x) + \alpha_{0}F^{(k)}(x) = 0$
where the $\alpha_{i}$ are defined as in equation (7). This reduced Koopman operator $K_{f}$ was used to develop a piece of computational machinery to analyze the dynamic evolution of the ...
In essence, this notion of linear complexity of a function can be used as a characterization of the computational effort involved in computations on $F(x)$, such as computing the cycle structure of the map, computing its compositional inverse, etc.
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available on the invertibility of polynomials and the construction of inverses of permutation polynomials over $\mathbb{F}$...
The work [19] also provides a computational framework to compute the cycle structure of the permutation polynomial $f$ by constructing a matrix $A(f)$, of dimension $q \times q$, through the coefficients of the (algebraic) powers of $f^{k}$...
B
We use the same software as described in Section 4.2. All cross-validation loops used for parameter tuning are nested within the outer loop used for evaluating classification performance. We again use the recommendations of Hofner et al. (2015) for choosing the parameters, by specifying $q$ and a ...
The nonnegative lasso, utilizing only an $L_{1}$ penalty, produced even sparser models than the elastic net. Interestingly, in our simulations this increased sparsity did not appear to have a substantial negative effect on accuracy, although some minor reduct...
The results of applying MVS with the seven different meta-learners to the colitis data can be observed in Table 2. In terms of raw test accuracy the nonnegative lasso is the best performing meta-learner, followed by the nonnegative elastic net and the nonnegative adaptive lasso. In terms of AUC and H, the best performi...
The results for the breast cancer data can be observed in Table 3. The interpolating predictor and the lasso are the best performing meta-learners in terms of all three classification measures, with the interpolating predictor having higher test accuracy and H, and the lasso having higher AUC. However, the interpolatin...
In this article we investigate how the choice of meta-learner affects the view selection and classification performance of MVS. We compare the following meta-learners: (1) the interpolating predictor of Breiman (1996), (2) nonnegative ridge regression (Hoerl & Kennard, 1970; Le Cessie & Van Hou...
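As an illustrative stand-in for the nonnegative meta-learners (not the software actually used in this article), scikit-learn's Lasso and ElasticNet accept a positive=True flag that constrains all coefficients to be nonnegative:

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # stand-in for view-specific predictions
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

# positive=True restricts all coefficients to be >= 0, giving
# nonnegative lasso / elastic net meta-learners of this general shape.
nn_lasso = Lasso(alpha=0.05, positive=True).fit(X, y)
nn_enet = ElasticNet(alpha=0.05, l1_ratio=0.5, positive=True).fit(X, y)
print(nn_lasso.coef_)                   # sparse and componentwise nonnegative
```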
B
Table 6 presents the reduction rates achieved by each of the five techniques. The reduction rate is computed as 1 minus the ratio of the number of relevant variables selected to the total number of variables in a dataset. The results reveal substantial variations in reduction rates among the different techniques for t...
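The reduction-rate arithmetic is a one-liner; the numbers in the example below are made up for illustration.

```python
def reduction_rate(n_selected: int, n_total: int) -> float:
    """1 - (#relevant variables selected / #variables in the dataset)."""
    return 1.0 - n_selected / n_total

# Hypothetical example: a technique keeping 12 of 200 variables.
print(reduction_rate(12, 200))  # 0.94, i.e. 94% of the variables are discarded
```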
Compared to other methods, IEPC exhibits a notably lower reduction rate, which, we believe, contributes to its unstable performance. The experimental results in Figure 3 indicate that when considering only linear prediction models, IEPC performs better with regularization techniques such as LASSO and Ridge, as opposed ...
Conversely, algorithms that contain combinations of DC with Lasso or Ridge demonstrate the worst performance. Additionally, algorithms using MI as the relevant variable selection technique generally show inferior results, regardless of the techniques used in the other two phases. When using IEPC for relevant variable s...
Table 6 presents the reduction rates achieved by each of the five techniques. The reduction rate is computed as 1 minus the ratio of the number of relevant variables selected to the total number of variables in a dataset. The results reveal substantial variations in reduction rates among the different techniques for t...
Conversely, the results from the combination of IEPC with Ridge exhibit much lower performance compared to other combinations. Methods using Max as the scoring technique yield the worst results. Additionally, the results with MI as the relevant variable selection technique are generally inferior, regardless of the tech...
A
$\|\theta - \theta_{*}\|_{\mathbf{H}_{t}(\theta_{*})} = \tilde{O}(\sqrt{d \log(t)})$...
Comparison with Oh & Iyengar [2019]: The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence-set-based analysis along the lines of Filippi et al. [2010] but has a m...
In this work, we proposed an optimistic algorithm for learning under the MNL contextual bandit framework. Using techniques from Faury et al. [2020], we developed an improved technical analysis to deal with the non-linear nature of the MNL reward function. As a result, the leading term in our regret bound does not suffe...
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
In this section we compare the empirical performance of our proposed algorithm CB-MNL with the previous state of the art in the MNL contextual bandit literature: UCB-MNL [Oh & Iyengar, 2021] and TS-MNL [Oh & Iyengar, 2019], on artificial data. We focus on performance comparison for varying values of the parameter $\kappa$...
D
Another perspective to address the scale issue, especially for the small scale, is data augmentation, e.g., mosaic augmentation in YOLOv4 [5], which pieces together four images into one large image and crops a center area for training. It helps the model learn to not overemphasize the activations for large objects so as ...
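A simplified numpy sketch of the four-image stitching idea follows; box-label remapping and YOLOv4's exact cropping rules are omitted, and the output size is an arbitrary choice.

```python
import numpy as np

def mosaic4(imgs, out_hw=(416, 416), rng=None):
    """Stitch four images into one mosaic and crop a center region.

    Sketch of the YOLOv4-style idea only. imgs: four HxWx3 uint8 arrays.
    """
    rng = rng or np.random.default_rng()
    H, W = out_hw
    canvas = np.zeros((2 * H, 2 * W, 3), dtype=np.uint8)
    for k, img in enumerate(imgs[:4]):
        h, w = min(img.shape[0], H), min(img.shape[1], W)
        r, c = (k // 2) * H, (k % 2) * W    # top-left corner of quadrant k
        canvas[r:r + h, c:c + w] = img[:h, :w]
    # Center-ish crop so stitched objects land at varied positions and scales.
    top = rng.integers(H // 2, H + 1)
    left = rng.integers(W // 2, W + 1)
    return canvas[top:top + H, left:left + W]
```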
Then how can we attack the small-scale problem of short actions? A possible solution is to temporally up-scale videos to obtain more frames to represent an action. Recent literature shows the practice of re-scaling videos via linear interpolation before feeding into a network [3, 20, 21, 44, 48], but these methods actu...
The video self-stitching (VSS) component transforms a video into multi-scale input for the network. As illustrated in Fig. 3, it takes a video sequence, extracts snippet-level features, cuts into multiple short clips if it is long, up-scales each short clip along the temporal dimension, and stitches together each pair ...
Recent temporal action localization methods can be generally classified into two categories based on the way they deal with the input sequence. In the first category, the works such as BSN [21], BMN [20], G-TAD [44], BC-GNN [3] re-scale each video to a fixed temporal length (usually a small length such as 100 snippets...
Why are short actions hard to localize? Short actions have small temporal scales with fewer frames, and therefore, their information is prone to loss or distortion throughout a deep neural network. Most methods in the literature process videos regardless of action duration, which as a consequence sacrifices the perfor...
C
Tuning the Evolutionary Optimization Process. After $S_{1}$'s default execution with 100 models for each algorithm (50 due to crossover and 50 because of mutation), we continue with setting the next batch of crossover and mutation processes. We received usefu...
Thus, we choose to reduce the production from 25 models to 10 for both the LR and GradB algorithms. Similarly to the previous paragraph, we select well-performing and diverse models as shown in Figure 6(b), and the unselected models are used in the crossover and mutation method based on the previously adjusted parameter...
Figure 3: Tuning the crossover and mutation process toward $S_{2}$. In (a), we set fewer models for mutation and more for crossover for both KNN and MLP algorithms. Our choice is motivated by the feedback received from the bad KNN mutation in $S_{1}$...
In the Sankey diagram (see Figure 3(a)), the user tracks the progress of the evolutionary process and is able to limit the number of models that will be generated through crossover and mutation for each algorithm (Step 4 in Figure 1). The default here is defined as user-selected random search value / 2 for each algo...
From Figure 4(a), right, we see that only a few KNN, LR, and MLP models were better than those from the previous stages. Thus, we conclude that there is no further improvement, and it is hard to find better hyperparameter tuples. We skip the addition of models from $S_{2}$...
A
Another algorithm is proposed in [28] that assumes the underlying switching network topology is ultimately connected. This assumption means that the union of graphs over an infinite interval is strongly connected. In [29], previous works are extended to solve the consensus problem on networks under limited and unreliab...
We then present a decentralized Markov-chain synthesis (DSMC) algorithm based on the proposed consensus protocol and we prove that the resulting DSMC algorithm satisfies these mild conditions. This result is employed to prove that the resulting Markov chain has a desired steady-state distribution and that all initial d...
Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transi...
Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen...
we propose the decentralized state-dependent Markov chain synthesis (DSMC) algorithm that achieves convergence to the desired distribution with an exponential rate and minimal state transitions. Additionally, we present a shortest path algorithm that can be integrated with the DSMC algorithm, as utilized in [7, 14, 15]...
A
$\mathbf{e}(x_{i}) = \dfrac{\mathrm{dist}_{geo}(x_{j}, x_{j}^{*})}{\mathrm{diam}(\mathcal{X}_{j})},$
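Given a precomputed geodesic distance matrix, this error measure is a one-line lookup; the variable names below are ours.

```python
import numpy as np

def geodesic_errors(D_geo, matched_idx, gt_idx):
    """Normalized geodesic matching error (sketch of the metric above).

    D_geo:       (n, n) pairwise geodesic distance matrix of the shape.
    matched_idx: indices x_j predicted by the matching.
    gt_idx:      ground-truth indices x_j*.
    """
    diam = D_geo.max()                   # shape diameter, the normalizer
    return D_geo[np.asarray(matched_idx), np.asarray(gt_idx)] / diam
```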
We compare our method against several recent state-of-the-art methods, including the pairwise matching approach ZoomOut [47], the two-stage approach ZoomOut+Sync that performs synchronisation to achieve cycle consistency in the results produced by ZoomOut, as well as the multi-matching methods HiPPI [9] and ConsistentZ...
In contrast, HiPPI and our method require shape-to-universe representations. To obtain these, we use synchronisation to extract the shape-to-universe representation from the pairwise transformations. By doing so, we obtain the initial $U$ and $Q$. We refer to this method of synchronising the ZoomOut r...
Our method shows state-of-the-art results on this dataset, see Fig. 2 and Tab. 2. While the PCK curves between ours, ZoomOut+Sync and HiPPI in Fig. 2 are close, the AUC in Tab. 2 shows that our performance is still superior by a small margin. Qualitative results can be found in the supplementary material.
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both ...
A
Path graphs and directed path graphs are classes of graphs between interval graphs and chordal graphs. A graph is a chordal graph if it does not contain a hole as an induced subgraph, where a hole is a chordless cycle of length at least four. Gavril [8] proves that a graph is chordal if and only if it is the intersect...
We now introduce a last class of intersection graphs. A rooted path graph is the intersection graph of directed paths in a rooted tree. Rooted path graphs can be recognized in linear time by using the algorithm by Dietz [7]. All inclusions between introduced classes of graphs are resumed in the following:
Path graphs and directed path graphs are classes of graphs between interval graphs and chordal graphs. A graph is a chordal graph if it does not contain a hole as an induced subgraph, where a hole is a chordless cycle of length at least four. Gavril [8] proves that a graph is chordal if and only if it is the intersect...
We denote by $G=(V,E)$ a finite connected undirected graph, where $V$, $|V|=n$, is a set of vertices and $E$, $|E|=m$, is a collection of pairs of vertices called edges. Let $P$ be a fin...
A graph is an interval graph if it is the intersection graph of a family of intervals on the real line; or, equivalently, the intersection graph of a family of subpaths of a path. Interval graphs are characterized by Lekkerkerker and Boland [15] as chordal graphs with no asteroidal triples, where an asteroidal triple i...
D
In experiments 1(c) and 1(d), we study how the connectivity (i.e., $\rho$, the off-diagonal entries of $P$) across communities under different settings affects the performances of these methods. Fix $(x, n_{0}) = (0.4, 100)$...
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE, while both methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha...
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly, and both approaches perform better than OCCAM and GeoNMF under the MMSB setting....
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as $\rho$ increases they all perform worse. Under the DCMM model, the mixed Hamming ...
The numerical results are given in the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances, and they perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
C
Furthermore, we remark that our statistical rate is with respect to the RKHS norm, which is more challenging to handle than $\|f_{n,\lambda} - f^{*}\|_{\mathcal{L}_{\mathbb{P}}^{2}}$.
In the sequel, we upper bound the statistical error $f_{n,\lambda} - f^{*}$ in terms of the RKHS norm, which is further used to obtain an upper bound of $\varepsilon_{k}$...
Under the regularity condition that the kernel satisfies Assumption 4.4, the RKHS norm statistical rate further implies an upper bound on the estimation error of the gradient function $\nabla f^{*}$.
Whereas $H_{0}$ serves as an upper bound on the Lipschitz constant of $\nabla\widetilde{f}_{k}^{*}$, which...
Assumption 4.4 characterizes the regularity of $\mathcal{H}$. Specifically, it postulates that the kernel $K$ and its derivatives are upper bounded, which is satisfied by many popular kernels, including the Gaussian RBF kernel and the Sobolev kernel (Smale and Zhou, 2003; Rosasco et al., 2013; Yan...
B
The comparative results evaluated in the meta-test mode are shown in Tab. V. Here "original" means the model is trained on the current testing scenario, and "transfer" means the model is trained on the road map of Hangzhou. From the results, we obtain the following findings:
1) Colight needs full state information in both training and testing; hence it cannot be used for a new scenario that contains a different number of intersections than the training scenario. That is, heterogeneous scenarios cause heterogeneous inputs to the policy network, which makes the network fail to...
The most straightforward RL baseline considers each intersection independently and models the task as a single-agent RL problem [12]. However, the observation, received reward, and dynamics of each traffic signal are closely related to those of its neighbors, and the coordination between signals should be modeled. Hence, optimiz...
Before formulating the problem, we first design the learning paradigm by analyzing the characteristics of traffic signal control (TSC). Due to the coordination among different signals, the most direct paradigm may be centralized learning. However, the global state information in TSC is not only highly redundant a...
4) The neighbors' information is modeled in CoLight and it performs well. This indicates that modeling neighbors is critical for coordination. The results of MetaVIM are superior to CoLight on each scenario and configuration, resulting in a mean improvement of 43. Compared to CoLight, MetaVIM proposes an intrinsic reward to help th...
A
represents its Jacobian (with respect to both $\mathbf{x}$ and $\mathbf{y}$) while $\mathbf{f}_{\mathbf{x}}(\mathbf{x}_{0},\mathbf{y}_{0})$ ...
For a smooth mapping $(\mathbf{x},\mathbf{y})\mapsto\mathbf{f}(\mathbf{x},\mathbf{y})$ at $(\mathbf{x}_{0},\mathbf{y}_{0})$ ...
(partial) Jacobians with respect to $\mathbf{x}$ and $\mathbf{y}$ respectively at $(\mathbf{x}_{0},\mathbf{y}_{0})$.
represents its Jacobian (with respect to both $\mathbf{x}$ and $\mathbf{y}$) while $\mathbf{f}_{\mathbf{x}}(\mathbf{x}_{0},\mathbf{y}_{0})$ ...
$\mathbf{x}\mapsto\mathbf{f}(\mathbf{x},\mathbf{y}_{*})$ with respect to the parameter value $\mathbf{y}=\mathbf{y}_{*}$ as
B
We evaluate Adaptive($w$) for 100 values of the sliding window $w$, equidistant in the range $[100,100000]$. This is a crucial parameter: if $w$ is too small, we do not obtain sufficient information on the frequencies, whereas if $w$ is too big, th...
When Adaptive($w$) opens a new profile group, the predicted frequencies are updated based on the $w$ most recently packed items. These $w$ items follow a distribution that may have changed since the time a new profile group was opened. As such, the performance of Adaptive($w$) depen...
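As an illustration of the sliding-window mechanism just described, the following Python sketch maintains frequency estimates over the $w$ most recently packed items; the class name, the bucketing of sizes into classes, and all parameter choices are hypothetical, since the paper's own implementation is not shown here.

```python
from collections import deque, Counter

class SlidingFrequencyEstimator:
    """Estimate item-size frequencies from the w most recently packed items.

    A minimal sketch of the sliding-window idea behind Adaptive(w); the
    class name and the size-class bucketing are illustrative assumptions.
    """

    def __init__(self, w, num_classes=10):
        self.w = w
        self.window = deque()    # the w most recent size classes
        self.counts = Counter()  # counts per size class
        self.num_classes = num_classes

    def _size_class(self, size):
        # Bucket a size in (0, 1] into one of num_classes classes.
        return min(int(size * self.num_classes), self.num_classes - 1)

    def add_item(self, size):
        c = self._size_class(size)
        self.window.append(c)
        self.counts[c] += 1
        if len(self.window) > self.w:  # evict the oldest item
            old = self.window.popleft()
            self.counts[old] -= 1

    def predicted_frequencies(self):
        n = len(self.window)
        if n == 0:
            return [0.0] * self.num_classes
        return [self.counts[c] / n for c in range(self.num_classes)]
```

When a new profile group is opened, `predicted_frequencies()` would be queried to build the profile; too small a $w$ yields noisy estimates, while too large a $w$ lags behind distribution shifts, matching the trade-off discussed above.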
Figure 6 depicts the number of bins opened by Adaptive($w$) as a function of $w$ for different benchmarks. Here, we report the average cost of the algorithms over 20 randomly generated sequences. We observe that for the Weibull and "GI" benchmarks, there is a relatively wide range for $w$ tha...
Figure 3 depicts the cost of the algorithms for a typical sequence, as a function of the prediction error. The chosen files are "csBA125_9" (for "GI"), "Schwerin2_BPP32" (for "Schwerin"), "BPP_750_50_0.1_0.8_2" (for "Randomly_Generated"), "Hard28_BPP832" (for "Schoenfield_Hard28"), and "Waescher_TEST0082" (for "Wäscher"...
Adaptive($w$) improves upon FirstFit and BestFit when $w$ takes values in the shorter range $[2000,4000]$. For "Schwerin", Adaptive($w$) always performs better, which can be explained by the discussion in Section 6.3. For "Wäscher", Adaptive($w$) does not offer any advantage over
B
Finally, we empirically show that the proposed framework produces high-fidelity and watertight meshes. This means that it solves the initial problem of disjoint patches occurring in the original AtlasNet (Groueix et al., 2018). To evaluate the continuity of output surfaces, we propose the following metric.
In this experiment, we set $N=10^{5}$. Using more rays had a negligible effect on the output value of $WT$ but significantly slowed the computation. We compared AtlasNet with LoCondA applied to HyperCloud (HC) and HyperFl...
The above formulation alone causes many of the produced patches to have unnecessarily long edges, which the network folds so that the patch fits the surface of an object. To mitigate this issue, we add an edge-length regularization motivated by (Wang et al., 2018). If we assume that the reconstructed mesh has the form...
Watertightness Typically, a mesh is referred to as being either watertight or not watertight. Since this is a true-or-false statement, there is no well-established measure of the degree of discontinuities in the object's surface. To fill this gap, we propose a metric based on a simple, approximate check of whether...
To leverage that knowledge, we express watertightness as the ratio of rays that passed the parity test to the total number of cast rays. First, we sample $N$ points $p\in\hat{S}$ from all triangles of the reconstructed object $\hat{S}$ ...
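A minimal sketch of this parity-based $WT$ metric, assuming a triangle mesh held in trimesh; the inward offset, the parity convention, and the function name are illustrative assumptions rather than the authors' exact procedure:

```python
import numpy as np
import trimesh

def watertightness_ratio(mesh, n_rays=100_000, eps=1e-4, seed=0):
    """Fraction of rays that pass the parity test (the WT metric).

    A sketch only: for a watertight mesh, a point nudged slightly inside
    the surface is interior, so a ray cast from it must cross the surface
    an odd number of times.
    """
    rng = np.random.default_rng(seed)
    # Sample N points on the reconstructed surface, remembering their faces.
    points, face_idx = trimesh.sample.sample_surface(mesh, n_rays)
    # Nudge each point inward along the face normal (assumed convention).
    origins = points - eps * mesh.face_normals[face_idx]
    # Random unit directions, one per ray.
    dirs = rng.normal(size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Count surface crossings per ray.
    _, index_ray, _ = mesh.ray.intersects_location(
        origins, dirs, multiple_hits=True)
    crossings = np.bincount(index_ray, minlength=n_rays)
    passed = (crossings % 2 == 1)  # odd parity expected for a closed surface
    return passed.mean()           # WT in [0, 1]
```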
C
$\hat{\mathbf{t}}^{N}=\frac{1}{N}\sum_{k=0}^{N-1}\mathbf{t}^{k+\frac{1}{2}}$
The main idea is to use reformulation (54) and apply the Mirror-Prox algorithm [45] for its solution. This requires careful analysis in two respects. First, the Lagrange multipliers $\mathbf{z},\mathbf{s}$ are not constrained, while the convergence rate result for the classical Mirror-Prox algorithm [45] is ...
To prove Theorem 3.5 we first show that the iterates of Algorithm 1 naturally correspond to the iterates of a general Mirror-Prox algorithm applied to problem (54). Then we extend the standard analysis of the general Mirror-Prox algorithm to account for unbounded feasible sets.
We proposed a decentralized method for saddle point problems based on the non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrange multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ...
As noted above, the standard analysis of Mirror-Prox requires the feasible sets to be compact. Although we run the Mirror-Prox algorithm on problem (54) with unconstrained variables $\mathbf{s}$ and $\mathbf{z}$, we can still bound these variables according to Theorem 2.4.
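For intuition, a generic Euclidean Mirror-Prox (extragradient) iteration with averaging of the half-iterates looks as follows; this is a textbook sketch on an unconstrained toy problem, not the decentralized Algorithm 1 itself:

```python
import numpy as np

def mirror_prox(F, w0, eta, n_iters):
    """Euclidean Mirror-Prox (extragradient) for a monotone operator F.

    F : e.g. F(w) = (grad_x f(x, y), -grad_y f(x, y)) for a saddle problem.
    Returns the average of the half-iterates w^{k+1/2}, which is the point
    for which the classical convergence guarantees are stated.
    """
    w = np.asarray(w0, dtype=float)
    half_iterates = []
    for _ in range(n_iters):
        w_half = w - eta * F(w)       # extrapolation (half) step
        w = w - eta * F(w_half)       # correction step from the same w
        half_iterates.append(w_half)
    return np.mean(half_iterates, axis=0)

# Toy example: bilinear saddle f(x, y) = x * y, unique saddle point at 0.
F = lambda w: np.array([w[1], -w[0]])
w_avg = mirror_prox(F, w0=[1.0, 1.0], eta=0.1, n_iters=1000)
```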
D
The set of cycles of a graph has a vector space structure over $\mathbb{Z}_{2}$, in the case of undirected graphs, and over $\mathbb{Q}$, in the case of directed graphs [5]. A basis of such a vector space is called a cycle basis, and its dimensio...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric...
If we can find some non-star spanning tree $T$ of $G$ such that $\cap(T)<\cap(T_{s})$, then we can "simplify" the instance by removing the interbranch cycle-edges with respect to $T$...
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class.
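For concreteness, a strictly fundamental cycle basis can be obtained by taking, for each non-tree edge, the unique cycle it closes in a spanning tree; the sketch below (with hypothetical helper structure) illustrates this classical construction and is not an algorithm for the MCB or MSTCI problems.

```python
from collections import deque

def fundamental_cycle_basis(n, edges):
    """Strictly fundamental cycle basis of a connected graph on 0..n-1:
    one cycle per non-tree edge of a BFS spanning tree; the basis has
    m - n + 1 cycles, each returned as a list of vertices."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # BFS spanning tree rooted at vertex 0.
    parent, depth, seen = [-1] * n, [0] * n, [False] * n
    seen[0] = True
    queue, tree = deque([0]), set()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v], depth[v] = u, depth[u] + 1
                tree.add(frozenset((u, v)))
                queue.append(v)
    basis = []
    for u, v in edges:
        if frozenset((u, v)) in tree:
            continue
        # Walk both endpoints up to their lowest common ancestor.
        pu, pv, a, b = [u], [v], u, v
        while depth[a] > depth[b]:
            a = parent[a]; pu.append(a)
        while depth[b] > depth[a]:
            b = parent[b]; pv.append(b)
        while a != b:
            a, b = parent[a], parent[b]
            pu.append(a); pv.append(b)
        basis.append(pu + pv[-2::-1])  # cycle u .. lca .. v
    return basis
```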
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with the sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
C
In this respect, the case of convex lattice sets, that is, sets of the form $C\cap\mathbb{Z}^{d}$ where $C$ is a convex set in $\mathbb{R}^{d}$...
We first prove, in Section 3, that complexes with a forbidden simplicial homological minor also have a forbidden grid-like homological minor. The proof uses the stair convexity of Bukh et al. [8] to build, in a systematic way, chain maps from simplicial complexes to cubical complexes. We then adapt, in Section 4, the m...
The support of a chain $\sigma$, denoted $\operatorname{supp}(\sigma)$, in a simplicial complex is the set of simplices with nonzero coefficients in $\sigma$. We say that two chains $\sigma$ and $\tau$ have overlapping supports if there exists a sim...
In this paper, we show that the gap observed for convex lattice sets occurs in the broad topological setting of triangulable spaces with a forbidden homological minor, a notion introduced by Wagner [37] as a higher-dimensional analogue of the familiar notion of graph minors [34].
Theorem 1.1 depends on $p$, $q$, $K$ and $b$ (but, as usual, is independent of the size of the cover). Moreover, while the Helly number of a $(K,b)$-free cover can grow with $b$ (it is at least $(b-1)(\mu(K)+2)$...
C
The basic recorded steps are: (1) Include, (2) Exclude, (3) Transform, and (4) Generate for each feature. The size of the circle encodes the order of the main actions, with larger radii for recent steps. The brown color is used only if the overall performance increases.
(a) presents another transformation of the second most impactful feature (according to Fig. 5(b)). F4_p4/F15/F18_l1p is the most important combination (see the darker green color in (b)). The punchcard visualization in (c) indicates that when we removed F16, the performance increased and that the new feature booste...
The calculation is based on three validation metrics after we subtract their standard deviations. The grouped bar chart presents the performance based on accuracy, weighted precision, and weighted recall, together with their standard deviations due to cross-validation (error margins in black).
Figure 1: Selecting important features, transforming them, and generating new features with FeatureEnVi: (a) the horizontal beeswarm plot for manually slicing the data space (which is sorted by predicted probabilities) and continuously checking the migration of data instances throughout the process; (b) the table heat...
T5: Evaluate the results of the feature engineering process. At any stage of the feature engineering process (T2–T4), a user should be able to observe the fluctuations in performance with the use of standard validation metrics (e.g., accuracy, precision, and recall) [32]. Also, users may want to refer to the...
B
In machining, positioning systems need to be fast and precise to guarantee high productivity and quality. Such performance can be achieved by a model predictive control (MPC) approach tailored to tracking a 2D contour [1, 2]; however, this requires precise tuning and sufficient computational capability of the associated hardware. ...
In machining, positioning systems need to be fast and precise to guarantee high productivity and quality. Such performance can be achieved by a model predictive control (MPC) approach tailored to tracking a 2D contour [1, 2]; however, this requires precise tuning and sufficient computational capability of the associated hardware. ...
MPC accounts for the real behavior of the machine, and the axis drive dynamics can be excited to compensate for the contour error to a large extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following variou...
This paper demonstrated a hierarchical contour control implementation for increasing productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low-level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low-level cascade controller gains, to achieve precise contour tracking with micrometer accuracy. The MPC planner is based on a combi...
B
So far, there is no study comparing methods from either group comprehensively. Often papers fail to compare against recent methods and vary widely in the protocols, datasets, architectures, and optimizers used. For instance, the widely used Colored MNIST dataset, where colors and digits are spuriously correlated with e...
Methods are typically highly sensitive to hyperparameter choices, and papers report numbers on systems in which the hyperparameters were tuned using the test set distribution [18, 50, 64]. In the real world, biases may stem from multiple factors and may change in different environments, making this setup unrealistic. ...
Assuming access to the test distribution for model selection is unrealistic and can result in models being right for the wrong reasons [64]. Rather, it is ideal if the methods can generalize without being tuned on the test distribution and we study this ability by comparing models selected through varying tuning distri...
Figure 1: Current bias mitigation systems are tested on simple datasets that are easy to analyze, but do not offer challenges present in realistic cases. Addressing this, we propose the Biased MNISTv1 dataset which is easy to analyze, yet is reflective of real world challenges since it contains multiple sources of bias...
It is unknown how well the methods scale up to multiple sources of bias and a large number of groups, even when they are explicitly annotated. To study this, we train the explicit methods with multiple explicit variables for Biased MNISTv1 and individual variables that lead to hundreds and thousands of groups for GQA ...
A
$\mathcal{L}_{\mathrm{Euclidean}}=\|\boldsymbol{p}-\boldsymbol{\hat{p}}\|_{2},$
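A direct PyTorch rendering of this Euclidean loss between the ground truth $\boldsymbol{p}$ and the prediction $\boldsymbol{\hat{p}}$ might look as follows; averaging over the batch is an assumption:

```python
import torch

def euclidean_loss(p, p_hat):
    """L_Euclidean = ||p - p_hat||_2 for a batch of 2D PoG predictions
    of shape (batch, 2); the batch mean is an illustrative assumption."""
    return torch.linalg.vector_norm(p - p_hat, ord=2, dim=-1).mean()

p = torch.tensor([[0.10, 0.20], [0.30, 0.40]])
p_hat = torch.tensor([[0.12, 0.18], [0.25, 0.45]])
loss = euclidean_loss(p, p_hat)
```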
Two kinds of evaluation protocols are commonly used for deep-learning based gaze estimation methods: within-dataset and cross-dataset evaluation. The within-dataset evaluation assesses the model performance on unseen subjects from the same dataset. The dataset is divided into training and test sets accordi...
We also convert between the two definitions with post-processing methods following Sec. 4.2.2. We conduct benchmarks for 2D PoG and 3D gaze estimation, respectively. The 3D gaze estimation benchmarks are further divided into within-dataset and cross-dataset evaluation. We mark the top three performances in all benchmarks with underlines.
The calibration problem can be considered a domain adaptation problem, where the training set is the source domain and the test set is the target domain. The test set usually contains unseen subjects or unseen environments. Researchers aim to improve performance in the target domain using calibration samples.
It is the most popular dataset for appearance-based gaze estimation methods. It contains a total of 213,659 images collected from 15 subjects. The images were collected in daily life over several months, with no constraint on head pose. The MPIIGaze dataset provides both 2D and 3D gaze annotations. It also provid...
A
Table 1 reports the classification rates on the RMFRD dataset using four different codebook sizes (i.e., the number of codewords in the RBF layer: 50, 60, 70, and 100 term vectors per image). We can see that the best recognition rate is obtained using the third FMs in the last convolutional layer from VGG-16 with 60...
The efficiency of each pre-trained model depends on its architecture and the abstraction level of the extracted features. When dealing with real masked faces, VGG-16 has achieved the best recognition rate, while ResNet-50 outperformed both VGG-16 and AlexNet on the simulated masked faces. This behavior can be explaine...
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieves a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner. It consists of applying a TL technique to fine-tune the ...
Table 1 reports the classification rates on the RMFRD dataset using four different codebook sizes (i.e., the number of codewords in the RBF layer: 50, 60, 70, and 100 term vectors per image). We can see that the best recognition rate is obtained using the third FMs in the last convolutional layer from VGG-16 with 60...
Table 2 reports the classification rates on the SMFRD dataset. The highest recognition rate, 88.9%, is achieved by ResNet-50 through the quantization of DRF features. This performance is achieved using 70 codewords that feed an MLP classifier. The AlexNet model achieved good recognition rates compared to the VGG-16 ...
D
$F\in[\langle\ast,x\rangle\Rightarrow P(x):\phi\boldsymbol{\Rightarrow}\mathscr{A}]\triangleq$ if $\cdot;\cdot\vdash\phi$, then $F,\operatorname{proc}a\,(P(a))\in\llbracket a:\mathscr{A}\rrbracket$ ...
The first rule for $\to$ corresponds to the identity rule and copies the contents of one cell into another. The second rule, which is for cut, models computing with futures [Hal85]: it allocates a new cell to be populated by the newly spawned $P$. Concurrently, $Q$ may read from said new cell, which...
Positive semantic types are defined by intension—the contents of a particular cell—whereas negative semantic types are defined by extension—how interacting with a continuation produces the desired result. Analogously for the $\lambda$-calculus, the semantic positive product is defined as containing pairs of te...
For space, we omit the process terms. Of importance is the instance of the call rule for the recursive call to eat: the check $i-1<i$ verifies that the process terminates, and the loop $[(i-1)/i][z/x]D$...
With these compatibility lemmas in hand, we are almost ready to construct a correspondence between the syntactic typing of processes and configurations and their semantic typing. First, we need a semantic interpretation of (syntactic) types.
B
$\mathbf{m}^{k}\leftarrow(E^{1}_{PK_{U_{k}}}(\mathbf{m}^{k}),PK_{U_{k}})$
The owner-side efficiency and scalability performance of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Compared to FairCMS-I, it is easy to see that in FairCMS-II the cloud's overhead increases considerably due to the ado...
Finally, the comparison between the two proposed schemes and the existing relevant schemes is summarized in Table I. As can be seen therein, the two proposed schemes FairCMS-I and FairCMS-II have advantages over the existing works. In addition, the two proposed schemes offer owners the flexibility to choose. If the sec...
For another, since $M>T>L$ and $\delta>1$, it is intuitive from Table II that in FairCMS-II the cloud incurs higher computing and storage costs than in FairCMS-I. In fact, the cloud-side communication cost also increases in FairCMS-II as the un...
Second, we compare the cloud-side efficiency of FairCMS-I and FairCMS-II, and the results are presented in Fig. 13. As shown therein, the cloud-side efficiency of FairCMS-I is significantly higher than that of FairCMS-II, thus validating our analysis in Section VII. The main reason for the cloud-side efficiency gain of...
A
This section presents an empirical investigation of the performance of GraphFM on two CTR benchmark datasets and a recommender system dataset. The experimental settings are described, followed by comparisons with other state-of-the-art methods. An ablation study is also conducted to verify the importance of each compo...
Since our proposed approach selects the beneficial feature interactions and models them in an explicit manner, it has high efficiency in analyzing high-order feature interactions and thus provides rationales for the model outcome. Through extensive experiments conducted on CTR benchmark and recommender system datasets,...
Our proposed GraphFM achieves the best performance among all four classes of methods on the three datasets. The performance improvement of GraphFM compared with the three classes of methods (A, B, C) is especially significant, above the $\mathbf{0.01}$ level. The aggregation-based methods including InterHAt, A...
(2) By treating features as nodes and their pairwise feature interactions as edges, we bridge the gap between GNN and FM, and make it feasible to leverage the strength of GNN to solve the problem of FM. (3) Extensive experiments are conducted on CTR benchmark and recommender system datasets to evaluate the effectivenes...
Our experiments are conducted on three real-world datasets: two CTR benchmark datasets and one recommender system dataset. Details of these datasets are given in Table 1. The data preparation follows the strategy in Tian et al. (2023). We randomly split all the instances 8:1:1 for training, validation, and te...
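The 8:1:1 random split can be reproduced in a few lines; the fixed seed below is an illustrative assumption:

```python
import numpy as np

def split_811(num_instances, seed=42):
    """Randomly split instance indices 8:1:1 into train/validation/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_instances)
    n_train = int(0.8 * num_instances)
    n_val = int(0.1 * num_instances)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train_idx, val_idx, test_idx = split_811(1_000_000)
```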
D
where $Q$ is a symmetric positive definite matrix with log-normally distributed eigenvalues and $\varphi_{\mathbb{R}_{+}}(\cdot)$
The results are shown in Figure 7. On both of these instances, progress with the simple step size is slowed down or even appears stalled in comparison to the stateless version, because many halving steps were performed in the early iterations for the simple step size, which penalizes progress over the whole run.
In practice, a halving strategy for the step size is preferred for the implementation of the Monotonic Frank-Wolfe algorithm, as opposed to the step size implementation shown in Algorithm 1. This halving strategy, which is shown in Algorithm 2, helps
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is...
The stateless step size does not suffer from this problem. However, because the halvings have to be performed at multiple iterations when using the stateless strategy, the per-iteration cost of the stateless step size is about three times that of the simple step size.
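To make the halving idea concrete, the following schematic Frank-Wolfe step halves a candidate step size until the objective does not increase; it is a simplified reading of the halving strategy, not a reproduction of Algorithm 2:

```python
import numpy as np

def monotonic_fw_step(x, grad_f, f, lmo, gamma0):
    """One Frank-Wolfe step with a simple halving line search.

    lmo: linear minimization oracle, returns argmin_{v in C} <grad, v>.
    The halving loop enforces monotonicity: gamma is halved until the
    candidate point does not increase f.  A schematic sketch only.
    """
    g = grad_f(x)
    v = lmo(g)             # Frank-Wolfe vertex
    d = v - x              # Frank-Wolfe direction
    gap = -g @ d           # Frank-Wolfe gap <g, x - v>
    gamma = gamma0
    fx = f(x)
    while f(x + gamma * d) > fx and gamma > 1e-12:
        gamma /= 2.0       # halve until the step is monotone
    return x + gamma * d, gap

# Toy example: minimize f(x) = ||x - b||^2 over the probability simplex.
b = np.array([0.7, 0.2, 0.1])
f = lambda x: np.sum((x - b) ** 2)
grad = lambda x: 2.0 * (x - b)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]  # a vertex of the simplex
x = np.ones(3) / 3
for t in range(50):
    x, gap = monotonic_fw_step(x, grad, f, lmo, gamma0=2.0 / (t + 2))
```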
A
More than 15 years ago, in his pioneering work, McGregor [McG05] initiated the study of arbitrarily good matching approximation algorithms — that is, $(1+\varepsilon)$-approximate for an arbitrarily small $\varepsilon>0$ — in the semi-streaming model. He presented a randomized algo...
This is the first (deterministic or randomized) algorithm whose number of passes is polynomial in $1/\varepsilon$. It not only improves exponentially on the randomized $(1/\varepsilon)^{O(1/\varepsilon)}$ ...
In summary, all previously known $(1+\varepsilon)$-approximation algorithms, whether deterministic or randomized, need a number of passes exponential in $1/\varepsilon$. Moreover, while for the special case of bipartite graphs $\operatorname{poly}(1/\varepsilon)$...
It is not known whether the latter two approaches can be modified to work for general graphs. Our results apply to general graphs and achieve a pass complexity of $\operatorname{poly}(1/\varepsilon)$ without any dependency on $n$.
In the special case of bipartite graphs, the deterministic algorithms by Ahn and Guha [AG11], Eggert et al. [EKMS12], as well as Assadi et al. [AJJ+22] achieve a pass complexity of $\operatorname{poly}(1/\varepsilon)$. The first algorithm can also be adapted to the case of genera...
B
For minimizing strongly convex and smooth objectives, the Push-Pull/$\mathcal{AB}$ method not only enjoys linear convergence over fixed graphs [24, 25], but also works well under time-varying graphs and asynchronous settings [24, 26, 27].
We propose CPP – a novel decentralized optimization method with communication compression. The method works under a general class of compression operators and is shown to achieve linear convergence for strongly convex and smooth objective functions over general directed graphs. To the best of our knowledge, CPP is the...
In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP...
For example, the rapid development of distributed machine learning involves increasingly large datasets, which are usually stored across multiple spatially distributed computing agents. Centralizing large amounts of data is often undesirable due to limited communication resources and/or priva...
In decentralized optimization, efficient communication is critical for enhancing algorithm performance and system scalability. One major approach to reduce communication costs is considering communication compression, which is essential especially under limited communication bandwidth.
D
The inclusion of noise $y_{m}$ in the optimization process adds an interesting dimension to the standard loss function of a machine learning model. While the primary objective is still the minimization of the loss, the maximization of the noise term pl...
Setting. To train ResNet18 on CIFAR-10, one can use stochastic gradient descent with momentum $0.9$, a learning rate of $0.1$, and a batch size of $128$ ($40$ batches = $1$ epoch). This is one of the default learning settings. Based on these settings, we build our settings using the intuitio...
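These settings translate directly into a standard PyTorch configuration; the torchvision model constructor is used for illustration and the data pipeline is omitted:

```python
import torch
from torchvision.models import resnet18

# Default CIFAR-10 settings quoted above: SGD with momentum 0.9,
# learning rate 0.1, batch size 128.
model = resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
batch_size = 128
```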
Data and model. We consider the benchmark of image classification on the CIFAR-10 [46] dataset. It contains 50,000 and 10,000 images in the training and validation sets, respectively, equally distributed over 10 classes. To emulate the distributed scenario, we partition the ...
Discussions. We compare algorithms based on the balance of the local and global models; i.e., if an algorithm is able to train both local and global models well, then it achieves the FL balance. The results show that the Local SGD technique (Algorithm 3) outperformed Algorithm 1 only with a fairly fre...
Unlike classical distributed learning methods, the FL approach assumes that data is not stored within a centralized computing cluster but is stored on clients’ devices, such as laptops, phones, and tablets. This formulation of the training problem gives rise to many additional challenges, including the privacy of clien...
B
This means that neither NEs nor (C)CEs can be directly used prescriptively in $n$-player, general-sum games. These solution concepts specify which subsets of joint strategies are in equilibrium, but do not specify how decentralized agents should select amongst these. Furthermore, the presence of a correlation device doe...
There are two important solution concepts in the space of CEs. The first is Maximum Welfare Correlated Equilibrium (MWCE), which is defined as the CE that maximises the sum of all players' payoffs. An MWCE can be obtained by solving a linear program; however, the MWCE may not be unique and therefore does not fully solve ...
This highlights the main drawback of MW(C)CE, which does not select for unique solutions (for example, in constant-sum games all solutions have maximum welfare). One selection criterion for NEs is the maximum entropy Nash equilibrium (MENE) (Balduzzi et al., 2018); however, outside of the two-player constant-sum setting, th...
The set of (C)CEs forms a convex polytope, and therefore any strictly convex function could uniquely select amongst this set. The literature provides only one such example: MECE (Ortiz et al., 2007), which has a number of appealing properties but was found to be slow to solve large games. There is a gap in the literatu...
An important area of related work is $\alpha$-Rank (Omidshafiei et al., 2019), which also aims to provide a tractable alternative solution in normal form games. It gives similar solutions to NE in the two-player, constant-sum setting; however, it is not directly related to NE or (C)CE. $\alpha$-Rank has...
B
$\delta^{\prime}(\epsilon)\coloneqq\underset{\epsilon^{\prime}\in(0,\epsilon),\,\xi\in(0,\epsilon-\epsilon^{\prime})}{\ldots}\left(\ldots+\int_{\epsilon^{\prime}-\xi}^{\infty}\delta_{2}(t)\,dt\right)$
Our Covariance Lemma (3.5) shows that there are two possible ways to avoid adaptivity-driven overfitting—by bounding the Bayes factor term, which induces a bound on $|q(D^{v})-q(D)|$...
Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific $q$ and $K(\cdot,v)$, as discussed in Section 6. The second part of this lemma implies that bounding the appropriate divergence is necessary and sufficient...
In order to complete the triangle inequality, we have to define the stability of the mechanism. Bayes stability captures the concept that the results returned by a mechanism and the queries selected by the adaptive adversary are such that the queries behave similarly on the true data distribution and on the posterior d...
Since achieving posterior accuracy is relatively straightforward, guaranteeing Bayes stability is the main challenge in leveraging this theorem to achieve distribution accuracy with respect to adaptively chosen queries. The following lemma gives a useful and intuitive characterization of the quantity that the Bayes sta...
D
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter $k$...
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter $k$...
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized...
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an $\mathsf{NP}$-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
We therefore propose the following novel research direction: to investigate how preprocessing algorithms can decrease the parameter value (and hence search space) of FPT algorithms, in a theoretically sound way. It is nontrivial to phrase meaningful formal questions in this direction. To illustrate this difficulty, not...
D
Apart from the above methods, which design explicit rules to infer a reasonable placement for the foreground object, some methods [188, 2, 190, 147, 164] employ deep learning techniques to predict the placement and generate the composite image automatically.
In this section, we focus on instance-specific object placement and compare existing object placement methods for generating a reasonable composite image. For ease of comparison, we fix the foreground scale and only predict the reasonable location for the foreground object. Recall that instance-specific object placemen...
Figure 7: We show three types of methods for instance-specific object placement. Generative model: given the foreground, foreground object mask, and background, the model generates a reasonable placement (e.g., location (x,y) and scale (w,h)) for the foreground. Slow discriminative model: given the composite image and ...
Figure 6: We show three types of methods for category-specific object placement. Generative model: given the foreground category and background image, the model generates a reasonable bounding box (e.g., location (x,y) and scale (w,h)). Slow discriminative model: given the foreground category, foreground bounding box, ...
The existing deep learning based object placement methods can be divided into category-specific object placement and instance-specific object placement. For category-specific object placement, the model aims to predict plausible bounding boxes given a background image and a foreground category. This group of methods a...
D
Transfer learning: Firstly, it can serve as an ideal testbed for transfer learning algorithms, including meta-learning [5], AutoML [23], and transfer learning on spatio-temporal graphs under homogeneous or heterogeneous representations. In the field of urban computing, it is highly probable that the knowledge required ...
To the best of our knowledge, CityNet is the first multi-modal urban dataset that aggregates and aligns sub-datasets from various tasks and cities. Using CityNet, we have provided a wide range of benchmarking results to inspire further research in areas such as spatio-temporal predictions, transfer learning, reinforcem...
As depicted in Table V, deep learning models can generate highly accurate predictions when provided with ample data. However, the level of digitization varies significantly among cities, and it is likely that many cities may not be able to construct accurate deep learning prediction models due to a lack of data. One e...
In the present study, we have introduced CityNet, a multi-modal dataset specifically designed for urban computing in smart cities, which incorporates spatio-temporally aligned urban data from multiple cities and diverse tasks. To the best of our knowledge, CityNet is the first dataset of its kind, which provides a comp...
Federated learning: Secondly, CityNet is an appropriate dataset to investigate various federated learning topics under different settings, with each party holding data from one source or one city. Urban data is usually generated by a multitude of human activities and stored by diverse stakeholders, such as organization...
D
In the section on quantile regression it was noted that this approach tends to have a problem with modelling the tails of the distribution, with the added consequence that this can influence the validity at extreme significance levels. However, when combining such models with conformal prediction, validity is not an i...
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat...
In the preceding four sections, we introduced different classes of interval estimators, each having its own characteristics. In this section, we summarize the main properties for clarity and convenience. We identify four properties that are important for practical purposes. The first one is the main notion of this pap...
In Fig. 1, the coverage degree, average width, and $R^{2}$-coefficient are shown. For each model, the data sets are sorted according to increasing $R^{2}$-coefficient (averaged over th...
To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average ...
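For reference, the split-conformal construction whose training-calibration ratio is being varied here can be sketched as follows; the finite-sample quantile correction is the standard one, and the sklearn-style model interface is an assumption:

```python
import numpy as np

def split_conformal_interval(model, X_train, y_train, X_cal, y_cal,
                             X_new, alpha=0.1):
    """Standard split-conformal prediction intervals.

    The model is fit on the training part; absolute residuals on the
    calibration part give a quantile that widens point predictions into
    intervals with (1 - alpha) marginal coverage.
    """
    model.fit(X_train, y_train)
    residuals = np.abs(y_cal - model.predict(X_cal))
    n = len(residuals)
    # Finite-sample corrected quantile level.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, level)
    preds = model.predict(X_new)
    return preds - q, preds + q
```

Moving from a 50/50 to a 75/25 split shifts data from the residual quantile estimate to the model fit, which is exactly the trade-off examined in this experiment.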
B
Fig. 2(b) shows the fine-tuning architecture for note-level classification. While the Transformer uses the hidden vectors to recover the masked tokens during pre-training, it has to predict the label of an input token during fine-tuning, by learning from the labels provided in the training data of the downstream task ...
For the sequence-level tasks, which require only a prediction for an entire sequence, we follow \textcite{emopia} and choose the Bi-LSTM-Attn model from \textcite{lin2017structured} as our baseline, which was originally proposed for sentiment classification in NLP. The model combines LSTM with a self-attention module for t...
Inspired by the Bi-LSTM-Attn model \parencite{lin2017structured}, we employ an attention-based weighted-average mechanism to convert the sequence of 512 hidden vectors for an input sequence into a single vector before feeding it to the classifier layer, which comprises two dense layers. We note that, unlike the ba...
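A minimal PyTorch version of such an attention-based weighted average over the sequence of hidden vectors might look as follows; the hidden and attention sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Collapse a sequence of hidden vectors into one vector via learned
    attention weights, in the style of Bi-LSTM-Attn sequence classifiers.
    Hidden size and attention size are illustrative assumptions."""

    def __init__(self, hidden_size=768, attn_size=256):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(hidden_size, attn_size),
            nn.Tanh(),
            nn.Linear(attn_size, 1),
        )

    def forward(self, h):  # h: (batch, seq_len=512, hidden)
        alpha = torch.softmax(self.score(h), dim=1)  # (batch, seq, 1)
        return (alpha * h).sum(dim=1)                # (batch, hidden)

pooled = AttentionPooling()(torch.randn(2, 512, 768))  # -> (2, 768)
```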
Table 2: The testing classification accuracy (in %) of different combinations of MIDI token representations and models for four downstream tasks: three-class melody classification, velocity prediction, style classification, and emotion classification. "CNN" represents the ResNet50 model used by \textcite{lee20ismirLBD}, ...
Some researchers work on MIDI alone, while others use both audio and MIDI in multi-modal emotion classification \parencite{panda2013multi}. The only deep learning-based approach we are aware of is presented by \textcite{emopia}, using an RNN-based classifier called "Bi-LSTM-Attn" \parencite{lin2017structured} but without emp...
B
Observe that for a tree on $n$ vertices we can compute, for every vertex $v$ and its neighbor $u$, functions $f(v,u)$ and $g(v,u)$ denoting the sizes of subsets of $C_{1}(T)$...
Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the...
The idea is to start from any vertex $w$, and then jump to its neighbor with the largest component size in $T-w$, until we hit a vertex with the desired property. Note that for any vertex $v$ there can be at most one neighbor $u$ such that its connected component $T_{u}$...
In every tree $T$ there exists a central vertex $v\in V(T)$ such that every connected component of $T-v$ has at most $\frac{|V(T)|}{2}$ vertices.
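The jumping procedure behind this lemma can be sketched as follows; subtree sizes from an arbitrary rooting give the component sizes of $T-v$, and the helper names are illustrative rather than the paper's Algorithm 1:

```python
def central_vertex(n, adj):
    """Find a vertex v of a tree such that every component of T - v has
    at most n/2 vertices, by repeatedly jumping toward the largest
    component.  adj: adjacency lists of a tree on vertices 0..n-1."""
    if n == 1:
        return 0
    # Subtree sizes for a rooting at vertex 0 (iterative DFS).
    parent, seen, order, stack = [-1] * n, [False] * n, [], [0]
    seen[0] = True
    while stack:
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                parent[w] = u
                stack.append(w)
    size = [1] * n
    for u in reversed(order):
        if parent[u] != -1:
            size[parent[u]] += size[u]

    def component_sizes(v):
        # One component per child subtree, plus the component through
        # the parent, which has n - size[v] vertices.
        comps = [(size[w], w) for w in adj[v] if parent[w] == v]
        if parent[v] != -1:
            comps.append((n - size[v], parent[v]))
        return comps

    v = 0
    while True:
        big, u = max(component_sizes(v))
        if big <= n // 2:
            return v
        v = u  # jump into the largest component
```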
The linear running time follows directly from the fact that we compute $c$ only once and that we can additionally pass through the recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen...
A