\[
-\frac{3(1+m)}{8}\,x^{-m-4}R'' + \frac{1}{8}\,x^{-m-3}R''' .
\]
Newton's method with third-order convergence is implemented for the Zernike polynomials $R_n^m$ by computation of the ratios
${}_{n,n'}$. $\int_0^1 x^{D-1}\,R_n^m(x)\,R_{n'}^m(x)\,\mathrm{d}x$ ...
computed from $R_n^m(x)/{R_n^m}'(x) = f(x)/f'(x)$ ...
Since $R_n^m(x)$ is a polynomial of order $n$, the $(n+1)$st derivatives ...
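As an illustration of a third-order iteration built from such ratios, here is a minimal sketch of Halley's method for a generic polynomial (the function names and the example polynomial are illustrative assumptions, not the paper's recurrences for $R_n^m$):

```python
def halley_root(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley iteration: third-order convergence near a simple root.

    Built from the two ratios f/f' and f''/f', mirroring the ratio-based
    evaluation described in the text (generic sketch, hypothetical names).
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        r = fx / df(x)        # Newton ratio f/f'
        c = d2f(x) / df(x)    # curvature ratio f''/f'
        x -= r / (1.0 - 0.5 * r * c)
    return x

# Example: positive root of p(x) = x^3 - 2x, i.e. sqrt(2)
root = halley_root(lambda x: x**3 - 2*x,
                   lambda x: 3*x**2 - 2,
                   lambda x: 6*x,
                   x0=1.5)
```

Compared with plain Newton, each step costs one extra ratio evaluation but roughly squares-and-a-half the number of correct digits.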
Having computed $T_2$, we begin the main ‘for’ loop of Algorithm 3, running through the columns of $g$ in reverse order. Observe that $r$ takes each value $1,\dots,d$ exactly once as we run through the columns of ...
If we are in the (unique) column where $r=d$, then there is no ‘column clearing’ to do and we skip straight to the row-clearing stage. For each other column, we start by calling the subroutine FirstTransvections[$r$] (Algorithm 4).
At this point in each pass of the main ‘for’ loop of Algorithm 3, we call the subroutine LeftUpdate[$i$] for $i=r+2,\ldots,d$, unless $r\geq d-1$, in which case the current column will have already been cleared. The role of thi...
The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left...
Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$. Then the elementary column operations can be used to reduce the other entries in row $r$ to zero.
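A minimal sketch of this clearing step, over the rationals rather than in the paper's setting, might look as follows (the function name and the example matrix are illustrative only):

```python
from fractions import Fraction

def clear_last_column_then_row(g):
    """Illustration of the clearing step described above: row operations
    leave a single nonzero entry in the last column; column operations
    then clear the rest of that entry's row. A sketch over Q, not the
    paper's algorithm for a specific matrix group."""
    g = [[Fraction(x) for x in row] for row in g]
    d = len(g)
    # pick a pivot row r with a nonzero entry in the last column
    r = next(i for i in range(d) if g[i][d - 1] != 0)
    # row operations: subtract multiples of row r to clear column d-1
    for i in range(d):
        if i != r and g[i][d - 1] != 0:
            factor = g[i][d - 1] / g[r][d - 1]
            g[i] = [a - factor * b for a, b in zip(g[i], g[r])]
    # column operations: clear the remaining entries of row r
    for j in range(d - 1):
        if g[r][j] != 0:
            factor = g[r][j] / g[r][d - 1]
            for i in range(d):
                g[i][j] -= factor * g[i][d - 1]
    return g, r

m, r = clear_last_column_then_row([[1, 2], [3, 4]])
```

After the call, column $d$ has its single nonzero entry in row $r$, and row $r$ is zero elsewhere, exactly as in the two-stage reduction described above.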
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85...
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends only weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro elements, remov...
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case our method is similar to that of [MR3591945], with some differences. First we consider that T...
The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis...
The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and to floating-point issues in both programs. Our implementations of Alg-K and Alg-CM differ logically in how they handle degenerate cases.
Comparing the description of the main part of Alg-A (the 7 lines of Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases.
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Moreover, Alg-A is more stable than the alternatives. During the iterations of Alg-CM, the coordinates of three corners and two midpoints of a P-stable triangle (see Figure 37) are maintained. These coordinates are computed numerically, so their true values can differ from the values stored in the computer. Alg-CM uses a...
Due to the importance of information propagation for rumors and their detection, there are also several simulation studies [25, 27] on rumor propagation on Twitter. Those works provide relevant insights, but such simulations cannot fully reflect the complexity of real networks. Furthermore, there are recent work...
Most relevant for our work is the approach presented in [20], where a time series model is used to capture the time-based variation of social-content features. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys...
As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on t...
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha...
We tested all models using 10-fold cross validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; TS-SVM ...
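The fold construction described above, one fixed shuffle shared by all models, can be sketched as follows (the seed and fold count are illustrative choices):

```python
import random

def kfold_indices(n, k=10, seed=42):
    """Yield k train/test index splits derived from one fixed shuffle, so
    every model is evaluated on exactly the same folds. (Sketch; the seed
    value is an arbitrary illustrative choice.)"""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)        # the shared shuffled sequence
    folds = [idx[i::k] for i in range(k)]   # round-robin split into k folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(kfold_indices(100))
```

Fixing the shuffle once makes per-fold scores directly comparable across classifiers, which matters when reporting differences as small as those in Table 4.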
In a follow-up work, Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate, for tails for which $\ell'(u)$ is of the form $\exp(-u^{\nu})$...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a...
The convergence of the direction of gradient descent updates to the maximum-$L_2$-margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile to continue optimizing long after we have zero training ...
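A tiny illustration of this phenomenon, under assumptions of my own (two separable points in 2D, plain gradient descent on the logistic loss): the loss vanishes and $\|w\|$ keeps growing, while the direction $w/\|w\|$ drifts only slowly:

```python
import math

# Two points, both labeled +1 (labels folded into the features).
# Synthetic data for illustration, not the paper's experiments.
X = [(2.0, 1.0), (1.0, 3.0)]

def loss_grad(w):
    g = [0.0, 0.0]
    for x in X:
        s = 1.0 / (1.0 + math.exp(w[0] * x[0] + w[1] * x[1]))  # sigmoid(-w.x)
        g[0] -= s * x[0]
        g[1] -= s * x[1]
    return g

w, eta, norms = [0.0, 0.0], 0.1, []
for t in range(5000):
    g = loss_grad(w)
    w = [w[0] - eta * g[0], w[1] - eta * g[1]]
    if t % 1000 == 0:
        norms.append(math.hypot(w[0], w[1]))
```

The norm grows roughly like $\log t$, so the normalized direction stabilizes at a rate of only $O(1/\log t)$, which is exactly why the margin keeps improving long after the training loss is numerically zero.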
Perhaps most similar to our study is the line of work on understanding AdaBoost in terms of its implicit bias toward large-$L_1$-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on th...
Twitter Features refer to basic Twitter features, such as hashtags, mentions, and retweets. In addition, we derive three more URL-based features. The first is the WOT score (a trustworthiness measure), which is crawled from the WOT API (https://www.mywot.com/en/api). The second is domain categories, which we have collected fr...
As we can see in Figure 9, the best result on average over 48 hours is achieved by BestSet. The second best is All features. Apart from those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we ...
To construct the training dataset, we collected rumor stories from the rumor tracking websites snopes.com and urbanlegends.about.com. In more detail, we crawled 4300 stories from these websites. From the story descriptions we manually constructed queries to retrieve the relevant tweets for the 270 rumors with highest i...
User Features. Apart from the features already exploited in related work (e.g., VerifiedUser, NumOfFriends, NumOfTweets, ReputationScore), we add two new features captured from Twitter interface: (1) how many photos have been posted by a user (UserNumPhoto), and (2) whether the user lives in a large city. We use the li...
The performance of the user features is similar to that of the Twitter features; both are quite stable from the first hour to the last. As shown in Table 9, the best feature over 48 hours in the user feature group is UserTweetsPerDays, and it is the best feature overall in the first 4 hours, but its rank decreases with ...
Results. The baseline and the best results of our 1st-stage event-type classification are shown in Table 3 (top). The accuracy of the basic majority vote is high for imbalanced classes, yet it is lower on weighted F1. Our learned model achie...
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall...
Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming independent loss functions, which do not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is to learn a linear...
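The core RankSVM transformation, turning within-query preference pairs into classification examples on difference vectors, can be sketched as follows (the data layout and feature values are assumptions for illustration):

```python
def pairwise_examples(queries):
    """Turn per-query ranked lists into difference-vector training pairs,
    the standard transformation behind RankSVM (sketch).

    queries: list of queries, each a list of (feature_vector, relevance).
    A linear SVM trained on the returned pairs learns a scoring weight
    vector w such that w . (f_i - f_j) > 0 whenever i is preferred to j.
    """
    pairs = []
    for docs in queries:
        for fi, ri in docs:
            for fj, rj in docs:
                if ri > rj:  # i preferred over j within the same query
                    pairs.append(([a - b for a, b in zip(fi, fj)], +1))
    return pairs

# One toy query with two documents of relevance 2 and 1
pairs = pairwise_examples([[((1.0, 0.0), 2), ((0.0, 1.0), 1)]])
```

Because pairs are only formed within a query, the learned scores are comparable where it matters: among candidates for the same query.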
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with respect to the event times mentioned previously. We compare the result of the cascaded model with a non-cascaded logistic regression. The res...
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ...
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular.
The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015].
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
with Bernoulli and contextual linear Gaussian reward functions [Kaufmann et al., 2012; Garivier and Cappé, 2011; Korda et al., 2013; Agrawal and Goyal, 2013b], as well as for context-dependent binary rewards modeled with the logistic reward function [Chapelle and Li, 2011; Scott, 2015] (see Appendix A.3).
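For the Bernoulli case, Thompson sampling with a Beta posterior can be sketched in a few lines (the arm means, horizon, and priors below are illustrative):

```python
import random

def thompson_bernoulli(true_means, horizon=2000, seed=0):
    """Beta-Bernoulli Thompson sampling sketch: sample one value from each
    arm's Beta posterior, play the argmax, update the posterior counts."""
    rng = random.Random(seed)
    k = len(true_means)
    succ, fail = [1] * k, [1] * k          # Beta(1, 1) priors
    pulls = [0] * k
    for _ in range(horizon):
        samples = [rng.betavariate(succ[a], fail[a]) for a in range(k)]
        a = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[a] else 0
        succ[a] += reward
        fail[a] += 1 - reward
        pulls[a] += 1
    return pulls

pulls = thompson_bernoulli([0.3, 0.7])
```

The conjugate Beta update is what makes this case easy; the SMC-based methods discussed above exist precisely because reward models like probit or softmax admit no such closed-form posterior.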
The insulin intakes tend to be higher in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes are earlier in the day. Further, patient 12 takes approximately 3 times the average insulin dose of the others in the morning.
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
For example, the correlation between blood glucose and carbohydrate for patient 14 was highest (0.47) at no lagging time step (ref. 23(c)), whereas the correlation between blood glucose and insulin was highest (0.28) with lagging time = 4 (ref. 24(d)).
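A lagged Pearson correlation of this kind can be sketched as follows (the series below are synthetic, constructed so that a lag of one step aligns them perfectly):

```python
def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag] (sketch of the
    lagged-correlation analysis; the data here is synthetic)."""
    n = min(len(x), len(y) - lag)
    xs, ys = x[:n], y[lag:lag + n]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

carbs   = [10, 60, 20, 80, 30, 70, 15, 65]
glucose = [0, 10, 60, 20, 80, 30, 70, 15]   # carbs shifted by one step
r1 = lagged_corr(carbs, glucose, 1)
```

Scanning `lag` over a small range and taking the argmax is exactly how a best lag such as "lagging time = 4" above would be identified.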
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
For time delays between carb entries and the next glucose measurement, we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal; in such cases it might not make sense to predict the glucose directly after the meal...
Furthermore, it is expected that complex representations at multiple spatial scales are necessary for accurate predictions of human fixation patterns. We therefore incorporated a contextual module that samples multi-scale information and augments it with global scene features. The contribution of the contextual module ...
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer...
With the advent of deep neural network solutions for visual tasks such as image classification Krizhevsky et al. (2012), saliency modeling has also undergone a paradigm shift from manual feature engineering towards automatic representation learning. In this work, we leveraged the capability of convolutional neural net...
Early approaches towards computational models of visual attention were defined in terms of different theoretical frameworks, including Bayesian Zhang et al. (2008) and graph-based formulations Harel et al. (2006). The former was based on the notion of self-information derived from a probability distribution over linear...
The spatial allocation of attention when viewing natural images is commonly represented in the form of topographic saliency maps that depict which parts of a scene attract fixations reliably. Identifying the underlying properties of these regions would allow us to predict human fixation patterns and gain a deeper under...
Many existing algorithms constructing path decompositions are of theoretical interest only, and this disadvantage carries over to the possible algorithms computing the locality number or cutwidth (see Section 6) based on them. However, the reduction of 5.7 is also applicable in a purely practical scenario, since any ki...
In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). Next, in Section 3, we discuss the concept of the locality number with some examples and some word-combinatorial considerations. The purpose of this section is to develop a better under...
The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local...
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection....
In this section, we introduce polynomial-time reductions from the problem of computing the locality number of a word to the problem of computing the cutwidth of a graph, and vice versa. This establishes a close relationship between these two problems (and their corresponding parameters), which lets us derive several u...
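To make the parameter concrete before the reductions: for very short words, the locality number can be computed directly from its definition by trying every order in which to mark the letters (a brute-force sketch, exponential in the alphabet size, purely illustrative):

```python
from itertools import permutations

def blocks(marked):
    """Count maximal runs of marked positions."""
    count, prev = 0, False
    for m in marked:
        if m and not prev:
            count += 1
        prev = m
    return count

def locality(word):
    """Locality number: minimum over all marking orders of the letters of
    the maximum number of marked blocks seen at any stage (brute force)."""
    best = len(word)
    for order in permutations(set(word)):
        marked = [False] * len(word)
        worst = 0
        for letter in order:
            for i, c in enumerate(word):
                if c == letter:
                    marked[i] = True
            worst = max(worst, blocks(marked))
        best = min(best, worst)
    return best
```

For instance, `locality("aba")` is 1 (mark `b` first, then `a`), while `locality("abab")` is 2 under either marking order; the reductions discussed in this section exist precisely because this brute force is hopeless for large alphabets.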
Imaging modalities that have found use in cardiology include Magnetic Resonance Imaging (MRI), Fundus Photography, Computerized Tomography (CT), Echocardiography, Optical Coherence Tomography (OCT), Intravascular Ultrasound (IVUS), and others. Deep learning has been mostly successful in this area, mainly due to archite...
In [128], the authors created a recurrent u-net that learns image representations from a stack of 2D slices and can leverage inter-slice spatial dependencies through internal memory units. It combines anatomical detection and segmentation into a single end-to-end architecture, achieving comparable results ...
Regarding the solution to the interpretability problem, when new methods are necessary, researchers should prefer simpler deep learning methods (end-to-end and non-ensemble) to increase their clinical applicability, even if that means reduced reported accuracy.
Luo et al. [133] adopted an LV atlas mapping method to achieve accurate localization using MRI data from DS16. Then, a three-layer CNN was trained to predict the LV volume, achieving results comparable with the winners of the challenge in terms of the root mean square of end-diastole and end-systole volumes.
Most predominantly, CNNs and u-nets are used, either alone or in combination with RNNs, AEs, or ensembles. The problem is that most of them are not end-to-end; they rely on preprocessing, handcrafted features, active contours, level sets and other non-differentiable methods, thus partially losing the ability to scale on the pre...
Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a...
The results in these figures are generated by averaging 5 runs for each game. The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5 runs was often significantly better. For 6 of the games, it exceeds the average human score (...
In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in the low-data regime of 100k samples, on more than half of the games, our method achieves a score...
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ...
While SimPLe is able to learn more quickly than model-free methods, it does have limitations. First, the final scores are on the whole lower than the best state-of-the-art model-free methods. This can be improved with better dynamics models and, while generally common with model-based RL algorithms, suggests an import...
However, more work needs to be done to fully replace non-trainable S2Is, not only in terms of achieving higher accuracy but also in increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch, based on the hypothesis that pretrained low level...
For the purposes of this paper, and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para...
This is achieved with the use of multilayer networks consisting of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ...
Future work could include testing this hypothesis by initializing a ‘base model’ using transfer learning or other initialization methods. Moreover, trainable S2Is and 1D ‘base model’ variations could also be used for other physiological signals besides EEG such as Electrocardiography, Electromyography and Galvanic Skin...
The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To assure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial along with their first and second derivatives. The trajectory design took into account six constra...
The whole-body climbing gait involves utilizing the entire body movement of the robot, swaying forwards and backwards to enlarge the stability margins before initiating gradual leg movement to overcome a step. This technique optimizes stability during the climbing process. To complement this, the rear-body climbing ga...
The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful desi...
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas...
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ...
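The threshold rule described in Figures 10 and 11 can be sketched as a simple online decision (the per-step energy values and the threshold below are illustrative, not the robot's measured data):

```python
def negotiate_step(step_costs, threshold):
    """Online mode-switch rule sketch: stay in rolling mode while the
    accumulated energy stays within the preset threshold, otherwise
    transition to the walking (climbing) gait at that moment."""
    used = 0.0
    for t, cost in enumerate(step_costs):
        used += cost
        if used > threshold:
            return "walking", t   # switch the moment the budget is exceeded
    return "rolling", len(step_costs)

mode, t = negotiate_step([1.0, 1.5, 2.0, 4.0], threshold=5.0)
```

With these illustrative numbers, the accumulated energy first exceeds the threshold on the fourth increment, triggering the transition, mirroring the red-line behavior described for the 2h step.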
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with a larger number of ...
As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation. Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online alg...
The above observations were recently made in the context of online algorithms with machine-learned predictions. Lykouris and Vassilvitskii [24] and Purohit et al. [29] show how to use predictors to design and analyze algorithms with two properties: (i) if the predictor is good, then the online algorithm should perform ...
All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat...
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-augmented online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratios, as ...
There, in the last column, we also include the time that each classifier required to classify all the subjects in the test set. As we can see, SS3 obtained the best $F_1$ and ERDE values for all the considered $o$ values except for ERDE$_5$. On the...
As will be discussed in the next section, when classifying a subject in a streaming-like way, the execution cost of each classifier for each subject is $O(n^2)$ with respect to the total number of the subject's writings, $n$ ...
It is worth mentioning that with this simple mechanism it would be fairly straightforward to justify, when needed, the reasons for a classification by using the values of the confidence vectors in the hierarchy, as will be illustrated with a visual example at the end of Section 5. Additionally, the classification is also i...
However, as we will discuss further in the next section, SS3 has a more efficient computation time in comparison with the remaining algorithms. For instance, it took SVM more than one hour (73.9 min) to complete the classification of the test set while it took SS3 a small fraction of it (roughly 5.3%) to carry out the ...
Table 1 shows the empirical results of different methods under IID data distribution. Figure 3 shows the training curves under IID data distribution. We can observe that each method achieves comparable RCC. As for test accuracy, GMC and DGC (w/ mfm) exhibit comparable performance and outperform the other three methods...
We can find that after a sufficient number of iterations, the parameter in DGC (w/o mfm) can only oscillate within a relatively large neighborhood of the optimal point. Compared with DGC (w/o mfm), the parameter in GMC converges closer to the optimal point and then remains stable. Figure 2(a) shows the distances to the...
Table 2 and Figure 4 show the performance under non-IID data distribution. We find that GMC achieves much better test accuracy and faster convergence speed than the other methods. Furthermore, we find that the momentum factor masking trick severely impairs the performance of DGC under non-IID data dis...
We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021). We ado...
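The gradient-sparsification mechanism being compared (local accumulation of untransmitted mass plus top-k selection, with sent coordinates cleared locally) can be sketched for one worker as follows (a DGC-style sketch of the general idea, not the exact GMC/DGC implementation):

```python
def topk_compress(residual, grad, k):
    """One worker's step of top-k gradient sparsification with error
    feedback: add the new gradient to the locally accumulated residual,
    transmit only the k largest-magnitude coordinates, and keep the rest
    locally for future rounds."""
    acc = [r + g for r, g in zip(residual, grad)]
    # indices of the k largest-magnitude coordinates
    top = sorted(range(len(acc)), key=lambda i: abs(acc[i]), reverse=True)[:k]
    sent = [0.0] * len(acc)
    for i in top:
        sent[i] = acc[i]
        acc[i] = 0.0        # transmitted coordinates are cleared locally
    return sent, acc        # (sparse update to send, new residual)

sent, resid = topk_compress([0.0, 0.0, 0.0], [0.1, -0.5, 0.3], k=1)
```

Momentum factor masking, the trick whose non-IID behavior is examined above, additionally zeroes the momentum buffer at the transmitted coordinates; this sketch omits momentum to isolate the accumulation-and-select mechanism itself.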
We then defined SANs, which have minimal structure and, through the use of sparse activation functions, learn to compress data without losing important information. Using Physionet datasets and MNIST, we demonstrated that SANs are able to create high-quality representations with interpretable kernels.
Applying dropout at the activations could correct weights that have overshot, especially when they are initialized with high values. However, the effect of dropout on SANs would generally be negative, since SANs have far fewer weights than DNNs and thus need less regularization.
During supervised learning, the weights of the kernels are frozen and a one-layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs. The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as...
From the point of view of Sparse Dictionary Learning, the kernels of SANs could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g., for Electrocardiogram (ECG) input the kernels of SANs are ECG beats), and the sparse activation map as the representation. The fact that SANs are wide...
In summary, our work differs significantly from each of the above-mentioned works, and from other literature on UAV ad-hoc networks. As far as we know, our proposed algorithm is capable of learning from previous utilities and strategies, achieving NE with restricted information and constrained strategy sets, and updating strategi...
When there are a number of UAVs in the network, the coverage areas of different UAVs may overlap. When a UAV overlaps with another, they do not each support all users but share the mission. The users in the overlaps are served randomly, with equal probability, by each UAV. Fig. 2 presents the overlaps b...
Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit...
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better.
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR, and coverage size are the key factors deciding the performance of a UAV network [6]. Respectively, power control determines the energy consumption and the signal-to-noise ratio (SNR)
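As a toy illustration of the game-theoretic setup (not the paper's actual utility model), the sketch below runs best-response dynamics for two UAVs choosing altitudes, with invented coverage, energy, and overlap terms; in a potential game such dynamics reach a pure NE.

```python
# Toy best-response dynamics for a 2-UAV coverage game. All utility
# terms below are illustrative stand-ins, not the paper's model.
ALTITUDES = [1, 2, 3]            # discrete strategy set per UAV

def coverage(h):
    # coverage size grows with altitude
    return 2 * h

def utility(h_self, h_other):
    # reward coverage, penalize energy (grows with altitude) and
    # overlap with the other UAV's coverage area
    overlap = max(0, coverage(h_self) + coverage(h_other) - 10)
    return coverage(h_self) - 0.5 * h_self ** 2 - overlap

def best_response(h_other):
    return max(ALTITUDES, key=lambda h: utility(h, h_other))

# iterate best responses until a fixed point (a pure Nash equilibrium)
h1, h2 = 1, 3
for _ in range(20):
    nh1 = best_response(h2)
    nh2 = best_response(nh1)
    if (nh1, nh2) == (h1, h2):
        break
    h1, h2 = nh1, nh2
```

At the fixed point neither UAV can improve by deviating unilaterally, which is exactly the NE property the potential-game structure guarantees for such dynamics.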
C
$$\begin{aligned}
\widehat{U}_{r}^{\prime} &= \overline{\widehat{Dr}} * \overline{U},\\
\widehat{U}_{z}^{\prime} &= \overline{\widehat{Dz}} * \overline{U},\\
\overline{U}_{z}^{\prime} &= \overline{\overline{Dz}} * \overline{U},\\
\overline{U}_{r}^{\prime} &= \overline{\overline{Dr}} * \overline{U}.
\end{aligned}$$
A
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$
Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality) by $h_{A}$
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
$$f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\a&\text{if }u\neq\texttt{null},\ v\neq\texttt{null}\text{ and }u\neq v\\b&\text{if }u=v=\texttt{null}\\0&\text{otherwise.}\end{cases}$$
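The comparability function above translates directly into code; `NULL`, `a`, and `b` below are stand-ins for the null marker and the two lattice values.

```python
# Direct transcription of the case-defined comparability function.
NULL = None  # stand-in for the null marker

def f(u, v, a="a", b="b"):
    if u == v and u is not NULL:
        return 1                              # equal, non-null
    if u is not NULL and v is not NULL and u != v:
        return a                              # both present, unequal
    if u is NULL and v is NULL:
        return b                              # both missing
    return 0                                  # exactly one is null
```

Note that `f(NULL, NULL)` returns `b` rather than `1`, which is precisely the relaxation of reflexivity on null discussed above.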
B
$\theta_{i}$ and $\theta_{i}^{-}$ are the parameters of the network and the target network at iteration $i$, respectively. The target network
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments. This is a totally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Reinforcement
The sources of DQN variance are the Approximation Gradient Error (AGE) [23] and the Target Approximation Error (TAE) [24]. In the case of Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes b
This phenomenon introduces a positive bias that may lead to asymptotically sub-optimal policies, distorting the cumulative rewards. The majority of analytical and empirical studies suggest that overestimation typically stems from the max operator used in the Q-learning value function. Additionally, the noise from appro...
Figure 5 demonstrates that using Dropout methods in DQN reduces the deviation from the optimal policy caused by overestimation. Although the Gridworld environment does not suffer from overestimation severe enough to distort the overall cumulative rewards, reducing overestimation leads to more accurate predictions.
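The overestimation effect attributed to the max operator, and why averaging several noisy heads (a rough stand-in for dropout-style ensembling) shrinks it, can be reproduced with a purely synthetic simulation; all numbers here are invented.

```python
# With noisy Q estimates, E[max_a Q_hat(a)] exceeds max_a Q(a) even
# when all true action values are equal. Averaging 10 noisy "heads"
# per action before taking the max reduces the bias.
import random

random.seed(0)
TRUE_Q = [1.0, 1.0, 1.0]          # all three actions equally good

def noisy_q():
    # one noisy estimate of Q(s, a) for each action
    return [q + random.gauss(0, 1.0) for q in TRUE_Q]

def avg(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# single estimator: max over one noisy head
single = avg(max(noisy_q()) for _ in range(5000))

# ensemble: average 10 heads per action, then take the max
ensembled = avg(max(avg(col) for col in zip(*(noisy_q() for _ in range(10))))
                for _ in range(5000))

bias_single = single - max(TRUE_Q)
bias_ensemble = ensembled - max(TRUE_Q)
```

Both biases are positive, but the ensembled one is several times smaller, mirroring the reduction in deviation from the optimal policy reported above.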
C
Creating large 2D and 3D publicly available medical benchmark datasets for semantic image segmentation such as the Medical Segmentation Decathlon (Simpson et al., 2019). Medical imaging datasets are typically much smaller in size than natural image datasets (Jin et al., 2020), and the curation of larger public dataset...
A possible solution to address the paucity of sufficient annotated medical data is the development and use of physics based imaging simulators, the outputs of which can be used to train segmentation models and augment existing segmentation datasets. Several platforms (Marion et al., 2011; Glatard et al., 2013) as well...
Because of the large number of imaging modalities, the significant signal noise present in modalities such as PET and ultrasound, and the limited amount of medical imaging data (mainly due to high acquisition costs, compounded by legal, ethical, and privacy issues), it is difficult to develop universal solutions
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic...
Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr...
A
Fig. 12 shows the result of the NDP coarsening procedure on the 6 types of graphs. The first column shows the subset of nodes of the original graph that are selected ($\mathcal{V}^{+}$, in red) and discarded ($\mathcal{V}^{-}$
In every example, for small values of $\epsilon$ the structure of the graphs changes only slightly while a large number of edges is dropped. Notably, the spectral similarity increases almost linearly with $\epsilon$, while the edge density decreases exponentially.
In Sec. IV-E we introduced the spectral similarity distance to quantify how much the spectrum of the Laplacian associated with the sparsified adjacency matrix changes when edges smaller than $\epsilon$ are dropped. In Fig. 13 we show how the graph structure (in terms of spectral similarity) varies, when
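A minimal sketch of $\epsilon$-sparsification, assuming a random weighted graph: edges below $\epsilon$ are dropped, and the change is gauged via the Laplacian quadratic form on a probe vector, a cheap proxy for the full spectral similarity distance used in the paper.

```python
import random

random.seed(1)
n = 6
# random symmetric weighted adjacency, weights in (0, 1)
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = A[j][i] = random.random()

def sparsify(A, eps):
    # drop every edge with weight below eps
    return [[w if w >= eps else 0.0 for w in row] for row in A]

def quad_form(A, x):
    # x^T L x = 1/2 * sum_ij A_ij (x_i - x_j)^2, with L = D - A
    return 0.5 * sum(A[i][j] * (x[i] - x[j]) ** 2
                     for i in range(n) for j in range(n))

x = [random.gauss(0, 1) for _ in range(n)]
base = quad_form(A, x)
drift = {eps: abs(base - quad_form(sparsify(A, eps), x)) / base
         for eps in (0.1, 0.3, 0.5)}
```

Because the dropped edge sets are nested and every quadratic-form term is non-negative, the relative drift is non-decreasing in $\epsilon$, the qualitative behavior described above.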
B
The following analyses are shown exemplarily on the Soybean dataset. This dataset has 35 features and 19 classes. First, we analyze the generated data with a fixed number of decision trees, i.e., the number of sampled decision trees in $RF_{\text{sub}}$
NRFI uniform and NRFI dynamic sample the number of decision trees for each data point uniformly and via the automatically optimized confidence distribution, respectively (see Section 4.1.4). The confidence distributions for both sampling modes are visualized in the second column of Figure 5. Additionally, sampling random data points
This shows that neural random forest imitation is able to generate significantly better data samples based on the knowledge in the random forest. NRFI dynamic improves the performance by automatically optimizing the decision tree sampling and generating the largest variation in the data.
Probability distribution of the predicted confidences for different data generation settings on Soybean with 5 (top) and 50 samples per class (bottom). Generating data with different numbers of decision trees is visualized in the left column. Additionally, a comparison between random sampling (red), NRFI uniform
The analysis shows that random data samples and uniform sampling have a bias towards generating data samples that are classified with high confidence. NRFI dynamic automatically balances the number of decision trees and achieves an evenly distributed confidence distribution, i.e., it generates the most diverse data samples.
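The link between the number of sampled trees and the spread of predicted confidences can be illustrated with synthetic votes; this is not NRFI itself, and the uniformly random votes are an invented stand-in for real trees.

```python
# Synthetic ensemble: each "tree" votes for a random class, and the
# confidence is the fraction of votes for the majority class. With few
# trees the confidence is coarse and biased high; with many trees it
# concentrates near the base rate 1/N_CLASSES.
import random

random.seed(2)
N_CLASSES = 3

def confidence(n_trees):
    votes = [random.randrange(N_CLASSES) for _ in range(n_trees)]
    return max(votes.count(c) for c in range(N_CLASSES)) / n_trees

def avg(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

few = avg(confidence(3) for _ in range(2000))    # few trees per point
many = avg(confidence(51) for _ in range(2000))  # many trees per point
```

This is the mechanism that lets a dynamic tree count shape the confidence distribution: varying `n_trees` per data point moves samples between the high-confidence and low-confidence regimes.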
A
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understanding of policy optimization
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al.
C
As expected, the test accuracy increases gradually with high bit widths while the throughput decreases accordingly. Following the Pareto front starting from the bottom right indicates that the best performing models use a combination of 1 bit for the weights and a gradual increase of activations up to 3 bits.
Afterwards, the models perform best if the weights are scaled to 2 bits and the activation bit width is further increased to 4 bits. This supports the observation of the previous sections, namely that model accuracy is sensitive to activation quantization rather than weight quantization.
Targeting the same problem, Lin et al. (2023) introduced activation-aware weight quantization, which exploits the fact that the weights of large language models are not equally important. They propose to guide the selection of important weights by activation magnitudes (rather than weight magnitudes) and to protect salient weights
This architecture is identical to the original ResNet model except that it is scaled in width rather than depth. Additionally, we create a DenseNet variant for this experiment which is scaled in depth to 28 layers and the width is varied until it approximately matches the number of parameters and computations of the WR...
Quantized DNNs with 1-bit weights and activations are the worst performing models, which is due to the severe implications of extreme quantization on prediction performance. As can be seen, however, the overall performance of the quantized models increases considerably when the bit width of activations is increased to ...
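A generic symmetric uniform quantizer (a sketch, not the exact scheme evaluated here) makes the accuracy/bit-width trade-off concrete: squared quantization error shrinks as the bit width grows, and the 1-bit case degrades most.

```python
def quantize(xs, bits):
    # symmetric uniform quantizer; the 1-bit case degenerates to
    # sign(x) scaled by the mean absolute value
    if bits == 1:
        s = sum(abs(x) for x in xs) / len(xs)
        return [s if x >= 0 else -s for x in xs]
    levels = 2 ** (bits - 1) - 1          # positive levels around zero
    m = max(abs(x) for x in xs) or 1.0
    return [round(x / m * levels) / levels * m for x in xs]

w = [0.9, -0.45, 0.1, -0.02]              # toy weight vector

def sq_err(q):
    return sum((a - b) ** 2 for a, b in zip(w, q))
```

Running `sq_err` on 1-, 2-, and 4-bit quantizations of `w` reproduces the ordering in the text: each added bit strictly reduces the quantization error on this toy vector.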
A
Let $(X,d_{X})$ be a compact ANR metric space. Then, there exists $r(X)>0$ such that $\mathrm{VR}_{r}(X)$ is
A problem of interest in the area of persistent homology is that of deciding how much information from a metric space is captured by its associated persistent homology invariants. One basic (admittedly imprecise) question that we posed on page 1 is:
One of the insights leading to the notion of persistent homology associated to metric spaces was considering neighborhoods of a metric space in a nice (for example Euclidean) embedding [71]. In this section we formalize this idea in a categorical way.
The notion of persistent homology arose from work by Frosini, Ferri, and Landi [40, 41], Robins [74], and Edelsbrunner [27, 37] and collaborators. After that, considering the persistent homology of the simplicial filtration induced from Vietoris-Rips complexes was a natural next step. For example, Carlsson and de Silv...
From a different perspective, by appealing to our isomorphism theorem, it is also possible to apply certain results from quantitative topology to the problem of characterization of metric spaces by their Vietoris-Rips persistence barcodes. In applied algebraic topology, a general question of interest is:
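On a finite sample, the 1-skeleton of $\mathrm{VR}_r(X)$ is easy to compute; the sketch below uses the convention of connecting points at distance strictly less than $r$ (conventions vary between $<$ and $\leq$), with an invented four-point sample.

```python
# 1-skeleton of the Vietoris-Rips complex VR_r on a finite sample:
# an edge joins every pair of points at distance < r.
import math

def vr_edges(points, r):
    n = len(points)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if math.dist(points[i], points[j]) < r]

# four points on a unit square: sides have length 1, diagonals sqrt(2)
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Sweeping `r` past 1 and then past $\sqrt{2}$ changes the edge set from empty, to the 4-cycle (a nontrivial 1-dimensional class), to the complete graph, which is exactly the kind of scale-dependent information the persistence barcode records.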
A
The third option—Dimension Correlation—provides a tool for the user to check the hypothesis that a visual pattern, as observed, is strongly correlated to a pattern in the high-dimensional space (Subsection 4.4). The final mode—Reset Filters—removes every filter applied with the previously-described interaction modes.
To complement the main view, the Overview (Figure 1(b)) shows the static t-SNE projection and serves as a contextual anchor that is independent of the interactions and/or filters applied to the main view. Data-specific labels (when those exist) are shown using a categorical colormap, along with simple statistics about...
Clustervision [51] is a visualization tool used to test multiple batches of a varying number of clusters and allows the users to pick the best partitioning according to their task. Then, the dimensions are ordered according to a cluster separation importance ranking. As a result, the interpretation and assessment of t...
The results (i.e., relevances of each dimension) are finally shown in an interactive horizontal bar chart (Figure 1(j)), where the dimensions are sorted from top to bottom according to relevance (with the most relevant on the top). While the relevance is computed using the absolute value of the correlation, we decided...
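The relevance ranking described here can be sketched as follows, with an invented toy dataset and a user-selected `pattern` standing in for the visual pattern; only the absolute Pearson correlation and the descending sort come from the description above.

```python
# Rank dimensions by |Pearson correlation| with a selected pattern.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

data = {                       # toy dataset: dimension name -> values
    "dim_a": [1, 2, 3, 4, 5],
    "dim_b": [2, 1, 2, 1, 2],
    "dim_c": [5, 4, 3, 2, 1],
}
pattern = [1, 2, 3, 4, 5]      # stand-in for the observed visual pattern

relevance = {k: abs(pearson(v, pattern)) for k, v in data.items()}
ranked = sorted(relevance, key=relevance.get, reverse=True)
```

Taking the absolute value means a perfectly anti-correlated dimension (`dim_c`) ranks as high as a perfectly correlated one (`dim_a`), matching the sign-agnostic relevance described in the text.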
Figure 1: Visual inspection of t-SNE results with t-viSNE: (a) a panel for uploading data sets, choosing between two execution modes (grid search or a single set of parameters), and storing new (or loading previous) executions; (b) overview of the results with data-specific labels encoded with categorical colors; (c) t...
A
From a positive vision, bio-inspired algorithms have been regularly used in AI and real-world applications. These algorithms hold potential in new scientific avenues, contributing to recent advances in DL evolution [8], the design of large language models (LLM) [627], and more recently, the design and enrichment of GPA...
Both taxonomies and the analysis provide a full overview of the state of the bio-inspired optimization field. Moreover, Figure 1 reflects the growing research interest in this field, as the number of papers continues to increase. We believe that it is essential to highlight and reflect on what is expected
As we mentioned in the abstract, this fifth and last version of this series of documents ends with an analysis that addresses the double vision of a wide range of proposals which, after five years of analysis, must be said to border on a lack of attention to real problems and useful proposals, and o
In the last update of this report, which is herein released 4 years after its original version, we note that there has been an evolution within the nature and bio-inspired optimization field. There is an excessive use of the biological approach as opposed to the real problem-solving approach to tackle real and complex...
D
To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, $T$, is set to 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes more
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, a capability that $k$-means does not provide. Therefore,
Classical clustering models work poorly on large scale datasets. Instead, DEC and SpectralNet work better on the large scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph type datasets, they fail on the general datasets, which is probably caused by the fact that the graph...
(3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while other deep clustering models usually fail when the training set is not large enough. Besides, it is insensitive to different initializations of parameters and needs no pretraining.
(1) By extending generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec
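The alternation AdaGAE performs (train on the current graph, then rebuild the graph from the embedding) can be caricatured as follows; the "encoder" is a stand-in neighbor-averaging step, not an actual GAE, and the data and `k` are invented.

```python
# Schematic of the train-then-update-graph loop: build a k-NN graph
# from the current embedding, "train" (here: smooth points toward
# their graph neighbors), and repeat for T epochs.
import math

def knn_graph(X, k):
    n = len(X)
    return [sorted(range(n), key=lambda j: math.dist(X[i], X[j]))[1:k + 1]
            for i in range(n)]

def smooth(X, graph):
    # stand-in for one GAE training pass: move each point toward the
    # mean of itself and its graph neighbors
    out = []
    for i, nbrs in enumerate(graph):
        pts = [X[i]] + [X[j] for j in nbrs]
        out.append(tuple(sum(c) / len(pts) for c in zip(*pts)))
    return out

X = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
for epoch in range(3):              # T = 3 graph updates
    G = knn_graph(X, k=2)
    X = smooth(X, G)
```

Even this crude stand-in shows the qualitative behavior described above: with each graph update the two clusters contract while staying well separated, i.e., the embedding becomes more suitable for clustering.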
B
False negatives in our measurements mean that a network that does not perform filtering of spoofed packets is not marked as such. We next list the causes of false negatives for each of our three techniques. Essentially, the false negatives cannot be resolved, and therefore our measurement results for networks that enforce
IPID technique. Load balancing can introduce a challenge in identifying whether a given network enforces ingress filtering. As a result of load balancing our packets will be split between multiple instances of the server, hence resulting in low IPID counter values. There are different approaches for distributing the l...
There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th...
Each IP packet contains an IP Identifier (IPID) field, which allows the recipient to identify fragments of the same original IP packet. The IPID field is 16 bits in IPv4, and for each packet the Operating System (OS) at the sender assigns a new IPID value. There are different IPID assignment algorithms which can be ca...
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the...
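The counter arithmetic behind this technique is simple; the sketch below turns consecutive 16-bit IPID readings into per-interval packet counts, handling wraparound (in the real measurement one would also subtract the probes' own contribution). The sample values are invented.

```python
# Wrapped differences of consecutive 16-bit IPID readings: each delta
# approximates the number of packets the server sent in that interval.
def ipid_deltas(samples, modulus=2 ** 16):
    return [(b - a) % modulus for a, b in zip(samples, samples[1:])]

# e.g. one probe per second, with a counter wraparound at 65535
observed = [65530, 65534, 7, 20]
rates = ipid_deltas(observed)
```

The modular subtraction is what makes the wraparound from 65534 to 7 read as a small positive delta rather than a huge negative one.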
A
$$\begin{split}\mathbf{s}&=\mathrm{ReLU}(\mathbf{W}_{xs}\cdot\mathbf{x}+\mathbf{b}_{s}),\\ \mathbf{d}&=\mathrm{ReLU}(\cdots\,\mathbf{h}_{p-1}+\mathbf{b}_{d}),\\ \hat{\mathbf{y}}&=\mathbf{W}_{dy}\cdot\mathbf{d}+\mathbf{b}_{y}.\end{split}$$
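The readout equations above amount to ReLU layers feeding a linear output; the following is a pure-Python sketch with invented weights, in which the hidden stack between $\mathbf{s}$ and $\mathbf{h}_{p-1}$ is collapsed into one identity step.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def linear(W, v, b):
    # W is a list of rows; returns W . v + b
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(W, b)]

x = [1.0, -2.0]                            # toy input
W_xs, b_s = [[1.0, 0.5], [-1.0, 1.0]], [0.0, 0.0]
W_dy, b_y = [[1.0, 1.0]], [0.1]

s = relu(linear(W_xs, x, b_s))   # s = ReLU(W_xs . x + b_s)
d = s                            # hidden layers h_1..h_{p-1} collapsed
y_hat = linear(W_dy, d, b_y)     # y_hat = W_dy . d + b_y
```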
TABLE I: Mean generalization accuracy. Listed is the classification accuracy (correct / total) of various models evaluated on the unseen testing data, i.e., batch $T$. The values represent the average accuracy over 30 trials. The final column lists the mean of the values for batches 3 through 10. A bolded value
The second comparison is between the weighted ensembles of SVMs, i.e., the state of the art [7], and the weighted ensembles of neural networks. For each batch, an SVM and a neural network were trained with that batch as the training set. Weighted ensembles were constructed for each batch $T$ by assigning weights
For each batch $T$ from 3 through 10, the batches $1,2,\ldots,T-1$ were used to train skill NN and context+skill NN models for 30 random initializations of the starting weights. The accuracy was measured by classifying examples from batch $T$ (Fig. 3A, Table 1, Skill
Figure 3: Generalization accuracy. The generalization accuracy of each model was evaluated on batch $T$. For each model type and every batch, 30 models were trained. The line represents the average over the 30 trials, and the error bar is the 95% confidence interval. (A.) The skill and context+skill models are
A
Third, we gave a $2^{O(\delta^{1-1/d})}n$ expected-time algorithm for random point sets.
Let $Y_{n}$ be a random point set of $n$ points in $\mathbb{R}^{d}$, where the spacings $\Delta_{i}=x_{i+1}-x_{i}$
Let $X_{n}$ be a random point set of $n$ points in $\mathbb{R}^{d}$, where the $x$-coordinates of the points are taken independently and uniformly at random from
The proof also gives a way to relate the expected running times of algorithms for any problem on two different kinds of random point sets: a version where the $x$-coordinates of the points are taken uniformly at random from $[0,n]$, and a version where the differences between two consecutive
Random point sets. In the third scenario the points in $P$ are drawn independently and uniformly at random from the hypercylinder $[0,n]\times\mathrm{Ball}^{d-1}(\delta/2)$
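The two random models can be compared empirically: $x$-coordinates drawn uniformly from $[0,n]$ versus points accumulated from i.i.d. Exp(1) spacings. Both give average spacing close to 1, which is the basic fact the relation between the two models rests on (the sample size here is invented).

```python
import random

random.seed(3)
n = 2000

# model 1: x-coordinates uniform in [0, n], then sorted
uniform_x = sorted(random.uniform(0, n) for _ in range(n))
spacings_u = [b - a for a, b in zip(uniform_x, uniform_x[1:])]

# model 2: consecutive spacings Delta_i ~ Exp(1), accumulated
expo_x, t = [], 0.0
for _ in range(n):
    t += random.expovariate(1.0)
    expo_x.append(t)
spacings_e = [b - a for a, b in zip(expo_x, expo_x[1:])]
```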
C
The free product of two semigroups $R=\langle P\mid\mathcal{R}\rangle$ and $S=\langle Q\mid\mathcal{S}\rangle$ (with $P\cap Q=\emptyset$) is the semigroup with presentation
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. While t...
While the question of which free groups and semigroups can be generated using automata is settled, there is a related natural question which is still open: is the free product of two automaton/self-similar (semi)groups again an automaton/self-similar (semi)group? The free product of two groups or semigroups $X=\langle P\mid\mathcal{R}\rangle$
Note that there is a difference between the free product in the category of semigroups and the free product in the category of monoids or groups. In particular, in the semigroup free product (which we are exclusively concerned with in this paper) there is no amalgamation over the identity element of two monoids. Thus, ...
While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there ...
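A toy model of the semigroup free product makes the alternating-sequence normal form concrete; here both factors are $(\mathbb{Z},+)$, tagged `"R"`/`"S"` to keep them disjoint, and multiplication merges the seam when both ends come from the same factor. This is purely illustrative and unrelated to the self-similarity question itself.

```python
# Elements of R * S as nonempty alternating sequences of tagged factor
# elements; the product concatenates and, when the seam elements come
# from the same factor, multiplies them there (no amalgamation over an
# identity, matching the semigroup free product).
def fp_mul(u, v):
    u, v = list(u), list(v)
    if u[-1][0] == v[0][0]:            # same factor at the seam: merge
        tag, a = u[-1]
        _, b = v[0]
        return u[:-1] + [(tag, a + b)] + v[1:]
    return u + v

x = [("R", 1), ("S", 2)]
y = [("S", 3), ("R", 4)]
z = fp_mul(x, y)                       # the S-seam elements 2 and 3 merge
```

A quick spot check of associativity on these elements confirms the normal-form multiplication is consistent.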
C
As shown in Table 1, we present results when this loss is used on: a) a fixed subset covering 1% of the dataset, b) a varying subset covering 1% of the dataset, where a new random subset is sampled every epoch, and c) 100% of the dataset. Confirming our hypothesis, all variants
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible...
It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model were actually visually grounded, then we would expect it to improve performance on both train and test sets. We do not observe such behavior in
While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented...
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the p...
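The zero-out idea can be sketched with binary cross-entropy: replacing the target vector with all zeros penalizes every confident prediction, correct or not. The prediction values below are invented for illustration.

```python
# Binary cross-entropy against a zeroed-out target always penalizes
# confidence, so the model cannot profit from linguistic priors on the
# regularized examples.
import math

def bce(target, pred, eps=1e-7):
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(target, pred))

pred = [0.9, 0.05, 0.05]            # model is confident in answer 0
normal_loss = bce([1, 0, 0], pred)  # standard target: answer 0 correct
zeroed_loss = bce([0, 0, 0], pred)  # regularized target: all zeros
```

Even when the confident answer is correct, the zeroed target yields a much larger loss, which is the "always penalizing" property used to force the model to forget training priors.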
B
Table 2 shows the results for the data practice classification task comparing the performance between RoBERTa, PrivBERT and Polisis (Harkous et al., 2018), a CNN based classification model. We report reproduced results for Polisis since the original paper takes into account both the presence and absence of a label whil...
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da...
The 1,600 labelled documents were randomly divided into 960 documents for training, 240 documents for validation and 400 documents for testing. Using 5-fold cross-validation, we tuned the hyperparameters for the models separately with the validation set and then used the held-out test set to report the test results. D...
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application...
Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague word...
C
The use of visualization for ensemble learning could possibly introduce further biases to the already blurry situation based on the different ML models involved. Thus, the thorough selection of both interaction techniques and visual representations that highlight and potentially overcome any cognitive biases is a major...
Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis. (a) presents the selection of appropriate validation metrics for the specification of the data set. (b) aggregates the information after the exploration of different models and shows the active ones which will be used for th...
T1: Search the solution space for the most suitable algorithms, data, and models for the task. Some of the major challenges of stacking are the choice of the most suitable algorithms and models, the data processing necessary for the selected models, further improvements for the models, and reduction of the complexity o...
Predictions’ Space. The goal of the predictions’ space visualization (f) is to show an overview of the performance of all models of the current stack for different instances.
The model exploration phase is perhaps the most important step on the way to build a good ensemble. It focuses on comparing and exploring different models both individually and in groups. Due to the page limits, we now assume that we selected the most performant models, removed the remaining from the stack, and reached...
B
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
$(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))$, $(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))$
cannot be adjacent to $\overline{2}$ nor $\overline{3}$, and so $f^{\prime}$ is $[013]$ or $[010]$.
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
D
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personali...
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
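Perplexity (PPL) is the exponential of the mean token-level negative log-likelihood. As a hedged refresher (the token probabilities below are invented, and this is not the evaluation code of the paper):

```python
import math

def perplexity(token_probs):
    """PPL = exp of the average negative log-likelihood per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Four tokens with model-assigned probabilities for the reference tokens.
probs = [0.25, 0.5, 0.125, 0.5]
print(round(perplexity(probs), 4))
```

Lower PPL means the model assigns higher probability to the reference response; BLEU complements it by comparing n-gram overlap.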
In the field of Natural Language Processing (NLP), the abundance of training data plays a crucial role in the performance of deep learning models [Dodge et al., 2021]. However, numerous NLP applications face a substantial challenge due to the scarcity of annotated data [Schick and Schütze, 2021]. For example, in person...
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as “tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla ...
We use the Transformer [Vaswani et al., 2017] as the base model in the dialogue generation experiment. In Persona, we use pre-trained GloVe embeddings [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019].
D
In this paper, we consider a dynamic mission-driven UAV network with UAV-to-UAV mmWave communications, wherein multiple transmitting UAVs (t-UAVs) simultaneously transmit to a receiving UAV (r-UAV). In such a scenario, we focus on inter-UAV communications in UAV networks, and the UAV-to-ground communications are not in...
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da...
The specialized codebook design of the DRE-covered CCA for multi-UAV mobile mmWave communications. Under the guidance of the proposed framework, a novel hierarchical codebook is designed to encompass both the subarray patterns and beam patterns. The newly proposed CA codebook can fully exploit the potentials of the DR...
For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac...
When considering UAV communications with UPA or ULA, a UAV is typically modeled as a point in space, without considering its size and shape. In fact, the size and shape can be exploited to support a more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduce...
A
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
After the merging, the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$. We perform the...
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
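The equality that the 1-color Presburger formula expresses is simply a double-counting identity: every edge is counted once from each side of the bipartition. A toy check (the realization step via distributing and merging is not shown; the degree sequences are invented):

```python
# A bipartite degree sequence is realizable only if both sides account
# for the same total number of edges -- the necessary condition the
# 1-color Presburger formula expresses.

def edge_counts_match(degrees_A, degrees_B):
    return sum(degrees_A) == sum(degrees_B)

print(edge_counts_match([2, 2, 1], [3, 2]))  # 5 edges from either side
print(edge_counts_match([2, 2], [3, 2]))     # 4 vs 5: not realizable
```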
B
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et...
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe...
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
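The "gradient as an evolving feature representation" idea can be sketched with a semi-gradient TD(0) step that uses the local gradient of a nonlinear value function as its feature vector. The quadratic value function, the single transition, and the step sizes below are illustrative inventions, not the setting of the cited papers:

```python
def value(theta, s):
    # A toy nonlinear-in-s (but linear-in-theta) value function.
    return theta[0] * s + theta[1] * s * s

def grad(theta, s):
    # Gradient w.r.t. theta, used as the local feature representation.
    return [s, s * s]

def td0_update(theta, s, r, s_next, gamma=0.9, alpha=0.1):
    # Semi-gradient TD(0): move theta along the TD error times the
    # linearized features at the current parameter.
    delta = r + gamma * value(theta, s_next) - value(theta, s)
    feats = grad(theta, s)
    return [t + alpha * delta * f for t, f in zip(theta, feats)]

theta = [0.0, 0.0]
theta = td0_update(theta, s=1.0, r=1.0, s_next=0.5)
print(theta)  # -> [0.1, 0.1]
```

Because the features are re-derived from the current parameter at every iteration, the representation evolves with training, which is the point the paragraph makes about nonlinear gradient TD.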
corresponding to $\theta^{(m)}(k)=(\theta_{1}(k),\ldots,\theta_{m}(k))\in\mathbb{R}^{D\times m}$...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
B
We implemented our approach based on the Neutron implementation of the Transformer Xu and Liu (2019). To show the effects of depth-wise LSTMs on the 6-layer Transformer, we first conducted experiments on the WMT 14 English to German and English to French news translation tasks to compare with the Transformer baseline ...
We applied joint Byte-Pair Encoding Sennrich et al. (2016) with $32k$ merging operations on all data sets to address the unknown word issue. We only kept sentences with a maximum of 256 subword tokens for training. For fair comparison, we did not tune any hyperparameters but followed Vaswani e...
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ...
For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et...
To test the effectiveness of depth-wise LSTMs in the multilingual setting, we conducted experiments on the challenging massively many-to-many translation task on the OPUS-100 corpus Tiedemann (2012); Aharoni et al. (2019); Zhang et al. (2020). We tested the performance of 6-layer models following the experiment settin...
A
$\llbracket\varphi\rrbracket_{Z}=\bigcup_{i\in I}\llbracket\psi_{i}\rrbracket_{Z}\cup\bigcup_{i<j\in I}\llbracket\theta_{i,j}\rrbracket_{Z}$
$\left\langle X,\uptau,\mathcal{L}\right\rangle$. Hence $\langle\mathcal{L}^{\prime}\rangle=\langle\uptau\cap\mathcal{L}\rangle$.
$\uptau_{\mathcal{L}}\triangleq\langle\mathcal{L}\cup\left\{U^{c}\mid U\in\mathcal{L}\right\}\rangle$
$\left\langle Y,\uptau,\mathcal{L}_{Y}\right\rangle$. If $\mathcal{L}^{\prime}$ is a sublattice of $\mathcal{L}$ and $\mathcal{L}^{\prime}_{X}$...
$\mathcal{L}$ of $\wp(Z)$, we write $\mathcal{L}_{X}\triangleq\{U\cap X\mid U\in\mathcal{L}\}$ for the lattice induce...
B
The comparison results of the real distorted image are shown in Fig. 13. We collect the real distorted images from the videos on YouTube, captured by popular fisheye lenses, such as the SAMSUNG 10mm F3, Rokinon 8mm Cine Lens, Opteka 6.5mm Lens, and GoPro. As illustrated in Fig. 13, our approach generates the best rect...
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify...
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, including the highest metrics on PSNR and SSIM, as well as the lowest metric on MDLD. Specifically, compared with the traditional methods [23, 24] based on the hand-crafted features, our approach overcomes the scene l...
In this work, we presented a new learning representation for the deep distortion rectification and implemented a standard and widely-used camera model to validate its effectiveness. The rectification results on the synthesized and real-world scenarios also demonstrated our approach’s superiority compared with the stat...
In this section, we first state the details of the synthetic distorted image dataset and the training process of our learning model. Subsequently, we analyze the learning representation for distortion estimation. To demonstrate the effectiveness of each module in our framework, we conduct an ablation study to show the ...
C
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
Hence, with the same number of gradient computations, SNGM can adopt a larger batch size than MSGD to converge to the $\epsilon$-stationary point. Empirical results on deep learning further verify that SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods...
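A generic sketch of momentum with gradient normalization, in the spirit of SNGM but not necessarily the paper's exact update rule (the function name, step sizes, and toy gradient are all invented): the mini-batch gradient is scaled to unit norm before entering the momentum buffer, which keeps the step length insensitive to gradient-magnitude blow-ups in large-batch training.

```python
import math

def normalized_momentum_step(w, grad, buf, lr=0.1, beta=0.9):
    # Normalize the stochastic gradient, then apply heavy-ball momentum.
    norm = math.sqrt(sum(g * g for g in grad)) or 1.0
    buf = [beta * b + g / norm for b, g in zip(buf, grad)]
    w = [wi - lr * bi for wi, bi in zip(w, buf)]
    return w, buf

w, buf = [1.0, 1.0], [0.0, 0.0]
w, buf = normalized_momentum_step(w, [3.0, 4.0], buf)
print(w)
```

With gradient $(3,4)$ of norm 5, the first step moves each coordinate by $\mathrm{lr}$ times the normalized component, regardless of how large the raw gradient was.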
Table 3 shows the training time per epoch of SNGM with different batch sizes. When $B=128$, SNGM has to execute communication frequently, and each GPU only computes a mini-batch gradient with the size of 16, which cannot fully utilize the computation power. Hence, compared to other results, SNGM r...
D
5-approximation for homogeneous 2S-MuSup-Poly, with $|\mathcal{S}|\leq 2^{m}$ and runtime $\operatorname{poly}(n,m,\Lambda)$.
If we have a $\rho$-approximation algorithm for AlgRW for given $\mathcal{C},\mathcal{F},\mathcal{M},R$, then we can get an efficiently-generalizable $(\rho+2)$-approximation algorithm for the corresponding problem $\mathcal{P}$...
We follow up with 3-approximations for the homogeneous robust outlier MatSup and MuSup problems, which are slight variations on algorithms of [6] (specifically, our approach in Section 4.1 is a variation on their solve-or-cut methods). In Section 5, we describe a 9-approximation algorithm for an inhomogeneous MatSu...
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any $\rho$-approximation algorithm for the robust outlier problem into a $(\rho+2)$-approximation algorithm for the corresponding two-stage sto...
We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a...
C
That is, the mean square error at the next time can be controlled by that at the previous time and the consensus error. However, this cannot be obtained for the case with linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded, and the inequality (28) in [15] does n...
As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between the local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error, which leads the nonnegative supermartingale converg...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi...
(Lemma 3.1). To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (...
III. The co-existence of random graphs, subgradient measurement noises, and additive and multiplicative communication noises is considered. Compared with the case with only a single random factor, the coupling terms of different random factors inevitably affect the mean square difference between optimizers’ states and an...
A
$attribute\_weights$ denotes the weights of QI attributes, and each weight is calculated by $dis_{max}(attr$...
This section evaluates the effectiveness of the proposed MuCo algorithm. We apply Mondrian [14], which is one of the most effective generalization approaches, and Anatomy [33], which always preserves the best information utility, as the baselines. We use the US Census data [29], eliminate the tuples with missing value...
Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ...
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, so that the equivalence groups only preserv...
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i...
A
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32, respectively, for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
Due to limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger mask to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (20...
D
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
We denote by $\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_{i}(\delta_{1},\ldots,\delta_{n})=\delta_{i}$...
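The projections $\varepsilon_i$ generate the Fourier-Walsh characters $\prod_{i\in S}\varepsilon_i$, and any $f:\{-1,1\}^n\to\mathbb{R}$ expands as $f=\sum_S \hat f(S)\prod_{i\in S}\varepsilon_i$. A small numeric illustration (the function and helper names are ours, not from the note):

```python
from itertools import product

def fourier_coefficient(f, S, n):
    """Walsh-Fourier coefficient fhat(S) = E_x[ f(x) * prod_{i in S} x_i ]."""
    total = 0.0
    for x in product([-1, 1], repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]          # the character prod_{i in S} eps_i(x)
        total += f(x) * chi
    return total / 2 ** n

f = lambda x: x[0] * x[1]        # parity on two coordinates
print(fourier_coefficient(f, (0, 1), 2))  # all weight on S = {0, 1}
print(fourier_coefficient(f, (0,), 2))    # no weight on S = {0}
```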
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
In version 1 of this note, which can still be found on the arXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s...
D
The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and...
In this section, we derive minimax regret lower bounds for nonstationary linear MDPs in both inhomogeneous and homogeneous settings, which quantify the fundamental difficulty when measured by the dynamic regret in nonstationary linear MDPs. More specifically, we consider inhomogeneous setting in this paper, where the t...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202...
The rest of the paper is organized as follows. Section 2 presents our problem definition. Section 3 establishes the minimax regret lower bound for nonstationary linear MDPs. Section 4 and Section 5 present our algorithms LSVI-UCB-Restart, Ada-LSVI-UCB-Restart and their dynamic regret bounds. Section 6 shows our experi...
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inh...
C
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,...
B
Figure 1: A comparison between KG embedding and word embedding. Left: the KG and the sentence contain the same information. Center: the triplet-based models are similar to Skip-gram where each neighbor embedding is used to predict the central element. Right: the GNN-based models resemble CBOW where all neighbor embedd...
The performance of decentRL at the input layer notably lags behind that of other layers and AliNet. As discussed in previous sections, decentRL does not use the embedding of the central entity as input when generating its output embedding. However, this input embedding can still accumulate knowledge by participating i...
Drawing inspiration from the CBOW schema, we propose Decentralized Attention Network (DAN) to distribute the relational information of an entity exclusively over its neighbors. DAN retains complete relational information and empowers the induction of embeddings for new entities. For example, if W3C is a new entity, its...
Consider the instance of encoding the relational information of the entity W3C into an embedding. All relevant information is structured in the form of triplets, such as $(\textit{RDF},\textit{developer},\textit{W3C})$. Removing the self-entity W3C does not comp...
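The decentralized (CBOW-like) idea can be sketched as inducing an entity's embedding purely from its neighbors' embeddings, with no self-embedding involved, so a new entity such as W3C can still receive an embedding from neighbors like RDF. Uniform averaging below is a stand-in for DAN's learned attention weights, which are not reproduced here; the toy vectors are invented.

```python
def induce_embedding(neighbor_embeddings):
    """Embed an entity as the mean of its neighbors' vectors (CBOW-style)."""
    dim = len(neighbor_embeddings[0])
    n = len(neighbor_embeddings)
    return [sum(vec[d] for vec in neighbor_embeddings) / n for d in range(dim)]

neighbors = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy neighbor vectors
print(induce_embedding(neighbors))
```

Because the aggregation never reads the central entity's own vector, it applies unchanged to entities unseen at training time.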
Within the realm of encoding relational information, it becomes pertinent to question the necessity of incorporating the self-entity when aggregating neighborhood information. In this paper, we delve into this question and find that, at least concerning encoding relational information, the answer may lean towards the n...
D
Optimization detail. We update the parameters of VDM for $t_{\rm vdm}$ times after each episode by using the Adam optimizer with a learning rate of $10^{-4}$. The hyper-parameter $t_{\rm vdm}$...
We first evaluate our method on standard Atari games. Since different methods utilize different intrinsic rewards, the intrinsic rewards are not applicable for measuring the performance of the trained purely exploratory agents. As an alternative, we follow [11, 13] and use the extrinsic rewards given by the environment to ...
We observe that our method performs the best in most of the games, in both the sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are ...
In this work, we consider self-supervised exploration without extrinsic reward. In such a case, the above trade-off narrows down to a pure exploration problem, aiming at efficiently accumulating information from the environment. Previous self-supervised exploration typically utilizes ‘curiosity’ based on prediction-err...
Figure 6: The evaluation curve in Atari games. The first 6 games are hard exploration tasks. The different methods are trained with different intrinsic rewards, and extrinsic rewards are used to measure the performance. Our method performs best in most games, both in learning speed and quality of the final policy. The ...
A
To date, the classic Gauss quadrature formula remains the best approach to approximating integrals $I_{\mathrm{Gauss}}(f)\approx\int_{\Omega}f(x)\,\mathrm{d}x$...
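As a concrete reminder of why Gauss quadrature is the benchmark: with $k$ nodes, Gauss-Legendre quadrature on $[-1,1]$ is exact for all polynomials up to degree $2k-1$. A minimal check (this is standard NumPy usage, not the authors' code):

```python
import numpy as np

# 3-node Gauss-Legendre rule: exact for polynomials up to degree 5,
# so it integrates x^4 over [-1, 1] exactly (true value 2/5).
nodes, weights = np.polynomial.legendre.leggauss(3)
approx = float(np.sum(weights * nodes**4))
print(round(approx, 10))  # -> 0.4
```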
However, we only use the $P_{A}$, $A=A_{m,n,p}$, $p=1,2$, unisolvent nodes to determine the interpolants, whereas Tr...
We complement the established notion of unisolvent nodes by the dual notion of unisolvence. That is: for given arbitrary nodes $P$, determine the polynomial space $\Pi$ such that $P$ is unisolvent with respect to $\Pi$. In doing so, we revisit earlier results by Carl de Boor and Amon Ros...
Leslie Greengard, Christian L. Mueller, Alex Barnett, Manas Rachh, Heide Meissner, Uwe Hernandez Acosta, and Nico Hoffmann are deeply acknowledged for their inspiring hints and helpful discussions. Further, we are grateful to Michael Bussmann and thank the whole CASUS institute (Görlitz, Germany) for hosting stimulatin...
convergence rates for the Runge function, as a prominent example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with the space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality....
C
Classical tests (see, e.g., [12]) mainly follow parametric approaches, which are designed based on prior information about the distributions under each class. Examples of classical tests include Hotelling's two-sample test [13] and Student's t-test [14].
Several data-efficient two-sample tests [20, 21, 22] are constructed based on Maximum Mean Discrepancy (MMD), which quantifies the distance between two distributions by introducing test functions in a Reproducing Kernel Hilbert Space (RKHS). However, it is pointed out in [23] that when the bandwidth is chosen based on ...
Given collected samples $x^{n}$ and $y^{m}$, a non-parametric two-sample test is usually constructed based on IPMs, which quantify the discrepancy between the associated em...
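For the MMD instance of this idea, the (biased, V-statistic) empirical estimate has a closed form in terms of kernel sums. A hedged one-dimensional sketch with an RBF kernel; the bandwidth choice and the tiny samples are invented for illustration:

```python
import math

def rbf(a, b, sigma):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-(a - b) ** 2 / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    """Biased V-statistic estimate of MMD^2 between two scalar samples."""
    k = lambda us, vs: sum(rbf(u, v, sigma) for u in us for v in vs)
    n, m = len(xs), len(ys)
    return k(xs, xs) / n**2 + k(ys, ys) / m**2 - 2 * k(xs, ys) / (n * m)

same = mmd2([0.0, 1.0], [0.0, 1.0])      # identical samples: MMD^2 = 0
shifted = mmd2([0.0, 1.0], [5.0, 6.0])   # shifted samples: MMD^2 > 0
print(same <= shifted)
```

As [23] notes, the behavior of such a test hinges on how the bandwidth $\sigma$ is chosen; the fixed $\sigma=1$ here is purely for the sketch.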
In this paper, we consider non-parametric two-sample testing, in which no prior information about the unknown distribution is available. Two-sample tests for non-parametric settings are usually constructed based on some metrics quantifying the distance between two distributions.
D
The framework is general and can utilize any DGM. Furthermore, even though it involves two stages, the end result is a single model which does not rely on any auxiliary models, additional hyper-parameters, or hand-crafted loss functions, as opposed to previous works addressing the problem (see Section LABEL:sec:related...
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and corre...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervise...
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$...
To simulate the aforementioned structural computer theory, a device in the form of a USB connection was used. However, as the circuit grows in size, a number of USB-connected simulation devices are required, resulting in cost problems. We decided to verify that the structural computer theory presented so far is actually working…
If a pair of lines of the same color is connected, the state is 1; if broken, 0. The pair of states of the red line ($\alpha$) and the blue line ($\beta$) determines the transmitted digital signal. Thus, signal cables require one transistor for switching action at the end. When introducing the concept of an inve…
We examine the inputs through 18 test cases to see if the circuit is acceptable. Next, we verify with DFS that the output is feasible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab…
The graph described in Fig. 4 is an implementation of an XOR gate combining NAND and OR, expressed with 33 vertices and 46 edges. Graphs are annotated with red and blue numbers for edges without direction (edges that can be traversed in both directions) and edges with direction (the ma…
Optical logic aggregates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the number shown in Fig. 5 can be applied to the ...
\[
=3(x)+3(x^{3}+2x^{2}+3x+3)+4(2x^{3}+3x^{2}+4x+2)
\]
In this section, we focus on additional results on the linear representation of $f$ when $f$ is a monomial function. The following theorem re-establishes the invertibility condition of a monomial while adding additional results on the linear complexity.
The paper is organized as follows. Section 2 focuses on linear representation for maps over finite fields $\mathbb{F}$, develops conditions for invertibility, computes the compositional inverse of such maps and estimates the cycle structure of permutation polynomials. In Section 3, this linear representat…
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over $\mathbb{F}$…
We now show that whenever $f$ is a permutation function in $\mathbb{F}_{q}$, the inverse function can be represented similarly over the same space $S$. First, we prove a condition of invertibility of $f$ in terms of the …
Typically $B$ is set to 50, but the choice of $q$ and $\pi_{\text{thr}}$ is somewhat more involved. In particular, one can obtain a bound on the expected number of falsely selected variables, the so-called per-family error rate (PFER),…
The true positive rate in view selection for each of the meta-learners can be observed in Figure 2. Ignoring the interpolating predictor for now, nonnegative ridge regression has the highest TPR, which is unsurprising seeing as it performs feature selection only through its nonnegativity constraints. Nonnegative ridge...
Stability selection is an ensemble learning framework originally proposed for use with the lasso (Meinshausen & Bühlmann, 2010), although it can be used with a wide variety of feature selection methods (Hofner et al., 2015). The basic idea of stability selection is to apply a feature selection m…
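The resampling idea can be sketched in a few lines. The base selector here (a hypothetical top-$q$ correlation filter), the number of subsamples, and the threshold are illustrative stand-ins for the lasso-based setup of Meinshausen and Bühlmann.

```python
import numpy as np

def stability_selection(X, y, select, n_subsamples=50, threshold=0.6, seed=0):
    """Stability-selection wrapper (sketch): run a base feature selector
    `select(X, y) -> iterable of column indices` on many half-size
    subsamples and keep features chosen in at least `threshold` of runs."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        for j in select(X[idx], y[idx]):
            counts[j] += 1
    freq = counts / n_subsamples          # empirical selection probability
    return np.flatnonzero(freq >= threshold), freq

# Hypothetical base selector: top-q features by |correlation| with y.
def top_q_corr(X, y, q=3):
    r = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(r)[-q:]
```

Features that are truly relevant tend to be selected in almost every subsample, while noise features are selected only sporadically, which is what the threshold exploits.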
In this article we investigate how the choice of meta-learner affects the view selection and classification performance of MVS. We compare the following meta-learners: (1) the interpolating predictor of Breiman (1996), (2) nonnegative ridge regression (Hoerl & Kennard, 1970; Le Cessie & Van Hou…
Forward selection is a simple, greedy feature selection algorithm (Guyon & Elisseeff, 2003). It is a so-called wrapper method, which means it can be used in combination with any learner (Guyon & Elisseeff, 2003). The basic strategy is to start with a model with no features, and then add the sing…
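A minimal sketch of this greedy strategy, using least-squares residual error as the (assumed) wrapper scoring criterion; any learner could be substituted.

```python
import numpy as np

def forward_selection(X, y, max_features=3):
    """Greedy forward selection (sketch): start from the empty set and
    repeatedly add the single feature that most reduces the residual sum
    of squares of a linear model fit on the chosen features."""
    n, p = X.shape
    chosen, remaining = [], set(range(p))

    def rss(cols):
        # Least-squares fit with an intercept on the candidate columns.
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sum((y - A @ beta) ** 2)

    while remaining and len(chosen) < max_features:
        best = min(remaining, key=lambda j: rss(chosen + [j]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```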
LogDP [7] is a semi-supervised log-based anomaly detection approach tailored for large-scale service-oriented systems troubleshooting. LogDP leverages the dependency relationships among log events and proximity among log sequences to identify anomalies in extensive unlabeled log data. By categorizing events, learning ...
This phase can utilize off-the-shelf feature selection methods [29, 30] to identify the relevant variables. When choosing a feature selection method, the following factors should be considered: (1) The prediction models used in the prediction model training phase; (2) The interpretability of the selected variables; an...
To address these gaps, this paper introduces a Dependency-based Anomaly Detection framework (DepAD) to provide a general approach to dependency-based anomaly detection. For each phase of the DepAD framework, this paper analyzes what and how to utilize the off-the-shelf techniques in the context of anomaly detection. We...
A common way of examining dependency deviations in the dependency-based approach is to check the difference between the observed value and the expected value of an object, where the expected value is estimated based on the underlying dependency between variables [7, 4, 5]. Thus, dependency-based approach naturally lead...
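A minimal sketch of this observed-vs-expected scheme, using ordinary least squares as a hypothetical stand-in for the trained prediction models a DepAD-style pipeline would use:

```python
import numpy as np

def dependency_anomaly_scores(X):
    """Dependency-based anomaly score (sketch): for each variable, predict
    its expected value from the remaining variables with OLS, and score
    each object by its summed normalized deviation |observed - expected|."""
    n, p = X.shape
    dev = np.zeros((n, p))
    for j in range(p):
        # Expected value of variable j given all other variables.
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        expected = others @ beta
        resid = X[:, j] - expected
        dev[:, j] = np.abs(resid) / (np.std(resid) + 1e-12)
    return dev.sum(axis=1)  # higher score = stronger dependency deviation
```

An object that breaks the dependency structure (e.g. an otherwise normal value placed in the wrong relation to its correlated variables) receives a large residual even if each of its values is individually unremarkable.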
Recent research in the data-centric AI (DCAI) domain has leveraged anomaly detection techniques to identify out-of-distribution samples and data inconsistencies [20, 21, 22]. A representative method, DAGnosis [21], uses a dependency-based approach to effectively detect and interpret inconsistencies. This method employs …
At the start of the interaction, when no contexts have been observed, $\hat{\theta}_{t}$ is well-defined by Eq. (5) when $\lambda_{t}>0$. Therefore, th…
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
Algorithm 1 follows the template of optimism in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. Technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
where pessimism is the additive inverse of optimism (the difference between the payoffs under the true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that $\theta_{*}\in C_{t}(\delta)$…
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a m...
The second category is to use sliding windows to crop the original video into multiple input sequences. This can preserve the original information of each frame. The works R-C3D [42], TAL-NET [9], PBRNet [24], belonging to this category, perform pooling / strided convolution to obtain multi-scale features. Compared to...
Compared to these methods, our VSGN builds a graph on video snippets as G-TAD does, but differently: beyond modelling snippets from the same scale, VSGN also exploits correlations between cross-scale snippets and defines a cross-scale edge to break the scaling curse. In addition, our VSGN contains multiple-level graph neur…
Graph neural networks (GNN) are a useful model for exploiting correlations in irregular structures [17]. As they become popular in different computer vision fields [13, 38, 40], researchers also find their application in temporal action localization [3, 44, 46]. G-TAD [44] breaks the restriction of temporal locations o...
We illustrate these two types of edges in Fig. 4. We make $K/2$ edges of a node free edges, which are determined only based on feature similarity between nodes, without considering the source clips. We measure the feature similarity between two nodes $v_{t}$…
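A sketch of selecting a node's free edges purely by feature similarity; cosine similarity and the tie-handling are assumptions for illustration, not necessarily the paper's exact measure.

```python
import numpy as np

def free_edges(features, k):
    """Connect each node to its k/2 most cosine-similar other nodes,
    regardless of which clip/scale the nodes come from (sketch)."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    sim = f @ f.T                      # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)     # disallow self-edges
    nbrs = np.argsort(sim, axis=1)[:, -(k // 2):]
    return [(t, int(j)) for t in range(len(f)) for j in nbrs[t]]
```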
Cross-scale graph network. The xGN module contains a temporal branch to aggregate features in a temporal neighborhood, and a graph branch to aggregate features from intra-scale and cross-scale locations. Then it pools the aggregated features into a smaller temporal scale. Its architecture is illustrated in Fig. 4. The ...
Thereafter in Section 5, we demonstrate the applicability and usefulness of VisEvol with another real-world data set focusing on biodegradation of molecules. Next, in Section 6, we review the feedback our VA tool obtained during the interview sessions by summing up the experts’ opinions and the limitations that guide u...
Numerous techniques exist that try to solve this challenge, such as the well-known grid search, random search [BB12], and Bayesian optimization that belong to the generic type of sequential-based methods [BBBK11, SSW∗16]. Other proposed methods include bandit-based approaches [FKH18, LJD∗17], population-based methods [...
Visualization tools have been implemented for sequential-based, bandit-based, and population-based approaches [PNKC21], and for more straightforward techniques such as grid and random search [LCW∗18]. Evolutionary optimization, however, has not experienced similar consideration by the InfoVis and VA communities, with t...
In this paper, we presented VisEvol, a VA tool with the aim to support hyperparameter search through evolutionary optimization. With the utilization of multiple coordinated views, we allow users to generate new hyperparameter sets and store the already robust hyperparameters in a majority-voting ensemble. Exploring th...
One common focus of related work is the hyperparameter search for deep learning models. HyperTuner [LCW∗18] is an interactive VA system that enables hyperparameter search by using a multi-class confusion matrix for summarizing the predictions and setting user-defined ranges for multiple validation metrics to filter out...
There are comprehensive survey papers that review the research on consensus protocols [19, 20, 21, 22]. In many scenarios, the network topology of the consensus protocol is a switching topology due to failures, formation reconfiguration, or state-dependence. There is a large number of papers that propose consensus prot...
Another algorithm is proposed in [28] that assumes the underlying switching network topology is ultimately connected. This assumption means that the union of graphs over an infinite interval is strongly connected. In [29], previous works are extended to solve the consensus problem on networks under limited and unreliab...
we introduce a consensus protocol with state-dependent weights to reach a consensus on time-varying weighted graphs. Unlike other proposed consensus protocols in the literature, the consensus protocol we introduce does not require any connectivity assumption on the dynamic network topology. We provide theoretical analy...
A complex communication architecture is not required since communication only with neighboring bins is sufficient for an agent to determine its transition probabilities. If agents have only access to the number of agents of their own and neighboring bins, then they also need to know the total number of agents in the sw...
However, extracting a point-wise correspondence from a functional map matrix is not trivial [17, 57]. This is mainly because of the low-dimensionality of the functional map, and the fact that not every functional map matrix is a representation of a point-wise correspondence [51]. In [44], the authors simultaneously sol...
The functional mapping is represented as a low-dimensional matrix for suitably chosen basis functions. The classic choice is the eigenfunctions of the LBO, which are invariant under isometries and well-suited for this setting. Moreover, for general non-rigid settings, learning these basis functions has also been propos…
The identification of correspondences between 3D shapes, also known as the shape matching problem, is a longstanding challenge in visual computing. Correspondence problems have a high relevance due to their plethora of applications, including 3D reconstruction, deformable object tracking, style transfer, shape analysis...
It was shown that deep learning is an extremely powerful approach for extracting shape correspondences [40, 27, 59, 26]. However, the focus of this work is on establishing a fundamental optimisation problem formulation for cycle-consistent isometric multi-shape matching. As such, this work does not focus on learning me...
Due to their low-dimensionality and continuous representation, functional maps also serve as the backbone of many deep learning architectures for 3D correspondence. One of the first examples is FMNet [40], which has also been extended for unsupervised learning settings recently [27, 3, 59].
A graph $G$ is a chordal graph if and only if there exists a tree $T$, called a clique tree, with vertex set $\mathbf{C}$ such that, for every $v\in V$, $T(\mathbf{C}_{v})$ is a tree …
The main goal of our paper is: given a graph $G$, find a (directed) clique path tree of $G$ or report that $G$ is not a (directed) path graph. To reach our purpose, we follow the same approach as in [18], decomposing $G$ recursively by clique separators.
If there exists a polynomial algorithm that tests whether a graph $G$ is a path graph and returns a clique path tree of $G$ when the answer is “yes”, then there exists an algorithm with the same complexity to test whether a graph is a directed path graph.
We present the algorithm RecognizePG. Note that it is an implementation of Theorem 6 with very small changes. W.l.o.g., we assume that $G$ is connected; indeed, a graph $G$ is a path graph if and only if all its connected components are path graphs. Moreover, we can obtain the clique path tree of $G$…
The tree $T$ of the previous theorem is called the clique path tree of $G$ if $G$ is a path graph, or the directed clique path tree of $G$ if $G$ is a directed path graph. In Figure 1, the left part shows a path graph $G$, and on the right there is a clique path tree…
We study how the purity of mixed nodes under different settings affects the performances of these overlapping community detection methods in sub-experiments 1(e) and 1(f). Fix $(n_{0},\rho)=(100,0.1)$, and l…
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha…
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both approaches perform better than OCCAM and GeoNMF under the MMSB setting.…
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as $\rho$ increases they all perform worse. Under the DCMM model, the mixed Hamming …
For any functional $F\colon\mathcal{M}\rightarrow\mathbb{R}$, we let $\operatorname{grad}F$ denote the functional gradient of $F$ with respect to the Riemannian metric $g$.
To study optimization problems on the space of probability measures, we first introduce the background knowledge of the Riemannian manifold and the Wasserstein space. In addition, to analyze the statistical estimation problem that arises in estimating the Wasserstein gradient, we introduce the reproducing kernel Hilber...
Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to certain statistical...
Here the statistical error is incurred in estimating the Wasserstein gradient by solving the dual maximization problem using functions in a reproducing kernel Hilbert space (RKHS) with finite data, which converges sublinearly to zero as the number of particles goes to infinity. Therefore, in this scenario, variational ...
we prove that variational transport constructs a sequence of probability distributions that converges linearly to the global minimizer of the objective functional up to a statistical error due to estimating the Wasserstein gradient with finite particles. Moreover, such a statistical error converges to zero as the numbe...
We conduct the experiments on CityFlow [20], a city-level open-source simulation platform for traffic signal control. The simulator is used as the environment to provide states for traffic signal control; the agents execute actions by changing the phase of traffic lights, and the simulator returns feedback. Specifical…
Figure 6: The illustration of the road networks. The first row shows the road networks of Jinan (China), Hangzhou (China) and New York (USA), containing 12, 16 and 48 traffic signals respectively, and the second row shows the road network of Shenzhen containing 33 traffic signals.
Real. The traffic flows of Hangzhou (China), Jinan (China) and New York (USA) are from the public datasets (https://traffic-signal-control.github.io/), which are processed from multiple sources. The traffic flow of Shenzhen (China) was generated by ourselves based on the traffic trajectories collected from 80 red-…
The evaluation scenarios come from four real road network maps of different scales, including Hangzhou (China), Jinan (China), New York (USA) and Shenzhen (China), illustrated in Fig. 6. The road networks and data of Hangzhou, Jinan and New York are from the public datasets (https://traffic-signal-control.github.io/)…
We conduct extensive experiments on CityFlow [20] with the public Hangzhou (China), Jinan (China), and New York (USA) datasets and our derived Shenzhen (China) dataset, under various traffic patterns, and empirically demonstrate that our proposed method can achieve state-of-the-art performance over the above scena…
$\|\mathbf{x}_{j+1}-\mathbf{x}_{j}\|_{2}=\|\mathbf{f}_{\mathbf{x}}(\mathbf{x}_{j},0.9999)_{\text{rank-3}}^{\dagger}\,\mathbf{f}(\mathbf{x}_{j},0.9999)\|_{2}\;\longrightarrow\;1.51\times 10^{-16}$
$\mathbf{f}_{\mathbf{x}}(\tilde{\mathbf{x}},0.9999)_{\text{rank-3}}^{\dagger}\,\mathbf{f}(\tilde{\mathbf{x}},0.9999)=\mathbf{0}$. The modu…
zero of $\mathbf{x}\mapsto\mathbf{f}(\mathbf{x},t_{*})$ from the empirical data $\tilde{t}=0.9999$, the point
a stationary point $\tilde{\mathbf{x}}$ at which $\mathbf{f}_{\mathbf{x}}(\tilde{\mathbf{x}},0.9999)_{\text{rank-3}}^{\dagger}\,\mathbf{f}(\tilde{\mathbf{x}},0.9999)=\mathbf{0}$
$\tilde{t}=0.9999$. Even though the solutions of $\mathbf{f}(\mathbf{x},\tilde{t})=\mathbf{0}$ are all…
C
We set the bin capacity to $k=100$, and we also scale down each item to the closest integer in $[1,k]$. This choice is relevant for applications such as Virtual Machine placement, as explained in Section 5.1. We generate two classes of input sequences.
For Weibull benchmarks, the input sequence consists of items generated independently and uniformly at random, and the shape parameter is set to $sh=3.0$. For BPPLIB benchmarks, we first select a file of the benchmark uniformly at random, then generate input items from the chosen file, …
The distribution of the input sequence changes every 50000 items. Namely, the input sequence is the concatenation of $n/50000$ subsequences. For Weibull benchmarks, each subsequence follows a Weibull distribution whose shape parameter is chosen uniformly at random from $[1.0,4.0]$…
The Weibull distribution is specified by two parameters: the shape parameter $sh$ and the scale parameter $sc$ (with $sh,sc>0$). The shape parameter defines the spread of item sizes: lower values indicate greater skew tow…
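A sketch of generating such an item sequence with NumPy; the scale value, rounding rule, and clipping to $[1,k]$ are illustrative assumptions matching the general setup described above.

```python
import numpy as np

def weibull_items(n, sh, sc=50.0, k=100, seed=0):
    """Generate a bin-packing input of n item sizes drawn i.i.d. from a
    Weibull(shape=sh, scale=sc) distribution, rounded to the nearest
    integer and clipped to the bin-capacity range [1, k] (sketch)."""
    rng = np.random.default_rng(seed)
    raw = sc * rng.weibull(sh, size=n)   # NumPy's weibull has scale 1
    return np.clip(np.rint(raw), 1, k).astype(int)
```

Lower shape values produce many small items with a long right tail, while higher values concentrate the sizes around the scale parameter.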
$sh=3$, or a file from the GI Benchmark), we generate 20 random sequences of length $10^{6}$. For each sequence, we compute FirstFit, BestFit, and the $L2$ lower bound. The average costs of these algorithms, over the …
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperClou...
The results are presented in Table 1. LoCondA-HF obtains comparable results to the reference methods dedicated for the point cloud generation. It can be observed that values of evaluated measures for HyperFlow(P) and LoCondA-HF (uses HyperFlow(P) as a base model in the first part of the training) are on the same level...
In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Thro...
In this experiment, we set $N=10^{5}$. Using more rays had a negligible effect on the output value of $WT$ but significantly slowed the computation. We compared AtlasNet with LoCondA applied to HyperCloud (HC) and HyperFl…
In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar...
\[
\max_{\substack{y\in\bar{\mathcal{Y}},\\ \mathbf{q}\in\mathcal{Q},\\ \mathbf{p}\in\mathcal{P}}}\frac{1}{m}\sum_{i=1}^{m}f_{i}\big(x,p_{i},\widehat{y}_{av}^{N},\widehat{q}_{i}^{N}\big)\leq\varepsilon.
\]
To describe this class of first-order methods, we use a definition of the Black-Box procedure similar to that in [51]. We assume that one local iteration costs $t$ time units, and a communication round costs $\tau$ time units. Additionally, information can be transmitted only along the undirected edges of the…
The main idea is to use reformulation (54) and apply the mirror prox algorithm [45] for its solution. This requires careful analysis in two aspects. First, the Lagrange multipliers $\mathbf{z},\mathbf{s}$ are not constrained, while the convergence rate result for the classical Mirror-Prox algorithm [45] is …
If $B_{\rho}\neq\varnothing$, in the global output of any procedure that satisfies Assumption 4.1, after $T$ units of time, only the first $k=\left\lfloor\frac{T-2t}{t+\rho\tau}\right\rfloor+2$…
This fact leads to the main idea of the proof. At the initial moment of time $T=0$, we have all zero coordinates in the global output, since the starting points $x_{0},y_{0}$ ar…
And from the bijection we can deduce that $\cap(T_{w})<\cap(G_{w}\wedge T_{s})$ for so…
In this section we present some experimental results to reinforce Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g...
necessarily complete) $G=(V,E)$ that admits a star spanning tree $T_{s}$. In the first part we present a formula to calculate $\cap(T_{s})$…
The remainder of this section is dedicated to expressing the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describing an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i…
The study of cycles of graphs has attracted attention for many years. To mention just three well-known results, consider Veblen's theorem [2], which characterizes graphs whose edges can be written as a disjoint union of cycles, and MacLane's planarity criterion [3], which states that planar graphs are the only ones to admit a 2-ba…
In this respect, the case of convex lattice sets, that is, sets of the form $C\cap\mathbb{Z}^{d}$ where $C$ is a convex set in $\mathbb{R}^{d}$…
Theorem 1.1 depends on $p$, $q$, $K$ and $b$ (but, as usual, is independent of the size of the cover). Moreover, while the Helly number of a $(K,b)$-free cover can grow with $b$ (it is at least $(b-1)(\mu(K)+2)$…
The support of a chain $\sigma$, denoted $\operatorname{supp}(\sigma)$, in a simplicial complex is the set of simplices with nonzero coefficients in $\sigma$. We say that two chains $\sigma$ and $\tau$ have overlapping supports if there exists a sim…
We first prove, in Section 3, that complexes with a forbidden simplicial homological minor also have a forbidden grid-like homological minor. The proof uses the stair convexity of Bukh et al. [8] to build, in a systematic way, chain maps from simplicial complexes to cubical complexes. We then adapt, in Section 4, the m...
In this paper, we show that the gap observed for convex lattice sets occurs in the broad topological setting of triangulable spaces with a forbidden homological minor, a notion introduced by Wagner [37] as a higher-dimensional analogue of the familiar notion of graph minors [34].
He has approximately 3.5 years of experience with ML, and he currently works with reinforcement learning. The second ML expert (E2) is a senior researcher in software engineering and applied ML working in a governmental research institute as an adjunct professor. He has worked with ML for the past 8 years. The third ex...
(1) presentation of the key goals of FeatureEnVi, (2) demonstration of the functionality of each view and experts’ interaction with the system using the iris flower data set [75], and (3) explanation of the process of reaching the results for the red wine quality use case in Section 4. The first part serves as an intro...
The complex nature of feature engineering, occasionally declared as “black art” [2, 28], motivated us to concentrate our effort on addressing the three research questions mentioned above. In this paper, we present a visual analytics (VA) system, called FeatureEnVi (Feature Engineering Visualization, as seen in Fig. 1),...
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train the XGBoost [29] on the original pool of features, and then divide the data space into four groups automati...
Visualization and interaction. E1 and E2 were surprised by the promising results we managed to achieve with the assistance of our VA system in the red wine quality use case of Section 4. Initially, E1 was slightly overwhelmed by the number of statistical measures mapped in the system’s glyphs. However, after the interv...
$[x_{\text{ref}},\dot{x}_{\text{ref}}]^{\mathsf{T}}$ and $[y_{\text{ref}}$…
To explore these trade-offs, we formulate a high-level optimization problem with a cost function and constraints defined over the entire position and velocity trajectory, which indicate, respectively, the overall performance of the control scheme and the operation limits.
This paper demonstrated a hierarchical contour control implementation for increasing productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low-level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi...
Model predictive contouring control (MPCC) is a control scheme based on minimisation of a cost function that trades off the competing objectives of tracking accuracy and traversal time by adjusting the corresponding weights in the cost function. We now introduce the main ingredients of an MPCC formulation.
D
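As a sketch of such a cost function (the standard MPCC form; the symbols below are illustrative and not taken from this excerpt), the stage cost penalizes the contouring error $e^{c}_{k}$ and lag error $e^{l}_{k}$ while rewarding path progress $v_{k}$:

```latex
\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1}
  q_c \left(e^{c}_{k}\right)^{2}
+ q_l \left(e^{l}_{k}\right)^{2}
- \rho \, v_{k}
```

Increasing $\rho$ favors traversal time, while increasing $q_c$ and $q_l$ favors tracking accuracy, which is exactly the trade-off adjusted via the cost-function weights mentioned above.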
We use the GQA visual question answering dataset [33] to highlight the challenges of using bias mitigation methods on real-world tasks. It has multiple sources of biases including imbalances in answer distribution, visual concept co-occurrences, question word correlations, and question type/answer distribution. It is u...
For each dataset, we assess all bias mitigation methods with the same neural network architecture. For CelebA, we use ResNet-18 [29]. For Biased MNISTv1, we use a convolutional neural network with four ReLU-activated convolutional layers and a max-pooling layer attached after the first convolutional layer. For GQA-OOD, we employ th...
We first present the mean per-group accuracy for all eight methods on all three datasets in Table 1 to see if any method does consistently well across benchmarks. For this, we used class and gender labels as explicit biases for CelebA. For Biased MNISTv1, there are multiple ways to define explicit biases, but for this...
So far, there is no study comparing methods from either group comprehensively. Often papers fail to compare against recent methods and vary widely in the protocols, datasets, architectures, and optimizers used. For instance, the widely used Colored MNIST dataset, where colors and digits are spuriously correlated with e...
We compare seven state-of-the-art bias mitigation methods on classification tasks using Biased MNISTv1 and CelebA, measuring generalization to minority patterns, scalability to multiple sources of biases, sensitivity to hyperparameters, etc. We ensure fair comparisons by using the same architecture, optimizer, and per...
A
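The mean per-group accuracy used above averages the accuracy of each group rather than of each instance, so minority groups count as much as majority ones. A small illustrative helper (names and data are not from the paper):

```python
import numpy as np

def mean_per_group_accuracy(y_true, y_pred, groups):
    """Average of per-group accuracies (illustrative helper)."""
    accs = [np.mean(y_true[groups == g] == y_pred[groups == g])
            for g in np.unique(groups)]
    return float(np.mean(accs))

y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1])
groups = np.array([0, 0, 0, 1, 1, 1])   # e.g. a gender bias label
acc = mean_per_group_accuracy(y_true, y_pred, groups)
assert abs(acc - 2/3) < 1e-9            # both groups score 2/3 here
```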
Krafka et al. replace the fully-connected layer with an SVM and fine-tune the SVM layer to predict the gaze location [42]. Zhang et al. split the CNN into three parts: the encoder, the feature extractor, and the decoder [133]. They fine-tune the encoder and decoder in each target domain.
Salvalaio et al. implicitly collect calibration data while users are using computers. They collect data when the user clicks the mouse, based on the assumption that users are gazing at the position of the cursor at the moment of the click [146]. They use online learning to fine-tune their model with the calibrat...
Inter-subject bias. Chen et al. observe the inter-subject bias in most datasets [131, 132]. They make the assumption that there exists a subject-dependent bias that cannot be estimated from images. Thus, they propose a gaze decomposition method. They decompose the gaze into the subject-dependent bias and the subject-in...
Xiong et al. introduce a random effect parameter to learn the person-specific information in gaze estimation [114]. They utilize the variational expectation-maximization algorithm [115] and stochastic gradient descent [116] to estimate the parameters of the random effect network during training. They use another networ...
They learn the person-specific feature during fine-tuning. Linden et al. introduce user embedding for recording personal information. They obtain user embedding of the unseen subjects by fine-tuning using calibration samples [136]. Chen et al.  [131, 132] observe the different gaze distributions of subjects. They use t...
D
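The subject-dependent bias in such a decomposition can be estimated from a handful of calibration samples. A minimal sketch under the assumption that the bias is a constant per-subject offset (all numbers are illustrative):

```python
import numpy as np

# Network predictions (subject-independent term) and ground truth
# for a few calibration samples of one subject, as (yaw, pitch).
pred = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]])
truth = np.array([[1.3, 2.2], [1.5, 2.0], [1.2, 2.3]])

bias = (truth - pred).mean(axis=0)   # subject-dependent bias estimate
corrected = pred + bias              # calibrated gaze for this subject

assert np.allclose(bias, [0.3, 0.2])
```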
Since the publication of the AlexNet architecture in 2012 by Krizhevsky et al. krizhevsky2012imagenet , deep CNNs have become a common approach in face recognition. They have also been successfully used in face recognition under occlusion variation almabdy2019deep ; hariri2017geometrical ; kadhim2023face . It is seen that the ...
Real-World-Masked-Face-Dataset wang2020masked is a masked face dataset devoted mainly to improving the recognition performance of existing face recognition technology on masked faces during the COVID-19 pandemic. It contains three types of images, namely the Masked Face Detection Dataset (MFDD), Real-world Masked F...
Occlusion is a key limitation of real-world 2D face recognition methods. Generally, it arises from wearing hats, eyeglasses, masks, or any other objects that occlude a part of the face while leaving the rest unaffected. Thus, wearing a mask is considered the most difficult facial occlusion challenge since ...
Inspired by the high performance of CNN based methods that have strong robustness to illumination, facial expression, and facial occlusion changes, we propose in this paper an occlusion removal approach and deep CNN based model to address the problem of masked face recognition during the COVID-19 pandemic. Motivations...
Since the publication of the AlexNet architecture in 2012 by Krizhevsky et al. krizhevsky2012imagenet , deep CNNs have become a common approach in face recognition. They have also been successfully used in face recognition under occlusion variation almabdy2019deep ; hariri2017geometrical ; kadhim2023face . It is seen that the ...
C
Validity conditions of infinite proofs have been developed to keep cut elimination productive, which correspond to criteria like the guardedness check [BDS16, BT17, DP19, DP20d]. Although we use infinite typing derivations, we explicitly avoid syntactic termination checking for its non-compositionality. Nevertheless, w...
Our system is closely related to the sequential functional language of Lepigre and Raffalli [LR19], which utilizes circular typing derivations for a sized type system with mixed inductive-coinductive types, also avoiding continuity checking. In particular, their well-foundedness criterion on circular proofs seems to c...
Sized types are a type-oriented formulation of size-change termination [LJBA01] for rewrite systems [TG03, BR09]. Sized (co)inductive types [BFG+04, Bla04, Abe08, AP16] gave way to sized mixed inductive-coinductive types [Abe12, AP16]. In parallel, linear size arithmetic for sized inductive types [CK01, Xi01, BR06] was...
Sized types are compositional: since termination checking is reduced to an instance of typechecking, we avoid the brittleness of syntactic termination checking. However, we find that ad hoc features for implementing size arithmetic in the prior work can be subsumed by more general arithmetic refinements [DP20b, XP99], ...
Session types are inextricably linked with SAX, as it also has an asynchronous message passing interpretation [PP21]. Severi et al. [SPTDC16] give a mixed functional and concurrent programming language where corecursive definitions are typed with Nakano’s later modality [Nak00]. Since Vezzosi [Vez15] gives an embedding...
D
Protect the owner’s copyright. We need to embed the user’s fingerprint in the owner’s media content to enable traitor tracing. As long as an unfaithful user makes an unauthorized redistribution, he/she can be detected by the embedded fingerprint in the media content.
Finally, we conduct a comparative experiment to evaluate the proposed schemes against their relevant existing counterparts, and the results are displayed in Fig. 15. For FairCMS-I and FairCMS-II, we measure the time overhead of Part 2 as it is executed once for each user. For the other schemes, we evaluate their prima...
Ensure efficiency gains and scalability. For one thing, we need to carefully control the owner-side overhead to ensure that the owner can gain significant local resource savings from cloud media sharing. For another, we need to ensure that the two proposed schemes are scalable to handle real-time requests from users.
First, the owner requires that the cloud not be able to obtain the plaintext about the media content and the LUTs, and that access to the media content is controlled by his/her authorization. Second, the owner asks for significant overhead savings from cloud media sharing. Third, the owner demands traitor tracing of us...
There are two extra challenges that need to be addressed. For one thing, considering that the original purpose of the cloud’s involvement is to help resource-constrained owners efficiently share their media contents, the owner-side overhead needs to be carefully controlled to ensure that owners can obtain significant reso...
B
The attention coefficient $\alpha_{ij}$ is calculated by the soft attention mechanism, while $p_{ij}$ is calculated by the hard attention mechanism. By mu...
Specifically, to accommodate the polysemy of feature interactions in different semantic spaces, we utilize a multi-head attention mechanism Vaswani et al. (2017); Veličković et al. (2018). Each layer of our proposed model produces higher-order interactions based on the existing ones and thus the highest-order of intera...
GraphFM(-M): in the interaction aggregation component, we use a multi-head attention mechanism to learn the diversified polysemy of feature interactions in different semantic subspaces. To check its rationality, we use only one attention head when aggregating.
Due to the strength in modeling relations on graph-structured data, GNN has been widely applied to various applications like neural machine translation Beck et al. (2018), semantic segmentation Qi et al. (2017), image classification Marino et al. (2017), situation recognition Li et al. (2017), recommendation Wu et al. ...
To capture the diversified polysemy of feature interactions in different semantic subspaces Li et al. (2020) and also stabilize the learning process Vaswani et al. (2017); Veličković et al. (2018), we extend our mechanism to employ multi-head attention.
D
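The multi-head aggregation described above can be sketched with plain scaled dot-product attention heads whose outputs are concatenated. This is a bare-bones numpy sketch; GraphFM's exact parametrisation, and the hard-attention term $p_{ij}$, are omitted, and all shapes are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_aggregate(H, heads):
    """Aggregate feature-node states with several attention heads
    and concatenate the per-head results."""
    outs = []
    for Wq, Wk, Wv in heads:
        Q, K, V = H @ Wq, H @ Wk, H @ Wv
        alpha = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # alpha_ij per head
        outs.append(alpha @ V)
    return np.concatenate(outs, axis=1)

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))  # 5 feature nodes, dimension 8
heads = [tuple(rng.normal(size=(8, 4)) for _ in range(3)) for _ in range(2)]
out = multi_head_aggregate(H, heads)
assert out.shape == (5, 8)   # 2 heads x dim 4, concatenated
```

Each head attends in its own semantic subspace, which is the "diversified polysemy" motivation given above.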
where $Q$ is a symmetric positive definite matrix with log-normally distributed eigenvalues and $\varphi_{\mathbb{R}_{+}}(\cdot)$
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is...
The stateless step-size does not suffer from this problem. However, because the halvings have to be performed at multiple iterations when using the stateless strategy, its per-iteration cost is about three times that of the simple step-size.
The results are shown in Figure 7. On both of these instances, progress with the simple step size is slowed down, or even appears stalled, compared with the stateless version, because many halving steps were performed in the early iterations for the simple step size, which penalizes progress over the whole run.
In practice, a halving strategy for the step size is preferred for the implementation of the Monotonic Frank-Wolfe algorithm, as opposed to the step size implementation shown in Algorithm 1. This halving strategy, which is shown in Algorithm 2, helps
C
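The Frank-Wolfe gap mentioned above can be tracked directly during a run. This is a minimal toy sketch (the objective, feasible set, and classic $2/(t+2)$ step size are illustrative assumptions, not the monotonic variant or halving strategy of Algorithms 1–2):

```python
import numpy as np

# Minimal Frank-Wolfe on the probability simplex for
# f(x) = 0.5 * ||x - b||^2, tracking the Frank-Wolfe gap
# g(x) = <grad f(x), x - v>, with v the linear-minimization output.
b = np.array([0.1, 0.2, 0.7])            # optimum lies in the simplex
x = np.ones(3) / 3
gaps = []
for t in range(200):
    grad = x - b
    v = np.zeros(3)
    v[np.argmin(grad)] = 1.0             # linear minimization oracle
    gaps.append(float(grad @ (x - v)))   # FW gap at the current iterate
    x += 2.0 / (t + 2) * (v - x)         # classic step size

# the minimum gap over the run shrinks roughly like O(1/t)
assert min(gaps) < 0.1
```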
One that has attracted a lot of attention, especially in the past decade, is the graph stream model, which was introduced by Feigenbaum et al. [FKM+04, FKM+05, Mut05] in 2005. In this model, the edges of the graph are not stored in the memory but appear in an arbitrary (that is, adversarially determined) sequential ord...
In a new pass, for each edge $e=\{u,v\}$ in the stream, the algorithm checks whether the structure containing $u$ and the structure containing $v$, if such structures exist, can augment over $e$. If it is possible, via Augment-and-Clean the algori...
It is known that finding an exact matching requires linear space in the size of the graph and hence it is not possible to find an exact maximum matching in the semi-streaming model [FKM+04], at least for sufficiently dense graphs. Nevertheless, this result does not apply to computing a good approximation to the maximu...
In particular, it is desirable that the number of passes is independent of the input graph size. We call an algorithm a $k$-pass algorithm if the algorithm makes $k$ passes over the edge stream, possibly each time in a different order [MP80, FKM+05].
One that has attracted a lot of attention, especially in the past decade, is the graph stream model, which was introduced by Feigenbaum et al. [FKM+04, FKM+05, Mut05] in 2005. In this model, the edges of the graph are not stored in the memory but appear in an arbitrary (that is, adversarially determined) sequential ord...
C
In decentralized optimization, efficient communication is critical for enhancing algorithm performance and system scalability. One major approach to reducing communication costs is communication compression, which is especially important under limited communication bandwidth.
To reduce the error from compression, some works [48, 49, 50] increase compression accuracy as the iterations proceed to guarantee convergence. However, they still need high communication costs to obtain highly accurate solutions. Techniques to remedy these increased communication costs include gradient difference compres...
Many methods have been proposed to solve the problem (1) under various settings on the optimization objectives, network topologies, and communication protocols. The paper [10] developed a decentralized subgradient descent method (DGD) with diminishing stepsizes to reach the optimum for convex objective functions over a...
Recently, several compression methods have been proposed for distributed and federated learning, including [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40]. Recent works have tried to combine the communication compression methods with decentralized optimization.
Subsequently, decentralized optimization methods for undirected networks, or more generally, with doubly stochastic mixing matrices, have been extensively studied in the literature; see, e.g., [11, 12, 13, 14, 15, 16]. Among these works, EXTRA [14] was the first method that achieves linear convergence for strongly conv...
C
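The DGD scheme mentioned above (mix local states with a doubly stochastic matrix, then take a diminishing local gradient step) can be sketched on a toy quadratic problem; the mixing matrix, targets, and step schedule below are illustrative assumptions.

```python
import numpy as np

# Sketch of decentralized (sub)gradient descent (DGD) on 3 nodes
# minimizing sum_i 0.5*(x - a_i)^2 with diminishing step sizes.
W = np.array([[0.50, 0.25, 0.25],       # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
a = np.array([1.0, 2.0, 3.0])           # local targets; optimum is mean(a) = 2
x = np.zeros(3)                         # one scalar state per node
for t in range(1, 2001):
    x = W @ x - (1.0 / t) * (x - a)     # mix with neighbours, local step

assert np.allclose(x, 2.0, atol=0.05)   # consensus near the global optimum
```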
Certainly, we want to reduce the number of communications (or calls to the regularizer gradient) as much as possible. This is especially important when problem (1) is highly personalized ($\lambda\ll L$) and information from other nodes is not significant. To solve this problem ...
Note that the lower bound does not depend on which local oracles we use. This seems natural, because from a communication point of view it does not matter how certain local subproblems are solved. The same effect can be seen for decentralized (not personalized) minimization problems: [36] gives lower bounds on communicatio...
It is clear that the method from [29] cannot be used for saddle point problems. Sliding for saddles has its own specifics – for exactly the same reasons that the Extra Step Method is used for smooth saddles instead of the usual Descent-Ascent [42] (at least because Descent-Ascent diverges for the most common bilinear probl...
$\left\{\sum_{m=1}^{M} f_{m}(x_{m},y_{m})\,\ldots\,\tfrac{\lambda}{2}\|\sqrt{W}Y\|^{2}\right\}$
Furthermore, many personalized federated learning problems utilize a saddle point formulation, in particular Personalized Search Generative Adversarial Networks (PSGANs) [22]. As mentioned in the examples above, saddle point problems often arise as an auxiliary tool for a minimization problem. It turns out ...
B
Kuhn Poker (Kuhn, 1950; Southey et al., 2009; Lanctot, 2014) is a zero-sum poker game with only two actions per player. The two-player variant is solvable with PSRO; however, the three-player version benefits from JPSRO. The results in Figure 2(a) show rapid convergence to equilibrium.
We propose that (C)CEs are good candidates as meta-solvers (MSs). They are more tractable than NEs and can enable coordination to maximize payoff between cooperative agents. In particular we propose three flavours of equilibrium MSs. Firstly, greedy (such as MW(C)CE), which select highest payoff equilibria, and attempt...
PSRO has proved to be a formidable learning algorithm in two-player, constant-sum games, and JPSRO, with (C)CE MSs, is showing promising results on n-player, general-sum games. The secret to the success of these methods seems to lie in the ability of (C)CEs to compress the search space of opponent policies to an expressive an...
Measuring convergence to NE (NE Gap, Lanctot et al. (2017)) is suitable in two-player, constant-sum games. However, it is not rich enough in cooperative settings. We propose to measure convergence to (C)CE ((C)CE Gap in Section E.4) in the full extensive form game. A gap, $\Delta$, of zero implies convergence t...
Trade Comm is a two-player, common-payoff trading game, where players attempt to coordinate on a compatible trade. This game is difficult because it requires searching over a large number of policies to find a compatible mapping, and can easily fall into a sub-optimal equilibrium. Figure 2(b) shows a remarkable domina...
D
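A (C)CE gap of the kind described above can be illustrated in a tiny normal-form game: the gap is the largest gain any player gets by unilaterally deviating from the joint distribution to a fixed action. This helper is illustrative (the paper measures the gap in the full extensive-form game, not in a matrix game):

```python
import numpy as np

def cce_gap(U0, U1, sigma):
    """CCE gap of joint distribution sigma in a 2-player matrix game."""
    v0 = float((sigma * U0).sum())          # player 0's expected payoff
    v1 = float((sigma * U1).sum())          # player 1's expected payoff
    dev0 = (U0 @ sigma.sum(axis=0)).max()   # best deviation vs P1's marginal
    dev1 = (sigma.sum(axis=1) @ U1).max()   # best deviation vs P0's marginal
    return max(dev0 - v0, dev1 - v1, 0.0)

U = np.array([[1.0, 0.0], [0.0, 1.0]])      # common-payoff coordination game
good = np.array([[0.5, 0.0], [0.0, 0.5]])   # correlate on the diagonal
bad = np.array([[0.0, 1.0], [0.0, 0.0]])    # always miscoordinate

assert cce_gap(U, U, good) == 0.0           # zero gap: a (C)CE
assert cce_gap(U, U, bad) == 1.0            # a profitable deviation exists
```

The `good` distribution also shows why correlation helps in cooperative settings: no independent mixed profile with the same marginals achieves payoff 1.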
Differential privacy (Dwork et al., 2006) is a privacy notion based on a bound on the max divergence between the output distributions induced by any two neighboring input datasets (datasets which differ in one element). One natural way to enforce differential privacy is by directly adding noise to the results of a nume...
One cluster of works that steps away from this worst-case perspective focuses on giving privacy guarantees that are tailored to the dataset at hand (Nissim et al., 2007; Ghosh and Roth, 2011; Ebadi et al., 2015; Wang, 2019). In  Feldman and Zrnic (2021) in particular, the authors elegantly manage to track the individua...
An alternative route for avoiding the dependence on worst case queries and datasets was achieved using expectation based stability notions such as mutual information and KL stability Russo and Zou (2016); Bassily et al. (2021); Steinke and Zakynthinou (2020). Using these methods Feldman and Steinke (2018) presented a ...
Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on intuition that average-case privacy can be viewed from a Bay...
Differential privacy essentially provides the optimal asymptotic generalization guarantees given adaptive queries (Hardt and Ullman, 2014; Steinke and Ullman, 2015). However, its optimality is for worst-case adaptive queries, and the guarantees that it offers only beat the naive intervention—of splitting a dataset so ...
D
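The "directly adding noise to the results of a numerical query" route mentioned above is the classic Laplace mechanism: noise with scale sensitivity/epsilon. A counting query has sensitivity 1, since changing one element moves the count by at most one. Names and data below are illustrative.

```python
import numpy as np

def private_count(data, predicate, epsilon, rng):
    """epsilon-DP count via the Laplace mechanism (sensitivity 1)."""
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(scale=1.0 / epsilon)

rng = np.random.default_rng(0)
data = [3, 7, 1, 9, 4]
answers = [private_count(data, lambda x: x > 2, 1.0, rng)
           for _ in range(2000)]

# the mechanism is unbiased: noisy answers average to the true count (4)
assert abs(np.mean(answers) - 4) < 0.2
```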
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter $k$...
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an $\mathsf{NP}$-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized...
We therefore propose the following novel research direction: to investigate how preprocessing algorithms can decrease the parameter value (and hence search space) of FPT algorithms, in a theoretically sound way. It is nontrivial to phrase meaningful formal questions in this direction. To illustrate this difficulty, not...
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter $k$...
C
With the emergence of image harmonization datasets consisting of paired training data (see Section IV-E), abundant image harmonization methods [156, 18, 20, 97, 102, 146, 14] using paired supervision have been developed. Tsai et al. [156] proposed the first end-to-end CNN network for image harmonization and leveraged a...
Inoue et al. [57] developed a multi-task framework with two decoders accounting for depth map prediction and ambient occlusion map prediction respectively. ARShadowGAN [92] proposed an attention-guided residual network. The network predicts two attention maps for background shadow and occluder respectively, which are c...
Zhang et al. [202] proposed to make sequential decisions to produce a reasonable placement by using reinforcement learning. Azadi et al. [2] employed an STN to warp the foreground and a relative appearance flow network to change the viewpoint of the foreground. Additionally, they investigated a self-consistency constraint, tha...
Cun and Pun [22] designed an additional Spatial-Separated Attention Module to deal with foreground and background feature maps separately. Hao et al. [49] employed self-attention [165] mechanism to propagate relevant information from background to foreground.
Blind image harmonization: Most image harmonization methods require the foreground mask as input, which means that the inharmonious region is known in advance. However, in real-world applications, we may not know the exact inharmonious region in advance. Image harmonization without foreground mask is called blind imag...
C
Transfer learning: Firstly, it can serve as an ideal testbed for transfer learning algorithms, including meta-learning [5], AutoML [23], and transfer learning on spatio-temporal graphs under homogeneous or heterogeneous representations. In the field of urban computing, it is highly probable that the knowledge required ...
Federated learning: Secondly, CityNet is an appropriate dataset to investigate various federated learning topics under different settings, with each party holding data from one source or one city. Urban data is usually generated by a multitude of human activities and stored by diverse stakeholders, such as organization...
As depicted in Table V, deep learning models can generate highly accurate predictions when provided with ample data. However, the level of digitization varies significantly among cities, and it is likely that many cities may not be able to construct accurate deep learning prediction models due to a lack of data. One e...
To the best of our knowledge, CityNet is the first multi-modal urban dataset that aggregates and aligns sub-datasets from various tasks and cities. Using CityNet, we have provided a wide range of benchmarking results to inspire further research in areas such as spatio-temporal predictions, transfer learning, reinforcem...
In the present study, we have introduced CityNet, a multi-modal dataset specifically designed for urban computing in smart cities, which incorporates spatio-temporally aligned urban data from multiple cities and diverse tasks. To the best of our knowledge, CityNet is the first dataset of its kind, which provides a comp...
A
All neural networks were constructed using the default implementations from PyTorch pytorch . The general architecture for all neural-network-based models was fixed. The Adam optimizer was used for weight optimization with a fixed learning rate of $5\times 10^{-4}$...
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat...
To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average ...
For each of the four classes of interval estimators in Section 3, at least one example was chosen for a general comparison. Furthermore, to handle calibration issues, conformal prediction was chosen as a post-hoc method due to its nonparametric and versatile nature. Every model that produces a prediction interval (or a...
In this study several types of prediction interval estimators for regression problems were reviewed and compared. Two main properties were taken into account: the coverage degree and the average width of the prediction intervals. It was found that without post-hoc calibration the methods derived from a probabilistic mo...
C
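The post-hoc conformal calibration chosen above works from held-out residuals alone, which is what makes it nonparametric and versatile. A minimal split-conformal sketch for regression (synthetic, purely illustrative data; the finite-sample quantile correction follows the standard split-conformal recipe):

```python
import numpy as np

def conformal_interval(pred, calib_pred, calib_true, alpha=0.1):
    """Turn a point prediction into an interval with approximate
    1 - alpha coverage, using calibration-set absolute residuals."""
    resid = np.abs(calib_true - calib_pred)
    n = len(resid)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(resid, level)
    return pred - q, pred + q

rng = np.random.default_rng(1)
calib_pred = rng.normal(size=500)
calib_true = calib_pred + rng.normal(scale=0.5, size=500)
lo, hi = conformal_interval(0.0, calib_pred, calib_true)
assert lo < 0.0 < hi   # symmetric interval around the point prediction
```

The interval width is driven entirely by the calibration residuals, which is why the training-calibration split ratio studied above affects both coverage and average width.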
For fine-tuning, we create training, validation and test splits for each of the three datasets of the downstream tasks with an 8:1:1 ratio at the piece level (i.e., all the 512-token sequences from the same piece are in the same split). With the same batch size of 12, we fine-tune our pre-trained model for each ta...
For fine-tuning, we create training, validation and test splits for each of the three datasets of the downstream tasks with an 8:1:1 ratio at the piece level (i.e., all the 512-token sequences from the same piece are in the same split). With the same batch size of 12, we fine-tune our pre-trained model for each ta...
In our experiments, we will use the same pre-trained model parameters to initialise the models for different downstream tasks. During fine-tuning, we fine-tune the parameters of all the layers, including the self-attention and token embedding layers.
Fig. 2(b) shows the fine-tuning architecture for note-level classification. While the Transformer uses the hidden vectors to recover the masked tokens during pre-training, it has to predict the label of an input token during fine-tuning, by learning from the labels provided in the training data of the downstream task ...
We now present our PTM, a pre-trained Transformer encoder with 111M parameters for piano MIDI music. We adopt as the model backbone the BERT$_{\text{BASE}}$ model \parencite{bert}, a classic multi-layer bi-directional Transformer encoder with 12 layers of multi-head sel...
B
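The piece-level 8:1:1 split described above can be sketched directly: shuffle piece ids, then route every 512-token sequence to the split of its piece so that no piece leaks across splits. The piece and sequence ids below are toy data.

```python
import random

# Toy corpus: 10 pieces, each with 3 token sequences.
pieces = {f"piece{i}": [f"piece{i}_seq{j}" for j in range(3)]
          for i in range(10)}
ids = sorted(pieces)
random.Random(0).shuffle(ids)           # deterministic shuffle of piece ids

n = len(ids)
train_ids = ids[: int(0.8 * n)]
val_ids = ids[int(0.8 * n): int(0.9 * n)]
test_ids = ids[int(0.9 * n):]

train = [s for p in train_ids for s in pieces[p]]
assert (len(train_ids), len(val_ids), len(test_ids)) == (8, 1, 1)
assert set(train_ids).isdisjoint(val_ids + test_ids)  # no piece leakage
```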
And of course we have to use a different color for each vertex, so $BBC_{\lambda}(K_{n},T)\geq n$ – thus $BBC_{\lambda}(K_{n},T)$...
In this section we will proceed as follows: we first introduce the so-called red-blue-yellow $(k,l)$-decomposition of a forest $F$ on $n$ vertices, which finds a set $Y$ of size at most $l$ such that we can split $V(F)\setminus Y$...
In this paper, we turn our attention to the special case when the graph is complete (denoted $K_{n}$) and its backbone is a (nonempty) tree or a forest (which we will denote by $T$ and $F$, respectively). Note that it has a natural in...
The linear running time follows directly from the fact that we compute $c$ only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen...
To achieve the same result for forest backbones we only need to add some edges that would make the backbone connected and spanning. However, we can always make a forest connected by adding edges between some leaves and isolated vertices and we will not increase the maximum degree of the forest, as long as $\Delta(F)\geq 2$...
D