| context | A | B | C | D | label |
|---|---|---|---|---|---|
| $-\frac{3(1+m)}{8}x^{-m-4}R'' + \frac{1}{8}x^{-m-3}R'''$. ... | The Newton’s Method of third order convergence is implemented for Zernike Polynomials $R_n^m$ by computation of the ratios | ${}_{n,n'}$. $\int_0^1 x^{D-1}\,R_n^m(x)\,R_{n'}^m(x)$ ... | computed from $R_n^m(x)/{R_n^m}'(x) = f(x)/f'(x)$ ... | Since $R_n^m(x)$ is a polynomial of order $n$, the $(n+1)$st derivatives ... | D |
| Having computed the $T_2$, we begin the main ‘for’ loop of Algorithm 3, running through the columns of $g$ in reverse order. Observe that $r$ takes each value $1,\dots,d$ exactly once as we run through the columns of ... | If we are in the (unique) column where $r=d$ then there is no ‘column clearing’ to do and we skip straight to the row clearing stage. For each other column, we start by calling the subroutine FirstTransvections[$r$] (Algorithm 4). | At this point in each pass of the main ‘for’ loop of Algorithm 3, we call the subroutine LeftUpdate[$i$] for $i=r+2,\dots,d$, unless $r\ge d-1$, in which case the current column will have already been cleared. The role of thi... | The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix, in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left... | Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$. Then the elementary column operations can be used to reduce the other entries in row $r$ to zero. | A |
| It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85... | mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov... | As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case our method is similar to that of [MR3591945], with some differences. First we consider that T... | The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis... | B |
| The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and the floating-point issues of both programs. Our implementations of Alg-K and Alg-CM differ logically in handling degenerate cases. | The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and the floating-point issues of both programs. Our implementations of Alg-K and Alg-CM differ logically in handling degenerate cases. | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is claimed to be “involved” by its authors, as it contains complicated subroutines for handling many subcases. | Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Moreover, Alg-A is more stable than the alternatives. During the iterations of Alg-CM, the coordinates of three corners and two midpoints of a P-stable triangle (see Figure 37) are maintained. These coordinates are computed numerically, so their true values can differ from the values stored in the computer. Alg-CM uses a... | D |
| Due to the importance of information propagation for rumors and their detection, there are also various simulation studies [25, 27] of rumor propagation on Twitter. Those works provide relevant insights, but such simulations cannot fully reflect the complexity of real networks. Furthermore, there are recent work... | Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys... | As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on t... | at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha... | We tested all models by using 10-fold cross validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; $TS\text{-}SVM$ it... | A |
| In a follow-up work, Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate, for tails for which $\ell'(u)$ is of the form $\exp(-u^{\nu})$... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a... | The convergence of the direction of gradient descent updates to the maximum $L_2$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile continuing to optimize long after we have zero training ... | Perhaps most similar to our study is the line of work on understanding AdaBoost in terms of its implicit bias toward large $L_1$-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on th... | A |
| Twitter Features refer to basic Twitter features, such as hashtags, mentions, and retweets. In addition, we derive three more URL-based features. The first is the WOT (trustworthiness-based) score, which is crawled from the APIs of WOT.com (https://www.mywot.com/en/api). The second is domain categories, which we have collected fr... | As we can see in Figure 9, the best result on average over 48 hours is the BestSet. The second is All features. Apart from those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we ... | To construct the training dataset, we collected rumor stories from the rumor tracking websites snopes.com and urbanlegends.about.com. In more detail, we crawled 4300 stories from these websites. From the story descriptions we manually constructed queries to retrieve the relevant tweets for the 270 rumors with highest i... | User Features. Apart from the features already exploited in related work (e.g., VerifiedUser, NumOfFriends, NumOfTweets, ReputationScore), we add two new features captured from the Twitter interface: (1) how many photos have been posted by a user (UserNumPhoto), and (2) whether the user lives in a large city. We use the li... | The performance of user features is similar to that of the Twitter features; they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours in the user feature group is UserTweetsPerDays, and it is the best feature overall in the first 4 hours, but its rank decreases with ... | C |
| Results. The baseline and the best results of our $1^{st}$-stage event-type classification are shown in Table 3-top. The accuracy for basic majority vote is high for imbalanced classes, yet it is lower at weighted F1. Our learned model achie... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming an independent loss function that does not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is learning a linear... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ... | C |
| The special case of piecewise-stationary, or abruptly changing, environments has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms in particular. | The special case of piecewise-stationary, or abruptly changing, environments has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms in particular. | The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015]. | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | with Bernoulli and contextual linear Gaussian reward functions [Kaufmann et al., 2012; Garivier and Cappé, 2011; Korda et al., 2013; Agrawal and Goyal, 2013b], as well as for context-dependent binary rewards modeled with the logistic reward function [Chapelle and Li, 2011; Scott, 2015] (Appendix A.3). | B |
| The insulin intakes tend to be higher in the evening, when basal insulin is used by most of the patients. The only exception is patients 10 and 12, whose intakes are earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning. | Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | For example, the correlation between blood glucose and carbohydrate for patient 14 was highest (0.47) at no lagging time step (ref. 23(c)), whereas the correlation between blood glucose and insulin was highest (0.28) with lagging time = 4 (ref. 24(d)). | The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | For time delays between carb entries and the next glucose measurements, we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal... | B |
| Furthermore, it is expected that complex representations at multiple spatial scales are necessary for accurate predictions of human fixation patterns. We therefore incorporated a contextual module that samples multi-scale information and augments it with global scene features. The contribution of the contextual module ... | Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer... | With the advent of deep neural network solutions for visual tasks such as image classification Krizhevsky et al. (2012), saliency modeling has also undergone a paradigm shift from manual feature engineering towards automatic representation learning. In this work, we leveraged the capability of convolutional neural net... | Early approaches towards computational models of visual attention were defined in terms of different theoretical frameworks, including Bayesian Zhang et al. (2008) and graph-based formulations Harel et al. (2006). The former was based on the notion of self-information derived from a probability distribution over linear... | The spatial allocation of attention when viewing natural images is commonly represented in the form of topographic saliency maps that depict which parts of a scene attract fixations reliably. Identifying the underlying properties of these regions would allow us to predict human fixation patterns and gain a deeper under... | C |
| Many existing algorithms constructing path decompositions are of theoretical interest only, and this disadvantage carries over to the possible algorithms computing the locality number or cutwidth (see Section 6) based on them. However, the reduction of 5.7 is also applicable in a purely practical scenario, since any ki... | In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). In Section 3, we discuss the concept of the locality number with some examples and some word-combinatorial considerations. The purpose of this section is to develop a better under... | The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local... | As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection.... | In this section, we introduce polynomial-time reductions from the problem of computing the locality number of a word to the problem of computing the cutwidth of a graph, and vice versa. This establishes a close relationship between these two problems (and their corresponding parameters), which lets us derive several u... | C |
| Imaging modalities that have found use in cardiology include Magnetic Resonance Imaging (MRI), Fundus Photography, Computerized Tomography (CT), Echocardiography, Optical Coherence Tomography (OCT), Intravascular Ultrasound (IVUS), and others. Deep learning has been mostly successful in this area, mainly due to archite... | In [128] the authors created a recurrent u-net that learns image representations from a stack of 2D slices and has the ability to leverage inter-slice spatial dependencies through internal memory units. It combines anatomical detection and segmentation into a single end-to-end architecture, achieving comparable results ... | Regarding the solution to the interpretability problem, when new methods are necessary, researchers should prefer making simpler deep learning methods (end-to-end and non-ensembles) to increase their clinical applicability, even if that means reduced reported accuracy. | Luo et al. [133] adopted an LV atlas mapping method to achieve accurate localization using MRI data from DS16. Then, a three-layer CNN was trained for predicting the LV volume, achieving comparable results with the winners of the challenge in terms of root mean square of end-diastole and end-systole volumes. | Most predominantly, CNNs and u-nets are used solely or in combination with RNNs, AEs, or ensembles. The problem is that most of them are not end-to-end; they rely on preprocessing, handcrafted features, active contours, level sets and other non-differentiable methods, thus partially losing the ability to scale on the pre... | A |
| Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a... | The results in these figures are generated by averaging 5 runs for each game. The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5 runs was often significantly better. For 6 of the games, it exceeds the average human score (... | In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in the low-data regime of 100k samples, on more than half of the games, our method achieves a score... | The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ... | While SimPLe is able to learn more quickly than model-free methods, it does have limitations. First, the final scores are on the whole lower than the best state-of-the-art model-free methods. This can be improved with better dynamics models and, while generally common with model-based RL algorithms, suggests an import... | D |
| However, more work needs to be done before fully replacing non-trainable S2Is, not only in terms of achieving higher accuracy but also in increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch, based on the hypothesis that pretrained low level... | However, more work needs to be done before fully replacing non-trainable S2Is, not only in terms of achieving higher accuracy but also in increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch, based on the hypothesis that pretrained low level... | For the purposes of this paper, and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para... | This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ... | Future work could include testing this hypothesis by initializing a ‘base model’ using transfer learning or other initialization methods. Moreover, trainable S2Is and 1D ‘base model’ variations could also be used for other physiological signals besides EEG, such as Electrocardiography, Electromyography and Galvanic Skin... | D |
| The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To ensure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial along with their first and second derivatives. The trajectory design took into account six constra... | The whole-body climbing gait involves utilizing the entire body movement of the robot, swaying forwards and backwards to enlarge the stability margins before initiating gradual leg movement to overcome a step. This technique optimizes stability during the climbing process. To complement this, the rear-body climbing ga... | The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful desi... | Figure 10: The Cricket robot tackles a step of height $h$ using the rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas... | Figure 11: The Cricket robot tackles a step of height $2h$, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear-body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ... | A |
| Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with a larger number of ... | As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation. Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online alg... | The above observations were recently made in the context of online algorithms with machine-learned predictions. Lykouris and Vassilvitskii [24] and Purohit et al. [29] show how to use predictors to design and analyze algorithms with two properties: (i) if the predictor is good, then the online algorithm should perform ... | All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat... | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-augmented online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... | A |
| There, in the last column, the time that each classifier required to classify all the subjects in the test set is also included. As we can see, SS3 obtained the best $F_1$ and ERDE values for all the considered $o$ values except for $ERDE_5$. On the... | As will be discussed in the next section, when classifying a subject in a streaming-like way, the execution cost of each classifier for each subject is $O(n^2)$ with respect to the total number of the subject’s writings, $n$ ... | There, in the last column, the time that each classifier required to classify all the subjects in the test set is also included. As we can see, SS3 obtained the best $F_1$ and ERDE values for all the considered $o$ values except for $ERDE_5$. On the... | It is worth mentioning that with this simple mechanism it would be fairly straightforward to justify, when needed, the reasons for the classification by using the values of the confidence vectors in the hierarchy, as will be illustrated with a visual example at the end of Section 5. Additionally, the classification is also i... | However, as we will discuss further in the next section, SS3 has a more efficient computation time in comparison with the remaining algorithms. For instance, it took SVM more than one hour (73.9 min) to complete the classification of the test set, while it took SS3 a small fraction of it (roughly 5.3%) to carry out the ... | D |
| Table 1 shows the empirical results of different methods under IID data distribution. Figure 3 shows the training curves under IID data distribution. We can observe that each method achieves comparable RCC. As for test accuracy, GMC and DGC (w/ mfm) exhibit comparable performance and outperform the other three methods... | We can find that after a sufficient number of iterations, the parameter in DGC (w/o mfm) can only oscillate within a relatively large neighborhood of the optimal point. Compared with DGC (w/o mfm), the parameter in GMC converges closer to the optimal point and then remains stable. Figure 2(a) shows the distances to the... | Table 1 shows the empirical results of different methods under IID data distribution. Figure 3 shows the training curves under IID data distribution. We can observe that each method achieves comparable RCC. As for test accuracy, GMC and DGC (w/ mfm) exhibit comparable performance and outperform the other three methods... | Table 2 and Figure 4 show the performance under non-IID data distribution. We can find that GMC can achieve much better test accuracy and faster convergence speed compared to other methods. Furthermore, we can find that the momentum factor masking trick will severely impair the performance of DGC under non-IID data dis... | We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use a Dirichlet distribution with parameter 0.1 to partition the training data, as in (Hsu et al., 2019; Lin et al., 2021). We ado... | C |
| We then defined SANs, which have minimal structure and, with the use of sparse activation functions, learn to compress data without losing important information. Using Physionet datasets and MNIST, we demonstrated that SANs are able to create high quality representations with interpretable kernels. | We then defined SANs, which have minimal structure and, with the use of sparse activation functions, learn to compress data without losing important information. Using Physionet datasets and MNIST, we demonstrated that SANs are able to create high quality representations with interpretable kernels. | Applying dropout at the activations in order to correct weights that have overshot, especially when they are initialized with high values. However, the effect of dropout on SANs would generally be negative, since SANs have far fewer weights than DNNs and thus need less regularization. | During supervised learning the weights of the kernels are frozen and a one-layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs. The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs, and categorical cross-entropy as... | From the point of view of Sparse Dictionary Learning, SANs kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input the kernels of SANs are ECG beats) and the sparse activation map as the representation. The fact that SANs are wide... | B |
In summary, our work differs significantly from each of the above-mentioned works, and other literatures in UAV ad-hoc networks. As far as we know, our proposed algorithm is capable of learning previous utilities and strategies, achieve NE with restricted information and constrained strategies sets, and update strategi... | When there are numbers of UAVs in the network, it is possible for the coverage areas of different UAVs to overlap. When a UAV overlaps with another, they will not support all users but share the mission. The users in the overlaps will be served randomly with equal probability by each UAV. Fig. 2 presents the overlaps b... | Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit... |
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by a UAV is better. | To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR, and coverage size are key factors deciding the performance of a UAV network [6]. Respectively, power control determines the energy consumption and signal-to-noise ratio (SNR) ... | C
$\widehat{U}_{r}' = \overline{\widehat{Dr}} * \overline{U}$ | $\widehat{U}_{z}' = \overline{\widehat{Dz}} * \overline{U}$ | $\overline{U}_{z}' = \overline{\overline{Dz}} * \overline{U}$ | $\overline{U}_{r}' = \overline{\overline{Dr}} * \overline{U}$ | $\widehat{U}_{r}' = \overline{\widehat{Dr}} * \overline{U}$ | A
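The identities above apply transformed derivative operators (the barred $\widehat{Dr}$, $\widehat{Dz}$) to a transformed field $\overline{U}$. As a hedged, generic illustration of this pattern (not the paper's exact operators), a periodic 1-D field can be differentiated by multiplying its Fourier coefficients pointwise by the spectral derivative operator $ik$:

```python
import numpy as np

def fourier_derivative(u):
    """Differentiate periodic samples u on [0, 2*pi) by multiplying each
    Fourier coefficient by i*k, i.e., applying the transformed derivative
    operator pointwise in spectral space."""
    n = u.size
    ik = 1j * np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers times i
    return np.real(np.fft.ifft(ik * np.fft.fft(u)))

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
du = fourier_derivative(np.sin(x))           # approximates cos(x)
```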
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$... | Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality)
by $h_{A}$... | Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it.
Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly | When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$... | $f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\
a&\text{if }u\neq\texttt{null},v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$ | B
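The case analysis of $f_A = f_B$ in this row translates directly into code. A minimal sketch, with illustrative numeric stand-ins for the abstract values $a$ and $b$; note that reflexivity is relaxed on null, so `f(null, null)` returns `b` rather than 1:

```python
NULL = None  # sentinel standing in for the null value

def comparability(u, v, a=0.5, b=0.5):
    """Comparability function f_A = f_B from the case analysis; a and b are
    illustrative abstract values in [0, 1]."""
    if u is not NULL and u == v:
        return 1.0   # u = v != null
    if u is not NULL and v is not NULL:
        return a     # both present but different
    if u is NULL and v is NULL:
        return b     # both absent: possibly equal, possibly not
    return 0.0       # one value present, the other absent
```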
$\theta_{i}$ and $\theta_{i}^{-}$ are the parameters of the network and the target network at iteration $i$, respectively. The target netw...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a fundamentally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein...
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and widely differing predictions on the learning trajectory through different episodes b... | This phenomenon introduces a positive bias that may lead to asymptotically sub-optimal policies, distorting the cumulative rewards. The majority of analytical and empirical studies suggest that overestimation typically stems from the max operator used in the Q-learning value function. Additionally, the noise from appro... | Figure 5 demonstrates that using Dropout methods in DQN reduces the overestimation relative to the optimal policy. Although the Gridworld environment does not suffer from severe overestimation that could distort the overall cumulative rewards, reducing overestimation leads to more accurate predictions.
| C |
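The target-network mechanism referenced in this row ($\theta_i^-$ frozen between periodic copies of $\theta_i$, TD targets computed with the frozen copy) can be sketched in numpy; the linear Q-function and dimensions here are illustrative stand-ins for a DQN:

```python
import numpy as np

class LinearQ:
    """Toy linear Q-function over 4 actions (illustrative stand-in for a DQN)."""
    def __init__(self, rng):
        self.w = rng.standard_normal((8, 4))
    def __call__(self, states):
        return states @ self.w

def dqn_targets(q_target, rewards, next_states, gamma=0.99):
    """TD targets y = r + gamma * max_a Q(s', a; theta_minus), computed with
    the frozen target-network parameters theta_minus."""
    return rewards + gamma * np.max(q_target(next_states), axis=1)

rng = np.random.default_rng(0)
online, target = LinearQ(rng), LinearQ(rng)
target.w = online.w.copy()            # periodic hard update: theta_minus <- theta
next_states = rng.standard_normal((32, 8))
y = dqn_targets(target, rng.standard_normal(32), next_states)
```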
Creating large 2D and 3D publicly available medical benchmark datasets for semantic image segmentation such as the Medical Segmentation Decathlon (Simpson et al., 2019). Medical imaging datasets are typically much smaller in size than natural image datasets (Jin et al., 2020), and the curation of larger public dataset... |
A possible solution to address the paucity of sufficient annotated medical data is the development and use of physics based imaging simulators, the outputs of which can be used to train segmentation models and augment existing segmentation datasets. Several platforms (Marion et al., 2011; Glatard et al., 2013) as well... |
Because of the large number of imaging modalities, the significant signal noise present in modalities such as PET and ultrasound, and the limited amount of medical imaging data (mainly due to high acquisition costs compounded by legal, ethical, and privacy issues), it is difficult to develop universal solutio... | Guo et al. (2018) provided a review of deep learning based semantic segmentation of images and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic... | Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr... | A
Fig. 12 shows the result of the NDP coarsening procedure on the 6 types of graphs.
The first column shows the subset of nodes of the original graph that are selected ($\mathcal{V}^{+}$, in red) and discarded ($\mathcal{V}^{-}$... | In every example, for small values of $\epsilon$ the structure of the graphs changes only slightly while a large number of edges is dropped.
Notably, the spectral similarity increases almost linearly with $\epsilon$, while the edge density decreases exponentially. | In Sec. IV-E we introduced the spectral similarity distance to quantify how much the spectrum of the Laplacian associated with the sparsified adjacency matrix changes when edges smaller than $\epsilon$ are dropped.
In Fig. 13 we show how the graph structure (in terms of spectral similarity) varies, when ... | In Sec. IV-E we introduced the spectral similarity distance to quantify how much the spectrum of the Laplacian associated with the sparsified adjacency matrix changes when edges smaller than $\epsilon$ are dropped.
In Fig. 13 we show how the graph structure (in terms of spectral similarity) varies, when ... | In every example, for small values of $\epsilon$ the structure of the graphs changes only slightly while a large number of edges is dropped.
Notably, the spectral similarity increases almost linearly with $\epsilon$, while the edge density decreases exponentially. | B
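The sparsify-and-compare loop described in this row (drop edges with weight below $\epsilon$, then measure how the Laplacian spectrum moves) can be sketched as follows; the max eigenvalue gap used here is only a stand-in for the paper's spectral similarity distance of Sec. IV-E:

```python
import numpy as np

def sparsify(adj, eps):
    """Drop every edge whose weight is below eps."""
    out = adj.copy()
    out[out < eps] = 0.0
    return out

def laplacian_spectrum(adj):
    """Eigenvalues of the combinatorial Laplacian L = D - A."""
    return np.linalg.eigvalsh(np.diag(adj.sum(axis=1)) - adj)

rng = np.random.default_rng(1)
w = rng.random((20, 20))
adj = np.triu(w, 1)
adj = adj + adj.T                      # symmetric weighted graph
sparse = sparsify(adj, eps=0.2)
# stand-in spectral distance: largest gap between the two spectra
dist = float(np.max(np.abs(laplacian_spectrum(adj) - laplacian_spectrum(sparse))))
```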
The following analyses are shown exemplarily on the Soybean dataset. This dataset has 35 features and 19 classes. First, we analyze the generated data with a fixed number of decision trees, i.e., the number of sampled decision trees in $RF_{\text{sub}}$...
NRFI uniform and NRFI dynamic sample the number of decision trees for each data point uniformly and via the automatically optimized confidence distribution (see Section 4.1.4), respectively. The confidence distributions for both sampling modes are visualized in the second column of Figure 5. Additionally, sampling random data po... | This shows that neural random forest imitation is able to generate significantly better data samples based on the knowledge in the random forest.
NRFI dynamic improves the performance by automatically optimizing the decision tree sampling and generating the largest variation in the data. | Probability distribution of the predicted confidences for different data generation settings on Soybean with 5 (top) and 50 samples per class (bottom). Generating data with different numbers of decision trees is visualized in the left column. Additionally, a comparison between random sampling (red), NRFI unifo... | The analysis shows that random data samples and uniform sampling have a bias to generate data samples that are classified with high confidence.
NRFI dynamic automatically balances the number of decision trees and achieves an evenly distributed data distribution, i.e., generates the most diverse data samples. | A
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt... |
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... |
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient... | for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al....
As expected, the test accuracy increases gradually with higher bit widths, while the throughput decreases accordingly.
Following the Pareto front starting from the bottom right indicates that the best performing models use a combination of 1 bit for the weights and a gradual increase of activations up to 3 bits. | Afterwards the models perform best if the weights are scaled to 2 bits and the activation bit width is further increased to 4 bits.
This supports the observation of the previous sections, showing that model accuracy is sensitive to activation quantization rather than weight quantization. | Targeting the same problem, Lin et al. (2023) introduced activation-aware weight quantization, which exploits the fact that the weights of large language models are not equally important.
They propose to guide the selection of important weights by activation magnitudes (rather than weight magnitudes) and protecting sal... | This architecture is identical to the original ResNet model except that it is scaled in width rather than depth.
Additionally, we create a DenseNet variant for this experiment which is scaled in depth to 28 layers and the width is varied until it approximately matches the number of parameters and computations of the WR... | Quantized DNNs with 1-bit weights and activations are the worst performing models, which is due to the severe implications of extreme quantization on prediction performance.
As can be seen, however, the overall performance of the quantized models increases considerably when the bit width of activations is increased to ... | A |
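The accuracy/bit-width trade-off discussed in this row can be reproduced qualitatively with plain post-training uniform quantization (the models above use trained quantization, so this is only a hedged sketch of the bit-width effect):

```python
import numpy as np

def quantize_uniform(x, bits):
    """Uniform quantization of x to 2**bits levels over [min(x), max(x)]."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    return lo + np.round((x - lo) / step) * step

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
# mean squared quantization error shrinks as the bit width grows
errors = {b: float(np.mean((w - quantize_uniform(w, b)) ** 2)) for b in (1, 2, 4, 8)}
```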
Let $(X,d_{X})$ be a compact ANR metric space. Then, there exists $r(X)>0$ such that $\mathrm{VR}_{r}(X)$ is... | A problem of interest in the area of persistent homology is that of deciding how much information from a metric space is captured by its associated persistent homology invariants. One basic (admittedly imprecise) question that we posed on page 1 is:
| One of the insights leading to the notion of persistent homology associated to metric spaces was considering neighborhoods of a metric space in a nice (for example Euclidean) embedding [71]. In this section we formalize this idea in a categorical way.
|
The notion of persistent homology arose from work by Frosini, Ferri, and Landi [40, 41], Robins [74], and Edelsbrunner [27, 37] and collaborators. After that, considering the persistent homology of the simplicial filtration induced from Vietoris-Rips complexes was a natural next step. For example, Carlsson and de Silv... | From a different perspective, by appealing to our isomorphism theorem, it is also possible to apply certain results from quantitative topology to the problem of characterization of metric spaces by their Vietoris-Rips persistence barcodes. In applied algebraic topology, a general question of interest is:
| A |
The third option—Dimension Correlation—provides a tool for the user to check the hypothesis that a visual pattern, as observed, is strongly correlated to a pattern in the high-dimensional space (Subsection 4.4). The final mode—Reset Filters—removes every filter applied with the previously-described interaction modes.
|
To complement the main view, the Overview (Figure 1(b)) shows the static t-SNE projection and serves as a contextual anchor that is independent of the interactions and/or filters applied to the main view. Data-specific labels (when those exist) are shown using a categorical colormap, along with simple statistics about... |
Clustervision [51] is a visualization tool used to test multiple batches of a varying number of clusters and allows the users to pick the best partitioning according to their task. Then, the dimensions are ordered according to a cluster separation importance ranking. As a result, the interpretation and assessment of t... |
The results (i.e., relevances of each dimension) are finally shown in an interactive horizontal bar chart (Figure 1(j)), where the dimensions are sorted from top to bottom according to relevance (with the most relevant on the top). While the relevance is computed using the absolute value of the correlation, we decided... | Figure 1: Visual inspection of t-SNE results with t-viSNE: (a) a panel for uploading data sets, choosing between two execution modes (grid search or a single set of parameters), and storing new (or loading previous) executions; (b) overview of the results with data-specific labels encoded with categorical colors; (c) t... | A |
From a positive perspective, bio-inspired algorithms have been regularly used in AI and real-world applications. These algorithms hold potential in new scientific avenues, contributing to recent advances in DL evolution [8], the design of large language models (LLM) [627], and more recently, the design and enrichment of GPA... | From a positive perspective, bio-inspired algorithms have been regularly used in AI and real-world applications. These algorithms hold potential in new scientific avenues, contributing to recent advances in DL evolution [8], the design of large language models (LLM) [627], and more recently, the design and enrichment of GPA...
Both taxonomies and the analysis provide a full overview of the state of the bio-inspired optimization field. However, Figure 1 reflects the research interest in this field, as the number of papers is in continuous growth. We believe that it is essential to highlight and reflect on what is expected ...
As mentioned in the abstract, this fifth and final version of this series of documents ends with an analysis that addresses the double vision of a wide range of proposals which, after five years of analysis, must be said to border on a lack of engagement with real problems and useful proposals, and o...
In the last update of this report, which is herein released 4 years after its original version, we note that there has been an evolution within the nature and bio-inspired optimization field. There is an excessive use of the biological approach as opposed to the real problem-solving approach to tackle real and complex... | D |
To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, $T$, is set to 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo... | As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method.
Graph-based clustering methods can capture manifold information and are thus applicable to non-Euclidean data, a capability that $k$-means does not provide. Therefore,... | Classical clustering models work poorly on large-scale datasets, whereas DEC and SpectralNet work better on them. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph-type datasets, they fail on general datasets, which is probably caused by the fact that the graph...
(3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while other deep clustering models usually fail when the training set is not large enough. Besides, it is insensitive to the initialization of parameters and needs no pretraining. | (1) By extending generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec... | B
False negatives in our measurements mean that a network that does not perform filtering of spoofed packets is not marked as such. We next list the causes of false negatives for each of our three techniques. Essentially the false negatives cannot be resolved, and therefore our measurement results of networks that enforc... |
IPID technique. Load balancing can introduce a challenge in identifying whether a given network enforces ingress filtering. As a result of load balancing our packets will be split between multiple instances of the server, hence resulting in low IPID counter values. There are different approaches for distributing the l... | There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th... |
Each IP packet contains an IP Identifier (IPID) field, which allows the recipient to identify fragments of the same original IP packet. The IPID field is 16 bits in IPv4, and for each packet the Operating System (OS) at the sender assigns a new IPID value. There are different IPID assignment algorithms which can be ca... |
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the... | A |
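The arithmetic underlying the IPID methodology in this row is a simple modular difference: two samples of a globally incremental 16-bit counter bound the number of packets the server emitted in between. A minimal sketch (the probing and spoofed-traffic details of the study are omitted):

```python
def packets_between(ipid_before, ipid_after):
    """Number of packets a host with a single, globally incremental 16-bit
    IPID counter sent between two probe responses, accounting for
    wrap-around of the 16-bit field."""
    return (ipid_after - ipid_before) % (1 << 16)
```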
$$\begin{split}\mathbf{s}&=\mathrm{ReLU}(\mathbf{W}_{xs}\cdot\mathbf{x}+\mathbf{b}_{s}),\\
\mathbf{d}&=\mathrm{ReLU}(\cdots\cdot\mathbf{h}_{p-1}+\mathbf{b}_{d}),\\
\hat{\mathbf{y}}&=\mathbf{W}_{dy}\cdot\mathbf{d}+\mathbf{b}_{y}.\end{split}$$ | TABLE I: Mean generalization accuracy. Listed is the classification accuracy (correct / total) of various models evaluated on the unseen testing data, i.e., batch $T$. The values represent the average accuracy over 30 trials. The final column lists the mean of the values for batches 3 through 10. A bolded valu...
The second comparison is between the weighted ensembles of SVMs, i.e., the state of the art [7], and the weighted ensembles of neural networks. For each batch, an SVM and a neural network were trained with that batch as the training set. Weighted ensembles were constructed for each batch $T$ by assigning weig...
For each batch $T$ from 3 through 10, the batches $1,2,\ldots,T-1$ were used to train skill NN and context+skill NN models for 30 random initializations of the starting weights. The accuracy was measured classifying examples from batch $T$ (Fig. 3A, Table 1, Skill... | Figure 3: Generalization accuracy. The generalization accuracy of each model was evaluated on batch $T$. For each model type and every batch, 30 models were trained. The line represents the average over the 30 trials, and the error bar is the 95% confidence interval. (A.) The skill and context+skill models are... | A
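The weighted-ensemble combination described in this row can be sketched in numpy; the per-model weights here are illustrative, whereas the study assigns them per batch:

```python
import numpy as np

def weighted_ensemble(prob_list, weights):
    """Combine per-model class probabilities with normalized weights and
    return the predicted class for each example."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    combined = np.tensordot(weights, np.stack(prob_list), axes=1)
    return np.argmax(combined, axis=1)

# two toy 'models' that disagree on the first example; the higher-weighted wins
p1 = np.array([[0.9, 0.1], [0.2, 0.8]])
p2 = np.array([[0.4, 0.6], [0.3, 0.7]])
pred = weighted_ensemble([p1, p2], weights=[3.0, 1.0])
```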
Third, we gave a $2^{O(\delta^{1-1/d})}n$ expected-time algorithm for random point sets. | Let $Y_{n}$ be a random point set of $n$ points in $\mathbb{R}^{d}$, where the spacings $\Delta_{i}=x_{i+1}-x_{i}$ ...
Let $X_{n}$ be a random point set of $n$ points in $\mathbb{R}^{d}$, where the $x$-coordinates of the points are taken independently uniformly at random fro... | The proof also gives a way to relate the expected running times of algorithms for any problem on two different kinds of random point sets:
a version where the $x$-coordinates of the points are taken uniformly at random from $[0,n]$, and a version where the differences between two consecut... | Random point sets.
In the third scenario the points in $P$ are drawn independently and uniformly at random from the hypercylinder $[0,n]\times\mathrm{Ball}^{d-1}(\delta/2)$ ... | C
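Sampling the third scenario's point set, uniform in the hypercylinder $[0,n]\times\mathrm{Ball}^{d-1}(\delta/2)$, can be sketched with rejection sampling for the ball component; this is a standard approach, the text does not prescribe a particular sampler:

```python
import numpy as np

def random_hypercylinder_points(n, delta, d, rng):
    """Draw n points uniformly from [0, n] x Ball^{d-1}(delta/2): the first
    coordinate is uniform on [0, n]; the remaining d-1 coordinates are drawn
    uniformly from a ball by rejection from the enclosing cube."""
    pts = np.empty((n, d))
    pts[:, 0] = rng.uniform(0.0, n, size=n)
    for i in range(n):
        while True:
            y = rng.uniform(-delta / 2, delta / 2, size=d - 1)
            if np.linalg.norm(y) <= delta / 2:   # accept only points inside the ball
                pts[i, 1:] = y
                break
    return pts

pts = random_hypercylinder_points(100, delta=1.0, d=3, rng=np.random.default_rng(0))
```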
The free product of two semigroups $R=\langle P\mid\mathcal{R}\rangle$ and $S=\langle Q\mid\mathcal{S}\rangle$
(with $P\cap Q=\emptyset$) is the semigroup with pres... |
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions presenting free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. While t... | While the question of which free groups and semigroups can be generated using automata is settled, there is a related natural question which is still open: is the free product of two automaton/self-similar (semi)groups again an automaton/self-similar (semi)group? The free product of two groups or semigroups $X=\langle P\mid\mathcal{R}\rangle$... | Note that there is a difference between the free product in the category of semigroups and the free product in the category of monoids or groups.
In particular, in the semigroup free product (which we are exclusively concerned with in this paper) there is no amalgamation over the identity element of two monoids. Thus, ... | While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there ... | C |
As shown in Table 1, we present results when this loss is used on: a) Fixed subset covering 1% of the dataset, b) Varying subset covering 1% of the dataset, where a new random subset is sampled every epoch, and c) 100% of the dataset. Confirming our hypothesis, all varian... | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... | It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in ... | While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented...
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the p... | B |
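The "zero out the ground-truth answers" regularizer described in this row can be sketched as a binary cross-entropy against an all-zero target, which penalizes the model whether or not its answer was correct. This formulation is an assumption for illustration, not necessarily the exact loss used in the paper:

```python
import numpy as np

def zeroed_answer_loss(logits):
    """Binary cross-entropy against an all-zero target: every positive
    answer score is penalized, regardless of the true answer."""
    p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid score per answer
    return float(-np.mean(np.log(1.0 - p + 1e-12)))

loss_confident = zeroed_answer_loss(np.array([4.0, 3.0, 5.0]))    # confident answers
loss_uncertain = zeroed_answer_loss(np.array([-4.0, -3.0, -5.0])) # suppressed answers
```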
Table 2 shows the results for the data practice classification task comparing the performance between RoBERTa, PrivBERT and Polisis (Harkous et al., 2018), a CNN based classification model. We report reproduced results for Polisis since the original paper takes into account both the presence and absence of a label whil... |
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da... |
The 1,600 labelled documents were randomly divided into 960 documents for training, 240 documents for validation and 400 documents for testing. Using 5-fold cross-validation, we tuned the hyperparameters for the models separately with the validation set and then used the held-out test set to report the test results. D... |
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application... | Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague word... | C |
The use of visualization for ensemble learning could possibly introduce further biases to the already blurry situation based on the different ML models involved. Thus, the thorough selection of both interaction techniques and visual representations that highlight and potentially overcome any cognitive biases is a major... | Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis. (a) presents the selection of appropriate validation metrics for the specification of the data set. (b) aggregates the information after the exploration of different models and shows the active ones which will be used for th... | T1: Search the solution space for the most suitable algorithms, data, and models for the task.
Some of the major challenges of stacking are the choice of the most suitable algorithms and models, the data processing necessary for the selected models, further improvements for the models, and reduction of the complexity o... | Predictions’ Space.
The goal of the predictions’ space visualization (StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(f)) is to show an overview of the performance of all models of the current stack for different instances. | The model exploration phase is perhaps the most important step on the way to build a good ensemble. It focuses on comparing and exploring different models both individually and in groups. Due to the page limits, we now assume that we selected the most performant models, removed the remaining from the stack, and reached... | B |
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | $(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))$,
$(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))$... | cannot be adjacent to $\overline{2}$ nor $\overline{3}$,
and so $f'$ is $[013]$ or $[010]$. | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | D
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personali... | In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | In the field of Natural Language Processing (NLP), the abundance of training data plays a crucial role in the performance of deep learning models [Dodge et al., 2021]. However, numerous NLP applications face a substantial challenge due to the scarcity of annotated data [Schick and Schütze, 2021]. For example, in person...
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as “tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla ... | We use Transformer [Vaswani et al., 2017] as the base model in the dialogue generation experiment.
In Persona, we use pre-trained GloVe embeddings [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019]. | D
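The episodic meta-training loop this row describes (adapt to each user on a support set, then update the shared initialization on query sets) can be sketched as a first-order MAML step. Everything here (the linear model, squared loss, and function names) is an illustrative assumption, not the papers' actual setup:

```python
import numpy as np

def loss_grad(theta, X, y):
    # Gradient of mean squared error for a linear model y ~ X @ theta.
    return 2 * X.T @ (X @ theta - y) / len(y)

def fomaml_step(theta, tasks, inner_lr=0.01, outer_lr=0.01, inner_steps=1):
    """One first-order MAML meta-update over a batch of tasks.
    Each task is ((X_support, y_support), (X_query, y_query))."""
    meta_grad = np.zeros_like(theta)
    for (Xs, ys), (Xq, yq) in tasks:
        adapted = theta.copy()
        for _ in range(inner_steps):              # inner loop: adapt per task
            adapted -= inner_lr * loss_grad(adapted, Xs, ys)
        # Outer gradient, first-order approximation (no second derivatives).
        meta_grad += loss_grad(adapted, Xq, yq)
    return theta - outer_lr * meta_grad / len(tasks)
```

Full MAML differentiates through the inner loop; the first-order variant above drops those second-order terms, which is a common practical simplification.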
In this paper, we consider a dynamic mission-driven UAV network with UAV-to-UAV mmWave communications, wherein multiple transmitting UAVs (t-UAVs) simultaneously transmit to a receiving UAV (r-UAV). In such a scenario, we focus on inter-UAV communications in UAV networks, and the UAV-to-ground communications are not in... |
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da... |
The specialized codebook design of the DRE-covered CCA for multi-UAV mobile mmWave communications. Under the guidance of the proposed framework, a novel hierarchical codebook is designed to encompass both the subarray patterns and beam patterns. The newly proposed CA codebook can fully exploit the potentials of the DR... | For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac... |
When considering UAV communications with UPA or ULA, a UAV is typically modeled as a point in space without considering its size and shape. Actually, the size and shape can be utilized to support a more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduce... | A
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | After the merging, the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$.
We perform the... | The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges.
The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from | The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | B |
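The 1-color criterion above (the edge counts from either side of the bipartite graph must agree) and the "distributing then merging" construction admit a tiny sketch. This assumes the multigraph setting, where parallel edges are allowed, consistent with the later swapping step that eliminates them:

```python
def realizable_one_color(left_deg, right_deg):
    # 1-color case: a bipartite multigraph with the given degree sequences
    # exists iff both sides count the same total number of edges.
    return sum(left_deg) == sum(right_deg)

def greedy_witness(left_deg, right_deg):
    """Distribute edges greedily (the 'distributing' idea): returns
    (i, j, multiplicity) triples realizing the two degree sequences."""
    assert realizable_one_color(left_deg, right_deg)
    left, right = list(left_deg), list(right_deg)
    edges = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] == 0:
            i += 1
            continue
        if right[j] == 0:
            j += 1
            continue
        m = min(left[i], right[j])     # saturate one endpoint at a time
        edges.append((i, j, m))
        left[i] -= m
        right[j] -= m
    return edges
```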
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et... |
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... |
corresponding to $\theta^{(m)}(k)=(\theta_{1}(k),\ldots,\theta_{m}(k))\in\mathbb{R}^{D\times m}$... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | B
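The TD updates discussed in this row can be illustrated in the linear special case. The row itself concerns overparameterized two-layer networks; the semi-gradient TD(0) sketch below is only the textbook linear simplification, with all names illustrative:

```python
import numpy as np

def td0_linear(features, rewards, next_features, theta, alpha=0.05, gamma=0.9):
    """Semi-gradient TD(0) with linear value approximation V(s) = phi(s) @ theta.
    One pass over a batch of transitions (phi, r, phi')."""
    for phi, r, phi_next in zip(features, rewards, next_features):
        td_error = r + gamma * phi_next @ theta - phi @ theta
        # Semi-gradient: the bootstrapped target phi' @ theta is held fixed.
        theta = theta + alpha * td_error * phi
    return theta
```

On a single self-looping state with reward 1 and discount 0.9, repeated sweeps drive the estimate to the true value 1/(1-0.9) = 10, which is a quick sanity check for the update.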
We implemented our approach based on the Neutron implementation of the Transformer Xu and Liu (2019). To show the effects of depth-wise LSTMs on the 6-layer Transformer, we first conducted experiments on the WMT 14 English to German and English to French news translation tasks to compare with the Transformer baseline ... | We applied joint Byte-Pair Encoding Sennrich et al. (2016) with $32k$ merging operations on all data sets to address the unknown word issue. We only kept sentences with a maximum of 256 subword tokens for training. For fair comparison, we did not tune any hyperparameters but followed Vaswani e...
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ... | For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et... |
To test the effectiveness of depth-wise LSTMs in the multilingual setting, we conducted experiments on the challenging massively many-to-many translation task on the OPUS-100 corpus Tiedemann (2012); Aharoni et al. (2019); Zhang et al. (2020). We tested the performance of 6-layer models following the experiment settin... | A |
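The depth-wise LSTM idea (an LSTM cell running across layer depth in place of residual connections, so each layer's output is aggregated by a recurrent state) can be sketched as follows. The cell layout, initialization, and dimensions are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell; here the 'time' axis is layer depth, so the cell
    fuses sub-layer outputs across the stack instead of residual addition."""
    def __init__(self, d, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, (4 * d, 2 * d))  # gates over [input, hidden]
        self.b = np.zeros(4 * d)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

def run_depthwise(sublayer_outputs, cell):
    """Feed each layer's sub-layer output into the depth-wise LSTM;
    the returned h plays the role of that layer's final representation."""
    d = len(sublayer_outputs[0])
    h, c = np.zeros(d), np.zeros(d)
    for x in sublayer_outputs:   # iterate over layers, not tokens
        h, c = cell.step(x, h, c)
    return h
```

The gating lets the network learn how much of each layer's computation to keep, which is the claimed mechanism for stabilizing very deep stacks without residual connections.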
$\llbracket\varphi\rrbracket_{Z}=\bigcup_{i\in I}\llbracket\psi_{i}\rrbracket_{Z}\cup\bigcup_{i<j\in I}\llbracket\theta_{i,j}\rrbracket_{Z}$ | $\left\langle X,\uptau,\mathcal{L}\right\rangle$.
Hence $\langle\mathcal{L}^{\prime}\rangle=\langle\uptau\cap\mathcal{L}\rangle$. | $\uptau_{\mathcal{L}}\triangleq\langle\mathcal{L}\cup\left\{U^{c}\mid U\in\mathcal{L}\right\}\rangle$ | $\left\langle Y,\uptau,\mathcal{L}_{Y}\right\rangle$. If $\mathcal{L}^{\prime}$ is a
sublattice of $\mathcal{L}$ and $\mathcal{L}^{\prime}_{X}$... | $\mathcal{L}$ of $\wp(Z)$, we write $\mathcal{L}_{X}\triangleq\{U\cap X\mid U\in\mathcal{L}\}$ for the lattice induce... | B
The comparison results of the real distorted image are shown in Fig. 13. We collect the real distorted images from the videos on YouTube, captured by popular fisheye lenses, such as the SAMSUNG 10mm F3, Rokinon 8mm Cine Lens, Opteka 6.5mm Lens, and GoPro. As illustrated in Fig. 13, our approach generates the best rect... |
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify... |
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, including the highest metrics on PSNR and SSIM, as well as the lowest metric on MDLD. Specifically, compared with the traditional methods [23, 24] based on the hand-crafted features, our approach overcomes the scene l... |
In this work, we presented a new learning representation for the deep distortion rectification and implemented a standard and widely-used camera model to validate its effectiveness. The rectification results on the synthesized and real-world scenarios also demonstrated our approach’s superiority compared with the stat... | In this section, we first state the details of the synthetic distorted image dataset and the training process of our learning model. Subsequently, we analyze the learning representation for distortion estimation. To demonstrate the effectiveness of each module in our framework, we conduct an ablation study to show the ... | C |
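A common one-parameter camera model in this line of work is the division model; a small undistortion sketch follows. The division-model form is an assumption here, and may differ from the exact model that reference [22] in the row uses:

```python
import numpy as np

def undistort_points(points, lam):
    """One-parameter division model: an undistorted point is
    p_u = p_d / (1 + lam * ||p_d||^2), with p_d in normalized image
    coordinates centred on the principal point."""
    pts = np.asarray(points, dtype=float)
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)  # squared radius per point
    return pts / (1.0 + lam * r2)
```

With lam = 0 the mapping is the identity; negative lam pushes points outward, which models the barrel distortion typical of fisheye lenses.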
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD.
In large-batch training, SNGM achieves better training loss and test accuracy than the fou... | Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b... | Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD.
In large-batch training, SNGM achieves better training loss and test accuracy than the fou... | Hence, with the same number of gradient computations, SNGM can adopt a larger batch size than MSGD to converge to the $\epsilon$-stationary point.
Empirical results on deep learning further verify that SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods... | Table 3 shows the training time per epoch of SNGM with different batch sizes. When $B=128$, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with the size of 16, which cannot fully utilize the computation power. Hence, compared to other results, SNGM r... | D
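The update underlying this row can be sketched as normalized momentum SGD: keep a momentum buffer but scale the step by the buffer's norm, so the step length stays controlled even with large-batch gradients. This is a sketch in the spirit of SNGM; the exact normalization the paper uses may differ:

```python
import numpy as np

def sngm_step(w, grad, buf, lr=0.1, beta=0.9, eps=1e-12):
    """One normalized-momentum step (illustrative; not the paper's exact rule).
    buf is the momentum buffer; the update direction is buf normalized."""
    buf = beta * buf + grad
    w = w - lr * buf / (np.linalg.norm(buf) + eps)
    return w, buf
```

Because the step length is (almost) always lr regardless of gradient scale, the rule is insensitive to the gradient-norm blow-ups that destabilize plain momentum SGD at large batch sizes.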
5-approximation for homogeneous 2S-MuSup-Poly, with $|\mathcal{S}|\leq 2^{m}$ and runtime $\operatorname{poly}(n,m,\Lambda)$.
| If we have a $\rho$-approximation algorithm for AlgRW for given $\mathcal{C},\mathcal{F},\mathcal{M},R$, then we can get an efficiently-generalizable $(\rho+2)$-approximation algorithm for the corresponding problem $\mathcal{P}$...
We follow up with 3-approximations for the homogeneous robust outlier MatSup and MuSup problems, which are slight variations on algorithms of [6] (specifically, our approach in Section 4.1 is a variation on their solve-or-cut methods). In Section 5, we describe a 9-approximation algorithm for an inhomogeneous MatSu... | The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any $\rho$-approximation algorithm for the robust outlier problem into a $(\rho+2)$-approximation algorithm for the corresponding two-stage sto... | We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a... | C
That is, the mean square error at the next time can be controlled by that at the
previous time and the consensus error. However, this cannot be obtained for the case with the linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded and the inequality (28) in [15] does n... | As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error, which leads the nonnegative supermartingale converg... | I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition.
The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi... | (Lemma 3.1).
To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (... |
III. The co-existence of random graphs, subgradient measurement noises, and additive and multiplicative communication noises is considered. Compared with the case with only a single random factor, the coupling terms of different random factors inevitably affect the mean square difference between optimizers’ states and an... | A
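The distributed subgradient scheme analyzed in this row can be written, in its standard noise-free template, as a consensus averaging step followed by a local subgradient step. The weight matrix and step size below are generic assumptions; the row's actual setting adds random graphs and measurement/communication noises on top of this skeleton:

```python
import numpy as np

def consensus_subgradient_step(X, W, subgrads, alpha):
    """One round of consensus + subgradient descent: each agent averages
    neighbours' states through the doubly-stochastic weight matrix W,
    then steps along its local subgradient (noise-free template)."""
    return W @ X - alpha * subgrads
```

For local costs f_i(x) = |x - t_i|, the subgradient is sign(x - t_i), and the iterates settle into the minimizing interval of the sum while the agents' states reach consensus.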
attribute_weights denotes the weights of QI attributes, and each weight is calculated by dismax(attr... |
This section evaluates the effectiveness of the proposed MuCo algorithm. We apply Mondrian [14], which is one of the most effective generalization approaches, and Anatomy [33], which always preserves the best information utility, as the baselines. We use the US Census data [29], eliminate the tuples with missing value... | Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ... | However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, so that the equivalence groups only preserv... | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i... | A
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in the original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an... | Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62.... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in the original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Due to the limited mask representation of HTC, we move on to SOLOv2, which utilizes a much larger mask to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (20... | D
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| We denote by $\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_{i}(\delta_{1},\ldots,\delta_{n})=\delta_{i}$...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma s... | D
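With the coordinate projections $\varepsilon_i$ as defined in this row, the degree-1 Fourier coefficients of a function on $\{-1,1\}^n$ can be computed by brute-force averaging. This is only a small illustrative sketch of the Fourier-analytic setup the conjecture lives in, with hypothetical function names:

```python
import itertools

def fourier_weight_level1(f, n):
    """Degree-1 Fourier coefficients of f: {-1,1}^n -> reals,
    hat{f}(i) = E[f(x) * x_i], i.e. the correlation with eps_i."""
    points = list(itertools.product([-1, 1], repeat=n))
    return [sum(f(x) * x[i] for x in points) / len(points) for i in range(n)]
```

For the dictator function f(x) = x_0, all Fourier mass sits on the first coordinate, which the brute-force computation confirms.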