| context (string, 250–4.63k chars) | A (string, 250–4.99k chars) | B (string, 250–4.17k chars) | C (string, 250–5.14k chars) | D (string, 250–8.2k chars) | label (4 classes) |
|---|---|---|---|---|---|
$x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)=\left[nx^{2}(n+D)-m(D-2+m)\right]R_{n}^{m}(x)+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).$ … | $x^{3}(x^{2}-1)^{2}\frac{d^{3}}{dx^{3}}R_{n}^{m}(x)=\dots\big[\dots{}^{2}-m^{2}\big]x^{2}+D^{2}+D(m-1)-2m+m^{2}\big\}\frac{d}{dx}R_{n}^{m}(x).$ … | $+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).$ | $\left[\left(n(n+D)-\frac{m(D-2+m)}{x^{2}}\right)\frac{R_{n}^{m}(x)}{{R_{n}^{m}}'(x)}+\frac{D-1-(D+1)x^{2}}{x}\right].$ … | $\int_{0}^{1}x^{D-1}R_{n}^{m}(x)R_{n'}^{m}(x)\,dx\propto\delta_{n,n'}.$ … | B |
The sets $T_2$ and $T_3$ are computed as described above in preparation for the first column clearing stage, but are subsequently computed via the recursion (3) (with increased memory quota relati… | To aid the exposition and analysis, Algorithm 3 refers to several subroutines, namely Algorithms 4–7. In an implementation, the code for Algorithms 4–7 would be inserted into Algorithm 3 at the lines where they are called. We present them as subroutines here to improve the readability of Algorithm 3. However, we ass… | The case where $d$ is even is very similar, but requires a few changes that would complicate the pseudocode. So, for the clarity of our exposition, we analyse the case of $d$ odd here and then explain the differences for the case of $d$ even in the next subsection. | Although the described modifications are not complicated in and of themselves, they would introduce noticeable complications into our pseudocode, and hence we have chosen to separate the $d$ even case for the sake of clearer exposition, opting to simply point out and explain the changes instead of writing them… | Let us now explain the changes required when $d$ is even. The main issue is that the formula (3) used to compute the sets of transvections $T_i$ recursively throughout our implementation of the algorithm described by Taylor looks two steps b… | C |
It then follows from Lemma 1 that $1\leq\alpha_{i}^{F}\leq\alpha$ for all the local eigenvalues. Thus, $\tilde{\Lambda}_{h}^{\triangle}=\tilde{\Lambda}_{h}^{f}$… | Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case our method is similar to that of [MR3591945], with some differences. First we consider that T… | The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis… | As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput… | The key to approximating (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That al… | A |
Moreover, Alg-A is more stable than the alternatives. During the iterations of Alg-CM, the coordinates of three corners and two midpoints of a P-stable triangle (see Figure 37) are maintained. These coordinates are computed numerically, and their true values can differ from the values stored in the computer. Alg-CM uses a… | Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), as does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$'s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code lengths for this is 1:7 between Alg-A and Alg-CM. | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is called "involved" by its own authors, as it contains complicated subroutines for handling many subcases. | D |
As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the results of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8–10 hours. The performance of all-but-CreditScore jiggles a bit afte… | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We trade this off by debunking at the single-tweet le… | at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using convolutional neural networks tha… | We showcase here a study of the Munich shooting. We first show the event timeline at an early stage. Next we discuss some examples of misclassifications by our "weak" classifier and show some analysis of the strength of some highlighted features. The rough event timeline looks as follows. | In this work, we propose an effective cascaded rumor detection approach using deep neural networks at the tweet level in the first stage and the wisdom of the "machines", together with a variety of other features, in the second stage, in order to enhance rumor detection performance in the early phase of an event. The proposed … | C |
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$ … | where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the K-class SVM: | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz… | In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated | where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen… | C |
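The convergence-in-direction claim in this row (the normalized iterate approaching the normalized max-margin solution $\hat{\mathbf{w}}$) can be checked numerically. A minimal sketch, assuming plain gradient descent on logistic loss over a toy separable dataset whose max-margin direction is known; the data and step size are illustrative, not from the excerpted papers:

```python
import numpy as np

# Toy separable dataset: both points push the iterate along (1, 1),
# so the max-margin (hard SVM) direction is w_hat = (1, 1)/sqrt(2).
X = np.array([[1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, -1.0])

w = np.zeros(2)
lr = 0.1
for _ in range(20000):
    margins = y * (X @ w)
    # Gradient of sum_i log(1 + exp(-y_i x_i . w))
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).sum(axis=0)
    w -= lr * grad

w_hat = np.array([1.0, 1.0]) / np.sqrt(2.0)
cos = float(w @ w_hat / np.linalg.norm(w))
print(cos)  # close to 1: the iterate converges in direction to w_hat
```

The norm of `w` keeps growing (logarithmically) while its direction stabilizes, which is exactly the regime the $O(\sqrt{\log\log t/\log t})$ rate describes.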
We use the Levenberg–Marquardt algorithm to learn the parameters of SIS and SEIZ. In each time interval from $t_0$ to $t_n$, we fit the sequenced tweets' volume from the beginning time $t_0$… | But SpikeM cannot fit events with multiple spikes. For that, the external shock term $S(n)$ should occur not once but multiple times. So (kwon2013prominent) extend SpikeM by adding a periodic interaction function for the external shock term $S(n)$. The same approac… | The performance of the user features is similar to that of the Twitter features; they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours in the user feature group is UserTweetsPerDays, and it is the best feature overall in the first 4 hours, but its rank decreases with … | But if we fit the models of the first few hours with limited data, the learned parameters are not very accurate. We show the performance of fitting these two models with only the first 10 hours of tweet volume in Figure 4. As we can see, except for the first one, the fitting results of the other three are not good eno… | As shown in Table 11, CreditScore is the best feature in general. Figure 10 shows the results of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, significantly for the first 8–10 hours. The performance of all-but-CreditScore jiggles a bit afte… | C |
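The Levenberg–Marquardt fitting step this row describes can be sketched with SciPy's `least_squares(method="lm")`. This is an assumed illustration: the logistic curve below is a simplified stand-in for the actual SIS/SEIZ dynamics, and the "tweet volume" data is synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "tweet volume" curve: a logistic-growth stand-in for the
# cumulative infection curve of an SIS-style model (the real SIS/SEIZ
# models and data are not reproduced here).
t = np.arange(48.0)                  # hours since t0
true_params = (1000.0, 0.3, 12.0)    # capacity, rate, midpoint

def model(p, t):
    K, r, t0 = p
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(0)
volume = model(true_params, t) + rng.normal(0.0, 10.0, t.size)

# method="lm" is the Levenberg-Marquardt algorithm mentioned above.
fit = least_squares(lambda p: model(p, t) - volume,
                    x0=(500.0, 0.1, 5.0), method="lm")
print(fit.x)  # should land near true_params
```

Fitting with only the first few hours of data (truncating `t` and `volume`) reproduces the instability the row reports: with little of the curve observed, several parameter combinations fit equally well.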
Results. The baseline and the best results of our $1^{st}$-stage event-type classification are shown in Table 3 (top). The accuracy of the basic majority vote is high for imbalanced classes, yet it is lower in weighted F1. Our learned model achie… | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type … | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall… | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with a non-cascaded logistic regression. The res… | Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming independent loss functions that do not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is learning a linear… | C |
The special case of piecewise-stationary, or abruptly changing, environments has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms in particular. | The special case of piecewise-stationary, or abruptly changing, environments has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms in particular. | with Bernoulli and contextual linear Gaussian reward functions [Kaufmann et al., 2012; Garivier and Cappé, 2011; Korda et al., 2013; Agrawal and Goyal, 2013b], as well as for context-dependent binary rewards modeled with the logistic reward function [Chapelle and Li, 2011; Scott, 2015] – Appendix A.3. | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] to hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015]. | D |
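The Thompson sampling baseline referenced throughout this row has a compact classical form for Bernoulli rewards. A minimal sketch with Beta conjugate priors; the arm means below are illustrative, not taken from the cited papers:

```python
import numpy as np

# Beta-Bernoulli Thompson sampling: sample a mean per arm from the
# posterior, play the apparent best arm, update conjugately.
rng = np.random.default_rng(1)
true_means = np.array([0.3, 0.5, 0.7])
alpha = np.ones(3)   # Beta(1, 1) prior per arm
beta = np.ones(3)

for _ in range(5000):
    theta = rng.beta(alpha, beta)        # posterior sample per arm
    arm = int(np.argmax(theta))          # exploration via sampling
    reward = float(rng.random() < true_means[arm])
    alpha[arm] += reward                 # conjugate Beta update
    beta[arm] += 1.0 - reward

best = int(np.argmax(alpha / (alpha + beta)))
print(best)  # the posterior concentrates on the best arm (index 2)
```

The stationary conjugate update is exactly what the piecewise-stationary variants in the row (e.g. change-point-aware Thompson sampling) modify, typically by discounting or resetting the Beta counts.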
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i… | Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i… | For time delays between carb entries and the next glucose measurements, we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal… | For example, the correlation between blood glucose and carbohydrate for patient 14 was highest (0.47) at no lagging time step (ref. 23(c)), whereas the correlation between blood glucose and insulin was highest (0.28) at lagging time = 4 (ref. 24(d)). | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients. For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17), at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t… | B |
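The lagged-correlation analysis this row reports (e.g. glucose vs. insulin peaking at lag 4) amounts to correlating one series against shifted copies of the other. A minimal sketch on synthetic data; the series, lag range, and the 4-step response are assumptions for illustration:

```python
import numpy as np

# Pearson correlation between x[t] and y[t + lag].
def lagged_corr(x, y, lag):
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.roll(x, 4) + 0.5 * rng.normal(size=500)  # y responds 4 steps later

# Scan a small range of lags and keep the best one.
best_lag = max(range(8), key=lambda k: lagged_corr(x, y, k))
print(best_lag)
```

With a genuine 4-step delay built into `y`, the scan recovers lag 4; on the real patient data the same scan would surface the insulin-response delay the row describes.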
Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity, and the superscript † represents pre-trai… | Table 6: A summary of the quantitative results for the models with (⊕) and without (⊖) an ASPP module. The evaluation was carried out on five eye-tracking datasets respectively. Each network was independently trained 10 times, resulting in a distribution of values characterized b… | We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In cases where a reimplementation was not possible, we faithfully estimated a lowe… | Table 4: The results after evaluating our model with respect to its computational efficiency. We tested five versions trained on different eye-tracking datasets, each receiving input images at their preferred sizes in pixels (px). After running each network on 10,000 test set instances from the ImageNet database for 10… | The images presented during the acquisition of saliency maps in all aforementioned datasets are largely based on natural scenes. Stimuli of CAT2000 additionally fall into predefined categories such as Action, Fractal, Object, or Social. Together with the corresponding fixation patterns, they constituted the input and … | C |
Since a marking sequence is just a linear arrangement of the symbols of the input word, computing marking sequences seems to be well tailored to greedy algorithms: until all symbols are marked, we choose an unmarked symbol according to some greedy strategy and mark it. Unfortunately, we can formally show that many nat… | We call a marking sequence $\sigma$ for a word $\alpha$ block-extending if every symbol that is marked, except the first one, has at least one block-extending occurrence. This definition leads to the general combinatorial question of whether every word has an optimal marking sequence that is block-ext… | This proposition points out that even simple words can have only optimal marking sequences that are not block-extending. In terms of greedy strategies, however, Proposition 5.4 only shows a lower bound of roughly 2 for the approximation ratio of any greedy algorithm that employs some block-extending greedy strategy (… | These strategies are – except for the $\textsf{LeftRight}$ strategy – nondeterministic, since there are in general several valid choices of the next symbol to mark. However, we will show poor performance for these strategies independent of the nondeterministic choices (i.e., the approximat… | Our strongest positive result about the approximation of the locality number will be derived from the reduction mentioned above (see Section 5.2). However, we shall first investigate in Section 5.1 the approximation performance of several obvious greedy strategies for computing the locality number (with "greedy strategie… | C |
There are also cardiology applications that used CRFs with deep learning as a segmentation refinement step in fundus photography [171, 174] and in LV/RV [143]. Multimodal deep learning [271] can also be used to improve diagnostic outcomes, e.g. the possibility of combining fMRI and ECG data. | The proposed framework uses an FNN and a GRU for handling non-temporal and temporal features respectively, thus learning their shared latent representations for prediction. The results show that deep learning methods consistently outperform the super learner in the majority of the prediction tasks of the MIMIC (predictions … | There are also cardiology applications that used CRFs with deep learning as a segmentation refinement step in fundus photography [171, 174] and in LV/RV [143]. Multimodal deep learning [271] can also be used to improve diagnostic outcomes, e.g. the possibility of combining fMRI and ECG data. | According to the literature, RNNs are widely used on structured cardiology data because they are capable of finding optimal temporal features better than other deep/machine learning methods. On the other hand, applications in this area are relatively few, and this is mainly because there is a small number of public data… | Dedicated databases must be created in order to increase research in this area since, according to the current review, there are only three cardiology databases with multimodal data. In addition to the previous databases, MIMIC-III has also been used for multimodal deep learning by [68] for predicting in-hospital, short/l… | D |
Figure 3: Comparison with Rainbow and PPO. Each bar illustrates the number of interactions with the environment required by Rainbow (left) or PPO (right) to achieve the same score as our method (SimPLe). The red line indicates the 100K-interactions threshold used by our method. | Figure 3: Comparison with Rainbow and PPO. Each bar illustrates the number of interactions with the environment required by Rainbow (left) or PPO (right) to achieve the same score as our method (SimPLe). The red line indicates the 100K-interactions threshold used by our method. | The primary evaluation in our experiments studies the sample efficiency of SimPLe in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, … | In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in the low-data regime of 100k samples, on more than half of the games, our method achieves a score… | We evaluate our method on 26 games selected on the basis of being solvable with existing state-of-the-art model-free deep RL algorithms (specifically, for the final evaluation we selected games which achieved non-random results using our method or the Rainbow algorithm using 100K interactions), which in… | B |
We used Adam [20] as the optimizer with learning rate $lr=0.001$, betas $b_1=0.9$, $b_2=0.999$, and epsilon $\epsilon=10^{-8}$… | For the CNN modules with one and two layers, $x_i$ is converted to an image using learnable parameters instead of some static procedure. The one-layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels). | Architectures of all $b_d$ remained the same, except for the number of output nodes of the last linear layer, which was set to five to correspond to the number of classes of $D$. An example of the respective outputs of some of the $m$… | As shown in Table I, the one-layer CNN DenseNet201 achieved the best accuracy of 85.3% with a training time of 70 seconds/epoch on average. Overall, the one-layer CNN S2I achieved the best accuracies for eleven out of fifteen 'base models'. | The names of the classes are depicted at the right along with the predictions for this example signal. The image between $m$ and $b_d$ depicts the output of the one-layer CNN Signal2Image module, while the 'signal as image' and spectrogram h… | C |
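The one-layer signal-to-image module this row describes (a single 1D convolution, kernel size 3, 8 channels, whose stacked channel outputs form a 2D "image") can be sketched in plain numpy. The random weights stand in for the learnable parameters, and the signal length is an assumption:

```python
import numpy as np

# One-layer "signal-to-image" sketch: convolve a 1D signal with 8
# kernels of size 3 and stack the channel outputs as image rows.
rng = np.random.default_rng(0)
kernels = rng.normal(size=(8, 3))  # stand-ins for learned weights

def signal2image(x, kernels):
    pad = np.pad(x, 1)  # "same" padding keeps the signal length
    rows = [np.convolve(pad, k, mode="valid") for k in kernels]
    return np.stack(rows)  # shape: (channels, len(x))

x = rng.normal(size=100)
img = signal2image(x, kernels)
print(img.shape)  # (8, 100)
```

The resulting (channels x length) array is the 2D input that the downstream 'base model' $b_d$ would consume in place of a static spectrogram or 'signal as image' rendering.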
Hybrid robots typically transition between locomotion modes either by "supervised autonomy" [11], where human operators make the switch decisions, or by an autonomous locomotion-mode transition approach, where robots autonomously swap modes predicated on pre-set criteria [8]. However, the execution of supervised con… | Hybrid robots typically transition between locomotion modes either by "supervised autonomy" [11], where human operators make the switch decisions, or by an autonomous locomotion-mode transition approach, where robots autonomously swap modes predicated on pre-set criteria [8]. However, the execution of supervised con… | A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measure… | The Cricket robot, as referenced in [20], forms the basis of this study, being a fully autonomous track-legged quadruped robot. Its design specificity lies in embodying fully autonomous behaviors, and its locomotion system showcases a unique combination of four rotational joints in each leg, which can be seen in Fig. 3… | There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there is a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and to effectively handle the transitions between them [6]. Second, it is essential to develop decision-making frameworks that … | B |
For paid exchanges at the beginning of the phase, Tog incurs a cost that is less than $m^{2}$. Before serving the last request $\sigma_{\ell}$ of the phase, the access cost of Tog is less … | The worst-case ratio between the costs of Tog and Mtf2 is maximized when the last phase is an ignoring phase. In this case, we have $k$ trusting phases and $k$ ignoring phases. The total cost of Mtf2 is at least $km^{3}+k(\beta m^{3}/2-m^{2})=km^{3}(1+\beta/2-1/m)$… | For a trusting phase, the cost of Tog is in the range $(m^{3},\,m^{3}(1+1/m+1/m^{2}))$… | Similar arguments apply for an ignoring phase, with the exception that the threshold is $\beta\cdot m^{2}$ and there are no paid exchanges performed by Tog. So, we can observe the following. | In an ignoring phase, the cost of Tog for the phase is in the range $(\beta m^{3},\,\beta m^{3}(1+1/m^{2}))$… | C |
This scenario, known as "early risk detection", has gained increasing interest in recent years, with potential applications in rumor detection [Ma et al., 2015, 2016, Kwon et al., 2017], sexual predator detection and aggressive text identification [Escalante et al., 2017], depression detection [Losada et al., 2017, Losa… | Although the use of an MDP is very appealing from a theoretical point of view, and we will consider it for future work, the model they proposed would not be suitable for risk tasks. The use of SVMs along with $\Phi(s)$ implies that the model is a black box, not only hiding the reasons for classif… | As far as we know, the approach presented in [Dulac-Arnold et al., 2011] is the first to address a (sequential) text classification task as a Markov decision process (MDP) with virtually three possible actions: read (the next sentence), classify (in practice, this action is a collection of actions, one for each catego…) | Finally, [Loyola et al., 2018] considers the decision of "when to classify" as a problem to be learned on its own and trains two SVMs, one to make category predictions and the other to decide when to stop reading the stream. Nonetheless, the use of these two SVMs, again, hides the reasons behind both the classificatio… | It is true that more elaborate methods that simultaneously learn the classification model and the policy to stop reading could have been used, such as in [Dulac-Arnold et al., 2011, Yu et al., 2017]. However, for the moment it is clear that this very simple approach is effective enough to outperform the remaining meth… | B |
We find that DGC (Lin et al., 2018) is mainly based on local momentum while GMC is based on global momentum. Hence, each worker in DGC cannot capture global information from its local momentum, while each worker in GMC can capture global information from the global momentum even when sparse communication is … | We find that, due to the momentum factor masking (mfm) in DGC (Lin et al., 2018), DGC (w/ mfm) will degenerate to DSGD rather than DMSGD if sparse communication is not adopted, while GMC will degenerate to DMSGD if sparse communication is not adopted. | We find that both local-momentum and global-momentum implementations of DMSGD are equivalent to serial MSGD if no sparse communication is adopted. However, when sparse communication is adopted, things become different. In the later sections, we will demonstrate that global momentum is better than loca… | We find that DGC (Lin et al., 2018) is mainly based on local momentum while GMC is based on global momentum. Hence, each worker in DGC cannot capture global information from its local momentum, while each worker in GMC can capture global information from the global momentum even when sparse communication is … | process. As for global momentum, the momentum term $-(\mathbf{w}_{t}-\mathbf{w}_{t-1})/\eta$ contains global information from all the workers. Since we are… | A |
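The global momentum term $-(\mathbf{w}_t-\mathbf{w}_{t-1})/\eta$ quoted in this row can be recovered from the parameters alone, which is why every worker sees global information. A minimal single-worker sketch on a toy quadratic, assuming the standard heavy-ball update (the objective and hyperparameters are illustrative):

```python
import numpy as np

# Toy objective f(w) = 0.5*||w||^2, gradient = w. The "global momentum"
# -(w_t - w_{t-1})/eta equals the previous aggregated update direction,
# so no per-worker momentum buffer is needed.
eta, mu = 0.1, 0.9
w_prev = np.array([1.0, -2.0])
w = w_prev - eta * w_prev  # first step: plain SGD

for _ in range(100):
    global_momentum = -(w - w_prev) / eta  # recovered from parameters
    grad = w                               # gradient of 0.5*||w||^2
    w_prev, w = w, w - eta * (grad + mu * global_momentum)

print(np.linalg.norm(w))  # converges toward 0
```

In the distributed setting, `grad` would be each worker's (sparsified) local gradient, while `global_momentum` already reflects all workers' past updates because it is computed from the shared parameters.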
SANs combined with the $\varphi$ metric compress the description of the data in the way a minimum-description-length framework would, by encoding them into $\bm{w}^{(i)}$ and $\bm{\alpha}^{(i)}$… | It is interesting to note that in some cases SANs reconstructions, such as for the Extrema-Pool indices, performed even better than the original data. This suggests the overwhelming presence of redundant information residing in the raw pixels of the original data and further indicates that SANs extract the most rep… | During supervised learning, the weights of the kernels are frozen and a one-layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs. The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs, and categorical cross-entropy as… | $\varphi$ could be seen as an alternative formalization of Occam's razor [38] to Solomonoff's theory of inductive inference [39], but with a deterministic interpretation instead of a probabilistic one. The cost of the description of the data could be seen as proportional to the number of weights and the number o… | From the point of view of sparse dictionary learning, SANs kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for electrocardiogram (ECG) input, the kernels of SANs are ECG beats) and the sparse activation map as the representation. The fact that SANs are wide… | D |
With the rapid commercialization of UAVs, a lot of research has emerged in this field [16]. To efficiently deploy UAVs, studies have been made to find out UAV distribution on network graph [9] and a graphical model has been proposed for channels reuse [17]. The resource allocation of channel and time is also a hot are... |
Typical wireless protocols such as 802.11b/g provide only a limited number of channels for users, which is far from enough for high-quality communication services [18]. To reduce the load on the central system, making use of the distributed resources available in networks turns out to be an ideal solution. Underlay Device-to-Device (D2D) co... |
Catastrophic natural and man-made disasters, such as earthquakes, typhoons, and wars, usually involve great loss of life and property and damage to sites of historical interest over vast areas. Though sometimes unavoidable, the loss of life and property can be effectively reduced if proper disaster management has been implemented. Sinc... | To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR, and coverage size are key points that decide the performance of a UAV network [6]. Respectively, power control determines the energy consumption and signal-to-noise ratio (SNR) ... | To support the communication mission, all UAVs are required to cooperate and support the user communication in need. UAVs work above the post-disaster area $D$. If a user (${\rm User}_{1}$) needs to communicate with another user (${\rm User}_{2}$... | A |
$\ldots{}_{\perp\alpha}\left(\overline{\widehat{\nabla}}\,\overline{T}_{\alpha}\right)\bigr\}=-\bigl\{(\hat{\kappa}_{\parallel\alpha}-\hat{\kappa}_{\perp\alpha})(\widehat{\mathbf{B}}\ldots$ | [m$^{-3}$] is a typical representative number density, and the
thermal diffusion coefficients $\chi_{\parallel\alpha},\,\chi_{\perp\alpha}$ | $=-\left(\left(\kappa_{\parallel\alpha}-\kappa_{\perp\alpha}\right)\nabla_{\parallel}T_{\alpha}+\kappa_{\perp\alpha}\nabla T_{\alpha}\right)$ | $=-\left(\kappa_{\parallel\alpha}\nabla_{\parallel}T_{\alpha}+\kappa_{\perp\alpha}\nabla_{\perp}T_{\alpha}\right)$ | of the order $\kappa_{\parallel\alpha}=n_{0}\chi_{\parallel\alpha}$
and $\kappa_{\perp\alpha}=n_{0}\chi_{\perp\alpha}$... | D |
Let $r$ be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12.
Its abstract lattice $\mathcal{L}_{r}$ is represented to the right. | For convenience we give in Table 7 the list of all possible realities
along with the abstract tuples which will be interpreted as counter-examples to $A\rightarrow B$ or $B\rightarrow A$. | The tuples $t_{1}$, $t_{4}$ represent a counter-example to $BC\rightarrow A$ for $g_{1}$... | If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use
$\leq,\land,\lor$ instead of $\leq_{R},\land_{R},\lor_{R}$ | First, remark that both $A\rightarrow B$ and $B\rightarrow A$ are possible.
Indeed, if we set $g=\langle b,a\rangle$ or $g=\langle a,1\rangle$, then $r\models_{g}A\rightarrow$... | A |
Figure 6 shows the loss metrics of the three algorithms in the CartPole environment, which implies that the Dropout-DQN methods introduce more accurate gradient estimation of policies through iterations of different learning trials than DQN. The rate of convergence of one of the Dropout-DQN methods has done more iterations t... | In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene... | In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e... | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optim... |
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation between the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect of Variance before applying Dropout (DQN) and aft... | B |
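A side illustration of why Dropout can reduce the variance of value estimates (our own toy, not the paper's Q-networks; all names and sizes are invented): Dropout acts like an implicit ensemble, so averaging several stochastic forward passes yields a lower-variance estimate than a single pass.

```python
import numpy as np

# Dropout as an implicit ensemble: averaging K stochastic passes reduces
# the variance of the output estimate roughly by a factor of K.
rng = np.random.default_rng(1)
h = rng.normal(size=64)                 # a fixed hidden activation vector
w = rng.normal(size=64)                 # output weights
p = 0.5                                 # keep probability

def dropout_pass(rng):
    mask = rng.random(64) < p
    return (mask * h) @ w / p           # inverted-dropout scaling

single = np.array([dropout_pass(rng) for _ in range(2000)])
averaged = np.array([np.mean([dropout_pass(rng) for _ in range(10)])
                     for _ in range(2000)])

print(single.var() > averaged.var())    # averaging reduces variance
```

Both estimators are unbiased for the expected (ensemble-averaged) output; only their spread differs.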
The scarcity of richly annotated medical images is limiting supervised deep learning-based solutions to medical image analysis tasks (Perone and Cohen-Adad, 2019), such as localizing discriminatory radiomic disease signatures. Therefore, it is desirable to leverage unsupervised and weakly supervised models. | Chartsias et al. (2017) used a conditional GAN to generate cardiac MR images from CT images. They showed that utilizing the synthetic data increased the segmentation accuracy and that using only the synthetic data led to only a marginal decrease in the segmentation accuracy. Similarly, Zhang et al. (2018c) proposed a G... | Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving almost similar performance to full supervision for segmentation of cardiac images. Afshari et al. (2019) used a fully convol... | Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic... | Vorontsov et al. (2019), using a dataset defined in Cohen et al. (2018), proposed an image-to-image based framework to transform an input image with object of interest (presence domain) like a tumor to an image without the tumor (absence domain) i.e. translate diseased image to healthy; next, their model learns to add ... | B |
In the supplementary material we report numerical differences in the size of the cut obtained on random graphs when using $\mathbf{v}_{\text{max}}$ or $\mathbf{v}^{s}_{\text{max}}$... | Since computing the optimal MAXCUT solution is NP-hard, it is generally not possible to evaluate the quality of the cut found by the proposed spectral method (Sect. III-A) in terms of discrepancy from the MAXCUT.
Therefore, to assess the quality of a solution we consider the following bounds | The results show that on the two regular graphs, which are bipartite, the cut obtained with the spectral algorithm coincides with the MAXCUT upper bound and, therefore, also with the optimal solution.
For every other graph, the cut yielded by the spectral algorithm is always larger than the random cut. | The results show that on the two regular graphs, which are bipartite, the cut obtained with the spectral algorithm coincides with the MAXCUT upper bound and, therefore, also with the optimal solution.
For every other graph, the cut yielded by the spectral algorithm is always larger than the random cut. | The examples encompass the two extreme cases where the MAXCUT solution is known: a bipartite graph where MAXCUT is 1 and the complete graph where MAXCUT is 0.5.
In every example, when $\lambda^{s}_{\text{max}}$... | A |
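A minimal sketch of a spectral cut of the kind discussed above (our own toy using the standard Laplacian-eigenvector heuristic, not the paper's Sect. III-A implementation): partition vertices by the sign of the eigenvector for the largest Laplacian eigenvalue; on a bipartite graph such as $K_{3,3}$ this recovers the optimal cut containing all edges.

```python
import numpy as np

# Spectral MAXCUT heuristic: split vertices by the sign of the Laplacian
# eigenvector with the largest eigenvalue, then count cut edges.
def spectral_cut(A):
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)
    side = vecs[:, -1] >= 0            # top eigenvector, sign partition
    n = len(A)
    cut = sum(A[i, j] for i in range(n) for j in range(i + 1, n)
              if side[i] != side[j])
    return cut, side

# Complete bipartite graph K_{3,3}: the optimal cut contains all 9 edges.
A = np.zeros((6, 6))
A[:3, 3:] = 1
A = A + A.T
cut, _ = spectral_cut(A)
print(cut)  # 9.0
```

For $K_{3,3}$ the top Laplacian eigenvalue (6) is simple and its eigenvector is the $\pm 1$ bipartition indicator, so the sign threshold is exact here.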
The results are shown in Figure 3 exemplarily for the Car, Covertype, and Wisconsin Breast Cancer (Original) dataset. The other datasets show similar characteristics. The overall evaluation on all datasets is presented in the next section.
The number of training examples per class is shown in parentheses and increases ... | Sethi (1990) presents a mapping of decision trees to two-hidden-layer neural networks.
In the first hidden layer, the number of neurons equals the number of split nodes in the decision tree. Each of these neurons implements the decision function of the split nodes and determines the routing to the left or right child n... | For each setting, the test accuracy of the random forest is indicated by a red dashed line.
The average test accuracy and standard deviation depending on the network architecture, i.e., the number of neurons in the first and second hidden layer, are plotted for different architectures. | First, we analyze the performance of state-of-the-art methods for mapping random forests into neural networks and neural random forest imitation. The results are shown in Figure 4 for different numbers of training examples per class.
For each method, the average number of parameters of the generated networks across all... | NRFI with and without the original data is shown for different network architectures. The smallest architecture has 2 neurons in both hidden layers and the largest 128. For NRFI (gen-ori), we can see that a network with 16 neurons in both hidden layers (NN-16-16) is already sufficient to learn the dec... | B |
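The Sethi-style tree-to-network mapping mentioned in this row can be sketched as follows. This is a hand-built toy (thresholds, features, and step activations are invented, not the NRFI code): layer 1 has one neuron per split node, layer 2 has one neuron per leaf that ANDs the split decisions along its path.

```python
import numpy as np

# Map a small decision tree to a two-hidden-layer network of step units.
step = lambda z: (z > 0).astype(float)

# Decision tree on x = (x0, x1):
#   if x0 > 0.3: class 1
#   elif x1 > 0.6: class 1
#   else: class 0
def tree(x):
    return 1 if x[0] > 0.3 else (1 if x[1] > 0.6 else 0)

def tree_as_nn(x):
    # layer 1: one neuron per split node, s0 = [x0 > 0.3], s1 = [x1 > 0.6]
    s = step(np.array([x[0] - 0.3, x[1] - 0.6]))
    # layer 2: one neuron per leaf; fires iff its path conditions all hold
    leaf1 = step(s[0] - 0.5)                     # path: s0 true
    leaf2 = step((1 - s[0]) + s[1] - 1.5)        # path: s0 false AND s1 true
    leaf3 = step((1 - s[0]) + (1 - s[1]) - 1.5)  # path: s0 false AND s1 false
    # output: leaf activations weighted by each leaf's class label
    return int(1 * leaf1 + 1 * leaf2 + 0 * leaf3)

pts = [(0.9, 0.1), (0.1, 0.9), (0.1, 0.1), (0.5, 0.7)]
print(all(tree(p) == tree_as_nn(p) for p in pts))  # True
```

Exactly one leaf neuron fires per input, so the output layer reads off the leaf's class.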
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt... | for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al.... |
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... |
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient... | D |
A variant of the VGG architecture is used on the CIFAR-10 task for evaluation because FINN does not yet support residual connections, and the FINN framework is configured to target the highest throughput with respect to the available resources of the device (BRAM, LUTs, etc.).
| The WRN model on the CIFAR-10 task is used again as a baseline, with a depth of 28 layers, varying widths of the model, and weights/activations quantized to different bit widths.
Figure 5 reports test accuracies and throughput for different WRN variants and compression methods. | Quantized DNNs with 1-bit weights and activations are the worst performing models, which is due to the severe implications of extreme quantization on prediction performance.
As can be seen, however, the overall performance of the quantized models increases considerably when the bit width of activations is increased to ... | As expected, the test accuracy increases gradually with high bit widths while the throughput decreases accordingly.
Following the Pareto front starting from the bottom right indicates that the best performing models use a combination of 1 bit for the weights and a gradual increase of activations up to 3 bits. | Afterwards the models perform best if the weights are scaled to 2 bits and the activation bit width is further increased to 4 bits.
This supports the observation of the previous sections, showing that model accuracy is sensitive to activation quantization rather than weight quantization. | C |
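The accuracy/bit-width trade-off discussed above can be illustrated with a generic uniform quantizer (our simplification for illustration; FINN's actual weight/activation quantization schemes differ): the maximum reconstruction error shrinks as the bit width grows.

```python
import numpy as np

# k-bit uniform quantization over a fixed range: more bits, smaller error.
def quantize(x, bits, x_min=-1.0, x_max=1.0):
    levels = 2 ** bits - 1
    x = np.clip(x, x_min, x_max)
    q = np.round((x - x_min) / (x_max - x_min) * levels)
    return q / levels * (x_max - x_min) + x_min

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=10_000)
err = [np.abs(quantize(x, b) - x).max() for b in (1, 2, 4, 8)]
print(err[0] > err[1] > err[2] > err[3])  # error shrinks with bit width
```

The worst-case error is half a quantization step, i.e. it halves roughly with every extra bit, which is why accuracy saturates quickly beyond a few bits.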
Let $M$ be an $n$-dimensional metric manifold. Then, note that we have $\mathrm{FillRad}_{n}(M,G,[M])=\mathrm{FillRad}(M)$... | A priori, one can define the generalized filling radius for any metric space $X$. However, we believe that the context of ANR metric spaces is the right level of generalization for our purposes because of the following proposition analogous to Proposition 1.
| Let $(X,E)$ be a metric pair where $X$ is a compact ANR metric space. For any integer $k\geq 1$, any abelian group $G$, and any $\omega\in\mathrm{H}_{k}(X;G)$... |
The goal of this section is to provide some partial results regarding the structure of $\mathrm{barc}^{\mathrm{VR}}_{\ast}(\cdot)$ for non-smooth spaces; see Figure 12. In ord... |
In this section, we recall the notions of spread and filling radius, as well as their relationship. In particular, we prove a number of statements about the filling radius of a closed connected manifold. Moreover, we consider a generalization of the filling radius and also define a strong notion of filling radius whic... | A |
Overall Accuracy
We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q... | C1: Remaining Cost
Looking at the main view (Figure 7(c), ①), we detect an area on the top of cluster C1 with slightly increased size for a few points (in comparison to the other points in the same cluster), which means there are high values of remaining cost in this small area.
The next step in our analysis is to confirm if the layout of the points accurately represents the original N-D densities of the clusters. By inspecting the distribution of colors over the points in the main view (Figure 7(c)), we can see that each cluster has a different density profile: C1 presents the... |
Figure 7: Use case based on the Pima Indian Diabetes data set. Although there are three separate clusters C1–C3, the class labels are mostly mixed (a), and the Shepard Heatmap (b) indicates that smaller N-D distances are spread out in 2-D. Some insights about the clusters (c): C1 has a small area with high remaining c... | The second option of the Visual Mapping panel, the Remaining Cost, indicates (in the points' sizes, by default) the final value of $KLD(P_{i}\|Q_{i})$... | A |
Taking into account all the reviewed papers, we group the proposals therein in a hierarchy of categories. In the hierarchy, not all proposals of a category must fit in one of its subcategories. In our classification, categories lying at the same level are disjoint sets, which means that each proposed algorithm can be ... |
It has not been until relatively recent times that the community has embraced the need for arranging the myriad of existing bio-inspired algorithms and classifying them under principled, coherent criteria. In 2013, [74] presented a classification of meta-heuristic algorithms as per their biological inspiration that di... |
Figure 2 depicts the classification we have reached, indicating, for the 518 reviewed algorithms, the number and ratio of proposals classified in each category and subcategory. It can be observed that the largest group of all is the Swarm Intelligence category (more than half of the proposals, 53%), inspired by the Swarm...
The above statement is quantitatively supported by Figure 1, which depicts the increasing number of papers/book chapters published in the last years with bio-inspired optimization and nature-inspired optimization in their title, abstract and/or keywords. We have considered both bio-inspired and nature-inspired optimiz... | C |
To study the impact of different parts of the loss in Eq. (12), the performance with different $\lambda$ is reported in Figure 4.
From it, we find that the second term (corresponding to problem (7)) plays an important role, especially on UMIST. If $\lambda$ is set as a large value, we may get the trivi... | (1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec... | To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, $T$, is set as 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo... | It should be emphasized that a large $k_{0}$ frequently leads to capturing the wrong information.
After the transformation of GAE, the nearest neighbors are more likely to belong to the same cluster |
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update ... | C |
Although the agents provide the optimal setup for testing filtering, with control over the packets that can be crafted and sent from both sides, as we explain in the Related Work (Section 2), this approach is limited to networks that deploy agents. In contrast, SMap provides better coverage since it i... |
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, we call the Spoofing Mapper (SMap). We apply SMap for scanning ingress-filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that ... | Since the Open Resolver and the Spoofer Projects are the only two infrastructures providing vantage points for measuring spoofing - their importance is immense as they facilitated many research works analysing the spoofability of networks based on the datasets collected by these infrastructures. Nevertheless, the studi... |
These findings show that SMap offers benefits over the existing methods, providing better coverage of the ASes in the Internet and not requiring agents or conditions for obtaining traceroute loops, hence improving visibility of networks not enforcing ingress filtering. | Agents Active Measurements. Agents with active probes found 608 ASes that were found not to be enforcing ingress filtering using the agents approach of the Spoofer Project (these include duplicates with the traceroute loops measurements). Those contain some of the duplicates from traceroute measurements: together both ... | C |
This paper also presents the NN ensemble created in the same way as with SVMs. In the NN ensemble, $T-1$ skill networks are trained using one batch each for training. Each model is assigned a weight $\beta_{i}$ equal to its accuracy on... |
Second, skill NN and context+skill NN models were compared. The context-based network extracts features from preceding batches in sequence in order to model how the sensors drift over time. When added to the feedforward NN representation, such contextual information resulted in improved ability to compensate for senso... | The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer.... | Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data a... | This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ... | B |
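The accuracy-weighted voting described for the SVM/NN ensembles can be sketched generically as follows (the predictions and $\beta$ values below are made up for illustration; this is not the paper's code):

```python
import numpy as np

# Weighted-vote ensemble: each model votes with weight beta_i (its accuracy).
def weighted_vote(predictions, betas, n_classes=2):
    # predictions: shape (n_models, n_samples) of predicted class labels
    scores = np.zeros((n_classes, predictions.shape[1]))
    for preds, beta in zip(predictions, betas):
        for cls in range(n_classes):
            scores[cls] += beta * (preds == cls)
    return scores.argmax(axis=0)

preds = np.array([[0, 1, 1, 0],     # model 1
                  [0, 0, 1, 1],     # model 2
                  [1, 1, 1, 0]])    # model 3
betas = np.array([0.9, 0.6, 0.8])  # per-model accuracies used as weights
print(weighted_vote(preds, betas))  # [0 1 1 0]
```

More accurate models thus dominate ties: on the first and last samples, models 1 and 2 (combined weight 1.5 and 1.7) outvote model 3.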
$P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.} | $A^{(2)}[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}^{(2)}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M\}$. |
$A[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M\}$. |
$A[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M\}$. | $A^{(1)}[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}^{(1)}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$... | A |
While we define the congruence over $Q^{*}$, we are only interested in the generated semigroup and let $\Sigma(\mathcal{A})=Q^{+}/{=_{\mathcal{A}}}$... | Let $S$ be a (completely) self-similar semigroup and let $T$ be a finite or free semigroup. Then $S\star T$ is (completely) self-similar. If furthermore $S$ is a (complete) automaton semigroup, then so is $S\star T$.
| A semigroup arising in this way is called self-similar. Furthermore, if the generating automaton is finite, it is an automaton semigroup.
If the generating automaton is additionally complete, we speak of a completely self-similar semigroup or of a complete automaton semigroup. | from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups\footnote{Note that the c...} |
Let $S$ be a (completely) self-similar semigroup. Then $S\star t^{+}$ is (completely) self-similar. Furthermore, if $S$ is a (complete) automaton semigroup, then so is $S\star t^{+}$... | B |
As shown in Table 1, we present results when this loss is used on: a) a fixed subset covering 1% of the dataset, b) a varying subset covering 1% of the dataset, where a new random subset is sampled every epoch, and c) 100% of the dataset. Confirming our hypothesis, all varian... | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... |
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the p... | It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in ... | While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented... | C |
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus PrivaSeer Corpus on privacy policies ... |
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)... |
We created the PrivaSeer Corpus which is the first large scale corpus of contemporary website privacy policies and consists of just over 1 million documents. We designed a novel pipeline to build the corpus, which included web crawling, language detection, document classification, duplicate removal, document cross ver... | Duplicate and Near-Duplicate Detection. Examination of the corpus revealed that it contained many duplicate and near-duplicate documents. We removed exact duplicates by hashing all the raw documents and discarding multiple copies of exact hashes. Through manual inspection, we found that a number of privacy policies fro... |
To remove near-duplicates from within the same domain we used Simhashing (Charikar, 2002). Simhashing is a hashing technique in which similar inputs produce similar hashes. After creating shingles (Broder et al., 1997) of size three, we created 64-bit document Simhashes and measured document similarity by calculating ... | C |
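The Simhash near-duplicate step can be sketched as follows (a generic 64-bit Simhash over character 3-shingles; the hash function choice and example text are ours, not PrivaSeer's implementation):

```python
import hashlib

# 64-bit Simhash: per-shingle hashes vote on each bit; similar documents
# produce fingerprints with small Hamming distance.
def simhash(text, bits=64):
    shingles = [text[i:i + 3] for i in range(max(1, len(text) - 2))]
    counts = [0] * bits
    for sh in shingles:
        h = int.from_bytes(hashlib.md5(sh.encode()).digest()[:8], "big")
        for b in range(bits):
            counts[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(bits) if counts[b] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

doc = "We collect and retain your personal data as described below."
assert hamming(simhash(doc), simhash(doc)) == 0   # exact copies collide
near = simhash(doc.replace("below", "herein"))
print(0 <= hamming(simhash(doc), near) <= 64)
```

Exact duplicates always get identical fingerprints; near-duplicates are then flagged by thresholding the Hamming distance.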
G2: Support exploration. VA systems enable users to reach crucial findings and to take actions according to them. This iterative process requires a human-in-the-loop who can thus explore the data and the model through the interactive visualization [1]. | Predictions’ Space.
The goal of the predictions’ space visualization (StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(f)) is to show an overview of the performance of all models of the current stack for different instances. | and (v) we track the history of the previously stored stacking ensembles in StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(b) and compare their performances against the active stacking ensemble—the one not yet stored in the history—in StackGenVis: Alignme... | As the solution space for ensemble learning is more confusing compared to single ML techniques, keeping track of the history of events and providing provenance for exploring and backtracking of alternative paths is necessary to reach this goal.
Furthermore, provenance in VA for ensemble learning increases the interpret... | There is a large solution space of different learning methods and concrete models which can be combined in a stack. Hence, the identification and selection of particular algorithms and instantiations over the time of exploration is crucial for the user. One way to manage this is to keep track of the history of each... | C |
We thus have 3 cases, depending on the value of the tuple
$(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))$ | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | $p(v,[013])=p(v,[313])=p(v,[113])=1$.
Similarly, when $f=[112]$, | $\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}$. | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | D |
To answer RQ1, we compare the changing trends of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation.
Although in the early training stage the performance improves, benefiting from the pre-trained general language model, if the languag...
In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... | In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
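The metrics above can be sketched in toy form. This is illustrative only, not the paper's evaluation code: `bleu1` is a simplified unigram BLEU with a brevity penalty (real BLEU per Papineni et al., 2002 combines n-grams up to 4 with geometric averaging), and `perplexity` assumes per-token log-probabilities are available.

```python
import math
from collections import Counter

def perplexity(token_logprobs):
    # PPL = exp(-average log-probability per token); lower is better.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def bleu1(reference, hypothesis):
    # Clipped unigram precision times a brevity penalty -- a toy BLEU-1.
    ref, hyp = Counter(reference), Counter(hypothesis)
    overlap = sum(min(c, ref[w]) for w, c in hyp.items())
    precision = overlap / max(len(hypothesis), 1)
    if len(hypothesis) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(hypothesis), 1))
    return bp * precision
```

A perfect hypothesis scores 1.0, and four tokens each with probability 0.5 give a perplexity of exactly 2.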
The specialized codebook design of the DRE-covered CCA for multi-UAV mobile mmWave communications. Under the guidance of the proposed framework, a novel hierarchical codebook is designed to encompass both the subarray patterns and beam patterns. The newly proposed CCA codebook can fully exploit the potential of the DR... | The CCA codebook based SPAS algorithm was proposed in the previous section to solve the joint CCA subarray partition and AWV selection problem. In this section, the TE-aware beam tracking problem is addressed based on the CCA codebook based SPAS algorithm.
Tracking the AOAs and AODs is essential for beam tracking, which... |
The CCA codebook-based multi-UAV beam tracking scheme with TE awareness. Based on the designed codebook, by exploiting the Gaussian process (GP) tool, both the position and attitude of UAVs can be fast tracked for fast multiuser beam tracking along with dynamic TE estimation. Moreover, the estimated TE is leveraged to... | A conceptual frame structure is designed which contains two types of time slots. One is the exchanging slot (e-slot) and the other is the tracking slot (t-slot). Let us first focus on the e-slot. It is assumed that UAVs exchange MSI every T𝑇Titalic_T t-slots, i.e., in an e-slot, to save resource for payload transmissi... |
Note that several mobile mmWave beam tracking schemes have recently been proposed that exploit position or motion state information (MSI) based on conventional ULA/UPA arrays. For example, beam tracking can be achieved by directly predicting the AOD/AOA through improved Kalman filtering [26]; however, the work of [26] only targe...
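As a hedged illustration of the Kalman-filtering idea referenced above, here is a minimal constant-velocity filter tracking a scalar angle (e.g., an AOA). The state model, noise levels, and function name are our assumptions, not the scheme of [26].

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-4, r=0.01):
    # State: [angle, angular_rate]; constant-velocity motion model.
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe the angle only
    Q = q * np.eye(2)                       # process noise covariance (assumed)
    R = np.array([[r]])                     # measurement noise covariance (assumed)
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # Predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step with the new angle measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```

On a noiseless, linearly drifting angle the filter locks onto the trajectory after a short transient.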
The case of $1$-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | After the merging, the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$.
We perform the... | To conclude this section, we stress that although the $1$-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges.
The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from | D |
Related Work. When the value function approximator is linear, the convergence of TD is extensively studied in both continuous-time (Jaakkola et al., 1994; Tsitsiklis and Van Roy, 1997; Borkar and Meyn, 2000; Kushner and Yin, 2003; Borkar, 2009) and discrete-time (Bhandari et al., 2018; Lakshminarayanan and | Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... |
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et... | B |
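A minimal sketch of the linear-approximation setting whose convergence the cited works analyze: TD(0) with a linear value approximator (one-hot features, hence coinciding with tabular TD) on the classic 5-state random walk. All names and constants here are ours, not from any cited paper.

```python
import random

def td0_linear(num_episodes=10000, alpha=0.05, gamma=1.0, seed=0):
    # TD(0) on a 5-state random walk: terminate left with reward 0,
    # right with reward 1. One weight per one-hot feature.
    rng = random.Random(seed)
    n = 5
    w = [0.0] * n
    for _ in range(num_episodes):
        s = 2                              # start in the middle state
        while True:
            s2 = s + rng.choice((-1, 1))
            if s2 < 0:                     # left terminal
                target = 0.0
            elif s2 >= n:                  # right terminal, reward 1
                target = 1.0
            else:
                target = gamma * w[s2]     # bootstrapped TD target
            w[s] += alpha * (target - w[s])
            if s2 < 0 or s2 >= n:
                break
            s = s2
    return w
```

The true values are $(i+1)/6$ for state $i$, and the TD iterates settle near them.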
The encoder layer with the depth-wise LSTM unit, as shown in Figure 2, first performs the self-attention computation, then the depth-wise LSTM unit takes the self-attention results and the output and the cell state of the previous layer to compute the output and the cell state of the current layer.
| We also study the merging operations, concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers, to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4.
|
Different from encoder layers, decoder layers involve two multi-head attention sub-layers: a masked self-attention sub-layer to attend the decoding history and a cross-attention sub-layer to attend information from the source side. Given that the depth-wise LSTM unit only takes one input, we introduce a merging layer ... |
Another way to take care of the outputs of these two sub-layers in the decoder layer is to replace their residual connections with two depth-wise LSTM sub-layers, as shown in Figure 3 (b). This leads to better performance (as shown in Table 4), but at the costs of more parameters and decoder depth in terms of sub-laye... | Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t... | B |
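The depth-wise LSTM computation described above can be sketched as follows: one NumPy LSTM step that takes the merged attention sub-layer outputs as input and the lower layer's hidden/cell state, with element-wise addition as the merge (one of the variants compared in Table 4). Shapes and parameterization are our assumptions, not the paper's implementation.

```python
import numpy as np

def lstm_cell(x, h_prev, c_prev, W, U, b):
    # A single LSTM step used "depth-wise": x is the (merged) sub-layer
    # output of the current layer; (h_prev, c_prev) come from the layer below.
    z = W @ x + U @ h_prev + b                     # (4d,) pre-activations
    d = h_prev.shape[0]
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * d:(k + 1) * d])) for k in range(3))
    g = np.tanh(z[3 * d:])
    c = f * c_prev + i * g                         # new cell state
    h = o * np.tanh(c)                             # new layer output
    return h, c

def decoder_layer_sketch(self_out, cross_out, h_prev, c_prev, params):
    # Merge the masked self-attention and cross-attention outputs by
    # element-wise addition, then apply the depth-wise LSTM unit.
    merged = self_out + cross_out
    return lstm_cell(merged, h_prev, c_prev, *params)
```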
which strictly contains $V_{1}\times V_{2}$,
but is still included in $f^{-1}(U)$, which is in contradictio... | compact in $X_{1}\times X_{2}$.
We are going to prove that $f^{-1}(U)$ is actually | First of all,
because $f^{-1}(U)$ is open in $(X_{1},\uptau_{1})\times(X_{2},\uptau_{2})$... | $\equiv$-saturated sets in $X_{1}\times X_{2}$.
Hence, writing $f^{-1}(U)$ as the finite union of maximal | compact open set of $X_{1}\times X_{2}$ is a finite union of sets of
the form $K\times X_{2}$ and $X_{1}\times K$...
(2) For each backbone network, the layer depths of VGG16, InceptionV3, and ResNet50 are 23, 159, and 168, respectively. These architectures represent different abilities to extract image features. As illustrated in Fig. 6, the distortion parameter estimation achieves the lowest error (0.15) using InceptionV3 as... | (3) From the loss curves in Fig. 7, the ordinal distortion estimation achieves the fastest convergence and the best performance on the validation dataset. It is also worth noting that the ordinal distortion estimation already performs well on the validation set in the first twenty epochs, which verifies that this learning rep... | Figure 7: Analysis of two learning representations in terms of the training and validation loss curves. We show the learning performance of the distortion parameter estimation without (top) and with (middle) the normalization of magnitude, and the ordinal distortion estimation (bottom). Our proposed ordinal distortion e... | (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
To exhibit the performance fairly, we employ three common network architectures VGG16, ResNet50, and InceptionV3 as the backbone networks of the learning model. The proposed MDLD metric is used to express the distortion estimation error due to its unique and fair measurement for distortion distribution. To be specific... | A |
Furthermore, researchers in [19] argued that the extrapolation technique is suitable for large-batch training and proposed EXTRAP-SGD.
However, experimental implementations of these methods still require additional training tricks, such as warm-up, which may make the results inconsistent with the theory. | We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD.
The experiments are implemented based on the DeepCTR framework (https://github.com/shenweichen/DeepCTR-Torch). | SGD and its variants are iterative methods. In the $t$-th iteration, these methods randomly
choose a subset (also called a mini-batch) $\mathcal{I}_{t}\subset\{1,2,\ldots,n\}$ and compute the
stochastic normalized gradient descent with momentum (SNGM), for large-batch tra... | If we avoid these tricks, these methods may suffer from severe performance degradation.
For LARS and its variants, the layer-wise update strategy was proposed primarily on the basis of empirical observations; its rationale and necessity remain unclear from an optimization perspective.
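A sketch of the normalized-gradient-with-momentum idea behind SNGM, under the assumption that the momentum buffer accumulates L2-normalized stochastic gradients so that the update size is bounded regardless of the gradient scale; the paper's exact update may place the normalization differently.

```python
import numpy as np

def sngm_step(w, m, grad, lr=0.01, beta=0.9, eps=1e-12):
    # Accumulate the *normalized* stochastic gradient in the momentum buffer,
    # then take a step. ||m|| <= 1/(1-beta), so each step is bounded by
    # lr/(1-beta) -- the robustness-to-scale property relevant for
    # large-batch training.
    m = beta * m + grad / (np.linalg.norm(grad) + eps)
    w = w - lr * m
    return w, m
```

On a simple quadratic the iterates are driven into a small neighborhood of the minimizer.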
In Two-Stage Stochastic Multi-knapsack Supplier, or 2S-MuSup for short, there are $L$ additional knapsack constraints on $F_{I}$. Specifically, we are given budgets $W_{\ell}\geq 0$...
We define a strategy $s$ to be a $(|\mathcal{D}|+1)$-tuple of facility sets $(F^{s}_{I},F^{s}_{A})$... | Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and is of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ... | The most general way to represent the scenario distribution $\mathcal{D}$ is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios $A$ according to $\mathcal{D}$. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the ... | Our main goal is to develop algorithms for the black-box setting. As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms ...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition.
The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi... | (Lemma 3.1).
To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (... |
III. The co-existence of random graphs, subgradient measurement noises, and additive and multiplicative communication noises is considered. Compared with the case with only a single random factor, the coupling terms of different random factors inevitably affect the mean square difference between optimizers’ states and an... | As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error, which leads the nonnegative supermartingale converg...
Although the generalization for $k$-anonymity provides enough protection for identities, it is vulnerable to attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by mat... | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i... | However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and minimum QI values in the equivalence groups, so the equivalence groups only preserve... | Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records to the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ...
For instance, suppose that we add another QI attribute, gender, as shown in Figure 4. The mutual cover strategy first divides the records into groups in which the records in the same group cover for each other by perturbing their QI values. Then, the mutual cover strategy calculates a random output table on each QI a...
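As a small companion to the $k$-anonymity discussion, here is a direct check of the definition (every quasi-identifier combination must be shared by at least $k$ records). The function name and table layout are illustrative, not from the paper.

```python
from collections import Counter

def is_k_anonymous(rows, qi_indices, k):
    # Group records by their projection onto the QI attributes and require
    # every equivalence group to contain at least k records.
    groups = Counter(tuple(row[i] for i in qi_indices) for row in rows)
    return all(count >= k for count in groups.values())
```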
We implement PointRend using MMDetection Chen et al. (2019b) and adopt the modifications and tricks mentioned in Section 3.3. Both X101-64x4d and Res2Net101 Gao et al. (2019) are used as our backbones, pretrained on ImageNet only. SGD with momentum 0.9 and weight decay 1e-4 is adopted. The initial learning rate is set... | Table 3: PointRend’s performance on testing set (trackB). “EnrichFeat” means enhance the feature representation of coarse mask head and point head by increasing the number of fully-connected layers or its hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvements, we guess... | As shown in Table 3, all PointRend models achieve promising performance. Even without ensemble, our PointRend baseline, which yields 77.38 mAP, has already achieved 1st place on the test leaderboard. Note that several attempts, like BFP Pang et al. (2019) and EnrichFeat, give no improvements against PointRend baseline,... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.... | B |
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... |
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info...
In version 1 of this note, which can still be found on the arXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s...
| D |
For any algorithm, the dynamic regret is at least $\Omega(B^{1/3}d^{5/6}HT^{2/3})$... | The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst... | Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a... | D |
Table 4 presents the results of conventional entity alignment. decentRL achieves state-of-the-art performance, surpassing all others in Hits@1 and MRR. AliNet [39], a hybrid method combining GCN and GAT, performs better than the methods solely based on GAT or GCN on many metrics. Nonetheless, across most metrics and da... | Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg... | GNN-based methods [13, 37, 38, 39, 40, 41, 42] introduce relation-specific composition operations to combine neighbors and their corresponding relations before performing neighborhood aggregation. They usually leverage existing GNN models, such as GCN and GAT [43, 44], to aggregate an entity’s neighbors. It is worth no... |
Although GCN and GAT are generally regarded as inductive models for graph representation learning, our analysis in previous sections suggests their limited applicability on relational KG embedding. In further validation of this, we compare the performance of decentRL with AliNet and GAT on datasets containing new enti... | In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct compr... | C |
Figure 5: Result of VDM in ‘Noisy-Mnist’. (a) When we input an image of digit ‘0’, we sample 10 latent variables $\{\mathbf{z}_{1},\ldots,\mathbf{z}_{10}\}$ and generate... | Figure 4: Result of the probabilistic-ensemble dynamic model in ‘Noisy-Mnist’. (a) When we input an image of the digit ‘0’, three images are generated from different models. The different models all generate the correct prediction of the image class but lack diversity in writing styles. (b) When we input an image of the d... | We analyze the possible reasons in the following. (i) The probabilistic-ensemble model proposed in [48] is used in continuous control tasks, where the state is low-dimensional and unstructured. However, Noisy-Mnist has high-dimensional image-based observations. The probabilistic ensemble may not be suitable for this setti...
The ensemble-based baseline contains three individual encoder-decoder networks. As shown in Fig. 4, three images are generated from each model with the same input. We do not average the outputs of the three models. In (a), we use the image of digit ‘0’ as the input and generate a prediction from each network in the en... | As an example, we model the transition dynamics in the MDP of ‘Noisy-Mnist’ in Fig. 2. We first use an ensemble-based model that contains three individual encoder-decoder networks as a baseline. According to recent research in model-based RL [48], the ensemble model with probabilistic neural networks achieves the state-o...
If we were to add nodes to make the grid symmetric or tensorial, then
the number of nodes of the resulting (sparse) tensorial grid would scale exponentially, $\mathcal{O}(n^{m})$, with the space dimension $m\in\mathbb{N}$... | for a given polynomial space $\Pi$ and a set of nodes $P\subseteq\mathbb{R}^{m}$ that is not unisolvent with respect to $\Pi$,
find a maximum subset $P_{0}\subseteq P$... | Here, we answer Questions 1–2.
To do so, we generalize the notion of unisolvent nodes $P_{A}$, $A\subseteq\mathbb{N}^{m}$, to non-tensorial grids. This allows us...
We realize the algorithm of Carl de Boor and Amos Ron [28, 29] in terms of Corollary 6.5 in the case of the torus $M=\mathbb{T}^{2}_{R,r}$. That is, we consider | We complement the established notion of unisolvent nodes by the dual notion of unisolvence. That is: for given arbitrary nodes $P$, determine the polynomial space $\Pi$ such that
$P$ is unisolvent with respect to $\Pi$. In doing so, we revisit earlier results by Carl de Boor and Amos Ron...
$|\mathrm{IPM}(\mu,\nu)-\mathrm{IPM}(\hat{\mu}_{n},\hat{\nu}_{m})| < \epsilon + 2\,[\mathfrak{R}_{n}(\mathcal{F},\mu)+\mathfrak{R}$... | A two-sample test is designed based on this theoretical result, and numerical experiments show that this test outperforms the existing benchmark.
In future work, we will study tighter performance guarantees for the projected Wasserstein distance and develop the optimal choice of $k$ to improve the performance ... | The finite-sample convergence of general IPMs between two empirical distributions was established.
Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance has a minor dependence on the dimension of target distributions, which alleviates the curse of dimensionality. | The proof of Proposition 1 essentially follows the one-sample generalization bound mentioned in [41, Theorem 3.1].
However, by following a proof procedure similar to that discussed in [20], we can improve this two-sample finite-sample convergence result when extra assumptions hold; existing works on IPMs have not inves...
Omitted proofs can be found in Appendix A. | C |
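A toy in the spirit of the projected Wasserstein distance: the paper optimizes over $k$-dimensional projections, while this sketch merely maximizes the 1-D $W_1$ distance over random directions for equal-size samples. Names and constants are ours.

```python
import numpy as np

def sliced_w1(X, Y, n_proj=50, seed=0):
    # Project both samples onto random unit directions and take the largest
    # 1-D W1 distance. For equal-size sorted samples, 1-D W1 is the mean
    # absolute difference of order statistics.
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_proj):
        d = rng.normal(size=X.shape[1])
        d /= np.linalg.norm(d)
        x, y = np.sort(X @ d), np.sort(Y @ d)
        best = max(best, float(np.mean(np.abs(x - y))))
    return best
```

Identical samples give distance 0, and a translation by a vector $v$ gives roughly $\lVert v\rVert$ once some direction aligns with $v$.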
VAE-type DGMs use amortized variational inference to learn an approximate posterior qϕ(H|x)subscript𝑞italic-ϕconditional𝐻𝑥q_{\phi}(H|x)italic_q start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ( italic_H | italic_x ) by maximizing an evidence lowerbound (ELBO) to the log-marginal likelihood of the data under the mod... | Deep generative models (DGMs) such as variational autoencoders (VAEs) [dayan1995helmholtz, vae, rezende2014stochastic] and generative adversarial networks (GANs) [gan] have enjoyed great success at modeling high dimensional data such as natural images. As the name suggests, DGMs leverage deep learning to model a data g... | Amortization of the inference is achieved by parameterising the variational posterior with another deep neural network (called the encoder or the inference network) that outputs the variational posterior parameters as a function of X𝑋Xitalic_X. Thus, after jointly training the encoder and decoder, a VAE model can perf... | Specifically, we apply a DGM to learn the nuisance variables Z𝑍Zitalic_Z, conditioned on the output image of the first part, and use Z𝑍Zitalic_Z in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the details information captured in Z𝑍Z... |
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (footnote 1: In this exposition we use unsupervised-trained VAEs as our base models, but the framework also works with GAN-based or flow-based DGMs, supervise... | B
The graph described in Fig. 4 is an implementation of an XOR gate combining NAND and OR, expressed with 33 vertices and 46 main lines. Graphs use red and blue numbers to distinguish main lines with no direction (main lines that can be traversed in both directions) from directed main lines (the ma... | We examine the inputs through 18 test cases to see whether the circuit is acceptable. Next, DFS verifies that the output is possible for the actual pin-connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
DFS (depth-first search) verifies that the output is possible for the actual pin-connection state. As described above, the output is determined by the 3-pin input, so we enter 1 by connecting A2 to A1 and B2 to B1 (the reverse is treated as 0), and the corresponding output will be recognized...
Fig. 3 shows AND and OR gates consisting of 3-pin based logic; Fig. 3 also shows the connection status of the output pin when A=0, B=1 is entered in the AND gate. When A=0, B=1, i.e., A is connected and B is connected, output C is connected only to the following two pins, and this is the correct result for the AND operation.
The structural computer used an inverted signal pair to implement the reversal of a signal (the NOT operation) as a structural transformation, i.e. a twist, and four pins were used for the AND and OR operations since series and parallel connections were required. However, one can think about whether the four-pin designs are the... | B
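The series/parallel reading of the AND and OR operations above can be sketched as a DFS reachability check on a toy pin graph (the node labels `src`, `mid`, `C` are illustrative, not the paper's actual pin encoding):

```python
def reachable(adj, start):
    """Iterative DFS over a directed adjacency dict; returns reachable vertices."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj.get(v, []))
    return seen

def and_gate(a, b):
    """AND as a series connection: current reaches C only if both pins close."""
    adj = {"src": ["mid"] if a else [], "mid": ["C"] if b else []}
    return "C" in reachable(adj, "src")

def or_gate(a, b):
    """OR as a parallel connection: either closed pin alone reaches C."""
    adj = {"src": (["C"] if a else []) + (["C"] if b else [])}
    return "C" in reachable(adj, "src")

truth = [(a, b, and_gate(a, b), or_gate(a, b)) for a in (0, 1) for b in (0, 1)]
```

The same reachability check generalizes to larger pin graphs with mixed series/parallel structure.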
Given a polynomial function $f(x)$ over a finite field $\mathbb{F}$ (or $\mathbb{F}^{n}$), determine if it is a permutation over $\mathbb{F}$ ($\mathbb{F}^{n}$... | We developed a linear representation theory for functions over $\mathbb{F}$ in the previous section. This section extends the idea to a family of functions over $\mathbb{F}$ defined through an $\mathbb{F}$-valued parameter. The well-known Dickson polynomial is one such motivatin... | Given a polynomial function $f(x)$ over a finite field $\mathbb{F}$ (or $\mathbb{F}^{n}$), determine if it is a permutation over $\mathbb{F}$ ($\mathbb{F}^{n}$... | Given a 1-parameter family of maps over $\mathbb{F}$, determine if it is parametrically invertible over $\mathbb{F}$. It is also shown in this paper that the compositional inverse of a 1-parameter family of permutation polynomials is also a 1-parameter family of permutation polynom... | The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over $\mathbb{F}$... | C
Another relevant factor is interpretability of the set of selected views. Although sparser models are typically considered more interpretable, a researcher may be interested in interpreting not only the model and its coefficients, but also the set of selected views. For example, one may wish to make decisions on which... | In terms of view selection, each of the $10\times 10$ fitted models is associated with a set of selected views. However, quantities like TPR, FPR and FDR cannot be computed since the true status of the views is unknown. We therefore report the number of selected views, since this allows assessment of mode... | For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu et al., 2012).
An exam... |
Another relevant factor is interpretability of the set of selected views. Although sparser models are typically considered more interpretable, a researcher may be interested in interpreting not only the model and its coefficients, but also the set of selected views. For example, one may wish to make decisions on which... | Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expressi... | B |
For each dataset, we conduct repeated experiments to ensure robustness. If the ratio of anomalies to the total number of objects in the dataset is greater than 1%, we randomly sample 1% of the total number of objects from the anomalous class as anomalies. This sampling process is repeated 20... | IndepVar: the percentage of independent variables. Independent variables refer to the variables that are not linked to any other variables in a Bayesian network. Therefore, the IndepVar characteristic is calculated as the percentage of variables with no PC variables.
|
To handle categorical variables, we convert them into numeric variables using 1-of-$\ell$ encoding [73]. This ensures that all variables are represented numerically, allowing us to perform the necessary calculations during the evaluation process.
Regarding the experiments on noisy variables, we introduce noisy variables into the synthetic datasets following the process in the existing literature [28]. Specifically, to ensure minimal dependency between the noisy and the original variables, the values of these noisy variables are drawn from a uniform distribution be... | The number of noisy variables: noisy variables are variables that are unrelated to the data generation process. Research [86, 87, 28] has shown that these variables can hide the characteristics of anomalies, making anomaly detection more challenging.
| B |
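A minimal sketch of the preprocessing steps described in this row (subsampling anomalies to 1%, 1-of-ℓ encoding, and injecting uniform noisy variables), with hypothetical helper names rather than the benchmark's actual code:

```python
import random

def one_of_l(values):
    """1-of-ℓ (one-hot) encoding of a categorical column, levels sorted."""
    levels = sorted(set(values))
    return [[1 if v == lvl else 0 for lvl in levels] for v in values]

def sample_anomalies(labels, ratio=0.01, seed=0):
    """Downsample the anomalous class (label 1) to `ratio` of the dataset size."""
    rng = random.Random(seed)
    anom = [i for i, y in enumerate(labels) if y == 1]
    k = max(1, int(ratio * len(labels)))
    return rng.sample(anom, min(k, len(anom)))

def add_noisy_variable(rows, seed=0):
    """Append a column drawn uniformly at random, independent of the data."""
    rng = random.Random(seed)
    return [r + [rng.uniform(0.0, 1.0)] for r in rows]

enc = one_of_l(["red", "blue", "red"])  # [[0, 1], [1, 0], [0, 1]]
```

Repeating `sample_anomalies` with different seeds reproduces the "repeated 20 times" robustness protocol in spirit.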
For building intuition, assume that $\mathbf{X}_{\mathcal{Q}_{t}}^{\top}\theta_{*}$... |
Our result is still $\mathrm{O}(\sqrt{d})$ away from the minimax lower bound of Chu et al. [2011] known for the linear contextual bandit. In the case of logistic bandits, Li et al. [2017] makes an i.i.d. assumption on the contexts to bridge the gap (however, they... | Next we show how using a global lower bound in the form of $\kappa$ (see Assumption 2) early in the analysis, as in the works of Filippi et al. [2010], Li et al. [2017], and Oh & Iyengar [2021], leads to a loose prediction error upper bound. For this we first introduce a new notation:
| The detailed proof is provided in A.4. Here we develop the main ideas leading to this result and develop an analytical flow which will be re-used while working with the convex confidence set $E_{t}(\delta)$ in Section 4.3. In the previou... | where pessimism is the additive inverse of the optimism (difference between the payoffs under the true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that $\theta_{*}\in C_{t}(\delta)$... | B
Recent temporal action localization methods can be generally classified into two categories based on the way they deal with the input sequence. In the first category, works such as BSN [21], BMN [20], G-TAD [44], and BC-GNN [3] re-scale each video to a fixed temporal length (usually a small length such as 100 snippets... | For example, BSN relies on the startness/endness curves to identify proposal candidates, but when more frames are used, the curves will have too many peaks and valleys to generate meaningful proposals. In G-TAD, if too many snippets are interpolated and neighboring snippets become similar, it tends to find graph neighb... | Graph neural networks (GNN) are a useful model for exploiting correlations in irregular structures [17]. As they become popular in different computer vision fields [13, 38, 40], researchers also find their application in temporal action localization [3, 44, 46]. G-TAD [44] breaks the restriction of temporal locations o... | they find themselves interested in a short video clip that has just fleeted away. They would scroll back to the clip and re-play it at a lower speed, by pause-and-play for example. We mimic this process when preparing a video before feeding it into a neural network. We propose to focus on a short period of a video, and m...
Compared to these methods, our VSGN builds a graph on video snippets as G-TAD does; differently, beyond modelling snippets from the same scale, VSGN also exploits correlations between cross-scale snippets and defines a cross-scale edge to break the scaling curse. In addition, our VSGN contains multiple-level graph neur... | A
(2) active views relevant for both projections are positioned on the top (cf. the VisEvol overview figure, panels b and c); and
(3) commonly-shared views that update on the exploration of either Projection 1 or 2 are placed at the bottom (see the VisEvol overview figure... | Thus, panel (h) of the VisEvol overview figure is always active for Projection 2, as it is related to the majority-voting ensemble.
A soft majority voting strategy (i.e., predicted probabilities) is always applied. | (2) active views relevant for both projections are positioned on the top (cf. the VisEvol overview figure, panels b and c); and
(3) commonly-shared views that update on the exploration of either Projection 1 or 2 are placed at the bottom (see the VisEvol overview figure... | (iv) control the evolutionary process by setting the number of models that will be used for crossover and mutation in each algorithm (the VisEvol overview figure, panel b); and
(v) compare the performance of the best-so-far identified ensemble against the acti... | After another hyperparameter space search (see the VisEvol overview figure, panel d) with the help of supporter views (the VisEvol overview figure, panels c, f, and g), out of the 290 models generated in... | A
In terms of the convergence rate, these algorithms are only effective in cases with high transition capabilities.
Additionally, the performance of these algorithms is highly sensitive to hyperparameters and requires careful selection for optimal results in each experiment. | Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen... | For the fastest mixing Markov chain synthesis, the problem is formulated as a convex optimization problem in [5], assuming that the Markov chain is symmetric. This paper also presents an extension to the method that involves synthesizing the fastest mixing reversible Markov chain with a given desired distribution. Furt... | It is worth noting that the bins comprising the operational region, as defined in Definition 6, determine the vertices of the uniform graph in Definition 1. Consequently, these vertices correspond to the states of the Markov chain defined in Definition 3. Similarly, the transition constraints of the swarm, defined by a... | Graph temporal logic (GTL) is introduced in [16] to impose high-level task specifications as constraints on the Markov chain synthesis. Markov chain synthesis is formulated as a mixed-integer nonlinear programming (MINLP) feasibility problem and the problem is solved using a coordinate descent algorithm. In addition, an... | D
Despite the exponential size of the search space, there exist efficient polynomial-time algorithms to solve the LAP [11]. A downside of the LAP is that the geometric relation between points is not explicitly taken into account, so that the found matchings lack spatial smoothness. To compensate for this, a correspondenc... | Despite the exponential size of the search space, there exist efficient polynomial-time algorithms to solve the LAP [11]. A downside of the LAP is that the geometric relation between points is not explicitly taken into account, so that the found matchings lack spatial smoothness. To compensate for this, a correspondenc... | Apart from methods tackling a QAP formulation (see previous paragraph), there exist directions utilising other structural properties of isometries.
The Laplace-Beltrami operator (LBO) [54], a generalisation of the Laplace operator on manifolds, as well as its eigenfunctions, are invariant under isometries. | The functional mapping is represented as a low-dimensional matrix for suitably chosen basis functions. The classic choice is the eigenfunctions of the LBO, which are invariant under isometries and predestined for this setting. Moreover, for general non-rigid settings learning these basis functions has also been propos... | Functional Maps [51] formulate the correspondence problem as a linear mapping $\mathcal{C}_{ij}:L^{2}(\mathcal{X}_{i})\to L^{2}(\mathcal{X}_{j})$... | B
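For the LAP mentioned in this row, a brute-force reference implementation makes the objective explicit; polynomial-time Hungarian-style solvers exist (e.g. `scipy.optimize.linear_sum_assignment`) but are not reproduced here, and the cost matrix below is made up:

```python
from itertools import permutations

def lap_bruteforce(cost):
    """Linear assignment problem: minimize sum(cost[i][p[i]]) over permutations p.
    Exponential-time reference only; Hungarian-style methods run in O(n^3)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assign, total = lap_bruteforce(cost)  # point i is matched to point assign[i]
```

As the row notes, such matchings ignore geometry, which is why spatial-smoothness terms are layered on top in practice.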
The recognition algorithm RecognizePG for path graphs is mainly built on the characterization of path graphs in [1]. This characterization decomposes the input graph $G$ by clique separators as in [18]; then at the recursive step one has to find a proper vertex coloring of an antipodality graph satisfying some parti... | interval graphs $\subset$ rooted path graphs $\subset$ directed path graphs $\subset$ path graphs $\subset$ chordal graphs.
| The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prov... | Directed path graphs are characterized by Gavril [9]; in the same article he also gives the first recognition algorithm, which has $O(n^{4})$ time complexity. In the above cited article, Monma and Wei [18] give the second characterizati... | On the side of directed path graphs, at the state of the art, our algorithm is the only one that does not use the results in [4], in which a linear time algorithm is given that is able to establish whether a path graph is also a directed path graph (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ... | B
In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from
http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the origi... |
The ego-networks dataset contains more than 1000 ego-networks from Facebook, Twitter, and GooglePlus. In an ego-network, all the nodes are friends of one central user and the friendship groups or circles (depending on the platform) set by this user can be used as ground truth communities. The SNAP ego-networks are ope... | The development of the Internet not only changes people’s lifestyles but also produces and records a large amount of network-structured data. Networks therefore pervade our lives, such as friendship networks and social networks, and they are also essential in science, such as biological networks (2002F...
Dolphins: this network consists of frequent associations between 62 dolphins in a community living off Doubtful Sound. In the Dolphins network, a node denotes a dolphin, and an edge stands for companionship [dolphins0, dolphins1, dolphins2]. The network splits naturally into two large groups, females and males [dolphins1, ... | In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from
http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the origi... | C |
See, e.g., Cheng et al. (2017); Cheng and Bartlett (2018); Xu et al. (2018); Durmus et al. (2019) and the references therein for the analysis of the Langevin MCMC algorithm.
Besides, it is shown that (discrete-time) Langevin MCMC can be viewed as (a discretization of) the Wasserstein gradient flow of $\mathrm{KL}[p(z),p(z|x)]$... | In other words, posterior sampling with Langevin MCMC can be posed as a distributional optimization method.
Furthermore, in addition to the KL divergence, $F(p)$ in (3.1) also incorporates other $f$-divergences (Csiszár, 1967).
The goal of GAN (Goodfellow et al., 2014) is to learn a generative model $p$ that is close to a target distribution $q$, where $p$ is defined by transforming a low-dimensional noise via a neural network. Since the objective in (3.1) includes $f$-divergences as special cases, our dis... | To circumvent such intractability, variational inference turns to minimizing the KL divergence between a variational posterior $p$ and the true posterior $p(z\,|\,x)$ in
(3.8) (Wainwright and Jordan, 2008; Blei et al., 2017), yielding the following distribu... | When $\mathcal{M}$ is specified by the level set of the KL divergence, for any fixed $\theta$, using Lagrangian duality, we can transform the inner problem in (3.7) into a KL-divergence-regularized distributional optimization problem as in (3.1) with $g$ replaced by $\ell(\cdot;\theta)$... | A
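The Langevin-MCMC-as-distributional-optimization view can be illustrated with a minimal unadjusted Langevin algorithm (ULA) sampler for a standard normal target; this is the generic textbook discretization, not any of the cited papers' exact schemes:

```python
import math
import random

def ula_sample(grad_log_p, steps=20000, step=0.05, seed=0):
    """Unadjusted Langevin algorithm:
    z <- z + (step/2) * grad log p(z) + sqrt(step) * standard normal noise.
    The Euler discretization of the Langevin diffusion, which can be read as a
    discretized Wasserstein gradient flow of KL(q || p)."""
    rng = random.Random(seed)
    z, chain = 0.0, []
    for _ in range(steps):
        z = z + 0.5 * step * grad_log_p(z) + math.sqrt(step) * rng.gauss(0.0, 1.0)
        chain.append(z)
    return chain

# Target p = N(0, 1), so grad log p(z) = -z; discard a burn-in prefix.
chain = ula_sample(lambda z: -z)
burn = chain[2000:]
mean = sum(burn) / len(burn)
var = sum((s - mean) ** 2 for s in burn) / len(burn)
```

After burn-in, the empirical mean and variance should approach 0 and 1 up to the discretization bias of the fixed step size.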
We can obtain the following findings: 1) Among these 5 models, the performance of Baseline is the worst. The reason is that it is hard to learn an effective decentralized policy independently in the multi-agent traffic signal control task, where one agent’s reward and transition are affected by its neighbors. 2) Compa... | 2) The performances of Individual RL and PressLight drop by 38% and 41% when the model is transferred. This shows that the models learned by the regular RL algorithms indeed rely on the training scenario. MetaLight is more robust to various scenarios than Individual RL and PressLight, which indicates the advantage of the m...
In this paper, we propose a novel meta-RL method, MetaVIM, for multi-intersection traffic signal control, which can make the policy learned from a training scenario generalizable to new unseen scenarios. MetaVIM learns a decentralized policy for each intersection which considers neighbor information in a latent way. W... | To make the policy transferable, traffic signal control is also modeled as a meta-learning problem in [14, 49, 36]. Specifically, the method in [14] performs meta-learning on multiple independent MDPs and ignores the influences of neighbor agents. A data augmentation method is proposed in [49] to generate diverse traf...
To learn effective decentralized policies, there are two main challenges. Firstly, it is impractical to learn an individual policy for each intersection in a city or a district containing thousands of intersections. Parameter sharing may help. However, each intersection has a different traffic pattern, and a simple sh... | B |
$A_{\text{rank-}r}\,\mathbf{z}=\mathbf{b}$ that is orthogonal to columns of
$[Q_{0}\,U,\,\tilde{Q}]$... | of $A^{\mathsf{H}}$ like the cases for $m<n$.
As a result, a basis $\{\mathbf{u}_{1},\,\ldots,\,\mathbf{u}_{n-r}\}$... | The range, kernel, rank and Hermitian transpose of a matrix $A$ are
denoted by $\mathpzc{Range}(A)$, $\mathpzc{Kernel}(A)$... | $A_{\text{rank-}r}\,A_{\text{rank-}r}^{\dagger}=[\mathbf{u}_{1},\cdots,\mathbf{u}_{r}]\,[\mathbf{u}_{1},\cdots,\mathbf{u}_{r}]^{\mathsf{H}}$
| $\mathbf{v}\in\mathpzc{Kernel}(\mathbf{f}_{\mathbf{x}}(\mathbf{x}_{*}))$... | A
In order to analyze the performance of an online algorithm, we will rely on the well-established framework of competitive analysis, which provides strict, theoretical performance guarantees against worst-case scenarios. In fact, as stated in (?), bin packing has served as “an early proving ground for this type of analy... | In this setting, the objective is to minimize the expected loss, defined as the difference between the number of bins opened by the algorithm, and the total size of all items normalized by the bin capacity.
Ideally, one aims for a loss that is as small as $o(n)$, where $n$ is the nu... |
While the standard online framework assumes that the algorithm has no information on the input sequence, a recently emerged and very active direction in Machine Learning seeks to leverage predictions on the input. More precisely, the algorithm has access to some machine-learned information on the input, which, however... | Online bin packing has also been studied under the advice complexity model (?, ?, ?), in which the online algorithm has access to some error-free information on the input called advice. The objective is to quantify the tradeoffs between the competitive ratio and the size of the advice (i.e., the number of bits in the b... |
Online bin packing was recently studied under an extension of the advice complexity model, in which the advice may be untrusted (?). Here, the algorithm’s performance is evaluated only at the extreme cases in which the advice is either error-free or adversarially generated, namely with respect to its consistency and i... | B |
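The online bin packing setting can be made concrete with the classic First Fit heuristic, a standard baseline in competitive analysis (the item sizes below are made up):

```python
def first_fit(items, capacity=1.0):
    """First Fit: place each arriving item into the first open bin where it
    fits; open a new bin otherwise. A classic online heuristic with a bounded
    competitive ratio against the offline optimum."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:  # tolerance for float sums
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

bins = first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5])
```

The total item size here is 3.0, so at least 3 bins are unavoidable; First Fit opens 4, illustrating the gap that competitive analysis quantifies in the worst case.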
Although modifications proposed by Bednarik et al. (2020) and Deng et al. (2020b) improve the quality of results, their objective is to fix deformations caused by the stitching of individual mappings. We postulate that by enforcing the local consistency of patch vertices within the objective function of a model, we ca... | Practically speaking, our approach transforms the embedding of the point cloud obtained from the base model to parametrize the bijective function represented by the MLP network. This function aims to find a mapping from a canonical 2D patch to the 3D patch on the surface of the target mesh. We condition the positioning ... | To that end, we propose a novel framework, LoCondA, capable of generating and reconstructing high-quality 3D meshes. This framework extends the existing base hypermodels (Spurek et al., 2020a, b) with an additional module designed for mesh generation that relies on a parametrization of local surfaces, as shown in Fig. ...
The results are presented in Table 1. LoCondA-HF obtains results comparable to the reference methods dedicated to point cloud generation. It can be observed that the values of the evaluated measures for HyperFlow(P) and LoCondA-HF (which uses HyperFlow(P) as a base model in the first part of the training) are on the same level... | Since we directly operate on points lying on surfaces of 3D objects, we use an existing solution based on hypernetworks, HyperCloud (Spurek et al., 2020a) or HyperFlow (Spurek et al., 2020b) (footnote 2: We can also use the conditioning framework introduced in (Yang et al., 2019; Chen et al., 2020a) instead of the classical encoder-de... | B
$R_{\mathcal{Z}}^{2}={2m}M_{x}^{2}(\lambda_{\min}^{+}(\mathbf{W}_{\mathbf{x}}))^{-2}$... | $\left\|(\mathbf{x},\mathbf{p})\right\|_{(\mathcal{X},\mathcal{P})}^{2}=\left\|\mathbf{x}\right\|_{\mathcal{X}}^{2}+\left\|\mathbf{p}\right\|_{\mathcal{P}}^{2}$ | Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem, which we solve by the DMP algorithm. Similarly to Section 3, we consider an SPP in a proximal setup and introduce Lagrange multipliers for the common variables. However, in t... | Next, we introduce the second important component of the convergence rate analysis, namely the smoothness assumption on the objective $F$.
To set the stage we first introduce a general definition of a Lipschitz-smooth function of two variables. | To prove Theorem 3.5 we first show that the iterates of Algorithm 1 naturally correspond to the iterates of a general Mirror-Prox algorithm applied to problem (54). Then we extend the standard analysis of the general Mirror-Prox algorithm to account for unbounded feasible sets.
| C |
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class. |
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with the sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6]. | where $\hat{L}=\hat{D}^{t}\hat{D}$ is the lower right $(|V|-1)\times(|V|-1)$ submatrix of the ...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric... |
The remainder of this section is dedicated to expressing the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describing an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i... | C
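The strictly fundamental cycle bases discussed above can be illustrated by growing a spanning tree and closing one cycle per non-tree edge; this is a generic sketch under our own naming, not the MSTCI algorithm itself:

```python
def fundamental_cycle_basis(n, edges):
    """Strictly fundamental cycle basis: build a spanning tree by DFS, then
    each non-tree edge (a chord) closes exactly one cycle through tree paths."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, stack, tree = {0: None}, [0], set()
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                tree.add(frozenset((u, v)))
                stack.append(v)

    def path_to_root(v):
        p = []
        while v is not None:
            p.append(v)
            v = parent[v]
        return p

    basis = []
    for u, v in edges:
        if frozenset((u, v)) in tree:
            continue  # tree edge, no cycle
        pu, pv = path_to_root(u), path_to_root(v)
        common = set(pu) & set(pv)
        lca = next(x for x in pu if x in common)
        cu = [x for x in pu if x not in common]
        cv = [x for x in pv if x not in common]
        basis.append(cu + [lca] + cv[::-1])  # chord (u, v) closes this cycle
    return basis

# 4-cycle plus one chord: dimension |E| - |V| + 1 = 5 - 4 + 1 = 2.
basis = fundamental_cycle_basis(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
```

The basis size matches the cycle-space dimension |E| - |V| + 1 for a connected graph, consistent with the MCB discussion above.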
Fix a simplicial complex $K$, a value $\delta\in(0,1]$, and integers $b\geq 1$ and $m>\mu(K)$. If $\mathcal{F}$ is a sufficiently large $(K,b)$-free cover such that $\pi_{m}(\mathcal{F})\geq\delta\binom{|\mathcal{F}|}{m}$...
Through a series of papers [18, 35, 22], the Helly numbers, Radon numbers, and fractional Helly numbers for $(\lceil d/2\rceil,b)$-covers in $\mathbb{R}^{d}$ were bounded in terms of $d$ and...
It is known that the Helly number of a $(K,b)$-free cover is bounded from above in terms of $K$ and $b$ [18] (footnote 2: The bound on the Helly number of a $(K,b)$-free cover directly follows from a combination of Proposition 30 and Lemma 26 in [18].), as is the Radon number [35, Proposit... | One immediate application of Theorem 1.2 is the reduction of fractional Helly numbers. For instance, it easily improves a theorem (footnote 4: [35, Theorem 2.3] was not phrased in terms of $(K,b)$-free covers but readily generalizes to that setting; see Section 1.4.1.) of Patáková [35, Theorem 2.3] in...
Note that the constant number of points given by the $(p,q)$-theorem in this case depends not only on $p$, $q$, and $d$, but also on $b$. For the setting of $(1,b)$-covers in surfaces (footnote 5: By a surface we mean a compact 2-dimensional ... | C
The calculation is based on three validation metrics after we subtract their standard deviations.
The grouped bar chart presents the performance based on accuracy, weighted precision, and weighted recall and their standard deviations due to cross-validation (error margins in black). | To the best of our knowledge, little empirical evidence exists for choosing a particular measure over others. In general, target correlation and mutual information (both related to the influence between features and the dependent variable) may be good candidates for identifying important features [71]. After these firs... | (iv) during the detailed examination phase, check the different transformations of the features with statistical measures and compare the combinations of two or three features that result in newly-generated features (cf. Fig. 1(d)); and
(v) contrast the best predictive performance found so far against t... | Teal color encodes the current action’s score, and brown the best result reached so far. The choice of colors was made deliberately because they complement each other, and the former denotes the current action since it is brighter than the latter.
If the list of features is long, the user can scroll this view. | To verify each of our interactions, we continuously monitor the process through the punchcard, as shown in Fig. 6(c). From this visualization, we acknowledge that when F16 was excluded, we reached a better result. The feature generation process (described previously) led to the best predictive result we managed to acco... | C |
For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters using a Latin hypercube design of experiments. The BO progress is shown in Figure 5, right panel, for the optimization with constraints on the jerk and on the tracking error. Af... | This paper demonstrated a hierarchical contour control implementation for increasing productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low-level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe...
In machining, positioning systems need to be fast and precise to guarantee high productivity and quality. Such performance can be achieved by a model predictive control (MPC) approach tailored for tracking a 2D contour [1, 2]; however, this requires precise tuning and sufficient computational capability of the associated hardware. ... | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low-level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi... | MPC accounts for the real behavior of the machine, and the axis drive dynamics can be excited to compensate for the contour error to a large extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following variou... | A |
We use the GQA visual question answering dataset [33] to highlight the challenges of using bias mitigation methods on real-world tasks. It has multiple sources of bias, including imbalances in answer distribution, visual concept co-occurrences, question word correlations, and question type/answer distribution. It is u... | So far, there is no study comparing methods from either group comprehensively. Often papers fail to compare against recent methods and vary widely in the protocols, datasets, architectures, and optimizers used. For instance, the widely used Colored MNIST dataset, where colors and digits are spuriously correlated with e... | We first present the mean per-group accuracy for all eight methods on all three datasets in Table 1 to see if any method does consistently well across benchmarks. For this, we used class and gender labels as explicit biases for CelebA. For Biased MNISTv1, there are multiple ways to define explicit biases, but for this... |
For each dataset, we assess all bias mitigation methods with the same neural network architecture. For CelebA, we use ResNet-18 [29]. For Biased MNISTv1, we use a convolutional neural network with four ReLU layers and a max-pooling layer attached after the first convolutional layer. For GQA-OOD, we employ th... |
We compare seven state-of-the-art bias mitigation methods on classification tasks using Biased MNISTv1 and CelebA, measuring generalization to minority patterns, scalability to multiple sources of biases, sensitivity to hyperparameters, etc. We ensure fair comparisons by using the same architecture, optimizer, and per... | C |
Semi-supervised CNNs require both labeled and unlabeled images for optimizing networks. Wang et al. propose an adversarial learning approach to improve the model performance on the target subject/dataset [59].
As shown in Fig. 6, it requires labeled images in the training set as well as unlabeled images of the target s... | They regress gaze directions from the pictorial representation.
Wang et al. propose an adversarial learning approach to extract the domain/person-invariant feature [59]. They feed the features into an additional classifier and design an adversarial loss function to handle the appearance variations. | During training, the result of the regression network is used to supervise the evaluation network, and the accuracy of the evaluation network determines the learning rate in the regression network.
They simultaneously train the two networks and improve the regression performance without additional inference parameters. | Semi-supervised CNNs require both labeled and unlabeled images for optimizing networks. Wang et al. propose an adversarial learning approach to improve the model performance on the target subject/dataset [59].
As shown in Fig. 6, it requires labeled images in the training set as well as unlabeled images of the target s... | They use the labeled data to supervise the gaze estimation network and design an adversarial module for semi-supervised learning.
Given these features used for gaze estimation, the adversarial module tries to distinguish their source and the gaze estimation network aims to extract subject/dataset-invariant features to ... | D |
The images of the used dataset are already cropped around the face, so we don’t need a face detection stage to localize the face from each image. However, we need to correct the rotation of the face so that we can remove the masked region efficiently. To do so, we detect 68 facial landmarks using Dlib-ml open-source l... | he2016deep has been successfully used in various pattern recognition tasks such as face and pedestrian detection mliki2020improved. It contains 50 layers trained on the ImageNet dataset. This network is a combination of residual network integration and deep architecture parsing. Training with ResNet-50 is faster d...
The images of the used dataset are already cropped around the face, so we don’t need a face detection stage to localize the face from each image. However, we need to correct the rotation of the face so that we can remove the masked region efficiently. To do so, we detect 68 facial landmarks using Dlib-ml open-source l... | The next step is to apply a cropping filter in order to extract only the non-masked region. To do so, we first normalize all face images to 240 × 240 pixels. Next, we partition a face into blocks. The principle of this technique is to divide the image into 100 fixed-size square blocks (24 × 24 pixels ... | Experimental results are carried out on the Real-world Masked Face Recognition Dataset (RMFRD) and the Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked. We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (... | C |
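The cropping-and-partitioning step quoted above (normalize faces to 240 × 240 pixels, then divide them into 100 fixed-size 24 × 24 blocks) can be sketched as follows; the function name and the use of NumPy are illustrative assumptions, not the excerpt's implementation.

```python
import numpy as np

def partition_into_blocks(face, block=24):
    """Split a square face image into fixed-size square blocks.

    With a 240x240 input and block=24 this yields the 100 blocks of
    24x24 pixels described in the excerpt.
    """
    h, w = face.shape[:2]
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    # reshape to (rows, block, cols, block), then reorder so each block
    # is contiguous: (rows, cols, block, block)
    blocks = face.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    return blocks.reshape(-1, block, block)

face = np.zeros((240, 240), dtype=np.uint8)   # stand-in for a normalized face
print(partition_into_blocks(face).shape)       # (100, 24, 24)
```

Blocks are returned in row-major order, so block 0 is the top-left 24 × 24 patch.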
Γ′ ⊢ C′ :: Δ
Γ ⊢ C, C′ :: Δ |
Configuration reduction → is given as multiset rewriting rules [CS09] in Figure 4, which replace any subset of a configuration matching the left-hand side with the right-hand side. However, ! indicates objects that persist across reductions. Principal cuts encountered in a configuration are resolved by passing ... |
The first rule for → corresponds to the identity rule and copies the contents of one cell into another. The second rule, which is for cut, models computing with futures [Hal85]: it allocates a new cell to be populated by the newly spawned P. Concurrently, Q may read from said new cell, which... |
Now, let Γ and Δ be contexts that associate cell addresses to types. The configuration typing judgment given in Figure 3, Γ ⊢ C :: Δ, means that the objects in C are well-typed with sources in Γ and destinations in Δ... | To review SAX, let us make observations about proof-theoretic polarity. In the sequent calculus, inference rules are either invertible (they can be applied at any point in the proof search process, like the right rule for implication) or noninvertible (they can only be applied when the sequent “contains enough information,” ... | A |
We emphasize that Part 1 is just an initialization step that only needs to be executed once before media sharing begins, while Part 2 is executed upon each authorized user’s request for a copy of the media content. Part 3 is executed only for each detected suspicious media content copy. | In the user-side embedding AFP, since the encrypted media content shared with different users is the same, the encryption of the media content is only executed once. In contrast, due to the personalization of D-LUTs, once a new user initiates a request, the owner must interact with this user to securely distribute the ... | Thirdly, there are also studies that deal with both privacy-protected access control and traitor tracing. Xia et al. [26] introduced the watermarking technique to privacy-protected content-based ciphertext image retrieval in the cloud, which can prevent the user from illegally distributing the retrieved images. However... | Therefore, in the case of a large number of users, the owner’s overhead in Part 2 should be the primary concern for a media sharing system.
Fortunately, in our design, by securely delegating the operations to the cloud, the owner in Part 2 only needs to calculate and send a re-encryption key, which incurs only negl... | The owner-side efficiency and scalability performance of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Compared to FairCMS-I, it is easy to see that in FairCMS-II the cloud’s overhead is increased considerably due to the ado... | C |
(2) By treating features as nodes and their pairwise feature interactions as edges, we bridge the gap between GNN and FM, and make it feasible to leverage the strength of GNN to solve the problem of FM.
(3) Extensive experiments are conducted on CTR benchmark and recommender system datasets to evaluate the effectivenes... |
In this work, we proposed a graph neural network-based approach to modeling feature interactions. We design a feature interaction selection mechanism, which can be seen as learning the graph structure by viewing the feature interactions as edges between features. | In summary, when dealing with feature interactions, FM suffers intrinsic drawbacks. We thus propose a novel model Graph Factorization Machine (GraphFM), which takes advantage of GNN to overcome the problems of FM for feature interaction modeling.
By treating features as nodes and feature interactions as the edges betwe... |
GraphFM(-S): interaction selection is the first component in each layer of GraphFM, which selects only the beneficial feature interactions and treats them as edges. As a consequence, we can model only these beneficial interactions with the next interaction aggregation component. To check the necessity of this component... | At each layer of GraphFM, we select the beneficial feature interactions and treat them as edges in a graph. Then we utilize a neighborhood/interaction aggregation operation to encode the interactions into feature representations.
By design, the highest order of feature interaction increases at each layer and is determi... | A |
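The GraphFM layer described above (select each feature node's beneficial interactions as edges, then aggregate neighbors into the feature representation) might look roughly like this; the inner-product scoring and softmax weighting are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def interaction_selection_and_aggregation(H, k=2):
    """One illustrative GraphFM-style layer.

    H holds one embedding per feature (feature nodes). Pairwise
    interactions are scored, each node keeps its top-k as edges
    (interaction selection), and the selected neighbors are aggregated
    into the node's representation (interaction aggregation).
    """
    n = H.shape[0]
    scores = H @ H.T                       # pairwise interaction scores
    np.fill_diagonal(scores, -np.inf)      # no self-interactions
    out = np.zeros_like(H)
    for i in range(n):
        nbrs = np.argsort(scores[i])[-k:]  # keep the k most beneficial edges
        w = np.exp(scores[i, nbrs])
        w /= w.sum()                       # softmax weights over selected edges
        out[i] = H[i] + w @ H[nbrs]        # aggregate neighbors into node i
    return out

H = np.random.default_rng(0).normal(size=(5, 4))  # 5 feature nodes, dim 4
print(interaction_selection_and_aggregation(H).shape)  # (5, 4)
```

Stacking such layers raises the order of the modeled interactions, mirroring the layer-by-layer behavior the excerpt describes.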
We also show improved convergence rates for several variants in various cases of interest and prove that the AFW [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and BPCG [Tsuji et al., 2022] algorithms coupled with the backtracking line search of Pedregosa et al. [2020] can achieve linear convergence rates over polytopes wh... | Complexity comparison: number of iterations needed to reach a solution with h(x) below ε for Problem 1.1 for Frank-Wolfe-type algorithms in the literature. The asterisk on FW-LLOO highlights the fact that the procedure is different from the standard LMO procedur... | the second-order step size and the LLOO algorithm from Dvurechensky et al. [2022] (denoted by GSC-FW and LLOO in the figures) and the Frank-Wolfe and the Away-step Frank-Wolfe algorithm with the backtracking stepsize of Pedregosa et al. [2020],
denoted by B-FW and B-AFW respectively. |
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of 𝒪(1/t). The idea of the proof is... |
Research reported in this paper was partially supported through the Research Campus Modal funded by the German Federal Ministry of Education and Research (fund numbers 05M14ZAM, 05M20ZBM) and the Deutsche Forschungsgemeinschaft (DFG) through the DFG Cluster of Excellence MATH+. We would like to thank the anonymous revi... | D |
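A minimal sketch of a vanilla Frank-Wolfe run that tracks the Frank-Wolfe gap discussed in the excerpt (the minimum gap over the run decays at O(1/t)); the simplex feasible region, the quadratic objective, and the open-loop step size 2/(t+2) are illustrative choices, not the paper's setting.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=2000):
    """Vanilla Frank-Wolfe over the probability simplex with the classic
    open-loop step size 2/(t+2), recording the Frank-Wolfe gap
    g_t = <grad f(x_t), x_t - s_t> at every iteration."""
    x = x0.copy()
    gaps = []
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[int(np.argmin(g))] = 1.0       # LMO over the simplex: best vertex
        gaps.append(float(g @ (x - s)))  # FW gap; its running minimum shrinks
        x += 2.0 / (t + 2.0) * (s - x)   # convex step toward the vertex
    return x, gaps

# minimize f(x) = ||x - c||^2 over the simplex; the optimum is c itself
c = np.array([0.2, 0.3, 0.5])
x, gaps = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]))
print(np.round(x, 2), min(gaps))
```

Since the gap upper-bounds the primal suboptimality, its running minimum is a practical stopping criterion.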
Furthermore, we make some important observations about invariants that are preserved by operations of our algorithm which we will use later.
In Section 4, we prove the correctness of our algorithm. The approximation analysis as well as the proof of the pass complexity can be found in Section 5. In Section 6 we provide ... | The basic building block in the search for augmenting paths is to find semi-matchings between the vertices and their matched neighbors such that each vertex has a small amount of neighbors in the semi-matching.
In the case of bipartite graphs, they show that their method of searching for augmenting paths in a graph def... | Furthermore, we make some important observations about invariants that are preserved by operations of our algorithm which we will use later.
In Section 4, we prove the correctness of our algorithm. The approximation analysis as well as the proof of the pass complexity can be found in Section 5. In Section 6 we provide ... |
In the first pass, we apply a simple greedy algorithm to find a maximal matching, hence a 2-approximation. This 2-approximate maximum matching is our starting matching. The rest of our algorithm is divided into multiple phases. In each phase, we iteratively improve the approximation ratio of our current matchin... | In this section, we give a brief outline of our approach and discuss the challenges we overcome.
As the basic building block, we follow the classic approach by Hopcroft and Karp [HK73] of iteratively finding short augmenting paths to improve a 2-approximate matching that can easily be found by a greedy algorithm. | D |
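The greedy first pass quoted above (a maximal matching is a 2-approximation of a maximum matching) can be sketched as a single scan over the edge stream; the edge-list representation is an assumption.

```python
def greedy_maximal_matching(edges):
    """One pass over the edge stream: pick an edge whenever both
    endpoints are still unmatched.

    The result is a maximal matching, hence at least half the size of a
    maximum matching -- the 2-approximate starting matching used above.
    """
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On the path 1-2-3-4, seeing the middle edge first yields one edge,
# while the maximum matching {(1,2),(3,4)} has two: within the factor-2 bound.
print(greedy_maximal_matching([(2, 3), (1, 2), (3, 4)]))  # [(2, 3)]
```

Augmenting paths are then used, as in Hopcroft-Karp, to push the ratio below 2.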
For example, the rapid development of distributed machine learning involves data whose size is increasingly large and which are usually stored across multiple computing agents that are spatially distributed. Centralizing large amounts of data is often undesirable due to limited communication resources and/or priva... | The Push-Pull/𝒜ℬ method introduced in [24, 25] modified the gradient tracking methods to deal with directed network topologies without the push-sum technique.
The algorithm uses a row stochastic matrix to mix the local decision variables and a column stochastic matr... |
We propose CPP – a novel decentralized optimization method with communication compression. The method works under a general class of compression operators and is shown to achieve linear convergence for strongly convex and smooth objective functions over general directed graphs. To the best of our knowledge, CPP is the... | Many methods have been proposed to solve the problem (1) under various settings on the optimization objectives, network topologies, and communication protocols.
The paper [10] developed a decentralized subgradient descent method (DGD) with diminishing stepsizes to reach the optimum for convex objective functions over a... | In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP... | C |
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, to propose optimal algorithms, and to derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile... |
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low... | Note that in the proposed formulation (1) we consider both the centralized and decentralized cases. In the decentralized setting, all nodes are connected within a network, and each node can communicate/exchange information only with their neighbors in the network. While the centralized architecture consists of master-s... | We present a new SPP formulation of the PFL problem (1) as the decentralized min-max mixing model. This extends the classical PFL problem to a broader class of problems beyond the classical minimization problem. It furthermore covers various communication topologies and hence goes beyond the centralized setting.
| We propose lower bounds on both the communication and the number of local oracle calls for a general class of algorithms (those satisfying Assumption 3). The bounds naturally depend on the communication matrix W (as in the minimization problem), but our results apply to SPPs (see “Lower” rows in Table 1
for variou... | C |
Trade Comm is a two-player, common-payoff trading game, where players attempt to coordinate on a compatible trade. This game is difficult because it requires searching over a large number of policies to find a compatible mapping, and can easily fall into a sub-optimal equilibrium. Figure 2(b) shows a remarkable domina... |
Trade Comm is a two-player, common-payoff trading game, where players attempt to coordinate on a compatible trade. This game is difficult because it requires searching over a large number of policies to find a compatible mapping, and can easily fall into a sub-optimal equilibrium. Figure 2(b) shows a remarkable domina... | PSRO has proved to be a formidable learning algorithm in two-player, constant-sum games, and JPSRO, with (C)CE MSs, is showing promising results on n-player, general-sum games. The secret to the success of these methods seems to lie in (C)CEs ability to compress the search space of opponent policies to an expressive an... | Sheriff (Farina et al., 2019b) is a two-player, general-sum negotiation game. It consists of bargaining rounds between a smuggler, who is motivated to import contraband without getting caught, and a sheriff, who is motivated to find contraband or accept bribes. Figure 2(c) shows that JPSRO is capable of finding the opt... |
Measuring convergence to NE (NE Gap, Lanctot et al. (2017)) is suitable in two-player, constant-sum games. However, it is not rich enough in cooperative settings. We propose to measure convergence to (C)CE ((C)CE Gap in Section E.4) in the full extensive-form game. A gap, Δ, of zero implies convergence t... | C |
One small extension of the present work would be to consider queries with range ℝ^d. It would also be interesting to extend our results to handle arbitrary normed spaces, using appropriate noise such as perhaps the Laplace mechani... |
We note that the first part of this definition can be viewed as a refined version of zCDP (Definition B.18), where the bound on the Rényi divergence (Definition B.5) is a function of the sample sets and the query. As for the second part, since the bound depends on the queries, which themselves are random variables, it... |
The contribution of this paper is two-fold. In Section 3, we provide a tight measure of the level of overfitting of some query with respect to previous responses. In Sections 4 and 5, we demonstrate a toolkit to utilize this measure, and use it to prove new generalization properties of fundamental noise-addition mecha... |
The dependence of our PC notion on the actual adaptively chosen queries places it in the so-called fully-adaptive setting (Rogers et al., 2016; Whitehouse et al., 2023), which requires a fairly subtle analysis involving a set of tools and concepts that may be of independent interest. In particular, we establish a seri... |
We hope that the mathematical toolkit that we establish in Appendix B to analyze our stability notion may find additional applications, perhaps also in context of privacy accounting. Furthermore, the max divergence can be generalized analogously to the “dynamic” generalization of Rényi divergence proposed in this pape... | D |
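As a toy illustration of the noise-addition mechanisms discussed in the excerpt, here is the textbook Laplace mechanism; the query, the sensitivity, and the privacy parameter below are hypothetical and unrelated to the paper's constructions.

```python
import numpy as np

def laplace_mechanism(answer, sensitivity, epsilon, rng):
    """Textbook noise-addition mechanism: perturbing a numeric query
    answer with Lap(sensitivity / epsilon) noise yields
    epsilon-differential privacy for that query."""
    return answer + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
# hypothetical counting query over n = 1000 samples: sensitivity 1/n
noisy = laplace_mechanism(0.42, sensitivity=1.0 / 1000, epsilon=0.5, rng=rng)
print(round(noisy, 3))
```

The small sensitivity-to-epsilon ratio keeps the added noise tiny relative to the answer, which is the regime where generalization guarantees for adaptive analysts are typically studied.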
All z-antlers (Ĉ, F̂) that are z-properly colored by χ prior to executing the algorithm are also z-properly colored by χ after termination of the algor... |
We show first that any z-properly colored antler prior to executing the algorithm remains z-properly colored after termination. Afterwards we argue that in Item 5, the pair (χ_V⁻¹(Ċ), χ_V⁻¹(Ḟ))... | All z-antlers (Ĉ, F̂) that are z-properly colored by χ prior to executing the algorithm are also z-properly colored by χ after termination of the algor... | We now show that a z-antler can be obtained from a suitable coloring χ of the graph. The algorithm we give updates the coloring χ and recolors any vertex or edge that is not part of a z-properly colored antler to color Ṙ...
To show that the algorithm preserves properness of the coloring, we show that every individual recoloring preserves properness; that is, if an arbitrary z-antler is z-properly colored prior to the recoloring, it is also z-properly colored after the recoloring. | D |
Recently, some works treat shadow generation as an image-to-image translation task and develop deep networks that translate an input composite image without foreground shadow into a target image with foreground shadow. For instance, Zhan et al. [189] used an auto-encoder to predict the shadow mask with a pretrained illu... | Some other shadow generation methods are not designed for our task, i.e., generating shadow for the foreground object in a composite image, but they can be somehow adapted to our task.
Mask-ShadowGAN [54] explored conducting shadow removal and shadow generation with unpaired data at the same time, which satisfies cycli... |
Although it is feasible to generate paired data using rendering techniques, the rendered images have a large domain gap with real images. When applying a model trained on rendered images to real images, performance usually degrades significantly. To overcome this drawback, Hong et al. [52] constructed paired d... |
Similar to image harmonization in Section IV, composite images without foreground shadows can be easily obtained. Nonetheless, it is very difficult to obtain paired data, i.e., a composite image without foreground shadow and a ground-truth image with foreground shadow, which are required by supervised deep learning me... |
Other methods [203, 57, 92, 52] utilized paired training data (paired images with and without foreground shadow) to generate better shadow images. ShadowGAN [203] employed standard conditional GAN with reconstruction loss, local adversarial loss, and global adversarial loss to generate shadow for the inserted 3D foreg... | D |
We denote a spatio-temporal tensor for city c (e.g., taxi flow values) as 𝐱_c ∈ ℝ^{T_c × W_c × H_c}... |
Our data collection covers a total of 7 cities, namely Beijing, Shanghai, Shenzhen, Chongqing, Xi’an, Chengdu (footnote 1, for Xi’an and Chengdu: the original data were obtained from the HKUST-DiDi Joint Research Laboratory; some of the data can be made available upon request after undergoing a process of desensitization) and Hong Kong... | In the present study, we have introduced CityNet, a multi-modal dataset specifically designed for urban computing in smart cities, which incorporates spatio-temporally aligned urban data from multiple cities and diverse tasks. To the best of our knowledge, CityNet is the first dataset of its kind, which provides a comp... | TABLE VII: The results of inter-city transfer learning from source domains (Beijing, Shanghai, and Xi’an) to target domains (Shenzhen, Chongqing, and Chengdu). The lowest RMSE/MAE using limited target data is highlighted in bold. The results under full data and 3-day data represent the lower and upper bounds for the er... | In addition to the collection and processing of data, it is essential to identify and quantify the correlations between sub-datasets in CityNet to gain insights into the effective utilization of the multi-modal data. In this section, we leverage data mining tools to explore and visualize the relationships between servi... | A |
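The tensor convention in the excerpt (a city's data as x_c ∈ ℝ^{T_c × W_c × H_c}, time by grid width by grid height) can be illustrated with a toy example; the Poisson-sampled flows and the per-cell min-max normalization are illustrative assumptions, not CityNet's actual preprocessing.

```python
import numpy as np

# A toy spatio-temporal tensor for one city: T=6 time steps over a 4x5 grid,
# following the x_c in R^{T_c x W_c x H_c} convention from the excerpt.
rng = np.random.default_rng(7)
x_c = rng.poisson(lam=3.0, size=(6, 4, 5)).astype(float)  # e.g. taxi flow counts

# Per-grid-cell min-max normalization over the time axis (an illustrative
# preprocessing choice); constant cells are mapped to 0 to avoid 0/0.
lo, hi = x_c.min(axis=0), x_c.max(axis=0)
x_norm = (x_c - lo) / np.where(hi > lo, hi - lo, 1.0)
print(x_norm.shape)  # (6, 4, 5)
```

Keeping the time axis first makes it easy to slice training windows, e.g. `x_norm[:4]` as input and `x_norm[4:]` as the prediction target.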
One can immediately expect that, analogous to general mean-variance estimators with a Gaussian prediction interval, this procedure does not give optimal intervals for data sets that do not follow a normal distribution. One of the consequences is that this model might suffer from the validity problems discussed in Secti... |
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat... | The idea behind deep ensembles lakshminarayanan2017simple is the same as for any ensemble technique: training multiple models to obtain a better and more robust prediction. The loss functions of most (deep) models have multiple local minima and by aggregating multiple models one hopes to take into account all these mi... | For each of the selected models, Fig. 4 shows the best five models in terms of average width, excluding those that do not (approximately) satisfy the coverage constraint (2). This figure shows that there is quite some variation in the models. There is not a clear best choice. Because on most data sets the models produc... | The choice of data sets in this comparative study was very broad and no specific properties were taken into account a priori. After comparing the results of the different models, it did become apparent that certain assumptions or properties can have a major influence on the performance of the models. The main examples ... | B |
ASAP, the aligned scores & performances dataset compiled by \citeyear{asap} (footnote: https://github.com/fosfrancesco/asap-dataset), contains 1,068 MIDI performances of 222 Western classical music compositions from 15 composers, along with the MIDI performances of the 222 pieces compiled from the MAESTRO dataset \parencite{hawtho... | POP909 comprises piano covers of 909 pop songs compiled by \textcite{pop909} (footnote: https://github.com/music-x-lab/POP909-Dataset). It is the only dataset among the five that provides melody/non-melody labels for each note. Specifically, each note is labelled with one of the following three classes: vocal melody (piano notes ... | Table 3: Testing metrics (in %) of “our model (performance) +CP” and other baseline methods for the two-class “melody versus non-melody” classification task using POP909, viewing vocal melody and instrumental melody as “melody” and accompaniment as “non-melody”.
| Specifically, we consider two formulations of the task. Firstly, we adhere to the original configuration of POP909 and perform three-class melody classification, classifying each Pitch into three categories: vocal melody, instrumental melody or accompaniment. Secondly, we merge vocal melody and instrumental melody into... | The skyline algorithm can only perform “melody versus non-melody” two-class classification for it cannot distinguish between vocal melody and instrumental melody—it uses the simple rule of taking the note with the highest pitch among the concurrent notes as the melody, while avoiding temporally overlapping notes \paren... | A |
And of course we have to use a different color for each vertex, so BBC_λ(K_n, T) ≥ n – thus BBC_λ(K_n, T)... | The linear running time follows directly from the fact that we compute c only once and we can additionally pass through the recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen... | In this section we will proceed as follows: we first introduce the so-called red-blue-yellow (k,l)-decomposition of a forest F on n vertices, which finds a set Y of size at most l such that we can split V(F)∖Y ... | To achieve the same result for forest backbones we only need to add some edges that would make the backbone connected and spanning. However, we can always make a forest connected by adding edges between some leaves and isolated vertices, and we will not increase the maximum degree of the forest, as long as Δ(F) ≥ 2... | In this paper, we turn our attention to the special case when the graph is complete (denoted K_n) and its backbone is a (nonempty) tree or a forest (which we will denote by T and F, respectively).
Note that it has a natural in... | C |