context: string (length 250–4.36k)
A: string (length 250–4.12k)
B: string (length 250–5.11k)
C: string (length 250–5.1k)
D: string (length 250–4.17k)
label: string (4 classes)
x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)=\left[nx^{2}(n+D)-m(D-2+m)\right]R_{n}^{m}(x)+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).
\frac{{R_{n}^{m}}''(x)}{{R_{n}^{m}}'(x)}=\frac{1}{x^{2}-1}\left[\left(n(n+D)-\frac{m(D-2+m)}{x^{2}}\right)\frac{R_{n}^{m}(x)}{{R_{n}^{m}}'(x)}+\frac{D-1-(D+1)x^{2}}{x}\right].
\int_{0}^{1}x^{D-1}R_{n}^{m}(x)\,R_{n'}^{m}(x)\,dx\propto\delta_{n,n'}.
x^{3}(x^{2}-1)^{2}\frac{d^{3}}{dx^{3}}R_{n}^{m}(x)=\cdots+\Big\{\big[\cdots-m^{2}\big]x^{2}+D^{2}+D(m-1)-2m+m^{2}\Big\}\frac{d}{dx}R_{n}^{m}(x).
x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)=\cdots+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).
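As a sanity check on the algebra, the ratio form follows from the second-order ODE by dividing through by $x^{2}(x^{2}-1){R_{n}^{m}}'(x)$; the snippet below plugs arbitrary numbers into both sides (a pure consistency test, with placeholder values for $R$ and $R'$, since the identity is algebraic and must hold for any nonzero values):

```python
# Consistency check: dividing the second-order ODE
#   x^2 (x^2 - 1) R'' = [n x^2 (n + D) - m (D - 2 + m)] R + x [D - 1 - (D + 1) x^2] R'
# by x^2 (x^2 - 1) R' should give
#   R''/R' = (1/(x^2 - 1)) [ (n (n + D) - m (D - 2 + m)/x^2) R/R' + (D - 1 - (D + 1) x^2)/x ].
# R and R' are placeholder values: the identity is pure algebra.

def rpp_from_ode(x, n, m, D, R, Rp):
    """R'' as determined directly by the second-order ODE."""
    rhs = (n * x**2 * (n + D) - m * (D - 2 + m)) * R \
        + x * (D - 1 - (D + 1) * x**2) * Rp
    return rhs / (x**2 * (x**2 - 1))

def rpp_from_ratio(x, n, m, D, R, Rp):
    """R'' as determined by the R''/R' ratio form."""
    bracket = (n * (n + D) - m * (D - 2 + m) / x**2) * R / Rp \
        + (D - 1 - (D + 1) * x**2) / x
    return Rp * bracket / (x**2 - 1)

for x, n, m, D, R, Rp in [(0.5, 4, 2, 3, 1.7, -0.6), (0.3, 6, 0, 5, -2.2, 3.1)]:
    a, b = rpp_from_ode(x, n, m, D, R, Rp), rpp_from_ratio(x, n, m, D, R, Rp)
    assert abs(a - b) < 1e-12 * max(1.0, abs(a))
```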
D
This is achieved by using specific upper and lower triangular transvections to avoid using a discrete logarithm oracle. Building on Lemma 3.2, we construct transvections that are upper triangular matrices. Here, as per Section 3.1, $\omega$ denotes a primitive element of $\mathbb{F}_{q}$.
Let $i\in\{1,\dotsc,d-1\}$. Getting the diagonal entry of $h$ at position $(i,i)$ to $1$ requires the following number of operations. We start by adding column $i+1$ to column $i$ as in Line 5. We alre…
The idea is to eliminate all other entries in the $c$th column, namely to apply elementary row operations to make the entries in rows $i=r+1,\ldots,d$ of column $c$ equal to zero. Specifically, $g$ is multiplied on the left by the transvec…
Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$. Then the elementary column operations can be used to reduce the other entries in row $r$ to zero.
The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left...
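The reduction idea can be sketched over a small prime field. This is plain Gauss–Jordan elimination over $\mathbb{F}_{7}$ (an illustrative choice); the paper's version additionally restricts itself to transvections (avoiding scaling by the pivot inverse) and counts the operations, which this sketch does not reproduce:

```python
# Illustrative sketch (not the paper's transvection bookkeeping): reduce an
# invertible matrix over F_P to the identity with elementary row operations,
# mirroring the elementary-divisors-style reduction (row ops = left
# multiplications; column ops, not needed here, = right multiplications).

P = 7  # a small prime, chosen only for illustration

def reduce_to_identity(g):
    """Gauss-Jordan reduction of g to the identity mod P; returns the result."""
    n = len(g)
    g = [row[:] for row in g]
    for c in range(n):
        # find a pivot row r >= c with a nonzero entry in column c
        r = next(i for i in range(c, n) if g[i][c] % P != 0)
        g[c], g[r] = g[r], g[c]                      # row swap
        inv = pow(g[c][c], P - 2, P)                 # inverse via Fermat
        g[c] = [(x * inv) % P for x in g[c]]         # scale pivot row to 1
        for i in range(n):                           # clear the rest of column c
            if i != c and g[i][c] % P:
                f = g[i][c]
                g[i] = [(a - f * b) % P for a, b in zip(g[i], g[c])]
    return g

g = [[2, 3, 1], [1, 0, 4], [5, 6, 2]]                # invertible mod 7
assert reduce_to_identity(g) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```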
D
where $\Omega\subset\mathbb{R}^{d}$, with $d=2$ or $3$ for simplicity, is an open bounded domain with polyhedral boundary $\partial\Omega$, and the symmetric tensor $\mathcal{A}\in[L^{\infty}(\Omega)]_{\mathrm{sym}}^{d\times d}$…
In [MR2718268] it is shown that the number of very large eigenvalues is related to the number of connected sub-regions of $\bar{\tau}\cup\bar{\tau}'$ with large coefficien…
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85…
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ...
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
B
We remark that the previously best known algorithms for finding the minimum area / perimeter all-flush triangle take nearly linear time [6, 1, 2, 3, 23], that is, $O(n\log n)$ or $O(n\log^{2}n)$…
Using a Rotate-and-Kill process (shown in Algorithm 5), we find all the edge pairs and vertex pairs in $\mathsf{U}_{r,s,t}$ that are not G-dead.
Then, during the Rotate-and-Kill process, the pair $(e_{b},e_{c})$ will meet all pairs that are not DEAD, which implies that the algorithm finds the minimum perimeter (all-…
The inclusion / circumscribing problems usually admit the property that the set of locally optimal solutions is pairwise interleaving [6]. Once this property is established and $k=3$, we show that an iteration process (also referred to as Rotate-and-Kill) can be applied to search all the locally optim…
in the Rotate-and-Kill process, and we are at the beginning of another iteration $(b',c')$ satisfying (2).
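The pointer discipline of a Rotate-and-Kill process can be sketched generically: each iteration either rotates one pointer forward or kills the other, so only linearly many candidate pairs are ever visited. The predicate `should_rotate` below is a hypothetical stand-in for the algorithm's geometric (G-dead) test, which this sketch does not implement:

```python
# Generic skeleton of a Rotate-and-Kill sweep: each iteration either
# advances ("rotates") pointer b or advances ("kills") pointer c, so at
# most 2n pairs are ever examined. `should_rotate` is a hypothetical
# stand-in for the geometric predicate of the actual algorithm.

def rotate_and_kill(n, should_rotate):
    pairs = []
    b, c = 0, 1
    while b < n and c < n:
        pairs.append((b, c))
        if should_rotate(b, c):
            b += 1
        else:
            c += 1
    return pairs

pairs = rotate_and_kill(6, lambda b, c: (b + c) % 2 == 0)
assert len(pairs) <= 2 * 6       # linearly many candidate pairs
assert pairs[0] == (0, 1)
```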
C
Due to the importance of information propagation for rumors and their detection, there are also different simulation studies [25, 27] about rumor propagations on Twitter. Those works provide relevant insights, but such simulations cannot fully reflect the complexity of real networks. Furthermore, there are recent work...
We tested all models using 10-fold cross-validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; TS-SVM it…
As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on t...
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text content, which is the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha…
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure, when building our approach for early rumor detection with our extended dataset, and we provide a deep analys...
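A simplified reading of such a series-time representation can be sketched as follows. The bucketing and single scalar feature are illustrative assumptions, not the exact DSTS model of [20]: bucket an event's tweets into N equal-width time intervals, average each feature per interval, and append the interval-to-interval deltas that capture the temporal variation:

```python
# Simplified DSTS-style feature vector (illustrative, not the model of [20]):
# per-interval feature means concatenated with successive differences.

def dsts_vector(timestamps, features, n_intervals):
    t0, t1 = min(timestamps), max(timestamps)
    width = (t1 - t0) / n_intervals or 1.0
    d = len(features[0])
    sums = [[0.0] * d for _ in range(n_intervals)]
    counts = [0] * n_intervals
    for t, f in zip(timestamps, features):
        i = min(int((t - t0) / width), n_intervals - 1)   # interval index
        counts[i] += 1
        for j, x in enumerate(f):
            sums[i][j] += x
    means = [[s / c if c else 0.0 for s in row]
             for row, c in zip(sums, counts)]
    deltas = [[b - a for a, b in zip(means[i], means[i + 1])]
              for i in range(n_intervals - 1)]             # temporal variation
    return [x for row in means + deltas for x in row]

assert dsts_vector([0, 1, 2, 3], [[1.0], [3.0], [5.0], [7.0]], 2) == [2.0, 6.0, 4.0]
```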
D
\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)
where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen…
In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the K-class SVM:
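The flavor of these results can be reproduced on a toy problem: gradient descent on logistic loss over linearly separable data diverges in norm while its *direction* approaches the max-margin (hard SVM) separator. The dataset below is hand-picked (an illustrative assumption) so that the max-margin direction through the origin is $(1,1)/\sqrt{2}$:

```python
import math

# Gradient descent on logistic loss over separable data: ||w(t)|| diverges
# while w(t)/||w(t)|| approaches the max-margin direction, here (1,1)/sqrt(2).

data = [((1.0, 1.0), 1), ((2.0, 0.0), 1), ((-1.0, -1.0), -1), ((-2.0, 0.0), -1)]
w = [0.0, 0.0]
lr = 1.0
for _ in range(100000):
    g = [0.0, 0.0]
    for (x1, x2), y in data:
        s = y * (w[0] * x1 + w[1] * x2)
        p = 1.0 / (1.0 + math.exp(min(s, 50.0)))   # sigmoid(-s), overflow-safe
        g[0] -= y * x1 * p
        g[1] -= y * x2 * p
    w[0] -= lr * g[0] / len(data)
    w[1] -= lr * g[1] / len(data)

norm = math.hypot(*w)
cos = (w[0] + w[1]) / (norm * math.sqrt(2))        # cosine with (1,1)/sqrt(2)
assert cos > 0.9                                    # direction near max-margin
assert norm > 5.0                                   # while the norm diverges
```

The slow $O(\log\log t/\log t)$ directional rate is visible here: even after $10^{5}$ steps the direction is close to, but not exactly at, the max-margin separator.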
B
Widely spreading rumors can be harmful to governments, markets, and society, and reduce the usefulness of social media channels such as Twitter by affecting the reliability of their content. Therefore, effective methods for detecting rumors on Twitter are crucial, and rumors should be detected as early as possible before…
The performance of Twitter features is stable over time from the beginning to the end. The 3 best Twitter features are all based on URLs contained in tweets: ContainNEWS, UrlRankIn5000, WotScore, as shown in Table 8. It is quite reasonable that a news event would have a higher probability of being reported by news or…
The city police had to warn the population to refrain from spreading related news on Twitter as it was getting out of control: “Rumors are wildfires that are difficult to put out and traditional news sources or official channels, such as police departments, subsequently struggle to communicate verified information to t...
For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even…
C
Discounted Cumulative Gain (NDCG) and recall@k (r@k). We measure the retrieval effectiveness of each metric at 3 and 10 (m@3 and m@10, where m ∈ {NDCG,…
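For reference, minimal implementations of the two measures. This uses the linear-gain NDCG variant; graded-gain conventions (e.g. $2^{r}-1$) differ across toolkits:

```python
import math

# NDCG@k over graded relevance labels in ranked order, and recall@k
# over ranked document ids versus a relevant set.

def dcg(rels, k):
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg(rels, k):
    ideal = dcg(sorted(rels, reverse=True), k)
    return dcg(rels, k) / ideal if ideal > 0 else 0.0

def recall_at_k(ranked_ids, relevant_ids, k):
    return len(set(ranked_ids[:k]) & set(relevant_ids)) / len(relevant_ids)

assert ndcg([3, 2, 1, 0], 4) == 1.0      # ideal ordering
assert ndcg([3, 2, 0, 1], 4) < 1.0       # swap lowers the score
assert recall_at_k(["a", "b", "c"], {"a", "c", "d"}, 3) == 2 / 3
```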
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the previously mentioned event times. We compare the result of the cascaded model with non-cascaded logistic regression. The res…
Evaluation methodology. For RQ1, given an event entity e at time t, we need to classify it into either the Breaking or the Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of b…
For this part, we first focus on evaluating the performance of single L2R models learned from the pre-selected time (before, during, and after) and type (Breaking and Anticipated) sets of entity-bearing queries. This allows us to evaluate feature performance, i.e., salience and timeliness, with time and type…
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of real-world events, which are driven by a great variety of factors. We address the two major factors assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we…
B
In this case, the agent must sequentially learn both the underlying dynamics ($L_{a},\Sigma_{a};\forall a$) and the conditional reward function's variance …
For the more interesting case of unknown parameters, we marginalize the parameters $L_{a}$ and $\Sigma_{a}$ of the transition distributions
If the support of $q(\cdot)$ includes the support of the distribution of interest $p(\cdot)$, one computes the IS estimator of a test function based on the normalized weights $w^{(m)}$,
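A self-contained sketch of the self-normalized IS estimator, with an arbitrary Gaussian target/proposal pair chosen purely for illustration:

```python
import math, random

# Self-normalized importance sampling: estimate E_p[f(x)] by drawing from a
# proposal q whose support covers p, weighting each draw by p(x)/q(x), and
# normalizing the weights. Here p = N(0,1), q = N(0,2), f(x) = x^2, so the
# true value is Var_p = 1.

random.seed(0)
M = 200000

def log_norm_pdf(x, sigma):
    return -0.5 * (x / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

xs = [random.gauss(0.0, 2.0) for _ in range(M)]                 # draws from q
ws = [math.exp(log_norm_pdf(x, 1.0) - log_norm_pdf(x, 2.0)) for x in xs]
estimate = sum(w * x * x for w, x in zip(ws, xs)) / sum(ws)     # normalized
assert abs(estimate - 1.0) < 0.05
```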
We now describe in detail how to use the SMC-based posterior random measure $p_{M}(\theta_{t+1,a}\,|\,\mathcal{H}_{1:t})$…
We observe noticeable (almost linear) regret increases when the dynamics of the parameters swap the identity of the optimal arm. However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters,
D
This very low threshold for now serves to measure very basic movements and to check for validity of the data. Patients 11 and 14 are the most active, both having a median of more than 50 active intervals per day (corresponding to more than 8 hours of activity).
Overall, the distributions of all three kinds of values throughout the day roughly correspond to each other. In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the day.
Table 2 gives an overview of the number of different measurements available for each patient (for patient 9, no data is available). The study duration varies among the patients, ranging from 18 days for patient 8 to 33 days for patient 14.
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i…
Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level, and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
A
Table 5: Details regarding the hardware and software specifications used throughout our evaluation of computational efficiency. The system ran under the Debian 9 operating system and we minimized usage of the computer during the experiments to avoid interference with measurements of inference speed.
We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In case a reimplementation was not possible, we faithfully estimated a lowe...
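Such counts can be reproduced with back-of-the-envelope formulas for the standard layer types; the layers below are illustrative, not any specific entry of Table 1:

```python
# Trainable-parameter counting for the layer types these networks use.

def conv2d_params(c_in, c_out, k, bias=True):
    """k x k convolution: one k*k*c_in filter per output channel (+ bias)."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def dense_params(n_in, n_out, bias=True):
    return n_in * n_out + (n_out if bias else 0)

# First two VGG-style 3x3 conv layers on an RGB input:
assert conv2d_params(3, 64, 3) == 1792
assert conv2d_params(64, 64, 3) == 36928

total = conv2d_params(3, 64, 3) + conv2d_params(64, 64, 3) + dense_params(4096, 1000)
```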
Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity, and the superscript † represents pre-trai…
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba...
A
Pathwidth and cutwidth are classical graph parameters that play an important role in graph algorithms, independently of our application to computing the locality number. Therefore, it is the main purpose of this section to translate the reduction from MinCutwidth to MinPathwidth that takes MinLoc as an intermediate s…
One of the main results of this section is a reduction from the problem of computing the locality number of a word $\alpha$ to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr…
A reason why this direct reduction from cutwidth to pathwidth has been overlooked might be that the literature on cutwidth and pathwidth approximation is focussed on more general approximation techniques (i. e., vertex and edge separators), which then yield approximation algorithms for these graph parameters. Another r...
The relationship between cutwidth and pathwidth revealed by this direct reduction is best illustrated via a third graph parameter that we call second order cutwidth. To the best of our knowledge, this parameter has not explicitly been studied before.
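For intuition, cutwidth can be brute-forced directly from its definition on small graphs: minimize, over linear orderings of the vertices, the maximum number of edges crossing any gap of the ordering. This is exponential-time and purely illustrative:

```python
from itertools import permutations

# Brute-force cutwidth: min over vertex orderings of the max number of
# edges crossing a gap between consecutive positions. For intuition only.

def cutwidth(vertices, edges):
    best = float("inf")
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        worst = max(sum(1 for u, v in edges
                        if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
                    for gap in range(len(order) - 1))
        best = min(best, worst)
    return best

path = [(0, 1), (1, 2), (2, 3)]          # P4: cutwidth 1
triangle = [(0, 1), (1, 2), (0, 2)]      # K3: cutwidth 2
assert cutwidth(range(4), path) == 1
assert cutwidth(range(3), triangle) == 2
```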
In this work, we have answered several open questions about the string parameter of the locality number. Our main tool was to relate the locality number to the graph parameters cutwidth and pathwidth via suitable reductions. As an additional result, our reductions also pointed out an interesting relationship between th...
C
Shallow learning models such as decision trees and Support Vector Machines (SVMs) are ‘inefficient’, meaning that they require a large number of computations during training/inference, a large number of observations to achieve generalizability, and significant human labour to specify prior knowledge in the model [8].
The literature phrase search is the combined presence of each one of the cardiology terms indicated by (*) in Table I with each one of the deep learning terms related to architecture, indicated by (+) in Table II, using Google Scholar (https://scholar.google.com), Pubmed (https://ncbi.nlm.nih.gov/pubmed/) and Scopus…
The network architecture was adapted from VGG-16, similar to the DeepLab architecture[164] while the final segmentation was refined using a Conditional Random Field (CRF). The authors show that the introduction of unlabelled data improves segmentation performance when the training set is small.
At each iteration the expert annotates the most uncertain ECG beats in the test set, which are then used for training, while the output of the network assigns the confidence measures to each test beat. Experiments performed on MITDB, INDB, SVDB indicate the robustness and computational efficiency of the method.
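The annotation loop described here is standard uncertainty sampling; a toy sketch, in which the probabilities are stand-ins for the network's outputs rather than anything from the paper:

```python
# Uncertainty sampling: repeatedly send the pool items whose predicted
# probabilities are closest to 0.5 to the expert, then move them into
# the training set. Probabilities below are made-up stand-ins.

def most_uncertain(probs, k):
    """Indices of the k probabilities closest to 0.5 (binary uncertainty)."""
    return sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))[:k]

pool_probs = [0.95, 0.52, 0.10, 0.49, 0.80]   # model confidence per beat
pool = list(range(len(pool_probs)))
train = []
for _ in range(2):                             # two rounds, batch size 1
    local = most_uncertain([pool_probs[i] for i in pool], 1)[0]
    train.append(pool.pop(local))              # expert labels it; add to train

assert sorted(train) == [1, 3]                 # the two most uncertain beats
```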
A one-hidden-layer network was used for the initial testing of all voxels to obtain a small number of candidates, followed by a more accurate classification with a deep network. The learned image features are further combined with Haar wavelet features to increase the detection accuracy.
A
We presented SimPLe, a model-based reinforcement learning approach that operates directly on raw pixel observations and learns effective policies to play games in the Atari Learning Environment. Our experiments demonstrate that SimPLe learns to play many of the games with just 100K interactions with the envir…
In this paper our focus was to demonstrate the capability and generality of SimPLe only across a suite of Atari games; however, we believe similar methods can be applied to other environments and tasks, which is one of our main directions for future work. As a long-term challenge, we believe that model-based reinforcem…
Given the stochasticity of the proposed model, SimPLe can be used with truly stochastic environments. To demonstrate this, we ran an experiment where the full pipeline (both the world model and the policy) was trained in the presence of sticky actions, as recommended in (Machado et al., 2018, Section 5). Our world mod...
Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a...
Figure 2: Architecture of the proposed stochastic model with discrete latent variables. The input to the model is four stacked frames (as well as the action selected by the agent), while the output is the next predicted frame and the expected reward. Input pixels and action are embedded using fully connected layers, and there is per-…
C
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine, achieving superior results compared to traditional machine learning. The main advantage of deep learning methods is that they automatically learn hierarchical features from training data, making them scalable and genera…
This is achieved with the use of multilayer networks, consisting of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for …
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11,500 EEG signals.
For the purposes of this paper, and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para…
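As a concrete non-trainable S2I example, a short-window DFT magnitude spectrogram maps a 178-sample signal to a 2-D array. The window and hop sizes here are illustrative choices, not the paper's:

```python
import cmath, math

# Minimal non-trainable Signal2Image module: turn a 1-D signal into a
# 2-D time x frequency magnitude "image" via short windowed DFTs.

def spectrogram(signal, win=32, hop=16):
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    image = []
    for frame in frames:
        row = []
        for k in range(win // 2):      # keep the non-redundant frequencies
            coeff = sum(x * cmath.exp(-2j * math.pi * k * n / win)
                        for n, x in enumerate(frame))
            row.append(abs(coeff))
        image.append(row)
    return image

sig = [math.sin(2 * math.pi * 5 * t / 178) for t in range(178)]
img = spectrogram(sig)
assert len(img) == 10 and len(img[0]) == 16     # 10 frames x 16 bins
```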
A
As depicted in Fig. 10, for the step negotiation operation with a height of $h$, both $E_{Rw}<E_{Cw}$ and $E_{Rr}<E_{Cr}$ …
Figure 10: The Cricket robot tackles a step of height h using the rolling locomotion mode, obviating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas…
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result...
To assess the efficacy of the suggested autonomous locomotion mode transition strategy, simulation experiments featuring step heights of h, 2h, and 3h were conducted. These simulations involved continuous tracking of energy consumption for both total body negotiation ($E_{Rw}$…
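The transition rule itself reduces to a threshold test on accumulated energy; a toy sketch with made-up numbers, not the robot's measured values:

```python
# Energy-triggered mode transition: accumulate rolling-mode energy online
# and switch to walking as soon as the total exceeds the walking-gait
# threshold. All numbers are illustrative.

def negotiate_step(rolling_energy_per_tick, walking_threshold):
    """Return (mode, accumulated_energy_at_decision)."""
    accumulated = 0.0
    for e in rolling_energy_per_tick:
        accumulated += e
        if accumulated > walking_threshold:
            return "walking", accumulated      # transition triggered
    return "rolling", accumulated              # completed in rolling mode

# Low step: rolling stays under the threshold; high step: transition fires.
assert negotiate_step([1.0] * 5, walking_threshold=10.0)[0] == "rolling"
assert negotiate_step([4.0] * 5, walking_threshold=10.0)[0] == "walking"
```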
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ...
B
In this work, we address a significant drawback of the online advice model: all previous works assume that advice is, in all circumstances, completely trustworthy and precisely as defined by the algorithm. Since the advice is infallible, no reasonable online algorithm with advice would choose to ignor…
Furthermore, we show an interesting difference between the standard advice model and the model we introduce: in the former, an advice bit can be at least as powerful as a random bit, since an advice bit can effectively simulate any efficient choice of a random bit. In contrast, we show that in our model, there are situ...
Notwithstanding such interesting attributes, the known advice model has certain drawbacks. The advice is always assumed to be error-free information that may be used to encode some property, often explicitly connected to the optimal solution. In many settings, one can argue that such information cannot be readily a…
We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem. Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size.
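For orientation, the advice-free baseline for ski rental is the break-even strategy (rent until day B, then buy, with rent 1 per day and purchase price B), which is 2-competitive; the check below verifies that bound empirically. An advice bit then lets an algorithm move the buy day earlier or later, trading robustness against wrong advice for performance under correct advice:

```python
# Break-even ski rental: rent for days 1..B-1, buy on day B.
# Worst case: season length n >= B gives cost 2B - 1 versus OPT = B.

def cost(buy_day, season_length, B):
    """Cost of renting until buy_day (exclusive), then buying at price B."""
    if season_length < buy_day:
        return season_length          # season ended before we bought
    return (buy_day - 1) + B

B = 10
worst_ratio = max(cost(B, n, B) / min(n, B) for n in range(1, 100))
assert worst_ratio <= 2.0             # break-even is 2-competitive
```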
It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, as all information, are prone to transmission errors. In addition, the known advice models often allow information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution....
D
Traditionally, expert systems have been used to deal with complex problems that require the ability of human experts to be solved. These intelligent systems usually need knowledge engineers to manually code all the facts and rules acquired from human experts through interviews, for the system’s knowledge base (KB).
However, this is a vital aspect, especially when the task involves sensitive or risky decisions in which, usually, people are involved. Figure 9 shows an example of a piece of what could be a visual description of the classification process for subject 957 (note that this is the same subject who was prev…
A scenario that is gaining increasing interest in the classification of sequential data is the one referred to as “early classification”, in which, the problem is to classify the data stream as early as possible without having a significant loss in terms of accuracy.
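The decision rule of early classification can be sketched as "label once confident"; the confidence function here is a toy stand-in for a real sequence classifier:

```python
# Early classification: read the stream prefix and emit a label as soon
# as the classifier's confidence crosses a threshold, trading a little
# accuracy for earliness.

def early_classify(stream, confidence, threshold=0.9):
    """Return (label, n_read): classify once confidence >= threshold."""
    for t in range(1, len(stream) + 1):
        label, conf = confidence(stream[:t])
        if conf >= threshold:
            return label, t
    return label, len(stream)

# Toy confidence: the majority fraction of positive readings seen so far.
def toy_confidence(prefix):
    pos = sum(1 for x in prefix if x > 0) / len(prefix)
    return ("positive" if pos >= 0.5 else "negative"), max(pos, 1 - pos)

label, n_read = early_classify([1, 1, 1, -1, 1, 1], toy_confidence)
assert label == "positive" and n_read < 6   # decided before the stream ends
```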
Nonetheless, this manual process is very expensive and error-prone, since the KB of a real expert system includes thousands of rules. This, added to the rise of big data and cheaper GPU-powered computing hardware, is causing a major shift in the development of these intelligent systems, in which machine learning is incr…
D
In existing error feedback based sparse communication methods, most are for vanilla DSGD (Aji and Heafield, 2017; Alistarh et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al., 2019). There has appeared one error feedback based sparse communication method for DMSGD, called Deep Gradient Compression (...
$\mathbf{m}_{t,k}$, $k\in[K]$, is called local momentum since it only accumulates local gradient information from worker $k$.
We can see that DGC (Lin et al., 2018) is mainly based on local momentum while GMC is based on global momentum. Hence, each worker in DGC cannot capture global information from its local momentum, while each worker in GMC can capture global information from the global momentum even if sparse communication is …
However, the theory about the convergence of DGC is still lacking. Furthermore, although DGC combines momentum and error feedback, the momentum in DGC only accumulates stochastic gradients computed by each worker locally. Therefore, the momentum in DGC is a local momentum without global information.
GMC combines error feedback and momentum to achieve sparse communication in distributed learning. But different from existing sparse communication methods like DGC which adopt local momentum, GMC adopts global momentum. To the best of our knowledge, this is the first work to introduce global momentum into sparse commun...
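The compression side of such methods is top-k sparsification with error feedback: each round sends only the k largest-magnitude entries of (update + residual) and carries the remainder forward. The sketch below shows the compressor only; whether the update uses local or global momentum (as in GMC) is orthogonal to this step:

```python
# Top-k sparsification with error feedback: send the k largest-magnitude
# entries of (update + residual); keep the rest as residual for the next
# round, so nothing is permanently dropped.

def topk_with_error_feedback(update, residual, k):
    corrected = [u + r for u, r in zip(update, residual)]
    keep = set(sorted(range(len(corrected)),
                      key=lambda i: -abs(corrected[i]))[:k])
    sent = [corrected[i] if i in keep else 0.0 for i in range(len(corrected))]
    new_residual = [c - s for c, s in zip(corrected, sent)]
    return sent, new_residual

sent, res = topk_with_error_feedback([0.5, -2.0, 0.1, 1.0], [0.0] * 4, k=2)
assert sent == [0.0, -2.0, 0.0, 1.0]   # only the two largest entries travel
assert res == [0.5, 0.0, 0.1, 0.0]     # the rest is remembered, not lost
```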
C
These results suggest that reconstruction error by itself is not a sufficient metric for decomposing data in interpretable components. Trying to solely achieve lower reconstruction error (such as the case for the Identity activation function) produces noisy learned kernels, while using the combined measure of reconstru...
The three separate clusters depicted in Fig. 3 and the aggregated density plot in Fig. 4 between the Identity activation function, the ReLU and the rest show the effect of a sparser activation function on the representation.
Comparing the differences of $\bar{\varphi}$ between the Identity, the ReLU and the remaining sparse activation functions in Fig. 4, we notice that the latter produce a minimum region in which we observe interpretable kernels.
During validation we selected the models with the kernel size that achieved the best $\bar{\varphi}$ out of all epochs. During testing we feed the test data into the selected model and calculate $CR^{-1}$…
C
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better.
We construct a UAV ad-hoc network in a post-disaster scenario with $M$ identical UAVs randomly deployed, where $M$ is huge compared with a normal multi-UAV system. All the UAVs have the same battery volume $E$ and the same communication capability. The topological structure of Mult…
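Under a simple geometric model of Fig. 1b (our assumption for illustration, not a formula stated in the text), a UAV at altitude h with a cone field angle θ covers a ground disk of radius h·tan(θ/2):

```python
import math

# Coverage radius of a downward-facing UAV under a simple cone model:
# radius = altitude * tan(field_angle / 2). Assumed model, for intuition.

def coverage_radius(altitude, field_angle_deg):
    return altitude * math.tan(math.radians(field_angle_deg) / 2)

assert abs(coverage_radius(100.0, 90.0) - 100.0) < 1e-9   # tan(45 deg) = 1
# At a fixed field angle, doubling the altitude doubles the covered radius:
assert abs(coverage_radius(200.0, 90.0) - 200.0) < 1e-9
```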
Fig. 12 shows how the number of UAVs affects the computation complexity of SPBLLA. Since the total number of UAVs varies, the goal functions differ. The goal functions' values in the optimum states increase with the number of UAVs. Since goal functions are the sum of the utility functions, ...
Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit...
$\ldots = \underline{r}^{2}\left[-\overline{\overline{\nabla}}\cdot\left(\overline{f}\,\overline{\mathbf{v}}\,/\,\widehat{r}^{2}\right)\right]\ldots$
$\ldots\bigl(\overline{\widehat{\nabla}}\,\overline{\omega}\bigr)^{2}=\overline{\widehat{W}}*\bigl[\widehat{\mu}\bigl\{2\bigl(\overline{\widehat{Dr}}*\overline{v}_{r}\bigr)\ldots$
$\ldots\widehat{\eta}/\mu_{0}\bigr)\,\bigl(\bigl(\overline{\widehat{\nabla}}\,\overline{f}\bigr)/\widehat{r}\bigr)^{2}\bigr)\bigr]+\ldots$
$\ldots\widehat{\eta}\,\bigl(\overline{\widehat{\nabla}}\,\overline{f}\bigr)/\widehat{r}^{2}\bigr)\bigr\}=\frac{1}{2\pi}\,\overline{dV}^{T}*\bigl\{-\overline{\overline{\nabla}}\ldots$
Here, $\widehat{\eta}=\langle\overline{\eta}\rangle^{e}$, and $\widehat{\omega}=\langle\overline{v}_{\phi}/\overline{r}\rangle^{e}$...
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$...
$$f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\a&\text{if }u\neq\texttt{null},\ v\neq\texttt{null}\text{ and }u\neq v\\b&\text{if }u=v=\texttt{null}\\0&\text{otherwise.}\end{cases}$$
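As a small sketch (not the paper's code), the comparability function above can be written directly in Python; the encoding of the lattice values $1$, $a$, $b$, $0$ as Python values and the use of `None` for null are hypothetical choices:

```python
NULL = None  # hypothetical encoding of the null value

def comparability(u, v, a="a", b="b"):
    """Return the lattice value comparing u and v (None encodes null)."""
    if u == v and u is not NULL:
        return 1          # identical non-null values
    if u is not NULL and v is not NULL and u != v:
        return a          # both present but different
    if u is NULL and v is NULL:
        return b          # both missing
    return 0              # exactly one value is missing

print(comparability("x", "x"))    # 1
print(comparability(None, None))  # b
```

The four branches mirror the four cases of the displayed definition in order.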
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to treat absent values as possibly
Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality) by $h_{A}$...
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene...
Standard Dropout is the original Dropout method, introduced in 2012. It provides a simple technique for avoiding over-fitting in fully connected neural networks [12]. During each training phase, each neuron is excluded from the network with probability p. Once trained, the full network is used in the testing phase...
Deep neural networks are the state-of-the-art learning models used in artificial intelligence. The large number of parameters in neural networks makes them very good at modelling and approximating arbitrary functions. However, the large number of parameters also makes them particularly prone to over-fitting, requirin...
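As a concrete sketch (not the paper's code), the training-time behaviour of Dropout can be written in a few lines of NumPy. This uses the now-common "inverted" scaling at training time, which is equivalent to the original formulation's test-time rescaling:

```python
import numpy as np

def dropout_forward(x, p=0.5, train=True, rng=None):
    """Drop each unit with probability p during training and scale the
    survivors by 1/(1-p), so the full network needs no rescaling at test time."""
    if not train or p == 0.0:
        return x                        # full network at test time
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p     # keep each unit with probability 1-p
    return x * mask / (1.0 - p)

activations = np.ones(1000)
print(dropout_forward(activations, p=0.5).mean())  # close to 1.0 in expectation
```

The scaling keeps the expected activation unchanged between training and testing.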
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a fundamentally different approach from the other learning paradigms studied in Machine Learning, namely supervised and unsupervised learning. Rein...
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation across the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and aft...
Medical images, both 2D and volumetric, generally have larger file sizes than natural images, which inhibits the ability to load them entirely into memory for processing. As such, they need to be processed either as patches or sub-volumes, making it difficult for segmentation models to capture spatial relati...
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic...
Vorontsov et al. (2019), using a dataset defined in Cohen et al. (2018), proposed an image-to-image based framework to transform an input image with object of interest (presence domain) like a tumor to an image without the tumor (absence domain) i.e. translate diseased image to healthy; next, their model learns to add ...
Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr...
Exploring reinforcement learning approaches similar to Song et al. (2018) and Wang et al. (2018c) for semantic (medical) image segmentation to mimic the way humans delineate objects of interest. Deep CNNs are successful in extracting features of different classes of objects, but they lose the local spatial information...
Black line: the threshold from [28] indicating the value of $\lambda^{s}_{\text{max}}/2$ below which one should switch to the random cut to obtain a solution $\geq 0.53$...
We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges. Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yield...
Fig. 4 illustrates how the size of the cut $\gamma(\mathbf{z})$ induced by the spectral partition $\mathbf{z}$ changes as more edges are added and the original structure of the graph is corrupted (blue line). The figure also reports the size of the random cut (orange line) and the...
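A minimal sketch of how a cut size like $\gamma(\mathbf{z})$ can be computed from a partition vector, assuming the usual definition as the total weight of edges crossing the partition (the toy graph below is made up):

```python
import numpy as np

def cut_size(A, z):
    """Size of the cut induced by a partition vector z in {-1,+1}^n for a
    graph with symmetric adjacency matrix A: total weight of crossing edges."""
    z = np.asarray(z)
    different = np.not_equal.outer(z, z)   # True where endpoints differ
    return A[different].sum() / 2.0        # each undirected edge counted twice

# Toy check on a 4-cycle (edges 0-1, 1-2, 2-3, 3-0):
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(cut_size(A, [1, 1, -1, -1]))  # 2.0 (two crossing edges)
```

Splitting the cycle into alternating sides instead cuts all four edges, which is the bipartition a spectral method would recover here.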
Neural random forest imitation enables an implicit transformation of random forests into neural networks. Usually, data samples are propagated through the individual decision trees and the split decisions are evaluated during inference. We propose a method for generating input-target pairs by reversing this process and...
We now compare the proposed method to state-of-the-art methods for mapping random forests into neural networks and to classical machine learning classifiers such as random forests and support vector machines with a radial basis function kernel, which have been shown to be the best two classifiers across all UCI datasets (Fernán...
Random forests and neural networks share some similar characteristics, such as the ability to learn arbitrary decision boundaries; however, both methods have different advantages. Random forests are based on decision trees. Various tree models have been presented – the most well-known are C4.5 (Quinlan, 1993) and CART ...
The generalization performance has been widely studied. Zhang et al. (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data. Bornschein et al. (2020) analyze the performance across different dataset sizes. Olson et al. (2018) evaluate the performance of modern...
In contrast to neural networks, random forests are very robust to overfitting due to their ensemble of multiple decision trees. Each decision tree is trained on randomly selected features and samples. Random forests have demonstrated remarkable performance in many domains (Fernández-Delgado et al., 2014).
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al....
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
In this section, we provide a comprehensive overview of methods that enhance the efficiency of DNNs regarding memory footprint, computation time, and energy requirements. We have identified three different major approaches that aim to reduce the computational complexity of DNNs, i.e., (i) weight and activation quantiza...
Quantization approaches reduce the number of bits used to store the weights and the activations of DNNs. While quantization approaches obviously reduce the memory footprint of a DNN, the selected weight representation potentially also facilitates faster inference using cheaper arithmetic operations.
Furthermore, representations using fewer bits often facilitate faster computation. For instance, when quantization is driven to the extreme with binary weights $w\in\{-1,1\}$ and binary activations $x\in\{-1,1\}$, floating-point or fixed-point dot products...
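The standard trick here is that a $\{-1,+1\}$ dot product of length $n$ equals $n - 2\,\mathrm{popcount}(a \oplus b)$ once each vector is packed into a bit word (bit 1 encoding $+1$). A small sketch, with a made-up packing helper:

```python
def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors packed as n-bit integers:
    XOR counts disagreeing positions, so dot = n - 2 * popcount(a XOR b).
    Hardware implementations typically use the complementary XNOR form."""
    disagreements = bin((a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    return n - 2 * disagreements

# Check against the naive float dot product on a toy example:
a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
pack = lambda v: sum((1 << i) for i, x in enumerate(v) if x == 1)
print(binary_dot(pack(a), pack(b), len(a)))  # 0, same as sum(x*y for x, y in zip(a, b))
```

This replaces multiply-accumulate with a single XOR and a population count per machine word, which is the source of the speed-ups cited for binary networks.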
Quantization in DNNs is concerned with reducing the number of bits used for the representation of the weights and the activations. The reduction in memory requirements is obvious: using fewer bits for the weights results in a lower memory overhead for storing the corresponding model, and using fewer bits for the activ...
The results for the number of floating-point operations (FLOPs), parameters, activations, and memory (= parameters + activations) are reported in Figure 4. When considering the number of FLOPs and parameters, which are the main metrics in the literature on resource-efficient DNNs, it is clear that channel and group pru...
$\mathrm{PH}_{*}(\mathrm{VR}_{*}(X\vee Y);\mathbb{F})\cong\mathrm{PH}_{*}(\mathrm{VR}_{*}(X);\mathbb{F})\oplus\mathrm{PH}_{*}(\mathrm{VR}_{*}(Y);\mathbb{F}).$
By Azumaya’s theorem [10], persistence barcodes, whenever they exist, are unique: any two persistence barcodes associated to a given $V_{*}$ must agree (up to reordering). The most important existence result for persistence barcodes is Crawley-Boevey’s theorem ...
Note that the tensor product of two simple persistence modules corresponding to intervals $I$ and $J$ is the simple persistence module corresponding to the interval $I\cap J$. Therefore, the first part of Theorem 8 implies that
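As a toy illustration (plain Python, not from the paper), the statement reduces tensor products of interval modules to intersections of the underlying intervals; a sketch assuming half-open intervals $(l, r]$ encoded as pairs:

```python
def interval_intersection(I, J):
    """Intersect two half-open intervals (l, r], returning (l, r] or None.
    Models: interval module for I  (tensor)  interval module for J
            = interval module for I ∩ J (zero module if empty)."""
    l = max(I[0], J[0])
    r = min(I[1], J[1])
    return (l, r) if l < r else None

print(interval_intersection((0, 3), (1, 5)))  # (1, 3)
print(interval_intersection((0, 1), (2, 3)))  # None (the tensor product vanishes)
```

The `None` case corresponds to disjoint intervals, where the tensor product is the zero persistence module.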
We will first prove the claim surrounding Equation (5). By [23, Lemma 3.16 and Proposition 3.29], the multiplicity of $(r,s]$ in $\mathrm{barc}^{\mathrm{VR}}_{k}(X;\mathbb{F})$...
The second part of the proposition above, equation (3), implies that the right endpoint of any interval $I$ (often referred to as the death time of $I$) cannot exceed the radius $\mathrm{rad}(X)$ of $X$ (cf. Remark 9.1).
The rest of this paper is organized as follows. In the next two sections, we discuss literature that is related to visual, interactive assessment and interpretation of t-SNE projections as well as the necessary background information on how the t-SNE algorithm works. Section 4 presents our visualization approach inclu...
The problem that DR tries to solve is, in general, to find a low-dimensional representation of a high-dimensional data set that retains—as much as possible—its original structure. When used for visualization, the output is set to two or three dimensions, and the results are commonly visualized with scatterplots, where ...
Overall Accuracy   We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q...
A DR method is an algorithm that projects a high-dimensional data set to a low-dimensional representation, preserving the structure of the original data as much as possible. Most of these algorithms have some (or many) hyper-parameters that may considerably affect their results, but setting them correctly is not a triv...
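A hypothetical sketch of the grid-search workflow this motivates: try several hyper-parameter values, score each projection with a quality metric, and keep the best. Here a toy PCA-style projection and a simple $k$-neighbour preservation score stand in for the real DR methods and the paper's metrics; all names and data are made up:

```python
import numpy as np

def project(X, n_components):
    """Toy linear projection via SVD (a stand-in for a real DR method)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def knn_preservation(X, Y, k=5):
    """Fraction of each point's k nearest neighbours preserved in Y."""
    def knn(M):
        D = np.linalg.norm(M[:, None] - M[None, :], axis=-1)
        np.fill_diagonal(D, np.inf)
        return np.argsort(D, axis=1)[:, :k]
    high, low = knn(X), knn(Y)
    return float(np.mean([len(set(h) & set(l)) / k for h, l in zip(high, low)]))

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
# "Grid search" over one hyper-parameter, scoring each candidate projection:
scores = {d: knn_preservation(X, project(X, d)) for d in (2, 3, 4)}
print(max(scores, key=scores.get))
```

Real tools repeat this over many hyper-parameters (e.g., perplexity for t-SNE) and several complementary metrics, then let the user inspect the representative projections.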
Most similarly to one of our proposed interactions (the Dimension Correlation, Subsection 4.4), in AxiSketcher [47] (and its prior version InterAxis [48]) the user can draw a polyline in the scatterplot to identify a shape, which results in new non-linear high-dimensional axes to match the user’s intentions. Since the...
Guided learning strategy: A novel update mechanism for metaheuristic algorithms design and improvement - 2024 [30]: This work provides guidelines for improving the performance of metaheuristics. The authors have developed a strategy for recalling the algorithm’s requirements based on the current population. Authors ann...
A Simple statistical test against origin-biased metaheuristics - 2024 [31]: The authors have developed a test to determine algorithm bias. The test is based on the idea that an unbiased algorithm can choose either direction for one of two different local optima in a function. If there is a difference in behavior betwee...
Benchmarks: The choice of benchmarking in algorithm evaluation can vary between real-world scenarios and comparisons against existing algorithms. Selecting the right benchmark is crucial, as study conclusions heavily rely on the test environment. However, chosen benchmarks often exhibit biases that can unfairly advanta...
In [21], the authors perform a comparison between seven bio-inspired algorithms with various benchmarks and discover that “these (algorithms) contain a centre-bias operator that lets them find optima in the centre of the benchmark set with ease”. The conclusion is that making more “comparison with other methods (that do ...
Metaheuristic optimization algorithms: a comprehensive overview and classification of benchmark test functions - 2024 [37]: This work focuses on the practical scenario of developing a new metaheuristic. It reviews over 200 mathematical test functions and more than 50 real-world engineering design problems. For each fu...
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which $k$-means does not handle. Therefore,...
Roughly speaking, network embedding approaches can be classified into two categories: generative models [13, 14] and discriminative models [15, 16]. The former try to model a connectivity distribution for each node, while the latter learn to distinguish directly whether an edge exists between two nodes. In recent y...
Network embedding is a fundamental task for graph-structured data arising in recommendation systems, social networks, etc. The goal is to map the nodes of a given graph into latent features (namely embeddings) such that the learned embeddings can be utilized for node classification, node clustering, and link prediction.
To apply graph convolution to unsupervised learning, GAE was proposed [20]. GAE first transforms each node into a latent representation (i.e., an embedding) via a GCN, and then aims to reconstruct part of the input. The GAEs proposed in [20, 29, 22] reconstruct the adjacency via a decoder, while the GAEs developed in [21...
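A minimal sketch of the adjacency-reconstructing decoder, assuming the common inner-product form $\hat{A}=\sigma(ZZ^{\top})$ where $Z$ is the embedding matrix produced by the GCN encoder (the encoder itself is omitted and the embeddings below are random placeholders):

```python
import numpy as np

def reconstruct_adjacency(Z):
    """Decode node embeddings Z (n x d) into edge probabilities via
    the inner-product decoder A_hat = sigmoid(Z Z^T)."""
    logits = Z @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 2))        # toy embeddings for 5 nodes
A_hat = reconstruct_adjacency(Z)
print(A_hat.shape)  # (5, 5)
```

Training would then minimize a reconstruction loss between `A_hat` and the observed adjacency; the decoder is parameter-free, so all learning happens in the encoder.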
Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping, and requests/responses to name servers, and we apply the suitable test depending on the server that we identify on the tested networ...
Statistics of the IPID value distribution among tested servers are plotted in Figure 2. When ICMP is filtered, the test results in ERROR; when run with TCP, the IPID values are often zero (ZERO IPID in the graph) in Figure 2. To improve the coverage of the IPID technique we merge the ICMP&TCP and ICMP&UDP results for each server ...
Measuring IPID increment rate. The traffic to the servers is stable and hence can be predicted (Wessels et al., 2003). We validate this by sampling the IPID value at the servers which we use for running the test. One example evaluation of IPID sampling on one of the busiest servers is plotted in Figure 3. In this eva...
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the...
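A simplified sketch of the rate estimation this methodology relies on: sample the globally incremental IPID at known times and convert the (wrap-around-corrected) differences into increments per second. The 16-bit unwrapping and all sample values below are illustrative assumptions, not the paper's data:

```python
def ipid_rate(samples):
    """samples: chronological (time_seconds, ipid) pairs from one server.
    Returns observed IPID increments per second, unwrapping the
    16-bit counter across samples."""
    total = 0
    for (_, v0), (_, v1) in zip(samples, samples[1:]):
        total += (v1 - v0) % 65536      # modular difference handles wrap-around
    duration = samples[-1][0] - samples[0][0]
    return total / duration

print(ipid_rate([(0, 100), (1, 350), (2, 600)]))  # 250.0
```

In the measurement, a jump in this rate beyond the server's predicted baseline indicates additional traffic reaching the service.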
The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer....
For each batch $T$ from 3 through 10, the batches $1,2,\ldots,T-1$ were used to train skill NN and context+skill NN models for 30 random initializations of the starting weights. The accuracy was measured classifying examples from batch $T$ (Fig. 3A, Table 1, Skill...
The second comparison is between the weighted ensembles of SVMs, i.e., the state of the art [7], and the weighted ensembles of neural networks. For each batch, an SVM and a neural network were trained with that batch as the training set. Weighted ensembles were constructed for each batch $T$ by assigning weig...
While context did introduce more parameters to the model (7,575 parameters without context versus 14,315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer ...
The context pathway is based on a recurrent neural network (RNN) approach. It reuses weights and biases across the steps of a sequence and can thus process variable-length sequences. The alternative was to use long short-term memory (LSTM), which employs gating variables to better remember information over long sequen...
Unfortunately, $T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)$...
Algorithm $\mathcal{B}$ is simply algorithm $\mathcal{A}$, but after every step it waits as long as necessary to make its expected running time for that step equal to the bound calculated for this step. To be precise, there are two types of waiting, best explained by an example.
3: Compute the sets $\mathcal{B}_{1}^{(1)},\ldots,\mathcal{B}_{|\mathcal{T}|+1}^{(1)}$...
Note that the time waited is independent of $Q$. Together, these two types of waiting ensure that (i) the time needed by $\mathcal{B}$ is monotone in $|Q|$ and (ii) the total expected time needed by $\mathcal{B}$ equals the calculated upper bound for $\mathcal{A}$...
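A toy sketch of the padding idea (the paper equalizes *expected* running time per step; this simplification pads the observed wall-clock time of each step up to a precomputed bound, so the duration no longer depends on the input). The bound and workload below are made up:

```python
import time

def padded_step(work, bound_seconds):
    """Run one step, then wait until bound_seconds have elapsed, so the
    step's observed duration is determined by the bound, not the input."""
    start = time.monotonic()
    result = work()
    elapsed = time.monotonic() - start
    if elapsed < bound_seconds:
        time.sleep(bound_seconds - elapsed)
    return result

out = padded_step(lambda: sum(range(1000)), bound_seconds=0.01)
print(out)  # 499500
```

Padding every step this way is what makes the total time monotone and input-independent in the construction above.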
We conclude this section by presenting a pair $S,T$ of semigroups without a homomorphism $S\to T$ or $T\to S$, where $S$ and $T$ possess typical properties of automaton semigroups, which makes them good candidates for also belong...
The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem...
A semigroup $S$ is generated by a set $Q$ if every element $s\in S$ can be written as a product $q_{1}\dots q_{n}$ of factors from $Q$...
A semigroup arising in this way is called self-similar. Furthermore, if the generating automaton is finite, it is an automaton semigroup. If the generating automaton is additionally complete, we speak of a completely self-similar semigroup or of a complete automaton semigroup.
The word problem of a semigroup finitely generated by some set $Q$ is the decision problem of whether two input words over $Q$ represent the same semigroup element. The word problem of any automaton semigroup can be solved in polynomial space and, under common complexity-theoretic assumptions, this cann...
We probe the reasons behind the performance improvements of HINT and SCR. We first analyze if the results improve even when the visual cues are irrelevant (Sec. 4.2) or random (Sec. 4.3) and examine if their differences are statistically significant (Sec. 4.4). Then, we analyze the regularization effects by evaluating ...
We compare four different variants of HINT and SCR to study the causes behind the improvements, including models that are fine-tuned on: 1) relevant regions (state-of-the-art methods), 2) irrelevant regions, 3) fixed random regions, and 4) variable random regions. For all variants, we fine-tune a pre-trained UpDn, whi...
As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the p...
Following Selvaraju et al. (2019), we report Spearman’s rank correlation between network’s sensitivity scores and human-based scores in Table A3. For HINT and our zero-out regularizer, we use human-based attention maps. For SCR, we use textual explanation-based scores. We find that HINT trained on human attention maps...
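The rank correlation reported above can be computed by hand; a small sketch for sequences without ties (the scores below are made-up numbers, not the paper's data):

```python
def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks, which for
    tie-free data reduces to 1 - 6*sum(d^2)/(n*(n^2-1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

network_sensitivity = [0.9, 0.4, 0.7, 0.1, 0.5]   # hypothetical scores
human_scores        = [0.8, 0.3, 0.9, 0.2, 0.4]   # hypothetical scores
print(spearman(network_sensitivity, human_scores))  # 0.9
```

Because only ranks matter, the metric rewards agreement in the ordering of regions, not in the raw magnitudes of the sensitivity scores.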
We compare the baseline UpDn model with HINT and SCR-variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Fu...
Table 3 shows the results for the answer sentence selection task, comparing the performance of BERT and PrivBERT. Results for BERT are as reported by Ravichander et al. (2019). PrivBERT achieves state-of-the-art results, improving on BERT by about 6%. PrivBERT therefore has been shown to achieve sta...
Duplicate and Near-Duplicate Detection. Examination of the corpus revealed that it contained many duplicate and near-duplicate documents. We removed exact duplicates by hashing all the raw documents and discarding multiple copies of exact hashes. Through manual inspection, we found that a number of privacy policies fro...
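The exact-duplicate step described above can be sketched in a few lines; the paper says only that raw documents were hashed, so the choice of SHA-256 and the toy documents here are assumptions:

```python
import hashlib

def deduplicate(documents):
    """Keep the first occurrence of each distinct document, comparing
    documents by the hash of their raw text."""
    seen, unique = set(), []
    for doc in documents:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["privacy policy A", "privacy policy B", "privacy policy A"]
print(len(deduplicate(docs)))  # 2
```

Near-duplicates need fuzzier comparisons (e.g., shingling or similarity hashing); exact hashing only catches byte-identical copies.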
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus PrivaSeer Corpus on privacy policies ...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)...
We created the PrivaSeer Corpus which is the first large scale corpus of contemporary website privacy policies and consists of just over 1 million documents. We designed a novel pipeline to build the corpus, which included web crawling, language detection, document classification, duplicate removal, document cross ver...
G4: Facilitate human interaction and offer guidance. During development of any VA tool, it is key to decide on concrete visual representations and interaction techniques between multiple coordinated views. It is not uncommon to find gaps between visualization design guidelines and their applicability in implemented ...
G5: Reveal and reduce cognitive biases. Visualizations should be carefully chosen in order to reduce cognitive biases. Cognitive bias is, in simple terms, a human judgment that drifts away from the actual information that should be conveyed by a visualization, i.e., it “involves a deviation from reality that is predic...
T5: Inspect the same view with alternative techniques and visualizations. To help avoid the appearance of cognitive biases, alternative interaction methods and visual representations of the same data from another perspective should be offered to the user (G5).
Thus, it is considered an iterative process: the expert might start with the algorithms’ exploration and move to the data wrangling, or vice versa. “The former approach is even more suitable for your VA system, because you use the accuracy of the base ML models as feedback/guidance to the expert in order to understand ...
The use of visualization for ensemble learning could possibly introduce further biases to the already blurry situation based on the different ML models involved. Thus, the thorough selection of both interaction techniques and visual representations that highlight and potentially overcome any cognitive biases is a major...
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))$...
$\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}.$
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
$p(v,[013])=p(v,[313])=p(v,[113])=1$. Similarly, when $f=[112]$,
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
D
To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML: Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor...
Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks differ from each other. We shuffle the samples and randomly divide tasks to construct a setting in which tasks are similar to each other. For a fair comparison, each task in this setting also has 120 and 1200 utterances o...
Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2). When the data quantity i...
Model-Agnostic Meta-Learning (MAML) [Finn et al., 2017] is one of the most popular meta-learning methods. It is trained on plenty of tasks (i.e. small data sets) to get a parameter initialization which is easy to adapt to target tasks with a few samples. As a model-agnostic framework, MAML is successfully employed in d...
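The inner/outer-loop structure of MAML described above can be sketched in a few lines. The toy linear-regression tasks, the single inner step, and the first-order approximation below are illustrative assumptions, not the setup used in the experiments of this section.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, X, y):
    """Gradient of the mean-squared error for a linear model y ~ X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def maml_step(w, tasks, inner_lr=0.01, outer_lr=0.1):
    """One first-order MAML meta-update: adapt on each task's support set,
    then move the initialization toward parameters that work *after* adaptation."""
    meta_grad = np.zeros_like(w)
    for Xs, ys, Xq, yq in tasks:
        w_adapted = w - inner_lr * loss_grad(w, Xs, ys)  # inner loop (one step)
        meta_grad += loss_grad(w_adapted, Xq, yq)        # outer gradient (first-order)
    return w - outer_lr * meta_grad / len(tasks)

def make_task(a, n=20):
    """Toy task: regression targets y = a * x with a task-specific slope a."""
    X = rng.normal(size=(n, 1))
    return X[:10], a * X[:10, 0], X[10:], a * X[10:, 0]

tasks = [make_task(a) for a in (0.5, 1.0, 1.5)]
w = np.zeros(1)
for _ in range(200):
    w = maml_step(w, tasks)
```

After meta-training, the initialization `w` sits near the center of the task distribution, from which one inner gradient step adapts it cheaply to any individual slope.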
B
Although the GP-based UAV position and attitude prediction results fit the position and attitude data well, the prediction performance is affected by the UAV's mobility. When the UAV has higher mobility, such as a more random trajectory and a high velocity, the prediction error may influence the beam tracking. The c...
In this subsection, two beam tracking schemes with different types of antenna array are illustrated by simulation results. One is the proposed DRE-covered CCA scheme, where all the t-UAVs are equipped with the CCA of size $N_t=64$, $M_t=16$...
The SEs of the two array schemes against the transmit power with $K=2$ t-UAVs are illustrated in Fig. 13. The TE-aware codeword selection uses the proposed Algorithm 2 and Algorithm 3. Serving as a reference, the minimum-beamwidth scheme always selects the minimum beamwidth, i.e., the maximum number of anten...
Without loss of generality, let us focus on the TE-aware codeword selection for the $k$-th t-UAV at the r-UAV side. The beam gain is selected as the optimization objective, and the problem of beamwidth control is translated to choosing the appropriate subarray size, which corresponds to the appropriate layer in ...
As shown in Fig. 11, the SE of the CCA codebook scheme and the traditional codebook scheme is compared. The proposed DRE-covered CCA codebook is used in the CCA codebook scheme. In the traditional codebook scheme, the codebook without subarray partition is used. The CCA on the r-UAV is equally partitioned into $K$...
B
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument...
This will be bootstrapped to the multi-color case in later sections. Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on the left must be connected, via the unique edge relation, to every node on the ri...
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the "big enough" case, and also may benefit from a reduction that allows one to restrict
The requirement that $\bar{M}\mid\bar{N}$ is extra big enough ensures that we have enough edges to perform the edge swapping. This completes the proof for case 2 when the assumptions (a1) and (a2) hold.
A
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe...
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
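As a concrete illustration of semi-gradient TD with a nonlinear approximator, the sketch below runs TD(0) with a small two-layer ReLU network on a hypothetical three-state chain. The chain, network width, and step size are all illustrative assumptions, and this toy setting is far from the overparameterized mean-field regime analyzed here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state chain 0 -> 1 -> 2 -> terminal, reward 1 on the last step.
# True values with gamma = 0.9: V(0) = 0.81, V(1) = 0.9, V(2) = 1.0.
transitions = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)]
gamma = 0.9

H = 32  # hidden width of the two-layer network
W1 = rng.normal(scale=0.5, size=(H, 3))
b1 = np.zeros(H)
w2 = rng.normal(scale=0.5, size=H)

def value(s):
    """Two-layer ReLU network on one-hot state features."""
    x = np.eye(3)[s]
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

lr = 0.01
for _ in range(4000):
    for s, r, s_next in transitions:
        x = np.eye(3)[s]
        pre = W1 @ x + b1
        h = np.maximum(pre, 0.0)
        target = r + (gamma * value(s_next) if s_next is not None else 0.0)
        delta = target - w2 @ h  # TD error
        # Semi-gradient: differentiate v(s) only, treating the target as fixed.
        mask = (pre > 0).astype(float)
        g_w2 = delta * h
        g_W1 = delta * np.outer(w2 * mask, x)
        g_b1 = delta * w2 * mask
        w2 += lr * g_w2
        W1 += lr * g_W1
        b1 += lr * g_b1
```

On this tiny chain the TD fixed point coincides with the true value function, so the learned values approach $0.81$, $0.9$, $1.0$; in general, semi-gradient TD with a nonlinear approximator carries no such guarantee, which is precisely the gap the analysis above addresses.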
B
As the number of Transformer layers is pre-specified, the parameters of the depth-wise LSTM can either be shared across layers or be independent. Table 3 documents the importance of the capacity of the module for the hidden state computation, and sharing the module is likely to hurt its capacity. We additionally study ...
Table 5 shows that: 1) Sharing parameters for the computation (Equation 6) of the depth-wise LSTM hidden state significantly hampers performance, which is consistent with our conjecture. 2) Sharing parameters for the computation of gates (Equations 2, 3, 4) leads to slightly higher BLEU with fewer parameters introduce...
Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the ne...
In our approach (“with depth-wise LSTM”), we used the 2-layer neural network for the computation of the LSTM hidden state (Equation 6) and shared LSTM parameters across stacked encoder layers and different shared parameters across decoder layers for computing the LSTM gates (Equations 2, 3, 4). Details are provided in...
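A minimal numpy sketch of the depth-wise LSTM cell described above: the gate computations (Equations 2-4) use shared parameters, and the hidden-state computation (Equation 6) is a 2-layer network standing in for the removed feed-forward sub-layer. The dimensions, initialization, and exact gating form are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model dimension (illustrative)

# Shared gate parameters (Equations 2-4): input, forget, and output gates.
Wg = rng.normal(scale=0.1, size=(3 * d, 2 * d))
bg = np.zeros(3 * d)

# 2-layer feed-forward network for the hidden-state computation (Equation 6),
# playing the role of the replaced feed-forward sub-layer.
W_in = rng.normal(scale=0.1, size=(4 * d, 2 * d))
W_out = rng.normal(scale=0.1, size=(d, 4 * d))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def depthwise_lstm_step(x, h, c):
    """One step across *depth*: x is the current layer's sub-layer output,
    (h, c) carry state from the previous Transformer layer."""
    z = np.concatenate([x, h])
    i, f, o = np.split(sigmoid(Wg @ z + bg), 3)
    c_tilde = W_out @ np.maximum(W_in @ z, 0.0)  # candidate via 2-layer FFN
    c_new = f * c + i * np.tanh(c_tilde)
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Run the cell over a stack of 6 "layers" with shared parameters.
h, c = np.zeros(d), np.zeros(d)
for layer in range(6):
    x = rng.normal(size=d)  # stand-in for the layer's sub-layer output
    h, c = depthwise_lstm_step(x, h, c)
```

Because the same `Wg`, `W_in`, `W_out` are reused at every depth, the cell connects stacked layers the way a temporal LSTM connects time steps, replacing the residual connection.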
B
commute. As $I$ is non-empty, consider some $i\in I$; we have $g_i=\mathrm{id}_i\circ g=\mathrm{id}_i\circ g'$...
uniqueness in $\mathbf{Top}$. This shows that $\{f_i\colon X\to X_i\}_{i\in I}$...
that the identity map $\mathrm{id}_i\colon X\to Y_i$ is a spectral map for all $i\in I$.
in $\mathbf{Top}$. Since the maps $\mathrm{id}_i\colon X\to X_i$ are spectral, Lemma 7.2
Assume that $\{g_i\colon Z\to Y_i\}_{i\in I}$ is a collection of...
C
The proposed learning representation offers three unique advantages. First, the ordinal distortion is directly perceivable from a distorted image, and it solves a more straightforward estimation problem than the implicit metric regression. As we can observe, the farther the pixel is away from the principal point, the l...
Accurately estimating the distortion parameters derived from a specific camera is a crucial step in distortion rectification. However, two main limitations make learning the distortion parameters challenging. (i) The distortion parameters are not observable and are hard to learn from a single distorted image, such as...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, including the highest metrics on PSNR and SSIM, as well as the lowest metric on MDLD. Specifically, compared with the traditional methods [23, 24] based on the hand-crafted features, our approach overcomes the scene l...
Second, the ordinal distortion is homogeneous as all its elements share a similar magnitude and description. Therefore, the imbalanced optimization problem no longer exists during the training process, and we do not need to focus on the cumbersome factor-balancing task anymore. Compared to the distortion parameters wi...
D
Apart from these empirical findings, there have been some theoretical studies on large-batch training. For example, the convergence analyses of LARS have been reported in [34]. The work in [37] analyzed the inconsistency bias in decentralized momentum SGD and proposed DecentLaM for decentralized large-batch training.
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
Furthermore, researchers in [19] argued that the extrapolation technique is suitable for large-batch training and proposed EXTRAP-SGD. However, experimental implementations of these methods still require additional training tricks, such as warm-up, which may make the results inconsistent with the theory.
Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33] proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11]
C
Given a newly arriving scenario $A$, we can set $(H_A,\pi^A)\leftarrow\textsc{GreedyCluster}(A,R,-R)$,...
5-approximation for homogeneous 2S-MuSup-Poly, with $|\mathcal{S}|\leq 2^{m}$ and runtime $\operatorname{poly}(n,m,\Lambda)$.
There is a polynomial-time 3-approximation for homogeneous RW-MatSup. There is a 3-approximation algorithm for RW-MuSup, with runtime $\operatorname{poly}(n,m,\Lambda)$.
We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a...
In this section we tackle the simplest problem setting, designing an efficiently-generalizable 3-approximation algorithm for homogeneous 2S-Sup-Poly. To begin, we are given a list of scenarios $Q$ together with their probabilities $p_A$,...
B
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
That is, the mean square error at the next time can be controlled by that at the previous time and the consensus error. However, this cannot be obtained for the case with linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded and the inequality (28) in [15] does n...
As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between the local optimizers' states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error, which leads the nonnegative supermartingale converg...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi...
(Lemma 3.1). To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (...
C
Compared to generalization, the bucketization technique [33, 18] maintains excellent information utility because it preserves all the original QI values. However, most existing approaches cannot prevent identity disclosure, and the existence of individuals in the published table is likely to be disclosed [27]. Furthermore, t...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users' statistics...
Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata, while the approaches of the $k$-anonymity family sanitize the origi...
In recent years, the massive digital information of individuals has been collected by numerous organizations. The data holders, also known as curators, use the data for data mining tasks, meanwhile they also exchange or publish microdata for further comprehensive research. However, the publication of microdata poses cr...
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
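For concreteness, the Laplace mechanism mentioned above can be sketched in a few lines; the counting query and the parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """epsilon-differentially private answer: add Laplace noise with scale
    sensitivity / epsilon to the exact query result."""
    return true_answer + rng.laplace(0.0, sensitivity / epsilon)

# A counting query has sensitivity 1: adding or removing one individual
# changes the count by at most 1.
data = rng.integers(0, 2, size=10_000)
true_count = int(data.sum())
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
```

A smaller privacy budget `epsilon` yields a larger noise scale and therefore stronger privacy, at the cost of a less accurate aggregated answer.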
D
In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. The mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Due to limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger mask to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (20...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
A
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
We denote by $\varepsilon_i\colon\{-1,1\}^{n}\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_i(\delta_1,\dots,\delta_n)=\delta_i$.
In version 1 of this note, which can still be found on the arXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
B
The proof idea is similar to that of Theorem 1. The only difference is that within each piecewise-stationary segment, we use the hard instance constructed by Zhou et al. (2021); Hu et al. (2022) for inhomogeneous linear MDPs. Optimizing the length of each piecewise-stationary segment $N$ and the variation magni...
The rest of the paper is organized as follows. Section 2 presents our problem definition. Section 3 establishes the minimax regret lower bound for nonstationary linear MDPs. Section 4 and Section 5 present our algorithms LSVI-UCB-Restart, Ada-LSVI-UCB-Restart and their dynamic regret bounds. Section 6 shows our experi...
In this section, we derive minimax regret lower bounds for nonstationary linear MDPs in both inhomogeneous and homogeneous settings, which quantify the fundamental difficulty when measured by the dynamic regret in nonstationary linear MDPs. More specifically, we consider the inhomogeneous setting in this paper, where the t...
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inh...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202...
C
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio...
The survey was written in English and made available to anyone with the hyperlink. Participation was fully voluntary. For dissemination, various channels were employed including a mailing list of students from a local Singapore university, an informal Telegram supergroup joined by students, alumni, and faculty of the ...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
75 of the 104 responses fulfilled the criterion of having respondents who were currently based in Singapore. This set was extracted for further analysis and will be henceforth referred to as ‘SG-75’. The details on the participant demographics of SG-75 are shown in Table 1. From SG-75, two more subsets were formed via ...
A
Conventional KG embedding approaches broadly fall into two types: Triplet-based and GNN-based methods. Triplet-based methods include translational methods [11, 27, 28], semantic matching methods [29, 30, 31, 32], and neural methods [33, 34]. For a detailed understanding, interested readers can refer to surveys [3, 35, ...
The existing methods for KG embedding and word embedding exhibit even more similarities. As shown in Figure 1, the KG comprises three triplets conveying similar information to the example sentence. Triplet-based KG embedding models like TransE [11] transform the embedding of each subject entity and its relation into a ...
GNN-based methods [13, 37, 38, 39, 40, 41, 42] introduce relation-specific composition operations to combine neighbors and their corresponding relations before performing neighborhood aggregation. They usually leverage existing GNN models, such as GCN and GAT [43, 44], to aggregate an entity’s neighbors. It is worth no...
The proposed DAN is compatible with most existing GNN-based methods, allowing these methods to leverage our DAN as the GNN module for entity encoding. Furthermore, the computational cost is comparable to that of existing methods. Therefore, we offer an efficient and general GNN architecture for KG embedding.
Although GCN and GAT are generally regarded as inductive models for graph representation learning, our analysis in previous sections suggests their limited applicability on relational KG embedding. In further validation of this, we compare the performance of decentRL with AliNet and GAT on datasets containing new enti...
B
9: Take $t_{\rm vdm}$ stochastic gradient ascent steps to maximize $L_{\rm VDM}$ and update the parameters $(\varphi,\psi,\theta)$...
Upon fitting VDM, we propose an intrinsic reward by an upper bound of the negative log-likelihood, and conduct self-supervised exploration based on the proposed intrinsic reward. We evaluate the proposed method on several challenging image-based tasks, including 1) Atari games, 2) Atari games with sticky actions, which...
To validate the effectiveness of our method, we compare the proposed method with the following self-supervised exploration baselines. Specifically, we conduct experiments to compare the following methods: (i) VDM. The proposed self-supervised exploration method. (ii) ICM [10]. ICM first builds an inverse dynamics mode...
In this section, we conduct experiments to compare the proposed VDM with several state-of-the-art model-based self-supervised exploration approaches. We first describe the experimental setup and implementation detail. Then, we compare the proposed method with baselines in several challenging image-based RL tasks. The ...
In this section, we introduce VDM for exploration. In section III-A, we introduce the theory of VDM based on conditional variational inference. In section III-B, we present the detail of the optimizing process. In section III-C, we analyze the result of VDM used in ‘Noisy-Mnist’ that models the multimodality and stoch...
C
However, we only use the $P_A$, $A=A_{m,n,p}$, $p=1,2$, unisolvent nodes to determine the interpolants, whereas Tr...
Even though the constructive version of the Weierstrass Theorem given by Serge Bernstein [6] provides a recipe for computing such approximations it only delivers a linear convergence rate. In 1D, however, interpolation on Chebyshev and Legendre nodes is known to avoid Runge’s phenomenon for a generic class of functions...
Further, we recognize that the Vandermonde approach is inaccurate and even becomes numerically unstable (rising errors) for higher degrees. It is therefore inappropriate for approximating strongly varying functions, such as the Runge function. As expected, (Chebyshev) polynomial interpolation on uniform grids (uniform)...
convergence rates for the Runge function, as a prominent example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with the space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality...
In summary: We answer Questions 1-2 by establishing an efficient $m$D interpolation scheme that can approximate a generic class of functions and, at least empirically, reaches the proposed exponential approximation rate for strongly varying Trefethen functions, such as the Runge function $f(x)=1/(1+10\|x\|^2)$...
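The 1-D behaviour discussed above is easy to reproduce: the sketch below interpolates the 1-D Runge-type function $f(x)=1/(1+10x^2)$ at Chebyshev versus uniform nodes and compares the maximal errors. The degree 40 is an arbitrary illustrative choice.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: 1.0 / (1.0 + 10.0 * x**2)  # 1-D Runge-type function

n = 40  # interpolation degree (illustrative)
cheb_nodes = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
uni_nodes = np.linspace(-1.0, 1.0, n + 1)

# Fit degree-n interpolants through the two node sets. The Chebyshev basis is
# well-conditioned either way; the node *placement* drives Runge's phenomenon.
cheb_coef = C.chebfit(cheb_nodes, f(cheb_nodes), n)
uni_coef = C.chebfit(uni_nodes, f(uni_nodes), n)

xx = np.linspace(-1.0, 1.0, 2001)
cheb_err = np.max(np.abs(C.chebval(xx, cheb_coef) - f(xx)))
uni_err = np.max(np.abs(C.chebval(xx, uni_coef) - f(xx)))
```

Chebyshev nodes keep the error uniformly small across $[-1,1]$, while uniform nodes produce the familiar oscillatory blow-up near the endpoints.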
B
$\sup_{x,x'\in\operatorname{supp}(\mu)} d(x,x')\leq B_{\mu},\qquad \sup_{y,y'\in\operatorname{supp}(\nu)} d(y,y')\leq B_{\nu}.$
The supports of $\mu$ and $\nu$ are denoted as $\operatorname{supp}(\mu)$ and $\operatorname{supp}(\nu)$, respectively. We assume that both $\mu$ and $\nu$ are unknown distributions, and that their supports belong to the metric space $(\mathbb{R}^d,d)$...
However, the diameter cannot be chosen arbitrarily large, since otherwise the sample complexity bound will become too conservative. For instance, when the distribution $\mu$ is known to be sub-Gaussian with parameter $\sigma$, we restrict the support to $\bigl(\mathbb{E}_\mu[X]-\sqrt{2\log(1/\eta)}\,\sigma,\ \mathbb{E}_\mu[X]+\sqrt{2\log(1/\eta)}\,\sigma\bigr)$...
The supports of the target distributions $\mu$ and $\nu$ have finite diameters, $B_{\mu}$ and $B_{\nu}$, respectively:
Assumption 1(II) does not hold when the distributions $\mu$ and $\nu$ have unbounded supports. In that case, we restrict the target distribution to a bounded support such that the probability of locating in that support is relatively large.
D
The framework is general and can utilize any DGM. Furthermore, even though it involves two stages, the end result is a single model which does not rely on any auxiliary models, additional hyper-parameters, or hand-crafted loss functions, as opposed to previous works addressing the problem (see the related-work section...
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and corre...
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$...
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervise...
D
We will look at the inputs through 18 test cases to see if the circuit is acceptable. Next, we verify with DFS that the output is possible for the actual pin-connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
We examine the inputs through 18 test cases to see whether the circuit is acceptable. Next, DFS verifies that the output is feasible for the actual pin-connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
DFS (Depth-First Search) verifies that the output is feasible for the actual pin-connection state. As described above, the output is determined by the 3-pin input, so we enter 1 for the A2–A1 and B2–B1 connections (the reverse is treated as 0), and the corresponding output will be recognized...
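The connectivity check described above amounts to graph reachability. A minimal iterative DFS over a hypothetical pin-connection graph (the pin names and wiring below are illustrative, not the paper's actual netlist):

```python
def dfs_reachable(adj, start):
    """Iterative depth-first search: returns the set of vertices reachable from start.

    adj: dict mapping each vertex to a list of neighbouring vertices.
    """
    visited, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        stack.extend(adj.get(v, []))
    return visited

# Hypothetical pin-connection state: A2-A1 wired through to the output, B2-B1 isolated.
pins = {"A2": ["A1"], "A1": ["A2", "OUT"], "B2": ["B1"], "B1": ["B2"], "OUT": ["A1"]}
```

Checking whether a given output pin is reachable from an input pin then decides whether that input can produce the output.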
Exploration based on previous experiments and graph theory revealed errors in structure-based computers that use electricity as a medium. The cause of these errors is the basic nature of electric charge: it flows from high potential to low. In short, the direction of current, which is the flow of electricity, is determined only...
The structure-based computers mentioned in this paper are based on Boolean algebra, a system commonly applied to digital computers. Boolean algebra, created by the British mathematician George Boole (1815–1854), expresses the True and False of logic as 1 and 0 and mathematically describes digital electrical si...
C
Initially, the Koopman operator framework was used extensively for dynamics over real (or complex) state spaces, where the function space is infinite-dimensional; this leads to resorting to finite-dimensional numerical approximations of the Koopman operator [28, 29] for practical computations. In our setting of dynamica...
This paper defines a linear representation of polynomial maps $F$ over finite fields $\mathbb{F}$ as matrices $M$ over $\mathbb{F}$ of smallest size $N$. The number $N$ is defined as the Linear Complexity of $F$ over $\mathbb{F}$. Th...
A finite field, by definition, is a finite set, and the set of all permutation polynomials over the finite field forms a group under composition. Given a finite subset of such permutations, we can compute a group generated by this set. In this paper, we propose a representation of such a group using the concept of lin...
Given a group $G$ of permutations over a finite set, the (group) representation expresses the group action in terms of invertible matrices over a finite-dimensional vector space, and the group operation is replaced by matrix multiplication. Such representations are imperative in studying abstract groups as it...
A finite group, $G_{F}$, can be generated from the $F_{i}$ using composition as the group operation. In this section, we devise a procedure to compute the linear representation of the gro...
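As a toy illustration of representing a permutation group by invertible matrices (composition becoming matrix multiplication), the sketch below builds 0/1 permutation matrices and closes them under products. This is the generic construction, not the paper's minimal linear-complexity representation.

```python
def perm_matrix(perm):
    """0/1 matrix M with M[perm[j]][j] = 1, so M sends basis vector e_j to e_perm[j]."""
    n = len(perm)
    return tuple(tuple(1 if perm[j] == i else 0 for j in range(n)) for i in range(n))

def matmul(a, b):
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def generate_group(generators):
    """Closure of a set of permutation matrices under multiplication.

    Finite by assumption; for permutation matrices the closure of the
    generators under products already contains the identity.
    """
    group = set(generators)
    frontier = list(generators)
    while frontier:
        m = frontier.pop()
        for g in generators:
            p = matmul(m, g)
            if p not in group:
                group.add(p)
                frontier.append(p)
    return group
```

For instance, the cyclic shift on three points generates a group of order 3, and a transposition a group of order 2.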
C
In order to compare the different meta-learners in terms of classification and view selection performance, we perform a series of simulations. We generate multi-view data with $V=30$ or $V=300$ disjoint views, where each view $\boldsymbol{X}^{(v)}$, $v=1,\dots,V$...
Any simulation study is limited by its choice of experimental factors. In particular, in our simulations we assumed that all features corresponding to signal have the same regression weight, and that all views contain an equal number of features. The correlation structures we used are likely simpler than those encounte...
Table 3: Results of applying MVS with different meta-learners to the breast cancer data. ANSV denotes the average number of selected views. H denotes the H measure (Hand, 2009). In computing the H measure we assume that the misclassification cost is the same for each class. $\hat{\Phi}$...
For each experimental condition, we simulate 100 multi-view data training sets. For each such data set, we randomly select 10 views. In 5 of those views, we determine all of the features to have a relationship with the outcome. In the other 5 views, we randomly determine 50% of the features to have a relationship with...
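The generation scheme just described can be sketched as follows, under simplifying assumptions (standard-normal features, equal unit weights for all signal features; the sizes `V`, `p`, `n` are illustrative, not the study's actual dimensions):

```python
import random

def simulate_views(V=30, p=10, n=50, seed=0):
    """Hypothetical multi-view generator: V disjoint views of p features each;
    10 randomly chosen views carry signal, 5 fully and 5 via a random 50%
    of their features (all signal features get the same weight)."""
    rng = random.Random(seed)
    signal_views = rng.sample(range(V), 10)
    full, partial = signal_views[:5], signal_views[5:]
    weights = {}
    for v in range(V):
        if v in full:
            active = set(range(p))
        elif v in partial:
            active = set(rng.sample(range(p), p // 2))  # exactly 50% of the features
        else:
            active = set()
        for j in range(p):
            weights[(v, j)] = 1.0 if j in active else 0.0
    X = [[rng.gauss(0, 1) for _ in range(V * p)] for _ in range(n)]
    y = [sum(weights[(v, j)] * row[v * p + j] for v in range(V) for j in range(p))
         for row in X]
    return X, y, weights
```

The outcome is a noiseless linear function of the signal features here; the actual simulations add noise and correlation structure.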
Table 2: Results of applying MVS with different meta-learners to the colitis data. ANSV denotes the average number of selected views. H denotes the H measure (Hand, 2009). In computing the H measure we assume that the misclassification cost is the same for each class. $\hat{\Phi}$...
C
We systematically and empirically study the performance of representative off-the-shelf techniques and their combinations in the DepAD framework. We identify two well-performing dependency-based methods. The two DepAD algorithms consistently outperform nine benchmark algorithms on 32 datasets.
To address these gaps, this paper introduces a Dependency-based Anomaly Detection framework (DepAD) to provide a general approach to dependency-based anomaly detection. For each phase of the DepAD framework, this paper analyzes which off-the-shelf techniques to use and how to use them in the context of anomaly detection. We...
The rest of the paper is organized as follows. In Section 2, we survey the related work. Section 3 introduces the DepAD framework and presents the outline of the algorithms instantiated from DepAD. In Section 4, we empirically study the performance of the DepAD methods and present the comparison of the proposed methods...
In this section, we introduce the DepAD framework. We begin with an overview of the framework and then proceed to explain each phase in detail. For each phase, we discuss its goal, key considerations, and the off-the-shelf techniques that can be utilized. Finally, we present the algorithm for instantiating the DepAD f...
In this subsection, we answer the question: how do the instantiated DepAD algorithms perform compared with state-of-the-art anomaly detection methods? We choose two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, and compare them with the nine state-of-the-art anomaly detection methods shown in Ta...
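The dependency-based scoring idea can be sketched generically: predict each attribute from related attributes and score each record by its residuals. The sketch below is a simplified stand-in, not FBED-CART itself; it regresses each attribute on the preceding one via least squares (in place of relevant-variable selection and CART models) and uses the "Sum" combination of absolute residuals.

```python
def fit_line(xs, ys):
    """Closed-form simple linear regression: slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return slope, my - slope * mx

def depad_scores(data):
    """Dependency-based anomaly scores: regress each attribute on the previous
    one, then sum absolute residuals per record ('Sum' combination)."""
    n, d = len(data), len(data[0])
    scores = [0.0] * n
    for j in range(1, d):
        xs = [row[j - 1] for row in data]
        ys = [row[j] for row in data]
        slope, intercept = fit_line(xs, ys)
        for i in range(n):
            scores[i] += abs(ys[i] - (slope * xs[i] + intercept))
    return scores
```

A record that violates the learned dependency (here, y roughly 2x) receives the largest score.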
B
Comparison with Abeille et al. [2021]  Abeille et al. [2021] recently proposed the idea of convex relaxation of the confidence set for the more straightforward logistic bandit setting. Our work can be viewed as an extension of their construction to the MNL setting.
Comparison with Filippi et al. [2010] Our setting is different from the standard generalized linear bandit of Filippi et al. [2010]. In our setting, the reward due to an action (assortment) can depend on up to $K$ variables ($\theta_{*}\cdot x_{t,i}$, $i\in\mathcal{Q}_{t}$...
Comparison with Abeille et al. [2021]  Abeille et al. [2021] recently proposed the idea of convex relaxation of the confidence set for the more straightforward logistic bandit setting. Our work can be viewed as an extension of their construction to the MNL setting.
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
Comparison with Amani & Thrampoulidis [2021] While the authors in Amani & Thrampoulidis [2021] also extend the algorithms of Faury et al. [2020] to a multinomial problem, their setting is materially different from ours. They model various click-types for the same advertisement (action) via the multinomial distribution...
D
Recent temporal action localization methods can be generally classified into two categories based on the way they deal with the input sequence. In the first category, works such as BSN [21], BMN [20], G-TAD [44], and BC-GNN [3] re-scale each video to a fixed temporal length (usually a small length such as 100 snippets...
Compared to these methods, our VSGN builds a graph on video snippets as G-TAD does, but differently: beyond modelling snippets from the same scale, VSGN also exploits correlations between cross-scale snippets and defines a cross-scale edge to break the scaling curse. In addition, our VSGN contains multiple-level graph neur...
For example, BSN relies on the startness/endness curves to identify proposal candidates, but when more frames are used, the curves will have too many peaks and valleys to generate meaningful proposals. In G-TAD, if too many snippets are interpolated and neighboring snippets become similar, it tends to find graph neighb...
they find themselves interested in a short video clip that just flitted away. They would scroll back to the clip and re-play it at a lower speed, by pause-and-play for example. We mimic this process when preparing a video before feeding it into a neural network. We propose to focus on a short period of a video, and m...
Graph neural networks (GNN) are a useful model for exploiting correlations in irregular structures [17]. As they become popular in different computer vision fields [13, 38, 40], researchers also find their application in temporal action localization [3, 44, 46]. G-TAD [44] breaks the restriction of temporal locations o...
B
C2 seems redundant because of C4 and C5, which improve similar cases (d.2). If we look at (d.3) and (d.4), both visualizations display MLP...
Support for (1) selecting proper validation metrics for balanced and imbalanced data sets and (2) directing the experts’ attention to different classes for the given problem constitute two of the critical open challenges in ML. For instance, accuracy is preferred to the g-mean metric for a balanced data set [BDA13].
Evolutionary optimization and majority-voting ensembles inspired us to focus on the three aforementioned questions that constitute open research challenges. In this paper, we present a visual analytics (VA) tool, called VisEvol (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimiz...
The available metrics are divided into two groups: balanced data sets (→ accuracy, precision, recall, and f1-score) and imbalanced data sets (→ g-mean, ROC AUC, log loss, and MCC). For the initialization of VisEvol, the user should direct his/her attention to the top-left panel shown in VisEvo...
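The two metric groups differ in how they treat class imbalance. A minimal illustration of accuracy versus g-mean (the geometric mean of per-class recalls) for binary labels:

```python
import math

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def g_mean(y_true, y_pred):
    """Geometric mean of per-class recalls for binary labels 0/1.

    Favoured for imbalanced data: a collapsed minority class drives it to zero.
    """
    recalls = []
    for cls in (0, 1):
        idx = [i for i, t in enumerate(y_true) if t == cls]
        recalls.append(sum(y_pred[i] == cls for i in idx) / len(idx))
    return math.sqrt(recalls[0] * recalls[1])
```

A classifier that always predicts the majority class on a 9:1 data set still scores 0.9 accuracy, while its g-mean collapses to zero, which is why g-mean sits in the imbalanced-data group.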
The tool consists of eight main interactive visualization panels (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization): (a) data sets and validation metrics (→ G1), (b) process tracker and algorithms/models selector (→ G3), (c) overall performance for e...
A
Consensus protocols, in contrast to Markov chains, operate without the restriction to non-negative node and edge values or the requirement that the node values sum to one [18]. This broader scope enables consensus protocols to address a significantly wider range of problem spaces. Therefore, there is a significant interes...
Consensus protocols form an important field of research that has a strong connection with Markov chains [18]. Consensus protocols are a set of rules used in distributed systems to achieve agreement among a group of agents on the value of a variable [19, 20, 21, 22].
we introduce a consensus protocol with state-dependent weights to reach a consensus on time-varying weighted graphs. Unlike other proposed consensus protocols in the literature, the consensus protocol we introduce does not require any connectivity assumption on the dynamic network topology. We provide theoretical analy...
Another algorithm is proposed in [28] that assumes the underlying switching network topology is ultimately connected, meaning that the union of the graphs over an infinite interval is strongly connected. In [29], previous works are extended to solve the consensus problem on networks under limited and unreliab...
There are comprehensive survey papers that review the research on consensus protocols [19, 20, 21, 22]. In many scenarios, the network topology of the consensus protocol is a switching topology due to failures, formation reconfiguration, or state-dependence. There is a large number of papers that propose consensus prot...
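The basic linear consensus iteration these protocols build on can be sketched in a few lines. With a fixed doubly stochastic weight matrix on a connected graph, the agents' states converge to the average of the initial states; the state-dependent, time-varying settings discussed above generalize this baseline.

```python
def consensus_step(x, W):
    """One round of the linear consensus protocol x_{k+1} = W x_k."""
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def run_consensus(x, W, steps=200):
    for _ in range(steps):
        x = consensus_step(x, W)
    return x

# Doubly stochastic weights on a connected 3-agent path graph (illustrative choice).
W = [[0.5, 0.5, 0.0],
     [0.5, 0.25, 0.25],
     [0.0, 0.25, 0.75]]
```

Starting from states (0, 3, 6), all agents converge to the average value 3.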
D
However, extracting a point-wise correspondence from a functional map matrix is not trivial [17, 57]. This is mainly because of the low-dimensionality of the functional map, and the fact that not every functional map matrix is a representation of a point-wise correspondence [51]. In [44], the authors simultaneously sol...
It was shown that deep learning is an extremely powerful approach for extracting shape correspondences [40, 27, 59, 26]. However, the focus of this work is on establishing a fundamental optimisation problem formulation for cycle-consistent isometric multi-shape matching. As such, this work does not focus on learning me...
The identification of correspondences between 3D shapes, also known as the shape matching problem, is a longstanding challenge in visual computing. Correspondence problems have a high relevance due to their plethora of applications, including 3D reconstruction, deformable object tracking, style transfer, shape analysis...
Due to their low-dimensionality and continuous representation, functional maps also serve as the backbone of many deep learning architectures for 3D correspondence. One of the first examples is FMNet [40], which has also been extended for unsupervised learning settings recently [27, 3, 59].
The functional mapping is represented as a low-dimensional matrix for suitably chosen basis functions. The classic choice are the eigenfunctions of the LBO, which are invariant under isometries and predestined for this setting. Moreover, for general non-rigid settings learning these basis functions has also been propos...
C
On the side of path graphs, we believe that, compared to algorithms in [3, 22], our algorithm is simpler for several reasons: the overall treatment is shorter, the algorithm does not require complex data structures, its correctness is a consequence of the characterization in [1], and there are a few implementation deta...
On the side of directed path graphs, prior to this paper it was necessary to implement two algorithms to recognize them: a recognition algorithm for path graphs as in [3, 22], and the algorithm in [4] that in linear time is able to determine whether a path graph is also a directed path graph. Our algorithm directly...
We presented the first recognition algorithm for both path graphs and directed path graphs. Both graph classes are characterized very similarly in [18], and we extended the simpler characterization of path graphs in [1] to include directed path graphs as well; this result can be of interest itself. Thus, now these two ...
Directed path graphs were characterized by Gavril [9]; in the same article he also gives the first recognition algorithm, which has $O(n^{4})$ time complexity. In the above-cited article, Monma and Wei [18] give the second characterizati...
On the side of directed path graphs, at the state of the art our algorithm is the only one that does not use the results in [4], which gives a linear-time algorithm able to establish whether a path graph is also a directed path graph (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ...
A
Given $(n,P,\Theta,\Pi)$, we can generate a random adjacency matrix $A$ under DCMM. For convenience, we denote the DCMM model as $\mathrm{DCMM}(n,P,\Theta,\Pi)$...
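Under the standard DCMM parameterization, the edge probabilities are $\Omega_{ij}=\theta_i\theta_j\,\pi_i^{\top}P\pi_j$, and edges are sampled independently for $i<j$. A minimal generator under that assumption (an illustrative sketch, not the authors' code):

```python
import random

def dcmm_adjacency(theta, Pi, P, seed=0):
    """Sample a symmetric adjacency matrix A under DCMM:
    P(A_ij = 1) = theta_i * theta_j * (pi_i^T P pi_j), independently for i < j."""
    rng = random.Random(seed)
    n, K = len(theta), len(P)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            omega = theta[i] * theta[j] * sum(
                Pi[i][k] * P[k][l] * Pi[j][l] for k in range(K) for l in range(K))
            if rng.random() < omega:
                A[i][j] = A[j][i] = 1
    return A
```

With pure memberships and a block-diagonal $P$, edges appear only within communities, which gives a quick sanity check.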
In this section, first, we investigate the performances of Mixed-SLIM methods for the problem of mixed membership community detection via synthetic data. Then we apply some real-world networks with true label information to test Mixed-SLIM methods’ performances for community detection, and we apply the SNAP ego-network...
In this section, we first introduce the main algorithm, mixed-SLIM, which can be taken as a natural extension of SLIM (SLIM) to the mixed membership community detection problem. Then we discuss the choice of some tuning parameters in the proposed algorithm.
This paper makes one major contribution: modified SLIM methods for mixed membership community detection under the DCMM model. When dealing with large networks in practice, we apply $\mathrm{Mixed\text{-}SLIM}_{appro}$...
In this paper, we extend the symmetric Laplacian inverse matrix (SLIM) method (SLIM) to mixed membership networks and call this proposed method mixed-SLIM. As mentioned in SLIM, the idea of using the symmetric Laplacian inverse matrix to measure the closeness of nodes comes from the first hitting time in a random...
B
Compared with existing methods, variational transport features a unified algorithmic framework that enjoys the following advantages. First, by considering functionals with a variational form, the algorithm can be applied to a broad class of objective functionals.
Second, the functional optimization problem associated with the variational representation of F𝐹Fitalic_F can be solved by any supervised learning methods such as deep learning (LeCun et al., 2015; Goodfellow et al., 2016; Fan et al., 2019) and kernel methods (Friedman et al., 2001; Shawe-Taylor et al., 2004), which o...
variational inference (Gershman and Blei, 2012; Kingma and Welling, 2019), policy optimization (Sutton et al., 2000; Schulman et al., 2015; Haarnoja et al., 2018), and GAN (Goodfellow et al., 2014; Arjovsky et al., 2017), and has achieved tremendous empirical successes. However,
See, e.g., Welling and Teh (2011); Chen et al. (2014); Ma et al. (2015); Chen et al. (2015); Dubey et al. (2016); Vollmer et al. (2016); Chen et al. (2016); Dalalyan (2017); Chen et al. (2017); Raginsky et al. (2017); Brosse et al. (2018); Xu et al. (2018); Cheng and Bartlett (2018); Chatterji et al. (2018); Wibisono (...
See, e.g., Udriste (1994); Ferreira and Oliveira (2002); Absil et al. (2009); Ring and Wirth (2012); Bonnabel (2013); Zhang and Sra (2016); Zhang et al. (2016); Liu et al. (2017); Agarwal et al. (2018); Zhang et al. (2018); Tripuraneni et al. (2018); Boumal et al. (2018); Bécigneul and Ganea (2018); Zhang and Sra (2018...
A
3) MetaVIM outperforms Individual RL, MetaLight and PressLight by 827, 423 and 411, respectively. The main reason is that they learn the traffic signal's policy using only its own observation and ignore the influence of the neighbors, while MetaVIM considers the neighbors as the unobserved part of the current signal ...
1) CoLight needs full state information in both training and testing, hence it cannot be used for a new scenario that contains a different number of intersections than the training scenario. That is, heterogeneous scenarios cause heterogeneous inputs to the policy network, which makes the network fail to...
4) The neighbors' information is modeled in CoLight and it performs well. This indicates that modeling neighbors is critical for coordination. The results of MetaVIM are superior to CoLight on each scenario and configuration, resulting in a mean improvement of 43. Compared to CoLight, MetaVIM proposes an intrinsic reward to help th...
We can obtain the following findings: 1) Among these 5 models, the performance of Baseline is the worst. The reason is that it is hard to learn the effective decentralized policy independently in the multi-agent traffic signal control task, where one agent’s reward and transition are affected by its neighbors. 2) Compa...
In this section, we propose the Meta Variationally Intrinsic Motivated (MetaVIM) method to achieve Eq. 1 and Eq. 2, as illustrated in Fig. 3. MetaVIM employs a latent variable to represent each task so as to make the reward, observation-transition and policy functions shareable. At the same time, MetaVIM makes the approximations ...
B
staying in $S_{\tau}(\mathbf{x}_{*})$ for every fixed $\tilde{\mathbf{y}}\in\Sigma_{0}$...
Resetting $\mathbf{x}_{0}=\tilde{\mathbf{x}}\in S_{\tau}(\mathbf{x}_{*})$, ...
$\tilde{\mathbf{x}}\in\overline{S_{\frac{2}{3}\tau}(\mathbf{x}_{*})}\subset S_{\tau}(\mathbf{x}_{*})$...
$6\tau^{\prime}<2\tau<\delta$, $S_{\tau}(\mathbf{x}_{*})\subset S_{\delta}(\mathbf{x}_{*})\subset\Omega_{1}$...
for all $\hat{\mathbf{x}},\check{\mathbf{x}}\in S_{\delta}(\mathbf{x}_{*})$, $\mathbf{z}\in S_{\tau}(\mathbf{x}_{*})$...
B
Note that Corollary 12 bounds the expected competitive ratio of a randomized algorithm which commits to its choice (that is, it executes either $ON^{*}$ or ProfilePacking, and this decision is made once the sample is revealed). In contrast, ...
We give the first theoretical and experimental study of online bin packing with machine-learned predictions. Previous work on this problem has assumed ideal and error-free predictions that must be provided by a very powerful oracle, without any learnability considerations, as we discuss in more detail in Section 1.2. I...
In the experiments that we discussed in Section 6.3, we reported the performance of the algorithm on a typical sequence. More precisely, we considered a single randomly generated sequence, as opposed to averaging the cost of the algorithm over multiple input sequences, because each input sequence is associated with it...
In this section, we present an experimental evaluation of the performance of our algorithms (the code on which the experiments are based is available at https://github.com/shahink84/BinPackingPredictions). Specifically, in Section 6.1 we describe the benchmarks and the input generation model; in Section 6.2, we expan...
In terms of analysis techniques, we note that the theoretical analysis of the algorithms we present is specific to the setting at hand and treats items “collectively”. In contrast, almost all known online bin packing algorithms are analyzed using a weighting technique (?), which treats each bin “individually” and indep...
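For context, the classic online algorithms this line of work compares against are simple greedy rules analyzed bin by bin. A minimal First-Fit implementation (a standard baseline, not the paper's ProfilePacking):

```python
def first_fit(items, capacity=1.0):
    """Classic First-Fit: place each arriving item into the first open bin
    with enough room, opening a new bin when none fits. Returns bin contents."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:  # tolerance for float sums
                b.append(size)
                break
        else:
            bins.append([size])
    return bins
```

For the sequence (0.5, 0.6, 0.5, 0.4) it opens two bins, packing 0.5+0.5 and 0.6+0.4.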
C
To address the problem mentioned above, most of the methods extend the Chamfer loss function of basic AtlasNet with additional terms. Bednarik et al. (2020) added terms to prevent patch collapse, reduce patch overlap and calculate the exact surface properties analytically rather than approximating them. Deng et al. (20...
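The Chamfer loss that these methods extend measures the average closest-point distance between two point sets. A minimal symmetric, squared-distance version (one of several common variants):

```python
def chamfer(points_a, points_b):
    """Symmetric Chamfer distance between two point sets (squared-distance form)."""
    def sq(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    a_to_b = sum(min(sq(p, q) for q in points_b) for p in points_a) / len(points_a)
    b_to_a = sum(min(sq(p, q) for p in points_a) for q in points_b) / len(points_b)
    return a_to_b + b_to_a
```

It vanishes exactly when every point of each set coincides with some point of the other, which is why it constrains surface position but not patch connectivity or overlap.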
To mitigate the issue of the discrete atlas, we define Continuous Atlas, a novel paradigm for meshing any object with an atlas that is leveraged in our method. In the first step, we construct a mapping that models a local structure of the object $S$. By Continuous Atlas ($\mathcal{CA}$...
In this paper we propose a different approach to solving this problem: we reformulate the classical definition of an atlas to obtain maps which are correctly connected. Our method thus tries to suppress the issue before it even occurs.
In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Thro...
The proposed framework overcomes the limitations of previous methods. First, we theoretically solve the problem of stitching partial meshes since every chart is informed about its local neighborhood. Second, our method can easily fill the missing spaces in the final mesh by adding a new mapping for the region of inter...
B
$R_{\mathcal{Z}}^{2}=2mM_{x}^{2}(\lambda_{\min}^{+}(\mathbf{W}_{\mathbf{x}}))^{-2}$...
To prove Theorem 3.5 we first show that the iterates of Algorithm 1 naturally correspond to the iterates of a general Mirror-Prox algorithm applied to problem (54). Then we extend the standard analysis of the general Mirror-Prox algorithm to account for unbounded feasible sets.
Now we show the benefits of representing some convex problems as convex-concave problems using the example of the Wasserstein barycenter (WB) problem, which we solve by the DMP algorithm. Similarly to Section 3, we consider an SPP in the proximal setup and introduce Lagrangian multipliers for the common variables. However, in t...
$\|(\mathbf{x},\mathbf{p})\|_{(\mathcal{X},\mathcal{P})}^{2}=\|\mathbf{x}\|_{\mathcal{X}}^{2}+\|\mathbf{p}\|_{\mathcal{P}}^{2}$...
Next, we introduce the second important component of the convergence rate analysis, namely the smoothness assumption on the objective F𝐹Fitalic_F. To set the stage we first introduce a general definition of Lipschitz-smooth function of two variables.
D
The remainder of this section is dedicated to expressing the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describing an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
The set of cycles of a graph has a vector space structure over $\mathbb{Z}_{2}$ in the case of undirected graphs, and over $\mathbb{Q}$ in the case of directed graphs [5]. A basis of such a vector space is called a cycle basis, and its dimensio...
The study of cycles of graphs has attracted attention for many years. To mention just three well-known results, consider Veblen's theorem [2], which characterizes graphs whose edge sets can be written as a disjoint union of cycles; MacLane's planarity criterion [3], which states that planar graphs are the only ones to admit a 2-ba...
In this section we present some experimental results to reinforce Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric...
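The dimension of the cycle space is determined by counting alone: $m - n + c$ for a graph with $m$ edges, $n$ vertices, and $c$ connected components. A short check with union-find:

```python
def cycle_space_dim(n, edges):
    """Dimension of the cycle space over Z_2: m - n + c, with c the number of
    connected components, computed via union-find with path compression."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    for u, v in edges:
        parent[find(u)] = find(v)
    c = len({find(v) for v in range(n)})
    return len(edges) - n + c
```

For the complete graph $K_4$ this gives $6 - 4 + 1 = 3$, and for any tree it gives 0, matching the fact that trees have no cycles.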
B
Fix a simplicial complex $K$, a value $\delta\in(0,1]$, and integers $b\geq 1$ and $m>\mu(K)$. If $\mathcal{F}$ is a sufficiently large $(K,b)$-free cover such that $\pi_{m}(\mathcal{F})\geq\delta\binom{|\mathcal{F}|}{m}$...
Note that the constant number of points given by the $(p,q)$-theorem in this case depends not only on $p$, $q$, and $d$, but also on $b$. For the setting of $(1,b)$-covers in surfaces (by a surface we mean a compact 2-dimensional ...
Through a series of papers [18, 35, 22], the Helly numbers, Radon numbers, and fractional Helly numbers for $(\lceil d/2\rceil,b)$-covers in $\mathbb{R}^{d}$ were bounded in terms of $d$ and...
One immediate application of Theorem 1.2 is the reduction of fractional Helly numbers. For instance, it easily improves a theorem of Patáková [35, Theorem 2.3] (which was not phrased in terms of $(K,b)$-free covers but readily generalizes to that setting, see Section 1.4.1) in...
It is known that the Helly number of a $(K,b)$-free cover is bounded from above in terms of $K$ and $b$ [18] (the bound directly follows from a combination of Proposition 30 and Lemma 26 in [18]), as is the Radon number [35, Proposit...
C
Using our approach, we managed to achieve the same accuracy as before, 89%, compared to the 83% reported by Mansouri et al. [94] for the additional external data set. For precision and recall, we always use macro-averaging, identical to Mansouri et al. [94]. On the one hand, the precision was 4% lower in both test a...
Next, as XGBoost [29] is a nonlinear ML algorithm, we also train a linear classifier (a logistic regression [83] model with Scikit-learn's default hyperparameters [84]) to compute the coefficient matrix, and then use Recursive Feature Elimination (RFE) [40] to rank the features from best to worst in terms o...
We derived the analytical tasks described in this section from the in-depth analysis of the related work in Section 2. The three analytical tasks from Krause et al. [50], the three experts who expressed their requirements in Zhao et al. [32], and the user tasks acquired through expert interviews from Collaris and van ...
Following the guidelines from prior works [97, 98, 99, 68], we conducted online semi-structured interviews with three experts to collect qualitative feedback about our system’s effectiveness. The first ML expert (E1) is a senior lecturer in mathematics with a PhD in this field.
Visualization and interaction. E1 and E2 were surprised by the promising results we managed to achieve with the assistance of our VA system in the red wine quality use case of Section 4. Initially, E1 was slightly overwhelmed by the number of statistical measures mapped in the system’s glyphs. However, after the interv...
C
We use two geometries to evaluate the performance of the proposed approach, an octagon geometry with edges in multiple orientations with respect to the two axes, and a curved geometry (infinity shape) with different curvatures, shown in Figure 4. We have implemented the simulations in Matlab, using Yalmip/Gurobi to so...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi...
We first optimize the performance of the simulated positioning system by adding a receding-horizon MPCC stage where we pre-optimize the position and velocity references provided to the low-level controller. This is enabled by the high repeatability of the system, which results in run-to-run deviations of $3\,\mu m$ ...
The goal is to tune the parameters of the MPC-based planning unit without introducing any modification in the structure of the underlying control system. We leverage the repeatability of the system, which is higher than the integrated encoder error of $3\,\mu m$,
This paper demonstrated a hierarchical contour control implementation for increasing the productivity of positioning systems. We use a contouring predictive control approach to optimize the input to a low level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe...
B
where $|a_{i}|$ is the number of instances for answer $a_{i}$ in the given group, $\mu(a)$ is the mean number of answers in the group and $\beta$...
Hyperparameters for each method were chosen using a grid search with unbiased accuracy on each dataset's validation set. To make this tractable, we first ran a grid search for the learning rate over $\{10^{-3},10^{-4},10^{-5}\}$ ...
We select hyperparameters based on the best unbiased validation set accuracy on each dataset, which is reflective of the unbiased test distribution. For all datasets and methods, we first perform a grid search over the learning rates $\in$ {1e-3, 1e-4, 1e-5} and weight decays $\in$ {0, 0.1, 1e-3, 1e-5}, and then tune t...
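The two-stage search above can be sketched as a simple grid loop. This is an illustrative skeleton only: `train_and_eval` is a hypothetical placeholder standing in for a full training run that returns unbiased validation accuracy.

```python
# Sketch: stage one of the grid search, over learning rate and weight
# decay, selecting the configuration with the best validation accuracy.
import itertools

LRS = [1e-3, 1e-4, 1e-5]
WDS = [0, 0.1, 1e-3, 1e-5]

def train_and_eval(lr, wd):
    # Placeholder for training a model with (lr, wd) and returning
    # unbiased validation-set accuracy; dummy score for illustration.
    return 1.0 - abs(lr - 1e-4) - wd

best = max(itertools.product(LRS, WDS),
           key=lambda cfg: train_and_eval(*cfg))
print(best)  # (learning rate, weight decay) with the best score
```

In the described setup, a second grid over method-specific hyperparameters would then be run with the winning learning rate and weight decay fixed.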
In this set of experiments, we compare the resistance to explicit and implicit biases. We primarily focus on the Biased MNISTv1 dataset, reserving each individual variable as the explicit bias in separate runs of the explicit methods, while treating the remaining variables as implicit biases. To ease analysis, we compu...
Results. For CelebA, methods generally show large variance on the minority patterns (blond-haired male celebrities), and lower variance on the majority patterns (mean over the rest of the groups), whereas for Biased MNISTv1, we find that methods only work for certain sets of hyperparameters and show degraded results on both...
B
They use a GRU [77] to build the network. Cai et al. [78] use a transformer encoder [22] to aggregate face and eye features. They feed the face and two eye features into the transformer encoder and concatenate the outputs of the encoder for gaze estimation.
Different types of input have been explored to extract features. Kellnhofer et al. directly extract features from facial images [43]. Zhou et al. combine the feature extracted from facial and eye images [84]. Palmero et al. use facial images, binocular images and facial landmarks to generate the feature vectors [79]. D...
Some works seek to decompose the gaze into multiple related features and construct multi-task CNNs to estimate these features. Yu et al. introduce a constrained landmark-gaze model for modeling the joint variation of eye landmark locations and gaze directions [119]. As shown in Fig. 9, they build a multi-task CNN to est...
The geometric feature includes the angles between the pupil center as the reference point and the facial landmarks of the eyes and the tip of the nose. The detected facial landmarks can also be used for unsupervised gaze representation learning. Dubey et al. [83] collect the face images from the web and annotate their...
Palmero et al. combine individual streams (face, eyes region and face landmarks) in a CNN [79]. Dias et al. extract the facial landmarks and directly regress gaze from the landmarks [80, 81]. The network outputs the gaze direction as well as an estimation of its own prediction uncertainty. Jyoti et al. further extract ...
D
The rest of this paper is organized as follows: Section 2 presents the related works. In Section 3 we present the motivation and contribution of the paper. The proposed method is detailed in Section 4. Experimental results are presented in Section 5. Conclusion ends the paper.
COVID-19 can be spread through contact and contaminated surfaces; therefore, classical biometric systems based on passwords or fingerprints are no longer safe. Face recognition is safer, as there is no need to touch any device. Recent studies on coronavirus have proven that wearing a face mask by a healthy and in...
Occlusion removal approach: To avoid a bad reconstruction process, these approaches aim to detect regions found to be occluded in the face image and discard them completely from the feature extraction and classification process. The segmentation-based approach is one of the best methods; it first detects the oc...
Occlusion is a key limitation of real-world 2D face recognition methods. Generally, it arises from wearing hats, eyeglasses, masks, or any other objects that occlude part of the face while leaving the rest unaffected. Thus, wearing a mask is considered the most difficult facial occlusion challenge since ...
To tackle these problems, we distinguish two different tasks, namely face mask recognition and masked face recognition. The first one checks whether the person is wearing a mask or not. This can be applied in public places where the mask is compulsory. Masked face recognition, on the other hand, aims to recognize a face...
C
$\Gamma^{\prime}\vdash C^{\prime}::\Delta$ \quad $\Gamma\vdash C,C^{\prime}::\Delta$
Now, let $\Gamma$ and $\Delta$ be contexts that associate cell addresses to types. The configuration typing judgment given in Figure 3, $\Gamma\vdash C::\Delta$, means that the objects in $C$ are well-typed with sources in $\Gamma$ and destinations in $\Delta$...
To review SAX, let us make observations about proof-theoretic polarity. In the sequent calculus, inference rules are either invertible, meaning they can be applied at any point in the proof search process (like the right rule for implication), or noninvertible, meaning they can only be applied when the sequent “contains enough information,” ...
Configuration reduction $\to$ is given as multiset rewriting rules [CS09] in Figure 4, which replace any subset of a configuration matching the left-hand side with the right-hand side. However, $!$ indicates objects that persist across reductions. Principal cuts encountered in a configuration are resolved by passing ...
The first rule for $\to$ corresponds to the identity rule and copies the contents of one cell into another. The second rule, which is for cut, models computing with futures [Hal85]: it allocates a new cell to be populated by the newly spawned $P$. Concurrently, $Q$ may read from said new cell, which...
C
‘$-$’ indicates that the property is not scored because the involvement of cloud is not considered. $\checkmark\mkern-11.0mu\smallsetminus$ means that the privacy of cloud media is protected, but that protection is not IND-CPA secure. [3]-I and [3]-II represent the first scheme and the second scheme in ...
Finally, the comparison between the two proposed schemes and the existing relevant schemes is summarized in Table I. As can be seen therein, the two proposed schemes FairCMS-I and FairCMS-II have advantages over the existing works. In addition, the two proposed schemes offer owners the flexibility to choose. If the sec...
Second, we compare the cloud-side efficiency of FairCMS-I and FairCMS-II, and the results are presented in Fig. 13. As shown therein, the cloud-side efficiency of FairCMS-I is significantly higher than that of FairCMS-II, thus validating our analysis in Section VII. The main reason for the cloud-side efficiency gain of...
The owner-side efficiency and scalability performance of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Compared to FairCMS-I, it is easy to see that in FairCMS-II the cloud's overhead is increased considerably due to the ado...
This paper solves the three problems faced by cloud media sharing and proposes two schemes FairCMS-I and FairCMS-II. FairCMS-I gives a method to transfer the management of LUTs to the cloud, enabling the calculation of each user’s D-LUT in the ciphertext domain and its subsequent distribution. However, utilizing the s...
A
Since our proposed approach selects the beneficial feature interactions and models them in an explicit manner, it has high efficiency in analyzing high-order feature interactions and thus provides rationales for the model outcome. Through extensive experiments conducted on CTR benchmark and recommender system datasets,...
(2) By treating features as nodes and their pairwise feature interactions as edges, we bridge the gap between GNN and FM, and make it feasible to leverage the strength of GNN to solve the problem of FM. (3) Extensive experiments are conducted on CTR benchmark and recommender system datasets to evaluate the effectivenes...
As a consequence, no feature interactions are guaranteed to be captured. This interaction part is also the most significant difference between GraphFM and GNN, and the resulting gap in terms of performance indicates that GraphFM is able to leverage the strength of FM to overcome the drawbacks of GNN in modeling feature int...
In this work, we disclose the relationship between FM and GNN, and seamlessly combine them to propose a novel model, GraphFM, for feature interaction learning. The proposed model leverages the strengths of FM and GNN while also addressing their respective drawbacks.
Overall, the main contributions of this work are threefold: (1) We analyze the shortcomings and strengths of FM and GNN in modeling feature interactions. To solve their problems and leverage strengths, we propose a novel model GraphFM for feature interaction modeling.
D
Table 3: Complexity comparison for B-FW (Algorithm 3) when minimizing over a $(\kappa,q)$-uniformly convex set: Number of iterations needed to reach an $\varepsilon$-optimal solution in $h(\mathbf{x})$ for Problem 1.1 in several cases of interest. We use...
We can make use of the proof of convergence in primal gap to prove linear convergence in Frank-Wolfe gap. In order to do so, we recall a quantity formally defined in Kerdreux et al. [2019] but already implicitly used earlier in Lacoste-Julien & Jaggi [2015] as:
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is...
When the domain $\mathcal{X}$ is a polytope, one can obtain linear convergence in primal gap for a generalized self-concordant function using the well-known Away-step Frank-Wolfe (AFW) algorithm [Guélat & Marcotte, 1986, Lacoste-Julien & Jaggi, 2015] shown in Algorithm 5.
For AFW, we can see that the algorithm either chooses to perform what is known as a Frank-Wolfe step in Line 7 of Algorithm 5 if the Frank-Wolfe gap $g(\mathbf{x})$ is greater than the away gap $\left\langle\nabla f(\mathbf{x}_{t}),\mathbf{a}_{t}-\mathbf{x}_{t}\right\rangle$...
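The step selection just described can be illustrated with a small sketch. This is not the paper's Algorithm 5; it assumes, for concreteness, that the feasible region is the probability simplex (so linear minimization picks a coordinate vertex) and that the active set is given explicitly.

```python
import numpy as np

# Sketch of AFW's step choice: compare the Frank-Wolfe gap
# <grad, x - s> with the away gap <grad, a - x> and pick the
# direction with the larger gap.
def afw_step(x, grad, active):
    # FW vertex: minimizes <grad, s> over the simplex vertices.
    s = np.zeros_like(x)
    s[np.argmin(grad)] = 1.0
    # Away vertex: maximizes <grad, a> over the current active set.
    a = active[np.argmax([grad @ v for v in active])]
    fw_gap = grad @ (x - s)
    away_gap = grad @ (a - x)
    return ("FW", s) if fw_gap >= away_gap else ("away", a)

x = np.array([0.7, 0.3, 0.0])
grad = np.array([1.0, -1.0, 0.0])
active = [np.eye(3)[0], np.eye(3)[1]]  # vertices currently in x's support
kind, v = afw_step(x, grad, active)
print(kind)
```

Here the Frank-Wolfe gap (1.4) exceeds the away gap (0.6), so a Frank-Wolfe step toward the second vertex is selected.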
C
Then, the algorithm Backtrack-Stuck-Structures backtracks on $P_{\alpha}$, meaning that the new active path of $\mathcal{S}_{\alpha}$ becomes $(\alpha,a_{1},\ldots,a_{k-1})$...
Assume that during the pass $A$ was reduced to $A^{\prime}$ during an invocation of Reduce-Label-and-Overtake. Let arc $a^{\prime}$ be the head of $A^{\prime}$...
The edge $e$ will remain in the memory of our algorithm until the end of the phase. Suppose that after an edge from the stream is processed the arcs $a$ and $\overleftarrow{b}$ do not belong to the same structure anymore, e.g., due to an invocation of Reduce-Label-an...
We now illustrate why Condition (2) above is important, i.e., why the algorithm does not backtrack on an active path that was shrunk due to invoking Reduce-Label-and-Overtake. Let $A$ be an active path at the beginning of the current pass.
Note that $v$ cannot be a free vertex, as otherwise Algorithm 3 or Algorithm 3 would evaluate to true. The algorithm now checks whether it is possible to reduce the label of $a^{\star}$ (Algorithm 3) by an alternating path whose length does no...
C
$\widetilde{\bm{\mathit{d}}}^{k}_{1:4}\leq\sigma^{\prime}\rho\left(\widetilde{\bm{\mathit{A}}}\right)^{k}\overline{\bm{\mathit{v}}}^{\prime}_{1:4},$
In this section, we compare the numerical performance of CPP and B-CPP with the Push-Pull/$\mathcal{AB}$ method [24, 25]. In the experiments, we equip CPP and B-CPP with different compression operators and consider different graph topologies.
We consider an asynchronous broadcast version of CPP (B-CPP). B-CPP further reduces the communicated data per iteration and is also provably linearly convergent over directed graphs for minimizing strongly convex and smooth objective functions. Numerical experiments demonstrate the advantages of B-CPP in saving commun...
In this paper, we consider decentralized optimization over general directed networks and propose a novel Compressed Push-Pull method (CPP) that combines Push-Pull/$\mathcal{AB}$ with a general class of unbiased compression operators. CPP enjoys large flexibility in both the com...
In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we considered a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP...
A
20:     $u^{k+1}_{x_{m}}=\delta^{k}u^{k}_{x_{m}}+(1-\delta^{k})x_{m}^{k+1}$
Our first two methods make several iterations between communications when $\lambda$ is small (or, vice versa, make several communications between local iterations when $\lambda$ is big). The following method (Algorithm 3) is also built on alternating local iterations and communications, but it m...
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms, and derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile...
We adapt the proposed algorithm for training neural networks. We compare our algorithms: type of sliding (Algorithm 1) and type of local method (Algorithm 3). To the best of our knowledge, this is the first work that compares these approaches in the scope of neural networks, as previous studies were limited to simpler...
Unlike (2), the formulation (1) penalizes not the difference with the global average, but the sameness with other connected local nodes. Thereby the decentralized case can be artificially created in a centralized architecture, e.g., if we want to create the network and the $W$ matrix to connect only some clients bas...
A
We compare against common MSs including uniform, $\alpha$-Rank (Omidshafiei et al., 2019; Muller et al., 2020), Projected Replicator Dynamics (PRD) (Lanctot et al., 2017), which is an NE approximator, and random vertex (coarse) correlated equilibrium (RV(C)CE), which randomly selects a solution on the vertices o...
PSRO has proved to be a formidable learning algorithm in two-player, constant-sum games, and JPSRO, with (C)CE MSs, is showing promising results on n-player, general-sum games. The secret to the success of these methods seems to lie in (C)CEs' ability to compress the search space of opponent policies to an expressive an...
PSRO finds a set of policies, $(\pi_{p}\in\Pi_{p})_{p=1..n}$, and a distribution over this set for each player, $(\sigma_{p})_{p=1..n}$...
Kuhn Poker (Kuhn, 1950; Southey et al., 2009; Lanctot, 2014) is a zero-sum poker game with only two actions per player. The two-player variant is solvable with PSRO, however the three-player version benefits from JPSRO. The results in Figure 2(a) show rapid convergence to equilibrium.
Recent success in tackling two-player, constant-sum games (Silver et al., 2016; Vinyals et al., 2019) has outpaced progress in n-player, general-sum games despite a lot of interest (Jaderberg et al., 2019; Berner et al., 2019; Brown & Sandholm, 2019; Lockhart et al., 2020; Gray et al., 2020; Anthony et al., 2020). One ...
C
The similarity function serves as a measure of the local sensitivity of the issued queries with respect to the replacement of the two datasets, by quantifying the extent to which they differ from each other with respect to the query $q$. The case of noise addition mechanisms provides a natural intuitive interp...
We note that the first part of this definition can be viewed as a refined version of zCDP (Definition B.18), where the bound on the Rényi divergence (Definition B.5) is a function of the sample sets and the query. As for the second part, since the bound depends on the queries, which themselves are random variables, it...
We measure the harm that past adaptivity causes to a future query by considering the query as evaluated on a posterior data distribution and comparing this with its value on a prior. The prior is the true data distribution, and the posterior is induced by observing the responses to past queries and updating the prior. ...
The dependence of our PC notion on the actual adaptively chosen queries places it in the so-called fully-adaptive setting (Rogers et al., 2016; Whitehouse et al., 2023), which requires a fairly subtle analysis involving a set of tools and concepts that may be of independent interest. In particular, we establish a seri...
Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific $q$ and $K(\cdot,v)$ as discussed in Section 6. The second part of this lemma implies that bounding the appropriate divergence is necessary and sufficient...
A
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter $k$...
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an $\mathsf{NP}$-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter $k$...
We therefore propose the following novel research direction: to investigate how preprocessing algorithms can decrease the parameter value (and hence search space) of FPT algorithms, in a theoretically sound way. It is nontrivial to phrase meaningful formal questions in this direction. To illustrate this difficulty, not...
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized...
C
We report the results of Poisson image blending [121], GP-GAN [172], Zhang et al. [198], and MLF [194]. We also report the ground-truth composite image obtained using ground-truth alpha matte for comparison. From Fig. 9, it can be seen that the obtained composite images using predicted alpha mattes are very close to t...
Backward adjustment: In contrast with manually adjusting the foreground of composite image to create harmonized image, some other works [156, 22, 18] adopted an inverse approach, i.e., adjusting the foreground of real image to create synthetic composite image. Specifically, they treat a real image as harmonized image, ...
Training deep learning models requires abundant pairs of composite images and ground-truth harmonized images. Existing works have designed different schemes to construct image harmonization dataset. We categorize the existing schemes into three groups: forward adjustment, backward adjustment, and replacement. Note that...
Figure 11: In the first (resp., second, third) row, we show two examples from RealHM [60] (resp., HFlickr in iHarmony4 [9], HVIDIT [45]) dataset. From left to right in each example, we show the composite image, the foreground mask, and the ground-truth harmonized image.
Figure 10: In the left subfigure, we summarize three ways to construct image harmonization dataset and list the corresponding datasets: RealHM [60], iHarmony4 [9] (HCOCO, HFlickr, HAdobe5k, Hday2night), ccHarmony [113], GMS [140], HVIDIT [45], RdHarmony [9]. We also mark the dataset based on real (resp., rendered) ima...
D
Degradation under data scarcity. Our findings reveal that when only 3-day training data are available, non-deep learning models such as LR achieve similar performances as compared to using full data, whereas LSTM models suffer from an increased error rate of 50%, as observed in the case of Chengdu. This suggests that ...
As depicted in Table V, deep learning models can generate highly accurate predictions when provided with ample data. However, the level of digitization varies significantly among cities, and it is likely that many cities may not be able to construct accurate deep learning prediction models due to a lack of data. One e...
Domain Selection. Our experimental results consistently demonstrate that using Beijing as the source city yields the best performance, irrespective of the target city and the choice of algorithms. One possible explanation for this observation is that Beijing comprises the highest number of regions, and therefore exhib...
TABLE VII: The results of inter-city transfer learning from source domains (Beijing, Shanghai, and Xi’an) to target domains (Shenzhen, Chongqing, and Chengdu). The lowest RMSE/MAE using limited target data is highlighted in bold. The results under full data and 3-day data represent the lower and upper bounds for the er...
Inter-city correlations. Our results demonstrate that transfer learning leads to error reductions in all source-target pairs, as compared to using target data only. Notably, the largest reduction of approximately 15% is observed in the case of Shenzhen and Chongqing. These findings suggest that there exist sufficient ...
D
To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average ...
In Fig. 1, the coverage degree, average width and $R^{2}$-coefficient are shown. For each model, the data sets are sorted according to increasing $R^{2}$-coefficient (averaged over th...
In this study several types of prediction interval estimators for regression problems were reviewed and compared. Two main properties were taken into account: the coverage degree and the average width of the prediction intervals. It was found that without post-hoc calibration the methods derived from a probabilistic mo...
To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average ...
For each of the selected models, Fig. 4 shows the best five models in terms of average width, excluding those that do not (approximately) satisfy the coverage constraint (2). This figure shows that there is quite some variation in the models. There is not a clear best choice. Because on most data sets the models produc...
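The split-based interval construction that these experiments vary can be sketched in a few lines. This is a generic illustration of a split-conformal estimator under an assumed 50/50 training-calibration split, with synthetic linear data; it is not the study's actual code.

```python
import numpy as np

# Sketch: split-conformal prediction intervals. Fit on a training
# half, collect absolute residuals on a calibration half, and widen
# point predictions by the 90% residual quantile.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 2 * x + rng.normal(0, 0.1, 200)

# 50/50 training-calibration split.
xt, xc, yt, yc = x[:100], x[100:], y[:100], y[100:]
slope, intercept = np.polyfit(xt, yt, 1)

resid = np.abs(yc - (slope * xc + intercept))
q = np.quantile(resid, 0.9)  # half-width targeting ~90% coverage

pred = slope * 0.5 + intercept
interval = (pred - q, pred + q)
print(interval)
```

Shifting the split toward training (e.g. 75/25) improves the point predictor but leaves fewer residuals for estimating the quantile, which is exactly the trade-off the experiments above probe.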
B
For fine-tuning, we create training, validation and test splits for each of the three datasets of the downstream tasks with the 8:1:1 ratio at the piece level (i.e., all the 512-token sequences from the same piece are in the same split). With the same batch size of 12, we fine-tune our pre-trained model for each ta...
Fig. 2(b) shows the fine-tuning architecture for note-level classification. While the Transformer uses the hidden vectors to recover the masked tokens during pre-training, it has to predict the label of an input token during fine-tuning, by learning from the labels provided in the training data of the downstream task ...
In our experiments, we will use the same pre-trained model parameters to initialise the models for different downstream tasks. During fine-tuning, we fine-tune the parameters of all the layers, including the self-attention and token embedding layers.
For fine-tuning, we create training, validation and test splits for each of the three datasets of the downstream tasks with the 8:1:1 ratio at the piece level (i.e., all the 512-token sequences from the same piece are in the same split). With the same batch size of 12, we fine-tune our pre-trained model for each ta...
We now present our PTM, a pre-trained Transformer encoder with 111M parameters for piano MIDI music. We adopt as the model backbone the BERT$_{\text{BASE}}$ model \parencite{bert}, a classic multi-layer bi-directional Transformer encoder with 12 layers of multi-head sel...
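A piece-level 8:1:1 split, where every 512-token sequence follows its source piece into the same partition, can be sketched as below. Piece names and the three-sequences-per-piece layout are synthetic placeholders, not the actual corpus.

```python
import random

# Sketch: piece-level 8:1:1 split so that all sequences from one
# piece land in the same partition (no leakage across splits).
random.seed(0)
pieces = [f"piece_{i}" for i in range(100)]
random.shuffle(pieces)

n = len(pieces)
train = set(pieces[: int(0.8 * n)])
valid = set(pieces[int(0.8 * n): int(0.9 * n)])
test = set(pieces[int(0.9 * n):])

split_of = {p: "train" for p in train}
split_of.update({p: "valid" for p in valid})
split_of.update({p: "test" for p in test})

# Sequences inherit the split of their source piece.
sequences = [(p, f"{p}_seq{j}") for p in pieces for j in range(3)]
counts = {"train": 0, "valid": 0, "test": 0}
for p, _ in sequences:
    counts[split_of[p]] += 1
print(counts)  # {'train': 240, 'valid': 30, 'test': 30}
```

Splitting at the sequence level instead would let nearly identical windows from one piece appear in both training and test, inflating evaluation scores.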
B
And of course we have to use a different color for each vertex, so $BBC_{\lambda}(K_{n},T)\geq n$ – thus $BBC_{\lambda}(K_{n},T)$...
To achieve the same result for forest backbones we only need to add some edges that would make the backbone connected and spanning. However, we can always make a forest connected by adding edges between some leaves and isolated vertices and we will not increase the maximum degree of the forest, as long as $\Delta(F)\geq 2$...
The linear running time follows directly from the fact that we compute $c$ only once and we can additionally pass through the recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen...
In this section we will proceed as follows: we first introduce the so-called red-blue-yellow $(k,l)$-decomposition of a forest $F$ on $n$ vertices, which finds a set $Y$ of size at most $l$ such that we can split $V(F)\setminus Y$...
In this paper, we turn our attention to the special case when the graph is complete (denoted $K_{n}$) and its backbone is a (nonempty) tree or a forest (which we will denote by $T$ and $F$, respectively). Note that it has a natural in...
A