context
stringlengths
250
5.39k
A
stringlengths
250
7.25k
B
stringlengths
250
4.32k
C
stringlengths
250
8.2k
D
stringlengths
250
11.4k
label
stringclasses
4 values
dd⁢x⁢Rnm⁢(x)𝑑𝑑𝑥superscriptsubscript𝑅𝑛𝑚𝑥\displaystyle\frac{d}{dx}R_{n}^{m}(x)divide start_ARG italic_d end_ARG start_ARG italic_d italic_x end_ARG italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) =\displaystyle==
(−1)a(b−1−a)[d3d⁢x3xmF(a,b;c;z)+3d2d⁢x2xmdd⁢xF(a,b;c;z)\displaystyle(-1)^{a}{b-1\choose-a}\Big{[}\frac{d^{3}}{dx^{3}}x^{m}F(a,b;c;z)+% 3\frac{d^{2}}{dx^{2}}x^{m}\frac{d}{dx}F(a,b;c;z)( - 1 ) start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT ( binomial start_ARG italic_b - 1 end_ARG start_ARG - italic_a end_ARG ) [ div...
2\frac{d}{dx}x^{m}\frac{d}{dx}F(a,b;c;z)( - 1 ) start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT ( binomial start_ARG italic_b - 1 end_ARG start_ARG - italic_a end_ARG ) [ divide start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_d italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCR...
+3dd⁢xxmd2d⁢x2F(a,b;c;z)+xmd3d⁢x3F(a,b;c;z)].\displaystyle\quad\quad+3\frac{d}{dx}x^{m}\frac{d^{2}}{dx^{2}}F(a,b;c;z)+x^{m}% \frac{d^{3}}{dx^{3}}F(a,b;c;z)\Big{]}.+ 3 divide start_ARG italic_d end_ARG start_ARG italic_d italic_x end_ARG italic_x start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT divide start_ARG italic...
(−1)a⁢(b−1−a)⁢[dd⁢x⁢xm⁢F⁢(a,b;c;z)+xm⁢dd⁢x⁢F⁢(a,b;c;z)];superscript1𝑎binomial𝑏1𝑎delimited-[]𝑑𝑑𝑥superscript𝑥𝑚𝐹𝑎𝑏𝑐𝑧superscript𝑥𝑚𝑑𝑑𝑥𝐹𝑎𝑏𝑐𝑧\displaystyle(-1)^{a}{b-1\choose-a}\Big{[}\frac{d}{dx}x^{m}F(a,b;c;z)+x^{m}% \frac{d}{dx}F(a,b;c;z)\Big{]};( - 1 ) start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRI...
D
The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in...
Therefore, we decided to base the procedures we present on a set of generators very close to the LGO standard generators. Note, that the choice of the generating set has no impact on the results as it is always possible to determine an MSLP which computes the LGO standard generators given an arbritary generating set a...
There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and application...
The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in...
The first step of the algorithm is the one-off computation of T2subscript𝑇2T_{2}italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT from the LGO standard generators of SL⁢(d,q)SL𝑑𝑞\textnormal{SL}(d,q)SL ( italic_d , italic_q ). The length and memory requirement of an MSLP for this step is as follows.
A
λ~hf=−P⁢(T⁢λ0+T⁢λ~h0+T~⁢g).superscriptsubscript~𝜆ℎ𝑓𝑃𝑇superscript𝜆0𝑇subscriptsuperscript~𝜆0ℎ~𝑇𝑔\tilde{\lambda}_{h}^{f}=-P(T\lambda^{0}+T\tilde{\lambda}^{0}_{h}+\tilde{T}{g}).over~ start_ARG italic_λ end_ARG start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_f end_POSTSUPERSCRIPT = - it...
It is essential for the performing method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if w𝑤witalic_w has local support, so instead of solving the problems in the whole domain it would be reasonable to solve it locally using patches of elements. We note that the ide...
To show the existence and uniqueness of solutions for (21), we proceed by parts. The existence of solution for the first equation follows from Lemma LABEL:l:lrmsystem. Solving the second equation is equivalent to (22), and such system is well-posed due to the coercivity of (⋅,T⋅)∂𝒯H(\cdot,T\cdot)_{{\partial\mathcal{T}...
We start by recasting the continuous problem in a weak formulation that depends on a polyhedral regular mesh 𝒯Hsubscript𝒯𝐻{\mathcal{T}_{H}}caligraphic_T start_POSTSUBSCRIPT italic_H end_POSTSUBSCRIPT and let ℱHsubscriptℱ𝐻\mathcal{F}_{H}caligraphic_F start_POSTSUBSCRIPT italic_H end_POSTSUBSCRIPT be the set of faces...
Solving (22) efficiently is crucial for the good performance of the method, since it is the only large dimensional system of (21), in the sense that its size grows with order of h−dsuperscriptℎ𝑑h^{-d}italic_h start_POSTSUPERSCRIPT - italic_d end_POSTSUPERSCRIPT.
D
We think Alg-A is better in almost every aspect. This is because it is essentially simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity O⁢(n)𝑂𝑛O(n)italic_O ( italic_n ) than the others:
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases.
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on P𝑃Pitalic_P’s vertices and (2) searching the next candidate from a given one is much easier – the code length for this is 1:7 in Alg-A and in Alg-CM.
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Alg-A computes at most n𝑛nitalic_n candidate triangles (proof is trivial and omitted) whereas Alg-CM computes at most 5⁢n5𝑛5n5 italic_n triangles (proved in [8]) and so as Alg-K. (by experiment, Alg-CM and Alg-K have to compute roughly 4.66⁢n4.66𝑛4.66n4.66 italic_n candidate triangles.)
D
Single Tweet Classification Results. The experimental results of are shown in Table 2. The best performance is achieved by the CNN+LSTM model with a good accuracy of 81.19%. The non-neural network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy and the other two non-neural models are eve...
CrowdWisdom: Similar to [18], the core idea is to leverage the public’s common sense for rumor detection: If there are more people denying or doubting the truth of an event, this event is more likely to be a rumor. For this purpose,  [18] use an extensive list of bipolar sentiments with a set of combinational rules. In...
For analyzing the employed features, we rank them by importances using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated to rumors and the sentiment associated to real events in relevant tweets. In specific, the average polarity score of new...
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet le...
. As shown in Table 5, CreditScore is the best feature in overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit afte...
B
\left(\sqrt{\frac{\log\log t}{\log t}}\right)∥ divide start_ARG bold_w ( italic_t ) end_ARG start_ARG ∥ bold_w ( italic_t ) ∥ end_ARG - divide start_ARG over^ start_ARG bold_w end_ARG end_ARG start_ARG ∥ over^ start_ARG bold_w end_ARG ∥ end_ARG ∥ = italic_O ( square-root start_ARG divide start_ARG roman_log roman_log i...
where the residual 𝝆k⁢(t)subscript𝝆𝑘𝑡\boldsymbol{\rho}_{k}(t)bold_italic_ρ start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ( italic_t ) is bounded and 𝐰^ksubscript^𝐰𝑘\hat{\mathbf{w}}_{k}over^ start_ARG bold_w end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT is the solution of the K-class SVM:
where 𝝆⁢(t)𝝆𝑡\boldsymbol{\rho}\left(t\right)bold_italic_ρ ( italic_t ) has a bounded norm for almost all datasets, while in zero measure case 𝝆⁢(t)𝝆𝑡\boldsymbol{\rho}\left(t\right)bold_italic_ρ ( italic_t ) contains additional O⁢(log⁡log⁡(t))𝑂𝑡O(\log\log(t))italic_O ( roman_log roman_log ( italic_t ) ) componen...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
In some non-degenerate cases, we can further characterize the asymptotic behavior of 𝝆⁢(t)𝝆𝑡\boldsymbol{\rho}\left(t\right)bold_italic_ρ ( italic_t ). To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated
D
At 18:22 CEST, the first tweet was posted. There might be some certain delay, as we retrieve only tweets in English and the very first tweets were probably in German. The tweet is ”Sadly, i think there’s something terrible happening in #Munich #Munchen. Another Active Shooter in a mall. #SMH”.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet le...
For analysing the employed features, we rank them by importances using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated to rumors and the sentiment associated to real events in relevant tweets. In specific, the average polarity score of news even...
In this work, we present a deep analysis on the feature variants over 48 hours for the rumor detection task. The results show that the low-level hidden representation of tweets feature is at least the second best features over time. We also derive explanations on the low performance of supposed-to-be-strong high-level...
the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, that could capture more hidden meaningful signal than only enquiries to debunk rumor...
A
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and types (Breaking and Anticipate) set of entity-bearing queries. This allows us to evaluate the feature performance i.e., salience and timeliness, with time and type ...
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at aspect-level, i.e., time and event type. Thus, we...
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, S⁢V⁢Ma⁢l⁢l𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT, will all features, gives a rather stable performance for both NDCG and Recall...
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
A
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
the fundamental operation in the proposed SMC-based MAB Algorithm 1 is to sequentially update the random measure pM⁢(θt,a|ℋ1:t)subscript𝑝𝑀conditionalsubscript𝜃𝑡𝑎subscriptℋ:1𝑡p_{M}(\theta_{t,a}|\mathcal{H}_{1:t})italic_p start_POSTSUBSCRIPT italic_M end_POSTSUBSCRIPT ( italic_θ start_POSTSUBSCRIPT italic_t , itali...
SMC weights are updated based on the likelihood of the observed rewards: wt,a(m)∝pa⁢(yt|xt,θt,a(m))proportional-tosuperscriptsubscript𝑤𝑡𝑎𝑚subscript𝑝𝑎conditionalsubscript𝑦𝑡subscript𝑥𝑡superscriptsubscript𝜃𝑡𝑎𝑚w_{t,a}^{(m)}\propto p_{a}(y_{t}|x_{t},\theta_{t,a}^{(m)})italic_w start_POSTSUBSCRIPT italic_t , it...
The techniques used in these success stories are grounded on statistical advances on sequential decision processes and multi-armed bandits. The MAB crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making.
we propagate forward the sequential random measure pM⁢(θt,a|ℋ1:t)subscript𝑝𝑀conditionalsubscript𝜃𝑡𝑎subscriptℋ:1𝑡p_{M}(\theta_{t,a}|\mathcal{H}_{1:t})italic_p start_POSTSUBSCRIPT italic_M end_POSTSUBSCRIPT ( italic_θ start_POSTSUBSCRIPT italic_t , italic_a end_POSTSUBSCRIPT | caligraphic_H start_POSTSUBSCRIPT 1 : ...
C
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal...
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal...
Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10 minute intervals with at least 10 steps tracked by the google fit app.
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measuremtents after the meals is within this range, while patient 12 has only t...
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries vary between 2 per day for patient 10 and 5 per day for patient 14.
C
Table 4: The results after evaluating our model with respect to its computational efficiency. We tested five versions trained on different eye tracking datasets, each receiving input images of their preferred sizes in pixels (px). After running each network on 10,000 test set instances from the ImageNet database for 10...
Table 5: Details regarding the hardware and software specifications used throughout our evaluation of computational efficiency. The system ran under the Debian 9 operating system and we minimized usage of the computer during the experiments to avoid interference with measurements of inference speed.
We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In case a reimplementation was not possible, we faithfully estimated a lowe...
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met...
The proposed encoder-decoder model was evaluated on five publicly available eye tracking datasets that yielded qualitative and quantitative results. First, we provide a brief description of the images and empirical measurements utilized in this study. Second, the different metrics commonly used to assess the predictive...
A
Finally, we have to show that in this pd-marking scheme, the maximum number of activeactive\operatorname{\texttt{active}}act positions is bounded by 2⁢k+12𝑘12k+12 italic_k + 1. This is obviously true at step p1subscript𝑝1p_{1}italic_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT. Now let s𝑠sitalic_s with 1≤s≤|α|−11𝑠𝛼11...
j𝑗jitalic_j joins two blocks of size 1111: the number of activeactive\operatorname{\texttt{active}}act positions increases by 1111. This is due to the fact that by setting j𝑗jitalic_j to activeactive\operatorname{\texttt{active}}act, we do not create any internal activeactive\operatorname{\texttt{active}}act position...
We first prove pw⁡(Gα)≤2⁢loc⁡(α)pwsubscript𝐺𝛼2loc𝛼\operatorname{\textsf{pw}}(G_{\alpha})\leq 2\operatorname{\textsf{loc}}(\alpha)pathwidth ( italic_G start_POSTSUBSCRIPT italic_α end_POSTSUBSCRIPT ) ≤ 2 loc ( italic_α ). Intuitively speaking, we will translate the stages of a marking sequence σ𝜎\sigmaitalic_σ for α...
This completes the definition of the marking scheme. Figure 7 contains an example of how step ps+1subscript𝑝𝑠1p_{s+1}italic_p start_POSTSUBSCRIPT italic_s + 1 end_POSTSUBSCRIPT is obtained from step pssubscript𝑝𝑠p_{s}italic_p start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT. In this example, we first set extending po...
In the first phase of the marking scheme, i. e., the phase where we only set extending positions to activeactive\operatorname{\texttt{active}}act, the following different situations can arise, whenever we set some position j𝑗jitalic_j to activeactive\operatorname{\texttt{active}}act (see Figure 7 for an illustration)...
D
Xia et al.[88] compared two CNNs, with three and two layers, that were fed with spectrograms of signals from AFDB using Short-Term Fourier Transform and stationary WT respectively. Their experiments concluded that the use of stationary WT achieves a slightly better accuracy for this task.
Then, they segmented the RR intervals to 30 samples each and fed them to a network with two layers followed by a pooling layer and a LSTM layer with 100 units. The method was validated on MITDB and NSRDB achieving an accuracy that indicates its generalizability.
They trained a five layer CNN in a sequence of short windows with movement artifacts and its output was combined with features calculated based on beat-to-beat variability and the signal quality index. An accuracy of 91.8% in AF detection was achieved by the method and in combination with its computational efficiency i...
Gotlibovych et al.[117] trained an one layer CNN network followed by a LSTM using 180h of PPG wearable data to detect AF. Use of the LSTM layer allows the network to learn variable-length correlations in contrast with the fixed length of the convolutional layer.
Experiments by the authors showed that the three layer 1D CNN created better and more stable results. In[101] the authors trained a network with an one convolutional layer with dropout followed by two RNNs to identify stress using short-term ECG data.
A
Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using...
Oh et al. (2015) and Chiappa et al. (2017) show that learning predictive models of Atari 2600 environments is possible using appropriately chosen deep learning architectures. Impressively, in some cases the predictions maintain low L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error over timespans...
Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster...
Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using...
have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control. Our video models of Atari en...
A
Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification. Giri et al. [12] extracted statistical and information measures from frequency domain to train an 1D CNN with two layers to identify ischemic stroke.
Figure 1: High level overview of a feed-forward pass of the combined methods. xisubscript𝑥𝑖x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is the input, m𝑚mitalic_m is the Signal2Image module, bdsubscript𝑏𝑑b_{d}italic_b start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT is the 1D or 2D architecture ‘base ...
The names of the classes are depicted at the right along with the predictions for this example signal. The image between m𝑚mitalic_m and bdsubscript𝑏𝑑b_{d}italic_b start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT depicts the output of the one layer CNN Signal2Image module, while the ‘signal as image’ and spectrogram h...
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of a S2I is whether it consists of trainable para...
The spectrogram S2I results are in contrary with the expectation that the interpretable time-frequency representation would help in finding good features for classification. We hypothesize that the spectrogram S2I was hindered by its lack of non-trainable parameters.
C
This paper presents a novel methodology for achieving autonomous locomotion mode transitions in quadruped wheel/track-legged hybrid robots, taking into account both internal states of the robot and external environmental conditions. Our emphasis is on the “articulated wheel/track robot” [15], where the wheels or tracks...
The implementation of the energy criterion strategy has proven effective in facilitating autonomous locomotion mode transitions for the Cricket robot when negotiating steps of varying heights. Compared to step negotiation purely in rolling locomotion mode, the proposed strategy demonstrated significant enhancements in...
The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we ...
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result...
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal...
B
It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, as all information, are prone to transmission errors. In addition, the known advice models often allow information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution....
All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat...
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ...
As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation. Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online alg...
The above observations were recently made in the context of online algorithms with machine-learned predictions. Lykouris and Vassilvitskii [24] and Purohit et al. [29] show how to use predictors to design and analyze algorithms with two properties: (i) if the predictor is good, then the online algorithm should perform ...
D
Since ⊕1subscriptdirect-sum1\oplus_{1}⊕ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT is the addition, instead of processing the whole document again, we could update the already computed vector, (0.15,3.65,2.0,0.15)0.153.652.00.15(0.15,3.65,2.0,0.15)( 0.15 , 3.65 , 2.0 , 0.15 ), by adding it to the new sentence confidence v...
However, this is a vital aspect, especially when the task involves sensitive or risky decisions in which, usually, people are involved. In Figure 9 is shown an example of a piece of what could be a visual description of the classification process for the subject 9579292929Note that this is the same subject who was prev...
Another important aspect of this incremental approach is that since this confidence vector is a value that “summarizes the past history”, keeping track of how this vector changes over time should allow us to derive simple and clear rules to decide when the system should make an early classification. As an example of th...
In this pilot task, classifiers must decide, as early as possible, whether each user is depressed or not based on his/her writings. In order to accomplish this, during the test stage and in accordance with the pilot task definition, the subject’s writings were divided into 10 chunks —thus each chunk contained 10% of th...
We could make use of this “dynamic information” to apply certain policies to decide when to classify subjects as depressed. For example, one of such a policy would be “classify a subject as positive when the accumulated positive value becomes greater than the negative one” —in which case, note that our subject would be...
B
Due to the larger compressed error introduced by RBGS compared with top-s𝑠sitalic_s when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge. Xu and Huang (2022) propose DEF-A to solve the convergence problem by using detached error fee...
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mo...
We improve DEF-A by changing its local momentum to global momentum, getting a new method called GMC+. The detail of GMC+ is shown in Algorithm 2. We also adopt parameter server architecture for illustration. GMC+ can also be easily implemented on all-reduce frameworks.
We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is ...
Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-red...
B
For the same task as the previous one but for 2D, we use MNIST which consists of a training dataset of 60000600006000060000 greyscale images with handwritten digits and a test dataset of 10000100001000010000 images each one having size of 28×28282828\times 2828 × 28.
The first two fully connected layers are followed by a ReLU while the last one produces the predictions. The CNN is trained for an additional 5555 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as the loss function.
During supervised learning the weights of the kernels are frozen and a one layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs. The FNN is trained for an additional 5555 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as...
Using backpropagation [2] the gradient of each weight w.r.t. the error of the output is efficiently calculated and passed to an optimization function such as Stochastic Gradient Descent or Adam [3] which updates the weights making the output of the network converge to the desired output. DNNs were successful in utilizi...
From the point of view of Sparse Dictionary Learning, SANs kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input the kernels of SANs are ECG beats) and the sparse activation map as the representation. The fact that SANs are wide...
B
We organize this paper as follows. In section II, we introduce the related works. In section III, we first introduce the UAV’s power control in the multi-channel communication and coverage problems, then form a system model in highly dynamic scenarios. Moreover, in section IV, we formulate our work as an aggregative ga...
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR and coverage size are key points to decide the performance of a UAV network [6]. Respectively, power control determines the signal to energy consumption and noise ratio (SNR) ...
With the rapid commercialization of UAVs, a lot of research has emerged in this field [16]. To efficiently deploy UAVs, studies have been made to find out UAV distribution on network graph [9] and a graphical model has been proposed for channels reuse [17]. The resource allocation of channel and time is also a hot are...
In post-disaster scenarios, a great many of UAVs are required to support users [4]. Therefore, we propose aggregative game theory into such scenarios and permit UAV to learn in the constrained strategy sets. Because the aggregative game can integrate the impact of all other UAVs on one UAV, it reduces the complexity o...
When UAVs need communications, and the signal to noise rate (SNR) mainly determines the quality of service. UAVs’ power and inherent noise are interferences for each other. Since there are hundreds of UAVs in the system, each UAV is unable to sense all the other UAVs’ power explicitly, but only sense and measure aggreg...
B
, 𝐏2¯=(v¯z/r¯)⁢𝐳^¯subscript𝐏2subscript¯𝑣𝑧¯𝑟^𝐳\overline{\mathbf{P}_{2}}=\left(\overline{v}_{z}\,/\,\overline{r}\right)% \widehat{\mathbf{z}}over¯ start_ARG bold_P start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_ARG = ( over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT / over¯ start_ARG i...
\theta}\cdot\left(\overline{\widehat{\nabla}}\,\,\overline{f}\right)\right)\,/% \,\left(\mu_{0}\,\overline{r}\,\,\overline{\rho}\right)\right\}\right\}= italic_m start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT over¯ start_ARG italic_d italic_V end_ARG start_POSTSUPERSCRIPT italic_T end_POSTSUPERSCRIPT ∗ { - ( over¯ star...
\mathbf{z}}over¯ start_ARG bold_P start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT end_ARG = ( over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT / over¯ start_ARG italic_r end_ARG ) over^ start_ARG bold_r end_ARG + ( over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT / ...
\widehat{Dz}}*\overline{v}_{r}\right)\right)}\biggr{]}\,/\,\overline{r}= [ start_UNDERACCENT end_UNDERACCENT start_ARG - 2 over^ start_ARG over¯ start_ARG italic_D italic_z end_ARG end_ARG ∗ ( over^ start_ARG italic_μ end_ARG over^ start_ARG italic_r end_ARG ( over¯ start_ARG over^ start_ARG italic_D italic_z end_ARG e...
(\overline{\widehat{\nabla}}\,\,\overline{\omega}\right)\right)^{2}= over^ start_ARG over¯ start_ARG italic_W end_ARG end_ARG ∗ [ over^ start_ARG italic_μ end_ARG { 2 ( over¯ start_ARG over^ start_ARG italic_D italic_r end_ARG end_ARG ∗ over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT ) s...
B
When using the framework, one can further require reflexivity on the comparability functions, i.e. f⁢(xA,xA)=1A𝑓subscript𝑥𝐴subscript𝑥𝐴subscript1𝐴f(x_{A},x_{A})=1_{A}italic_f ( italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ) = 1 start_POSTSUBSCRIP...
fA⁢(u,v)=fB⁢(u,v)={1if ⁢u=v≠nullaif ⁢u≠null,v≠null and ⁢u≠vbif ⁢u=v=null0otherwise.subscript𝑓𝐴𝑢𝑣subscript𝑓𝐵𝑢𝑣cases1if 𝑢𝑣null𝑎formulae-sequenceif 𝑢null𝑣null and 𝑢𝑣𝑏if 𝑢𝑣null0otherwise.f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},v\neq\texttt{null}...
Intuitively, if an abstract value xAsubscript𝑥𝐴x_{A}italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT of ℒAsubscriptℒ𝐴\mathcal{L}_{A}caligraphic_L start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT is interpreted as 1111 (i.e., equality) by hAsubscriptℎ𝐴h_{A}italic_h start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT...
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows to consider absent values as possibly
When using the framework, one can further require reflexivity on the comparability functions, i.e. f⁢(xA,xA)=1A𝑓subscript𝑥𝐴subscript𝑥𝐴subscript1𝐴f(x_{A},x_{A})=1_{A}italic_f ( italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ) = 1 start_POSTSUBSCRIP...
C
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene...
The results in Figure 3 show that using DQN with different Dropout methods result in better-preforming policies and less variability as the reduced standard deviation between the variants indicate to. In table 1, Wilcoxon Sign-Ranked test was used to analyze the effect of Variance before applying Dropout (DQN) and aft...
Deep neural networks are the state of the art learning models used in artificial intelligence. The large number of parameters in neural networks make them very good at modelling and approximating any arbitrary function. However the larger number of parameters also make them particularly prone to over-fitting, requirin...
It’s the original Dropout method. It was introduced in 2012. Standard Dropout provides a simple technique for avoiding over-fitting in fully connected neural networks[12]. During each training phase, each neuron is excluded from the network with a probability p. Once trained, in the testing phase the full network is u...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments, this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning namely the supervised learning and the unsupervised learning. Rein...
B
We group the semantic image segmentation literature into six different categories based on the nature of their contributions: architectural improvements, optimization function based improvements, data synthesis based improvements, weakly supervised models, sequenced models, and multi-task models. Figure 1 indicates th...
In the following sections, we discuss deep semantic image segmentation improvements under different categories visualized in Figure 1. For each category, we first review the improvements on non-medical datasets, and in a subsequent section, we survey the improvements for medical images.
In contrast to natural images, it is difficult to tabulate and summarize the performance of medical image segmentation methods because of the vast number of (a) medical imaging modalities and (b) medical image segmentation datasets. Figure 15 presents a breakdown of the various attributes of the medical image segmentat...
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic...
We group the semantic image segmentation literature into six different categories based on the nature of their contributions: architectural improvements, optimization function based improvements, data synthesis based improvements, weakly supervised models, sequenced models, and multi-task models. Figure 1 indicates th...
A
Problems such as graph classification and graph regression are characterized by samples of graphs that, generally, have a variable number of vertices. In order to apply MP and pooling operations when training a GNN on mini-batches, one solution is to perform zero-padding and obtain all graphs with Nmaxsubscript𝑁maxN_{...
To train the GNN on mini-batches of graphs with a variable number of nodes, we consider the disjoint union of the graphs in each mini-batch and train the GNN on the combined Laplacians and graph signals. See the supplementary material for an illustration.
However, this solution is particularly inefficient in terms of memory cost, especially when there are many graphs with less than Nmaxsubscript𝑁maxN_{\text{max}}italic_N start_POSTSUBSCRIPT max end_POSTSUBSCRIPT vertices. A more efficient solution is to build the disjoint union of the graphs in each mini-batch and trai...
However, this solution is particularly inefficient in terms of memory cost, especially when there are many graphs with less than Nmaxsubscript𝑁maxN_{\text{max}}italic_N start_POSTSUBSCRIPT max end_POSTSUBSCRIPT vertices. A more efficient solution is to build the disjoint union of the graphs in each mini-batch and trai...
Problems such as graph classification and graph regression are characterized by samples of graphs that, generally, have a variable number of vertices. In order to apply MP and pooling operations when training a GNN on mini-batches, one solution is to perform zero-padding and obtain all graphs with Nmaxsubscript𝑁maxN_{...
B
The input data is normalized to [−1,1]11[-1,1][ - 1 , 1 ]. For generating a wide variety of data, the prioritization of the current path wpath∼1+|𝒩⁢(0,5)|similar-tosubscript𝑤path1𝒩05w_{\text{path}}\sim 1+\lvert\mathcal{N}(0,5)\rvertitalic_w start_POSTSUBSCRIPT path end_POSTSUBSCRIPT ∼ 1 + | caligraphic_N ( 0 , 5 ) |...
In all our experiments, stochastic gradient descent with Nesterov momentum as optimizer and cross-entropy loss are used. The initial learning rate is set to 0.10.10.10.1, momentum to 0.90.90.90.9, and weight decay to 0.00050.00050.00050.0005. The batch size is set to 128128128128 and 512512512512, respectively, for gen...
A new random forest is trained every 100100100100 epochs to average the influence of the stochastic process, and the generated data samples are mixed. In the following, training on generated data will be denoted as NRFI (gen) and training on generated and original data as NRFI (gen+ori). The fraction of NRFI data is se...
fraction of NRFI data wgensubscript𝑤genw_{\text{gen}}italic_w start_POSTSUBSCRIPT gen end_POSTSUBSCRIPT is varied, which weights the loss of the generated data. Accordingly, the weight for the original data is set to wori=1−wgensubscript𝑤ori1subscript𝑤genw_{\text{ori}}=1-w_{\text{gen}}italic_w start_POSTSUBSCRIPT or...
Figure 6: Analyzing the influence of training with original data, NRFI data, and combinations of both for different number of samples per class. Using only NRFI data (wgen=100%subscript𝑤genpercent100w_{\text{gen}}=100\%italic_w start_POSTSUBSCRIPT gen end_POSTSUBSCRIPT = 100 %) achieves better results than using only...
B
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
for any function f:𝒮→ℝ:𝑓→𝒮ℝf:{\mathcal{S}}\rightarrow\mathbb{R}italic_f : caligraphic_S → blackboard_R. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al....
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
D
In MobileNet (Howard et al., 2017a) depthwise separable convolutions are used to split a standard convolution in another way: (i) a depthwise convolution and (ii) a 1×1111\times 11 × 1 convolution. The depthwise convolution applies a K×K𝐾𝐾K\times Kitalic_K × italic_K filter to each channel separately without taking t...
Similar ideas are used in SqueezeNet (Iandola et al., 2016) which employs 1×1111\times 11 × 1 convolutions to reduce the number of input channels of subsequent parallel 1×1111\times 11 × 1 and 3×3333\times 33 × 3 convolutions. In addition, SqueezeNet uses the global average pooling output of per-class channels directly...
In MobileNet (Howard et al., 2017a) depthwise separable convolutions are used to split a standard convolution in another way: (i) a depthwise convolution and (ii) a 1×1111\times 11 × 1 convolution. The depthwise convolution applies a K×K𝐾𝐾K\times Kitalic_K × italic_K filter to each channel separately without taking t...
In particular, the residual path performs a 1×1111\times 11 × 1 convolution to increase the number of channels, followed by a cheap depthwise 3×3333\times 33 × 3 convolution, followed by another 1×1111\times 11 × 1 convolution to reduce the number of channels again. They show that their inverted structure is more memor...
A typical residual block with bottleneck structure in ResNet (He et al., 2016) contains a 1×1111\times 11 × 1 bottleneck convolution to reduce the number of channels, followed by a 3×3333\times 33 × 3 convolution, followed by another 1×1111\times 11 × 1 convolution to restore the original number of channels again. Cont...
D
If X𝑋Xitalic_X is a hyperbolic geodesic metric space, then for any k≥1𝑘1k\geq 1italic_k ≥ 1 and I=(u,v]∈barckVR⁢(X;𝔽)𝐼𝑢𝑣subscriptsuperscriptbarcVR𝑘𝑋𝔽I=(u,v]\in\mathrm{barc}^{\mathrm{VR}}_{k}(X;\mathbb{F})italic_I = ( italic_u , italic_v ] ∈ roman_barc start_POSTSUPERSCRIPT roman_VR end_POSTSUPERSCRIPT start_PO...
In Section 8, we reprove Rips and Gromov’s result about the contractibility of the Vietoris-Rips complex of hyperbolic geodesic metric spaces, by using our method consisting of isometric embeddings into injective metric spaces. As a result, we will be able to bound the length of intervals in Vietoris-Rips persistence b...
As proved in [68] via the notion of core of a metric graph or as a consequence of [50, Proposition 2.2], the unit circle 𝕊1superscript𝕊1\mathbb{S}^{1}blackboard_S start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT and the join X𝑋Xitalic_X of 𝕊1superscript𝕊1\mathbb{S}^{1}blackboard_S start_POSTSUPERSCRIPT 1 end_POSTSUPER...
A hyperconvex metric space is one where any collection of balls with non-empty pairwise intersections forces the non-empty intersection of all balls. These were studied by Aronszajn and Panitchpakdi [8] who showed that every hyperconvex space is an absolute 1-Lipschitz retract. Isbell [52] proved that every metric spac...
Observe that metric trees are both 00-hyperbolic and hyperconvex. A recent paper by Joharinad and Jost [53] analyzes the persistent homology of metric spaces satisfying the hyperconvexity condition (which is equivalent to injectivity) as well as that of spaces satisfying a relaxed version of hyperconvexity.
D
Adaptive PCP vs. PCP   Although it is not uncommon to find tools that use PCP views together with DR-based scatterplots (e.g., iPCA [69]) with various schemes for re-ordering and prioritizing the axes (e.g., [70, 71]), the arrangement and presentation of these PCP’s are usually static in order to reflect attributes of ...
Apart from the adaptive filtering and re-ordering of the axes, we maintained a rather standard visual presentation of the PCP plot, to make sure it is as easy and natural as possible for users to inspect it. The colors reflect the labels of the data with the same colors as in the overview (Subsection 4.2), when availab...
Adaptive Parallel Coordinates Plot   Our first proposal to support the task of interpreting patterns in a t-SNE projection is an Adaptive PCP [59], as shown in Figure 1(k). It highlights the dimensions of the points selected with the lasso tool, using a maximum of 8 axes at any time, to avoid clutter. The shown axes (...
Adaptive PCP vs. PCP   Although it is not uncommon to find tools that use PCP views together with DR-based scatterplots (e.g., iPCA [69]) with various schemes for re-ordering and prioritizing the axes (e.g., [70, 71]), the arrangement and presentation of these PCP’s are usually static in order to reflect attributes of ...
To briefly present the benefits of using our technique, we employ the Single Proton Emission Computed Tomography (SPECTF) data set [58] with 44 dimensions. In Figure 12, we can observe that the standard PCP is cluttered, especially for the case without any selection. Thus, it is hard to see why the normal class is actu...
D
Neighborhood based differential vector: In this subcategory, each solution is affected only by solutions in its local neighborhood. Table 26 compiles all algorithms that are classified in this subcategory. A notable example in this list is BFOA [148], in which all solutions in the neighborhood impact on the computation...
The second and third most influential algorithms are GA, a very classic algorithm, and DE, a well-known algorithm whose natural inspiration resides only in the evolution of a population. Both have been used by around 5% of all reviewed nature-inspired algorithms, and they are the most representative approach in the Evo...
This category is composed of algorithms that explore the domain search by generating new solutions, not by moving existing ones. This group is a significant ratio (almost 31%) of all proposals, and includes many classical algorithms like GA [98]. A very widely exploited advantage of these methods is the possibility to...
Differential Vector Movement, in which new solutions are produced by a shift or a mutation performed onto a previous solution. The newly generated solution could compete against previous ones, or against other solutions in the population to achieve a space and remain therein in subsequent search iterations. This soluti...
Bearing the above criteria in mind, Figure 5 shows the classification reached after our literature analysis. The plot indicates, for the 518 reviewed algorithms, the number and ratio of proposals classified in each category and subcategory. It can be observed that in most nature- and bio-inspired algorithms, new solut...
B
where φ⁢(⋅)𝜑⋅\varphi(\cdot)italic_φ ( ⋅ ) is certain activation function, A^=D~−12⁢A~⁢D~−12^𝐴superscript~𝐷12~𝐴superscript~𝐷12\hat{A}=\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}over^ start_ARG italic_A end_ARG = over~ start_ARG italic_D end_ARG start_POSTSUPERSCRIPT - divide start_ARG 1 e...
To apply graph convolution on unsupervised learning, GAE is proposed [20]. GAE firstly transforms each node into latent representation (i.e., embedding) via GCN, and then aims to reconstruct some part of the input. GAEs proposed in [20, 29, 22] intend to reconstruct the adjacency via decoder while GAEs developed in [21...
Figure 1: Framework of AdaGAE. k0subscript𝑘0k_{0}italic_k start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update ...
(1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspires us to devise a novel architecture for dec...
Network embedding is a fundamental task for graph type data such as recommendation systems, social networks, etc. The goal is to map nodes of a given graph into latent features (namely embedding) such that the learned embedding can be utilized on node classification, node clustering, and link prediction.
A
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the...
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the...
IPID technique. When spoofing is not filtered the counter on the server will be incremented - which is the test action. At the probing phase the counter’s value will equal or large than the expected value after the increment phase. The repeated measurements ensure that we do not accidentally interpret noise (i.e., pac...
Methodology. We send a DNS request to the tested network from a spoofed IP address belonging to the tested network. If the network does not enforce ingress filtering, the request will arrive at the DNS resolver on that network. A query from a spoofed source IP address will cause the response to be sent to the IP addres...
The challenge here is to accurately probe the increments rate of the IPID value (caused by the packets from other sources not controlled by us), in order to be able to extrapolate the value that will have been assigned to our second probe from a real source IP. This allows us to infer if the spoofed packets incremente...
D
All neural networks in this section were trained using stochastic gradient descent with momentum [24] on the loss function ℒℒ\mathcal{L}caligraphic_L. The learning rate was set to 10−3superscript10310^{-3}10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT and the momentum factor to 0.90.90.90.9. Networks were trained fo...
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first T−1𝑇1T-1italic_T - 1 batches are used for training, while the next unseen batch T𝑇Titalic_T is used for evaluation. When training the context network, subsequences of the training data a...
The skill network approach incorporates all available data into a single training set, disregarding the sequential structure between batches of the dataset. For each batch T𝑇Titalic_T, a network was trained using batches 1111 through T−1𝑇1T-1italic_T - 1 as the training set and evaluated on batch T𝑇Titalic_T.
First, the effect of sensor drift on classification accuracy is demonstrated using classifiers trained on a single batch. For each batch 1111 through 10101010, a feedforward model was trained on that batch. Training of a new model was repeated 30 times on each batch. The accuracy of all classifiers were evaluated on ev...
In order to improve performance, Vergara et al. [7] employed an ensemble technique on the SVM classifiers (Fig. 2B). The same technique was reimplemented and tested on the modified dataset in this paper. The ensemble meant to generalize to batch T𝑇Titalic_T was constructed by training a collection of single-batch cla...
B
Now we can define the tables A(1)superscript𝐴1A^{(1)}italic_A start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT, A(2)superscript𝐴2A^{(2)}italic_A start_POSTSUPERSCRIPT ( 2 ) end_POSTSUPERSCRIPT and A(3)superscript𝐴3A^{(3)}italic_A start_POSTSUPERSCRIPT ( 3 ) end_POSTSUPERSCRIPT that our algorithm uses. Recall that for...
A⁢[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching M.assign𝐴𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B and x is a real num...
A⁢[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching M.assign𝐴𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi and x is a re...
A(2)⁢[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi(2) and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching M.assignsuperscript𝐴2𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B...
A(1)⁢[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi(1) and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching Massignsuperscript𝐴1𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B∈...
B
Let S𝑆Sitalic_S be a (completely) self-similar semigroup and let T𝑇Titalic_T be a finite or free semigroup. Then S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is (completely) self-similar. If furthermore S𝑆Sitalic_S is a (complete) automaton semigroup, then so is S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T.
While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there ...
from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups111Note that the c...
By Corollaries 10 and 11, we have to look into idempotent-free automaton semigroups without length functions in order to find a pair of self-similar (or automaton) semigroups not satisfying the hypothesis of Theorem 6 (or 8), which would be required in order to either relax the hypothesis even further (possibly with a ...
The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing ...
C
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the p...
Without additional regularization, existing VQA models such as the baseline model used in this work: UpDn Anderson et al. (2018), tend to rely on the linguistic priors: P⁢(a|𝒬)𝑃conditional𝑎𝒬P(a|\mathcal{Q})italic_P ( italic_a | caligraphic_Q ) to answer questions. Such models fail on VQA-CP, because the priors in ...
While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented...
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea...
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende...
C
The complete set of documents was divided into 97 languages and an unknown language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.
For the URL model, the words in the URL path were extracted and the tf-idf of each term was recorded to create the features (Baykan et al., 2009). As privacy policy URLs tend to be shorter and have fewer path segments than typical URLs, length and the number of path segments were added as features. Since the classes w...
We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page. We trained three random forest models and fine-tuned a transformer based pretrained language model, namely RoBERTa (Liu et al., 2019). The three random forest mod...
Content Extraction. Manual inspection of the English language web pages showed that they included content other than the main text: often they had a header, a footer, a navigation menu, and banners. We refer to this extra content in a web page as boilerplate. Boilerplate draws away from the focus of the main content i...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)...
C
In our VA system, the user can explore how models perform on each class of the data set, and the performance metrics are instilled into a combined user-driven value. Manifold [66] generates pairs of models and compares them over all classes of a data set, including feature selection. We adopt a similar approach, but in...
Figure 2: The exploration process of ML algorithms. View (a.1) summarizes the performance of all available algorithms, and (a.2) the per-class performance based on precision, recall, and f1-score for each algorithm. (b) presents a selection of parameters for KNN in order to boost the per-class performance shown in (c....
Figure 5(a) presents ensemble \raisebox{-.0pt} {\tiny\bfS3}⃝, with all models still included. Figure 5(a+b) show the same projection but with different color-encodings for two selected performance metrics: f2-score and MCC. They allow us to decide which models are vital in order to stabilize the performance of the ense...
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model leads us to stable design goals and analytical tasks that were realized by StackGenVis. With the c...
For instance, the more recent tool iFuseML [48] operates with prediction errors in order to present ensemble models with more accurate predictions to the users. The comparison of models is very different in our approach: we use preliminary results from performance metrics in order to select the appropriate models that ...
D
We thus have 3333 cases, depending on the value of the tuple (p⁢(v,[010]),p⁢(v,[323]),p⁢(v,[313]),p⁢(v,[003]))𝑝𝑣delimited-[]010𝑝𝑣delimited-[]323𝑝𝑣delimited-[]313𝑝𝑣delimited-[]003(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))( italic_p ( italic_v , [ 010 ] ) , italic_p ( italic_v , [ 323 ] ) , italic_p ( italic_v...
Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of (v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
{0¯,1¯,2¯,3¯,[013],[010],[323],[313],[112],[003],[113]}.¯0¯1¯2¯3delimited-[]013delimited-[]010delimited-[]323delimited-[]313delimited-[]112delimited-[]003delimited-[]113\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],% [112],[003],[113]\}.{ over¯ start_ARG 0 end_ARG , over¯ start_ARG 1 end...
p⁢(v,[013])=p⁢(v,[313])=p⁢(v,[113])=1𝑝𝑣delimited-[]013𝑝𝑣delimited-[]313𝑝𝑣delimited-[]1131p(v,[013])=p(v,[313])=p(v,[113])=1italic_p ( italic_v , [ 013 ] ) = italic_p ( italic_v , [ 313 ] ) = italic_p ( italic_v , [ 113 ] ) = 1. Similarly, when f=[112]𝑓delimited-[]112f=[112]italic_f = [ 112 ],
B
We use Transformer [Vaswani et al., 2017] as the base model in dialogue generation experiment. In Persona, we use pre-trained Glove embedding [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019].
To answer RQ2, we find the fine-tuning epochs for each task in Persona where its BLEU and C Score reaches the best respectively to find the impact of data quantity and the task profile (persona description) on fine-tuning. (Table 1) We cluster the tasks with similar best fine-tuning epoch number and calculate the aver...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
In text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
We use Transformer [Vaswani et al., 2017] as the base model in dialogue generation experiment. In Persona, we use pre-trained Glove embedding [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019].
C
&&&{\mathcal{S}_{k}^{\text{r}}}\mathop{\cap}{\mathcal{S}_{j}^{\text{r}}}=% \emptyset,\thinspace k\neq j.\end{aligned}start_ROW start_CELL end_CELL start_CELL start_UNDERACCENT bold_italic_f start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT , caligraphic_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIP...
From the aforementioned two properties of the CCA, we know that the optimal beamforming and combining vector 𝒇k⁢(t)subscript𝒇𝑘𝑡\boldsymbol{f}_{k}(t)bold_italic_f start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ( italic_t ) and 𝒘k⁢(t)subscript𝒘𝑘𝑡\boldsymbol{w}_{k}(t)bold_italic_w start_POSTSUBSCRIPT italic_k end...
ℱℱ\mathcal{F}caligraphic_F and 𝒲𝒲\mathcal{W}caligraphic_W are the sets of all analog beamforming vectors and combing vectors satisfying the hardware constraints, respectively. In fact, solving the above problem (13) requires the new codebook design and codeword selection/processing strategy. Noting the interdependent...
After the discussion on the characteristics of CCA, in this subsection, we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the ...
The t-UAV needs to select an appropriate codeword 𝒗⁢(i,j,𝒮)𝒗𝑖𝑗𝒮\boldsymbol{v}(i,j,\mathcal{S})bold_italic_v ( italic_i , italic_j , caligraphic_S ) from our proposed codebook 𝒱ksubscript𝒱𝑘\mathcal{V}_{k}caligraphic_V start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT to solve the subarray partition and AWV selecti...
B
We start in this section by giving proofs only for the 1111-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument...
The requirement that M¯|N¯conditional¯𝑀¯𝑁\bar{M}|\bar{N}over¯ start_ARG italic_M end_ARG | over¯ start_ARG italic_N end_ARG is extra big enough ensures that we have enough edges to perform the edge swapping. This completes the proof for case 2 when the assumptions (a1) and (a2) hold.
We start in this section by giving proofs only for the 1111-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument...
To conclude this section, we stress that although the 1111-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
This will be bootstrapped to the multi-color case in later sections. Note that the 1111-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on the left must be connected, via the unique edge relation, to every node on the ri...
D
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe...
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
at the mean-field limit with ϵ→0+→italic-ϵsuperscript0\epsilon\rightarrow 0^{+}italic_ϵ → 0 start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT and m→∞→𝑚m\rightarrow\inftyitalic_m → ∞. Such a correspondence allows us to use the PDE solution ρtsubscript𝜌𝑡\rho_{t}italic_ρ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT in (3....
The proof of Proposition 3.1 is based on the propagation of chaos (Sznitman, 1991; Mei et al., 2018, 2019). In contrast to Mei et al. (2018, 2019), the PDE in (3.4) can not be cast as a gradient flow, since there does not exist a corresponding energy functional. Thus, their analysis is not directly applicable to our se...
The key to our analysis is a mean-field perspective, which allows us to associate the evolution of a finite-dimensional parameter with its limiting counterpart over an infinite-dimensional Wasserstein space (Villani, 2003, 2008; Ambrosio et al., 2008; Ambrosio and Gigli, 2013). Specifically, by exploiting the permutati...
D
Considering that the layer stacks of the 6-layer Transformer are not that deep and vanilla RNNs can play a similar role as LSTMs, is it possible to train the model with a depth-wise RNN rather than the depth-wise LSTM? We first study using different approaches (Transformer, the depth-wise RNN and the depth-wise LSTM) f...
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transform...
Considering that the layer stacks of the 6-layer Transformer are not that deep and vanilla RNNs can play a similar role as LSTMs, is it possible to train the model with a depth-wise RNN rather than the depth-wise LSTM? We first study using different approaches (Transformer, the depth-wise RNN and the depth-wise LSTM) f...
Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de...
Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t...
A
φ∈𝖥𝖮⁢[σ]𝜑𝖥𝖮delimited-[]σ\varphi\in\mathsf{FO}[\upsigma]italic_φ ∈ sansserif_FO [ roman_σ ], if A⊧φmodels𝐴𝜑A\models\varphiitalic_A ⊧ italic_φ, then there exists a finite structure Afinsubscript𝐴finA_{\mathrm{fin}}italic_A start_POSTSUBSCRIPT roman_fin end_POSTSUBSCRIPT such that
\neg(x_{i}=x_{j})\wedge\bigwedge_{0\leq i<n-1}E(x_{i},x_{i+1})\;italic_ψ start_POSTSUBSCRIPT ⊇ italic_P start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT end_POSTSUBSCRIPT ≜ ∃ italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT , … , italic_x start_POSTSUBSCRIPT italic_n - 1 end_POSTSUBSCRIPT . ⋀ start_POSTSUBSCRIPT italic_i...
X≜{x→∈∏i∈IXi∣∀i≤j∈I,xj=fi,j⁢(xi)}≜𝑋conditional-set→𝑥subscriptproduct𝑖𝐼subscript𝑋𝑖formulae-sequencefor-all𝑖𝑗𝐼subscript𝑥𝑗subscript𝑓𝑖𝑗subscript𝑥𝑖X\triangleq\left\{\vec{x}\in\prod_{i\in I}X_{i}\mid\forall i\leq j\in I,x_{j}=% f_{i,j}(x_{i})\right\}italic_X ≜ { over→ start_ARG italic_x end_ARG ∈ ∏ start_POST...
∃x1,…⁢xs.(⋀1≤i≤sα(r)⁢(xi)∧⋀1≤i<j≤sd>2⁢r⁢(xi,xj)),subscript𝑥1.…subscript𝑥𝑠subscript1𝑖𝑠superscript𝛼𝑟subscript𝑥𝑖subscript1𝑖𝑗𝑠superscript𝑑absent2𝑟subscript𝑥𝑖subscript𝑥𝑗\exists x_{1},\dots x_{s}\mathbin{.}\big{(}\bigwedge_{1\leq i\leq s}\alpha^{(r% )}(x_{i})\wedge\bigwedge_{1\leq i<j\leq s}d^{>2r}(x_{i},x_...
_{i}^{1})_{1\leq i\leq n};(y\models\psi_{i}^{2})_{1\leq i\leq n}\right)=1\;.∀ italic_x ∈ italic_X start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_y ∈ italic_X start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_f ( italic_x , italic_y ) ⊧ italic_φ ⇔ italic_β ( ( italic_x ⊧ italic_ψ start_POSTSUBSCRIPT italic_i end_POSTSU...
C
To overcome the above limitations, previous methods exploit more guided features such as the semantic information and distorted lines [9, 10], or introduce the pixel-wise reconstruction loss [11, 12, 13]. However, the extra features and supervisions impose increased memory/computation cost. In this work, we would like...
2. The local-global associate ordinal distortion estimation network considers different scales of distortion features, jointly reasoning the local distortion context and global distortion context. Also, the devised distortion-aware perception layer boosts the feature extraction of different degrees of distortion.
In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed o...
After predicting the distortion labels of a distorted image, it is direct to use the distance metric loss such as ℒ1subscriptℒ1\mathcal{L}_{1}caligraphic_L start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT loss or ℒ2subscriptℒ2\mathcal{L}_{2}caligraphic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT loss to learn the network paramete...
B
We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset 777https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/. We set aside 20% of the samples as the test set and divide the rema...
We use a pre-trained ViT 555https://huggingface.co/google/vit-base-patch16-224-in21k [4] model and fine-tune it on the CIFAR-10/CIFAR-100 datasets. The experiments are implemented based on the Transformers 666https://github.com/huggingface/transformers framework. We fine-tune the model with 20 epochs.
We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD. The experiments are implemented based on the DeepCTR 888https://github.com/shenweichen/DeepCTR-Torch framework.
We compare SNGM with four baselines: MSGD, LARS [34], EXTRAP-SGD [19] and CLARS [12]. For LARS, EXTRAP-SGD and CLARS, we adopt the open source code 222https://github.com/NUS-HPC-AI-Lab/LARS-ImageNet-PyTorch 333http://proceedings.mlr.press/v119/lin20b.html 444https://github.com/slowbull/largebatch
If we avoid these tricks, these methods may suffer from severe performance degradation. For LARS and its variants, the proposal of the layer-wise update strategy is primarily based on empirical observations. Its reasonability and necessity remain doubtful from an optimization perspective.
B
{\mathcal{F}}roman_support ( caligraphic_D ) ⊆ 2 start_POSTSUPERSCRIPT caligraphic_C end_POSTSUPERSCRIPT × blackboard_R start_POSTSUPERSCRIPT caligraphic_F end_POSTSUPERSCRIPT and, in the black-box setting, |𝒟|𝒟|\mathcal{D}|| caligraphic_D | may be uncountably infinite.
The most general way to represent the scenario distribution 𝒟𝒟\mathcal{D}caligraphic_D is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios A𝐴Aitalic_A according to 𝒟𝒟\mathcal{D}caligraphic_D. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the ...
Stochastic optimization, first introduced in the work of Beale [4] and Dantzig [8], provides a way to model uncertainty in the realization of the input data. In this paper, we give approximation algorithms for a family of problems in stochastic optimization, and more precisely in the 2222-stage recourse model [27]. Our...
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any ρ𝜌\rhoitalic_ρ-approximation algorithm for the robust outlier problem into a (ρ+2)𝜌2(\rho+2)( italic_ρ + 2 )-approximation algorithm for the corresponding two-stage sto...
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ...
D
However, a variety of random factors may co-exist in practical environment. In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d...
such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), et al. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function...
Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d. In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments...
III. The co-existence of random graphs, subgradient measurement noises, additive and multiplicative communication noises are considered. Compared with the case with only a single random factor, the coupling terms of different random factors inevitably affect the mean square difference between optimizers’ states and an...
and show how various random factors affect the convergence rate of the algorithm in Theorem III.4. In [6], the convergence rates of the distributed stochastic gradient descent algorithm with precise communications were analyzed under the conditions that the communication graphs are i.i.d. and the mean graph is connecte...
B
Comparing to generalization, bucketization technique [33, 18] maintains excellent information utility because it preserves all the original QI values. However, most existing approaches cannot prevent identity disclosure, and the existence of individuals in published table is likely to be disclosed [27]. Furthermore, t...
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
Note that, the application scenarios of differential privacy and the models of k𝑘kitalic_k-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of k𝑘kitalic_k-anonymity family sanitize the origi...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
In recent years, the massive digital information of individuals has been collected by numerous organizations. The data holders, also known as curators, use the data for data mining tasks, meanwhile they also exchange or publish microdata for further comprehensive research. However, the publication of microdata poses cr...
A
Table 3: PointRend’s performance on testing set (trackB). “EnrichFeat” means enhance the feature representation of coarse mask head and point head by increasing the number of fully-connected layers or its hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvements, we guess...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an...
B
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
We denote by εi:{−1,1}n→{−1,1}:subscript𝜀𝑖→superscript11𝑛11\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT : { - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → { - 1 , 1 } the projection onto the i𝑖iitalic_i-s coordinate: εi⁢(δ1,…,δn)=δisubscript𝜀𝑖subsc...
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
C
However, all of the aforementioned empirical and theoretical works on RL with function approximation assume the environment is stationary, which is insufficient to model problems with time-varying dynamics. For example, consider online advertising. The instantaneous reward is the payoff when viewers are redirected to ...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202...
We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ...
The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and...
However, all of the aforementioned empirical and theoretical works on RL with function approximation assume the environment is stationary, which is insufficient to model problems with time-varying dynamics. For example, consider online advertising. The instantaneous reward is the payoff when viewers are redirected to ...
B
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio...
75 of the 104 responses fulfilled the criterion of having respondents who were currently based in Singapore. This set was extracted for further analysis and will be henceforth referred to as ‘SG-75’. The details on the participant demographics of SG-75 are shown in Table 1. From SG-75, two more subsets were formed via ...
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures  1 and  2) which is statistically significant (r⁢(9)=−0.81𝑟90.81r(9)=-0.81italic_r ( 9 ) = - 0.81, p<.005𝑝.005p<.005italic_p < .005). Trust is built on transparency and truthfulness, and t...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
A
In order to evaluate the efficacy of each module, we implement several alternative methods: decentRL w/ infoNCE, decentRL w/ L2 denote the versions where we replace self-distillation with InfoNCE [61] and L2, respectively. decentRL w/ self-entity denotes the version involving self-entity.
The results in Table 10 demonstrate that all variants of decentRL achieves state-of-the-art performance on Hits@1, empirically proving the superiority of using neighbor context as the query vector for aggregating neighbor embeddings. The proposed decentRL outperforms both decentRL w/ infoNCE and decentRL w/ L2, provid...
Table 6 and Table 7 present the results for conventional entity prediction. decentRL demonstrates competitive or even superior performance when compared to state-of-the-art methods on the FB15K and WN18 benchmarks, showcasing its efficacy in entity prediction. While on the FB15K-237 and WN18RR datasets, the performanc...
Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg...
The performance of decentRL at the input layer notably lags behind that of other layers and AliNet. As discussed in previous sections, decentRL does not use the embedding of the central entity as input when generating its output embedding. However, this input embedding can still accumulate knowledge by participating i...
A
The agent interacts with the environment as follows. In each time step, the agent obtains the current state stsubscript𝑠𝑡s_{t}italic_s start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT, takes action atsubscript𝑎𝑡a_{t}italic_a start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT, interacts with the environment, receives the ...
The complete procedure of self-supervised exploration with VDM is summarized in Algorithm 1. In each episode, the agent interacts with the environment to collect the transition st,at,st+1subscript𝑠𝑡subscript𝑎𝑡subscript𝑠𝑡1s_{t},a_{t},s_{t+1}italic_s start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT , italic_a start_...
In what follows, we introduce TRPO and PPO algorithms [17, 18]. TRPO updates policy by iteratively maximizing the expected cumulative reward with an extra constraint on KL-divergence between the updated policy and the current policy, which is solved via conjugate gradient algorithm. PPO simplifies the optimization pro...
In this work, we consider self-supervised exploration without extrinsic reward. In such a case, the above trade-off narrows down to a pure exploration problem, aiming at efficiently accumulating information from the environment. Previous self-supervised exploration typically utilizes ‘curiosity’ based on prediction-err...
The goal of RL is to find a policy that maximizes the expected cumulative reward. Policy gradient methods solve RL problem by iteratively following a parameterized policy, sampling data from the parameterized policy, and updating the parameters of policy by policy gradient. The gradient of vanilla policy gradient [16] ...
D
However, even if P𝑃Pitalic_P is unisolvent, as is well known and shown in our previous work [51], the inversion of the matrix V𝑉Vitalic_V becomes numerically ill-conditioned when represented in the canonical basis qα⁢(x)=xαsubscript𝑞𝛼𝑥superscript𝑥𝛼q_{\alpha}(x)=x^{\alpha}italic_q start_POSTSUBSCRIPT italic_α end...
Therefore, alternative interpolation schemes with better numerical condition and lower computational complexity are desirable. While previous approaches to addressing this problem relied on tensorial interpolation schemes [33, 48, 59, 75], we here propose a different approach.
Though, approximations of lower accuracy might be reached faster then by polynomial interpolation, this makes these approaches incapable for answering Question 1 when higher-precision approximations are required. The multivariate polynomial interpolation method presented here reaches this goal.
where the Chebyshev extremes Chebn0superscriptsubscriptCheb𝑛0\mathrm{Cheb}_{n}^{0}roman_Cheb start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 0 end_POSTSUPERSCRIPT defined in Eq. (7.1) are Leja ordered [61]. Since these PAsubscript𝑃𝐴P_{A}italic_P start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT for...
This allowed us to extend the classic 1D Newton and Lagrange interpolation methods to multivariate schemes in a numerically stable and efficient way, resulting in a practically implemented algorithm with 𝒪⁢(|A|2)𝒪superscript𝐴2\mathcal{O}(|A|^{2})caligraphic_O ( | italic_A | start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIP...
A
Several variants of the Wasserstein distance have been developed in the literature to address these two issues. The smoothed Wasserstein distance is designed to reduce the computational cost [35] and improve the sample complexity [36] by using entropic regularizations.
While the Wasserstein distance has wide applications in machine learning, the finite-sample convergence rate of the Wasserstein distance between empirical distributions is slow in high-dimensional settings. We propose the projected Wasserstein distance to address this issue.
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized. The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by ...
Some projection-based variants of the Wasserstein distance are also discussed to address the computational complexity issue, including the sliced [37] and the max-sliced [38] Wasserstein distances. Sliced Wasserstein distance is based on the average Wasserstein distance between two projected distributions along infinit...
Several variants of the Wasserstein distance have been developed in the literature to address these two issues. The smoothed Wasserstein distance is designed to reduce the computational cost [35] and improve the sample complexity [36] by using entropic regularizations.
C
Specifically, we apply a DGM to learn the nuisance variables Z𝑍Zitalic_Z, conditioned on the output image of the first part, and use Z𝑍Zitalic_Z in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the details information captured in Z𝑍Z...
Figure 1: Image reconstruction using β𝛽\betaitalic_β-TCVAE (Figure 1b) and DS-VAE (Figure 1d). DS-VAE is able to take the blurry output of the underlying β𝛽\betaitalic_β-TCVAE model and learn to render a much better approximation to the target (Figure 1a). Figure 1c shows the effect of perturbing Z𝑍Zitalic_Z. DS-VA...
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise...
We introduce the DS-VAE framework for learning DR without compromising on the reconstruction quality. DS-VAE can be seamlessly applied to existing DGM-based DR learning methods, therefore, allowing them to learn a complete representation of the data.
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
C
To simulate the aforementioned structural computer theory, a device in the form of a USB connection. However, as the circuit grows in size, a number of USB-connected simulation devices are required, resulting in cost problems. We decided to verify that the structural computer theory presented so far is actually working...
Optical logic aggregates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the number shown in Fig. 5 can be applied to the ...
If a pair of lines of the same color is connected, 1, if broken, the sequence pair of states of the red line (α𝛼\alphaitalic_α) and blue line (β𝛽\betaitalic_β) determines the transmitted digital signal. Thus, signal cables require one transistor for switching action at the end. When introducing the concept of an inve...
We will look at the inputs through 18 test cases to see if the circuit is acceptable. Next, it verifies with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
Graph described in Fig.  4 is an implementation of an XOR gate combining NAND and OR, expressed in 33 vertices and 46 mains. Graphs are expressed in red and blue numbers in cases where there is no direction of the main line (the main line that can be passed in both directions) and the direction of the main line (the ma...
D
where x∈𝔽n𝑥superscript𝔽𝑛x\in\mathbb{F}^{n}italic_x ∈ blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT is the state and A∈𝔽n×n𝐴superscript𝔽𝑛𝑛A\in\mathbb{F}^{n\times n}italic_A ∈ blackboard_F start_POSTSUPERSCRIPT italic_n × italic_n end_POSTSUPERSCRIPT is the state transition map represented as ...
The first statement of Theorem 3 does not imply an equivalence between the cycle structure of the permutation polynomial and the cycle set of the linear dynamics (19), and the former is a subset of the latter. This is because the linear dynamics evolve over a larger set 𝔽Nsuperscript𝔽𝑁\mathbb{F}^{N}blackboard_F star...
When the dynamics is non-linear, the computation of the cycle set is a computationally hard problem. Apart from brute force computations, the work [26] gives an algorithmic procedure to estimate the cycle set of a non-linear dynamical system over finite fields by using the Koopman operator and constructing a reduced Ko...
Irrespective of whether the dynamics (2) being linear or not, the Koopman operator 𝐊𝐊\mathbf{K}bold_K is a linear operator over the function space ℱ⁢(𝔽n)ℱsuperscript𝔽𝑛\mathcal{F}(\mathbb{F}^{n})caligraphic_F ( blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ). This linearity of the Koopman operator...
Initially, the Koopman operator framework was used extensively for dynamics over reals (or complex) state space, and the function space is infinite-dimensional, which leads to resorting to finite-dimensional numerical approximations of the Koopman operator [28, 29] for practical computations. In our setting of dynamica...
B
Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expressi...
Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expressi...
In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of vi...
Stacked penalized logistic regression (StaPLR) (Van Loon \BOthers., \APACyear2020) is a method specifically developed to tackle the joint classification and view selection problem. Compared with a variant of the lasso for selecting groups of features (the so-called group lasso (M. Yuan \BBA Lin, \APACyear2007)), StaPLR...
In this study we only considered different meta-learners within the MVS framework. Of course, many other algorithms for training classifiers exist. Some of those classifiers may be expected to perform better in terms of classification performance than the classifiers presented here, but not many have the embedded view...
D
Another line of research in anomaly detection exploits the dependency among variables, assuming normal objects follow the dependency while anomalies do not. Dependency-based methods [4, 5] evaluate the anomalousness of objects through how much they deviate from normal dependency possessed by the majority of objects.
A common way of examining dependency deviations in the dependency-based approach is to check the difference between the observed value and the expected value of an object, where the expected value is estimated based on the underlying dependency between variables [7, 4, 5]. Thus, dependency-based approach naturally lead...
Dependency-based approach is fundamentally different from proximity-based approach because it considers the relationship among variables, while proximity-based approach examines the relationship among objects. We use an example to explain the difference between the two approaches.
The dependency-based approach works under the assumption that anomalies deviate from the normal dependency among variables, and the extend of anomalousness is evaluated based on this deviation. While the proximity-based approach that focuses on relationships among objects, the dependency-based approach emphasizes on t...
This example highlights the fundamental difference between proximity-based and dependency-based methods. Dependency-based methods focus on identifying anomalies based on underlying relationships between variables, whereas proximity-based methods rely on object similarity in terms of proximity. In cases like this, where...
B
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a m...
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
Comparison with Faury et al. [2020] Faury et al. [2020] use a bonus term for optimization in each round, and their algorithm performs non-trivial projections on the admissible log-odds. While we do reuse the Bernstein-style concentration inequality as proposed by them, their results do not seem to extend directly to th...
CB-MNL enforces optimism via an optimistic parameter search (e.g. in Abbasi-Yadkori et al. [2011]), which is in contrast to the use of an exploration bonus as seen in Faury et al. [2020], Filippi et al. [2010]. Optimistic parameter search provides a cleaner description of the learning strategy. In non-linear reward mo...
In this work, we proposed an optimistic algorithm for learning under the MNL contextual bandit framework. Using techniques from Faury et al. [2020], we developed an improved technical analysis to deal with the non-linear nature of the MNL reward function. As a result, the leading term in our regret bound does not suffe...
B
Video self-stitching (VSS). For both datasets, VSS shows its effectiveness in improving short actions whether used with or without xGPN. For THUMOS, because most actions are short, the overall performance also has a boost with VSS. For ActivityNet, VSS sacrifices long actions since it reduces the bias towards long act...
Cross-scale graph pyramid network (xGPN). From Table 3 and 4, we can see that xGPN obviously improves the performance of short actions as well as the overall performance. On the one hand, xGPN utilizes long-range correlations in multi-level features and benefits actions of various lengths. On the other hand, xGPN enabl...
Specifically, we propose a Video self-Stitching Graph Network (VSGN) for improving performance of short actions in the TAL problem. Our VSGN is a multi-level cross-scale framework that contains two major components: video self-stitching (VSS); cross-scale graph pyramid network (xGPN). In VSS, we focus on a short period...
In this paper, to tackle the challenging problem of large action scale variation in the temporal action localization (TAL) problem, we target short actions and propose a multi-level cross-scale solution called video self-stitching graph network (VSGN). It contains a video self-stitching (VSS) component that generates ...
Video self-stitching (VSS). For both datasets, VSS shows its effectiveness in improving short actions whether used with or without xGPN. For THUMOS, because most actions are short, the overall performance also has a boost with VSS. For ActivityNet, VSS sacrifices long actions since it reduces the bias towards long act...
A
The analytical requirements (R1–R5) originate from the analysis of the related work in Section 2, including the three analytical needs from Park et al. [PNKC21], the three key decisions from Wang et al. [WMJ∗19], and the five sub-steps from Li et al. [LCW∗18]. Also, our own experiences played a vital role, for instance...
Another open issue is the avoidance of hyperparameter tuning per se, as noted by E3. The goal of the tool is not to explore or bring insights about the individual sets of hyperparameters of the models or algorithms, but instead we focus on the search for new powerful models and implicitly store their hyperparameters. T...
The analytical requirements (R1–R5) originate from the analysis of the related work in Section 2, including the three analytical needs from Park et al. [PNKC21], the three key decisions from Wang et al. [WMJ∗19], and the five sub-steps from Li et al. [LCW∗18]. Also, our own experiences played a vital role, for instance...
R1: Identify effective hyperparameters. Interviews performed by Park et al. [PNKC21] showed that users usually sort the models based on a validation metric and then check the hyperparameters of the most performant models (commonly less than 10) for the generated outcomes.
R3: Send the remaining models for improvement and handle crossover and mutation procedures. Configuring hyperparameter optimization methods was found unpredictable and disturbing by the interviewees of the investigation by Park et al. [PNKC21]. Participants from the interview by Wang et al. [WMJ∗19] stated that they re...
C
In terms of the convergence rate, these algorithms are only effective in cases with high transition capabilities. Additionally, the performance of these algorithms is highly sensitive to hyperparameters and requires careful selection for optimum results in each experiment.
Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen...
Graph temporal logic (GTL) is introduced in [16] to impose high-level task specifications as a constraint to the Markov chain synthesis. Markov chain synthesis is formulated as mixed-integer nonlinear programming (MINLP) feasibility problem and the problem is solved using a coordinate descent algorithm. In addition, an...
For the fastest mixing Markov chain synthesis, the problem is formulated as a convex optimization problem in [5], assuming that the Markov chain is symmetric. This paper also presents an extension to the method that involves synthesizing the fastest mixing reversible Markov chain with a given desired distribution. Furt...
It is worth noting that the bins comprising the operational region, as defined in Definition 6, determine the vertices of the uniform graph in Definition 1. Consequently, these vertices correspond to the states of the Markov chain defined in Definition 3. Similarly, the transition constraints of the swarm, defined by a...
B
Fig. LABEL:fig:teaser shows that our method finds the correct correspondence among the partial shape collection, while being cycle-consistent. Partial functional maps are rectangular and low-rank [58], and this experiments shows that our method can also handle this more general case. More details can be found in the su...
While (near)-isometric shape matching has been studied extensively for the case of matching a pair of shapes, the isometric multi-shape matching problem, where an entire collection of (near-isometric) shapes is to be matched, is less explored. Important applications of isometric multi-shape matching include learning lo...
In this work we fill this gap by introducing a generalisation of state-of-the-art isometric two-shape matching approaches towards isometric multi-shape matching. We demonstrate that explicitly exploiting the isometry property leads to a natural and elegant formulation that achieves improved results compared to previous...
There are various works that particularly target the matching of multiple shapes. In [30, 32], semidefinite programming relaxations are proposed for the multi-shape matching problem. However, due to the employed lifting strategy, which drastically increases the number of variables, these methods are not scalable to lar...
It was shown that deep learning is an extremely powerful approach for extracting shape correspondences [40, 27, 59, 26]. However, the focus of this work is on establishing a fundamental optimisation problem formulation for cycle-consistent isometric multi-shape matching. As such, this work does not focus on learning me...
D
On the side of directed path graphs, at the state of art, our algorithm is the only one that does not use the results in [4], in which it is given a linear time algorithm able to establish whether a path graph is a directed path graph too (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ...
We present the algorithm RecognizePG. Note that it is an implementation of Theorem 6 with very small changes. W.l.o.g., we assume that G𝐺Gitalic_G is connected, indeed a graph G𝐺Gitalic_G is a path graph if and only if all its connected components are path graphs. Moreover, we can obtain the clique path tree of G𝐺Gi...
Directed path graphs are characterized by Gavril [9], in the same article he also gives the first recognition algorithms that has O⁢(n4)𝑂superscript𝑛4O(n^{4})italic_O ( italic_n start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT ) time complexity. In the above cited article, Monma and Wei [18] give the second characterizati...
The first three steps of algorithm RecognizePG are implied by the first part of Theorem 6. By following Theorem 6, we have to check that there are no full antipodal triangle in UpperCsubscriptUpper𝐶\text{Upper}_{C}Upper start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT (this is made in Step 4), and we have to find f:ΓC→[...
The recognition algorithm RecognizePG for path graph is mainly built on path graphs’ characterization in [1]. This characterization decomposes the input graph G𝐺Gitalic_G by clique separators as in [18], then at the recursive step one has to find a proper vertex coloring of an antipodality graph satisfying some parti...
D
In experiments 1(a) and 1(b), we study how the fraction of pure nodes affects the behaviors of these mixed membership community detection methods under MMSB and DCMM, respectively. We fix (x,ρ)=(0.4,0.1)𝑥𝜌0.40.1(x,\rho)=(0.4,0.1)( italic_x , italic_ρ ) = ( 0.4 , 0.1 ) and let n0subscript𝑛0n_{0}italic_n start_POSTSUB...
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. the proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both two approaches perform better than OCCAM and GeoNMF under the MMSB setting....
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both two methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha...
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as ρ𝜌\rhoitalic_ρ increases they all perform poorer. Under the DCMM model, the mixed Humming ...
C
Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to certain statistical...
See, e.g., Welling and Teh (2011); Chen et al. (2014); Ma et al. (2015); Chen et al. (2015); Dubey et al. (2016); Vollmer et al. (2016); Chen et al. (2016); Dalalyan (2017); Chen et al. (2017); Raginsky et al. (2017); Brosse et al. (2018); Xu et al. (2018); Cheng and Bartlett (2018); Chatterji et al. (2018); Wibisono (...
See, e.g., Udriste (1994); Ferreira and Oliveira (2002); Absil et al. (2009); Ring and Wirth (2012); Bonnabel (2013); Zhang and Sra (2016); Zhang et al. (2016); Liu et al. (2017); Agarwal et al. (2018); Zhang et al. (2018); Tripuraneni et al. (2018); Boumal et al. (2018); Bécigneul and Ganea (2018); Zhang and Sra (2018...
See, e.g., Cheng et al. (2017); Cheng and Bartlett (2018); Xu et al. (2018); Durmus et al. (2019) and the references therein for the analysis of the Langevin MCMC algorithm. Besides, it is shown that (discrete-time) Langevin MCMC can be viewed as (a discretization of) the Wasserstein gradient flow of KL⁢[p⁢(z),p⁢(z|x))...
variational inference (Gershman and Blei, 2012; Kingma and Welling, 2019), policy optimization (Sutton et al., 2000; Schulman et al., 2015; Haarnoja et al., 2018), and GAN (Goodfellow et al., 2014; Arjovsky et al., 2017), and has achieved tremendous empirical successes. However,
B
Except MaxPressure analysed above, GeneraLight achieves the best in Hangzhou with the mixedl configuration, while performs poorly in other scenarios. The reason is that GeneraLight trains several models on diverse generated traffic flows, and select the model in testing by matching the flow. Hence, it limits the genera...
We can obtain the following findings: 1) Among these 5 models, the performance of Baseline is the worst. The reason is that it is hard to learn the effective decentralized policy independently in the multi-agent traffic signal control task, where one agent’s reward and transition are affected by its neighbors. 2) Compa...
3) MetaVIM outperforms Individual RL, MetaLight and PrssLight with 827, 423 and 411, respectively. The main reason is that they learn the traffic signal’s policy only using its own observation and ignore the influence of the neighbors, while MetaVIM considers the neighbors as the unobserved part of the current signal ...
To learn effective decentralized policies, there are two main challenges. Firstly, it is impractical to learn an individual policy for each intersection in a city or a district containing thousands of intersections. Parameter sharing may help. However, each intersection has a different traffic pattern, and a simple sh...
The most straightforward RL baseline considers each intersection independently and models the task as a single agent RL problem [12]. However, the observation, received reward and dynamics of each traffic signal are closely related to its neighbors, and the coordination between signals should be modeled. Hence, optimiz...
B
J⁢(𝐱)≡A≡Jrank-r⁢(𝐱)𝐽𝐱𝐴subscript𝐽rank-r𝐱J(\mathbf{x})\,\equiv\,A\,\equiv\,J_{\mbox{\scriptsize rank-$r$}}(\mathbf{x})italic_J ( bold_x ) ≡ italic_A ≡ italic_J start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT ( bold_x ). From 𝐱0∈ℂnsubscript𝐱0superscriptℂ𝑛\mathbf{x}_{0}\,\in\,\mathbbm{C}^{n}bold_x start_POST...
A†⁢𝐛+N⁢𝐳0=A†⁢𝐛+N⁢N𝖧⁢𝐱0=A†⁢𝐛+(I−A†⁢A)⁢𝐱0superscript𝐴†𝐛𝑁subscript𝐳0superscript𝐴†𝐛𝑁superscript𝑁𝖧subscript𝐱0superscript𝐴†𝐛𝐼superscript𝐴†𝐴subscript𝐱0A^{\dagger}\,\mathbf{b}+N\,\mathbf{z}_{0}~{}~{}=~{}~{}A^{\dagger}\,\mathbf{b}+% N\,N^{{\mbox{\tiny$\mathsf{H}$}}}\,\mathbf{x}_{0}~{}~{}=~{}~{}A^{\dagger}...
‖𝐱1−𝐱∗‖normsubscript𝐱1subscript𝐱\displaystyle\|\mathbf{x}_{1}-\mathbf{x}_{*}\|∥ bold_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - bold_x start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ∥ ≤‖𝐱1−𝐱0‖2+‖𝐱0−𝐱∗‖2absentsubscriptnormsubscript𝐱1subscript𝐱02subscriptnormsubscript𝐱0subscript𝐱2\displaystyle~{}~{}\leq~{}~{}\|\mat...
In that case the Jacobian of 𝐟𝐟\mathbf{f}bold_f at any particular 𝐱0∈Ωsubscript𝐱0Ω\mathbf{x}_{0}\,\in\,\Omegabold_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ∈ roman_Ω is denoted by 𝐟𝐱⁢(𝐱0)subscript𝐟𝐱subscript𝐱0\mathbf{f}_{\mathbf{x}}(\mathbf{x}_{0})bold_f start_POSTSUBSCRIPT bold_x end_POSTSUBSCRIPT ( bold_x s...
𝐱1=𝐱0−A†⁢(A⁢𝐱0−𝐛)=A†⁢𝐛+(I−A†⁢A)⁢𝐱0subscript𝐱1subscript𝐱0superscript𝐴†𝐴subscript𝐱0𝐛superscript𝐴†𝐛𝐼superscript𝐴†𝐴subscript𝐱0\mathbf{x}_{1}~{}~{}=~{}~{}\mathbf{x}_{0}-A^{\dagger}\,(A\,\mathbf{x}_{0}-% \mathbf{b})~{}~{}=~{}~{}A^{\dagger}\,\mathbf{b}+(I-A^{\dagger}\,A)\,\mathbf{x}%
D
The Weibull distribution is specified by two parameters: the shape parameter s⁢h𝑠ℎshitalic_s italic_h and the scale parameter s⁢c𝑠𝑐scitalic_s italic_c (with s⁢h,s⁢c>0𝑠ℎ𝑠𝑐0sh,sc>0italic_s italic_h , italic_s italic_c > 0). The shape parameter defines the spread of item sizes: lower values indicate greater skew tow...
For the remaining benchmarks, namely “Randomly_Generated”, “Schoenfield_Hard28”, and “Wäscher”, the relative performance of the algorithms is similar to that for the GI benchmark, with the difference that the divergence of the algorithms becomes observable at different values of the prediction error.
publicly available benchmarks, such as the BPPLIB benchmarks (?), but also on distributions studied specifically in the context of offline bin packing, such as the Weibull distribution (?). The results show that our algorithms outperform the known efficient algorithms without any predictions. We also evaluate a heurist...
Figure 6 depicts the number of bins opened by Adaptive(w𝑤witalic_w) as a function of w𝑤witalic_w for different benchmarks. Here, we report the average cost of the algorithms over 20 randomly generated sequences. We observe that for the Weibull and “GI” benchmarks, there is a relatively wide range for w𝑤witalic_w tha...
The second type of benchmarks is generated from the BPPLIB library (?), a collection of bin packing benchmarks used in various works on (offline) algorithms for bin packing. In particular, we report results on the benchmarks “GI” (?), “Schwerin” (?), “Randomly_Generated” (?), “Schoenfield_Hard28” (?) and “Wäscher” (?)....
D
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperClou...
In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Thro...
The results are presented in Table 1. LoCondA-HF obtains comparable results to the reference methods dedicated for the point cloud generation. It can be observed that values of evaluated measures for HyperFlow(P) and LoCondA-HF (uses HyperFlow(P) as a base model in the first part of the training) are on the same level...
In this experiment, we set N=105𝑁superscript105N=10^{5}italic_N = 10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT. Using more rays had a negligible effect on the output value of W⁢T𝑊𝑇WTitalic_W italic_T but significantly slowed the computation. We compared AtlasNet with LoCondA applied to HyperCloud (HC) and HyperFl...
In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar...
B
}{2}}over^ start_ARG bold_t end_ARG start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT = divide start_ARG 1 end_ARG start_ARG italic_N end_ARG ∑ start_POSTSUBSCRIPT italic_k = 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N - 1 end_POSTSUPERSCRIPT bold_t start_POSTSUPERSCRIPT italic_k + divide start_ARG 1 end_ARG st...
The main idea is to use reformulation (54) and apply mirror prox algorithm [45] for its solution. This requires careful analysis in two aspects. First, the Lagrange multipliers 𝐳,𝐬𝐳𝐬{\bf z},{\bf s}bold_z , bold_s are not constrained, while the convergence rate result for the classical Mirror-Prox algorithm [45] is ...
As it was noted above, the standard analysis of Mirror-Prox requires the feasible sets to be compact. Although we run Mirror-Prox algorithm on problem (54) with unconstrained variables 𝐬𝐬{\bf s}bold_s and 𝐳𝐳{\bf z}bold_z, we still can bound these variables according to Theorem 2.4.
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ...
To prove Theorem 3.5 we first show that the iterates of Algorithm 1 naturally correspond to the iterates of a general Mirror-Prox algorithm applied to problem (54). Then we extend the standard analysis of the general Mirror-Prox algorithm to account for unbounded feasible sets.
B
The remainder of this section is dedicated to express the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describe an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
In this section we present some experimental results to reinforce Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric...
The study of cycles of graphs has attracted attention for many years. To mention just three well known results consider Veblen’s theorem [2] that characterizes graphs whose edges can be written as a disjoint union of cycles, Maclane’s planarity criterion [3] which states that planar graphs are the only to admit a 2-ba...
The set of cycles of a graph has a vector space structure over ℤ2subscriptℤ2\mathbb{Z}_{2}blackboard_Z start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, in the case of undirected graphs, and over ℚℚ\mathbb{Q}blackboard_Q, in the case of directed graphs [5]. A basis of such a vector space is denoted cycle basis and its dimensio...
C
(m+1)𝑚1(m+1)( italic_m + 1 )-tuples of ℱℱ\mathcal{F}caligraphic_F with nonempty intersection. In other words, πm+1⁢(ℱ)subscript𝜋𝑚1ℱ\pi_{m+1}(\mathcal{F})italic_π start_POSTSUBSCRIPT italic_m + 1 end_POSTSUBSCRIPT ( caligraphic_F ) is at least δ′=defρ/(m⁢tm+1)superscriptdefsuperscript𝛿′𝜌binomial𝑚𝑡𝑚1\delta^{\prim...
If we use Lemma 4.8 in place of Lemma 4.6 in the proof of Theorem 2.1, the hypothesis on the m𝑚mitalic_m-colored family ℱℱ\mathcal{F}caligraphic_F can be weakened. This “improved” Theorem 2.1 can in turn be applied in the proof of Theorem 1.2, yielding the following:
Lemma 4.6 assumes that the m𝑚mitalic_m-colored family ℱℱ\mathcal{F}caligraphic_F has the property that for 0≤j<dimK0𝑗dimension𝐾0\leq j<\dim K0 ≤ italic_j < roman_dim italic_K and for every colorful subfamily 𝒢𝒢\mathcal{G}caligraphic_G of ℱℱ\mathcal{F}caligraphic_F, the j𝑗jitalic_jth reduced Betti number β~j⁢(⋂F∈�...
The rest of Section 4.1 is devoted to the proof of Lemma 4.2. The proof first handles the case k=m𝑘𝑚k=mitalic_k = italic_m, and then uses it to prove the case k<m𝑘𝑚k<mitalic_k < italic_m. Note that for k>m𝑘𝑚k>mitalic_k > italic_m the lemma is trivial, as the chain group contains only a trivial chain and we can ta...
a positive fraction of the m𝑚mitalic_m-tuples to have a nonempty intersection, where for dimK>1dimension𝐾1\dim K>1roman_dim italic_K > 1, m𝑚mitalic_m is some hypergraph Ramsey number depending on b𝑏bitalic_b and K𝐾Kitalic_K. So in order to prove Corollary 1.3 it suffices to show that if a positive fraction of the ...
A
All visual encodings designed for the panels of FeatureEnVi are summarized in Table II. On the right-hand side, we can observe the optimal states for the available statistical measures. However, in reality, many of the statistical measures will be contradictory to each other, and human decisions are essential on such ...
The radial tree had three collapsed data subspaces (a.2–a.4) except for All and Worst subspaces. We performed this action because there are too many features to be explored at once, and FeatureEnVi provides this capability to alter the layouts in order to scale for high-dimensional data sets. Basically, the core statis...
Similar to the workflow described above, we start by choosing the appropriate thresholds for slicing the data space. As we want to concentrate more on the instances that are close to being predicted correctly, we move the left gray line from 25% to 35% (see Fig. 5(a.1 and a.2)). This makes the Bad slice much shorter. S...
Figure 5: The process of features’ exploration in a vehicle recognition scenario. (a.1) to (a.4) depict the change of the thresholds for the different data slices to intensify the responses for borderline instances. In (b), the user excluded unimportant features and then validates the remaining features through the rad...
To the best of our knowledge, little empirical evidence exists for choosing a particular measure over others. In general, target correlation and mutual information (both related to the influence between features and the dependent variable) may be good candidates for identifying important features [71]. After these firs...
D
As expected, adding the global tracking error constraint increases the traversal time, but maintains the maximal deviation within the bounds (see the table in 5). This tracking error constraint results in a dramatic 5-fold decrease of the maximum deviation ‖e^c‖∞subscriptnormsubscript^𝑒𝑐\|\hat{e}_{c}\|_{\infty}∥ ove...
For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using Latin hypercube design of experiments. The BO progress is shown in Figure 5, right pannel, for the optimization with constraints on the jerk and on the tracking error. Af...
MPC accounts for the real behavior of the machine and the axis drive dynamics can be excited to compensate for the contour error to a big extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following variou...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi...
To reduce the number of times this experimental “oracle” is invoked, we employ Bayesian optimization (BO) [16, 17], which is an effective method for controller tuning [13, 18, 19] and optimization of industrial processes [20]. The constrained Bayesian optimization samples and learns both the objective function and the ...
A
Results. We find that implicit methods either improve or are comparable with StdM, but most explicit methods fail when asked to generalize to multiple bias variables and a large number of groups, even when the bias variables are explicitly provided. As shown in Fig. 4, all explicit methods are below StdM on Biased MNI...
Results. We find that implicit methods either improve or are comparable with StdM, but most explicit methods fail when asked to generalize to multiple bias variables and a large number of groups, even when the bias variables are explicitly provided. As shown in Fig. 4, all explicit methods are below StdM on Biased MNI...
Results. In Fig. 3(a), we present the MMD boxplots for all bias variables, comparing cases when the label of the variable is either explicitly specified (explicit bias), or kept hidden (implicit bias) from the methods. Barring digit position, we observe that the MMD values are higher when the variables are not explicit...
where, |ai|subscript𝑎𝑖|a_{i}|| italic_a start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT | is the number of instances for answer aisubscript𝑎𝑖a_{i}italic_a start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT in the given group, μ⁢(a)𝜇𝑎\mu(a)italic_μ ( italic_a ) is the mean number of answers in the group and β𝛽\betait...
Results for GQA-OOD are similar, with explicit methods failing to scale up to a large number of groups, while implicit methods showing some improvements over StdM. As shown in Table 2, when the number of groups is small, i.e., when using a head/tail binary indicator as the explicit bias, explicit methods remain compara...
D
Krafka et al. replace the fully-connected layer with an SVM and fine-tune the SVM layer to predict the gaze location [42]. Zhang et al. split the CNN into three parts: the encoder, the feature extractor, and the decoder [133]. They fine-tune the encoder and decoder in each target domain.
Xiong et al. introduce a random effect parameter to learn the person-specific information in gaze estimation [114]. They utilize the variational expectation-maximization algorithm [115] and stochastic gradient descent [116] to estimate the parameters of the random effect network during training. They use another networ...
Salvalaio et al. implicitly collect calibration data when users are using computers. They collect data when the user is clicking a mouse, this is based on the assumption that users are gazing at the position of the cursor when clicking the mouse [146]. They use online learning to fine-tune their model with the calibrat...
Inter-subject bias. Chen et al. observe the inter-subject bias in most datasets [131, 132]. They make the assumption that there exists a subject-dependent bias that cannot be estimated from images. Thus, they propose a gaze decomposition method. They decompose the gaze into the subject-dependent bias and the subject-in...
They learn the person-specific feature during fine-tuning. Linden et al. introduce user embedding for recording personal information. They obtain user embedding of the unseen subjects by fine-tuning using calibration samples [136]. Chen et al.  [131, 132] observe the different gaze distributions of subjects. They use t...
D
Experimental results are carried out on Real-world Masked Face Recognition Dataset (RMFRD) and Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (...
has been successfully employed for image classification tasks krizhevsky2017imagenet . This deep model is pre-trained on a few millions of images from the ImageNet database through eight learned layers, five convolutional layers and three fully-connected layers. The last fully-connected layer allows to classify one tho...
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieved a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner. It consists of applying a TL technique to fine-tune the ...
Despite the recent breakthroughs of deep learning architectures in pattern recognition tasks, they need to estimate millions of parameters in the fully connected layers that require powerful hardware with high processing capacity and memory. To address this problem, we present in this paper an efficient quantization b...
simonyan2014very is trained on the ImageNet dataset which has over 14 million images and 1000 classes. Its name VGG-16 comes from the fact that it has 16 layers. It contains different layers including convolutional layers, Max Pooling layers, Activation layers, and Fully Connected (fc) layers. There are 13 convolution...
C
)}_{i,j,j>0\vdash(i,j-1)<(i,j)\text{ checked}}\}\}under⏟ start_ARG roman_tail italic_t end_ARG start_POSTSUBSCRIPT italic_j > 0 assumed end_POSTSUBSCRIPT ⇒ italic_p start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ← under⏟ start_ARG italic_o start_POSTSUPERSCRIPT roman_R end_POSTSUPERSCRIPT . roman_rest italic_p start_POSTS...
If the processor issues a “get,” then the head of the input stream is consumed, recursing on its tail. Otherwise, the output stream is constructed recursively, first issuing the element received from the processor. It is clear that the program terminates by lexicographic induction on (i,j)𝑖𝑗(i,j)( italic_i , italic_j...
The even-indexed substream retains the head of the input, but its tail is the odd-indexed substream of the input’s tail. The odd-indexed substream, on the other hand, is simply the even-indexed substream of the input’s tail. Operationally, the heads and tails of both substreams are computed on demand similar to a lazy...
For space, we omit the process terms. Of importance is the instance of the call rule for the recursive call to eat: the check i−1<i𝑖1𝑖i-1<iitalic_i - 1 < italic_i verifies that the process terminates and the loop [(i−1)/i]⁢[z/x]⁢Ddelimited-[]𝑖1𝑖delimited-[]𝑧𝑥𝐷[(i-1)/i][z/x]D[ ( italic_i - 1 ) / italic_i ] [ ita...
Such functions may consume finitely many elements of type A𝐴Aitalic_A from the input stream (the inductive part spA,Bμ⁡[i]subscriptsuperscriptsp𝜇𝐴𝐵𝑖\operatorname{sp}^{\mu}_{A,B}[i]roman_sp start_POSTSUPERSCRIPT italic_μ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_A , italic_B end_POSTSUBSCRIPT [ italic_i ]) bef...
A
In this paper, facing these problems and challenges, we set out to solve them. First, to achieve data protection and access control, we adopt the lifted-ElGamal based PRE scheme, as discussed in [16, 17, 18, 19, 20], whose most prominent characteristic is that it satisfies the property of additive homomorphism. Then t...
This paper solves the three problems faced by cloud media sharing and proposes two schemes FairCMS-I and FairCMS-II. FairCMS-I gives a method to transfer the management of LUTs to the cloud, enabling the calculation of each user’s D-LUT in the ciphertext domain and its subsequent distribution. However, utilizing the s...
Aiming at the situation that the existing techniques can-not fully meet the security/privacy requirements of cloud media sharing, we propose two novel schemes, namely FairCMS-I and FairCMS-II, to solve Problems 1, 2, and 3 with different privacy/efficiency trade-offs, which are also qualified in terms of owner-side ef...
In this section, we bring forward two cloud media sharing schemes, namely FairCMS-I and FairCMS-II. FairCMS-I essentially delegates the re-encryption management of LUTs to the cloud, thus significantly reducing the overhead of the owner side. Nevertheless, FairCMS-I cannot achieve IND-CPA security for the media conten...
According to the above idea, we propose two cloud media sharing schemes in this paper, i.e., FairCMS-I and FairCMS-II, which solve the above three problems with different privacy/efficiency trade-offs. Among them, FairCMS-I consumes fewer cloud resources, while FairCMS-II achieves better protection for the media conten...
D
The selected feature interactions of order-3 and order-4 are mostly not overlapped in the correctly predicted instance (a). In instance (a), our model selects relevant feature fields (Gender, Age, ReleaseTime, WatchTime) for Genre in order-3, while selects the other two feature fields (Occupation, Gender) in order-4. H...
Since the features along with selected beneficial feature interactions are treated as a graph, it can provide human readable interpretations on the prediction. Here we visualize heat maps of estimated edge weights of two cherry-pick instances on MovieLens-1M dataset in Fig. 4. We show the measured edge weights of each ...
We find that in the first layer, which models the second order feature interactions, these feature fields are hard to distinguish when selecting the beneficial interactions. This suggests that almost all the second-order feature interactions are useful, which also why we sample all of them in the first layer, i.e., m1=...
The selected feature interactions of order-3 and order-4 are mostly not overlapped in the correctly predicted instance (a). In instance (a), our model selects relevant feature fields (Gender, Age, ReleaseTime, WatchTime) for Genre in order-3, while selects the other two feature fields (Occupation, Gender) in order-4. H...
This proves that our model can indeed select meaningful feature combination and model feature interactions of increasing orders with multiple layers in most cases, rather than select the redundant feature combinations of same feature fields. We can also find some meaningful feature combinations in common cases. For exa...
D
which is reminiscient of the 𝒪⁢(Lf𝒳⁢D2/t)𝒪superscriptsubscript𝐿𝑓𝒳superscript𝐷2𝑡\mathcal{O}(L_{f}^{\mathcal{X}}D^{2}/t)caligraphic_O ( italic_L start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT start_POSTSUPERSCRIPT caligraphic_X end_POSTSUPERSCRIPT italic_D start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT / italic_t )...
We can make use of the proof of convergence in primal gap to prove linear convergence in Frank-Wolfe gap. In order to do so, we recall a quantity formally defined in Kerdreux et al. [2019] but already implicitly used earlier in Lacoste-Julien & Jaggi [2015] as:
Moreover, as the upper bound on the Bregman divergence holds for ν=2𝜈2\nu=2italic_ν = 2 regardless of the value of d2⁢(𝐱,𝐲)subscript𝑑2𝐱𝐲d_{2}(\mathbf{x},\mathbf{y})italic_d start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( bold_x , bold_y ), we can modify the proof of Theorem 2.4 to obtain a convergence rate of the form:...
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of 𝒪⁢(1/t)𝒪1𝑡\mathcal{O}(1/t)caligraphic_O ( 1 / italic_t ). The idea of the proof is...
For AFW, we can see that the algorithm either chooses to perform what is known as a Frank-Wolfe step in Line 7 of Algorithm 5 if the Frank-Wolfe gap g⁢(𝐱)𝑔𝐱g(\mathbf{x})italic_g ( bold_x ) is greater than the away gap ⟨∇f⁢(𝐱t),𝐚t−𝐱t⟩∇𝑓subscript𝐱𝑡subscript𝐚𝑡subscript𝐱𝑡\left\langle\nabla f(\mathbf{x}_{t}),\m...
C
Here, we make the observation that by combining the prefixes of P𝑃Pitalic_P and P′superscript𝑃′P^{\prime}italic_P start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT until the edge ajsubscript𝑎𝑗a_{j}italic_a start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT, we obtain an augmenting path. On a high level, our approach is to sh...
If the alternating path Pγsubscript𝑃𝛾P_{\gamma}italic_P start_POSTSUBSCRIPT italic_γ end_POSTSUBSCRIPT starting from γ𝛾\gammaitalic_γ was of length i′>isuperscript𝑖′𝑖i^{\prime}>iitalic_i start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT > italic_i, then it could be that γ𝛾\gammaitalic_γ did not find β𝛽\betaitalic_β si...
For the rest of the graph, [EKMS12] show that it is enough to store the length of the shortest alternating path that has reached each matched edge. This length is called label. In the first challenge, we considered the possibility that a vertex γ𝛾\gammaitalic_γ “blocks” the DFS exploration of α𝛼\alphaitalic_α and dis...
Therefore, we have an augmenting path from γ𝛾\gammaitalic_γ to α𝛼\alphaitalic_α, which will be detected in Algorithm 3 of Algorithm 3. This implies that the augmenting path α−β𝛼𝛽\alpha-\betaitalic_α - italic_β will be removed from the graph in Pass-Bundle τ𝜏\tauitalic_τ.
Nodes α𝛼\alphaitalic_α, β𝛽\betaitalic_β, and γ𝛾\gammaitalic_γ are free. The black single-segments are unmatched and black (full) double-segments are matched edges. The path P′superscript𝑃′P^{\prime}italic_P start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT corresponding to a DFS branch of γ𝛾\gammaitalic_γ is shown by th...
B
Subsequently, decentralized optimization methods for undirected networks, or more generally, with doubly stochastic mixing matrices, have been extensively studied in the literature; see, e.g., [11, 12, 13, 14, 15, 16]. Among these works, EXTRA [14] was the first method that achieves linear convergence for strongly conv...
For directed networks, however, constructing a doubly stochastic mixing matrix usually requires a weight-balancing step, which could be costly when carried out in a distributed manner. Therefore, the push-sum technique [17] was utilized to overcome this issue.
Specifically, the push-sum based subgradient method in [18] can be implemented over time-varying directed graphs, and linear convergence rates were achieved in [19, 20] for minimizing strongly convex and smooth objective functions by applying the push-sum technique to EXTRA.
The Push-Pull/𝒜⁢ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B method introduced in [24, 25] modified the gradient tracking methods to deal with directed network topologies without the push-sum technique. The algorithm uses a row stochastic matrix to mix the local decision variables and a column stochastic matr...
Subsequently, decentralized optimization methods for undirected networks, or more generally, with doubly stochastic mixing matrices, have been extensively studied in the literature; see, e.g., [11, 12, 13, 14, 15, 16]. Among these works, EXTRA [14] was the first method that achieves linear convergence for strongly conv...
A
SPPs cover a wider range of problems than minimization ones and has numerous important practical applications [6]. These include well-known and famous examples from game theory or optimal control [7]. In recent years, saddle point problems have become popular in several other respects.
Furthermore, there are a lot of personalized federated learning problems utilize saddle point formulation. In particular, Personalized Search Generative Adversarial Networks (PSGANs) [22]. As mentioned in examples above, saddle point problems often arise as an auxiliary tool for the minimization problem. It turns out ...
We adapt the proposed algorithm for training neural networks. We compare our algorithms: type of sliding (Algorithm 1) and type of local method (Algorithm 3). To the best of our knowledge, this is the first work that compares these approaches in the scope of neural networks, as previous studies were limited to simpler...
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms and derives the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile...
One can note a branch of recent work devoted to solving non-smooth problems by reformulating them as saddle point problems [8, 9], as well as applying such approaches to image processing [10, 11]. Recently, significant attention was devoted to saddle problems in machine learning. For example, Generative Adversarial Net...
D
Sheriff (Farina et al., 2019b) is a two-player, general-sum negotiation game. It consists of bargaining rounds between a smuggler, who is motivated to import contraband without getting caught, and a sheriff, who is motivated to find contraband or accept bribes. Figure 2(c) shows that JPSRO is capable of finding the opt...
There has been significant recent interest in solving the equilibrium selection problem (Ortiz et al., 2007; Omidshafiei et al., 2019). This paper provides a novel approach which is computationally tractable, supports general-support solutions, and has favourable scaling properties when the solution is full-support.
Recent success in tackling two-player, constant-sum games (Silver et al., 2016; Vinyals et al., 2019) has outpaced progress in n-player, general-sum games despite a lot of interest (Jaderberg et al., 2019; Berner et al., 2019; Brown & Sandholm, 2019; Lockhart et al., 2020; Gray et al., 2020; Anthony et al., 2020). One ...
This highlights the main drawback of MW(C)CE which does not select for unique solutions (for example, in constant-sum games all solutions have maximum welfare). One selection criterion for NEs is maximum entropy Nash equilibrium (MENE) (Balduzzi et al., 2018), however outside of the two-player constant-sum setting, th...
There are two important solution concepts in the space of CEs. The first is Maximum Welfare Correlated Equilibrium (MWCE) which is defined as the CE that maximises the sum of all player’s payoffs. An MWCE can be obtained by solving a linear program, however the MWCE may not be unique and therefore does not fully solve ...
A
Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on intuition that average-case privacy can be viewed from a Bay...
Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on intuition that average-case privacy can be viewed from a Bay...
One cluster of works that steps away from this worst-case perspective focuses on giving privacy guarantees that are tailored to the dataset at hand (Nissim et al., 2007; Ghosh and Roth, 2011; Ebadi et al., 2015; Wang, 2019). In  Feldman and Zrnic (2021) in particular, the authors elegantly manage to track the individua...
Differential privacy essentially provides the optimal asymptotic generalization guarantees given adaptive queries (Hardt and Ullman, 2014; Steinke and Ullman, 2015). However, its optimality is for worst-case adaptive queries, and the guarantees that it offers only beat the naive intervention—of splitting a dataset so ...
An alternative route for avoiding the dependence on worst case queries and datasets was achieved using expectation based stability notions such as mutual information and KL stability Russo and Zou (2016); Bassily et al. (2021); Steinke and Zakynthinou (2020). Using these methods Feldman and Steinke (2018) presented a ...
D
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter k𝑘kitali...
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter k𝑘kitali...
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized...
We therefore propose the following novel research direction: to investigate how preprocessing algorithms can decrease the parameter value (and hence search space) of FPT algorithms, in a theoretically sound way. It is nontrivial to phrase meaningful formal questions in this direction. To illustrate this difficulty, not...
D
The existing deep image blending works [172, 198, 194] adopt the following evaluation metrics: 1) calculating realism score using the pretrained model [209] which reflects the realism of a composite image; 2) conducting user study by asking engaged users to select the most realistic images; 3) Zhang et al. [194] deem t...
Figure 14: In the first row, we show two examples from Shadow-AR dataset [92], which is constructed based on rendered images. In the second row, we show two examples from DESOBA dataset [52], which is constructed based on real images. From left to right in each example, we show the composite image without foreground sh...
We evaluate different image blending methods conditioned on the matting results. First, we create composite images using the alpha matts predicted by the state-of-the-art trimap-based image matting methods [23, 98, 95]. Then, we hope that image blending methods can refine the obtained composite images. We sample 500 fo...
During image composition, the foreground is usually extracted using image segmentation [108] or matting [180] methods. However, the segmentation or matting results may be noisy and the foregrounds are not precisely delineated. When the foreground with jagged boundaries is pasted on the background, there will be abrupt...
By taking LFPNet [95] as an example matting method, we predict the alpha mattes and obtain the composite images. We observe that LFPNet can generally achieve satisfactory results except some challenging cases. We pick out its several failure cases to verify the effectiveness of image blending methods.
B
Our analyses and experiments on CityNet have yielded valuable insights for researchers. Our studies have confirmed the correlations among sub-datasets and have demonstrated that urban modeling and analyses can be enhanced by appropriately utilizing the mutual knowledge among correlated sub-datasets. To this end, we hav...
The paper is structured as follows. Section II outlines the pre-processing procedure of all sub-datasets in CityNet, along with their basic statistics. In Section III, we employ data mining tools to reveal and elucidate the correlations between contexts and service data. In Section IV, we conduct machine learning exper...
In this section, we present the empirical findings of machine learning tasks supported by CityNet, encompassing spatio-temporal predictions, transfer learning, and reinforcement learning. The primary objective of these experiments is to offer the following valuable insights:
In addition to the collection and processing of data, it is essential to identify and quantify the correlations between sub-datasets in CityNet to gain insights into the effective utilization of the multi-modal data. In this section, we leverage data mining tools to explore and visualize the relationships between servi...
To the best of our knowledge, CityNet is the first multi-modal urban dataset that aggregates and aligns sub-datasets from various tasks and cities. Using CityNet, we have provided a wide range of benchmarking results to inspire further research in areas such as spatio-temporal predictions, transfer learning, reinforcem...
A
Γintα⁢(𝐱∗):=[l^⁢(𝐱∗)−α∗,u^⁢(𝐱∗)+α∗].assignsubscriptsuperscriptΓ𝛼intsuperscript𝐱^𝑙superscript𝐱superscript𝛼^𝑢superscript𝐱superscript𝛼\displaystyle\Gamma^{\alpha}_{\text{int}}(\mathbf{x}^{*}):=\left[\hat{l}(% \mathbf{x}^{*})-\alpha^{*},\hat{u}(\mathbf{x}^{*})+\alpha^{*}\right]\,.roman_Γ start_POSTSUPERSCRIPT it...
In this study several types of prediction interval estimators for regression problems were reviewed and compared. Two main properties were taken into account: the coverage degree and the average width of the prediction intervals. It was found that without post-hoc calibration the methods derived from a probabilistic mo...
The idea behind this construction is very similar to that for point predictors. One estimates the amount by which the constructed intervals are too small (or wide) on average on the calibration set and corrects for these errors in the future. Two other normalized nonconformity measures were considered in kivaranovic202...
In Fig. 1, both the coverage degree, average width and R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient are shown. For each model, the data sets are sorted according to increasing R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient (averaged over th...
To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average ...
B
It has been widely shown in NLP and related fields \parencitespeechbert,vilbert,videobert,proteinbert that, by storing knowledge in huge numbers of parameters and carrying out task-specific fine-tuning, the knowledge implicitly encoded in the parameters of a PTM can be transferred to benefit a variety of downstream tas...
Fig. 2(b) shows the fine-tuning architecture for note-level classification. While the Transformer uses the hidden vectors to recover the masked tokens during pre-training, it has to predict the label of an input token during fine-tuning, by learning from the labels provided in the training data of the downstream task ...
For PTMs, an unsupervised or self-supervised, pre-training task is needed to set the objective function for learning. We employ the masked language modelling (MLM) pre-training strategy of BERT, randomly masking 15% tokens of an input sequence and the Transformer will reconstruct these masked tokens from the context of...
Figure 2: Illustration of the (a) pre-training procedure of our model for a CP sequence, where the model learns to predict (reconstruct) randomly-picked super tokens masked in the input sequence (each consisting of four tokens, as the example one shown in the middle with time step t𝑡titalic_t); and (b), (c) the fine-t...
As a self-supervised method, MLM needs no labelled data relating to the downstream tasks for pre-training. Following BERT, among all the masked tokens, we replace 80% by MASK tokens, 10% by a randomly chosen token and leave the last 10% unchanged. Doing so has the effect of helping mitigate the mismatch between pre-tra...
A
Observe that for a tree on n𝑛nitalic_n vertices we can compute for every vertex v𝑣vitalic_v and its neighbor u𝑢uitalic_u functions f⁢(v,u)𝑓𝑣𝑢f(v,u)italic_f ( italic_v , italic_u ) and g⁢(v,u)𝑔𝑣𝑢g(v,u)italic_g ( italic_v , italic_u ) denoting the sizes of subsets of C1⁢(T)subscript𝐶1𝑇C_{1}(T)italic_C start_PO...
Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As it was stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the...
The idea is to start from any vertex w𝑤witalic_w, and then jump to its neighbor with the largest component size in T−w𝑇𝑤T-witalic_T - italic_w, until we hit a vertex with desired property. Note that for any vertex v𝑣vitalic_v there can be at most one neighbor u𝑢uitalic_u such that its connected component Tusubscri...
The linear running time follows directly from the fact that we compute c𝑐citalic_c only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen...
In every tree T𝑇Titalic_T there exists a central vertex v∈V⁢(T)𝑣𝑉𝑇v\in V(T)italic_v ∈ italic_V ( italic_T ) such that every connected component of T−v𝑇𝑣T-vitalic_T - italic_v has at most |V⁢(T)|2𝑉𝑇2\frac{|V(T)|}{2}divide start_ARG | italic_V ( italic_T ) | end_ARG start_ARG 2 end_ARG vertices.
A