Dataset columns: context (string, 250–7.19k chars), A (string, 250–4.62k), B (string, 250–4.17k), C (string, 250–4.99k), D (string, 250–8.2k), label (string, 4 classes).
$2F'(a,b;c;z)+4x^{2}F''(a,b;c;z);$
$(-1)^{a}\binom{b-1}{-a}\Big[\frac{d^{3}}{dx^{3}}x^{m}F(a,b;c;z)+3\frac{d^{2}}{dx^{2}}x^{m}\frac{d}{dx}F(a,b;c;z)$
$\quad\quad+3\frac{d}{dx}x^{m}\frac{d^{2}}{dx^{2}}F(a,b;c;z)+x^{m}\frac{d^{3}}{dx^{3}}F(a,b;c;z)\Big].$
$\frac{d^{3}}{dx^{3}}F(a,b;c;z)$
$(-1)^{a}\binom{b-1}{-a}\Big[\frac{d^{2}}{dx^{2}}x^{m}F(a,b;c;z)+2\frac{d}{dx}x^{m}\frac{d}{dx}F(a,b;c;z)+\cdots\Big]$ …
C
Having computed the $T_{2}$, we begin the main ‘for’ loop of Algorithm 3, running through the columns of $g$ in reverse order. Observe that $r$ takes each value $1,\dots,d$ exactly once as we run through the columns of …
The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left...
At this point in each pass of the main ‘for’ loop of Algorithm 3, we call the subroutine LeftUpdate[$i$] for $i=r+2,\ldots,d$, unless $r\geq d-1$, in which case the current column will have already been cleared. The role of thi…
Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$. Then the elementary column operations can be used to reduce the other entries in row $r$ to zero.
If we are in the (unique) column where $r=d$ then there is no ‘column clearing’ to do and we skip straight to the row clearing stage. For each other column, we start by calling the subroutine FirstTransvections[$r$] (Algorithm 4).
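A minimal sketch of this clearing step, assuming ordinary elementary row and column operations over the rationals and a nonzero pivot (the algorithm in the text instead works with transvections; the function name and pivot handling are ours):

import numpy as np

def clear_with_pivot(g, row, col):
    # Leave a single nonzero entry (the pivot at [row, col]) in column
    # `col` via row operations, then clear the rest of row `row` via
    # column operations, mirroring the 'column clearing' and 'row
    # clearing' stages described above.
    g = g.astype(float)
    pivot = g[row, col]
    for i in range(g.shape[0]):
        if i != row and g[i, col] != 0:
            g[i, :] -= (g[i, col] / pivot) * g[row, :]
    for j in range(g.shape[1]):
        if j != col and g[row, j] != 0:
            g[:, j] -= (g[row, j] / pivot) * g[:, col]
    return g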
D
To show the existence and uniqueness of solutions for (21), we proceed by parts. The existence of a solution for the first equation follows from Lemma LABEL:l:lrmsystem. Solving the second equation is equivalent to (22), and such a system is well-posed due to the coercivity of $(\cdot,T\cdot)_{\partial\mathcal{T}_{H}}$ …
Except for (ii), all steps above can be performed efficiently, as the matrices involved are sparse and either local or independent of $h$. Solving (25), on the other hand, involves computing the $h$-dependent, global operator $P$, leading to a dense matrix in (25). From now on, we concentrat…
The key to approximating (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That al…
Above, and in what follows, $c$ denotes an arbitrary constant that does not depend on $H$, $\mathscr{H}$, $h$, $\mathcal{A}$, depending only on the shape regularity of the elements of $\mathcal{T}_{H}$ …
It is essential for the performance of the method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it is reasonable to solve them locally using patches of elements. We note that the ide…
A
We remark that the previously best known algorithms for finding the minimum area / perimeter all-flush triangle take nearly linear time [6, 1, 2, 3, 23], that is, $O(n\log n)$ or $O(n\log^{2}n)$ …
Using a Rotate-and-Kill process (shown in Algorithm 5), we find all the edge pairs and vertex pairs in $\mathsf{U}_{r,s,t}$ that are not G-dead.
in the Rotate-and-Kill process, and we are at the beginning of another iteration $(b^{\prime},c^{\prime})$ satisfying (2).
The inclusion / circumscribing problems usually admit the property that the set of locally optimal solutions is pairwise interleaving [6]. Once this property is admitted and $k=3$, we show that an iteration process (also referred to as Rotate-and-Kill) can be applied to search for all the locally optim…
Then, during the Rotate-and-Kill process, the pair $(e_{b},e_{c})$ will meet all pairs that are not DEAD, which implies that the algorithm finds the minimum perimeter (all-…
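A skeleton of a Rotate-and-Kill iteration, under our simplified reading: in each step exactly one of the two candidate edges is proved dead and its index advances, so the loop runs in linear time; should_kill_c stands in for the paper's killing criterion and is an assumption:

def rotate_and_kill(n, should_kill_c):
    # Enumerate surviving pairs (e_b, e_c) on an n-gon; each index
    # advances at most O(n) times, giving O(n) iterations overall.
    b, c = 0, 1
    survivors = []
    while b < n and c < 2 * n:
        survivors.append((b % n, c % n))
        if should_kill_c(b % n, c % n):
            c += 1   # e_c cannot belong to an optimal pair anymore
        else:
            b += 1   # e_b cannot belong to an optimal pair anymore
    return survivors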
C
We showcase here a study of the Munich shooting. We first show the event timeline at an early stage. Next we discuss some examples of misclassifications by our “weak” classifier and show some analysis of the strength of some highlighted features. The rough event timeline looks as follows.
At 18:22 CEST, the first tweet was posted. There might be some delay, as we retrieve only tweets in English and the very first tweets were probably in German. The tweet is “Sadly, i think there’s something terrible happening in #Munich #Munchen. Another Active Shooter in a mall. #SMH”.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We address this by debunking at the single-tweet le…
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha…
The processing pipeline of our classification approach is shown in Figure 2. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Cred…
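A minimal sketch of the aggregation step, assuming plain averaging of the per-tweet probabilities; the paper's exact aggregation may differ:

import numpy as np

def cred_score(tweet_probs):
    # tweet_probs: credibility probabilities from the pre-trained model,
    # one per tweet gathered for the event.
    return float(np.mean(np.asarray(tweet_probs, dtype=float)))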
A
The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile continuing to optimize long after we have zero training …
Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set, for which $\exists\,\mathbf{x}\in\mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}}<0$ …
We should not rely on plateauing of the training loss or on the loss (logistic or exp or cross-entropy) evaluated on validation data as measures to decide when to stop. Instead, we should look at the 0–1 error on the validation dataset. We might improve the validation and test errors even when the decrease …
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a…
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
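A tiny numpy experiment (ours, not from the papers) illustrating the slow directional convergence: on separable data the loss vanishes quickly, yet w/||w|| keeps drifting toward the max-margin direction while ||w|| grows roughly logarithmically in t:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) + np.array([2.0, 0.0])  # separable from the origin
y = np.ones(100)
w = np.zeros(2)
for t in range(1, 100001):
    margins = y * (X @ w)
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)  # logistic loss
    w -= 0.1 * grad
    if t in (100, 1000, 10000, 100000):
        print(t, np.linalg.norm(w), w / np.linalg.norm(w))  # direction still moving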
B
For this task, we developed two kinds of classification models: a traditional classifier with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 3.2). Fo…
Settings. For the time series classification model, we only report the best performing classifiers, SVM and Random Forest, here. The parameters of the SVM with RBF kernel are tuned via grid search to $C=3.0$, $\gamma=0.2$. For Random Forest, the number of trees is set to 350. …
But if we fit the models of the first few hours with limited data, the learned parameters are not very accurate. We show the performance of fitting these two models with only the first 10 hours of tweet volume in Figure 4. As we can see, except for the first one, the fitting results of the other three are not good eno…
The results of the experiments on the testing models are shown in Table 3. The best performance is achieved by the CNN+LSTM model, with a good accuracy of 81.19%. The non-neural-network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy, and the other two non-neural models are even worse. So the cl…
For the evaluation, we shuffle the 180 selected events and split them into 10 subsets which are used for 10-fold cross-validation (we make sure to include near-balanced folds in our shuffle). For the experiments, we implement the 3 non-neural-network models with the Scikit-learn library (scikit-learn.org). Furthermore,…
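A sketch of this evaluation protocol with placeholder data (the random features below stand in for the real event representations; the 350-tree setting is taken from the text above):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

X = np.random.rand(180, 27)       # placeholder for the 180 events
y = np.random.randint(0, 2, 180)  # placeholder labels
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # near-balanced folds
scores = [RandomForestClassifier(n_estimators=350).fit(X[tr], y[tr]).score(X[te], y[te])
          for tr, te in skf.split(X, y)]
print(np.mean(scores))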
D
$\mathrm{score}(\bar{a})=\sum_{m\in M}P(\mathcal{C}_{k}\mid e,t)\,P(\mathcal{T}_{\dots}\mid\dots)\,\mathsf{f}^{*}_{m}(\bar{a})$ …
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res…
We propose two sets of features, namely, (1) salience features (taking into account the general importance of candidate aspects), mainly mined from Wikipedia, and (2) short-term interest features (capturing a trend or timely change), mined from the query logs. In addition, we also leverage click-flow relatednes…
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall…
to add additional features from $\mathcal{M}^{1}$. The feature vector of $\mathcal{M}_{LR}^{2}$ consists of …
B
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
SMC weights are updated based on the likelihood of the observed rewards: $w_{t,a}^{(m)}\propto p_{a}(y_{t}|x_{t},\theta_{t,a}^{(m)})$ …
the fundamental operation in the proposed SMC-based MAB Algorithm 1 is to sequentially update the random measure $p_{M}(\theta_{t,a}|\mathcal{H}_{1:t})$ …
we propagate forward the sequential random measure $p_{M}(\theta_{t,a}|\mathcal{H}_{1:t})$ …
The techniques used in these success stories are grounded on statistical advances on sequential decision processes and multi-armed bandits. The MAB crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making.
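One SMC step in the spirit of the update described above; the propagation and likelihood functions are abstract stand-ins for the paper's densities, and the resampling threshold is our choice:

import numpy as np

def smc_step(particles, weights, y_t, x_t, likelihood, propagate, rng):
    particles = propagate(particles, rng)                # theta_t ~ transition density
    weights = weights * likelihood(y_t, x_t, particles)  # w proportional to p_a(y_t | x_t, theta)
    weights = weights / weights.sum()
    ess = 1.0 / np.sum(weights ** 2)                     # effective sample size
    if ess < 0.5 * len(particles):                       # degeneracy: resample
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights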
D
The insulin intakes tend to be higher in the evening, when basal insulin is used by most of the patients. The only difference is for patients 10 and 12, whose intakes are earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning.
For example, the correlation between blood glucose and carbohydrate for patient 14 was highest (0.47) at no lagging time step (ref. 23(c)), whereas the correlation between blood glucose and insulin was highest (0.28) with lagging time = 4 (ref. 24(d)).
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal...
Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
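A short pandas sketch of the lagged-correlation analysis mentioned above (the series names and the maximum lag are our assumptions):

import pandas as pd

def lagged_corr(glucose: pd.Series, insulin: pd.Series, max_lag: int = 6):
    # Pearson correlation between glucose and insulin shifted by 0..max_lag steps.
    return {lag: glucose.corr(insulin.shift(lag)) for lag in range(max_lag + 1)}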
A
A quantitative comparison of results on independent test datasets was carried out to characterize how well our proposed network generalizes to unseen images. Here, we were mainly interested in estimating human eye movements and regarded mouse tracking measurements merely as a substitute for attention. The final outcome...
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark (Bylinskii et al., 2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted to a public leaderboard to allow fair model ranking on eight evaluation met…
Table 1: Quantitative results of our model for the MIT300 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript $^{\dagger}$ represents models with a VGG16 backbone)…
Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that compete in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity and the superscript $^{\dagger}$ represents pre-trai…
Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript $^{\dagger}$ represents models with a VGG16 backbone…
B
There is a (polynomial-time) $O(\sqrt{\log(\mathsf{opt})}\,\log(h))$-approximation algorithm and an $O(\sqrt{\log(\mathsf{opt})}\cdot\mathsf{opt})$-…
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection....
In this section, we introduce polynomial-time reductions from the problem of computing the locality number of a word to the problem of computing the cutwidth of a graph, and vice versa. This establishes a close relationship between these two problems (and their corresponding parameters), which lets us derive several u...
In this paper, we investigate the problem of computing the locality number (in the exact sense as well as fixed-parameter algorithms and approximations) and, by doing so, we establish an interesting connection to the graph parameters cutwidth and pathwidth with algorithmic implications for approximating cutwidth. In th...
In this work, we have answered several open questions about the string parameter of the locality number. Our main tool was to relate the locality number to the graph parameters cutwidth and pathwidth via suitable reductions. As an additional result, our reductions also pointed out an interesting relationship between th...
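A brute-force check of the locality number under the standard marking-sequence definition (exponential in the alphabet size, for illustration only):

from itertools import permutations

def locality_number(word):
    best = len(word)
    for order in permutations(set(word)):     # enumerations of the letters
        marked, worst = set(), 0
        for ch in order:
            marked.add(ch)
            blocks, inside = 0, False         # count maximal marked factors
            for c in word:
                if c in marked and not inside:
                    blocks, inside = blocks + 1, True
                elif c not in marked:
                    inside = False
            worst = max(worst, blocks)
        best = min(best, worst)
    return best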
D
In their article, Kiranyaz et al. [77] trained patient-specific CNNs that can be used to classify long ECG data streams or for real-time ECG monitoring and early alert systems on a wearable device. The CNN consisted of three layers of an adaptive implementation of 1D convolution.
They achieved 99% and 97.6% accuracy in classifying ventricular and supraventricular ectopic beats, respectively. In [78], the authors used mean removal for DC removal, a moving average filter for high-frequency removal, a derivative-based filter for baseline wander removal, and a comb filter for power line noise removal.
In [83], the authors created a two-layer CNN using DeepQ [41] and MITDB to classify four arrhythmia types. The signals are heavily preprocessed with denoising filters (median, high-pass, low-pass, outlier removal) and segmented to 0.6 seconds around the R-peak.
In their article, Luo et al. [79] utilized quality assessment to remove low-quality heartbeats, and two median filters to remove power line noise, high-frequency noise and baseline drift. Then they used a derivative-based algorithm to detect R-peaks and time windows to segment each heartbeat.
Accuracy (footnote: There is a wide variability in results reporting. The results of [77] are for ventricular/supraventricular ectopic beats, [78] for three types of arrhythmias, [82] for five types of arrhythmias, [84] reports precision, [90] reports SNR and multiple results depending on added noise; the result of [91]…
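A sketch of this style of preprocessing with scipy (kernel lengths are typical choices, not the exact values used in [78] or [79]):

import numpy as np
from scipy.signal import medfilt

def denoise_ecg(sig, fs=360):
    # Two cascaded median filters estimate the baseline, which is then
    # subtracted (baseline wander removal); a short moving average
    # suppresses high-frequency noise.
    sig = np.asarray(sig, dtype=float)
    baseline = medfilt(medfilt(sig, kernel_size=int(0.2 * fs) | 1),
                       kernel_size=int(0.6 * fs) | 1)
    sig = sig - baseline
    win = max(1, int(0.02 * fs))
    return np.convolve(sig, np.ones(win) / win, mode="same")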
A
We noticed two major issues with the above model. First, the weight of the KL divergence loss term is game-dependent, which is not practical if one wants to deal with a broad portfolio of Atari games. Second, this weight is usually a very small number in the range of $[10^{-3},10^{-5}]$…
Figure 2: Architecture of the proposed stochastic model with discrete latent. The input to the model is four stacked frames (as well as the action selected by the agent) while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-...
As visualized in Figure 2, the proposed stochastic model with discrete latent variables discretizes the latent values into bits (zeros and ones) while training an auxiliary LSTM-based (Hochreiter & Schmidhuber, 1997) recurrent network to predict these bits autoregressively. At inference time, the latent bits will be gen…
A stochastic model can be used to deal with the limited horizon of past observed frames as well as sprite occlusion and flickering, which results in higher-quality predictions. Inspired by Babaeizadeh et al. (2017a), we tried a variational autoencoder (Kingma & Welling, 2014) to model the stochasticity of the environment.…
Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a...
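A sketch of the bit discretization with a straight-through estimator, which is one common way to realize hard 0/1 latents like those described above (the model's exact discretization may differ):

import torch

def discretize(latent_logits, training=True):
    probs = torch.sigmoid(latent_logits)
    bits = (probs > 0.5).float()              # hard zeros and ones
    if training:
        return bits + probs - probs.detach()  # identity gradient in backward
    return bits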
B
For the CNN modules with one and two layers, $x_{i}$ is converted to an image using learnable parameters instead of some static procedure. The one-layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels).
For the ‘signal as image’ module, we normalized the amplitude of $x_{i}$ to the range $[1,178]$. The results were inverted along the y-axis, rounded to the nearest integer and then used as the y-indices for the pixels with…
The two-layer module consists of two 1D convolutional layers (kernel sizes of 3 with 8 and 16 channels), with the first layer followed by a ReLU activation function and a 1D max pooling operation (kernel size of 2). The feature maps of the last convolutional layer for both modules are then concatenated al…
For the CNN modules with one and two layers, $x_{i}$ is converted to an image using learnable parameters instead of some static procedure. The one-layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels).
Here we also refer to a CNN as a neural network consisting of alternating convolutional layers, each followed by a Rectified Linear Unit (ReLU) and a max pooling layer, with a fully connected layer at the end, while the term ‘layer’ denotes the number of convolutional layers.
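A numpy sketch of the static ‘signal as image’ conversion described above (the array layout is our choice):

import numpy as np

def signal_to_image(x, height=178):
    x = np.asarray(x, dtype=float)
    y = 1 + (height - 1) * (x - x.min()) / (x.max() - x.min() + 1e-12)  # to [1, 178]
    rows = height - np.rint(y).astype(int)    # invert along the y-axis and round
    img = np.zeros((height, len(x)), dtype=np.uint8)
    img[np.clip(rows, 0, height - 1), np.arange(len(x))] = 1
    return img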
B
It is important to emphasize that the locomotion mode transitions are only meaningful when both rolling and walking modes are capable of handling a step negotiation. In the step negotiation simulations, it has been observed that the rolling locomotion cannot traverse steps with a height more than three times …
Fig. 7 illustrates the hierarchical control design for the autonomous locomotion mode transition. The decision-making process for this transition is accomplished in MATLAB, whereas the control of each separate locomotion mode is enacted in CoppeliaSim. The connection between MATLAB and the physical robot model in Copp...
In order to account for the robot’s dynamics and precisely quantify energy consumption during step negotiation, we utilized the Vortex physics engine incorporated within the CoppeliaSim (previously known as V-REP) robotics simulation software [25]. This ensured robust management of the robot’s intricate dynamics and inter…
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal...
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ...
A
In this work, we address what is a significant drawback in the online advice model. Namely, all previous works assume that advice is, in all circumstances, completely trustworthy, and precisely as defined by the algorithm. Since the advice is infallible, no reasonable online algorithm with advice would choose to ignor...
It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, like all information, are prone to transmission errors. In addition, the known advice models often allow information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution.…
Notwithstanding such interesting attributes, the known advice model has certain drawbacks. The advice is always assumed to be some error-free information that may be used to encode some property often explicitly connected to the optimal solution. In many settings, one can argue that such information cannot be readily a...
Furthermore, we show an interesting difference between the standard advice model and the model we introduce: in the former, an advice bit can be at least as powerful as a random bit, since an advice bit can effectively simulate any efficient choice of a random bit. In contrast, we show that in our model, there are situ...
We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem. Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size.
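A toy one-bit strategy for ski rental (our illustration of how an advice bit trades off consistency and robustness, not the paper's Pareto-optimal algorithm):

def ski_rental_with_advice(buy_cost, advice_buy, season_days):
    cost = 0
    for day in range(1, season_days + 1):
        if advice_buy:
            return buy_cost          # trust the advice: buy immediately
        if day == buy_cost:
            return cost + buy_cost   # hedge: buy at the break-even day anyway
        cost += 1                    # rent one more day
    return cost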
A
Thus, all these other models were also implemented in Python 2.7, using the sklearn library (https://scikit-learn.org/), version 0.17. Vectorization was done with the TfidfVectorizer class, with the standard English stop words list. Additionally, terms having a document frequency lower than 20 were ignored. Finally,…
In the context of online environments such as social media, an ADD scenario that is gaining interest, as we will see in Subsection 2.2, is the one known as early depression detection (EDD). In EDD the task is, given users’ data streams, to detect possibly depressed people as soon and as accurately as possible.
As said earlier, each chunk contained 10% of the subject’s writing history, a value that for some subjects could be just a single post while for others hundreds or even thousands of posts. Furthermore, the use of chunks assumes we know all of a subject’s posts in advance, which is not the case in real-life scenarios, in whic…
In the first one, we performed experiments in accordance with the original eRisk pilot task definition, using the described chunks. However, since this definition assumes, by using chunks, that the total number of a user’s writings is known in advance (which is not true when working with a dynamic environment, such a…
In this pilot task, classifiers must decide, as early as possible, whether each user is depressed or not based on his/her writings. In order to accomplish this, during the test stage and in accordance with the pilot task definition, the subject’s writings were divided into 10 chunks – thus each chunk contained 10% of th…
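The vectorization settings above translate directly into sklearn (the dummy corpus is a placeholder; the API shown is the same in current versions):

from sklearn.feature_extraction.text import TfidfVectorizer

subject_writings = ["post text ..."] * 25   # placeholder for the subjects' posts
vectorizer = TfidfVectorizer(stop_words="english", min_df=20)
X = vectorizer.fit_transform(subject_writings)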
C
DEF-A achieves its best performance when $\lambda=0.3$. In comparison, GMC+ outperforms DEF-A across different $\lambda$ values and shows a preference for a larger $\lambda$ (e.g., 0.5). In the following experiments, we set $\lambda$ to 0.3 for DEF-A and 0.5 for GMC+. $\lambda=$…
Figures 2(b), 2(c) and 2(d) show the distances to the global optimal point when using different $s$ for the case when $d=20$. We find that, compared with the local momentum methods, the global momentum method GMC converges faster and more stably.
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mo…
We find that both local momentum and global momentum implementations of DMSGD are equivalent to serial MSGD if no sparse communication is adopted. However, when sparse communication is adopted, things become different. In later sections, we will demonstrate that global momentum is better than loca…
GMC combines error feedback and momentum to achieve sparse communication in distributed learning. However, unlike existing sparse communication methods like DGC, which adopt local momentum, GMC adopts global momentum. To the best of our knowledge, this is the first work to introduce global momentum into sparse commun…
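A single-worker sketch of a sparsified update with momentum and error feedback (our simplification: in GMC the momentum term is the global one shared across workers, not a local accumulator):

import numpy as np

def sparse_step(grad, momentum, error, beta=0.9, k_ratio=0.01):
    momentum = beta * momentum + grad        # momentum accumulation
    update = momentum + error                # error feedback from earlier rounds
    k = max(1, int(k_ratio * update.size))
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]                # entries actually communicated
    error = update - sparse                  # residual kept locally
    return sparse, momentum, error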
B
For the same task as the previous one but in 2D, we use MNIST, which consists of a training dataset of 60,000 greyscale images of handwritten digits and a test dataset of 10,000 images, each of size $28\times 28$.
During supervised learning, the weights of the kernels are frozen and a one-layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs. The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as…
From the point of view of Sparse Dictionary Learning, SAN kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input the kernels of SANs are ECG beats) and the sparse activation map as the representation. The fact that SANs are wide…
The first two fully connected layers are followed by a ReLU while the last one produces the predictions. The CNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as the loss function.
Using backpropagation [2], the gradient of each weight w.r.t. the error of the output is efficiently calculated and passed to an optimization function such as Stochastic Gradient Descent or Adam [3], which updates the weights, making the output of the network converge to the desired output. DNNs were successful in utilizi…
A
Compared with other algorithms, the novel algorithm SPBLLA has advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely seen algorithm, LLA, is an ideal method for NE approachin…
We propose the synchronous payoff-based binary log-linear learning algorithm (SPBLLA), which has the following properties: 1) SPBLLA can learn with restricted information; 2) in certain conditions, SPBLLA approaches NE with constrained strategy sets; 3) SPBLLA allows UAVs to update strategies synchronously, which sign…
Fig. 15 presents the learning rates of PBLLA and SPBLLA when $\tau=0.01$. As $m$ increases, the learning rate of SPBLLA decreases, as shown in Fig. 15. However, when $m$ is small, SPBLLA’s learning rate is about 3 times that of PBLLA, showing the great advantage of sy…
Compared with other algorithms, the novel algorithm SPBLLA has advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely seen algorithm, LLA, is an ideal method for NE approachin…
In summary, our work differs significantly from each of the above-mentioned works, and other literature on UAV ad-hoc networks. As far as we know, our proposed algorithm is capable of learning from previous utilities and strategies, achieving NE with restricted information and constrained strategy sets, and updating strategi…
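The acceptance rule at the heart of binary log-linear learning, in its textbook form (our sketch; the synchronous scheduling and payoff estimation of SPBLLA are not shown):

import numpy as np

def bll_accept(current_utility, trial_utility, tau, rng):
    # Switch to the trial strategy with Boltzmann probability controlled
    # by the temperature tau; small tau concentrates on the better reply.
    u = np.array([current_utility, trial_utility]) / tau
    u -= u.max()                              # numerical stability
    p_trial = np.exp(u[1]) / np.exp(u).sum()
    return rng.random() < p_trial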
D
dissipated by viscous effects until $t_{comp}=45\,\upmu$s, when magnetic compression is initiated and velocities rise sharply in reaction to
the currents in the levitation and compression coils in the FEMM models, used to obtain boundary conditions for $\psi_{lev}$ and $\psi_{comp}$ …
the various dynamics associated with compression. The structures in the profiles of $v_{\phi}$ and $v_{z}$ near the entrance to the
are solved, $s_{i}/3$ is the area, and $r_{i}$ is the radial coordinate, associated with node $i$. The summation is over all nodes in the
Pre-compression profiles of $v_{\phi}$ and $v_{z}$ are shown at $25\,\upmu$s in figures 19(c) and (e). The particularly
B
Let r𝑟ritalic_r be the relation on 𝒞Rsubscript𝒞𝑅\mathcal{C}_{R}caligraphic_C start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT given to the left of Figure 12. Its abstract lattice ℒrsubscriptℒ𝑟\mathcal{L}_{r}caligraphic_L start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT is represented to the right.
The tuples t1subscript𝑡1t_{1}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, t4subscript𝑡4t_{4}italic_t start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT represent a counter-example to B⁢C⁢→⁡A𝐵𝐶→𝐴BC\operatorname{\rightarrow}Aitalic_B italic_C → italic_A for g1subscript𝑔1g_{1}italic_g start_POSTSUBSCRIPT 1 end_POSTSUBSCRI...
First, remark that both A⁢→⁡B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B and B⁢→⁡A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A are possible. Indeed, if we set g=⟨b,a⟩𝑔𝑏𝑎g=\langle b,a\rangleitalic_g = ⟨ italic_b , italic_a ⟩ or g=⟨a,1⟩𝑔𝑎1g=\langle a,1\rangleitalic_g = ⟨ italic_a , 1 ⟩, then r⊧gA⁢→⁡...
For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to A⁢→⁡B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B or B⁢→⁡A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A.
If no confusion is possible, the subscript R𝑅Ritalic_R will be omitted, i.e., we will use ≤,∧,∨\leq,\operatorname{\land},\operatorname{\lor}≤ , ∧ , ∨ instead of ≤R,∧R,∨Rsubscript𝑅subscript𝑅subscript𝑅\leq_{R},\operatorname{\land}_{R},\operatorname{\lor}_{R}≤ start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT , ∧ start_P...
C
For this experiment, we designed a customized environment modeled after the Gridworld problem (Figure 4): the state space contains pairs of points from a 2D discrete grid ($S=\{(x,y):x,y\in\{0,1,2,3,4\}\}$)…
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation between the variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and aft…
Standard Dropout is the original Dropout method, introduced in 2012. It provides a simple technique for avoiding over-fitting in fully connected neural networks [12]. During each training phase, each neuron is excluded from the network with a probability p. Once trained, in the testing phase the full network is u…
For the experiments, a fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers, one between the input layer and the first hidden layer and one between the two hidden layers. To minimize the DQN loss, the ADAM optimizer was used [25].
A fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers, one between the input layer and the first hidden layer and one between the two hidden layers. The ADAM optimizer was used for the minimization [25].
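In PyTorch, the architecture described above would look roughly as follows (the dropout rate p is a placeholder; the papers compare several Dropout variants):

import torch.nn as nn

class DQN(nn.Module):
    def __init__(self, n_states, n_actions, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(p), nn.Linear(n_states, 128), nn.ReLU(),
            nn.Dropout(p), nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)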
D
Cityscapes: The Cityscapes dataset (Cordts et al., 2016) contains annotated images of urban street scenes. The data was collected during daytime from 50 cities and exhibits variance in the season of the year and traffic conditions. Semantic, instance wise, and dense pixel-wise annotations are provided, with ‘fine’ anno...
Several modified versions (e.g. deeper/shallower, adding extra attention blocks) of encoder-decoder networks have been applied to semantic segmentation (Amirul Islam et al., 2017; Fu et al., 2019b; Lin et al., 2017a; Peng et al., 2017; Pohlen et al., 2017; Wojna et al., 2017; Zhang et al., 2018d). Recently in 2018, De...
ADE20K: The ADE20K dataset (Zhou et al., 2017) contains 25,210 images from other existing datasets, e.g., the LabelMe (Russell et al., 2008), the SUN (Xiao et al., 2010), and the Places (Zhou et al., 2014) datasets. The images are annotated with labels belonging to 150 classes for “scenes, objects, parts of objects, and…
Khosravan et al. (2019) proposed an adversarial training framework for pancreas segmentation from CT scans. Son et al. (2017) applied GANs for retinal image segmentation. Xue et al. (2018) used a fully convolutional network as a segmenter in the generative adversarial framework to segment brain tumors from MRI images....
The standard CE loss function and its weighted versions, as discussed in Section 4, have been applied to numerous medical image segmentation problems (Isensee et al., 2019; Li et al., 2019b; Lian et al., 2018; Ni et al., 2019; Nie et al., 2018; Oktay et al., 2018; Schlemper et al., 2019). However, Milletari et al. (20...
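A minimal soft Dice loss in the spirit of Milletari et al. (2016), as a sketch (pred holds foreground probabilities, target the binary mask):

import torch

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum(dim=(-2, -1))
    denom = pred.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1))
    return 1.0 - (2.0 * inter + eps) / (denom + eps)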
B
Problems such as graph classification and graph regression are characterized by samples of graphs that, generally, have a variable number of vertices. In order to apply MP and pooling operations when training a GNN on mini-batches, one solution is to perform zero-padding and obtain all graphs with $N_{\text{max}}$…
To train the GNN on mini-batches of graphs with a variable number of nodes, we consider the disjoint union of the graphs in each mini-batch and train the GNN on the combined Laplacians and graph signals. See the supplementary material for an illustration.
However, this solution is particularly inefficient in terms of memory cost, especially when there are many graphs with fewer than $N_{\text{max}}$ vertices. A more efficient solution is to build the disjoint union of the graphs in each mini-batch and trai…
Problems such as graph classification and graph regression are characterized by samples of graphs that, generally, have a variable number of vertices. In order to apply MP and pooling operations when training a GNN on mini-batches, one solution is to perform zero-padding and obtain all graphs with $N_{\text{max}}$…
However, this solution is particularly inefficient in terms of memory cost, especially when there are many graphs with fewer than $N_{\text{max}}$ vertices. A more efficient solution is to build the disjoint union of the graphs in each mini-batch and trai…
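A sketch of the disjoint-union batching, assuming Laplacians stored as scipy sparse matrices and signals as numpy arrays:

import numpy as np
from scipy.sparse import block_diag

def batch_disjoint_union(laplacians, signals):
    L = block_diag(laplacians)             # block-diagonal: no inter-graph edges
    X = np.concatenate(signals, axis=0)    # stacked node features
    sizes = [s.shape[0] for s in signals]  # per-graph sizes, for later pooling
    return L, X, sizes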
B
In this work, we presented a novel method for transforming random forests into neural networks. Instead of a direct mapping, we introduced a process for generating data from random forests by analyzing the decision boundaries and guided routing of data samples to selected leaf nodes.
Compared to state-of-the-art methods, the presented implicit transformation significantly reduces the number of parameters of the networks while achieving the same or even slightly improved accuracy due to better generalization. Our approach has shown that it scales very well and is able to imitate highly complex class...
Experiments demonstrate that the accuracy of the imitating neural network is equal to the original accuracy or even slightly better than the random forest due to better generalization while being significantly smaller. To summarize, our contributions are as follows:
Our method significantly reduces the number of parameters of the generated networks while reaching the same or even slightly better accuracy. The current best-performing methods generate networks with an average number of parameters of either 142,000, if sparse processing is available, or 748,0…
First, we analyze the performance of state-of-the-art methods for mapping random forests into neural networks and neural random forest imitation. The results are shown in Figure 4 for different numbers of training examples per class. For each method, the average number of parameters of the generated networks across all...
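A heavily simplified sketch of the imitation idea: label generated samples with the forest and fit a small network on them (we use naive uniform sampling, whereas the method above generates data from the decision boundaries and routes samples to selected leaves):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)                    # toy task
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
X_gen = rng.normal(size=(5000, 4))                         # generated samples
y_gen = forest.predict(X_gen)                              # forest as labeling oracle
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_gen, y_gen)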
A
Theoretically, we establish the sample efficiency of OPPO in an episodic setting of Markov decision processes (MDPs) with full-information feedback, where the transition dynamics are linear in features (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020). In particular, we allow the trans...
Moreover, we prove that, even when the reward functions are adversarially chosen across the episodes, OPPO attains the same regret in terms of competing with the globally optimal policy in hindsight (Cesa-Bianchi and Lugosi, 2006; Bubeck and Cesa-Bianchi, 2012). In comparison, existing algorithms based on value iterati...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We propose an optimistic variant of the proximal policy optimization algorithm, dubbed OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po…
A
The newly emerging NAS approaches are promising candidates to automate the design of application-specific architectures with little user interaction. However, it appears unlikely that current NAS approaches will discover new fundamental design principles as the resulting architectures highly depend on a-priori knowledg...
The computational cost of performing inference should match the (usually limited) resources in deployed systems and exploit the available hardware optimally in terms of time and energy. Computational efficiency, in particular, also includes mapping the representational efficiency to available hardware structures.
In this regard, resource-efficient neural networks for embedded systems are concerned with the trade-off between prediction quality and resource efficiency (i.e., representational efficiency and computational efficiency). This is highlighted in Figure 1. Note that this requires observing overall constraints such as pre...
In experiments, we demonstrated on two benchmark data sets the difficulty of finding a good trade-off among prediction quality, representational efficiency and computational efficiency. Considering three embedded hardware platforms, we showed that massive parallelism is required for inference efficiency and that quanti...
In this section, we provide a comprehensive overview of methods that enhance the efficiency of DNNs regarding memory footprint, computation time, and energy requirements. We have identified three different major approaches that aim to reduce the computational complexity of DNNs, i.e., (i) weight and activation quantiza...
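As a concrete instance of (i), a minimal symmetric uniform weight quantizer (simulated quantization; the bit width and rounding scheme are illustrative):

import numpy as np

def quantize_uniform(w, bits=8):
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax + 1e-12
    return np.clip(np.rint(w / scale), -qmax - 1, qmax) * scale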
C
… $2\,|\operatorname{FillRad}(M;\mathbb{F})-\operatorname{FillRad}(\mathbb{S}^{n})|<2\operatorname{FillRad}(\mathbb{S}^{n})-\frac{\pi}{3}$ …
Note that the definition of the filling radius does not require the metric $d_{M}$ on $M$ to be Riemannian – it suffices that $d_{M}$ generates the manifold topology. We call…
In Section 9, we give some applications of our ideas to the filling radius of Riemannian manifolds and also study consequences related to the characterization of spheres by their persistence barcodes and some generalizations and novel stability properties of the filling radius.
In [64], Liu studies the mapping properties of the filling radius. His results can be interpreted as providing certain guarantees for how the filling radius changes under multiplicative distortion of metrics. Here we study the effect of additive distortion.
In this section, we recall the notions of spread and filling radius, as well as their relationship. In particular, we prove a number of statements about the filling radius of a closed connected manifold. Moreover, we consider a generalization of the filling radius and also define a strong notion of filling radius whic...
C
Apart from the adaptive filtering and re-ordering of the axes, we maintained a rather standard visual presentation of the PCP plot, to make sure it is as easy and natural as possible for users to inspect it. The colors reflect the labels of the data with the same colors as in the overview (Subsection 4.2), when availab...
C2: Interpretation of Patterns  One salient pattern that stands out in the projection (Figure 7(c)) is the long curved shape of cluster C2. As opposed to C1 and C3, which look like ordinary (formless) clusters, the points in C2 have been laid out in the 2-D projection in an elongated shape going from top to bottom, wit...
It is important to notice that the goal of the Dimension Correlation tool is not to dictate exactly which dimensions cause the formation of a shape in a t-SNE projection. We propose a way to suggest the most interesting dimensions according to a detected visual pattern, in order to help analysts prioriti…
Most similarly to one of our proposed interactions (the Dimension Correlation, Subsection 4.4), in AxiSketcher [47] (and its prior version InterAxis [48]) the user can draw a polyline in the scatterplot to identify a shape, which results in new non-linear high-dimensional axes to match the user’s intentions. Since the...
Dimension Correlation   Supporting the interpretation of clusters is definitely one important step towards interpreting t-SNE, but it does not cover the entire picture. As it has been noted by Wattenberg et al. [14], t-SNE commonly generates visual patterns with different shapes, which may or may not faithfully repres...
D
The number of subcategories into which to divide a category: the criterion followed in this regard must produce meaningful subcategories. In order to ensure a reduced number of subcategories, we consider that not all algorithms inside one category must be a member of one of its subcategories. In that way, we avoid in…
Figure 2 depicts the classification we have reached, indicating, for the 518 reviewed algorithms, the number and ratio of proposals classified in each category and subcategory. It can be observed that the largest group of all is the Swarm Intelligence category (more than half of the proposals, 53%), inspired by the Swarm…
In order to identify the most influential reference algorithms used to design other bio-inspired algorithms, we have grouped together reviewed proposals that could be considered versions of the same classical algorithm. Figure 6 shows the classification of each algorithm based on its behavior, and the numbe…
Bearing the above criteria in mind, Figure 5 shows the classification reached after our literature analysis. The plot indicates, for the 518 reviewed algorithms, the number and ratio of proposals classified in each category and subcategory. It can be observed that in most nature- and bio-inspired algorithms, new solut...
It has not been until relatively recent times that the community has embraced the need for arranging the myriad of existing bio-inspired algorithms and classifying them under principled, coherent criteria. In 2013, [74] presented a classification of meta-heuristic algorithms as per their biological inspiration that di...
A
To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of the GAE and an update of the graph. The maximum number of epochs, $T$, is set to 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo…
(3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while the other deep clustering models usually fail when the training set is not large enough. Besides, it is insensitive to different initializations of parameters and needs no pretraining.
Classical clustering models work poorly on large-scale datasets. Instead, DEC and SpectralNet work better on large-scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph-type datasets, they fail on general datasets, which is probably caused by the fact that the graph…
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which is not supported by $k$-means. Therefore,…
(1) By extending the generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec…
B
The correlation between egress and ingress filtering in previous work shows that the measurements of ingress filtering also provide a lower bound on the number of networks that enforce egress filtering of spoofed outbound packets. Therefore our results on networks that do not enforce ingress filtering imply that at lea...
• Limited coverage. Previous studies infer spoofability based on measurements of a limited set of networks, e.g., those that operate servers with a faulty network stack (Kührer et al., 2014) or networks with volunteers that execute the measurement software (Beverly and Bauer, 2005; Beverly et al., 2009; Mauch,…
The downside of this approach is that the Spoofer Project requires users to download, compile and execute software – which also needs administrative privileges to run – once per measurement. This requires not only technically knowledgeable volunteers who agree to run untrusted code, but also networks which agree to…
Vantage Points. Measurement of networks which do not perform egress filtering of packets with spoofed IP addresses was first presented by the Spoofer Project in 2005 (Beverly and Bauer, 2005). The idea behind the Spoofer Project is to craft packets with spoofed IP addresses and check receipt thereof on the vantage poin...
Since the Open Resolver and the Spoofer Projects are the only two infrastructures providing vantage points for measuring spoofing, their importance is immense, as they have facilitated many research works analysing the spoofability of networks based on the datasets collected by these infrastructures. Nevertheless, the studi…
C
Experiments in this paper used the gas sensor drift array dataset [7]. The data consists of 10 sequential collection periods, called batches. Every batch contains between 161 and 3,600 samples, and each sample is represented by a 128-dimensional feature vector: 8 features each from 16 metal-ox…
Two preprocessing steps were applied to the data used by all models in this paper. The first step was to remove all samples taken for gas 6, toluene, because there were no toluene samples in batches 3, 4, and 5. The data was too incomplete for drawing meaningful conclusions. Also, with such data missin…
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data a…
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design...
Experiments in this paper used the gas sensor drift array dataset [7]. The data consists of 10 sequential collection periods, called batches. Every batch contains between 161 and 3,600 samples, and each sample is represented by a 128-dimensional feature vector: 8 features each from 16 metal-ox…
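The training protocol in Figure 2(A) amounts to the following evaluation loop (the logistic regression is a stand-in classifier, not the paper's network):

import numpy as np
from sklearn.linear_model import LogisticRegression

def evaluate_on_batch(batches, T):
    # batches: list of (features, labels) pairs; train on 1..T-1, test on T.
    X_tr = np.vstack([X for X, _ in batches[:T - 1]])
    y_tr = np.concatenate([y for _, y in batches[:T - 1]])
    X_te, y_te = batches[T - 1]
    return LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)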
A
Unfortunately, $T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)$ …
3: Compute the sets $\mathcal{B}_{1}^{(1)},\ldots,\mathcal{B}_{|\mathcal{T}|+1}^{(1)}$ …
Unfortunately, $T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)$ …
Algorithm $\mathcal{B}$ is simply algorithm $\mathcal{A}$, but after every step it waits as long as necessary to make its expected running time for that step equal to the bound calculated for this step. To be precise, there are two types of waiting, best explained by an example.
Note that the time waited is independent of $Q$. Together, these two types of waiting ensure that (i) the time needed by $\mathcal{B}$ is monotone in $|Q|$ and (ii) the total expected time needed by $\mathcal{B}$ equals the calculated upper bound for $\mathcal{A}$…
C
Note that it is not known whether the class of automaton semigroups is closed under taking the opposite semigroup [3, Question 13]. In defining automaton semigroups, we make a choice as to whether states act on strings on the right (as in this paper) or the left,
idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup. For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups:
During the research and writing of this paper, the second author was affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment...
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups (footnote: Note that the c...
C
We compare the training accuracies to analyze the regularization effects. As shown in Table 1, the baseline method has the highest training results, while the other methods cause 6.0–14.0% and 3.3–10.5% drops in the training accuracy on VQA-CPv2 an...
We compare the baseline UpDn model with HINT and SCR-variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Fu...
We compare four different variants of HINT and SCR to study the causes behind the improvements, including the models that are fine-tuned on: 1) relevant regions (state-of-the-art methods), 2) irrelevant regions, 3) fixed random regions, and 4) variable random regions. For all variants, we fine-tune a pre-trained UpDn, whi...
As reported by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then the p...
We compare the training accuracies to analyze the regularization effects. As shown in Table 1, the baseline method has the highest training results, while the other methods cause 6.0–14.0% and 3.3–10.5% drops in the training accuracy on VQA-CPv2 an...
C
Content Extraction. Manual inspection of the English language web pages showed that they included content other than the main text: often they had a header, a footer, a navigation menu, and banners. We refer to this extra content in a web page as boilerplate. Boilerplate draws away from the focus of the main content i...
The 1,600 labelled documents were randomly divided into 960 documents for training, 240 documents for validation and 400 documents for testing. Using 5-fold cross-validation, we tuned the hyperparameters for the models separately with the validation set and then used the held-out test set to report the test results. D...
The complete set of documents was divided into 97 languages and an unknown language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.
Document Classification. Some of the web pages in the English language candidate document set may not have been privacy policies and instead simply satisfied our URL selection criteria. To separate privacy policies from other web documents we used a supervised machine learning approach. Two researchers in the team labe...
We selected from the Common Crawl URL archive those URLs that contained the word “privacy” or both the words “data” and “protection”. We were able to extract 3.9 million URLs that fit this selection criterion. Informal experiments suggested that this selection of keywords was optimal for retrieving the most privacy policies with...
C
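The keyword filter described above might look like the following (an illustrative sketch; the paper's exact matching rules are not specified):

```python
def is_policy_candidate(url: str) -> bool:
    """Keyword filter over Common Crawl URLs: keep a URL if it contains
    "privacy", or both "data" and "protection" (illustrative sketch)."""
    u = url.lower()
    return "privacy" in u or ("data" in u and "protection" in u)

urls = [
    "https://example.com/privacy-policy",
    "https://example.org/data-protection/notice",
    "https://example.net/blog/post-42",
]
print([u for u in urls if is_policy_candidate(u)])  # last URL filtered out
```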
Figure 2(a.2) displays overlapping barcharts depicting the per-class performances for each algorithm, i.e., two colors for the two classes in our example. The more saturated bar in the center of each class bar represents the altered performance when the parameters of the algorithms are modified. Note that the view only...
Wang et al. [62] experimented with alternative visualization designs for selecting parameters, and they found that a parallel coordinates plot is a solid representation for this context as it is concise and also not rejected by the users. A drawback is its complexity compared to multiple simpler scatterplots. Fig...
The selection of S2⃝ leads us to 170 models, cf. Figure 7(d). By selecting these models, we get a new prediction space projection, shown in Figure 7(b). While some predictions are clearly in the positive or negative class, we focus on the unclear cases and select them using the lasso tool. Th...
Figure 2(a.1) shows that KNN models perform well, but not all of them. We can click the KNN boxplot and further explore and tune the model parameters for KNN with an interactive parallel coordinates plot, as shown in Figure 2(b), where six models are selected by filtering.
Figure 7: The exploration of the models’ and predictions’ spaces and the metamodel’s results. (a) presents the initial models’ space and how it can be simplified with the removal of unnecessary models. The predictions’ space is then updated, and the user is able to select instances that are not well classified by the ...
C
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
$(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))$, $(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))$...
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
cannot be adjacent to $\overline{2}$ nor $\overline{3}$, and so $f^{\prime}$ is $[013]$ or $[010]$.
A
To answer RQ1, we compare the changing trends of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off (Figure 1). We select the trained parameter initializations at different MAML training epochs and evaluate them directly on the met...
In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy: RQ1. Since the parameter initialization lear...
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
To answer RQ1, we compare the changing trends of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off (Figure 1). We select the trained parameter initializations at different MAML training epochs and evaluate them directly on the met...
The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation. Although in the early training stage the performance improves, benefiting from the pre-trained general language model, if the languag...
D
Figure 6: The subarray patterns on the cylinder and the corresponding expanded cylinder. (a) The t-UAV subarray partition pattern. (b) The r-UAV subarray partition pattern with conflict. (c) The r-UAV subarray partition pattern without conflict. (d) The t-UAV subarray partition pattern with beamwidth selection.
Multiuser-resultant Receiver Subarray Partition: As shown in Fig. 3, the r-UAV needs to activate multiple subarrays to serve multiple t-UAVs at the same time. Assuming that an element cannot be contained in different subarrays, the problem of activated CCA subarray partition arises at the r-UAV side for the fast m...
In the considered UAV mmWave network, the r-UAV needs to activate multiple subarrays and select multiple combining vectors to serve multiple t-UAVs at the same time. Hence, the beam gain of the combining vector maximization problem for r-UAV with our proposed codebook can be rewritten as
The r-UAV needs to select multiple appropriate AWVs $\boldsymbol{v}(m_{s,k},n_{s,k},i_{k},j_{k},\mathcal{S}_{k})$, $k\in\mathcal{K}$...
Without loss of generality, let us focus on the TE-aware codeword selection for the $k$-th t-UAV at the r-UAV side. The beam gain is selected as the optimization objective, and the problem of beamwidth control is translated into choosing the appropriate subarray size, which corresponds to the appropriate layer in ...
B
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
After the merging, the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$. We perform the...
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
A
Deep reinforcement learning achieves phenomenal empirical successes, especially in challenging applications where an agent acts upon rich observations, e.g., images and texts. Examples include video gaming (Mnih et al., 2015), visuomotor manipulation (Levine et al., 2016), and language generation (He et al., 2015). Suc...
Moreover, soft Q-learning is equivalent to a variant of policy gradient (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). Hence, Proposition 6.4 also characterizes the global optimality and convergence of such a variant of policy gradient.
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et...
D
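For reference, the soft Bellman operator underlying soft Q-learning replaces the max of the usual Bellman operator with a log-sum-exp at a temperature $\tau>0$ (standard form; the notation here is assumed, not taken from the text):

\[
(\mathcal{T}_{\mathrm{soft}}Q)(s,a)=r(s,a)+\gamma\,\mathbb{E}_{s'\sim P(\cdot\mid s,a)}\Big[\tau\log\sum_{a'}\exp\big(Q(s',a')/\tau\big)\Big],
\qquad
\pi(a\mid s)\propto\exp\big(Q(s,a)/\tau\big),
\]

where the induced softmax policy $\pi$ is what links soft Q-learning to policy gradient in the works cited above.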
As for the costs, the decoder depth has a strong impact on inference speed, as the decoder has to be computed once for each decoding step during auto-regressive decoding Kasai et al. (2021); Xu et al. (2021c), and the use of only deep encoders Bapna et al. (2018); Wang et al. (2019); Li et al. (2022a); Chai et al. (20...
For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et...
To test the effectiveness of depth-wise LSTMs in the multilingual setting, we conducted experiments on the challenging massively many-to-many translation task on the OPUS-100 corpus Tiedemann (2012); Aharoni et al. (2019); Zhang et al. (2020). We tested the performance of 6-layer models following the experiment settin...
Multilingual translation uses a single model to translate between multiple language pairs Firat et al. (2016); Johnson et al. (2017); Aharoni et al. (2019). Model capacity has been found crucial for massively multilingual NMT to support language pairs with varying typological characteristics Zhang et al. (2020); Xu et ...
For the convergence of deep Transformers, Bapna et al. (2018) propose the Transparent Attention mechanism which allows each decoder layer to attend weighted combinations of all encoder layer outputs. Wang et al. (2019) present the Dynamic Linear Combination of Layers approach that additionally aggregates shallow layers...
C
$\uptheta_{i}\triangleq\langle\uptau_{i}\cap\llbracket\mathsf{FO}[\upsigma_{i}]\rrbracket_{X_{i}}\rangle$ and $\mathcal{K}^{\circ}$...
Let $(X_{i},\uptheta_{i})_{i\in I}$ be a family of pre-spectral spaces. The product space $X\triangleq\prod_{i\in I}X_{i}$...
$\mathcal{S}\left(\prod_{i\in I}X_{i}\right)\simeq\prod_{i\in I}\mathcal{S}(X_{i})$ [18, Theorem 8.4.8]. Therefore, ...
$\psi_{\supseteq P_{n}}\triangleq\exists x_{0},\ldots,x_{n-1}.\;\bigwedge_{i\neq j}\neg(x_{i}=x_{j})\wedge\bigwedge_{0\leq i<n-1}E(x_{i},x_{i+1})$...
$\mathcal{K}^{\circ}(X_{i})=\uptau_{i}\cap\llbracket\mathsf{FO}[\upsigma_{i}]\rrbracket_{X_{i}}$...
D
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, achieving the highest PSNR and SSIM as well as the lowest MDLD. Specifically, compared with the traditional methods [23, 24] based on hand-crafted features, our approach overcomes the scene l...
Figure 13: Qualitative evaluations of the rectified distorted images on real-world scenes. For each evaluation, we show the distorted image and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left ...
Figure 11: Qualitative evaluations of the rectified distorted images on indoor (left) and outdoor (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified resul...
Figure 12: Qualitative evaluations of the rectified distorted images on people (left) and challenging (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified re...
We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen...
D
Apart from these empirical findings, there have been some theoretical studies on large-batch training. For example, the convergence analyses of LARS have been reported in [34]. The work in [37] analyzed the inconsistency bias in decentralized momentum SGD and proposed DecentLaM for decentralized large-batch training.
Furthermore, researchers in [19] argued that the extrapolation technique is suitable for large-batch training and proposed EXTRAP-SGD. However, experimental implementations of these methods still require additional training tricks, such as warm-up, which may make the results inconsistent with the theory.
Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33] proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11]
Figure 2 shows the learning curves of the five methods. We can observe that in small-batch training, SNGM and the other large-batch training methods achieve training loss and test accuracy similar to MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
We do not use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy, the default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
A
Given a newly arriving scenario $A$, we can set $(H_{A},\pi^{A})\leftarrow$ GreedyCluster$(A,R,-R)$,...
In this section we tackle the simplest problem setting, designing an efficiently-generalizable 3-approximation algorithm for homogeneous 2S-Sup-Poly. To begin, we are given a list of scenarios $Q$ together with their probabilities $p_{A}$,...
We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a...
There is a polynomial-time 3-approximation for homogeneous RW-MatSup. There is a 3-approximation algorithm for RW-MuSup, with runtime $\operatorname{poly}(n,m,\Lambda)$.
5-approximation for homogeneous 2S-MuSup-Poly, with $|\mathcal{S}|\leq 2^{m}$ and runtime $\operatorname{poly}(n,m,\Lambda)$.
C
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi...
(Lemma 3.1). To this end, we first estimate the upper bound of the mean square increasing rate of the local optimizers' states (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (...
III. The co-existence of random graphs, subgradient measurement noises, and additive and multiplicative communication noises is considered. Compared with the case with only a single random factor, the coupling terms of different random factors inevitably affect the mean square difference between optimizers' states and an...
As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between the local optimizers' states and the global optimal solution inevitably appears in the recursive inequality of the conditional mean square error, which leads to the nonnegative supermartingale converg...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi...
A
This experiment measures the information loss of MuCo. Note that the mechanism of MuCo differs substantially from that of generalization. Thus, for the sake of fairness, we compare the information loss of MuCo and Mondrian when they provide the same level of protection. Then, the experiment measures the effectivene...
Results from Figure 10 show that the increase of $l$ lowers the information loss but raises the relative error rate. This is mainly because the number of tuples in each group increases with the growth of $l$. On the one hand, in random output tables, the probabilities that tuples have to cover on the Q...
As observed in Figure 7(a), the information loss of MuCo increases with the decrease of parameter $\delta$. According to Corollary 3.2, each QI value in the released table corresponds to more records with the reduction of $\delta$, so that more records have to be involved for covering on the QI ...
We observe that the results of MuCo are much better than those of Mondrian and Anatomy. The primary reason is that MuCo retains most of the distributions of the original QI values and the results of queries are specific records rather than groups. Consequently, the accuracy of query answering of MuCo is much better and mo...
This experiment measures the information loss of MuCo. Note that the mechanism of MuCo differs substantially from that of generalization. Thus, for the sake of fairness, we compare the information loss of MuCo and Mondrian when they provide the same level of protection. Then, the experiment measures the effectivene...
B
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in the original paper. A mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62....
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in the original paper. A mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Due to the limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger masks to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (20...
D
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma s...
We denote by $\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_{i}(\delta_{1},\ldots,\delta_{n})=\delta_{i}$...
C
Reinforcement learning (RL) is a core control problem in which an agent sequentially interacts with an unknown environment to maximize its cumulative reward (Sutton & Barto, 2018). RL finds enormous applications in real-time bidding in advertisement auctions (Cai et al., 2017), autonomous driving (Shalev-Shwartz et al....
We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ...
However, all of the aforementioned empirical and theoretical works on RL with function approximation assume the environment is stationary, which is insufficient to model problems with time-varying dynamics. For example, consider online advertising. The instantaneous reward is the payoff when viewers are redirected to ...
The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and...
Bandit problems can be viewed as a special case of MDP problems with unit planning horizon. It is the simplest model that captures the exploration-exploitation tradeoff, a unique feature of sequential decision-making problems. There are several ways to define nonstationarity in the bandit literature. The first one is ...
B
A series of 1-5 Likert scale questions (1: strongly disagree, 5: strongly agree) was presented to the respondents (in SeenFake-57) to further gain insights into their views on fake news. Respondents feel that the issue of fake news will remain for a long time ($M=4.33$, $SD=0.831$)...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst...
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,...
C
We conduct experiments to investigate the performance gain concerning entity degrees. Typically, an entity with a higher degree indicates that it has more neighboring entities. Consequently, the computation of attention scores to aggregate these neighbors becomes crucial.
Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg...
The results on the ZH-EN dataset are depicted in Figure 7. For entities with only a few neighbors, the advantage of leveraging DAN is not significant. However, as the degree increases, incorporating DAN yields more performance gain. This upward trend halts once the degree exceeds 20. Overall, DAN exhibits significant...
We conduct experiments to explore the impact of the numbers of unseen entities on the performance in open-world entity alignment. We present the results on the ZH-EN datasets in Figure 6. Clearly, the performance gain achieved by leveraging our method significantly increases when there are more unseen entities. For ex...
Table 4 presents the results of conventional entity alignment. decentRL achieves state-of-the-art performance, surpassing all others in Hits@1 and MRR. AliNet [39], a hybrid method combining GCN and GAT, performs better than the methods solely based on GAT or GCN on many metrics. Nonetheless, across most metrics and da...
B
Optimization detail. We update the parameters of VDM $t_{\rm vdm}$ times after each episode by using the Adam optimizer with a learning rate of $10^{-4}$. The hyper-parameter $t_{\rm vdm}$...
Figure 6: The evaluation curve in Atari games. The first 6 games are hard exploration tasks. The different methods are trained with different intrinsic rewards, and extrinsic rewards are used to measure the performance. Our method performs best in most games, both in learning speed and quality of the final policy. The ...
We observe that our method performs the best in most of the games, in both the sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are ...
We first evaluate our method on standard Atari games. Since different methods utilize different intrinsic rewards, the intrinsic rewards cannot be used to measure the performance of the trained purely exploratory agents. As an alternative, we follow [11, 13] and use the extrinsic rewards given by the environment to ...
In this work, we consider self-supervised exploration without extrinsic reward. In such a case, the above trade-off narrows down to a pure exploration problem, aiming at efficiently accumulating information from the environment. Previous self-supervised exploration typically utilizes ‘curiosity’ based on prediction-err...
C
If we were to add nodes to make the grid symmetric or tensorial, then the number of nodes of the resulting (sparse) tensorial grid would scale exponentially, $\mathcal{O}(n^{m})$, with the space dimension $m\in\mathbb{N}$...
We realize the algorithm of Carl de Boor and Amos Ron [28, 29] in terms of Corollary 6.5 in the case of the torus $M=\mathbb{T}^{2}_{R,r}$. That is, we consider
We complement the established notion of unisolvent nodes by the dual notion of unisolvence. That is: For given arbitrary nodes $P$, determine the polynomial space $\Pi$ such that $P$ is unisolvent with respect to $\Pi$. In doing so, we revisit earlier results by Carl de Boor and Amos Ron...
Here, we answer Questions 1–2. To do so, we generalize the notion of unisolvent nodes $P_{A}$, $A\subseteq\mathbb{N}^{m}$, to non-tensorial grids. This allows us...
for a given polynomial space $\Pi$ and a set of nodes $P\subseteq\mathbb{R}^{m}$ that is not unisolvent with respect to $\Pi$, find a maximum subset $P_{0}\subseteq P$...
B
The last two plots correspond to covariance-shifted Gaussian distributions, where Fig. 1c) examines the power for different $n$ with fixed $d=60$, and Fig. 1d) examines the power for different $d$ with fixed $n=75$. We can see that the power of all methods increas...
While the Wasserstein distance has wide applications in machine learning, the finite-sample convergence rate of the Wasserstein distance between empirical distributions is slow in high-dimensional settings. We propose the projected Wasserstein distance to address this issue.
The Wasserstein distance, as a particular case of IPM, is popular in many machine learning applications. However, a significant challenge in utilizing the Wasserstein distance for two-sample tests is that the empirical Wasserstein distance converges at a slow rate due to the complexity of the associated function space....
The finite-sample convergence of general IPMs between two empirical distributions was established. Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance has a minor dependence on the dimension of target distributions, which alleviates the curse of dimensionality.
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized. The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by ...
C
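A minimal sketch of the projection idea, approximating a max-sliced $W_1$ distance by random search over one-dimensional unit directions (not the paper's exact estimator or optimization scheme):

```python
import numpy as np

def w1_1d(x, y):
    """1-D Wasserstein-1 distance between equal-size empirical samples."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def max_sliced_w1(X, Y, n_dirs=500, seed=0):
    """Approximate max-sliced W1 by random search over unit directions."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_dirs):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)                 # random unit direction
        best = max(best, w1_1d(X @ theta, Y @ theta))  # project, then 1-D W1
    return best
```

Because each projected distance only involves one-dimensional samples, its empirical convergence rate does not degrade with the ambient dimension, which is the motivation stated above.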
Learning disentangled factors $h\sim q_{\phi}(H|x)$ that are semantically meaningful representations of the observation $x$ is highly desirable because such interpreta...
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and corre...
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (footnote: In this exposition we use unsupervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise...
A
The NOT gate performs logical negation through a single ‘twist’, as in the 4-pin design. To be exact, the position of the middle ground pin is fixed, and the transformation is a structural one that swaps the positions of the remaining two pins, true and false.
We examine the inputs through 18 test cases to see whether the circuit is acceptable. Next, we verify with DFS that the output is feasible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The results are shown in Tab...
Fig. 3 shows the AND and OR gates built from 3-pin based logic; it also shows the connection status of the output pin when A=0, B=1 is entered into the AND gate. When A=0 and B=1, i.e., the inverse of A is connected and B is connected, output C is connected only to the following two pins, and this is the correct result for the AND operation.
The structural computer used an inverted signal pair to implement the reversal of a signal (the NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations as series and parallel connections were required. However, one can ask whether the four-pin designs are the...
DFS (depth-first search) verifies that the output is feasible for the actual pin connection state. As described above, the output is determined by the 3-pin input, so we input 1 via the A2–A1 and B2–B1 connections (the reverse is treated as 0), and the corresponding output will be recognize...
B
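The DFS check over the pin-connection state can be sketched as follows (pin names and wiring are hypothetical; the real connection tables come from the 18 test cases mentioned above):

```python
from collections import defaultdict

def reachable(connections, start):
    """DFS over the pin-connection graph: return every pin electrically
    reachable from `start` (hypothetical pin names, illustrative only)."""
    graph = defaultdict(set)
    for a, b in connections:
        graph[a].add(b)
        graph[b].add(a)
    seen, stack = set(), [start]
    while stack:
        pin = stack.pop()
        if pin not in seen:
            seen.add(pin)
            stack.extend(graph[pin] - seen)
    return seen

# Hypothetical wiring for one input case; check which pins output C reaches.
wiring = [("A1", "GND"), ("B1", "B2"), ("C", "B1")]
print(reachable(wiring, "C"))
```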
Given a group $G$ of permutations over a finite set, a (group) representation expresses the group action in terms of invertible matrices over a finite-dimensional vector space, with the group operation replaced by matrix multiplication. Such representations are imperative in studying abstract groups as it...
Given a group $G$ of permutations over a finite set, a (group) representation expresses the group action in terms of invertible matrices over a finite-dimensional vector space, with the group operation replaced by matrix multiplication. Such representations are imperative in studying abstract groups as it...
A finite field, by definition, is a finite set, and the set of all permutation polynomials over the finite field forms a group under composition. Given a finite subset of such permutations, we can consider the group generated by this set. In this paper, we propose a representation of such a group using the concept of lin...
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available on the invertibility of polynomials and the construction of inverses of permutation polynomials over $\mathbb{F}$...
A finite group, $G_{F}$, can be generated from $F_{i}$ using composition as the group operation. In this section, we devise a procedure to compute the linear representation of the gro...
B
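As an illustration of generating a group of permutation polynomials under composition (a toy closure computation over GF(5); this is only a sketch, not the paper's linear-representation procedure):

```python
P = 5  # a small prime field GF(5), chosen for illustration

def as_perm(poly):
    """Evaluate a polynomial (coefficients, lowest degree first) on GF(P),
    returning the induced map as a tuple."""
    return tuple(sum(c * pow(x, k, P) for k, c in enumerate(poly)) % P
                 for x in range(P))

def compose(f, g):
    return tuple(f[g[x]] for x in range(P))  # (f o g)(x) = f(g(x))

gens = [as_perm((1, 1)), as_perm((0, 2))]   # x + 1 and 2x, both permutations
group, frontier = set(gens), list(gens)
while frontier:                             # closure under composition
    h = frontier.pop()
    for g in gens:
        for new in (compose(h, g), compose(g, h)):
            if new not in group:
                group.add(new)
                frontier.append(new)
print(len(group))  # 20: the affine maps ax+b of GF(5) in this toy example
```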
In order to compare the different meta-learners in terms of classification and view selection performance, we perform a series of simulations. We generate multi-view data with $V=30$ or $V=300$ disjoint views, where each view $\boldsymbol{X}^{(v)}$, $v=1,\ldots,V$...
Table 3: Results of applying MVS with different meta-learners to the breast cancer data. ANSV denotes the average number of selected views. H denotes the H measure (Hand, 2009). In computing the H measure we assume that the misclassification cost is the same for each class. $\hat{\Phi}$...
Table 2: Results of applying MVS with different meta-learners to the colitis data. ANSV denotes the average number of selected views. H denotes the H measure (Hand, 2009). In computing the H measure we assume that the misclassification cost is the same for each class. $\hat{\Phi}$...
Any simulation study is limited by its choice of experimental factors. In particular, in our simulations we assumed that all features corresponding to signal have the same regression weight, and that all views contain an equal number of features. The correlation structures we used are likely simpler than those encounte...
For each experimental condition, we simulate 100 multi-view data training sets. For each such data set, we randomly select 10 views. In 5 of those views, we determine all of the features to have a relationship with the outcome. In the other 5 views, we randomly determine 50% of the features to have a relationship with...
D
This phase can utilize off-the-shelf feature selection methods [29, 30] to identify the relevant variables. When choosing a feature selection method, the following factors should be considered: (1) The prediction models used in the prediction model training phase; (2) The interpretability of the selected variables; an...
For Phase 1, five feature selection methods, including 2 causal and 3 non-causal methods, are used in our experiments. FBED and HITON-PC are causal feature selection techniques. FBED is used for MB (Markov Blanket) discovery and HITON-PC is for PC (Parents and Children) selection. MI, IEPC and DA are non-causal feature...
This phase can utilize off-the-shelf feature selection methods [29, 30] to identify the relevant variables. When choosing a feature selection method, the following factors should be considered: (1) The prediction models used in the prediction model training phase; (2) The interpretability of the selected variables; an...
Among the filter feature selection methods, causal feature selection methods [31, 29, 32] are recommended choices as they identify the causal factors of a target variable, offering better interpretability. These methods select the parents and children (PC) or Markov blanket (MB) of a target variable in a Bayesian netw...
Feature selection methods fall into three categories, wrapper, filter and embedded methods [30]. Conceptually, methods from the three categories can all be utilized for relevant variable selection in DepAD. However, filter feature selection is preferred due to its high efficiency and independence of prediction models.
D
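As one plausible instantiation of the MI filter mentioned above, using scikit-learn's mutual-information scorer on synthetic data (the data, threshold, and feature count are illustrative, not the paper's setup):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                    # 10 candidate variables
y = (X[:, 0] + 0.5 * X[:, 3]                      # only features 0 and 3 matter
     + rng.normal(scale=0.1, size=200) > 0).astype(int)

mi = mutual_info_classif(X, y, random_state=0)    # filter score per feature
top = np.argsort(mi)[::-1][:3]                    # keep the 3 highest-MI features
print(top, mi[top])
```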
Figure 1: Illustration of the impact of the $\kappa$ parameter (logistic case; the multinomial logit case closely follows): a representative plot of the derivative of the reward function. The x-axis represents the linear function $x^{\top}\theta$...
Motivated by these issues, we consider the dynamic assortment optimization problem. In every round, the retailer offers a subset (assortment) of products to a consumer and observes the consumer response. Consumers purchase at most one product from each assortment, choosing the product that maximizes their utility, and the retaile...
We note that Ou et al. [2018] also consider a similar problem of developing an online algorithm for the MNL model with linear utility parameters. Though they establish a regret bound that does not depend on the aforementioned parameter $\kappa$, they work with an inaccurate version of the MNL model. More speci...
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
In this work, we proposed an optimistic algorithm for learning under the MNL contextual bandit framework. Using techniques from Faury et al. [2020], we developed an improved technical analysis to deal with the non-linear nature of the MNL reward function. As a result, the leading term in our regret bound does not suffe...
B
Graph neural networks (GNN) are a useful model for exploiting correlations in irregular structures [17]. As they become popular in different computer vision fields [13, 38, 40], researchers also find their application in temporal action localization [3, 44, 46]. G-TAD [44] breaks the restriction of temporal locations o...
Specifically, we propose a Video self-Stitching Graph Network (VSGN) for improving the performance on short actions in the TAL problem. Our VSGN is a multi-level cross-scale framework that contains two major components: video self-stitching (VSS) and a cross-scale graph pyramid network (xGPN). In VSS, we focus on a short period...
Inspired by FPN [22], which computes multi-scale features with different levels, we propose a cross-scale graph pyramid network (xGPN). It progressively aggregates features from cross scales as well as from the same scale at multiple network levels via a hybrid module of a temporal branch and a graph branch. As shown ...
Graph neural networks (GNN) are a useful model for exploiting correlations in irregular structures [17]. As they become popular in different computer vision fields [13, 38, 40], researchers also find their application in temporal action localization [3, 44, 46]. G-TAD [44] breaks the restriction of temporal locations o...
Compared to these methods, our VSGN builds a graph on video snippets as G-TAD does, but differently: beyond modelling snippets from the same scale, VSGN also exploits correlations between cross-scale snippets and defines a cross-scale edge to break the scaling curse. In addition, our VSGN contains multiple-level graph neur...
D
Another open issue is the avoidance of hyperparameter tuning per se, as noted by E3. The goal of the tool is not to explore or provide insights into the individual sets of hyperparameters of the models or algorithms; instead we focus on the search for new powerful models and implicitly store their hyperparameters. T...
(iv) control the evolutionary process by setting the number of models that will be used for crossover and mutation in each algorithm (view (b) of the overview figure); and (v) compare the performances of the best-so-far identified ensemble against the acti...
Another open issue is the avoidance of hyperparameter tuning per se, as noted by E3. The goal of the tool is not to explore or provide insights into the individual sets of hyperparameters of the models or algorithms; instead we focus on the search for new powerful models and implicitly store their hyperparameters. T...
Evolutionary optimization and majority-voting ensembles inspired us to focus on the three aforementioned questions that constitute open research challenges. In this paper, we present a visual analytics (VA) tool, called VisEvol (see the overview figure...
In this paper, we presented VisEvol, a VA tool that aims to support hyperparameter search through evolutionary optimization. Using multiple coordinated views, we allow users to generate new hyperparameter sets and store the already robust hyperparameters in a majority-voting ensemble. Exploring th...
D
Another algorithm is proposed in [28] that assumes the underlying switching network topology is ultimately connected. This assumption means that the union of graphs over an infinite interval is strongly connected. In [29], previous works are extended to solve the consensus problem on networks under limited and unreliab...
we propose the decentralized state-dependent Markov chain synthesis (DSMC) algorithm that achieves convergence to the desired distribution with an exponential rate and minimal state transitions. Additionally, we present a shortest path algorithm that can be integrated with the DSMC algorithm, as utilized in [7, 14, 15]...
We then present a decentralized Markov-chain synthesis (DSMC) algorithm based on the proposed consensus protocol and we prove that the resulting DSMC algorithm satisfies these mild conditions. This result is employed to prove that the resulting Markov chain has a desired steady-state distribution and that all initial d...
Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transi...
Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen...
B
Due to their low-dimensionality and continuous representation, functional maps also serve as the backbone of many deep learning architectures for 3D correspondence. One of the first examples is FMNet [40], which has also been extended for unsupervised learning settings recently [27, 3, 59].
It was shown that deep learning is an extremely powerful approach for extracting shape correspondences [40, 27, 59, 26]. However, the focus of this work is on establishing a fundamental optimisation problem formulation for cycle-consistent isometric multi-shape matching. As such, this work does not focus on learning me...
Due to their low-dimensionality and continuous representation, functional maps also serve as the backbone of many deep learning architectures for 3D correspondence. One of the first examples is FMNet [40], which has also been extended for unsupervised learning settings recently [27, 3, 59].
Despite the exponential size of the search space, there exist efficient polynomial-time algorithms to solve the LAP [11]. A downside of the LAP is that the geometric relation between points is not explicitly taken into account, so that the found matchings lack spatial smoothness. To compensate for this, a correspondenc...
Other learning methods rely on a given template for each class [25] or local neighbourhood encoding to learn a compact representation [39]. The recently conducted SHREC correspondence contest on isometric and non-isometric 3D shapes [20] revealed that there is still room for improvement in both fields.
D
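For instance, the LAP mentioned above is solvable in polynomial time with an off-the-shelf solver; a minimal sketch with SciPy (the cost matrix is made up for illustration):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Pairwise matching costs between two point sets (illustrative values).
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])

rows, cols = linear_sum_assignment(cost)          # polynomial-time LAP solver
print(list(zip(rows, cols)), cost[rows, cols].sum())
```

As noted above, the plain LAP ignores the geometric relation between points, so such a matching may lack spatial smoothness.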
On the side of path graphs, we believe that, compared to [3, 22], our algorithm provides a simpler and much shorter treatment (the whole explanation is in Section 4). Moreover, it does not need complex data structures, while the algorithm in [3] is based on PQR-trees and the algorithm in [22] is a complex backtracking algorithm...
On the side of directed path graphs, prior to this paper, it was necessary to implement two algorithms to recognize them: a recognition algorithm for path graphs as in [3, 22], and the algorithm in [4] that in linear time is able to determine whether a path graph is also a directed path graph. Our algorithm directly...
The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prov...
On the side of directed path graphs, at the state of the art, our algorithm is the only one that does not use the results in [4], which give a linear-time algorithm able to establish whether a path graph is a directed path graph too (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ...
Directed path graphs are characterized by Gavril [9]; in the same article he also gives the first recognition algorithm, which has $O(n^{4})$ time complexity. In the above-cited article, Monma and Wei [18] give the second characterizati...
C
$P_{(ij)} = p\begin{bmatrix} 0.2 & 0.05 & 0.05 \\ 0.05 & 0.2 & 0.05 \\ 0.05 & 0.05 & 0.2 \end{bmatrix}.$
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as $\rho$ increases they all perform worse. Under the DCMM model, the mixed Hamming ...
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both approaches perform better than OCCAM and GeoNMF under the MMSB setting....
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE, while both methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha...
The numerical results of these two sub-experiments are shown in panels (i) and (j) of Figure 1, from which we can find that: all procedures show improved performance when the simulated network becomes denser; Mixed-SLIM outperforms the other three approaches, especially under the DCMM setting.
D
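A sketch of simulating a network under the MMSB setting with the connectivity matrix $P_{(ij)}$ above (node counts and membership proportions are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, p = 200, 3, 1.0
P = p * (0.05 * np.ones((K, K)) + 0.15 * np.eye(K))   # the P_(ij) above

Pi = np.zeros((n, K))                                  # membership matrix
Pi[:120] = np.eye(K)[rng.integers(0, K, size=120)]     # pure nodes
Pi[120:] = rng.dirichlet(np.ones(K), size=n - 120)     # mixed nodes

W = Pi @ P @ Pi.T                                      # edge probabilities
A = (rng.random((n, n)) < W).astype(int)
A = np.triu(A, 1); A += A.T                            # symmetric, no self-loops
```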
Specifically, a distributional optimization problem is given by $\min_{p\in\mathcal{P}_{2}(\mathcal{X})}F(p)$, where $\mathcal{P}_{2}(\mathcal{X})$...
Many machine learning problems fall into such a category. For instance, in Bayesian inference (Gelman et al., 2013), the probability distribution describes the belief based on the observations and the functional of interest is the Kullback-Leibler (KL) divergence.
Moreover, following the principle of maximum entropy (Guiasu and Shenitzer, 1985; Shore and Johnson, 1980), we can further incorporate a Kullback-Leibler (KL) divergence regularizer into (3.3). Specifically, letting $p_{0}\in\mathcal{P}_{2}(\mathcal{X})$ ...
Second, the functional optimization problem associated with the variational representation of F𝐹Fitalic_F can be solved by any supervised learning methods such as deep learning (LeCun et al., 2015; Goodfellow et al., 2016; Fan et al., 2019) and kernel methods (Friedman et al., 2001; Shawe-Taylor et al., 2004), which o...
One commonly used ambiguity set is the level set of KL divergence, i.e., $\mathcal{M}:=\{p:\mathrm{KL}(p,p_{0})\leq\epsilon\}$ for some $\epsilon>0$...
A
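The KL ambiguity set defined above lends itself to a direct membership check. Below is a minimal NumPy sketch for discrete distributions; the distributions and the radius eps are toy assumptions of ours, not taken from the excerpt.

```python
import numpy as np

def kl(p, p0):
    """KL(p, p0) for discrete distributions on a shared support."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / p0[mask])))

def in_ambiguity_set(p, p0, eps):
    """Membership test for M = {p : KL(p, p0) <= eps}."""
    return kl(p, p0) <= eps

# toy example: reference distribution p0 and a candidate p
p0 = np.full(4, 0.25)
p = np.array([0.4, 0.3, 0.2, 0.1])
print(in_ambiguity_set(p, p0, eps=0.2))  # True: KL is about 0.107
```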
The intuitive idea is that random noise (a random latent variable) is equivalent to the fixed $\epsilon$ in $\epsilon$-greedy, and the learnable Gaussian distribution (a learnable latent variable) is equivalent to the adjustable $\epsilon$ in $\epsilon$-greedy, which ...
2) The performances of Individual RL and PressLight drop by 38% and 41% when the model is transferred. This shows that the models learned by the regular RL algorithms indeed rely on the training scenario. MetaLight is more robust to various scenarios than Individual RL and PressLight, which indicates the advantage of the m...
(2) Modelling Latent Coordination with Neighbors: In our method, the neighbor information is available only during training, and the decoders are discarded at execution time. Using only the individual observation as the input of the policy may ignore latent neighbor information. As shown in Eq. 1, the observation transition is cau...
A latent variable is an additional input to the policy network besides the observation, and it can be regarded as prior knowledge about the task. Previous research [64] has shown that a random Gaussian distribution can be regarded as a noise disturbance that provides randomness to the policy. Specifically, the regula...
Before formulating the problem, we first design the learning paradigm by analyzing the characteristics of traffic signal control (TSC). Due to the coordination among different signals, the most direct paradigm may be centralized learning. However, the global state information in TSC is not only highly redundant a...
B
$\sigma_{j},\,\mathbf{u}_{j},\,\mathbf{v}_{j}$, for $j=1,\ldots,r$...
calculate $\mathbf{y}=\left[\frac{\mathbf{u}_{1}^{\mathsf{H}}\mathbf{b}}{\sigma_{1}},\ldots,\frac{\mathbf{u}_{r}^{\mathsf{H}}\mathbf{b}}{\sigma_{r}}\right]^{\top}$
$A_{\text{rank-}r}:=\sigma_{1}(A)\,\mathbf{u}_{1}\mathbf{v}_{1}^{\mathsf{H}}+\cdots+\sigma_{r}(A)\,\mathbf{u}_{r}\mathbf{v}_{r}^{\mathsf{H}}$...
$A_{\text{rank-}r}^{\dagger}\,\mathbf{b}=(I-NN^{\mathsf{H}})\left[\begin{array}{c}\mu N^{\mathsf{H}}\\ A\end{array}\right]^{\dagger}\left[\begin{array}{c}\mathbf{0}\\ \mathbf{b}\end{array}\right].$
Namely $\mathbf{z}_{0}=N^{\mathsf{H}}(\mathbf{x}_{0}-A^{\dagger}\mathbf{b})=N^{\mathsf{H}}\mathbf{x}_{0}$ ...
A
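The formulas in this row describe a least-squares solve through a rank-$r$ truncated SVD: $y_{j}=\mathbf{u}_{j}^{\mathsf{H}}\mathbf{b}/\sigma_{j}$, then $x=V_{r}\mathbf{y}$. A minimal NumPy sketch, with function and variable names of our own choosing:

```python
import numpy as np

def rank_r_pinv_solve(A, b, r):
    """Compute A_rank-r^dagger b via the truncated SVD:
    y_j = u_j^H b / sigma_j for j = 1..r, then x = V_r y."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    y = (U[:, :r].conj().T @ b) / s[:r]   # u_j^H b / sigma_j
    return Vh[:r].conj().T @ y            # x = V_r y

# toy usage on a random complex system
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5)) + 1j * rng.standard_normal((8, 5))
b = rng.standard_normal(8)
x = rank_r_pinv_solve(A, b, r=3)
```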
Online bin packing was recently studied under an extension of the advice complexity model, in which the advice may be untrusted (?). Here, the algorithm’s performance is evaluated only in the extreme cases in which the advice is either error-free or adversarially generated, namely with respect to its consistency and i...
Concerning the application of frequency predictions in competitive online optimization, we note that, perhaps surprisingly, such predictions have not been used widely, despite their simplicity and effectiveness. (?) gave improved competitive ratios for a generalized online matching problem motivated by advertisement sp...
We first present and analyze an algorithm called ProfilePacking, which achieves optimal consistency and is also efficient if the prediction error is relatively small. The algorithm builds on the concept of a profile set, which serves as an approximation of the items that are expected to appear in the sequence, given t...
Last, we show that our algorithms are applicable in other settings. Specifically, we show an application of our algorithms in the context of Virtual Machine (VM) placement in large data centers (?): here, we obtain a more refined competitive analysis in terms of the consolidation ratio, which reflects the maximum n...
We gave the first results on the competitive analysis of online bin packing in a setting in which the algorithm has access to learnable predictions concerning the size frequencies. Our approach exploits the concept of profile packing, which is applicable to more general packing problems, such as two-dimensiona...
A
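The ProfilePacking excerpt builds a profile set that approximates the items expected in the sequence from predicted size frequencies. The sketch below is our own simplified stand-in: it materializes such a profile multiset and packs it with plain first-fit, whereas the actual algorithm in the excerpt is more refined.

```python
def first_fit(items, capacity=1.0):
    """Pack item sizes into bins with first-fit; returns a list of bins."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:
                b.append(size)
                break
        else:                      # no open bin fits: open a new one
            bins.append([size])
    return bins

def profile_multiset(freqs, m):
    """Profile-set approximation: about m * f copies of each item size."""
    return [s for s, f in freqs.items() for _ in range(round(m * f))]

# toy usage: predicted frequencies of three item sizes, window of m = 10
profile = profile_multiset({0.5: 0.4, 0.3: 0.4, 0.2: 0.2}, m=10)
bins = first_fit(sorted(profile, reverse=True))
```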
In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar...
In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar...
Recently proposed object representations address this pitfall of point clouds by modeling object surfaces with polygonal meshes (Wang et al., 2018; Groueix et al., 2018; Yang et al., 2018b; Spurek et al., 2020a, b). They define a mesh as a set of vertices that are joined by edges into triangles. These triangles create...
In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Thro...
Finally, we empirically show that the proposed framework produces high-fidelity and watertight meshes, which means that it solves the initial problem of disjoint patches occurring in the original AtlasNet (Groueix et al., 2018). To evaluate the continuity of output surfaces, we propose to use the following metric.
D
$O\left(\frac{n^{2}}{\varepsilon}\sqrt{n\ln n}\,\max_{i,j}C_{ij}^{2}\,\chi\right).$
We comment on the complexity of the DMP algorithm compared to the existing state-of-the-art methods: the iterative Bregman projections (IBP) algorithm, its accelerated versions, and the primal-dual algorithm (ADCWB); see Table 1. All of these methods use entropic regularization of the Wasserstein metric with parameter $\gamma$...
We demonstrate the performance of the DMP algorithm on different network architectures with different condition number $\chi$: the complete graph, the star graph, the cycle graph, and Erdős–Rényi random graphs with edge-creation probability $p=0.5$ and $p=0.4$ under...
parameter $\gamma$ to solve the WB problem. We ran the IBP and the ADCWB algorithms with different values of the regularization parameter $\gamma$, starting from $\gamma=0.1$ and gradually decreasing its value to $\gamma=10^{-4}$...
Finally, we show how the proposed method can be applied to the prominent problem of computing Wasserstein barycenters, to tackle the instability of regularization-based approaches under small values of the regularization parameter. The idea is based on the saddle-point reformulation of the Wasserstein barycenter probl...
A
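The IBP algorithm referenced in this row computes entropically regularized barycenters with parameter $\gamma$. Below is a minimal NumPy sketch of the standard IBP iteration under toy assumptions about the cost matrix and weights; it also exhibits the small-$\gamma$ instability mentioned above, since the kernel $K=\exp(-C/\gamma)$ underflows.

```python
import numpy as np

def ibp_barycenter(C, Q, w, gamma=0.1, iters=500):
    """Entropic Wasserstein barycenter via iterative Bregman projections.
    C: (n, n) cost matrix, Q: (m, n) input histograms, w: (m,) weights."""
    K = np.exp(-C / gamma)              # Gibbs kernel; underflows for tiny gamma
    u = np.ones_like(Q)
    for _ in range(iters):
        v = Q / (K.T @ u.T).T           # project onto the fixed marginals q_k
        Kv = (K @ v.T).T
        b = np.prod(Kv ** w[:, None], axis=0)  # geometric mean = barycenter
        u = b[None, :] / Kv             # project onto the common barycenter marginal
    return b / b.sum()

# toy usage: barycenter of two Gaussian-like histograms on a 1D grid
x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2
Q = np.stack([np.exp(-(x - 0.2) ** 2 / 0.01), np.exp(-(x - 0.8) ** 2 / 0.01)])
Q /= Q.sum(axis=1, keepdims=True)
bary = ibp_barycenter(C, Q, w=np.array([0.5, 0.5]))
```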
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class.
The remainder of this section is dedicated to expressing the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describing an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric...
where $\hat{L}=\hat{D}^{t}\hat{D}$ is the lower-right $(|V|-1)\times(|V|-1)$ submatrix of the ...
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
B
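The matrix $\hat{L}=\hat{D}^{t}\hat{D}$ above is the reduced Laplacian, which is exactly the matrix appearing in Kirchhoff's matrix-tree theorem: its determinant counts spanning trees. A small NumPy sketch on a toy graph of our own choosing:

```python
import numpy as np

def reduced_laplacian(n, edges):
    """L_hat = D_hat^t D_hat, where D_hat is the incidence matrix
    with one vertex's column removed (arbitrary edge orientation)."""
    D = np.zeros((len(edges), n))
    for e, (i, j) in enumerate(edges):
        D[e, i], D[e, j] = 1.0, -1.0
    D_hat = D[:, 1:]                 # drop vertex 0's column
    return D_hat.T @ D_hat

# Kirchhoff: det(L_hat) = number of spanning trees; a 4-cycle has 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(round(np.linalg.det(reduced_laplacian(4, edges))))  # -> 4
```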
$N=N(b,k,m,\ell)$ such that for every $n\geq N$ and any group homomorphism $h:C_{k}(G[n]^{m})\to(\mathbb{Z}_{2})^{b}$...
(large 0–1 matrix example; its row/column layout was lost in extraction)
In this respect, the case of convex lattice sets, that is, sets of the form $C\cap\mathbb{Z}^{d}$ where $C$ is a convex set in $\mathbb{R}^{d}$...
Two central problems in this line of research are to identify the weakest possible assumptions under which the classical theorems generalize, and to determine their key parameters, for instance the Helly number ($d+1$ for convex sets in $\mathbb{R}^{d}$...
In this paper we are concerned with generalizations of Helly’s theorem that allow for more flexible intersection patterns and relax the convexity assumption. A famous example is the celebrated $(p,q)$-theorem [3], which asserts that for a finite family of convex sets in $\mathbb{R}^{d}$...
A
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train the XGBoost [29] on the original pool of features, and then divide the data space into four groups automati...
Similar to the workflow described above, we start by choosing the appropriate thresholds for slicing the data space. As we want to concentrate more on the instances that are close to being predicted correctly, we move the left gray line from 25% to 35% (see Fig. 5(a.1 and a.2)). This makes the Bad slice much shorter. S...
In our running example, recognizing that the instances are uniformly distributed from Fig. 3(a.1–a.3), we use the default splitting with 25% predictive probability intervals. It is essential to focus on the aforementioned slices (especially (a.1) and (a.2)) without ruining the prediction for the correctly classified in...
We began our investigation by examining the distribution of instances in the explorable subspaces. We noticed that most instances are correctly classified with more than 75% predicted probability (i.e., high confidence), as shown in Fig. 7(a.4). The invited ML expert found the 25% predicted probability intervals a cons...
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train the XGBoost [29] on the original pool of features, and then divide the data space into four groups automati...
B
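The FeatureEnVi excerpts describe sorting instances by the predicted probability of their ground-truth class and slicing the data space at fixed thresholds (25% intervals by default). A minimal sketch assuming a scikit-learn-style predict_proba output; the slice names other than "bad" (the excerpt's Bad slice) are our own placeholders:

```python
import numpy as np

def slice_by_confidence(proba, y_true, cuts=(0.25, 0.50, 0.75)):
    """Sort instances by P(ground-truth class) and split into four slices."""
    p_true = proba[np.arange(len(y_true)), y_true]
    names = ("bad", "low", "mid", "high")
    slices = {name: [] for name in names}
    for idx in np.argsort(p_true):                       # sorted, as in the UI
        bucket = int(np.searchsorted(cuts, p_true[idx], side="right"))
        slices[names[bucket]].append(idx)
    return slices

# toy usage with a pre-trained classifier clf (e.g., XGBoost):
#   slices = slice_by_confidence(clf.predict_proba(X), y)
```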
For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using a Latin hypercube design of experiments. The BO progress is shown in Figure 5, right panel, for the optimization with constraints on the jerk and on the tracking error. Af...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi...
This paper demonstrated a hierarchical contour control implementation for increasing productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low-level controller. This control framework requires tuning multiple parameters associated with an extensive numbe...
In machining, positioning systems need to be fast and precise to guarantee high productivity and quality. Such performance can be achieved by a model predictive control (MPC) approach tailored for tracking a 2D contour [1, 2], which however requires precise tuning and good computational abilities of the associated hardware. ...
MPC accounts for the real behavior of the machine, and the axis drive dynamics can be excited to compensate for the contour error to a large extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following variou...
B
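The BO initialization above draws 20 Latin-hypercube samples over the MPC parameter ranges. A minimal sketch with scipy.stats.qmc; the dimensionality and bounds below are toy assumptions of ours:

```python
from scipy.stats import qmc

# 20 space-filling initial samples over three hypothetical tuning parameters
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=20)                       # points in [0, 1)^3
lower, upper = [0.1, 1.0, 0.01], [10.0, 100.0, 1.0]
init_params = qmc.scale(unit, lower, upper)       # shape (20, 3)
```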
For example, in systems trained to infer hair color on the CelebA dataset [43], the majority group of non-blond males occurs 50 times more often than the minority group of blond males, resulting in systems incorrectly predicting non-blond as hair color for the minority group.
Methods are typically highly sensitive to hyperparameter choices, and papers report numbers on systems in which the hyperparameters were tuned using the test set distribution [18, 50, 64]. In the real world, biases may stem from multiple factors and may change in different environments, making this setup unrealistic. ...
Deep learning systems are trained to minimize their loss on a training dataset. However, datasets often contain spurious correlations and hidden biases which result in systems that have low loss on the training data distribution, but then fail to work appropriately on minority groups because they exploit and even ampli...
While this is a toy problem, in the real world, hidden minority patterns are common and failing on them can have dire consequences. Systems designed to aid human resources, help with medical diagnosis, determine probation, or loan qualification could be biased against minority groups based on age, gender, religion, sex...
In this set of experiments, we compare the resistance to explicit and implicit biases. We primarily focus on the Biased MNISTv1 dataset, reserving each individual variable as the explicit bias in separate runs of the explicit methods, while treating the remaining variables as implicit biases. To ease analysis, we compu...
C
The first two types of methods estimate gaze based on geometric features such as contours, reflection and eye corners. The geometric features can be accurately extracted with the assistance of dedicated devices, e.g., infrared cameras. More concretely, the 2D eye feature regression method learns a mapping function from...
The eye model is fitted with geometric features, such as the infrared corneal reflections [28, 29], pupil center [30] and iris contours [31]. However, these methods usually require a personal calibration process for each subject, since the eye model contains subject-specific parameters such as the cornea radius and kappa angles.
The first two types of methods estimate gaze based on geometric features such as contours, reflection and eye corners. The geometric features can be accurately extracted with the assistance of dedicated devices, e.g., infrared cameras. More concretely, the 2D eye feature regression method learns a mapping function from...
The 3D eye model recovery-based methods usually require personal calibration to recover person-specific parameters such as iris radius and kappa angle. While these methods often achieve high accuracy, they require dedicated devices such as infrared cameras.
It is non-trivial to learn an accurate and universal gaze estimation model. Conventional 3D eye model recovery methods usually build a unified gaze model including subject-specific parameters such as eyeball radius [28]. They perform a personal calibration to estimate these subject-specific parameters. In the field of...
A
The obtained high accuracy compared to other face recognizers is achieved due to the best features extracted from the last convolutional layers of the pre-trained models, and the high efficiency of the proposed BoF paradigm, which gives a lightweight model with more discriminative power compared to a classical CNN with softmax f...
The obtained high accuracy compared to other face recognizers is achieved due to the best features extracted from the last convolutional layers of the pre-trained models, and the high efficiency of the proposed BoF paradigm, which gives a lightweight model with more discriminative power compared to a classical CNN with softmax f...
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieves a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner: a TL technique is applied to fine-tune the ...
The efficiency of each pre-trained model depends on its architecture and the abstraction level of the extracted features. When dealing with real masked faces, VGG-16 has achieved the best recognition rate, while ResNet-50 outperformed both VGG-16 and AlexNet on the simulated masked faces. This behavior can be explaine...
The comparison of the computation times between the proposed method and Almabdy et al.’s method almabdy2019deep shows that the use of the BoF paradigm decreases the time required to extract deep features and to classify the masked faces (see Table 4). Note that this comparison is performed using the same pre-trained ...
C
Note: this is an extended version of an eponymous paper that appeared in FSCD 2022; it includes further examples (Examples 1, 1, and 1), a more straightforward presentation of the metatheory (Section 4) based on Kripke logical relations [Plo73], and a representative set of the corresponding proofs (Sections 3 and 4).
Adding (co)inductive types and terminating recursion (including productive corecursive definitions) to any programming language is a non-trivial task, since only certain recursive programs constitute valid applications of (co)induction principles. Briefly, inductive calls must occur on data smaller than the input and, ...
Sized types are a type-oriented formulation of size-change termination [LJBA01] for rewrite systems [TG03, BR09]. Sized (co)inductive types [BFG+04, Bla04, Abe08, AP16] gave way to sized mixed inductive-coinductive types [Abe12, AP16]. In parallel, linear size arithmetic for sized inductive types [CK01, Xi01, BR06] was...
Moreover, some prior work, which is based on sequential functional languages, encodes recursion via various fixed-point combinators that make both mixed inductive-coinductive programming [Bas18] and substructural typing difficult, the latter requiring the use of the ! modality [Wad12]. Thus, like $F_{\omega}^{\mathrm{cop}}$...
One solution that avoids syntactic checks is to track the flow of (co)data size at the type level with sized types, as pioneered by Hughes et al. [HPS96] and further developed by others [BFG+04, Bla04, Abe08, AP16]. Inductive and coinductive types are indexed by the height and observable depth of their data and codata...
A
For another, since $M>T>L$ and $\delta>1$, it is intuitive from Table II that in FairCMS-II the cloud incurs higher computing and storage costs than in FairCMS-I. In fact, the cloud-side communication cost also increases in FairCMS-II, as the un...
In FairCMS-II, the cloud’s cost increases by $O(KMT\alpha+KM\beta)$, and the cost to the user changes to $O((M+L+2)\alpha+M\gamma)$...
Finally, we conduct a comparative experiment to evaluate the proposed schemes against their relevant existing counterparts, and the results are displayed in Fig. 15. For FairCMS-I and FairCMS-II, we measure the time overhead of Part 2 as it is executed once for each user. For the other schemes, we evaluate their prima...
Finally, we analyze the necessary efforts required from the user to obtain the protected content. As can be seen from Table II, compared with the AFP scheme, the computational cost and communication cost of the user in FairCMS-I only increase by the constant terms $O(2\alpha)$ and $O(2)$, re...
The communication cost is then analyzed. In FairCMS-I, the owner sends the LUT-encrypted media content $\mathbf{c}$ to the cloud with a communication cost of $O(M)$. The user receives the public-key-encrypted D-LUT and the LUT-encrypted media content sent from the cloud, which costs...
C
Though based on graph spectral theory Bruna et al. (2013), the learning process of graph convolutional networks (GCN) Kipf and Welling (2017) can also be considered a mean-pooling neighborhood aggregation. GraphSAGE Hamilton et al. (2017) concatenates the node features and introduces three...
At their core, GNNs learn node embeddings by iteratively aggregating features from the neighboring nodes, layer by layer. This allows them to explicitly encode high-order relationships between nodes in the embeddings. GNNs have shown great potential for modeling high-order feature interactions for click-through rate pr...
Graph Neural Networks (GNNs) Kipf and Welling (2017); Hamilton et al. (2017); Veličković et al. (2018) have recently emerged as an effective class of models for capturing high-order relationships between nodes in a graph, and have achieved state-of-the-art results on a variety of tasks such as computer vision...
Due to the strength in modeling relations on graph-structured data, GNN has been widely applied to various applications like neural machine translation Beck et al. (2018), semantic segmentation Qi et al. (2017), image classification Marino et al. (2017), situation recognition Li et al. (2017), recommendation Wu et al. ...
To capture the diversified polysemy of feature interactions in different semantic subspaces Li et al. (2020) and also stabilize the learning process Vaswani et al. (2017); Veličković et al. (2018), we extend our mechanism to employ multi-head attention.
C
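Several of the excerpts above describe GCN-style learning as mean-pooling neighborhood aggregation. A minimal NumPy sketch of one such layer; the self-loops and ReLU are our simplifications:

```python
import numpy as np

def mean_aggregate(H, A):
    """Each node averages its neighbors' features (self-loop included).
    H: (n, d) node features, A: (n, n) 0/1 adjacency matrix."""
    A_hat = A + np.eye(len(A))
    return (A_hat @ H) / A_hat.sum(axis=1, keepdims=True)

def gcn_layer(H, A, W):
    """One simplified GCN-style layer: aggregate, project, ReLU."""
    return np.maximum(mean_aggregate(H, A) @ W, 0.0)
```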
which is reminiscent of the $\mathcal{O}(L_{f}^{\mathcal{X}}D^{2}/t)$...
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is...
We can make use of the proof of convergence in primal gap to prove linear convergence in the Frank-Wolfe gap. In order to do so, we recall a quantity formally defined in Kerdreux et al. [2019] but already implicitly used earlier in Lacoste-Julien & Jaggi [2015]:
Moreover, as the upper bound on the Bregman divergence holds for $\nu=2$ regardless of the value of $d_{2}(\mathbf{x},\mathbf{y})$, we can modify the proof of Theorem 2.4 to obtain a convergence rate of the form:...
For AFW, we can see that the algorithm either chooses to perform what is known as a Frank-Wolfe step in Line 7 of Algorithm 5, if the Frank-Wolfe gap $g(\mathbf{x})$ is greater than the away gap $\langle\nabla f(\mathbf{x}_{t}),\mathbf{a}_{t}-\mathbf{x}_{t}\rangle$...
A
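A minimal sketch of the vanilla Frank-Wolfe loop that tracks the minimum Frank-Wolfe gap over the run, as discussed above. The classic $2/(t+2)$ step size and the $\ell_1$-ball toy problem are our assumptions, not necessarily the step size of the excerpt's theorems:

```python
import numpy as np

def frank_wolfe(grad_f, lmo, x0, iters=200):
    """Vanilla Frank-Wolfe; returns the final iterate and the min FW gap."""
    x, min_gap = x0, np.inf
    for t in range(iters):
        g = grad_f(x)
        v = lmo(g)                           # linear minimization oracle
        min_gap = min(min_gap, g @ (x - v))  # Frank-Wolfe gap at x_t
        x = x + 2.0 / (t + 2.0) * (v - x)
    return x, min_gap

# toy problem: min ||x - b||^2 over the unit l1-ball
b = np.array([0.6, -0.2, 0.1])
lmo = lambda g: -np.sign(g) * (np.arange(g.size) == np.argmax(np.abs(g)))
x, gap = frank_wolfe(lambda x: 2 * (x - b), lmo, np.zeros(3))
```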
Table 1: A summary of the running times in several different models, compared to the previous state of the art, for computing a $(1+\varepsilon)$-approximate maximum matching. In the distributed setting, “running time” refers to the round complexity, while in the streaming setting it refers to th...
Our DFS search approach guarantees that we find a $\operatorname{poly}\varepsilon$ fraction of all possible augmentations, giving rise to an algorithm that in $\operatorname{poly}(1/\varepsilon)$ passes finds a $(1+\varepsilon)$-approx...
Given a graph on $n$ vertices, there is a deterministic $(1+\varepsilon)$-approximation algorithm for maximum matching that runs in $\operatorname{poly}(1/\varepsilon)$ passes in the semi-streaming model.
In the special case of bipartite graphs, the deterministic algorithms by Ahn and Guha [AG11], Eggert et al. [EKMS12], as well as Assadi et al. [AJJ+22] obtain a runtime of $\operatorname{poly}(1/\varepsilon)$ passes. The first algorithm can also be adapted to the case of genera...
Let $G$ be a graph on $n$ vertices and let $\varepsilon\in(0,1/2)$ be a parameter. Let $A_{\mathrm{matching}}$ be an algorithm that finds an $O(1)$-approximate max...
B
It is worth noting that for both CPP and B-CPP, the choices $b=2$ for quantization or $k=5$ for Rand-$k$ are more communication-efficient than $b=4,6$ or $k=10,20$. This indicates that as the compression accuracy becomes smaller, its impa...
In this paper, we consider decentralized optimization over general directed networks and propose a novel Compressed Push-Pull method (CPP) that combines Push-Pull/$\mathcal{AB}$ with a general class of unbiased compression operators. CPP enjoys great flexibility in both the com...
We propose CPP – a novel decentralized optimization method with communication compression. The method works under a general class of compression operators and is shown to achieve linear convergence for strongly convex and smooth objective functions over general directed graphs. To the best of our knowledge, CPP is the...
We consider an asynchronous broadcast version of CPP (B-CPP). B-CPP further reduces the communicated data per iteration and is also provably linearly convergent over directed graphs for minimizing strongly convex and smooth objective functions. Numerical experiments demonstrate the advantages of B-CPP in saving commun...
In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP...
D
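A minimal sketch of the unbiased Rand-$k$ compressor that CPP/B-CPP are evaluated with above: keep $k$ of $d$ coordinates at random and rescale by $d/k$ so the compression is unbiased in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(x, k):
    """Unbiased Rand-k compression: k random coordinates, scaled by d/k."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)   # rescaling makes E[rand_k(x)] = x
    return out

# sanity check: averaging many compressions approaches x
x = np.arange(1.0, 6.0)
approx = np.mean([rand_k(x, k=2) for _ in range(20000)], axis=0)
```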
Data and model. We consider the benchmark of image classification on the CIFAR-10 [46] dataset. It contains 50,000 and 10,000 images in the training and validation sets, respectively, equally distributed over 10 classes. To emulate the distributed scenario, we partition the ...
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low...
Setting. To train ResNet18 on CIFAR-10, one can use stochastic gradient descent with momentum $0.9$, a learning rate of $0.1$ and a batch size of $128$ ($40$ batches $=$ $1$ epoch). This is one of the default learning settings. Based on these settings, we build our settings using the intuitio...
Remark. In order for the comparison of Algorithm 1 and Algorithm 3 to be fair, it is necessary to balance the number of communications and local iterations for both algorithms, which is why we take $T=\frac{1}{p}=\frac{1}{\eta_{\text{opt}}\lambda}=\frac{10}{\lambda}$...
Discussion. We compare algorithms based on the balance of the local and global models, i.e., whether the algorithm is able to train both the local and global models well; if so, we say the algorithm finds the FL balance. The results show that the Local SGD technique (Algorithm 3) outperformed Algorithm 1 only with a fairly fre...
B
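The "Setting" excerpt pins down concrete hyperparameters (SGD, momentum 0.9, learning rate 0.1, batch size 128). A minimal PyTorch sketch of that default configuration; everything beyond those numbers (transforms, epochs, schedules) is our simplification:

```python
import torch
import torchvision
from torchvision import transforms

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

for x, y in loader:        # one pass; epoch loop and LR schedule omitted
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```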
We propose that (C)CEs are good candidates as meta-solvers (MSs). They are more tractable than NEs and can enable coordination to maximize payoff between cooperative agents. In particular, we propose three flavours of equilibrium MSs. First, greedy (such as MW(C)CE), which select the highest-payoff equilibria and attempt...
Measuring convergence to NE (NE Gap, Lanctot et al. (2017)) is suitable in two-player, constant-sum games. However, it is not rich enough in cooperative settings. We propose to measure convergence to (C)CE ((C)CE Gap in Section E.4) in the full extensive-form game. A gap $\Delta$ of zero implies convergence t...
A (C)CE MS provides a distribution that is in equilibrium over the set of joint policies found so far, $\Pi^{0:t}$. For the algorithm to have converged, it needs to also be in equilibrium over the set of all possible joint policies, $\Pi^{*}$...
This highlights the main drawback of MW(C)CE which does not select for unique solutions (for example, in constant-sum games all solutions have maximum welfare). One selection criterion for NEs is maximum entropy Nash equilibrium (MENE) (Balduzzi et al., 2018), however outside of the two-player constant-sum setting, th...
PSRO has proved to be a formidable learning algorithm in two-player, constant-sum games, and JPSRO, with (C)CE MSs, is showing promising results on n-player, general-sum games. The secret to the success of these methods seems to lie in (C)CEs’ ability to compress the search space of opponent policies to an expressive an...
A
These results extend to the case where the variance (or variance proxy) of each query $q_{i}$ is bounded by a unique value $\sigma_{i}^{2}$...
Since achieving posterior accuracy is relatively straightforward, guaranteeing Bayes stability is the main challenge in leveraging this theorem to achieve distribution accuracy with respect to adaptively chosen queries. The following lemma gives a useful and intuitive characterization of the quantity that the Bayes sta...
The dependence of our PC notion on the actual adaptively chosen queries places it in the so-called fully-adaptive setting (Rogers et al., 2016; Whitehouse et al., 2023), which requires a fairly subtle analysis involving a set of tools and concepts that may be of independent interest. In particular, we establish a seri...
The contribution of this paper is two-fold. In Section 3, we provide a tight measure of the level of overfitting of some query with respect to previous responses. In Sections 4 and 5, we demonstrate a toolkit to utilize this measure, and use it to prove new generalization properties of fundamental noise-addition mecha...
The similarity function serves as a measure of the local sensitivity of the issued queries with respect to the replacement of the two datasets, by quantifying the extent to which they differ from each other with respect to the query $q$. The case of noise-addition mechanisms provides a natural intuitive interp...
C
In fact, we prove a slightly stronger statement. If a graph $G$ can be reduced to a graph $G'$ by iteratively removing $z$-antlers, each of width at most $k$, and the sum of the widths of this sequence of antlers is $t$...
The remainder of the paper is organized as follows. After presenting preliminaries on graphs and sets in Section 2, we prove the mentioned hardness results in Section 3. We present structural properties of antlers and how they combine in Section 4. In Section 5 we show how color coding can be used to find a large feedb...
As described in Section 1, our algorithm aims to identify vertices in antlers using color coding. To allow a relatively small family of colorings to identify an entire antler structure $(C,F)$ with $|C|\leq k$, we need to bound $|F|$ in terms of...
As the first step of our proposed research program into parameter reduction (and thereby, search-space reduction) by a preprocessing phase, we present a graph decomposition for Feedback Vertex Set which can identify vertices $S$ that belong to an optimal solution, and which therefore facilitates a reduction fr...
Our algorithmic results are based on a combination of graph reduction and color coding [6] (more precisely, its derandomization via the notion of universal sets). We use reduction steps inspired by the kernelization algorithms [12, 46] for Feedback Vertex Set to bound the size of $\mathsf{antler}$...
D
Figure 11: In the first (resp., second, third) row, we show two examples from RealHM [60] (resp., HFlickr in iHarmony4 [9], HVIDIT [45]) dataset. From left to right in each example, we show the composite image, the foreground mask, and the ground-truth harmonized image.
Given a composite image, its foreground and background are likely to be captured in different conditions (e.g., weather, season, time of day, camera setting), and thus have distinctive illumination characteristics, which make them look incompatible. Image harmonization aims to adjust the appearance of composite foregr...
In the previous section, we saw that image harmonization methods can adjust the foreground appearance to make it compatible with the background, but they ignore the fact that the inserted object may also have an impact on the background (e.g., reflection, shadow). For example, if background objects cast shadows on the ground but th...
To solve this issue, image blending [172, 198] aims to address the unnatural boundary between foreground and background, so that the foreground could be seamlessly blended with the background. For the second issue, since the foreground and background may be captured in different conditions (e.g., weather, season, time ...
Replacement: A natural way to build an image harmonization dataset is collecting a set of foreground images captured in different illumination conditions, followed by replacing one foreground with another counterpart. For example, the Transient Attributes Database [70] contains 101 sets, in which each set has well-aligned im...
A
Impact of Context Data. By leveraging POI data for region matching, our proposed RegionTrans method achieves lower error rates than fine-tuning in most cases. This finding, coupled with the results presented in Section III-A, underscores the importance of multi-modal data in CityNet and verifies the connection between...
Inter-city correlations. Our results demonstrate that transfer learning leads to error reductions in all source-target pairs, as compared to using target data only. Notably, the largest reduction of approximately 15% is observed in the case of Shenzhen and Chongqing. These findings suggest that there exist sufficient ...
TABLE VII: The results of inter-city transfer learning from source domains (Beijing, Shanghai, and Xi’an) to target domains (Shenzhen, Chongqing, and Chengdu). The lowest RMSE/MAE using limited target data is highlighted in bold. The results under full data and 3-day data represent the lower and upper bounds for the er...
To address this problem, we utilize LSTM as the base model, which is similar to ST-net in MetaST [5], and adopt a multi-task learning approach. We select Beijing and Shanghai as the source cities for transfer learning tasks in cities with large map sizes, and Xi’an as the source city for the transfer learning tasks in ...
Domain Selection. Our experimental results consistently demonstrate that using Beijing as the source city yields the best performance, irrespective of the target city and the choice of algorithms. One possible explanation for this observation is that Beijing comprises the highest number of regions, and therefore exhib...
D
Although ordinary neural networks have the benefit that, even for a large number of features and weights, they can be implemented very efficiently, their Bayesian incarnation suffers from a problem. The nonlinearities in the activation functions and the sheer number of parameters, although they are the features that make...
Both the integral in the inference step (4) and the one in the prediction step (5) can in general not be computed exactly (conjugate priors Fink97acompendium, such as normal distributions, form an important exception). For inference, there are two general classes of approximations available:
In Fig. 1, the coverage degree, average width and $R^{2}$-coefficient are shown. For each model, the data sets are sorted according to increasing $R^{2}$-coefficient (averaged over th...
The general structure of the paper is as follows. In Section 2 some general aspects of the estimation of prediction intervals for regression are discussed. Subsequently, in Section 3, the different classes of methods are reviewed. The setup of an experimental assessment for a selection of methods is presented in Secti...
One of the most popular probabilistic models for regression problems is the Gaussian process williams1996gaussian . The main reason for its popularity is that it is one of the only Bayesian methods where the inference step (4) can be performed exactly, since the marginalization of multivariate normal distributions can...
A
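The Gaussian-process excerpt notes that the inference step can be performed exactly thanks to Gaussian marginalization. A minimal NumPy sketch of the exact GP posterior for regression (RBF kernel; all hyperparameters are toy assumptions):

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    """Squared-exponential kernel between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_predict(X, y, Xs, noise=1e-2):
    """Exact GP posterior mean and pointwise variance at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var

# toy usage: noisy sine data
X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(0).standard_normal(20)
mean, var = gp_predict(X, y, np.linspace(0, 5, 50)[:, None])
```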
In what follows, we use ‘our model (score)’ to indicate the result when MIDI scores are considered and similarly ‘our model (performance)’ for MIDI performances. Since MIDI performance contains velocity information, we do not evaluate on the velocity prediction task for fairness. We note that, while ‘our model (score)’...
For the note-level classification tasks, we use an RNN model as our baseline that consists of three bi-directional long short-term memory (Bi-LSTM) layers, each with 256 neurons, and a feed-forward layer for classification, since such a network has led to state-of-the-art results in many audio-domain music classification...
Table 2: The testing classification accuracy (in %) of different combinations of MIDI token representations and models for four downstream tasks: three-class melody classification, velocity prediction, style classification and emotion classification. “CNN” represents the ResNet50 model used by \textcitelee20ismirLBD, ...
Tab. 2 shows that our model greatly outperforms Bi-LSTM-Attn \parencitelin2017structured and the CNN baseline \parencitelee20ismirLBD by 10–20% regardless of the token representation taken, reaching 81.75% testing accuracy at best for this 8-class classification problem.
Tab. 2 lists the testing accuracy achieved by the baseline models and the proposed ones for four downstream tasks. We see that “our model (score)” outperforms the Bi-LSTM or Bi-LSTM-Attn baselines in all tasks consistently, using either the REMI or CP representation.
D
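The baseline described above (three Bi-LSTM layers of 256 neurons plus a feed-forward classification layer) translates directly into a few lines of PyTorch; the input feature size and class count are our assumptions:

```python
import torch
from torch import nn

class BiLSTMBaseline(nn.Module):
    """Three Bi-LSTM layers (256 units each) + a per-step classification head."""
    def __init__(self, in_dim=128, hidden=256, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=3,
                            bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x: both directions

    def forward(self, x):          # x: (batch, seq_len, in_dim)
        h, _ = self.lstm(x)
        return self.head(h)        # (batch, seq_len, n_classes) logits

# toy usage for three-class note-level melody classification
logits = BiLSTMBaseline()(torch.randn(2, 50, 128))
```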
Let $G$ be a graph on $n$ vertices and $H$ its spanning subgraph. Then $\lambda(\chi(H)-1)+1\leq BBC_{\lambda}(G,H)\leq\lambda(\chi(H)-1)+n-\chi(H)+1$...
An obvious extension would be an analysis for the class of split graphs, i.e. graphs whose vertices can be partitioned into a maximum clique $C$ (of size $\omega(G)=\chi(G)$) and an independent set $I$. A simple application of Theorem 2.18 gi...
The $\lambda$-backbone coloring problem was studied for several classes of graphs, for example split graphs [5], planar graphs [3], complete graphs [6], and for several classes of backbones: matchings and disjoint stars [5], bipartite graphs [6] and forests [3]. For the special case $\lambda=2$ i...
Moreover, it was proved before in [4] that there exists a $2$-approximate algorithm for complete graphs with bipartite backbones and a $3/2$-approximate algorithm for complete graphs with connected bipartite backbones. Both algorithms run in linear time. As a corollary, it was proved that we can compute $BBC$...
Additionally, [16] proved that for comparability graphs we can find a partition of $V(G)$ into at most $k$ sets which induce semi-Hamiltonian subgraphs in the complement of $G$ (i.e. each contains a Hamiltonian path), and from that it follows that $BBC_{2}(K_{n},G)$...
B