| context (string, 250–6.18k chars) | A (string, 250–3.82k chars) | B (string, 250–8.2k chars) | C (string, 250–4.99k chars) | D (string, 250–4.17k chars) | label (4 classes) |
|---|---|---|---|---|---|
...$\frac{\frac{(\cdots)z}{c(c+1)}}{\frac{(a+1-b)z}{c+1}+1-\cdots}\,\frac{\frac{(a+2)(c+1-b)z}{(c+1)(c+2)}}{\frac{(a+2-b)z}{c+2}+1-\cdots}$, i.e., $\frac{F(a,b;c;z)}{F(a+1,b+1;c+1;z)}\equiv\frac{-bz}{\cdots}$... | ...$\Delta x=-\frac{f(x)}{f^{\prime}(x)}\Big/\left(1-\frac{f(x)}{2f^{\prime}(x)}\,\frac{f^{\prime\prime}(x)}{f^{\prime}(x)}\right)$... | This already suffices to implement the standard Newton iteration, i.e., to
approximate (1) by $\Delta x=-f(x)/f^{\prime}(x)$. | to not exist because $R_{n}^{m}$ changes sign over the integration interval.
(i) (14) suggests to split $R_{n}^{m}$... | ...$f(x+\Delta x)\approx f(x)+\Delta x\,f^{\prime}(x)+\frac{(\Delta x)^{2}}{2!}f^{\prime\prime}(x)+\frac{(\Delta x)^{3}}{3!}f^{\prime\prime\prime}(x)\approx 0$. | B |
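The Newton step quoted in this row is easy to make concrete. A minimal Python sketch, assuming an illustrative scalar function (not taken from the excerpts); the Halley-style correction in option A would divide this step by the bracketed factor:

```python
# Hedged sketch of the standard Newton iteration dx = -f(x)/f'(x) from option B;
# f, df, and the starting point are illustrative assumptions.
def newton(f, df, x, iters=20, tol=1e-12):
    """Iterate x <- x + dx with dx = -f(x)/f'(x) until dx is tiny."""
    for _ in range(iters):
        dx = -f(x) / df(x)
        x += dx
        if abs(dx) < tol:
            break
    return x

# Example: root of x^2 - 2 starting from x = 1.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.41421356...
```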
This is achieved by using specific upper and lower triangular transvections to avoid using a discrete logarithm oracle. Building on Lemma 3.2 we construct transvections which are upper triangular matrices.
Here, as per Section 3.1, $\omega$ denotes a primitive element of $\mathbb{F}_{q}$... |
The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left... |
The idea is to eliminate all other entries in the $c$th column, namely to apply elementary row operations to make the entries in rows $i=r+1,\ldots,d$ of column $c$ equal to zero. Specifically, $g$ is multiplied on the left by the transvec... | Let $i\in\{1,\dotsc,d-1\}$. Getting the diagonal entry of $h$ at position $(i,i)$ to $1$ requires the following number of operations. We start by adding the column $i+1$ to column $i$ as in Line 5. We alre... | Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$.
Then the elementary column operations can be used to reduce the other entries in row $r$ to zero. | A |
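The reduction-to-identity idea in this row can be illustrated with a small sketch. This is not the paper's algorithm (which restricts itself to transvections and avoids a discrete logarithm oracle); for simplicity the sketch also uses row swaps and pivot scalings, and all names are assumptions:

```python
# Hedged sketch: row-reduce an invertible d x d matrix over the prime field
# GF(p) to the identity. Eliminations correspond to left-multiplication by
# transvections I + lambda*E_{i,c}; swaps/scalings are a simplification here.
def reduce_to_identity(g, p):
    d = len(g)
    ops = []
    for c in range(d):
        # Find a pivot row with a nonzero entry in column c and move it up.
        r = next(i for i in range(c, d) if g[i][c] % p != 0)
        g[c], g[r] = g[r], g[c]
        inv = pow(g[c][c], -1, p)               # modular inverse (Python >= 3.8)
        g[c] = [(x * inv) % p for x in g[c]]    # make the diagonal entry 1
        ops.append(("scale", c, inv))
        for i in range(d):                      # zero out the rest of column c
            if i != c and g[i][c] % p:
                lam = (-g[i][c]) % p
                g[i] = [(x + lam * y) % p for x, y in zip(g[i], g[c])]
                ops.append(("transvect", i, c, lam))
    return ops

g = [[2, 1], [1, 1]]                 # invertible mod 5
reduce_to_identity(g, 5)
print(g)                             # [[1, 0], [0, 1]]
```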
It then follows from Lemma 1 that $1\leq\alpha_{i}^{F}\leq\alpha$ for all the local eigenvalues. Thus, $\tilde{\Lambda}_{h}^{\triangle}=\tilde{\Lambda}_{h}^{f}$... |
The key to approximating (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That al... | The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis... | Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case, our method is similar to that of [MR3591945], with some differences. First, we consider that T... |
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | C |
Moreover, Alg-A is more stable than the alternatives.
During the iterations of Alg-CM, the coordinates of three corners and two midpoints of a P-stable triangle (see Figure 37) are maintained. These coordinates are computed somehow, and their true values can differ from their values stored in the computer. Alg-CM uses a... | Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), and so does Alg-K.
(By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]),
Alg-A is conceptually simpler. Alg-CM is described as “involved” by its authors, as it contains complicated subroutines for handling many subcases. |
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$’s vertices and (2) searching for the next candidate from a given one is much easier – the code length ratio for this is 1:7 between Alg-A and Alg-CM. |
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | B |
For the evaluation, we developed two kinds of classification models: traditional classifiers with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 4.2... | Single Tweet Model Settings. For the evaluation, we shuffle the 180 selected events and split them into 10 subsets which are used for 10-fold cross-validation (we make sure to include near-balanced folds in our shuffle). We implement the 3 non-neural-network models with Scikit-learn (scikit-learn.org). Furthermore, ne... | Single Tweet Classification Results. The experimental results are shown in Table 2. The best performance is achieved by the CNN+LSTM model with a good accuracy of 81.19%. The non-neural-network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy, and the other two non-neural models are eve... |
For the evaluation, we developed two kinds of classification models: traditional classifiers with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 4.2... |
Rumor Detection Model Settings. For the time series classification model, we only report the best performing classifiers, SVM and Random Forest, here. The parameters of the SVM with RBF kernel are tuned via grid search to $C=3.0$, $\gamma=0.2$. For Random Forest, the number of t... | A |
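The grid search mentioned in option D maps directly onto Scikit-learn. A hedged sketch; only the reported optimum $C=3.0$, $\gamma=0.2$ comes from the text, while the grid, scoring, and the synthetic 27-feature data (echoing the 27 surface-level features) are assumptions:

```python
# Hedged sketch of tuning an RBF-kernel SVM via grid search, as described.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=27, random_state=0)
param_grid = {"C": [0.3, 1.0, 3.0, 10.0], "gamma": [0.02, 0.2, 2.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(X, y)
print(search.best_params_)   # the excerpt reports C=3.0, gamma=0.2
```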
The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile
continuing to optimize long after we have zero training ... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with the exponential loss instead of the squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | We should not rely on plateauing of the training loss, or on the loss (logistic or exp or cross-entropy) evaluated on validation data, as measures to decide when to stop. Instead, we should look at the 0–1 error on the validation dataset. We might improve the validation and test errors even when the decrease ... | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail
part does not affect the bias. The bias is a... | Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set, for which there exists $\mathbf{x}\in\mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}}<0$... | B |
$\mathsf{L}(x^{(i)},y^{(i)})=1\{y^{(i)}=y_{rumor}\}\log(\tilde{y}_{rumor}^{(i)})+1\{y^{(i)}=y_{news}\}\log(\tilde{y}_{news}^{(i)})$... | In the lower part of the pipeline, we extract features from tweets and combine them with the creditscore to construct the feature vector in a time series structure called the Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification.
| The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline,
we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi... |
As observed in (madetecting; ma2015detect), rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in (ma2015detect). W... | The effective cascaded model that engages both low- and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402). The model uses the time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to... | C |
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ... | Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at aspect-level, i.e., time and event type. Thus, we... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | B |
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018],
and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular. | The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018],
and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular. | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains,
from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models,
and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015]. | with Bernoulli and contextual linear Gaussian reward functions [Kaufmann et al., 2012; Garivier and Cappé, 2011; Korda et al., 2013; Agrawal and Goyal, 2013b],
as well as for context-dependent binary rewards modeled with the logistic reward function [Chapelle and Li, 2011; Scott, 2015] (see Appendix A.3). | C |
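The Thompson sampling algorithms cited throughout this row have a compact classical form. A minimal sketch of Beta-Bernoulli Thompson sampling, assuming a toy three-armed problem (the arm means and horizon are illustrative, not from any cited paper):

```python
# Hedged sketch of Beta-Bernoulli Thompson sampling.
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.3, 0.5, 0.7]          # unknown Bernoulli reward means
alpha = np.ones(3)                     # Beta posterior: successes + 1
beta = np.ones(3)                      # Beta posterior: failures + 1

for t in range(1000):
    theta = rng.beta(alpha, beta)      # sample one plausible mean per arm
    a = int(np.argmax(theta))          # play the arm with the best sample
    r = rng.random() < true_means[a]   # Bernoulli reward
    alpha[a] += r
    beta[a] += 1 - r

print(alpha / (alpha + beta))          # posterior mean estimates per arm
```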
Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients.
The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | This very low threshold for now serves to measure very basic movements and to check the validity of the data.
Patients 11 and 14 are the most active, both having a median of more than 50 active intervals per day (corresponding to more than 8 hours of activity). | Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients.
For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | B |
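The activity measure described in this row (10-minute intervals with at least 10 steps) is straightforward to compute from a step log. A hedged pandas sketch; the per-minute input format and column handling are assumptions:

```python
# Hedged sketch: count 10-minute bins with >= 10 steps, per day.
import pandas as pd

def active_intervals_per_day(steps: pd.Series) -> pd.Series:
    """steps: per-minute step counts indexed by a DatetimeIndex."""
    per_interval = steps.resample("10min").sum()   # steps per 10-minute bin
    active = per_interval >= 10                    # the >=10-step threshold
    return active.resample("1D").sum()             # active bins per day

# Example with synthetic data: a flat 5 steps/minute for two days.
idx = pd.date_range("2024-01-01", periods=2 * 24 * 60, freq="min")
steps = pd.Series(5, index=idx)
print(active_intervals_per_day(steps))             # 144 active intervals/day
```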
For related visual tasks such as semantic segmentation, information distributed over convolutional layers at different levels of the hierarchy can aid the preservation of fine spatial details (Hariharan et al., 2015; Long et al., 2015). The prediction of fixation density maps does not require accurate class boundaries ... |
This representation constitutes the input to an Atrous Spatial Pyramid Pooling (ASPP) module (Chen et al., 2018). It utilizes several convolutional layers with different dilation factors in parallel to capture multi-scale image information. Additionally, we incorporated scene content via global average pooling over the... |
Our proposed encoder-decoder model clearly demonstrated competitive performance on two visual saliency prediction datasets. The ASPP module incorporated multi-scale information and global context based on semantic feature representations, which significantly improved the results both qualitatively and quantita... | Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database (Deng et al., 2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks (Donahue et al., 2014). Consequently, DeepGaze I (Kümmerer... | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that result... | A |
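The ASPP idea described in this row (parallel dilated convolutions plus global-average-pooled context) can be sketched compactly in PyTorch. Channel sizes and dilation rates below are assumptions, not the paper's exact configuration:

```python
# Hedged sketch of an ASPP-style block: parallel dilated convs + global context.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch=2048, out_ch=256, rates=(4, 8, 12)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1)] +                      # 1x1 branch
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r)  # dilated 3x3s
             for r in rates])
        self.pool = nn.Sequential(                               # scene context
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1))
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [b(x) for b in self.branches]
        ctx = self.pool(x).expand(-1, -1, h, w)                  # broadcast context
        return self.project(torch.cat(feats + [ctx], dim=1))

y = ASPP()(torch.randn(1, 2048, 20, 20))
print(y.shape)   # torch.Size([1, 256, 20, 20])
```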
Since a marking sequence is just a linear arrangement of the symbols of the input word, computing marking sequences seems to be well tailored to greedy algorithms: until all symbols are marked, we choose an unmarked symbol according to some greedy strategy and mark it. Unfortunately, we can formally show that many nat... | These strategies are – except for the LeftRight strategy – nondeterministic, since there are in general several valid choices of the next symbol to mark. However, we will show poor performance for these strategies independent of the nondeterministic choices (i.e., the approximat... | This proposition points out that even simple words can have only optimal marking sequences that are not block-extending. In terms of greedy strategies, however, Proposition 5.4 only shows a lower bound of roughly 2 for the approximation ratio of any greedy algorithm that employs some block-extending greedy strategy (... |
We call a marking sequence $\sigma$ for a word $\alpha$ block-extending if every symbol that is marked, except the first one, has at least one block-extending occurrence. This definition leads to the general combinatorial question of whether every word has an optimal marking sequence that is block-ext... |
Our strongest positive result about the approximation of the locality number will be derived from the reduction mentioned above (see Section 5.2). However, we shall first investigate in Section 5.1 the approximation performance of several obvious greedy strategies for computing the locality number (with “greedy strategie... | A |
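The greedy scheme described in this row's context cell can be illustrated with one natural strategy; this is my illustration of the general idea, not necessarily one of the specific strategies the excerpts analyze:

```python
# Hedged sketch of a greedy marking strategy: repeatedly mark the letter whose
# marking yields the fewest marked blocks at that stage.
def blocks(word, marked):
    """Number of maximal blocks of marked positions in word."""
    count, prev = 0, False
    for ch in word:
        cur = ch in marked
        count += cur and not prev
        prev = cur
    return count

def greedy_marking(word):
    marked, order, worst = set(), [], 0
    while marked != set(word):
        best = min(set(word) - marked,
                   key=lambda c: blocks(word, marked | {c}))
        marked.add(best)
        order.append(best)
        worst = max(worst, blocks(word, marked))
    return order, worst   # the marking sequence and its marking number

print(greedy_marking("abcba"))   # (['c', 'b', 'a'], 1) for this word
```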
In [175] the authors used a CNN to learn the features, and a PCA-based nearest neighbor search was utilized to estimate the local structure distribution.
Besides demonstrating good results, they argue that it is important for the CNN to incorporate information regarding the tree structure in terms of accuracy. | Convolutional Neural Networks (CNNs), as shown in Fig. 2, consist of a convolutional part where hierarchical feature extraction takes place (low-level features such as edges and corners and high-level features such as parts of objects) and a fully connected part for classification or regression, depending on the nature... | They argue that the learnt features of their model are more robust to pathology, noise and different imaging conditions, because the learning process exploits the characteristics of vessels in all training images.
In [177] the authors employed unsupervised hierarchical feature learning using a two-level ensemble of sp... | In [90] the authors added noise signals from the NSTDB to the MITDB and then used scale-adaptive thresholding WT to remove most of the noise, and a denoising AE to remove the residual noise.
Their experiments indicated that when increasing the number of training data to 1000, the signal-to-noise ratio increases dramatically aft... | Their model consisted of two parallel parts: statistical learning and rule inference.
In statistical learning, the ECGs are preprocessed using bandpass and lowpass filters, then fed to two parallel lead-CNNs, and finally Bayesian fusion is employed to combine the probability outputs. | B |
Notable exceptions are the works of
Oh et al. (2017), Sodhani et al. (2019), Ha & Schmidhuber (2018), Holland et al. (2018), Leibfried et al. (2018) and Azizzadenesheli et al. (2018). Oh et al. (2017) use a model of rewards to augment model-free learning with good results on a number of Atari games. However, this metho... | The structure of the model-based RL algorithm that we employ consists of alternating between learning a model and then using this model to optimize a policy with model-free reinforcement learning. Variants of this basic algorithm have been proposed in a number of prior works, starting from Dyna-Q (Sutton, 1991) to more... | Notable exceptions are the works of
Oh et al. (2017), Sodhani et al. (2019), Ha & Schmidhuber (2018), Holland et al. (2018), Leibfried et al. (2018) and Azizzadenesheli et al. (2018). Oh et al. (2017) use a model of rewards to augment model-free learning with good results on a number of Atari games. However, this metho... | Sodhani et al. (2019) propose learning a model consistent with an RNN policy, which helps to train policies that are more powerful than their model-free baseline.
Ha & Schmidhuber (2018) present a way to compose a variational autoencoder with a recurrent neural network into an architecture | Using models of environments, or informally giving the agent the ability to predict its future, has a fundamental appeal for reinforcement learning. The spectrum of possible applications is vast, including learning policies
from the model (Watter et al., 2015; Finn et al., 2016; Finn & Levine, 2017; Ebert et al., 2017; Haf... | C |
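The alternating scheme in option A (learn a model, then improve the policy with it) goes back to Dyna-Q, which is small enough to sketch in full. A hedged, toy-scale illustration on a 5-state chain; every detail here is illustrative, not from the excerpts:

```python
# Hedged sketch of tabular Dyna-Q: direct RL updates plus planning updates
# replayed from a learned (deterministic, tabular) model.
import random

n_states, n_actions, goal = 5, 2, 4
Q = [[0.0] * n_actions for _ in range(n_states)]
model = {}                                     # (s, a) -> (reward, next state)

def step(s, a):                                # toy chain: action 1 = right
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return (1.0 if s2 == goal else 0.0), s2

for episode in range(50):
    s = 0
    while s != goal:
        if random.random() < 0.1:              # epsilon-greedy exploration
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda b: Q[s][b])
        r, s2 = step(s, a)
        Q[s][a] += 0.1 * (r + 0.9 * max(Q[s2]) - Q[s][a])   # direct RL
        model[(s, a)] = (r, s2)                             # model learning
        for _ in range(10):                                 # planning
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            Q[ps][pa] += 0.1 * (pr + 0.9 * max(Q[ps2]) - Q[ps][pa])
        s = s2

print([round(max(q), 2) for q in Q])           # values grow toward the goal
```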
One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz).
Truong et al. [9] used Short... | For the CNN modules with one and two layers, $x_{i}$ is converted to an image using learnable parameters instead of some static procedure.
The one-layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels). | One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz).
Truong et al. [9] used Short... | Architectures of all $b_{d}$ remained the same, except for the number of output nodes of the last linear layer, which was set to five to correspond to the number of classes of $D$.
An example of the respective outputs of some of the $m$... | Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification.
Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke. | D |
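The one-layer module in option A (one 1D convolution, kernel size 3, 8 channels) is easy to state concretely. A hedged PyTorch sketch; the single input channel, window length, and activation are assumptions:

```python
# Hedged sketch of a one-layer 1D CNN module as described in option A.
import torch
import torch.nn as nn

one_layer = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
    nn.ReLU(),
)

x = torch.randn(4, 1, 256)      # batch of 4 single-channel EEG windows
print(one_layer(x).shape)       # torch.Size([4, 8, 256])
```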
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... | Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... |
The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we ... | In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal... |
The implementation of the energy criterion strategy has proven effective in facilitating autonomous locomotion mode transitions for the Cricket robot when negotiating steps of varying heights. Compared to step negotiation purely in rolling locomotion mode, the proposed strategy demonstrated significant enhancements in... | D |
Our solution uses an algorithm introduced by Boyar et al. [12] which achieves a competitive ratio of 1.5 using $O(\log n)$ bits of advice. We refer to this algorithm as Reserve-Critical in this paper and describe it briefly. See Figure 2 for an illustration. | The algorithm classifies items according to their size. Tiny items have their size in the range $(0,1/3]$, small items in $(1/3,1/2]$, critical items in $(1/2,2/3]$, and large items in $(2/3,1]$. In addition, the algorithm... | bins
include two items of weight 1/2 (except possibly the last one), which gives a total weight of 1 for the bin. Critical bins all include a critical item of weight 1. So, if $w_{\ell}$, $w_{s}$... | Formally, on the arrival of a critical item, the algorithm places it in a critical bin, opening a new one if necessary. Each arriving tiny item $x$ is packed in the first critical bin which has enough space, with the restriction that the tiny items do not exceed a fraction 1/3 in these bins. If this fails, the... |
Intuitively, Rrc works similarly to Reserve-Critical except that it might not open as many critical bins as suggested by the advice. The algorithm is more “conservative” in the sense that it does not keep two thirds of many (critical) bins open for critical items that might never arrive. The smaller the value of $\alpha$... | A |
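The size classification quoted in option A translates directly into code. A small sketch following the text's right-closed intervals:

```python
# Hedged sketch of Reserve-Critical's item size classes from option A.
def classify(size: float) -> str:
    assert 0 < size <= 1
    if size <= 1 / 3:
        return "tiny"        # (0, 1/3]
    if size <= 1 / 2:
        return "small"       # (1/3, 1/2]
    if size <= 2 / 3:
        return "critical"    # (1/2, 2/3]
    return "large"           # (2/3, 1]

print([classify(s) for s in (0.2, 0.4, 0.6, 0.9)])
# ['tiny', 'small', 'critical', 'large']
```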
In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical detail, and we will study how the local and global value of a term is actual... | In Subsection 4.2 we will introduce the time-aware metric used to evaluate the effectiveness of the classifiers in relation to the time taken to make the decision. Finally, Subsection 4.4 describes the different types of experiments carried out and the obtained results.
| This subsection describes how classification is carried out.
However, before we illustrate the overall process, and for the sake of simplicity, we are going to assume there exists a function $gv(w,c)$ to value words in relation to categories, and whose formal defini... | In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical detail, and we will study how the local and global value of a term is actual... | Note that this allows us to compare words across different categories since their values are all normalized in relation to stop words, which should have a similar frequency across all the categories. (Note that we are assuming here that we are working with textual information in which there exist highly frequent ele... | B |
Sparsification methods, which are also called sparse communication methods, select only a few components of the vector for communication with the server or the other workers. The most widely used sparsification compressor adopted in sparse communication methods is top-$s$, where each worker selects $s$... | Each worker computes stochastic gradients locally and communicates with the server or other workers to obtain the aggregated stochastic gradients for updating the model parameter. Recently, more and more large-scale deep learning models, such as large language models (Devlin et al., 2019; Brown et al., 2020; Touvron et... | In existing error-feedback-based sparse communication methods, most are for vanilla DSGD (Aji and Heafield, 2017; Alistarh et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al., 2019).
There has appeared one error-feedback-based sparse communication method for DMSGD, called Deep Gradient Compression (... | Due to the presence of compression error, naively compressing the communicated vectors in DSGD or DMSGD will damage the convergence, especially when the compression ratio is high.
The most representative technique designed to tackle this issue is error feedback (Stich et al., 2018; Karimireddy et al., 2019), also called... | Researchers have proposed two main categories of communication compression methods for reducing communication cost: quantization (Wen et al., 2017; Alistarh et al., 2017; Jiang and Agrawal, 2018) and sparsification (Aji and Heafield, 2017; Alistarh et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al.... | C |
Previous literature has also demonstrated the increased biological plausibility of sparseness in artificial neural networks [24].
Spike-like sparsity on activation maps has been thoroughly researched in the more biologically plausible rate-based network models [25], but it has not been thoroughly explored as a design o... | The increased number of weights and non-zero activations makes DNNs more complex, and thus more difficult to use in problems that require corresponding causality of the output with a specific set of neurons.
The majority of domains where machine learning is applied, including critical areas such as healthcare [26], requ... | Using backpropagation [2], the gradient of each weight w.r.t. the error of the output is efficiently calculated and passed to an optimization function such as Stochastic Gradient Descent or Adam [3], which updates the weights, making the output of the network converge to the desired output.
DNNs were successful in utilizi... | Previous work by Blier et al. [31] demonstrated the ability of DNNs to losslessly compress the input data and the weights, but without considering the number of non-zero activations.
In this work we relax the lossless requirement and also consider neural networks purely as function approximators instead of probabilist... | $\varphi$ could be seen as an alternative formalization of Occam’s razor [38] to Solomonoff’s theory of inductive inference [39], but with a deterministic interpretation instead of a probabilistic one.
The cost of the description of the data could be seen as proportional to the number of weights and the number o... | A |
Game theory provides an efficient tool for cooperation through resource allocation and sharing [20, 21]. A computation offloading game has been designed in order to balance the UAV’s tradeoff between execution time and energy consumption [25]. A sub-modular game is adopted in the scheduling of beaconing periods fo... | Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit... | In the literature, most works search for a PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, there are limitations to this algorithm. In BLLA, each UAV can calculate and predict its utility for any $s_{i}\in S_{i}$... |
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on the current game state, and then another UAV ch... |
Compared with other algorithms, the novel algorithm SPBLLA has more advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31, 29], such as the stochastic learning algorithm [30]. The most widely seen algorithm, LLA, is an ideal method for NE approachin... | D |
...$=2\left[\overline{dV}^{T}*\left\{\overline{\mathbf{P}_{1}}\cdot\left(\widehat{\overline{\nabla}}\left(\widehat{\mu}\,\widehat{r}^{2}\,(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{P}_{1}})\right)\right)\right\}+\widehat{dV}^{T}*\left\{\widehat{\mu}\,\widehat{r}^{2}\,(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{P}_{1}})^{2}\right\}\right]$ | ...$\widehat{\mu}\,\widehat{r}^{2}\,\big(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{P}_{3}}\big)^{2}\big\}\Big]+\Big[\overline{dV}^{T}*\big\{\overline{\mathbf{P}_{3}}\cdot\big(\widehat{\overline{\nabla}}\cdots$ | ...$\big(\overline{\widehat{\nabla}}\,\overline{\omega}\big)^{2}=\widehat{\overline{W}}*\Big[\widehat{\mu}\,\big\{2\big(\overline{\widehat{Dr}}*\overline{v}_{r}\big)\cdots$ | ...$\widehat{\mu}\,\widehat{r}^{2}\,\big(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{P}_{1}}\big)^{2}\big\}\Big]=2\Big[\overline{dV}^{T}*\big\{\overline{\mathbf{P}_{1}}\cdot\big(\cdots$ | ...$\widehat{\mu}\,\widehat{r}^{2}\,\big(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{P}_{2}}\big)^{2}\big\}\Big]+2\Big[\overline{dV}^{T}*\big\{\overline{\mathbf{P}_{2}}\cdot\big(\cdots$ | C |
Let $r$ be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12.
Its abstract lattice $\mathcal{L}_{r}$ is represented to the right. | For convenience we give in Table 7 the list of all possible realities
along with the abstract tuples which will be interpreted as counter-examples to $A\rightarrow B$ or $B\rightarrow A$. | The tuples $t_{1}$, $t_{4}$ represent a counter-example to $BC\rightarrow A$ for $g_{1}$... | First, remark that both $A\rightarrow B$ and $B\rightarrow A$ are possible.
Indeed, if we set $g=\langle b,a\rangle$ or $g=\langle a,1\rangle$, then $r\models_{g}A\rightarrow$... | If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use
$\leq,\wedge,\vee$ instead of $\leq_{R},\wedge_{R},\vee_{R}$. | A |
Figure 6 shows the loss metrics of the three algorithms in the CARTPOLE environment; this implies that the Dropout-DQN methods introduce more accurate gradient estimation of policies through iterations of different learning trials than DQN. The rate of convergence of one of the Dropout-DQN methods has done more iterations t... | In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e... | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on the variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout, because in such an environment the optim... | In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene... |
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation between the variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and aft... | A |
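The Dropout-DQN variants this row discusses amount to inserting Dropout layers into the Q-network. A hedged PyTorch sketch; the layer sizes, dropout rate, and CartPole dimensions are assumptions, not the papers' exact architecture:

```python
# Hedged sketch of a Q-network with Dropout, in the spirit of Dropout-DQN.
import torch
import torch.nn as nn

class DropoutQNet(nn.Module):
    def __init__(self, obs_dim=4, n_actions=2, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, n_actions),      # one Q-value per action
        )

    def forward(self, obs):
        return self.net(obs)

q = DropoutQNet()
print(q(torch.randn(1, 4)))    # Q-values for CartPole's two actions
```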
...Dice coefficient, $\mathrm{Dice}(\mathcal{A},\mathcal{B})=\frac{2\left|\mathcal{A}\cap\mathcal{B}\right|}{\left|\mathcal{A}\right|+\left|\mathcal{B}\right|}$, and, | where $\bm{\theta}_{s}$ and $\bm{\theta}_{a}$ denote the parameters of the segmentation and adversarial model, respectively. $l_{bce}$... | The quantitative evaluation of segmentation models can be performed using pixel-wise and overlap-based measures. For binary segmentation, pixel-wise measures involve the construction of a confusion matrix to calculate the number of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) pix... |
Figure 14: A $5\times 5$ overlap scenario with (a) the ground truth, (b) the predicted binary masks, and (c) the overlap. In (a) and (b), black and white pixels denote the foreground and the background, respectively. In (c), green, grey, blue, and red pixels denote TP, TN, FP, and FN pixels, respectively. |
Figure 13: Comparison of cross-entropy and Dice losses for segmenting small and large objects. The red pixels show the ground truth and the predicted foregrounds in the left and right columns, respectively. The striped and the pink pixels indicate false negatives and false positives, respectively. For the top row (i.e., ... | C |
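The Dice coefficient defined in this row's context cell is a one-liner over binary masks. A minimal numpy sketch with an illustrative 5×5 example echoing the figure:

```python
# Minimal sketch of Dice(A, B) = 2|A ∩ B| / (|A| + |B|) for boolean masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

gt = np.zeros((5, 5), bool); gt[1:4, 1:4] = True       # 3x3 foreground
pred = np.zeros((5, 5), bool); pred[2:5, 2:5] = True   # shifted prediction
print(dice(gt, pred))   # 4 overlapping pixels -> 2*4/(9+9) ≈ 0.444
```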
Computing all eigenvectors has a cost $\mathcal{O}(N^{3})$, where $N$ is the number of nodes. However, computing only the eigenvector corresponding to the largest eigenvalue is fast when using the power method [29], whic... | We propose a graph sparsification procedure that reduces the computational cost of MP operations applied after pooling and has a small impact on the representations learned by the GNN.
In particular, we show both analytically and empirically that many edges can be removed without significantly altering the graph struct... | Computing all eigenvectors has a cost $\mathcal{O}(N^{3})$, where $N$ is the number of nodes. However, computing only the eigenvector corresponding to the largest eigenvalue is fast when using the power method [29], whic... | To train the GNN on mini-batches of graphs with a variable number of nodes, we consider the disjoint union of the graphs in each mini-batch and train the GNN on the combined Laplacians and graph signals.
See the supplementary material for an illustration. | We notice that the coarsened graphs are pre-computed before training the GNN.
Therefore, the computational time of graph coarsening is much lower compared to training the GNN for several epochs, since each MP operation in the GNN has a cost $\mathcal{O}(N^{2})$... | D |
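The power method mentioned in the context cell is a few lines of numpy. A hedged sketch; the iteration count and the 2×2 example matrix are illustrative:

```python
# Hedged sketch of the power method: iterate A @ v and normalize to
# approximate the dominant eigenvector.
import numpy as np

def power_method(A, iters=100, seed=0):
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v, v @ A @ v      # eigenvector and Rayleigh-quotient eigenvalue

A = np.array([[2.0, 1.0], [1.0, 3.0]])
vec, val = power_method(A)
print(val)                    # ≈ 3.618, the largest eigenvalue of A
```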
The input data is normalized to $[-1,1]$.
For generating a wide variety of data, the prioritization of the current path $w_{\text{path}}\sim 1+\lvert\mathcal{N}(0,5)\rvert$... | In all our experiments, stochastic gradient descent with Nesterov momentum as the optimizer and cross-entropy loss are used.
The initial learning rate is set to 0.1, momentum to 0.9, and weight decay to 0.0005. The batch size is set to 128 and 512, respectively, for gen... | Figure 6:
Analyzing the influence of training with original data, NRFI data, and combinations of both for different numbers of samples per class. Using only NRFI data ($w_{\text{gen}}=100\%$) achieves better results than using only... | A new random forest is trained every 100 epochs to average the influence of the stochastic process, and the generated data samples are mixed.
In the following, training on generated data will be denoted as NRFI (gen) and training on generated and original data as NRFI (gen+ori). The fraction of NRFI data is se... | fraction of NRFI data $w_{\text{gen}}$ is varied, which weights the loss of the generated data. Accordingly, the weight for the original data is set to $w_{\text{ori}}=1-w_{\text{gen}}$... | C |
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ... | step with $\alpha\rightarrow\infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K=H$ episodes and hence equivalently induces... |
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po... | To answer this question, we propose the first policy optimization algorithm that incorporates exploration in a principled manner. In detail, we develop an Optimistic variant of the PPO algorithm, namely OPPO. Our algorithm is also closely related to NPG and TRPO. At each update, OPPO solves a Kullback-Leibler (KL)-regu... | The policy improvement step defined in (3.2) corresponds to one iteration of NPG (Kakade, 2002), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017). In particular, PPO solves the same KL-regularized policy optimization subproblem as in (3.2) at each iteration, while TRPO solves an equivalent KL-constrained s... | C |
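For a finite action set, the KL-regularized subproblem that options C and D describe has the textbook closed-form solution $\pi_{\text{new}}\propto\pi_{\text{old}}\exp(\alpha Q)$. A hedged numpy sketch of that generic update (not OPPO itself; the numbers are illustrative):

```python
# Hedged sketch of a KL-regularized policy update in closed form:
# argmax_pi <Q, pi> - (1/alpha) * KL(pi || pi_old)  =>  pi ∝ pi_old * exp(alpha*Q).
import numpy as np

def kl_regularized_update(pi_old, Q, alpha):
    logits = np.log(pi_old) + alpha * Q
    pi = np.exp(logits - logits.max())      # numerically stabilized softmax
    return pi / pi.sum()

pi_old = np.array([0.25, 0.25, 0.5])
Q = np.array([1.0, 0.5, 0.2])
print(kl_regularized_update(pi_old, Q, alpha=1.0))
# larger alpha pushes pi further toward the greedy action
```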
Molchanov et al. (2017) exploited this freedom to optimize individual weight dropout rates $w_{\alpha}$ such that weights $w$ can be safely pruned if their dropout rate $w_{\alpha}$... | In the following, we present methods that determine dynamically in the course of forward propagation which structures should be computed or, equivalently, which structures should be pruned.
The intuition behind this idea is to vary the time spent computing predictions based on the difficulty of the given input samp... | They introduce gates that determine how many recursive quantization steps should be performed, which in turn determines the number of bits used.
While the quantization itself is subject to the STE, they propose to train the gate probabilities using the Bayesian variational inference framework. | In this section, we start with the unstructured case, which includes many of the earlier approaches, and continue with structured pruning, which has been the focus of more recent works.
Then we review approaches that relate to Bayesian principles before we discuss approaches that prune structures dynamically during forward... | A weight-magnitude-based decision using trainable threshold parameters determines which operation should be performed, allowing for gradient-based training of both the weight parameters and the architecture.
Again, the STE is employed to backpropagate through the threshold function. | A |
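The straight-through estimator (STE) referenced in option D can be sketched directly: forward applies the hard threshold, backward passes the gradient through unchanged. A hedged PyTorch sketch with an illustrative fixed threshold:

```python
# Hedged sketch of an STE for a hard magnitude threshold.
import torch

class ThresholdSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, tau):
        return (w.abs() > tau).float() * w     # keep weights above threshold

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None                  # identity gradient w.r.t. w

w = torch.randn(5, requires_grad=True)
y = ThresholdSTE.apply(w, 0.5).sum()
y.backward()
print(w.grad)                                  # all ones despite the hard step
```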
In Section 7, we prove a number of results concerning the homotopy types of Vietoris-Rips filtrations of spheres and complex projective spaces. Also, we fully compute the homotopy types of the Vietoris-Rips filtration of spheres with the $\ell^{\infty}$-norm. | Of central interest in topological data analysis has been the question of providing a complete characterization of the Vietoris-Rips persistence barcodes of spheres of different dimensions. Despite the existence of a complete answer to the question for the case of $\mathbb{S}^{1}$... | In Section 8, we reprove Rips and Gromov’s result about the contractibility of the Vietoris-Rips complex of hyperbolic geodesic metric spaces, by using our method consisting of isometric embeddings into injective metric spaces. As a result, we will be able to bound the length of intervals in Vietoris-Rips persistence b... |
In Section 7, we prove a number of results concerning the homotopy types of Vietoris-Rips filtrations of spheres and complex projective spaces. Also, we fully compute the homotopy types of the Vietoris-Rips filtration of spheres with the $\ell^{\infty}$-norm. | The simplicial complex nowadays referred to as the Vietoris-Rips complex was originally introduced by Leopold Vietoris in the early 1900s in order to build a homology theory for metric spaces [79]. Later, Eliyahu Rips and Mikhail Gromov [47] both utilized the Vietoris-Rips complex in their study of hyperbolic groups.
| B |
Figure 9: Results of the comparative study: the top charts show completion time and tool supportiveness (as judged by participants) for all the tasks of the study, and the bottom row includes the histograms of the participants’ responses in all questions/tasks. The completion times between the two groups were very sim... | The goals of the comparative study presented in this paper were to provide initial evidence of the acceptance of t-viSNE by analysts, the consistency of their results when exploring a t-SNE projection using our tool, and the improvement over another state-of-the-art tool.
The tasks of the study were designed to test ho... |
Figure 9: Results of the comparative study: the top charts show completion time and tool supportiveness (as judged by participants) for all the tasks of the study, and the bottom row includes the histograms of the participants’ responses in all questions/tasks. The completion times between the two groups were very sim... | Study Design
Each participant took part individually (i.e., the study was performed asynchronously for each subject, in a silent room), using the same hardware, and the study was organized into four main steps, which were identical for both groups except that each interacted with the corresponding group’s tool (GEP o... | Finally, the goal of Task 6, Interpreting and Assessing Local Topology, was to find and interpret “unusual” patterns in the projection, more specifically formations that are known to happen in this data set because of identical points, i.e., data points which have the same values for all dimensions. This corresponded t... | C |
Does the physical analogue exist?: The inspiration for several bio-inspired algorithms does not strictly follow the rules of a phenomenon. An example is Cat Swarm Optimization, in which cats form a swarm, but in real life they do not seem to cooperate in any way. The authors show more examples (Coyote Optimization Algorith... | In [18, 19], the authors analyze the algorithm called Intelligent Water Drops, providing several proofs that “all main algorithmic components of Intelligent Water Drops are simplifications or special cases of ant colony optimization (ACO)”. They also examine the natural metaphor of “water drops flowing in rivers remov... |
Algorithms under this category are characterized by the fact that they imitate the behavior of physical or chemical phenomena, such as gravitational forces, electromagnetism, electric charges and water movement (in relation to physics-based approaches), and chemical reactions and gas particle movement as for chemis... |
Nature-inspired optimization algorithms or simply variations of metaheuristics? - 2021 [15]: This overview focuses on the study of the frequency of new proposals that are no more than variations of old ones. The authors critique a large set of algorithms based on three criteria: (1) whether there is a physical analogy... | Similar inspiration or duplicate methods?: The authors analyze several classes of bio-inspired algorithms such as those based on gravitational forces, water phenomena, bees, penguins, wolves, and bacteria, and conclude that not all the different variations are real contributions.
| D |
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update ... | (1) By extending the generative graph models to general-type data, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec... | As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method.
Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which is not the case for $k$-means. Therefore,... |
In recent years, GCNs have been studied extensively to extend neural networks to graph-type data. How to design a graph convolution operator is a key issue and has attracted a great deal of attention. Most of them can be classified into 2 categories: spectral methods [24] and spatial methods [25]. | However, the existing methods are limited to graph-type data, while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on graphs, it is reasonable to consider how to employ GCNs to promote the performance of graph-based clustering methods.
In this paper, we propo... | D |
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, which we call the Spoofing Mapper (SMap). We apply SMap to scan ingress filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that ... |
• Consent of the scanned. It is often impossible to request permission from the owners of all the tested networks in advance; this challenge similarly applies to other Internet-wide studies (Lyon, 2009; Durumeric et al., 2013, 2014; Kührer et al., 2014). Like the other studies (Durumeric et al., 2013, 2014), we ... |
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, which we call the Spoofing Mapper (SMap). We apply SMap to scan ingress filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that ... | Limitations of filtering studies. The measurement community has provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 20... |
• Limited coverage. Previous studies infer spoofability based on measurements of a limited set of networks, e.g., those that operate servers with a faulty network stack (Kührer et al., 2014) or networks with volunteers that execute the measurement software (Beverly and Bauer, 2005; Beverly et al., 2009; Mauch, ... | C |
Natural systems need to adapt to a changing world continuously; seasons change, food sources and shelter opportunities vary, cooperation and competition with other animals evolves over time. Moreover, their embodiment also changes over their lifetime. Young animals experience a period of growth where their size increa... |
Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a b... | While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape... | It is common to try to avoid such changes in artificial agents, machines, and industrial processes. When something changes, the entire system is taken offline and modified to fit the new situation. This process is costly and disruptive; adaptation similar to that in nature might make such systems more reliable and long... | An alternative approach is to emulate adaptation in natural sensor systems. The system expects and automatically adapts to sensor drift, and is thus able to maintain its accuracy for a long time. In this manner, the lifetime of sensor systems can be extended without recalibration.
| C |
The values ΔisubscriptΔ𝑖\Delta_{i}roman_Δ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT play an important role in the analysis of the algorithm, and it will be convenient to assume that the ΔisubscriptΔ𝑖\Delta_{i}roman_Δ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT are independent.
However, when the x𝑥xitalic_x-c... | First of all, the ΔisubscriptΔ𝑖\Delta_{i}roman_Δ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT are now independent.
Second, as we will prove next, the expected running time of an algorithm on a uniformly distributed point set can be bounded by the expected running time of that algorithm on a point set generated this ... | In the second step, we therefore describe a method to generate the random point set in a different way, and we show how to relate the expected running times in these two settings.
In the third step, we will explain which changes are made to the algorithm. | In the first step, we will show that long edges are unlikely to be viable.
For the second step, recall the definition of the spacing of $p_{i}$ (in $P$) as $\Delta_{i}=x_{i+1}-x_{i}$... | The proof also gives a way to relate the expected running times of algorithms for any problem on two different kinds of random point sets:
a version where the $x$-coordinates of the points are taken uniformly at random from $[0,n]$, and a version where the differences between two consecut... | B |
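The row above contrasts two ways of generating a random point set on $[0,n]$: drawing the $x$-coordinates uniformly at random, or drawing the consecutive spacings $\Delta_i$ directly as independent random variables. A minimal sketch of both generators; the exponential choice for the spacings and all names are illustrative assumptions, since the excerpt truncates before specifying the distribution:

```python
import numpy as np

def uniform_model(n, rng):
    """n points whose x-coordinates are drawn uniformly at random from [0, n]."""
    return np.sort(rng.uniform(0.0, n, size=n))

def spacing_model(n, rng):
    """Points built from i.i.d. spacings Delta_i (assumed Exp(1) here);
    the cumulative sums give the x-coordinates, and the Delta_i are
    independent by construction."""
    deltas = rng.exponential(1.0, size=n)   # independent Delta_i
    return np.cumsum(deltas)

rng = np.random.default_rng(0)
xs_uniform = uniform_model(10**5, rng)
xs_spacing = spacing_model(10**5, rng)
# In both models the average spacing is close to 1.
print(np.diff(xs_uniform).mean(), np.diff(xs_spacing).mean())
```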
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
| idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then $S\star T$ is an automaton semigroup.
For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups: | The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
| The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem... |
During the research and writing for this paper, the second author was previously affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment... | D |
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende... |
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende... |
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the p... | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... | We test our regularization method on random subsets of varying sizes. Fig. A6 shows the results when we apply our loss to $1$–$100\%$ of the training instances. Clearly, the ability to regularize the model does not vary much with respect to the size of the train subset, with the best performance o... | C |
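A minimal sketch of the regularization idea in the row above: on a random fraction of training instances the ground-truth answer vector is zeroed out, so the model is penalized no matter what it predicts. The binary cross-entropy loss, the answer-vocabulary size, and the fraction parameter are assumptions for illustration, not details from the excerpt:

```python
import torch
import torch.nn.functional as F

def zero_out_loss(logits, targets, frac=0.5):
    """VQA-style BCE loss where a random fraction of instances has its
    ground-truth answer vector zeroed, penalizing every prediction there."""
    mask = torch.rand(targets.size(0), device=targets.device) < frac
    regularized = targets.clone()
    regularized[mask] = 0.0            # zero the answer vector (assumption)
    return F.binary_cross_entropy_with_logits(logits, regularized)

logits = torch.randn(8, 3129)          # 3129: a common VQA answer-vocab size
targets = torch.zeros(8, 3129)
targets[:, 0] = 1.0
print(zero_out_loss(logits, targets).item())
```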
We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page. We trained three random forest models and fine-tuned a transformer-based pretrained language model, namely RoBERTa (Liu et al., 2019). The three random forest mod... | To train the RoBERTa model on the privacy policy classification task, we used the sequence classification head of the pretrained language model from HuggingFace (Wolf et al., 2019). We used the pretrained RoBERTa tokenizer to tokenize text extracted from the documents. Since RoBERTa accepts a maximum of 512 tokens as i... |
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English-language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)... | We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page. We trained three random forest models and fine-tuned a transformer-based pretrained language model, namely RoBERTa (Liu et al., 2019). The three random forest mod... |
For the URL model, the words in the URL path were extracted and the tf-idf of each term was recorded to create the features (Baykan et al., 2009). As privacy policy URLs tend to be shorter and have fewer path segments than typical URLs, length and the number of path segments were added as features. Since the classes w... | D |
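A sketch of the URL feature construction described in the row above: tf-idf over terms in the URL path plus URL length and path-segment count. The tokenization rule and helper names are assumptions; the excerpt only names the feature types:

```python
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

urls = ["https://example.com/about/privacy-policy",
        "https://example.com/blog/2020/01/new-product"]

def path_terms(url):
    """Crude extraction of path terms (illustrative assumption)."""
    path = url.split("/", 3)[-1]
    return " ".join(re.split(r"[/\-_.]+", path.lower()))

tfidf = TfidfVectorizer()
term_feats = tfidf.fit_transform([path_terms(u) for u in urls]).toarray()

# Privacy-policy URLs tend to be short with few path segments,
# so length and segment count are appended as extra features.
extra = np.array([[len(u), u.count("/")] for u in urls])
features = np.hstack([term_feats, extra])
print(features.shape)
```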
Pie charts on top of projections show probability distributions of action classes. Although this work is not similar to StackGenVis in general, we use a gradient color scale to map the performance of each model in the projected space.
EnsembleMatrix [55] linearly fuses multiple models with the help of a confusion matri... | In our VA system, the user can explore how models perform on each class of the data set, and the performance metrics are instilled into a combined user-driven value. Manifold [66] generates pairs of models and compares them over all classes of a data set, including feature selection. We adopt a similar approach, but in... | Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis. (a) presents the selection of appropriate validation metrics for the specification of the data set. (b) aggregates the information after the exploration of different models and shows the active ones which will be used for th... |
To illustrate how to choose different metrics (and with which weights), we start our exploration by selecting the heart disease data set in StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(a). Knowing that the data set is balanced, we pick accuracy (weight... |
Selection of Algorithms and Models. Similar to the workflow described in section 4, we start by setting the most appropriate parameters for the problem (see Figure 6(a)). As the data set is very imbalanced, we emphasize g-mean over accuracy, and ROC AUC over precision and recall. Log loss is disabled because the inves... | A |
We thus have 3 cases, depending on the value of the tuple
$(p(v,[010]),\,p(v,[323]),\,p(v,[313]),\,p(v,[003]))$ ... | $\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}$. | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | $p(v,[013])=p(v,[313])=p(v,[113])=1$.
Similarly, when $f=[112]$, | C |
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation.
Although in the early training stage the performance improves, benefiting from the pre-trained general language model, if the languag... |
To answer RQ1, we compare the changing trends of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initializations at different MAML training epochs and evaluate them directly on the met... | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... | In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | B |
…, $e^{j\frac{2\pi}{\lambda_{\text{c}}}\left(\frac{(M-1)d_{\text{cyl}}}{2}\cos\alpha\sin\beta\right)}\big]^{T}$, … | The CCA-codebook-based SPAS algorithm was proposed in the previous section to solve the joint CCA subarray partition and AWV selection problem. In this section, the TE-aware beam tracking problem is addressed based on the CCA-codebook-based SPAS algorithm.
Tracking the AOAs and AODs is essential for beam tracking, which... |
A CCA-enabled UAV mmWave network is considered in this paper. Here, we first establish the DRE-covered CCA model in Section II-A. Then the system setup of the considered UAV mmWave network is described in Section II-B. Finally, the beam tracking problem for the CCA-enabled UAV mmWave network is modeled in Section II-C. | $\mathcal{F}$ and $\mathcal{W}$ are the sets of all analog beamforming vectors and combining vectors satisfying the hardware constraints, respectively.
In fact, solving the above problem (13) requires the new codebook design and codeword selection/processing strategy. Noting the interdependent... |
The rest of this paper is organized as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Sectio... | D |
The case of $1$-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | To conclude this section, we stress that although the $1$-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges.
The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from | The case of $1$-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | After the merging, the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$.
We perform the... | B |
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... |
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe... | In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
| C |
Though Zhang et al. (2019); Xu et al. (2020b) suggest using a large batch size which may lead to improved performance, we only used a batch size of 25k target tokens (through gradient accumulation of small batches) to fairly compare with previous work Vaswani et al. (2017); Xu et al. (2020a). |
We implemented our approach based on the Neutron implementation of the Transformer Xu and Liu (2019). To show the effects of depth-wise LSTMs on the 6-layer Transformer, we first conducted experiments on the WMT 14 English to German and English to French news translation tasks to compare with the Transformer baseline ... | We used a beam size of 4 for decoding, and evaluated tokenized case-sensitive BLEU with the averaged model of the last 5 checkpoints for the Transformer Base setting and 20 checkpoints for the Transformer Big setting saved at intervals of 1,500 training steps. We also conducted significance ... |
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transform... |
Notably, on the En-De task, the 12-layer Transformer with depth-wise LSTM already outperforms the 24-layer vanilla Transformer, suggesting efficient use of layer parameters. On the Cs-En task, the 12-layer model with depth-wise LSTM performs on a par with the 24-layer baseline. Unlike in the En-De task, increasing dep... | B |
For all $A\in\operatorname{Fin}(\upsigma)$, let $\psi_{A}^{\mathsf{EFO}}$ be the
diagram sentence such that $\llbracket\psi_{A}^{\mathsf{EFO}}\rrbracket_{\operatorname{Struct}(\upsigma)}$... | …$(a^{\prime},y^{\prime})\in V_{1}^{(a^{\prime},y^{\prime})}\subseteq f^{-1}(U)$… | we can write $F=(U^{c}\cap F)\cup(V^{c}\cap F)$
and conclude that $F$ is the disjoint union of two no... | then $\{C\}$ is open in $(\mathcal{C},\uptau_{|C|})$ and therefore
$f^{-1}(\{C\})$ is open in $X$. Sinc... | $F\subseteq U^{c}\cup V^{c}$, but $F\subsetneq U^{c}$
and $F\subsetneq V^{c}$... | D |
Qualitative Comparison: To qualitatively show the performance of different learning representations, we visualize the 3D distortion distribution maps (3D DDM) derived from the ground truth and these two schemes in Fig. 8, in which each pixel value of the distortion distribution map represents the distortion level. Sinc... | Figure 13: Qualitative evaluations of the rectified distorted images on real-world scenes. For each evaluation, we show the distorted image and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left ... | Figure 12: Qualitative evaluations of the rectified distorted images on people (left) and challenging (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified re... |
Figure 11: Qualitative evaluations of the rectified distorted images on indoor (left) and outdoor (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified resul... | We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen... | C |
Table 3 shows the training time per epoch of SNGM with different batch sizes. When $B=128$, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with the size of 16, which cannot fully utilize the computation power. Hence, compared to other results, SNGM r... |
A direct corollary is that the batch size is constrained by the smoothness constant $L$, i.e., $B\leq\mathcal{O}(1/L)$. Hence, we cannot increase the batch size casually in these SGD-based methods. Otherwise, it may slow down the convergence rate, and ... | Table 3 shows the training time per epoch of SNGM with different batch sizes. When $B=128$, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with the size of 16, which cannot fully utilize the computation power. Hence, compared to other results, SNGM r... | Please note that EXTRAP-SGD has two learning rates for tuning and needs to compute two mini-batch gradients in each iteration. EXTRAP-SGD requires more time than other methods to tune hyperparameters and train models.
Similarly, CLARS needs to compute extra mini-batch gradients to estimate the layer-wise learning rate ... | argued that SGD with a large batch size needs to increase the number of iterations. Further, authors in [32]
observed that gradients at different layers of deep neural networks vary widely in the norm and proposed the layer-wise adaptive rate scaling (LARS) method. A similar method that updates the model parameter in a... | C |
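The row above cites the observation behind layer-wise adaptive rate scaling (LARS): gradient norms vary widely across layers, so each layer's update is scaled by the ratio of its parameter norm to its gradient norm. A simplified sketch; the trust coefficient value is an assumption, and the published LARS additionally folds in weight decay and momentum, omitted here:

```python
import torch

@torch.no_grad()
def lars_step(params, lr=0.1, trust_coef=0.001, eps=1e-9):
    """One simplified LARS update: per-layer local learning rate
    proportional to ||w|| / ||grad w||."""
    for w in params:
        if w.grad is None:
            continue
        local_lr = trust_coef * w.norm() / (w.grad.norm() + eps)
        w -= lr * local_lr * w.grad

model = torch.nn.Linear(10, 10)
loss = model(torch.randn(4, 10)).pow(2).mean()
loss.backward()
lars_step(model.parameters())
```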
When the algorithm terminates with $C_{s}=\emptyset$, Lemma 5.2 ensures the solution $z^{\text{final}}$ is integral. By Lemma 5.5, any client $j$ with $d(j,S)>$... | $F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}~|~j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$... | Brian Brubach was supported in part by NSF awards CCF-1422569 and CCF-1749864, and by research awards from Adobe. Nathaniel Grammel and Leonidas Tsepenekas were supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. Aravind Srinivasan was supported in part by NSF awa... | For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here,
$\mathcal{F}$ and $\mathcal{C}$...
do $F_{A}\leftarrow\{i^{A}_{j}~|~j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$ | B |
In real networked systems, the information exchange among nodes is often affected by communication noises, and the structure of the network often changes randomly due to packet dropouts, link/node failures and recreations, which are studied in [8]-[10].
| such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), etc. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function... | Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than i.i.d. graph sequences as in [12]-[15],
and additive and... | However, a variety of random factors may co-exist in practical environments.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d... |
Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... | C |
Compared to generalization, the bucketization technique [33, 18] maintains excellent information utility because it preserves all the original QI values. However, most existing approaches cannot prevent identity disclosure, and the existence of individuals in the published table is likely to be disclosed [27]. Furthermore, t... | Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., the Laplace Mechanism [7] and the Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ... |
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics... | In recent years, the massive digital information of individuals has been collected by numerous organizations. The data holders, also known as curators, use the data for data mining tasks; meanwhile, they also exchange or publish microdata for further comprehensive research. However, the publication of microdata poses cr... | Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of the $k$-anonymity family sanitize the origi... | A |
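A minimal sketch of the randomized response mechanism named in the row above, for one binary attribute: each user reports the truth with probability p and the flipped value otherwise, and the collector debiases the observed mean. The parameter values are illustrative assumptions:

```python
import random

def randomized_response(bit, p=0.75):
    """Report the true bit with probability p, the flipped bit otherwise."""
    return bit if random.random() < p else 1 - bit

def estimate_frequency(reports, p=0.75):
    """Debias the observed mean: E[report] = f*(2p-1) + (1-p)."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

truth = [1] * 300 + [0] * 700                  # true frequency f = 0.3
reports = [randomized_response(b) for b in truth]
print(estimate_frequency(reports))             # close to 0.3
```

Each individual report reveals little about the user (the mechanism satisfies local differential privacy with epsilon = ln(p/(1-p))), yet aggregate statistics remain recoverable.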
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.... | HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in the original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an... | Table 2: PointRend’s step-by-step performance on our own validation set (split from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | A |
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| We denote by $\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_{i}(\delta_{1},\dots,\delta_{n})=\delta_{i}$... |
In version 1 of this note, which can still be found on the arXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | B |
Corollary 1 shows that if local variations are known, we can achieve near-optimal dependency on the total variation $B_{\bm{\theta}}, B_{\bm{\mu}}$ and time horizo... | Motivated by the empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhov... | The definition of the total variation $B$ is related to the misspecification error defined by Jin et al. (2020). One can apply the Cauchy-Schwarz inequality to show that our total variation bound implies that the misspecification in Eq. (4) of Jin et al. is also bounded (but not vice versa). However, the regret analys... | The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and... | Reinforcement learning (RL) is a core control problem in which an agent sequentially interacts with an unknown environment to maximize its cumulative reward (Sutton & Barto, 2018). RL finds enormous applications in real-time bidding in advertisement auctions (Cai et al., 2017), autonomous driving (Shalev-Shwartz et al.... | B |
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio... |
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... |
The survey was written in English and made available to anyone with the hyperlink. Participation was fully voluntary. For dissemination, various channels were employed including a mailing list of students from a local Singapore university, an informal Telegram supergroup joined by students, alumni, and faculty of the ... | 75 of the 104 responses fulfilled the criterion of having respondents who were currently based in Singapore. This set was extracted for further analysis and will be henceforth referred to as ‘SG-75’. The details on the participant demographics of SG-75 are shown in Table 1. From SG-75, two more subsets were formed via ... | C |
where $\mathcal{S}^{+}$, $\mathcal{S}^{-}$ represent the positive entity pair set (i.e., the training set) and the sampled negative entity pair set, respectively. The term $||\cdot||$... |
In Table 8, we present more detailed entity prediction results on open-world FB15K-237, considering the influence of different decoders. Our observations indicate that decentRL consistently outperforms the other methods across most metrics when using TransE and DistMult as decoders. Furthermore, we provide results on ... | We employ different adaptation strategies for various KG embedding tasks. In entity alignment, we follow the existing GNN-based methods [12, 39] to concatenate the output embeddings from each layer to form the final representation. This process can be written as follows:
| Similarly, for entity prediction, we leverage a decoder to predict missing entities [13]. In our experiments, we employ ComplEx [30] and DistMult [29] as the decoders due to their superior performance without compromising efficiency. We initialize the input entity embeddings, relation embeddings, and weight matrices us... | In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct compr... | C |
To validate the effectiveness of our method, we compare the proposed method with the following self-supervised exploration baselines: (i) VDM. The proposed self-supervised exploration method. (ii) ICM [10]. ICM first builds an inverse dynamics mode... |
(i) For the network architecture, the important hyper-parameters include the dimensions of latent space $Z$, the dimensions of state features $d$, and the use of skip-connection between the prior and generative networks. We add an ablation study in Tab. IV to perform a grid search. The result shows t... |
The related exploration methods aim to remove the stochasticity of the dynamics rather than modeling it. For example, Inverse Dynamics [10], Random Features [11], and EMI [30] learn a feature space that removes task-irrelevant information such as white noise. Curiosity-Bottleneck [31] and Dynamic Bot... | We compare the model complexity of all the methods in Table I. VDM, RFM, and Disagreement use a fixed CNN for feature extraction. Thus, the trainable parameters of the feature extractor are 0. ICM estimates the inverse dynamics for feature extraction with 2.21M parameters. ICM and RFM use the same architecture for dynamics... |
To validate the effectiveness of our method, we compare the proposed method with the following self-supervised exploration baselines: (i) VDM. The proposed self-supervised exploration method. (ii) ICM [10]. ICM first builds an inverse dynamics mode... | C |
The number of coefficients $|A_{m,n,1}|=\binom{m+n}{n}\in\mathcal{O}(m^{n})$... | Thus, combining sub-exponential node numbers with exponential approximation rates, interpolation with respect to $l_{2}$-degree polynomials might yield a way of lifting the curse of dimensionality and answering Question 1.
| Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to
scale sub-exponentially with space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are abl... | Whatsoever, any answer to Question 2 that is to be of practical relevance
must provide a recipe to construct interpolation nodes $P_{A}$ that allow efficient approximation while resisting the curse of dimensionality in terms of Question 1. | convergence rates for the Runge function, as a prominent example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality.... | A |
Several data-efficient two-sample tests [20, 21, 22] are constructed based on Maximum Mean Discrepancy (MMD), which quantifies the distance between two distributions by introducing test functions in a Reproducing Kernel Hilbert Space (RKHS).
However, it is pointed out in [23] that when the bandwidth is chosen based on ... | On the one hand, it should be rich enough to claim $\mu=\nu$ if the metric vanishes.
On the other hand, to control the type-I error, the function space should also be relatively small so that the empirical estimate of the IPM decays quickly to zero. | The orthogonal constraint on the projection mapping $A$ is for normalization, such that any two different projection mappings have distinct projection directions.
The projected Wasserstein distance can also be viewed as a special case of integral probability metric with the function space | It is shown in [39] that its empirical estimate decays to zero with rate $O(n^{-1/2})$ under mild conditions, and a two-sample test can be constructed based on this nice statistical behavior.
However, it is costly to comput... | In other words, we only scale the first two diagonal entries in the covariance matrix of $\nu$ to make the hypothesis testing problem difficult to perform.
We compare the performance of the PW test with the MMD test discussed in [20], where the kernel function is chosen to be the standard Gaussian kernel with ... | A |
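A minimal sketch of the (biased) squared-MMD statistic with a Gaussian kernel, the quantity behind the MMD tests cited in the row above. The bandwidth value is an illustrative assumption; as the excerpt notes, bandwidth selection is itself a subtle issue:

```python
import numpy as np

def gaussian_kernel(x, y, bw=1.0):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw**2))

def mmd2(x, y, bw=1.0):
    """Biased estimate of squared MMD between samples x (n, d) and y (m, d)."""
    return (gaussian_kernel(x, x, bw).mean()
            + gaussian_kernel(y, y, bw).mean()
            - 2 * gaussian_kernel(x, y, bw).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, (200, 2))
y = rng.normal(0.5, 1.0, (200, 2))   # mean-shifted distribution
print(mmd2(x, y))                    # clearly positive under the shift
```

In a test, the statistic is compared against a permutation-based null distribution to calibrate the type-I error.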
Figure 1: Image reconstruction using $\beta$-TCVAE (Figure 1b) and DS-VAE (Figure 1d). DS-VAE is able to take the blurry output of the underlying $\beta$-TCVAE model and learn to render a much better approximation to the target (Figure 1a). Figure 1c shows the effect of perturbing $Z$. DS-VA... | The framework is general and can utilize any DGM. Furthermore, even though it involves two stages, the end result is a single model which does not rely on any auxiliary models, additional hyper-parameters, or hand-crafted loss functions, as opposed to previous works addressing the problem (see the related work section... | While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i... |
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (footnote 1: In this exposition we use unsupervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise... | Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and corre... | A |
The NOT gate can be operated as a logic-negation operation through one ‘twisting’, as in a 4-pin design. To be exact, the position of the middle ground pin is fixed, and the operation is a structural transformation that swaps the positions of the remaining two (true and false) pins. | We will look at the inputs through 18 test cases to see if the circuit is acceptable. Next, it verifies with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab... |
DFS (Depth-First Search) verifies that the output is possible for the actual pin connection state. As described above, the output is determined by the 3-pin input, so we will enter 1 with the A2-A1 and B2-B1 connections (the reverse is treated as 0), and the corresponding output will be recognized... |
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e., a twist, and four pins were used for AND and OR operations, as series and parallel connections were required. However, one can think about whether the four-pin designs are the... |
Fig. 3 shows the AND and OR gates built from 3-pin based logic; it also shows the connection status of the output pin when A=0, B=1 is entered into the AND gate. When A=0 and B=1, i.e., the inverted A pin is connected and B is connected, output C is connected only to the following two pins, which is the correct result for the AND operation. | D |
Any permutation polynomial $f(x)$ decomposes the finite field $\mathbb{F}_{q}$ into sets containing mutually exclusive orbits, with the cardinality of each set being equal to the cycle length of the elements in that se... |
Given an $n$-dimensional vector space $\mathbb{F}^{n}$ over a finite field $\mathbb{F}$, maps $F:\mathbb{F}^{n}\to\mathbb{F}^{n}$... | There has been extensive study of families of polynomial maps defined through a parameter $a\in\mathbb{F}$ over finite fields. Some well-studied families of polynomials include the Dickson polynomials and reverse Dickson polynomials, to name a few. Conditions for such families of maps to... | Univariate polynomials $f(x):\mathbb{F}\to\mathbb{F}$ that induce a bijection over the field $\mathbb{F}$ are called permutation polynomials (in short, PP) and have been studied extensively in the literature. For instance, given a gene... | The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available on the invertibility of polynomials and the construction of inverses of permutation polynomials over $\mathbb{F}$... | B |
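A small sketch of the orbit decomposition described in the row above: iterate a permutation polynomial over a prime field and collect the cycles. The example polynomial $x^3$ over $\mathbb{F}_5$ is an illustrative choice (it is a permutation polynomial since $\gcd(3, 5-1)=1$); the brute-force approach is only meant to make the definition concrete:

```python
def cycles_of_permutation_polynomial(coeffs, q):
    """Decompose F_q into the orbits (cycles) of x -> f(x) mod q, where
    f is given by its coefficient list [a_0, a_1, ...] and q is prime."""
    f = lambda x: sum(a * pow(x, i, q) for i, a in enumerate(coeffs)) % q
    assert len({f(x) for x in range(q)}) == q, "f is not a permutation of F_q"
    seen, cycles = set(), []
    for x in range(q):
        if x in seen:
            continue
        cycle, y = [x], f(x)
        while y != x:
            cycle.append(y)
            y = f(y)
        seen.update(cycle)
        cycles.append(cycle)
    return cycles

# x^3 over F_5: the orbits are [0], [1], [2, 3], [4].
print(cycles_of_permutation_polynomial([0, 0, 0, 1], 5))
```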
Typically $B$ is set to 50, but the choice of $q$ and $\pi_{\text{thr}}$ is somewhat more involved. In particular, one can obtain a bound on the expected number of falsely selected variables, the so-called per-family error rate (PFER),... | Forward selection is a simple, greedy feature selection algorithm (Guyon & Elisseeff, 2003). It is a so-called wrapper method, which means it can be used in combination with any learner (Guyon & Elisseeff, 2003). The basic strategy is to start with a model with no features, and then add the sing... |
In this article we investigate how the choice of meta-learner affects the view selection and classification performance of MVS. We compare the following meta-learners: (1) the interpolating predictor of Breiman (1996), (2) nonnegative ridge regression (Hoerl & Kennard, 1970; Le Cessie & Van Hou... |
The true positive rate in view selection for each of the meta-learners can be observed in Figure 2. Ignoring the interpolating predictor for now, nonnegative ridge regression has the highest TPR, which is unsurprising seeing as it performs feature selection only through its nonnegativity constraints. Nonnegative ridge... | Stability selection is an ensemble learning framework originally proposed for use with the lasso (Meinshausen & Bühlmann, 2010), although it can be used with a wide variety of feature selection methods (Hofner et al., 2015). The basic idea of stability selection is to apply a feature selection m... | A |
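A minimal sketch of stability selection as summarized in the row above: run a base selector (the lasso, following the cited origin of the method) on B random half-subsamples and keep the features whose selection frequency exceeds the threshold pi_thr; the regularization strength and data are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, B=50, pi_thr=0.6, alpha=0.1, seed=0):
    """Keep features whose lasso selection frequency over B random
    half-subsamples is at least pi_thr."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(B):
        idx = rng.choice(n, size=n // 2, replace=False)
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        counts += (coef != 0)
    return np.where(counts / B >= pi_thr)[0]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(size=200)
print(stability_selection(X, y))   # typically recovers features 0 and 3
```

The choice of B matters less than q (the number of features the base selector keeps per run) and pi_thr, which together bound the per-family error rate mentioned in the excerpt.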
For LOF, iForest, FastABOD, OCSVM and SOD, we use the implementations in the dbscan [80] R package, IsolationForest [81] R package, abodOutlier [82] R package, e1071 [83] R package and HighDimOut [84] R package respectively. MBOM, ALSO and COMBN are implemented by ourselves based on the bnlearn [85] R package. All the... |
The overall running times of the two DepAD algorithms and the nine benchmark methods are presented in Table 11. In general, the two DepAD algorithms have high efficiency. Among the nine benchmark methods, FastABOD, ALSO, SOD and COMBN could not finish in four hours on some datasets. | In the experiments, if a method is unable to produce a result within four hours, we stop the experiments. The stopped methods and data sets include 1) FastABOD and SOD on datasets Backdoor and Census; 2) ALSO on datasets Backdoor, CalTech16, Census, Secom, MNIST, CalTech28, Fashion and Ads; 3) COMBN on datasets Backdoo... |
Thirty-two real-world datasets are used for the evaluation. These datasets cover diverse domains, e.g., spam detection, molecular bioactivity detection, and image object recognition, as shown in Table 4. The AID362, Backdoor, MNIST and caltech16 datasets are obtained from the Kaggle data repository [72]. The Pima, WBC... | The running times on the 32 datasets and their average values are shown in Table 10. Comparing the five methods, FBED is the most efficient, with an average running time of 2.7 seconds, followed by MI at 23 seconds, HITON-PC at 26 seconds, DC at 133 seconds, and IEPC being the most time-consuming at 1538 seconds. Notab... | B |
For building intuition, assume that $\mathbf{X}_{\mathcal{Q}_{t}}^{\top}\theta_{*}$... |
Our result is still $\mathrm{O}(\sqrt{d})$ away from the minimax lower bound of Chu et al. [2011] known for the linear contextual bandit. In the case of logistic bandits, Li et al. [2017] makes an i.i.d. assumption on the contexts to bridge the gap (however, they... | Next we show how using a global lower bound in the form of $\kappa$ (see Assumption 2) early in the analysis in the works Filippi et al. [2010], Li et al. [2017], Oh & Iyengar [2021] leads to a loose prediction error upper bound. For this we first introduce a new notation:
| The detailed proof is provided in A.4. Here we develop the main ideas leading to this result and develop an analytical flow which will be re-used while working with the convex confidence set $E_{t}(\delta)$ in Section 4.3. In the previou... | where pessimism is the additive inverse of the optimism (the difference between the payoffs under true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that $\theta_{*}\in C_{t}(\delta)$... | B |
Inspired by FPN [22], which computes multi-scale features with different levels, we propose a cross-scale graph pyramid network (xGPN). It progressively aggregates features from cross scales as well as from the same scale at multiple network levels via a hybrid module of a temporal branch and a graph branch. As shown ... |
2) We propose a novel temporal action localization framework VSGN, which features two key components: video self-stitching (VSS) and a cross-scale graph pyramid network (xGPN). For effective feature aggregation, we design a cross-scale graph network for each level in xGPN with a hybrid module of a temporal branch and a gra... | We provide an ablation study for the key components VSS and xGPN in VSGN to verify their effectiveness on the two datasets in Tables 3 and 4, respectively. The baselines are implemented by replacing each xGN module in xGPN with a layer of $\textrm{Conv1d}(3,2)$ and ReLU, and not using cutt... | Cross-scale graph network. The xGN module contains a temporal branch to aggregate features in a temporal neighborhood, and a graph branch to aggregate features from intra-scale and cross-scale locations. Then it pools the aggregated features into a smaller temporal scale. Its architecture is illustrated in Fig. 4. The ... | To further improve the boundaries generated from $M_{loc}$, we design $M_{adj}$ inspired by FGD in [24]. For each updated anchor seg... | C |
The user interface of VisEvol is structured as follows:
(1) two projection-based views, referred to as Projections 1 and 2, occupy the central UI area (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d and e)); | After another hyperparameter space search (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d)) with the help of supporter views (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c, f, and g)), out of the 290 models generated in... | (2) active views relevant for both projections are positioned on the top (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(b and c)); and
(3) commonly-shared views that update on the exploration of either Projection 1 or 2 are placed at the bottom (see VisEvol: Visual Ana... | The user interface of VisEvol is structured as follows:
(1) two projection-based views, referred to as Projections 1 and 2, occupy the central UI area (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d and e)); | (ii) in the next exploration phase, compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c–e));
(iii) during the detailed examination phase, zoo... | B |
Simulation results demonstrate that the DSMC algorithm achieves faster convergence, characterized by an exponential convergence guarantee, compared to existing homogeneous and time-inhomogeneous Markov chain synthesis algorithms presented in [7] and [14]. | Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen... |
In this section, we introduce a shortest-path algorithm that is proposed as a modification to the Metropolis-Hastings algorithm in [7, Section V-E] and integrated with the Markov chain synthesis methods described in [14] and [15]. This algorithm can also be integrated with the DSMC algorithm to further increase the co... | In this section, we apply the DSMC algorithm to the probabilistic swarm guidance problem and provide numerical simulations that show the convergence rate of the DSMC algorithm is considerably faster as compared to the previous Markov chain synthesis algorithms in [7] and [14].
| The paper is organized as follows. Section II presents the consensus protocol with state-dependent weights. The decentralized state-dependent Markov matrix synthesis (DSMC) algorithm is introduced in Section III.
Section IV introduces the probabilistic swarm guidance problem formulation, and presents numerical simulati... | D |
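The row above builds on Metropolis-Hastings-style Markov chain synthesis. As a point of reference, here is a minimal sketch of the classical construction (not the DSMC algorithm itself): given a desired stationary distribution pi and a symmetric adjacency structure, the acceptance step yields a Markov matrix M with pi as its stationary distribution. The uniform proposal over neighbors is an illustrative assumption:

```python
import numpy as np

def metropolis_hastings_chain(pi, adj):
    """Synthesize a Markov matrix with stationary distribution pi on a
    graph with symmetric 0/1 adjacency matrix adj."""
    n = len(pi)
    deg = adj.sum(axis=1)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                proposal = 1.0 / deg[i]                 # uniform over neighbors
                accept = min(1.0, (pi[j] * deg[i]) / (pi[i] * deg[j]))
                M[i, j] = proposal * accept
        M[i, i] = 1.0 - M[i].sum()                      # remaining mass stays put
    return M

pi = np.array([0.1, 0.2, 0.3, 0.4])
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
M = metropolis_hastings_chain(pi, adj)
print(np.allclose(pi @ M, pi))                          # True: pi is stationary
```

Detailed balance holds because pi_i * M_ij = min(pi_i/deg_i, pi_j/deg_j) is symmetric in i and j; state-dependent schemes such as DSMC aim to accelerate convergence beyond this baseline.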
Apart from methods tackling a QAP formulation (see previous paragraph), there exist directions utilising other structural properties of isometries.
The Laplace-Beltrami operator (LBO) [54], a generalisation of the Laplace operator on manifolds, as well as its eigenfunctions are invariant under isometries. | The teaser figure shows that our method finds the correct correspondence among the partial shape collection, while being cycle-consistent.
Partial functional maps are rectangular and low-rank [58], and this experiment shows that our method can also handle this more general case. More details can be found in the su... | Due to their low-dimensionality and continuous representation, functional maps also serve as the backbone of many deep learning architectures for 3D correspondence.
One of the first examples is FMNet [40], which has also been extended for unsupervised learning settings recently [27, 3, 59]. | However, extracting a point-wise correspondence from a functional map matrix is not trivial [17, 57]. This is mainly because of the low-dimensionality of the functional map, and the fact that not every functional map matrix is a representation of a point-wise correspondence [51].
In [44], the authors simultaneously sol... | B |
Convert the coloring $f:\Gamma_{C}/{\sim}\rightarrow\{0,1\}$ in a directed clique path tree of $\Gamma_{C}$. |
On the side of directed path graphs, prior to this paper, it was necessary to implement two algorithms to recognize them: a recognition algorithm for path graphs as in [3, 22], and the algorithm in [4] that in linear time is able to determine whether a path graph is also a directed path graph. Our algorithm directly... | We presented the first recognition algorithm for both path graphs and directed path graphs. Both graph classes are characterized very similarly in [18], and we extended the simpler characterization of path graphs in [1] to include directed path graphs as well; this result can be of interest itself. Thus, now these two ... | Directed path graphs are characterized by Gavril [9]; in the same article he also gives the first recognition algorithm, which has $O(n^{4})$ time complexity. In the above cited article, Monma and Wei [18] give the second characterizati... | On the side of directed path graphs, at the state of the art, our algorithm is the only one that does not use the results in [4], which give a linear-time algorithm able to establish whether a path graph is also a directed path graph (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ... | B |
UKfaculty: this network reflects the friendship among academic staff of a given Faculty in a UK university consisting of three separate schools [UKfaculty]. The original network contains 81 nodes, and the smallest group only has 2 nodes. The smallest group is removed for community detection in this paper.
| In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from
http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the origi... | published around the 2004 presidential election and sold by the online bookseller Amazon.com. In Polbooks, nodes represent books, edges represent frequent co-purchasing of books by the same buyers. Full information about edges and labels can be downloaded from http://www-personal.umich.edu/~mejn/netdata/. The original ... | UKfaculty: this network reflects the friendship among academic staff of a given Faculty in a UK university consisting of three separate schools [UKfaculty]. The original network contains 81 nodes, and the smallest group only has 2 nodes. The smallest group is removed for community detection in this paper.
| Before comparing these methods, we perform some preprocessing to remove nodes that may have mixed memberships for community detection. For the Polbooks data, nodes labeled as “neutral” are removed. The smallest group with only 2 nodes in the UKfaculty data is removed. Table 1 presents some basic information about the four dat... | D |
These works utilize the property that the diffusion process associated with Langevin dynamics in $\mathcal{X}$ corresponds to the Wasserstein gradient flow of the KL-divergence in $\mathcal{P}_{2}(\mathcal{X})$
(Jo... | In addition to gradient-based MCMC, variational transport also shares similarity with Stein variational gradient descent (SVGD) (Liu and Wang, 2016), which is a more recent particle-based algorithm for Bayesian inference.
Variants of SVGD have been subsequently proposed. See, e.g., | Our Contribution. Our contribution is twofold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation.
In each iteration, variati... | artifacts adopted only for theoretical analysis. We present the details of such a modified algorithm in Algorithm 2 in §A.
Without these modifications, Algorithm 2 reduces to the general method proposed in Algorithm 1, a deterministic particle-based algorithm, which is more advisable for | In each iteration, variational transport approximates the update in (1.1) by first solving the dual maximization problem associated with the variational form of the objective and then using the obtained solution to specify a direction to push each particle.
The variational transport algorithm can be viewed as a forward... | A |
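The row above notes that the Langevin diffusion in $\mathcal{X}$ realizes the Wasserstein gradient flow of the KL-divergence in $\mathcal{P}_2(\mathcal{X})$. A minimal sketch of the corresponding discrete-time particle update, the unadjusted Langevin algorithm, is shown below; the standard-normal target and the step size `eps` are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

def ula_step(particles, grad_log_p, eps):
    """One unadjusted-Langevin step: discretizes the diffusion whose law
    follows the Wasserstein gradient flow of KL(q || p)."""
    noise = np.random.randn(*particles.shape)
    return particles + eps * grad_log_p(particles) + np.sqrt(2.0 * eps) * noise

# Toy usage: standard-normal target, so grad log p(x) = -x.
particles = np.random.randn(1000, 2) * 3.0   # initial particle cloud
for _ in range(500):
    particles = ula_step(particles, lambda x: -x, eps=1e-2)
# particles are now approximately distributed as N(0, I)
```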
The evaluation scenarios come from four real road network maps of different scales, including Hangzhou (China), Jinan (China), New York (USA) and Shenzhen (China), illustrated in Fig. 6. The road networks and data of Hangzhou, Jinan and New York are from the public datasets (https://traffic-signal-control.github.io/)....
Mixedh. The mixedh is a mixed high traffic flow with a total flow of 4770 in one hour, in order to simulate a heavy peak. The difference from the mixedl setting is that the arrival rate of vehicles during 1200-1800s increased from 0.33 vehicles/s to 4.0 vehicles/s. The data statistics are listed in Tab. II. |
Real. The traffic flows of Hangzhou (China), Jinan (China) and New York (USA) are from the public datasets (https://traffic-signal-control.github.io/), which are processed from multiple sources. The traffic flow of Shenzhen (China) was generated by ourselves based on the traffic trajectories collected from 80 red-... | We run the experiments under three traffic flow configurations: real traffic flow, mixed low traffic flow and mixed high traffic flow. The real traffic flow is real-world hourly statistical data with slight variance in vehicle arrival rates, as shown in Tab. I. Since the real-world strategies tend to break down during ...
Mixedl. The mixedl is a mixed low traffic flow with a total flow of 2550 in one hour, to simulate a light peak. The arrival rate changes every 10 minutes, which is used to simulate the uneven traffic flow distribution in the real world; the details of the vehicle arrival rate and cumulative traffic flow are shown in F... | C |
can be as small as needed when $k$ is sufficiently large.
Consequently, the sequence $\{\mathbf{x}_{k}\}_{k=0}^{\infty}$... | $\overline{S_{\delta}(\mathbf{x}_{*})}\subset\Omega_{*}\cap\Omega_{1}$... | by the iteration (4.1) converges to a certain
$\hat{\mathbf{x}}\in\overline{S_{\delta}(\mathbf{x}_{*})}\subset\Omega_{1}$ | for all $\hat{\mathbf{x}},\check{\mathbf{x}}\in S_{\delta}(\mathbf{x}_{*})$, $\mathbf{z}\in S_{\tau}(\mathbf{x}_{*})$... | $\mathbf{y}=\mathbf{y}_{*}\in\Sigma_{0}$ stays in $S_{\delta}(\mathbf{x}_{*})$... | B |
In terms of analysis techniques, we note that the theoretical analysis of the algorithms we present is specific to the setting at hand and treats items “collectively”. In contrast, almost all known online bin packing algorithms are analyzed using a weighting technique (?), which treats each bin “individually” and indep... | These algorithms are variants of the classic Harmonic algorithm (?), which places items of approximately equal sizes, according to a harmonic sequence, in the same bin.
The current best algorithm is the Advanced Harmonic (AH) algorithm, which has a competitive ratio of 1.57829 (?), whereas the best-known lower bound ... | In this setting, the objective is to minimize the expected loss, defined as the difference between the number of bins opened by the algorithm and the total size of all items normalized by the bin capacity.
Ideally, one aims for a loss that is as small as $o(n)$, where $n$ is the nu...
In this work, we focus on the online variant of bin packing, in which the set of items is not known in advance but is rather revealed in the form of a sequence. Upon the arrival of a new item, the online algorithm must either place it into one of the currently open bins, as long as this action does not violate the bin... |
Online bin packing has a long history of study. The simplest algorithm is NextFit, which places an item into its single open bin when possible; otherwise, it closes the bin (does not use it anymore) and opens a new bin for the item. FirstFit is another simple heuristic that places an item into the first bin of suffici... | D |
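The NextFit and FirstFit heuristics described in this row are easy to state precisely in code. A minimal sketch under the usual unit-capacity convention (function and variable names are illustrative, not from the cited works):

```python
def next_fit(items, capacity=1.0):
    """NextFit: keep a single open bin; close it when the item doesn't fit."""
    bins, level = 1, 0.0
    for size in items:
        if level + size <= capacity:
            level += size
        else:
            bins += 1          # close the current bin, open a new one
            level = size
    return bins

def first_fit(items, capacity=1.0):
    """FirstFit: place each item into the first open bin with enough room."""
    levels = []                # current load of every open bin
    for size in items:
        for i, level in enumerate(levels):
            if level + size <= capacity:
                levels[i] += size
                break
        else:
            levels.append(size)  # no bin fits: open a new one
    return len(levels)
```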
$\mathcal{L}\left(\phi(X_{U}(p);W_{\phi}),V(p)\right),$
|
The above formulation alone causes many of the produced patches to have unnecessarily long edges, which the network then folds so that the patch fits the surface of an object. To mitigate the issue, we add an edge length regularization motivated by (Wang et al., 2018). If we assume that the reconstructed mesh has the form... | Watertightness Typically, a mesh is referred to as being either watertight or not watertight. Since this is a true-or-false statement, there is no well-established measure to define the degree of discontinuities in the object’s surface. To fill this gap, we propose a metric based on a simple, approximate check of whether...
Recently proposed object representations address this pitfall of point clouds by modeling object surfaces with polygonal meshes (Wang et al., 2018; Groueix et al., 2018; Yang et al., 2018b; Spurek et al., 2020a, b). They define a mesh as a set of vertices that are joined with edges in triangles. These triangles create... | Practically speaking, our approach transforms the embedding of the point cloud obtained from the base model to parametrize the bijective function represented by the MLP network. This function aims to find a mapping from a canonical 2D patch to the 3D patch on the surface of the target mesh. We condition the positioning ... | A |
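The edge length regularization mentioned in this row penalizes long edges so that generated patches need not fold to fit the surface. A hedged PyTorch sketch of one plausible form of such a regularizer follows; the squared-length form and tensor layout are assumptions, and the cited works may use a different variant.

```python
import torch

def edge_length_regularizer(vertices, edges):
    """Penalize long edges so generated patches don't fold excessively.

    vertices: (V, 3) float tensor of mesh vertex positions
    edges:    (E, 2) long tensor of vertex-index pairs
    Returns the mean squared edge length (an assumed form of the penalty).
    """
    diffs = vertices[edges[:, 0]] - vertices[edges[:, 1]]  # (E, 3)
    return (diffs ** 2).sum(dim=1).mean()
```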
$R_{\mathcal{Z}}^{2}=2mM_{x}^{2}(\lambda_{\min}^{+}(\mathbf{W}_{\mathbf{x}}))^{-2}$ | Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem and solve it by the DMP algorithm. Similarly to Section 3, we consider an SPP in a proximal setup and introduce Lagrangian multipliers for the common variables. However, in t... | To prove Theorem 3.5 we first show that the iterates of Algorithm 1 naturally correspond to the iterates of a general Mirror-Prox algorithm applied to problem (54). Then we extend the standard analysis of the general Mirror-Prox algorithm to account for unbounded feasible sets.
$\left\|(\mathbf{x},\mathbf{p})\right\|_{(\mathcal{X},\mathcal{P})}^{2}=\left\|\mathbf{x}\right\|_{\mathcal{X}}^{2}+\left\|\mathbf{p}\right\|_{\mathcal{P}}^{2}$ | Next, we introduce the second important component of the convergence rate analysis, namely the smoothness assumption on the objective $F$.
To set the stage, we first introduce a general definition of a Lipschitz-smooth function of two variables. | D |
The set of cycles of a graph has a vector space structure over $\mathbb{Z}_{2}$, in the case of undirected graphs, and over $\mathbb{Q}$, in the case of directed graphs [5]. A basis of such a vector space is called a cycle basis and its dimensio...
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class. | In the case that we can find some non-star spanning tree $T$ of
$G$ such that $\cap(T)<\cap(T_{s})$, then we can “simplify” the instance by removing the interbranch cycle-edges with respect to $T$...
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with the sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric... | A |
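The strictly fundamental bases mentioned in this row arise from spanning trees: each non-tree edge closes exactly one cycle with the tree, and these m − n + 1 cycles form a basis of the cycle space over $\mathbb{Z}_2$. A small self-contained sketch for connected undirected graphs (all names are illustrative):

```python
from collections import deque

def fundamental_cycles(n, edges):
    """Strictly fundamental cycle basis of a connected undirected graph:
    each non-tree edge closes exactly one cycle with a BFS spanning tree,
    so the basis has dimension m - n + 1."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, depth = {0: None}, {0: 0}
    queue = deque([0])
    while queue:                                  # build BFS spanning tree
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v], depth[v] = u, depth[u] + 1
                queue.append(v)
    tree = {frozenset((u, parent[u])) for u in parent if parent[u] is not None}

    cycles = []
    for u, v in edges:
        if frozenset((u, v)) in tree:
            continue                              # tree edges induce no cycle
        left, right = [u], [v]                    # climb to the common ancestor
        a, b = u, v
        while a != b:
            if depth[a] >= depth[b]:
                a = parent[a]; left.append(a)
            else:
                b = parent[b]; right.append(b)
        cycles.append(left + right[-2::-1])       # vertex list u .. lca .. v
    return cycles

# A 4-cycle plus one chord: m - n + 1 = 5 - 4 + 1 = 2 basis cycles.
print(fundamental_cycles(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```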
$N=N(b,k,m,\ell)$ such that for every $n\geq N$ and any group homomorphism $h:C_{k}(G[n]^{m})\to(\mathbb{Z}_{2})^{b}$...
[large 0/1 matrix data omitted] | In this paper we are concerned with generalizations of Helly’s theorem that allow for more flexible intersection patterns and relax the convexity assumption. A famous example is the celebrated $(p,q)$-theorem [3], which asserts that for a finite family of convex sets in $\mathbb{R}^{d}$...
Two central problems in this line of research are to identify the weakest possible assumptions under which the classical theorems generalize, and to determine their key parameters, for instance the Helly number ($d+1$ for convex sets in $\mathbb{R}^{d}$)... | In this respect, the case of convex lattice sets, that is, sets of the form $C\cap\mathbb{Z}^{d}$ where $C$ is a convex set in $\mathbb{R}^{d}$... | A |
Using our approach, we managed to achieve the same accuracy as before, 89%, compared to 83% reported by Mansouri et al. [94] for the additional external data set. For precision and recall, we always use macro-average, which is identical to Mansouri et al. [94]. On the one hand, the precision was 4% lower in both test a... | Following the guidelines from prior works [97, 98, 99, 68], we conducted online semi-structured interviews with three experts to collect qualitative feedback about our system’s effectiveness.
The first ML expert (E1) is a senior lecturer in mathematics with a PhD in this field. | Next, as XGBoost [29] is a nonlinear ML algorithm, we also train a linear classifier (a logistic regression [83] model with the default Scikit-learn’s hyperparameters [84]) to compute the coefficients matrix and then use Recursive Feature Elimination (RFE) [40] to rank the features from the best to the worst in terms o... | Visualization and interaction.
E1 and E2 were surprised by the promising results we managed to achieve with the assistance of our VA system in the red wine quality use case of Section 4. Initially, E1 was slightly overwhelmed by the number of statistical measures mapped in the system’s glyphs. However, after the interv... |
We derived the analytical tasks described in this section from the in-depth analysis of the related work in Section 2. The three analytical tasks from Krause et al. [50], the three experts who expressed their requirements in Zhao et al. [32], and the user tasks acquired through expert interviews from Collaris and van ... | A |
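The RFE-over-logistic-regression step described in this row can be reproduced with a few lines of scikit-learn. A sketch using `load_wine` as a stand-in dataset (the actual red wine quality data, the number of selected features, and the hyperparameters in the cited work may differ):

```python
from sklearn.datasets import load_wine
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_wine(return_X_y=True)              # stand-in for the wine-quality data
estimator = LogisticRegression(max_iter=5000)  # otherwise default hyperparameters
rfe = RFE(estimator, n_features_to_select=5)   # rank features by recursive elimination
rfe.fit(X, y)
print(rfe.ranking_)   # 1 marks the selected (best) features
```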
$\|\hat{e}_{c}\|_{\infty}$
$\|\hat{e}_{c}\|_{2}$
Figure 5: Position, velocity, acceleration, and maximal contour error resulting from optimization of the MPC parameters, comparing unconstrained BO optimization (solid lines) to BO optimization with an additional constraint on the maximal tracking error, for infinity (left) and octagon (center) geometries. The right panel... | For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using a Latin hypercube design of experiments. The BO progress is shown in Figure 5, right panel, for the optimization with constraints on the jerk and on the tracking error. Af... | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low-level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi... | To reduce the number of times this experimental “oracle” is invoked, we employ Bayesian optimization (BO) [16, 17], which is an effective method for controller tuning [13, 18, 19] and optimization of industrial processes [20]. The constrained Bayesian optimization samples and learns both the objective function and the ... | A |
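The Latin hypercube initialization described in this row (20 space-filling samples over the MPC-parameter ranges before fitting the GPs) can be sketched with SciPy's quasi-Monte Carlo module; the two-dimensional parameter box and its bounds below are illustrative assumptions:

```python
from scipy.stats import qmc

# Draw 20 space-filling initial samples over a 2-D MPC-parameter box,
# as in a Latin hypercube design of experiments.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit_samples = sampler.random(n=20)                    # points in [0, 1]^2
l_bounds, u_bounds = [0.1, 1.0], [10.0, 100.0]         # assumed parameter ranges
init_params = qmc.scale(unit_samples, l_bounds, u_bounds)
```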
An interesting observation was that a weaker architecture, CNNs, was able to ignore position bias, whereas a more powerful architecture, CoordConv, resorted to exploiting this bias, resulting in worse performance. While the community has largely focused on training procedures for bias mitigation, an exciting avenue fo... | Deep learning systems are trained to minimize their loss on a training dataset. However, datasets often contain spurious correlations and hidden biases which result in systems that have low loss on the training data distribution, but then fail to work appropriately on minority groups because they exploit and even ampli...
An interesting observation was that a weaker architecture, CNNs, was able to ignore position bias, whereas a more powerful architecture, CoordConv, resorted to exploiting this bias, resulting in worse performance. While the community has largely focused on training procedures for bias mitigation, an exciting avenue fo... | Without bias mitigation mechanisms, standard models (StdM) often use spurious bias variables for inference, rather than developing invariance to them, which often results in their inability to perform well on minority patterns [27, 11, 3, 61]. To address this, several bias mitigation mechanisms have been proposed, and ... | We have pointed to issues with the existing bias mitigation approaches, which alter the loss or use resampling. An orthogonal avenue for attacking bias mitigation is to use alternative architectures. Neuro-symbolic and graph-based systems could be created that focus on learning and grounding predictions on structured c... | D |
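The CoordConv architecture discussed in this row differs from a plain convolution only in that it appends explicit coordinate channels to its input (Liu et al., 2018), which is exactly what lets a model exploit position bias. A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv layer with appended x/y coordinate channels; the explicit
    position channels are what allow a model to exploit position bias."""
    def __init__(self, in_ch, out_ch, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

layer = CoordConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(2, 3, 32, 32))   # -> (2, 16, 32, 32)
```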
Compared with the second-row methods, these methods leverage target images to improve the model performance within specific domains. This approach yields a dedicated model for each domain, outperforming PureGaze and RAT.
Notably, CSA [143] stands out as a source-free method that dispenses with the need for a sour... | Semi-supervised CNNs require both labeled and unlabeled images for optimizing networks. Wang et al. propose an adversarial learning approach to improve the model performance on the target subject/dataset [59].
As shown in Fig. 6, it requires labeled images in the training set as well as unlabeled images of the target s... | This trend is noteworthy for its implications in privacy protection.
PureGaze-FT [72] samples 5 images per person for fine-tuning. Although the method achieves good performance with 50 images, it requires annotated images while previous methods only require unannotated images. | They learn the person-specific feature during fine-tuning. Linden et al. introduce user embedding for recording personal information.
They obtain user embedding of the unseen subjects by fine-tuning using calibration samples [136]. Chen et al. [131, 132] observe the different gaze distributions of subjects. They use t... | It is the most popular dataset for appearance-based gaze estimation methods. It contains a total of 213,659 images collected from 15 subjects. The images are collected in daily life over several months and there is no constraint for the head pose. MPIIGaze dataset provides both 2D and 3D gaze annotation. It also provid... | B |
Covariance-based features have been applied in hariri20163d and achieved high recognition performance on 3D datasets in the presence of occluded regions. We have employed this method using 2D-based features (texture, gray level, LBP) to extract covariance descriptors. The evaluation on the RMFRD and SMFRD datasets co... |
The comparison of the computation times between the proposed method and Almabdy et al.’s method almabdy2019deep shows that the use of the BoF paradigm decreases the time required to extract deep features and to classify the masked faces (See Table 4). Note that this comparison is performed using the same pre-trained ... |
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieves a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner: the approach consists of applying a TL technique to fine-tune the ...
We have tested the face recognizer presented in luttrell2018deep that achieved a good recognition accuracy on two subsets of the FERET database phillips1998feret . This technique is based on transfer learning (TL), which employs pre-trained models and fine-tunes them to recognize masked faces from RMFRD and SMFRD dat...
The efficiency of each pre-trained model depends on its architecture and the abstraction level of the extracted features. When dealing with real masked faces, VGG-16 has achieved the best recognition rate, while ResNet-50 outperformed both VGG-16 and AlexNet on the simulated masked faces. This behavior can be explaine... | B |
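The setup described in this row, reusing a pre-trained CNN such as ResNet-50 as a deep-feature extractor, can be sketched with torchvision; the fine-tuning and BoF steps of the cited methods are omitted, and the preprocessing constants are the standard ImageNet ones, not values from the cited papers.

```python
import torch
from torchvision import models, transforms

# Truncate a pre-trained ResNet-50 after global pooling to use it as a
# generic deep-feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
with torch.no_grad():
    feats = feature_extractor(torch.randn(1, 3, 224, 224)).flatten(1)  # (1, 2048)
```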
We define $\text{SAX}^{\infty}$, which extends the semi-axiomatic sequent calculus (SAX) [DPP20] with arithmetic refinements, recursion, and infinitely deep typing derivations (Section 2). Then, we define an auxiliary type system called $\text{SAX}^{\omega}$... | Most importantly, the call rule does not refer to a coinductively-defined auxiliary judgment, because in the absence of free arithmetic variables, the tracked size arguments decrease from some $\overline{n}$ to $\overline{n^{\prime}}$...
As we mentioned in the introduction, we can make the $\text{SAX}^{\infty}$ judgment arbitrarily rich to support more complex patterns of recursion. As long as derivations in that system can be translated to $\text{SAX}^{\omega}$...
In this section, we extend SAX [DPP20] with recursion and arithmetic refinements in the style of Das and Pfenning [DP20b]. SAX is a logic-based formalism and subsuming paradigm [Lev04] for concurrent functional programming that conceives call-by-need and call-by-value strategies as particular concurrent schedules [PP2... | We define $\text{SAX}^{\infty}$, which extends the semi-axiomatic sequent calculus (SAX) [DPP20] with arithmetic refinements, recursion, and infinitely deep typing derivations (Section 2). Then, we define an auxiliary type system called $\text{SAX}^{\omega}$... | C |
In day-to-day life, people are encountering an ever-growing volume of media big data through various social media platforms such as Facebook, Twitter, and WeChat. As a result, it has become increasingly common for media owners to share their contents with multiple users. To handle the vast number of users and media co... | The owner-side efficiency and scalability performance of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Compared to FairCMS-I, it is easy to see that in FairCMS-II the cloud’s overhead is increased considerably due to the ado... | An intuitive approach to reduce overhead for the owner is to store the media contents in a cloud platform and, with the help of the cloud, share the media contents with the authorized users. It is evolving into an emerging technique called cloud media sharing [3, 4]. In this technique, on the one hand, the owner can make... | In the user-side embedding AFP, since the encrypted media content shared with different users is the same, the encryption of the media content is only executed once. In contrast, due to the personalization of D-LUTs, once a new user initiates a request, the owner must interact with this user to securely distribute the ... | Implement privacy-preserving access control. On the one hand, the cloud should be prevented from obtaining the private plaintext of the data it encounters, including the owner’s media content, the users’ fingerprints, and the LUTs. On the other hand, only users authorized by the owner can access the media content.
| B |
In this work, we proposed a graph neural network-based approach to modeling feature interactions. We designed a feature interaction selection mechanism, which can be seen as learning the graph structure by viewing the feature interactions as edges between features. | Modeling feature interactions is a crucial aspect of predictive analytics and has been widely studied in the literature. FM Rendle (2010) is a popular method that learns pairwise feature interactions through vector inner products. Since its introduction, several variants of FM have been proposed, including Field-aware ... | One of the main limitations of FM is that it is not able to capture higher-order feature interactions, which are interactions between three or more features. While higher-order FM (HOFM) has been proposed Rendle (2010, 2012) as a way to address this issue, it suffers from high complexity due to the combinatorial expans...
Factorization machines (FM) Rendle (2010, 2012) are a popular and effective method for modeling feature interactions: they learn a latent vector for each one-hot encoded feature and model the pairwise (second-order) interactions between features through the inner product of their respective vectors. FM has b... | In addition to not being able to effectively capture higher-order feature interactions, FM is also suboptimal because it considers the interactions between every pair of features, even if some of these interactions may not be beneficial for prediction Zhang et al. (2016); Su et al. (2020). These unhelpful feature inter... | A |
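The FM scoring rule summarized in this row is usually evaluated with Rendle's O(kd) reformulation of the pairwise term. A minimal sketch (the dimensions and random inputs are illustrative):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order FM score (Rendle, 2010) via the O(kd) identity:
    sum_{i<j} <v_i, v_j> x_i x_j
      = 0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2]."""
    linear = w0 + w @ x
    s = V.T @ x                     # (k,)
    s2 = (V ** 2).T @ (x ** 2)      # (k,)
    return linear + 0.5 * np.sum(s ** 2 - s2)

d, k = 10, 4
rng = np.random.default_rng(0)
x = rng.random(d)
print(fm_predict(x, 0.0, rng.normal(size=d), rng.normal(size=(d, k))))
```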
$h(\mathbf{x}_{t})\leq h(\mathbf{x}_{0})\left(1-\frac{\mu_{f}^{\mathcal{L}_{0}}\delta^{2}}{4\tilde{L}D^{2}}\right)^{\lceil(t-1)/2\rceil}.$
When the domain $\mathcal{X}$ is a polytope, one can obtain linear convergence in primal gap for a generalized self-concordant function using the well-known Away-step Frank-Wolfe (AFW) algorithm [Guélat & Marcotte, 1986, Lacoste-Julien & Jaggi, 2015] shown in Algorithm 5 | We also show improved convergence rates for several variants in various cases of interest and prove that the AFW [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and BPCG Tsuji et al. [2022] algorithms coupled with the backtracking line search of Pedregosa et al. [2020] can achieve linear convergence rates over polytopes wh...
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is... | We can make use of the proof of convergence in primal gap to prove linear convergence in Frank-Wolfe gap. In order to do so, we recall a quantity formally defined in Kerdreux et al. [2019] but already implicitly used earlier in Lacoste-Julien & Jaggi [2015] as:
| D |
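For reference alongside this row's discussion, here is a minimal vanilla Frank-Wolfe loop with the classic step size $\gamma_t = 2/(t+2)$ that also tracks the minimum Frank-Wolfe gap seen over the run; this is plain FW over an explicit vertex list, not the AFW/BPCG variants analyzed above, and all names are illustrative.

```python
import numpy as np

def frank_wolfe(grad, vertices, x0, steps=500):
    """Vanilla Frank-Wolfe over a polytope given by its vertex list."""
    x, min_gap = x0.astype(float).copy(), np.inf
    for t in range(steps):
        g = grad(x)
        v = vertices[np.argmin(vertices @ g)]   # linear minimization oracle
        min_gap = min(min_gap, g @ (x - v))     # Frank-Wolfe gap at x
        x += 2.0 / (t + 2.0) * (v - x)          # gamma_t = 2 / (t + 2)
    return x, min_gap

# Toy usage: minimize ||x - c||^2 over the probability simplex in R^3.
c = np.array([0.1, 0.2, 0.3])
x_star, gap = frank_wolfe(lambda x: 2 * (x - c),
                          np.eye(3), np.array([1.0, 0.0, 0.0]))
```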
Recalling from Section 3 that $\tau_{\textrm{max}}\stackrel{\text{def}}{=}1/\varepsilon^{6}$ | Otherwise, we will find an augmentation, and an augmenting path satisfying one of the two desired properties has been found.
This property is formalized in Observation 4.2 and the process for finding these odd cycles is formalized in Definition 4.3 and Lemma 4.4. |
The primary goal of Extend-Active-Paths is to extend active paths of a maximal (not necessarily maximum) number of distinct free nodes with respect to a given ordering of arcs. Algorithm 7 does not achieve the same guarantee. As a consequence of such behavior of Algorithm 7, Backtrack-Stuck-Structures potentially reduce... | Let $P$ be an alternating path belonging to $\mathcal{S}_{\alpha}$. A convenient property of having $P$ settled is that, once $P$ becomes settled, we show that all the arcs in $P$ at any point belong to the sam... | The rough idea of the proof is as follows. First, we observe that having a small number of short augmenting paths is a certificate for a good approximation, as formalized in Lemma 5.9. We use this observation to show in Lemma 5.10 that whenever we do not have a good approximation yet, we must find many augmenting paths... | D |
Many methods have been proposed to solve the problem (1) under various settings on the optimization objectives, network topologies, and communication protocols.
The paper [10] developed a decentralized subgradient descent method (DGD) with diminishing stepsizes to reach the optimum for convex objective functions over a... | Subsequently, decentralized optimization methods for undirected networks, or more generally, with doubly stochastic mixing matrices, have been extensively studied in the literature; see, e.g., [11, 12, 13, 14, 15, 16].
Among these works, EXTRA [14] was the first method to achieve linear convergence for strongly conv... | In this paper, we propose two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP...
We propose CPP – a novel decentralized optimization method with communication compression. The method works under a general class of compression operators and is shown to achieve linear convergence for strongly convex and smooth objective functions over general directed graphs. To the best of our knowledge, CPP is the... | In this paper, we consider decentralized optimization over general directed networks and propose a novel Compressed Push-Pull method (CPP) that combines Push-Pull/$\mathcal{AB}$ with a general class of unbiased compression operators. CPP enjoys large flexibility in both the com... | A |
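The DGD method cited at the start of this row admits a very compact sketch: each agent mixes its neighbors' iterates through a mixing matrix W and takes a diminishing (sub)gradient step. The sketch below simplifies to an undirected network with a doubly stochastic W; it illustrates plain DGD only, not the compressed Push-Pull (CPP) method this row proposes, and the toy objectives are assumptions.

```python
import numpy as np

def dgd(grads, W, x0, alpha0=0.1, iters=2000):
    """Decentralized (sub)gradient descent sketch:
    x_i <- sum_j W_ij x_j - alpha_k * grad f_i(x_i), diminishing steps."""
    n = len(grads)
    X = np.tile(x0, (n, 1))                   # one row of iterates per agent
    for k in range(1, iters + 1):
        alpha = alpha0 / np.sqrt(k)           # diminishing stepsize
        X = W @ X - alpha * np.vstack([g(x) for g, x in zip(grads, X)])
    return X.mean(axis=0)

# Toy usage: each of 3 agents holds f_i(x) = ||x - c_i||^2.
C = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
grads = [lambda x, c=c: 2 * (x - c) for c in C]
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
print(dgd(grads, W, np.zeros(2)))   # approaches the mean of c_i, (1/3, 1/3)
```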
Distributed optimization methods have already become integral to solving many problems, including numerous applications in machine learning.
For example, distributing training data evenly across multiple devices can greatly speed up the learning process. Recently, a new research direction has appeared concerning distributed opt... | Unlike classical distributed learning methods, the FL approach assumes that data is not stored within a centralized computing cluster but is stored on clients’ devices, such as laptops, phones, and tablets. This formulation of the training problem gives rise to many additional challenges, including the privacy of clien... | Discussions. We compare algorithms based on the balance of the local and global models, i.e. if an algorithm is able to train both the local and global models well, then we say it achieves the FL balance. The results show that the Local SGD technique (Algorithm 3) outperformed Algorithm 1 only with a fairly fre...
Data and model. We consider the benchmark of image classification on the CIFAR-10 [46] dataset. It contains 50,000 and 10,000 images in the training and validation sets, respectively, equally distributed over 10 classes. To emulate the distributed scenario, we partition the ... | Predicting the next word written on a mobile keyboard [3] is a typical example when the performance of a local (personalized) model is significantly ahead of the classical FL approach that trains only the global model.
Improving the local models using this additional knowledge may need a more careful balance, consideri... | A |
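The Local SGD technique mentioned in this row alternates a few local gradient steps per client with server-side averaging. A minimal FedAvg-style sketch with assumed toy objectives (the learning rate and number of local steps are illustrative):

```python
import numpy as np

def local_sgd_round(global_w, client_grads, lr=0.1, local_steps=5):
    """One Local SGD round: each client runs a few local gradient steps
    from the shared model, then the server averages the results."""
    local_models = []
    for grad in client_grads:              # one gradient oracle per client
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * grad(w)
        local_models.append(w)
    return np.mean(local_models, axis=0)   # server aggregation

# Toy usage: two clients with quadratic losses centred at +1 and -1.
grads = [lambda w: 2 * (w - 1.0), lambda w: 2 * (w + 1.0)]
w = np.array([5.0])
for _ in range(50):
    w = local_sgd_round(w, grads)
print(w)    # approaches 0, the optimum of the averaged objective
```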
Recent success in tackling two-player, constant-sum games (Silver et al., 2016; Vinyals et al., 2019) has outpaced progress in n-player, general-sum games despite a lot of interest (Jaderberg et al., 2019; Berner et al., 2019; Brown & Sandholm, 2019; Lockhart et al., 2020; Gray et al., 2020; Anthony et al., 2020). One ... | Recent success in tackling two-player, constant-sum games (Silver et al., 2016; Vinyals et al., 2019) has outpaced progress in n-player, general-sum games despite a lot of interest (Jaderberg et al., 2019; Berner et al., 2019; Brown & Sandholm, 2019; Lockhart et al., 2020; Gray et al., 2020; Anthony et al., 2020). One ... |
Outside of normal form (NF) games, this problem setting arises in multi-agent training when dealing with empirical games (also called meta-games), where a game payoff tensor is populated with expected outcomes between agents playing an extensive form (EF) game, for example the StarCraft League (Vinyals et al., 2019) a... |
In Section 2 we provide background on a) correlated equilibrium (CE), an important generalization of NE, b) coarse correlated equilibrium (CCE) (Moulin & Vial, 1978), a similar solution concept, and c) PSRO, a powerful multi-agent training algorithm. In Section 3 we propose novel solution concepts called Maximum Gini ... |
Policy-Space Response Oracles (PSRO) (Lanctot et al., 2017) (Algorithm 1) is an iterative population-based training method for multi-agent learning that generalizes other well-known algorithms such as fictitious play (FP) (Brown, 1951), fictitious self-play (FSP) (Heinrich et al., 2015) and double oracle (DO) (McMahan... | B |
Given $\eta>0$ and a query $q$, the Gaussian mechanism with noise parameter $\eta$ returns its empirical mean $q(s)$ after adding a random value, sampled from an unbiased Gaussian distribution with variance $\eta^{2}$... | Since achieving posterior accuracy is relatively straightforward, guaranteeing Bayes stability is the main challenge in leveraging this theorem to achieve distribution accuracy with respect to adaptively chosen queries. The following lemma gives a useful and intuitive characterization of the quantity that the Bayes sta... | In this section, we give a clean, new characterization of the harms of adaptivity. Our goal is to bound the distribution error of a mechanism that responds to queries generated by an adaptive analyst.
This bound will be achieved via a triangle inequality, by bounding both the posterior accuracy and the Bayes stability ... | Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific $q$ and $K(\cdot,v)$ as discussed in Section 6. The second part of this lemma implies that bounding the appropriate divergence is necessary and sufficient...
In order to leverage Lemma 3.5, we need a stability notion that implies Bayes stability of query responses in a manner that depends on the actual datasets and the actual queries (not just the worst case). In this section we propose such a notion and prove several key properties of it. Missing proofs from this section ... | B |
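The Gaussian mechanism described at the start of this row is essentially one line of NumPy: answer a query with its empirical mean plus zero-mean Gaussian noise of standard deviation $\eta$ (so variance $\eta^2$). A minimal sketch with an assumed bounded query:

```python
import numpy as np

def gaussian_mechanism(sample, query, eta, rng=np.random.default_rng()):
    """Return the empirical mean of `query` on `sample` plus zero-mean
    Gaussian noise with standard deviation eta (variance eta**2)."""
    empirical_mean = np.mean([query(x) for x in sample])
    return empirical_mean + rng.normal(loc=0.0, scale=eta)

# Toy usage: a bounded 0/1 query on a dataset of scalars.
s = np.random.default_rng(0).random(1000)
print(gaussian_mechanism(s, lambda x: float(x > 0.5), eta=0.05))
```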
For each $u\in\chi^{-1}(\dot{\mathsf{C}})$ we perform a number of $\mathcal{O}(n+m)$-time operations and run the dynamic programming algo...
Given a multigraph $G$ and a coloring $\chi$ of $G$ that properly colors some simple reducible FVC $(C,F)$, a reducible FVC $(C^{\prime},F^{\prime})$... | Note that the condition $|N_{G}(F)|\leq|C|+1$ trivially holds for any single-tree FVC. We will show that, given a reducible FVC $(C,F)$, we can efficiently reduce to a s...
Using the previous lemmas, the problem of finding a reducible single-tree FVC reduces to finding a coloring that properly colors a simple reducible FVC. We generate a set of colorings that is guaranteed to contain at least one such coloring. To generate this set we use the concept of a universal set. | Similar to the algorithm from Lemma 5.8, we can use two $(n+m,\mathcal{O}(k^{5}z^{2}))$-universal sets to create a set of c... | C |
Inspired by traditional image harmonization methods [182, 143] which applied color transformation to adjust the foreground appearance, Cong et al. [20] proposed to learn the color transformation using deep learning for image harmonization. They combined color-to-color transformation and pixel-to-pixel transformation in a ... | Zhu et al. [209] trained a composite image discriminator to predict the realism of composites by compositing each foreground with the background. This method is effective in using the realism of the composite image to measure the foreground-background compatibility, but computing the realism of all composite images is very... | Kulal et al. [68] adopted a similar approach, but focused on human generation. Chen et al. [16] proposed to promote the foreground fidelity by using high-frequency information. The works [68, 16] pointed out that multi-view datasets and video datasets can help simulate more diverse and realistic geometry perturbation. ... | By treating different capture conditions as different domains, Cong et al. [18] proposed a domain verification discriminator to pull the foreground domain and background domain close. Similarly, Cong et al. [19] formulated image harmonization as a background-guided domain translation task, in which the domain code of bac... | Some recent works [113, 133, 13] concurred that dynamic kernels acting upon feature maps can boost the harmonization performance. Furthermore, [113, 133] pointed out the importance of global information in dynamic kernel prediction. Niu et al. [112] studied domain adaptive image harmonization by treating different data... | D |
Our experimental results demonstrate that LPA outperforms LLD in most cases. This can be attributed to the fact that LPA optimizes the expected long-term revenues at each dispatching round, while LLD only focuses on the immediate reward. As a result, LPA is better suited for maximizing the total revenue of the system ... |
Data-driven analytical techniques have become increasingly prevalent in both the research community and industry for addressing various tasks in urban computing [1]. In recent years, several machine learning techniques, including deep learning [2, 3], transfer learning [4, 5], and reinforcement learning [6, 7], have b... |
In order to address the above challenges, this paper introduces CityNet, a multi-modal dataset comprising data from various cities and sources for smart city applications. Drawing inspiration from [13], we use the term “multi-modal” to reflect the diverse range of cities and sources from which CityNet is derived. In compa... | In the present study, we have introduced CityNet, a multi-modal dataset specifically designed for urban computing in smart cities, which incorporates spatio-temporally aligned urban data from multiple cities and diverse tasks. To the best of our knowledge, CityNet is the first dataset of its kind, which provides a comp... | To the best of our knowledge, CityNet is the first multi-modal urban dataset that aggregates and aligns sub-datasets from various tasks and cities. Using CityNet, we have provided a wide range of benchmarking results to inspire further research in areas such as spatio-temporal predictions, transfer learning, reinforcem... | C |
Input: Model architecture $\mathcal{A}$, likelihood function $\mathcal{L}(Y\,|\,X,\theta)$, prior distribution $p(\theta)$, data set $\mathcal{D}$
|
One of the most popular probabilistic models for regression problems is the Gaussian process williams1996gaussian . The main reason for its popularity is that it is one of the only Bayesian methods where the inference step (4) can be performed exactly, since the marginalization of multivariate normal distributions can... |
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat... | In general, neither the integral in the inference step (4) nor the one in the prediction step (5) can be computed exactly (conjugate priors Fink97acompendium , such as normal distributions, form an important exception). At inference there are two general classes of approximations available:
|
To obtain a point estimate for future predictions, the most popular choice is the conditional mean $\mathrm{E}[y^{*}\,|\,\mathbf{x}^{*},\mathcal{D}]$ | A |
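The exact GP inference referenced in this row boils down to conditioning a joint Gaussian, which is why the marginalization can be performed in closed form. A minimal sketch of the standard Cholesky-based posterior; the RBF kernel and the noise level are illustrative assumptions:

```python
import numpy as np

def gp_posterior(X, y, X_star, kernel, noise=1e-2):
    """Exact GP regression posterior mean and covariance; the inference
    integral is analytic because Gaussians are closed under conditioning."""
    K = kernel(X, X) + noise * np.eye(len(X))
    K_s = kernel(X, X_star)
    K_ss = kernel(X_star, X_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v
    return mean, cov

rbf = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2)  # 1-D RBF
X = np.linspace(0, 5, 10); y = np.sin(X)
mean, cov = gp_posterior(X, y, np.linspace(0, 5, 50), rbf)
```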
Fig. 1(a) shows that, except for Bar, the other tokens in a REMI sequence always occur consecutively in groups, in the order of Sub-bar, Pitch, Duration. We can further differentiate Bar(new) and Bar(cont), representing respectively the beginning of a new bar and a continuation of the current bar and always have one of... | Instead of feeding the token embedding of each of them individually to the Transformer, we can combine the token embedding of either the four tokens for MIDI scores or six tokens for MIDI performances in a group by concatenation and let the Transformer model
process them jointly, as depicted in Fig. 1(b). We can also m... | Moreover, we consider two types of MIDI data and compare the performance of the resulting PTMs. Specifically, following \textcite{oore2018time}, we differentiate two types of MIDI files: MIDI scores, which are musical scoresheets rendered directly into MIDI with no dynamics and exactly according to the written metrical g... | These constitute the main ideas of the CP representation \parencite{hsiao21aaai},
which has at least the following two advantages over its REMI counterpart: 1) the number of time steps needed to represent a MIDI piece is much reduced, since the tokens are merged into a “super token” (a.k.a. a “compound word” \parencitehs... | For the sequence-level tasks, which require only a prediction for an entire sequence, we follow \textcite{emopia} and choose the Bi-LSTM-Attn model from \textcite{lin2017structured} as our baseline, which was originally proposed for sentiment classification in NLP.
The model combines LSTM with a self-attention module for t... | A |
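The grouped-token idea summarized in this row, embedding each field of a compound token separately, concatenating, and projecting to the model dimension, can be sketched as follows; the field list and vocabulary sizes are illustrative assumptions, not the exact CP configuration:

```python
import torch
import torch.nn as nn

class CompoundTokenEmbedding(nn.Module):
    """Embed each field of a grouped token separately, concatenate, and
    project, so the Transformer sees one "super token" per group (an
    assumed minimal form of a CP-style input layer)."""
    def __init__(self, vocab_sizes, field_dim=64, model_dim=512):
        super().__init__()
        self.embs = nn.ModuleList(nn.Embedding(v, field_dim) for v in vocab_sizes)
        self.proj = nn.Linear(field_dim * len(vocab_sizes), model_dim)

    def forward(self, tokens):           # tokens: (batch, seq, n_fields)
        parts = [emb(tokens[..., i]) for i, emb in enumerate(self.embs)]
        return self.proj(torch.cat(parts, dim=-1))   # (batch, seq, model_dim)

# Toy usage with four fields (e.g. Bar, Sub-bar, Pitch, Duration).
layer = CompoundTokenEmbedding([4, 16, 128, 64])
x = layer(torch.randint(0, 4, (2, 8, 4)))   # -> (2, 8, 512)
```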
Let $G$ be a graph on $n$ vertices and $H$ its spanning subgraph. Then $\lambda(\chi(H)-1)+1\leq BBC_{\lambda}(G,H)\leq\lambda(\chi(H)-1)+n-\chi(H)+1$ | An obvious extension would be an analysis for a class of split graphs, i.e. graphs whose vertices can be partitioned into a maximum clique $C$ (of size $\omega(G)=\chi(G)$) and an independent set $I$.
A simple application of Theorem 2.18 gi... | Additionally, [16] proved that for comparability graphs we can find a partition of $V(G)$ into at most $k$ sets which induce semihamiltonian subgraphs in the complement of $G$ (i.e., each contains a Hamiltonian path), and from that it follows that $BBC_{2}(K_{n},G)$... | The $\lambda$-backbone coloring problem was studied for several classes of graphs, for example split graphs [5], planar graphs [3], complete graphs [6], and for several classes of backbones: matchings and disjoint stars [5], bipartite graphs [6] and forests [3].
For a special case $\lambda=2$ i...
Moreover, it was proved before in [4] that there exists a 2-approximate algorithm for complete graphs with bipartite backbones and a 3/2-approximate algorithm for complete graphs with connected bipartite backbones. Both algorithms run in linear time. As a corollary, it was proved that we can compute BBC... | C |