context
stringlengths
250
7.19k
A
stringlengths
250
3.38k
B
stringlengths
250
5.14k
C
stringlengths
250
5.47k
D
stringlengths
250
5.47k
label
stringclasses
4 values
\Big{]}.+ divide start_ARG ( italic_n - italic_m ) ( italic_n - italic_m - 2 ) ( italic_D + italic_n + italic_m ) ( italic_D + italic_n + italic_m + 2 ) end_ARG start_ARG 8 ( italic_D + 2 italic_m ) ( italic_D + 2 italic_m + 2 ) end_ARG italic_x start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT + ⋯ ] .
+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).start_ROW start_CELL italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ( italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT - 1 ) divide start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_d italic_x start_POSTSUPERSCRIPT 2 e...
Rnm⁢(x)=(−1)(n−m)/2⁢xm⁢P(n−m)/2(m+1−D/2,0)⁢(1−2⁢x2)=(n+1−D/2(n−m)/2)⁢xm⁢G−a⁢(2+m−D/2,2+m−D/2,x2).superscriptsubscript𝑅𝑛𝑚𝑥superscript1𝑛𝑚2superscript𝑥𝑚superscriptsubscript𝑃𝑛𝑚2𝑚1𝐷2012superscript𝑥2binomial𝑛1𝐷2𝑛𝑚2superscript𝑥𝑚subscript𝐺𝑎2𝑚𝐷22𝑚𝐷2superscript𝑥2R_{n}^{m}(x)=(-1)^{(n-m)/2}x^{m}P_{(n-m)...
^{2}-m^{2}\right]x^{2}\\ +D^{2}+D(m-1)-2m+m^{2}\Big{\}}\frac{d}{dx}R_{n}^{m}(x).start_ROW start_CELL italic_x start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT ( italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT - 1 ) start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT divide start_ARG italic_d start_POSTSUPERSCRIPT 3 end_POSTSUP...
m+D/2\end{array}\mid x^{2}\right),italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) = ( - 1 ) start_POSTSUPERSCRIPT ( italic_n - italic_m ) / 2 end_POSTSUPERSCRIPT ( FRACOP start_ARG divide start_ARG italic_D + italic_m + italic_n end_ARG start_ARG 2...
B
For example, computing the Bruhat decomposition of a random matrix in GL⁢(250,2)GL2502\textrm{GL}(250,2)GL ( 250 , 2 ) resulted in an SLP of length 353 969353969353\;969353 969. During the evaluation, our MSLP required 32323232 memory slots and it was easily possible to evaluate this MSLP on the standard generators of...
does not yield an upper bound for the memory requirement in a theoretical analysis. Moreover, the result of SlotUsagePattern improves the memory usage but it is not necessarily optimized overall and, hence, the number of slots can still be greater than the number of slots of a carefully computed MSLP. It should also be...
This adds only one extra MSLP instruction, in order to form and store the element x⁢v−1𝑥superscript𝑣1xv^{-1}italic_x italic_v start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT needed in the conjugate on the right-hand side of (2) (this element can later be overwritten and so does not add to the overall maximum memory quo...
The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
We note that after applying the function SlotUsagePattern, the resulting SLP only required 12121212 memory slots and could be evaluated in the same time as our MSLP. This is due to the fact that SlotUsagePattern was handed a well-designed SLP. When faced with an SLP not designed to be memory efficient, one might not ex...
D
The idea of using exponential decay to localize global problems was already considered by the interesting approach developed under the name of Localized Orthogonal Decomposition (LOD) [MR2831590, MR3591945, MR3246801, MR3552482] which are related to ideas of Variational Multiscale Methods [MR1660141, MR2300286]. In the...
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions with support weakly dependent of the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov...
The key to approximate (25) is the exponential decay of P⁢w𝑃𝑤Pwitalic_P italic_w, as long as w∈H1⁢(𝒯H)𝑤superscript𝐻1subscript𝒯𝐻w\in{H^{1}({\mathcal{T}_{H}})}italic_w ∈ italic_H start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT ( caligraphic_T start_POSTSUBSCRIPT italic_H end_POSTSUBSCRIPT ) has local support. That al...
The main bottle-neck in dealing with high-contrast coefficients is that the decay is slower, albeit still exponential, forcing the use of larger patches. To deal with this situation, we use a subspace of Λ~hfsuperscriptsubscript~Λℎ𝑓\tilde{\Lambda}_{h}^{f}over~ start_ARG roman_Λ end_ARG start_POSTSUBSCRIPT italic_h en...
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ...
D
We think Alg-A is better in almost every aspect. This is because it is essentially simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity O⁢(n)𝑂𝑛O(n)italic_O ( italic_n ) than the others:
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on P𝑃Pitalic_P’s vertices and (2) searching the next candidate from a given one is much easier – the code length for this is 1:7 in Alg-A and in Alg-CM.
Alg-A computes at most n𝑛nitalic_n candidate triangles (proof is trivial and omitted) whereas Alg-CM computes at most 5⁢n5𝑛5n5 italic_n triangles (proved in [8]) and so as Alg-K. (by experiment, Alg-CM and Alg-K have to compute roughly 4.66⁢n4.66𝑛4.66n4.66 italic_n candidate triangles.)
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases.
B
We consider two types of Ensemble Features: features accumulating crowd wisdom and averaging feature for the Tweet credit Scores. The former are extracted from the surface level while the latter comes from the low dimensional level of tweet embeddings; that in a way augments the sparse crowd at early stage.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet le...
For analyzing the employed features, we rank them by importances using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated to rumors and the sentiment associated to real events in relevant tweets. In specific, the average polarity score of new...
CrowdWisdom: Similar to [18], the core idea is to leverage the public’s common sense for rumor detection: If there are more people denying or doubting the truth of an event, this event is more likely to be a rumor. For this purpose,  [18] use an extensive list of bipolar sentiments with a set of combinational rules. In...
at an early stage. Our fully automatic, cascading rumor detection method follows the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha...
C
\prime}\left(u\right)=0roman_lim start_POSTSUBSCRIPT italic_u → ∞ end_POSTSUBSCRIPT roman_ℓ ( italic_u ) = roman_lim start_POSTSUBSCRIPT italic_u → ∞ end_POSTSUBSCRIPT roman_ℓ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_u ) = 0), a β𝛽\betaitalic_β-smooth function, i.e. its derivative is β𝛽\betaitalic_β-Lipsh...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very different on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a...
Assumption 1 includes many common loss functions, including the logistic, exp-loss222The exp-loss does not have a global β𝛽\betaitalic_β smoothness parameter. However, if we initialize with η<1/ℒ⁢(𝐰⁢(0))𝜂1ℒ𝐰0\eta<1/\mathcal{L}(\mathbf{w}(0))italic_η < 1 / caligraphic_L ( bold_w ( 0 ) ) then it is straightforward to...
loss function (Assumption 1) with an exponential tail (Assumption 3), any stepsize η<2⁢β−1⁢σmax−2⁢(𝐗 )𝜂2superscript𝛽1superscriptsubscript𝜎2𝐗 \eta<2\beta^{-1}\sigma_{\max}^{-2}\left(\text{$\mathbf{X}$ }\right)italic_η < 2 italic_β start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT italic_σ start_POSTSUBSCRIPT roman_max ...
C
For the evaluation, we shuffle the 180 selected events and split them into 10 subsets which are used for 10-fold cross-validation (we make sure to include near-balanced folds in our shuffle). For the experiments, we implement the 3 non-neural network models with Scikit-learn library111111scikit-learn.org/. Furthermore,...
The experiments’ results of the testing models are shown in Table 3. The best performance is achieved by the CNN+LSTM model with a good accuracy of 81.19%. The non-neural network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy and the other two non-neural models are even worse. So the cl...
The performance of user features is similar with the Twitter features, they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours of the user feature group is UserTweetsPerDays and it is the best feature overall in the first 4 hours, but its rank decreases with ...
As we can see in Figure 9 the best result on average over 48 hours is the BestSet. Second one is All features. Except those two, the best group feature is Text features. One reason is the text feature set has the largest group of feature with totally 16 features. But if look into each feature in text feature group, we ...
For analysing the employed features, we rank them by importances using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated to rumors and the sentiment associated to real events in relevant tweets. In specific, the average polarity score of news even...
A
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and types (Breaking and Anticipate) set of entity-bearing queries. This allows us to evaluate the feature performance i.e., salience and timeliness, with time and type ...
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, S⁢V⁢Ma⁢l⁢l𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT, will all features, gives a rather stable performance for both NDCG and Recall...
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at aspect-level, i.e., time and event type. Thus, we...
A
In this case, the agent must sequentially learn both the underlying dynamics (La,Σa;∀asubscript𝐿𝑎subscriptΣ𝑎for-all𝑎L_{a},\Sigma_{a};\forall aitalic_L start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT , roman_Σ start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT ; ∀ italic_a) and the conditional reward function’s variance ...
We now describe in detail how to use the SMC-based posterior random measure pM⁢(θt+1,a|ℋ1:t)subscript𝑝𝑀conditionalsubscript𝜃𝑡1𝑎subscriptℋ:1𝑡p_{M}(\theta_{t+1,a}|\mathcal{H}_{1:t})italic_p start_POSTSUBSCRIPT italic_M end_POSTSUBSCRIPT ( italic_θ start_POSTSUBSCRIPT italic_t + 1 , italic_a end_POSTSUBSCRIPT | cali...
If the support of q⁢(⋅)𝑞⋅q(\cdot)italic_q ( ⋅ ) includes the support of the distribution of interest p⁢(⋅)𝑝⋅p(\cdot)italic_p ( ⋅ ), one computes the IS estimator of a test function based on the normalized weights w(m)superscript𝑤𝑚w^{(m)}italic_w start_POSTSUPERSCRIPT ( italic_m ) end_POSTSUPERSCRIPT,
We observe noticeable (almost linear) regret increases when the dynamics of the parameters swap the identity of the optimal arm. However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters,
For the more interesting case of unknown parameters, we marginalize parameters Lasubscript𝐿𝑎L_{a}italic_L start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT and ΣasubscriptΣ𝑎\Sigma_{a}roman_Σ start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT of the transition distributions
C
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements match the number carbohydrate entries – it i...
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal...
For example, the correlation between blood glucose and carbohydrate for patient 14 was higest (0.47) at no lagging time step (ref. 23(c)). Whereas for the correlation between blood glucose and insulin was highest (0.28) with the lagging time = 4 (ref. 24(d)).
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measuremtents after the meals is within this range, while patient 12 has only t...
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements match the number carbohydrate entries – it i...
A
Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript ††{}^{\dagger}start_FLOATSUPERSCRIPT † end_FLOATSUPERSCRIPT represents models with a VGG16 backbone...
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba...
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met...
Our proposed encoder-decoder model clearly demonstrated competitive performance on two datasets towards visual saliency prediction. The ASPP module incorporated multi-scale information and global context based on semantic feature representations, which significantly improved the results both qualitatively and quantita...
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer...
B
Even though the reduction from MinLoc to MinPathwidth yields an O⁡(log⁡(opt)⁢log⁡(n))Oopt𝑛\operatorname{O}(\sqrt{\log(\operatorname{\textsf{opt}})}\log(n))roman_O ( square-root start_ARG roman_log ( opt ) end_ARG roman_log ( italic_n ) )-approximation algorithm for MinLoc, it is also important to directly investigate ...
We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21,...
Expecting an improvement of cutwidth approximation – a heavily researched area – by translating the problem into a string problem and then investigating the approximability of this string problem seems naive. This makes it even more surprising that linking cutwidth with pathwidth via the locality number is in fact hel...
One of the main results of this section is a reduction from the problem of computing the locality number of a word α𝛼\alphaitalic_α to the probem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr...
Even though the reduction from MinLoc to MinPathwidth yields an O⁡(log⁡(opt)⁢log⁡(n))Oopt𝑛\operatorname{O}(\sqrt{\log(\operatorname{\textsf{opt}})}\log(n))roman_O ( square-root start_ARG roman_log ( opt ) end_ARG roman_log ( italic_n ) )-approximation algorithm for MinLoc, it is also important to directly investigate ...
B
They applied selective data sampling on a CNN which increased the speed of the training by dynamically selecting misclassified negative samples during training. Weights are assigned to the training samples and informative samples are included in the next training iteration.
First, optimal paths in a computed flow field are found and then a CNN classifier is used for removing extraneous paths in the detected centerlines. The method was enhanced using a model-based detection of coronary specific territories and main branches to constrain the search space.
They applied the mean of a series of Gabor filters with varying frequencies and sigma values to the output of the network to determine whether a pixel represents a vessel or not. Besides finding that the optimal filters vary between channels, the authors also state the ‘need’ of enforcing the networks to align with hum...
A graph was then constructed from the retinal vascular network where the nodes are defined as the vessel branches and each edge gets associated to a cost that evaluates whether the two branches should have the same label. The CNN classification was propagated through the minimum spanning tree of the graph.
The arrows at the bottom denote the flow of the backpropagation starting after the calculation of the loss using the cost function J𝐽Jitalic_J, the original output y𝑦yitalic_y and the predicted output y^^𝑦\hat{y}over^ start_ARG italic_y end_ARG. This loss is backpropagated through the filters of the network adjustin...
C
We focused our work on learning games with 100100100100K interaction steps with the environment. In this section we present additional results for settings with 20202020K, 50505050K, 200200200200K, 500500500500K and 1111M interactions; see Figure 5 (a). Our results are poor with 20202020K interactions. For 50505050K th...
The results in these figures are generated by averaging 5555 runs for each game. The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5555 runs was often significantly better. For 6666 of the games, it exceeds the average human score (...
In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in low data regime of 100100100100k samples, on more than half of the games, our method achieves a score...
Finally, we verified if a model obtained with SimPLe using 100100100100K is a useful initialization for model-free PPO training. Based on the results depicted in Figure 5 (b) we can positively answer this conjecture. Lower asymptotic performance is probably due to worse exploration. A policy pre-trained with SimPLe was...
This demonstrates that SimPLe excels in a low data regime, but its advantage disappears with a bigger amount of data. Such a behavior, with fast growth at the beginning of training, but lower asymptotic performance is commonly observed when comparing model-based and model-free methods (Wang et al. (2019)). As observed ...
D
However more work needs to be done for full replacing non-trainable S2Is, not only from the scope of achieving higher accuracy results but also increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level...
Future work could include testing this hypothesis by initializing a ‘base model’ using transfer learning or other initialization methods. Moreover, trainable S2Is and 1D ‘base model’ variations could also be used for other physiological signals besides EEG such as Electrocardiography, Electromyography and Galvanic Skin...
This is achieved with the use of multilayer networks, that consist of million parameters [1], trained with backpropagation [2] on large amount of data. Although deep learning is mainly used in biomedical images there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ...
However more work needs to be done for full replacing non-trainable S2Is, not only from the scope of achieving higher accuracy results but also increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level...
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of a S2I is whether it consists of trainable para...
A
The paper’s organization is as follows. The second section presents the two main locomotion methods employed by the Cricket robot, rolling and walking, along with a description of two gaits designed for negotiating steps. In the third section, we outline the mathematical framework used for quantifying energy expenditur...
This section describes the primary locomotion modes, rolling and walking locomotion of our hybrid track-legged robot named Cricket shown in Fig. 2. It also introduces two proposed gaits designed specifically for step negotiation in quadrupedal wheel/track-legged robots.
The paper’s organization is as follows. The second section presents the two main locomotion methods employed by the Cricket robot, rolling and walking, along with a description of two gaits designed for negotiating steps. In the third section, we outline the mathematical framework used for quantifying energy expenditur...
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ...
This paper introduced an energy-centric method for automatic transitioning between locomotion modes in quadruped track-legged robots during step negotiation. Exhibiting flexibility, our methodology could be applied to a diverse range of wheel/track-legged robots, deriving transition thresholds from energy assessments d...
A
Suppose that you have an investment account with a significant amount in it, and that your financial institution advises you periodically on investments. One day, your banker informs you that company X will soon receive a big boost, and advises to use the entire account to buy stocks. If you were to completely trust th...
In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size k𝑘kitalic_k, let η𝜂\etaitalic_η denote the number of erroneous bits (which may be not known to the algorithm). In this setting, the objective would...
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ...
In this work we focus on the online computation with advice. Our motivation stems from observing that, unlike the real world, the advice under the known models is often closer to “fiat” than “recommendation”. Our objective is to propose a model which allows the possibility of incorrect advice, with the objective of ob...
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with larger number of ...
C
In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical details and we will study how the local and global value of a term is actual...
Note that this allows us to compare words across different categories since their values are all normalized in relation to stop words, which should have a similar frequency across all the categories111111Note that we are assuming here that we are working with textual information in which there exist highly frequent ele...
This subsection describes how classification is carried out. However, before we illustrate the overall process and for the sake of simplicity, we are going to assume there exist a function g⁢v⁢(w,c)𝑔𝑣𝑤𝑐gv(w,c)italic_g italic_v ( italic_w , italic_c ) to value words in relation to categories —and whose formal defini...
In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical details and we will study how the local and global value of a term is actual...
In Subsection 4.2 we will introduce the time-aware metric used to evaluate the effectiveness of the classifiers, in relation to the time taken to make the decision. Finally, Subsection 4.4 describes the different types of experiments carried out and the obtained results.
B
Stochastic gradient descent (SGD) and its variants (Robbins and Monro, 1951; Bottou, 2010; Johnson and Zhang, 2013; Zhao et al., 2018, 2020, 2021) have been the dominating optimization methods for solving (1). In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameter...
Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model trai...
With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training. These methods can be implemented on distributed frameworks like parameter server and al...
Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-red...
GMC can be easily implemented on the all-reduce distributed framework in which each worker sends the sparsified vector 𝒞⁢(𝐞t+12,k)𝒞subscript𝐞𝑡12𝑘\mathcal{C}({\bf e}_{t+\frac{1}{2},k})caligraphic_C ( bold_e start_POSTSUBSCRIPT italic_t + divide start_ARG 1 end_ARG start_ARG 2 end_ARG , italic_k end_POSTSUBSCRIPT )...
B
An advantage of SANs compared to Sparse Autoencoders [37] is that the constrain of activation proximity can be applied individually in each example instead of requiring the computation of forward-pass of all examples. Additionally, SANs create exact zeros instead near-zeros, which reduces co-adaptation between instance...
Previous work by Blier et al. [31] demonstrated the ability of DNNs to losslessly compress the input data and the weights, but without considering the number of non-zero activations. In this work we relax the lossless requirement and also consider neural networks purely as function approximators instead of probabilist ...
Regarding the φ𝜑\varphiitalic_φ metric and considering Eq. 17 our target is to estimate an as accurate as possible representation of 𝒙𝒙\bm{x}bold_italic_x through 𝜶(i)superscript𝜶𝑖\bm{\alpha}^{(i)}bold_italic_α start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT and 𝒘(i)superscript𝒘𝑖\bm{w}^{(i)}bold_italic...
Olshausen et al. [43] presented an objective function that considers subjective measures of sparseness of the activation maps, however in this work we use the direct measure of compression ratio. Previous work by [44] have used a weighted combination of the number of neurons, percentage root-mean-squared difference and...
φ𝜑\varphiitalic_φ could be seen as an alternative formalization of Occam’s razor [38] to Solomonov’s theory of inductive inference [39] but with a deterministic interpretation instead of a probabilistic one. The cost of the description of the data could be seen as proportional to the number of weights and the number o...
D
Game theory provides an efficient tool for the cooperation through resource allocation and sharing [20][21]. A computation offloading game has been designed in order to balance the UAV’s tradeoff between execution time and energy consumption [25]. A sub-modular game is adopted in the scheduling of beaconing periods fo...
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on current game state, and then another UAV ch...
In the literatures, most works search PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, there are limitations of this algorithm. In BLLA, each UAV can calculate and predict its utility for any si∈Sisubscript𝑠𝑖subscript𝑆𝑖s_{i}\in S_{i}italic_s start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ∈ it...
Compared with other algorithms, novel algorithm SPBLLA has more advantages in learning rate. Various algorithms have been employed in the UAV networks in search of the optimal channel selection [31][29], such as stochastic learning algorithm [30]. The most widely seen algorithm–LLA is an ideal method for NE approachin...
Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit...
C
as ∇¯^⋅𝐏^⋅^¯∇^𝐏\widehat{\overline{\nabla}}\cdot\widehat{\mathbf{P}}over^ start_ARG over¯ start_ARG ∇ end_ARG end_ARG ⋅ over^ start_ARG bold_P end_ARG, where 𝐏^=∇^¯⁢U¯^𝐏¯^∇¯𝑈\widehat{\mathbf{P}}=\overline{\widehat{\nabla}}\,\,\overline{U}over^ start_ARG bold_P end_ARG = over¯ start_ARG over^ start_ARG ∇ end_ARG end...
\,\overline{\psi}\right)+\overline{\eta}\,\,\left(\overline{\overline{\Delta^{% {}^{*}}}}\,\,\overline{\psi}\right)= - over¯ start_ARG bold_v end_ARG ⋅ ( over¯ start_ARG over¯ start_ARG ∇ end_ARG end_ARG over¯ start_ARG italic_ψ end_ARG ) + over¯ start_ARG italic_η end_ARG ( over¯ start_ARG over¯ start_ARG roman_Δ star...
The Δ¯¯¯¯Δ\overline{\overline{\Delta}}over¯ start_ARG over¯ start_ARG roman_Δ end_ARG end_ARG and Δ∗¯¯¯¯superscriptΔ\overline{\overline{\Delta^{{}^{*}}}}over¯ start_ARG over¯ start_ARG roman_Δ start_POSTSUPERSCRIPT start_FLOATSUPERSCRIPT ∗ end_FLOATSUPERSCRIPT end_POSTSUPERSCRIPT end_ARG end_ARG
d⁢V¯∗T[Δ¯¯U¯]\displaystyle\overline{dV}{}^{T}*\left[\overline{\overline{\Delta}}\,\,% \overline{U}\right]over¯ start_ARG italic_d italic_V end_ARG start_FLOATSUPERSCRIPT italic_T end_FLOATSUPERSCRIPT ∗ [ over¯ start_ARG over¯ start_ARG roman_Δ end_ARG end_ARG over¯ start_ARG italic_U end_ARG ]
and U¯=η¯⁢(Δ∗¯¯⁢ψ¯)¯𝑈¯𝜂¯¯superscriptΔ¯𝜓\overline{U}=\overline{\eta}\,\,\left(\overline{\overline{\Delta^{{}^{*}}}}\,% \,\overline{\psi}\right)over¯ start_ARG italic_U end_ARG = over¯ start_ARG italic_η end_ARG ( over¯ start_ARG over¯ start_ARG roman_Δ start_POSTSUPERSCRIPT start_FLOATSUPERSCRIPT ∗ end_FLOATSUPERSCRI...
C
When using the framework, one can further require reflexivity on the comparability functions, i.e. f⁢(xA,xA)=1A𝑓subscript𝑥𝐴subscript𝑥𝐴subscript1𝐴f(x_{A},x_{A})=1_{A}italic_f ( italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ) = 1 start_POSTSUBSCRIP...
fA⁢(u,v)=fB⁢(u,v)={1if ⁢u=v≠nullaif ⁢u≠null,v≠null and ⁢u≠vbif ⁢u=v=null0otherwise.subscript𝑓𝐴𝑢𝑣subscript𝑓𝐵𝑢𝑣cases1if 𝑢𝑣null𝑎formulae-sequenceif 𝑢null𝑣null and 𝑢𝑣𝑏if 𝑢𝑣null0otherwise.f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},v\neq\texttt{null}...
Intuitively, if an abstract value xAsubscript𝑥𝐴x_{A}italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT of ℒAsubscriptℒ𝐴\mathcal{L}_{A}caligraphic_L start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT is interpreted as 1111 (i.e., equality) by hAsubscriptℎ𝐴h_{A}italic_h start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT...
When using the framework, one can further require reflexivity on the comparability functions, i.e. f⁢(xA,xA)=1A𝑓subscript𝑥𝐴subscript𝑥𝐴subscript1𝐴f(x_{A},x_{A})=1_{A}italic_f ( italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ) = 1 start_POSTSUBSCRIP...
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows to consider absent values as possibly
D
Figure 6 shows the loss metrics of the three algorithms in CARTPOLE environment, this implies that using Dropout-DQN methods introduce more accurate gradient estimation of policies through iterations of different learning trails than DQN. The rate of convergence of one of Dropout-DQN methods has done more iterations t...
In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e...
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optim...
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene...
The results in Figure 3 show that using DQN with different Dropout methods result in better-preforming policies and less variability as the reduced standard deviation between the variants indicate to. In table 1, Wilcoxon Sign-Ranked test was used to analyze the effect of Variance before applying Dropout (DQN) and aft...
A
Table 3: A summary of medical image segmentation papers along with their type of proposed improvement. * indicates the count at the highest level. For example, if a paper reports counts of patients, volumes, slices, etc., we report the count of patients.
Table 2 lists a summary of selected papers from this review, the nature of their proposed contributions, and the datasets that they were evaluated on. For the papers that evaluated their models on the PASCAL VOC 2012 dataset (Everingham et al., 2012), one of the most popular image semantic segmentation dataset for natu...
PASCAL Context: The PASCAL Context dataset (Mottaghi et al., 2014) extended the PASCAL VOC 2010 Challenge dataset by providing pixel-wise annotations for the images, resulting in a much larger dataset with 19,740 annotated images and labels belonging to 540 categories.
Kim and Hwang (2016) proposed a weakly supervised semantic segmentation network using unpooling and deconvolution operations, and used feature maps from the deconvolutions layers to learn scale-invariant features, and evaluated their model on the PASCAL VOC and chest X-ray image datasets. Lee et al. (2019) used dropout...
Pascal VOC datasets: The PASCAL Visual Object Classes (VOC) Challenge (Everingham et al., 2010) was an annual challenge that ran from 2005 through 2012 and had annotations for several tasks such as classification, detection, and segmentation. The segmentation task was first introduced in the 2007 challenge and featured...
D
As a result the graph collapses, becoming densely connected and losing its original structure. On the other hand, topological pooling methods can preserve the graph structure by operating on the whole adjacency matrix at once to compute the coarsened graphs and are not affected by uninformative node features.
Then, we train a simple classifier consisting of a word embedding layer [53] of size 200, followed by a dense layer with a ReLU activation, a dropout layer [54] with probability 0.5, and a dense layer with sigmoid activation. After training, we extract the embedding vector of each word in the vocabulary and construct a...
Interestingly, the GNNs configured with GRACLUS and NDP always achieve better results than the Dense network, even if the latter generates the word embeddings used to build the graph on which the GNN operates. This can be explained by the fact that the Dense network immediately overfits the dataset, whereas the graph s...
We use a graph that encodes the similarity of all words in the vocabulary. Each graph signal represents a review and consists of a binary vector with size equal to the vocabulary, which assumes value 1 in correspondence of a word that appears at least once in the review, and 0 otherwise.
We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges. Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yield...
C
Sparse connectivity maintains the tree structures and has fewer weights to train. In practice, sparse weights require a special differentiable implementation, which can drastically decrease performance, especially when training on a GPU. Full connectivity optimizes all parameters of the fully connected network. Massice...
In this work, we present an imitation learning approach to generate neural networks from random forests, which results in very efficient models. We introduce a method for generating training data from a random forest that creates any amount of input-target pairs. With this data, a neural network is trained to imitate t...
These techniques, however, are only applicable to trees of limited depth. As the number of nodes grows exponentially with the increasing depth of the trees, inefficient representations are created, causing extremely high memory consumption. In this work, we address this issue by proposing an imitation learning-based me...
For training, we generate input-target pairs (x,y)𝑥𝑦(x,y)( italic_x , italic_y ) as described in the last section. These training examples are fed into the training process to teach the network to predict the same results as the random forest. To avoid overfitting, the data is generated on-the-fly so that each traini...
The number of parameters of the networks becomes enormous as the number of nodes grows exponentially with the increasing depth of the decision trees. Additionally, many weights are set to zero so that an inefficient representation is created. Due to both reasons, the mappings do not scale and are only applicable to sim...
B
Theoretically, we establish the sample efficiency of OPPO in an episodic setting of Markov decision processes (MDPs) with full-information feedback, where the transition dynamics are linear in features (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020). In particular, we allow the trans...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
Moreover, we prove that, even when the reward functions are adversarially chosen across the episodes, OPPO attains the same regret in terms of competing with the globally optimal policy in hindsight (Cesa-Bianchi and Lugosi, 2006; Bubeck and Cesa-Bianchi, 2012). In comparison, existing algorithms based on value iterati...
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed as OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
B
Inspired by ResNets whose skip connections have shown to reduce the vanishing gradient problem, densely connected CNNs (DenseNets) introduced by Huang et al. (2017) drive this idea even further by connecting each layer to all previous layers. DenseNets are conceptually very similar to ResNets—instead of adding the outp...
This reduces the number of features at the transition drastically, and by having the same number of channels as there are classes, it can also be used to completely remove fully connected layers. Secondly, they used 1×1111\times 11 × 1 convolutions with weight kernels 𝐖∈ℝ1×1×C×D𝐖superscriptℝ11𝐶𝐷\mathbf{W}\in\mathbb...
We use a DenseNet architecture (Huang et al., 2017) consisting of 100 layers with bottleneck and compression layers, i.e., a DenseNet-BC-100. We select the default growth rate of k=12𝑘12k=12italic_k = 12 for the model, i.e., the number of feature maps added per layer.
The spatial size of features detected within an image is bounded by the receptive field, i.e., the section of the input image that influences the value of a particular spatial location in some hidden layer. The receptive field is increased by stacking multiple convolutional layers, e.g., performing two consecutive 3×33...
Since this stacking necessarily increases the number of feature maps with each layer, the number of new feature maps computed by each layer is typically small. Furthermore, it is proposed to use compression layers after downscaling the spatial dimension with pooling, i.e., a 1×1111\times 11 × 1 convolution is used to r...
D
Since VRr⁢(X)⊆VRs⁢(X)subscriptVR𝑟𝑋subscriptVR𝑠𝑋\mathrm{VR}_{r}(X)\subseteq\mathrm{VR}_{s}(X)roman_VR start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT ( italic_X ) ⊆ roman_VR start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ( italic_X ) for all 0<r≤s0𝑟𝑠0<r\leq s0 < italic_r ≤ italic_s, this construction then naturally...
In particular, one can apply the homology functor to the Vietoris-Rips filtration of a metric space X𝑋Xitalic_X. This induces a persistence module (with T=ℝ>0𝑇subscriptℝabsent0T=\mathbb{R}_{>0}italic_T = blackboard_R start_POSTSUBSCRIPT > 0 end_POSTSUBSCRIPT) where the morphisms are those induced by inclusions. As a...
The persistent homology of the Vietoris-Rips filtration of a metric space provides a functorial way111Where for metric spaces X𝑋Xitalic_X and Y𝑌Yitalic_Y morphisms are given by 1111-Lipschitz maps ϕ:X→Y:italic-ϕ→𝑋𝑌\phi:X\rightarrow Yitalic_ϕ : italic_X → italic_Y, and for persistence modules V∗subscript𝑉V_{*}itali...
One main contribution of this paper is establishing a precise relationship (i.e. a filtered homotopy equivalence) between the Vietoris-Rips simplicial filtration of a metric space and a more geometric (or extrinsic) way of assigning a persistence module to a metric space, which consists of first isometrically embedding...
The notion of persistent homology arose from work by Frosini, Ferri, and Landi [40, 41], Robins [74], and Edelsbrunner [27, 37] and collaborators. After that, considering the persistent homology of the simplicial filtration induced from Vietoris-Rips complexes was a natural next step. For example, Carlsson and de Silv...
D
Nevertheless, while it may not reflect reality in the same way as, e.g., a large-scale field study performed with real-world experts in their actual working environment [81], the positive results from the study showed that our approach is promising and deserves to be developed and tested further, which will be done in ...
Overall Accuracy   We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q...
The second option of the Visual Mapping panel, the Remaining Cost, indicates (in the points’ sizes, by default) the final value of K⁢L⁢D⁢(Pi∥Qi)𝐾𝐿𝐷conditionalsubscript𝑃𝑖subscript𝑄𝑖KLD(P_{i}\|Q_{i})italic_K italic_L italic_D ( italic_P start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ∥ italic_Q start_POSTSUBSCRIPT ...
The remaining costs are one aspect of estimating the projection quality. This means that projected points with high remaining costs can be moved by an additional optimization step. Akin to this idea, t-viSNE might show a preview of the data points in the next optimization step. In consequence, users could determine whe...
After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ...
C
The first analysis focuses on taxonomies. Specifically, we provide several recommendations to improve research practices in this area. The growing number of nature-inspired proposals could be seen as a symptom of the active status of this field; however, its sharp evolution suggests that research efforts should be als...
Both taxonomies and the analysis provide a full overview of the situation of the bio-inspired optimization field. However, Figure 1 reflects the interest of research in this field, as the number of papers is in continuous growth of interest. We believe that it is essential to highlight and reflect on what is expected ...
The above statement is quantitatively supported by Figure 1, which depicts the increasing number of papers/book chapters published in the last years with bio-inspired optimization and nature-inspired optimization in their title, abstract and/or keywords. We have considered both bio-inspired and nature-inspired optimiz...
The second analysis delves into a critical perspective on bio-inspired optimization. It discusses the strengths, weaknesses, and challenges that have been identified in the field in recent years, while it also highlights the potential held for future developments in bio-inspired optimization.
The rest of this paper is organized as follows. In Section 2, we examine previous surveys, taxonomies, and reviews of nature- and bio-inspired algorithms reported so far in the literature. Section 3 delves into the taxonomy based on the inspiration of the algorithms. In Section 4, we present and populate the taxonomy b...
C
As well as the well-known k𝑘kitalic_k-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information so that they are available for the non-Euclidean type data, which is not provided by k𝑘kitalic_k-means. Therefore,...
As well as the well-known k𝑘kitalic_k-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information so that they are available for the non-Euclidean type data, which is not provided by k𝑘kitalic_k-means. Therefore,...
Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node while the latter learns to distinguish whether an edge exists between two nodes directly. In recent y...
Network embedding is a fundamental task for graph type data such as recommendation systems, social networks, etc. The goal is to map nodes of a given graph into latent features (namely embedding) such that the learned embedding can be utilized on node classification, node clustering, and link prediction.
To apply graph convolution on unsupervised learning, GAE is proposed [20]. GAE firstly transforms each node into latent representation (i.e., embedding) via GCN, and then aims to reconstruct some part of the input. GAEs proposed in [20, 29, 22] intend to reconstruct the adjacency via decoder while GAEs developed in [21...
C
These findings show that SMap offers benefits over the existing methods, providing better coverage of the ASes in the Internet and not requiring agents or conditions for obtaining traceroute loops, hence improving visibility of networks not enforcing ingress filtering.
We also want to understand the types of networks that we could test via domains-wide scans. To derive the business types we use the PeeringDB. We classify the ASes according to the following business types: content, enterprise, Network Service Provider (NSP), Cable/DSL/ISP, non-profit, educational/research, route serve...
There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th...
In order to understand if there are differences in enforcement of ingress filtering between different network types and different countries, we perform characterisation of the networks that we found to not be filtering spoofed packets. Specifically, we ask the following questions: Does business type of networks or geo-...
∙∙\bullet∙ Consent of the scanned. It is often impossible to request permission from owners of all the tested networks in advance, this challenge similarly applies to other Internet-wide studies (Lyon, 2009; Durumeric et al., 2013, 2014; Kührer et al., 2014). Like the other studies, (Durumeric et al., 2013, 2014), we ...
C
More specifically, natural odors consist of complex and variable mixtures of molecules present at variable concentrations [4]. Sensor variance arises from environmental dynamics of temperature, humidity, and background chemicals, all contributing to concept drift [5], as well as sensor drift arising from modification ...
This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ...
The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer....
While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape...
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first T−1𝑇1T-1italic_T - 1 batches are used for training, while the next unseen batch T𝑇Titalic_T is used for evaluation. When training the context network, subsequences of the training data a...
A
Third, we gave a 2O⁢(δ1−1/d)⁢nsuperscript2𝑂superscript𝛿11𝑑𝑛2^{O(\delta^{1-1/d})}n2 start_POSTSUPERSCRIPT italic_O ( italic_δ start_POSTSUPERSCRIPT 1 - 1 / italic_d end_POSTSUPERSCRIPT ) end_POSTSUPERSCRIPT italic_n expected time algorithm for random point sets.
The proof also gives a way to relate the expected running times of algorithms for any problem on two different kinds of random point sets: a version where the x𝑥xitalic_x-coordinates of the points are taken uniformly at random from [0,n]0𝑛[0,n][ 0 , italic_n ], and a version where the differences between two consecut...
Let Xnsubscript𝑋𝑛X_{n}italic_X start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT be a random point set of n𝑛nitalic_n points in ℝdsuperscriptℝ𝑑\mathbb{R}^{d}blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT, where the x𝑥xitalic_x-coordinates of the points are taken independently uniformly at random fro...
Let Ynsubscript𝑌𝑛Y_{n}italic_Y start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT be a random point set of n𝑛nitalic_n points in ℝdsuperscriptℝ𝑑\mathbb{R}^{d}blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT, where the spacings Δi=xi+1−xisubscriptΔ𝑖subscript𝑥𝑖1subscript𝑥𝑖\Delta_{i}=x_{i+1}-x_{i}roman...
Random point sets. In the third scenario the points in P𝑃Pitalic_P are drawn independently and uniformly at random from the hypercylinder [0,n]×Balld−1⁢(δ/2)0𝑛superscriptBall𝑑1𝛿2[0,n]\times\mathrm{Ball}^{d-1}(\delta/2)[ 0 , italic_n ] × roman_Ball start_POSTSUPERSCRIPT italic_d - 1 end_POSTSUPERSCRIPT ( italic_δ / ...
A
If S𝑆Sitalic_S and T𝑇Titalic_T are automaton semigroups such that there exist automata for S𝑆Sitalic_S and T𝑇Titalic_T with state sets P𝑃Pitalic_P and Q𝑄Qitalic_Q respectively and maps ϕ:P→Q:italic-ϕ→𝑃𝑄\phi:P\rightarrow Qitalic_ϕ : italic_P → italic_Q and ψ:Q→P:𝜓→𝑄𝑃\psi:Q\rightarrow Pitalic_ψ : italic_Q → it...
The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing ...
from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups111Note that the c...
In this paper, we extend the idea of the constructions used for these results in multiple directions. First, we generally consider partial automata for all of our results, i. e. we do not require the generating automaton to be complete (in contrary to many other results in the literature, for example those mentioned ab...
However, there do not seem to be constructions for presenting arbitrary free products of self-similar groups in a self-similar way. For semigroups, on the other hand, such results do exist. In fact, the free product of two automaton semigroups S𝑆Sitalic_S and T𝑇Titalic_T is always at least very close to being an auto...
C
Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization Grand and Belinkov (2019); Ramakrishnan et al. (2018) or to re-scale the loss based on the difficulty o...
HINT uses a ranking loss, which penalizes the model if the pair-wise rankings of the sensitivities of visual regions towards ground truth answers ag⁢tsubscript𝑎𝑔𝑡a_{gt}italic_a start_POSTSUBSCRIPT italic_g italic_t end_POSTSUBSCRIPT are different from the ranks computed from the human-based attention maps.
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende...
Both Human Importance Aware Network Tuning (HINT) Selvaraju et al. (2019) and Self Critical Reasoning (SCR) Wu and Mooney (2019), train the network to be more sensitive towards salient image regions by improving the alignment between visual cues and gradient-based sensitivity scores. HINT proposes a ranking loss betwe...
To reduce the reliance on linguistic priors, visual sensitivity enhancement methods attempt to train the model to be more sensitive to relevant visual regions when answering questions. Following Wu and Mooney (2019), we define the sensitivity of an answer a𝑎aitalic_a with respect to a visual region visubscript𝑣𝑖v_{i...
C
We created the PrivaSeer Corpus which is the first large scale corpus of contemporary website privacy policies and consists of just over 1 million documents. We designed a novel pipeline to build the corpus, which included web crawling, language detection, document classification, duplicate removal, document cross ver...
Figure 3 shows how the number of topics in privacy policies vary with respect to the PageRank value. The whiskers in the plot represent the 95% confidence interval of the means of the number of topics in the privacy policies in each PageRank value bin. The PageRank values were binned with a constant value of 0.25 such ...
Topic modelling showed the distribution of themes of privacy practices in policies, corresponding to the expectations of legal experts in some ways, but differing in others. The positive relationship between PageRank of a domain and the number of topics covered in its policy indicates that more popular domains have a s...
Topic Modelling. Topic modelling is an unsupervised machine learning method that extracts the most probable distribution of words into topics through an iterative process (Wallach, 2006). We used topic modelling to explore the distribution of themes of text in our corpus. Topic modelling using a large corpus such as P...
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ...
B
Considering all that, E3 noted that our system could be useful in solving competition problems, e.g., on Kaggle, and for her team to run tests before applying specific models to their huge data sets. Progressive VA workflows [53] could also be useful for improving the scalability of our approach for larger data sets.
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model leads us to stable design goals and analytical tasks that were realized by StackGenVis. With the c...
Figure 7: The exploration of the models’ and predictions’ spaces and the metamodel’s results. (a) presents the initial models’ space and how it can be simplified with the removal of unnecessary models. The predictions’ space is then updated, and the user is able to select instances that are not well classified by the ...
Interpretability and explainability is another challenge (mentioned by E3) in complicated ensemble methods, which is not necessarily always a problem depending on the data and the tasks. However, the utilization of user-selected weights for multiple validation metrics is one way towards interpreting and trusting the re...
Thus, it is considered an iterative process: the expert might start with the algorithms’ exploration and move to the data wrangling, or vice versa. “The former approach is even more suitable for your VA system, because you use the accuracy of the base ML models as feedback/guidance to the expert in order to understand ...
C
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of (v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that
cannot be adjacent to 2¯¯2\overline{2}over¯ start_ARG 2 end_ARG nor 3¯¯3\overline{3}over¯ start_ARG 3 end_ARG, and so f′superscript𝑓′f^{\prime}italic_f start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT is [013]delimited-[]013[013][ 013 ] or [010]delimited-[]010[010][ 010 ].
(E𝐂,(2¯,(u2,[013])))superscript𝐸𝐂¯2subscript𝑢2delimited-[]013(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))( italic_E start_POSTSUPERSCRIPT bold_C end_POSTSUPERSCRIPT , ( over¯ start_ARG 2 end_ARG , ( italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , [ 013 ] ) ) ), (E𝐂,((u1,[112]),(u2,[010])))superscript𝐸𝐂subscr...
B
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparative poorly in task-specific adaptation. Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the languag...
In text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy: RQ1. Since the parameter initialization lear...
B
In such mission-driven UAV networks, high-data-rate inter-UAV communications play a pivotal role. MmWave band has abundant spectrum resource, and is considered as a potential avenue to support high-throughput data transmission for UAV networks [9, 10, 7]. If the Line-of-Sight (LoS) propagation is available, mmWave comm...
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da...
In such mission-driven UAV networks, high-data-rate inter-UAV communications play a pivotal role. MmWave band has abundant spectrum resource, and is considered as a potential avenue to support high-throughput data transmission for UAV networks [9, 10, 7]. If the Line-of-Sight (LoS) propagation is available, mmWave comm...
When considering UAV communications with UPA or ULA, a UAV is typically modeled as a point in space without considering its size and shape. Actually, the size and shape can be utilized to support more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduce...
For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac...
C
The sentences PRESϕ∞superscriptsubscriptPRESitalic-ϕ\textsf{{PRES}}_{\phi}^{\infty}PRES start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ∞ end_POSTSUPERSCRIPT and PRESϕsubscriptPRESitalic-ϕ\textsf{{PRES}}_{\phi}PRES start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT are as required by Theorem 3.7.
a Type-Behavior Partitioned Graph Vector associated to a graph representation G𝒜subscript𝐺𝒜G_{\mathcal{A}}italic_G start_POSTSUBSCRIPT caligraphic_A end_POSTSUBSCRIPT for a model 𝒜𝒜\mathcal{A}caligraphic_A of ϕitalic-ϕ\phiitalic_ϕ. The sentence PRESϕsubscriptPRESitalic-ϕ\textsf{{PRES}}_{\phi}PRES start_POSTSUBSCRI...
We can then consider the vector of subgraphs G𝒜,πsubscript𝐺𝒜𝜋G_{\mathcal{A},\pi}italic_G start_POSTSUBSCRIPT caligraphic_A , italic_π end_POSTSUBSCRIPT and G𝒜,π,π′subscript𝐺𝒜𝜋superscript𝜋′G_{\mathcal{A},\pi,\pi^{\prime}}italic_G start_POSTSUBSCRIPT caligraphic_A , italic_π , italic_π start_POSTSUPERSCRIPT ′ en...
Note that we assume that the number of behavior functions of column j𝑗jitalic_j in A𝐴Aitalic_A is the same as the number of behavior functions of column j′superscript𝑗′j^{\prime}italic_j start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT in B𝐵Bitalic_B for every j∈[m]𝑗delimited-[]𝑚j\in[m]italic_j ∈ [ italic_m ] and ever...
Note that in a Type-Behavior Partitioned Graph Vector, information about 2222-types is coded in both the edge relation and in the partition, since the partition is defined via behavior functions. Thus there are additional dependencies on sizes for a Type-Behavior Partitioned Graph Vector of a model of ϕitalic-ϕ\phiital...
D
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Although Assumption 6.1 is strong, we are not aware of any weaker regularity condition in the literature, even in the linear setting (Melo et al., 2008; Zou et al., 2019; Chen et al., 2019b) and the NTK regime (Cai et al., 2019). Let the initial distribution ν0subscript𝜈0\nu_{0}italic_ν start_POSTSUBSCRIPT 0 end_POSTS...
Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Assumption 4.1 can be ensured by normalizing all state-action pairs. Such an assumption is commonly used in the mean-field analysis of neural networks (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Araújo et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). We remark that our analysis straightforwardly generalize...
B
Yu et al. (2018) suggest that skip connections are “shallow” themselves, and only fuse by simple, one-step operations, and therefore Yu et al. (2018) augment standard architectures with deeper aggregation to better fuse information across layers to improve recognition and resolution. Shen et al. (2018) propose a densel...
Yu et al. (2018) suggest that skip connections are “shallow” themselves, and only fuse by simple, one-step operations, and therefore Yu et al. (2018) augment standard architectures with deeper aggregation to better fuse information across layers to improve recognition and resolution. Shen et al. (2018) propose a densel...
For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et...
As for the costs, the decoder depth has a strong impact on inference speed, as the decoder has to be computed once for each decoding step during auto-regressive decoding Kasai et al. (2021); Xu et al. (2021c), and the use of only deep encoders Bapna et al. (2018); Wang et al. (2019); Li et al. (2022a); Chai et al. (20...
For the convergence of deep Transformers, Bapna et al. (2018) propose the Transparent Attention mechanism which allows each decoder layer to attend weighted combinations of all encoder layer outputs. Wang et al. (2019) present the Dynamic Linear Combination of Layers approach that additionally aggregates shallow layers...
D
through the map f:∏i∈IXi→Struct⁡(σ):𝑓→subscriptproduct𝑖𝐼subscript𝑋𝑖Structσf\colon\prod_{i\in I}X_{i}\to\operatorname{Struct}(\upsigma)italic_f : ∏ start_POSTSUBSCRIPT italic_i ∈ italic_I end_POSTSUBSCRIPT italic_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT → roman_Struct ( roman_σ ) that associates to each (Ai...
\upsigma_{i}]\rrbracket_{X_{i}}caligraphic_K start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT ( italic_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) = roman_τ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ∩ ⟦ sansserif_FO [ roman_σ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ] ⟧ start_POSTSUBSCRIPT italic_X start_P...
the disjoint union of the structures Aisubscript𝐴𝑖A_{i}italic_A start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT with εisubscript𝜀𝑖\varepsilon_{i}italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT true on the structure Aisubscript𝐴𝑖A_{i}italic_A start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT
x)\wedge\neg(x=y)italic_ψ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ≜ ∃ italic_x . ∃ italic_y . italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_x ) ∧ italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_x ) ∧ ¬ ( italic_x = italic_y ) for i∈I𝑖𝐼i\in Iitalic_i ∈ italic_I and θi,j≜∃x.∃y...
The set X𝑋Xitalic_X is the union (disjoint by construction) ⋃i∈Ifi⁢(Xi)subscript𝑖𝐼subscript𝑓𝑖subscript𝑋𝑖\bigcup_{i\in I}f_{i}(X_{i})⋃ start_POSTSUBSCRIPT italic_i ∈ italic_I end_POSTSUBSCRIPT italic_f start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) where...
B
Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and causes the insufficient distortion perception. To bridge the gap between image feature and calibration objective...
To exhibit the performance fairly, we employ three common network architectures VGG16, ResNet50, and InceptionV3 as the backbone networks of the learning model. The proposed MDLD metric is used to express the distortion estimation error due to its unique and fair measurement for distortion distribution. To be specific...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed o...
Figure 5: Comparison of two learning representations for distortion estimation, distortion parameter (left) and ordinal distortion (right). In contrast to the ambiguous relationship between the distortion distribution and distortion parameter, the proposed ordinal distortion displays an evident positive correlation to ...
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimate...
D
We can observe that for almost all batch sizes, the methods that adopt normalized gradients, including LARS, CLARS, and SNGM, achieve better performance than others. Compared to LARS and CLARS, SNGM achieves better test accuracy for different batch sizes.
showed that existing SGD methods with a large batch size will lead to a drop in the generalization accuracy of deep learning models. Figure 1 shows a comparison of training loss and test accuracy between MSGD with a small batch size and MSGD with a large batch size. We can find that large-batch training indeed
Figure 3 shows the validation perplexity of the three methods with a small batch size of 20 and a large batch size of 2000. In small-batch training, SNGM and LARS achieve validation perplexity comparable to that of MSGD. Meanwhile, in large-batch training, SNGM achieves better performance than MSGD and LARS.
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
D
When the algorithm terminates with Cs=∅subscript𝐶𝑠C_{s}=\emptysetitalic_C start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = ∅, Lemma 5.2 ensure the solution zfinalsuperscript𝑧finalz^{\text{final}}italic_z start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT is integral. By Lemma 5.5, any client j𝑗jitalic_j with d⁢(j,S)>...
For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here, ℱℱ\mathcal{F}caligraphic_F and 𝒞𝒞\mathcal{C...
        do FA←{ijA|j∈HA⁢ and ⁢FI∩GπI⁢j=∅}←subscript𝐹𝐴conditional-setsubscriptsuperscript𝑖𝐴𝑗𝑗subscript𝐻𝐴 and subscript𝐹𝐼subscript𝐺superscript𝜋𝐼𝑗F_{A}\leftarrow\{i^{A}_{j}~{}|~{}j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}italic_F start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ← { italic_i star...
  FAs¯←{ijA|j∈HA⁢ and ⁢FI∩GπI⁢j=∅}←subscriptsuperscript𝐹¯𝑠𝐴conditional-setsubscriptsuperscript𝑖𝐴𝑗𝑗subscript𝐻𝐴 and subscript𝐹𝐼subscript𝐺superscript𝜋𝐼𝑗F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}~{}|~{}j\in H_{A}\text{ and }F_{I}\cap G_{% \pi^{I}j}=\emptyset\}italic_F start_POSTSUPERSCRIPT over¯ start_ARG italic_s...
Brian Brubach was supported in part by NSF awards CCF-1422569 and CCF-1749864, and by research awards from Adobe. Nathaniel Grammel and Leonidas Tsepenekas were supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. Aravind Srinivasan was supported in part by NSF awa...
D
This together with the convergence of {‖X⁢(k,ω)−𝟏N⊗z∗⁢(ω)‖,k≥0}norm𝑋𝑘𝜔tensor-productsubscript1𝑁superscript𝑧𝜔𝑘0\{\|X(k,\omega)-\mathbf{1}_{N}\otimes z^{*}(\omega)\|,k\geq 0\}{ ∥ italic_X ( italic_k , italic_ω ) - bold_1 start_POSTSUBSCRIPT italic_N end_POSTSUBSCRIPT ⊗ italic_z start_POSTSUPERSCRIPT ∗ end_POSTSUP...
k)}|\mathcal{F}(k-1)\right]\succeq O_{N\times N}\ \mbox{a.s.},\ \mathcal{G}(k|% k-1)\text{ is balanced}\ \mbox{a.s.},\ k\geq 0\Big{\}}.roman_Γ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = { { caligraphic_G ( italic_k ) , italic_k ≥ 0 } | italic_E [ caligraphic_A start_POSTSUBSCRIPT caligraphic_G ( italic_k ) end_POSTSUBSC...
From the definition of Γ2subscriptΓ2\Gamma_{2}roman_Γ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, we know that Γ2⊆Γ1subscriptΓ2subscriptΓ1\Gamma_{2}\subseteq\Gamma_{1}roman_Γ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ⊆ roman_Γ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT. Then, similar to the proof of Theorem 2 in [25], we get ...
At first, we suppose {𝒢⁢(k),k≥0}𝒢𝑘𝑘0\{\mathcal{G}(k),k\geq 0\}{ caligraphic_G ( italic_k ) , italic_k ≥ 0 } is a Markov chain with countable state space. For this case, Condition (b.1) of Theorem III.1 becomes more intuitive and Condition (b.2) is weakened.
The proof of Theorem III.2 is similar to that of Theorem 3.1 and is omitted here. For details, see Appendix A. The only difference is that by the independence between ℒ𝒢⁢(i)subscriptℒ𝒢𝑖\mathcal{L}_{\mathcal{G}(i)}caligraphic_L start_POSTSUBSCRIPT caligraphic_G ( italic_i ) end_POSTSUBSCRIPT and ℒ𝒢⁢(j)subscriptℒ𝒢𝑗...
C
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
Note that, the application scenarios of differential privacy and the models of k𝑘kitalic_k-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of k𝑘kitalic_k-anonymity family sanitize the origi...
D
Figure 2: Example of segmentation results on validation dataset from three best single models: (a)(d) HTC, (b)(e) SOLOv2 and (c)(f) PointRend. PointRend predicts masks with substantially finer details around object boundaries. All figures are best viewed digitally with zoom.
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
As shown in Figure 2, we compare HTC, SOLOv2 and PointRend by visualizing their predictions. It can be seen that PointRend generates much finer and smoother segmentation boundaries than HTC and SOLOv2, it also handles overlapped instances gradely (see top-left corner in Figure 2). Meanwhile, PointRend succeeds in disti...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
C
(0⁢log⁡0:=0assign0000\log 0:=00 roman_log 0 := 0). The base of the log\logroman_log does not really matter here. For concreteness we take the log\logroman_log to base 2222. Note that if f𝑓fitalic_f has L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT norm 1111 then the sequence {|f^⁢(A)|2}A⊆[n]subsc...
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
where for A⊆[n]𝐴delimited-[]𝑛A\subseteq[n]italic_A ⊆ [ italic_n ], |A|𝐴|A|| italic_A | denotes the cardinality of A𝐴Aitalic_A. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma s...
A
Compared to OPT-WLSVI and MASTER, our proposed algorithms achieve comparable empirical performance. More specifically, MASTER outperforms our proposed algorithm which agrees with its dynamic regret upper bound. However, the variance of MASTER is larger due to the random scheduling of multiple base algorithms. Our algo...
From Figure 1, we find that the restart strategy works better under abrupt changes than under gradual changes, since the gap between our algorithms and the baseline algorithms designed for stationary environments is larger in this setting. The reason is that the algorithms designed to explore in stationary MDPs are gen...
Figure 1: Comparisons of different methods on cumulative reward under two different environments. The results are averaged over 10 trials and the error bars show the standard deviations. The environment changes abruptly in the left subfigure, whereas the environment changes gradually in the right subfigure.
For the case when the environment changes abruptly L𝐿Litalic_L times, our algorithm enjoys an O~⁢(L1/3⁢T2/3)~𝑂superscript𝐿13superscript𝑇23\tilde{O}(L^{1/3}T^{2/3})over~ start_ARG italic_O end_ARG ( italic_L start_POSTSUPERSCRIPT 1 / 3 end_POSTSUPERSCRIPT italic_T start_POSTSUPERSCRIPT 2 / 3 end_POSTSUPERSCRIPT ) dy...
\mathcal{S})+\frac{k-100i}{100}\bm{\mu}_{h}^{(i+1)\mod 5}(\mathcal{S}),bold_italic_μ start_POSTSUBSCRIPT italic_h , italic_k end_POSTSUBSCRIPT ( caligraphic_S ) = ( 1 - divide start_ARG italic_k - 100 italic_i end_ARG start_ARG 100 end_ARG ) bold_italic_μ start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT start_POSTSUPERSC...
B
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a...
D
Unlike many inductive methods that are solely evaluated on datasets with unseen entities, our method aims to produce high-quality embeddings for both seen and unseen entities across various downstream tasks. To our knowledge, decentRL is the first method capable of generating high-quality embeddings for different down...
GNN-based methods [13, 37, 38, 39, 40, 41, 42] introduce relation-specific composition operations to combine neighbors and their corresponding relations before performing neighborhood aggregation. They usually leverage existing GNN models, such as GCN and GAT [43, 44], to aggregate an entity’s neighbors. It is worth no...
The proposed DAN is compatible with most existing GNN-based methods, allowing these methods to leverage our DAN as the GNN module for entity encoding. Furthermore, the computational cost is comparable to that of existing methods. Therefore, we offer an efficient and general GNN architecture for KG embedding.
The existing methods for KG embedding and word embedding exhibit even more similarities. As shown in Figure 1, the KG comprises three triplets conveying similar information to the example sentence. Triplet-based KG embedding models like TransE [11] transform the embedding of each subject entity and its relation into a ...
We employ different adaptation strategies for various KG embedding tasks. In entity alignment, we follow the existing GNN-based methods [12, 39] to concatenate the output embeddings from each layer to form the final representation. This process can be written as follows:
B
We compare the model complexity of all the methods in Table I. VDM, RFM, and Disagreement use a fixed CNN for feature extraction. Thus, the trainable parameters of feature extractor are 0. ICM estimates the inverse dynamics for feature extraction with 2.21M parameters. ICM and RFM use the same architecture for dynamics...
Normalization methods. We normalize the intrinsic reward and advantage function in training for more stable performance. Since the reward generated by the environment are typically non-stationary, such normalization is useful for a smooth and stable update of the value function. In practice, we normalize the advantage ...
State preprocessing. In Atari games, the observations are raw images. The images are resized to 84×84848484\times 8484 × 84 pixels and converted to grayscale. The state stacks 4444 recent observations as a frame of shape 84×84×48484484\times 84\times 484 × 84 × 4. In both Super Mario and Atari games, we use the frame-s...
We demonstrate the setup of the experiment in Fig. 10. The equipment mainly includes an RGB-D camera that provides the image-based observations, a UR5 robot arm that interacts with the environment, and different objects in front of the robot arm. An example of the RGB-D image is shown in Fig. 11. We develop a robot en...
Network architecture. The proposal network contains 2222 fully-connected layers and 3333 residual blocks. The input to the proposal network contains features of the current state, next state, and action. In each layer, we integrate the action with features from the previous layer, which amplifies the impact of actions...
B
If we would add nodes to make the grid symmetric or tensorial, then the number of nodes of the resulting (sparse) tensorial grid would scale exponentially 𝒪⁢(nm)𝒪superscript𝑛𝑚\mathcal{O}(n^{m})caligraphic_O ( italic_n start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ) with space dimension m∈ℕ𝑚ℕm\in\mathbb{N}ital...
We realize the algorithm of Carl de Boor and Amon Ros [28, 29] in terms of Corollary 6.5 in case of the torus M=𝕋R,r2𝑀subscriptsuperscript𝕋2𝑅𝑟M=\mathbb{T}^{2}_{R,r}italic_M = blackboard_T start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_R , italic_r end_POSTSUBSCRIPT. That is, we consider
We complement the established notion of unisolvent nodes by the dual notion of unisolvence. That is: For given arbitrary nodes P𝑃Pitalic_P, determine the polynomial space ΠΠ\Piroman_Π such that P𝑃Pitalic_P is unisolvent with respect to ΠΠ\Piroman_Π. In doing so, we revisit earlier results by Carl de Boor and Amon Ros...
Here, we answer Questions 1–2. To do so, we generalize the notion of unisolvent nodes PAsubscript𝑃𝐴P_{A}italic_P start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT, A⊆ℕm𝐴superscriptℕ𝑚A\subseteq\mathbb{N}^{m}italic_A ⊆ blackboard_N start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT to non-tensorial grids. This allows us...
for a given polynomial space ΠΠ\Piroman_Π and a set of nodes P⊆ℝm𝑃superscriptℝ𝑚P\subseteq\mathbb{R}^{m}italic_P ⊆ blackboard_R start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT that is not unisolvent with respect to ΠΠ\Piroman_Π, find a maximum subset P0⊆Psubscript𝑃0𝑃P_{0}\subseteq Pitalic_P start_POSTSUBSCRIPT 0 ...
B
As a result, the sample complexity for estimating the Wasserstein distance W⁢(μ,ν)𝑊𝜇𝜈W(\mu,\nu)italic_W ( italic_μ , italic_ν ) up to ϵitalic-ϵ\epsilonitalic_ϵ sub-optimality gap is of order 𝒪~⁢(ϵd∨2)~𝒪superscriptitalic-ϵ𝑑2\tilde{\mathcal{O}}(\epsilon^{d\lor 2})over~ start_ARG caligraphic_O end_ARG ( italic_ϵ st...
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized. The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by ...
The 1111-Wasserstein distance can be viewed as a special IPM with ℱ=Lip1ℱsubscriptLip1\mathcal{F}=\text{Lip}_{1}caligraphic_F = Lip start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, where the Rademacher complexity of ℱℱ\mathcal{F}caligraphic_F is given by [42, Example 4]:
Motivated by Example 1, we propose the projected Wasserstein distance in Definition 2 to improve the sample complexity. This distance can be viewed as a special IPM with the function space defined in (1), a collection of 1111-Lipschitz functions in composition with an orthogonal k𝑘kitalic_k-dimensional linear mapping.
The orthogonal constraint on the projection mapping A𝐴Aitalic_A is for normalization, such that any two different projection mappings have distinct projection directions. The projected Wasserstein distance can also be viewed as a special case of integral probability metric with the function space
C
Specifically, we apply a DGM to learn the nuisance variables Z𝑍Zitalic_Z, conditioned on the output image of the first part, and use Z𝑍Zitalic_Z in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the details information captured in Z𝑍Z...
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise...
Figure 1: Image reconstruction using β𝛽\betaitalic_β-TCVAE (Figure 1b) and DS-VAE (Figure 1d). DS-VAE is able to take the blurry output of the underlying β𝛽\betaitalic_β-TCVAE model and learn to render a much better approximation to the target (Figure 1a). Figure 1c shows the effect of perturbing Z𝑍Zitalic_Z. DS-VA...
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
We introduce the DS-VAE framework for learning DR without compromising on the reconstruction quality. DS-VAE can be seamlessly applied to existing DGM-based DR learning methods, therefore, allowing them to learn a complete representation of the data.
D
The NOT gate can be operated in a logic-negative operation through one ‘twisting’ as in a 4-pin. To be exact, the position of the middle ground pin is fixed and is a structural transformation that changes the position of the remaining two true and false pins.
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations as a series and parallel connection were required. However, one can think about whether the four pin designs are the...
Fig. 3 is AND and/or gate consisting of 3-pin based logics, Fig. 3 also shows the connection status of the output pin when A=0, B=1 is entered in the AND gate. when A=0, B=1, or A is connected, and B is connected, output C is connected only to the following two pins, and this is the correct result for AND operation.
DFS (Depth First Search) verifies that the output is possible for the actual Pin connection state. As described above, the output is determined by the 3-pin input, so we will enter 1 with the A2 and A1 connections, the B2 and B1 connections (the reverse is treated as 0), and the corresponding output will be recognized...
We will look at the inputs through 18 test cases to see if the circuit is acceptable. Next, it verifies with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
B
Given a polynomial function f⁢(x)𝑓𝑥f(x)italic_f ( italic_x ) over a finite field 𝔽𝔽\mathbb{F}blackboard_F (or 𝔽nsuperscript𝔽𝑛\mathbb{F}^{n}blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT), determine if it is a permutation over 𝔽𝔽\mathbb{F}blackboard_F (𝔽nsuperscript𝔽𝑛\mathbb{F}^{n}blackboard...
Given a polynomial function f⁢(x)𝑓𝑥f(x)italic_f ( italic_x ) over a finite field 𝔽𝔽\mathbb{F}blackboard_F (or 𝔽nsuperscript𝔽𝑛\mathbb{F}^{n}blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT), determine if it is a permutation over 𝔽𝔽\mathbb{F}blackboard_F (𝔽nsuperscript𝔽𝑛\mathbb{F}^{n}blackboard...
We developed a linear representation theory for functions over 𝔽𝔽\mathbb{F}blackboard_F in the previous section. This section extends the idea to a family of functions over 𝔽𝔽\mathbb{F}blackboard_F defined through a 𝔽𝔽\mathbb{F}blackboard_F-valued parameter. The well-known Dickson polynomial is one such motivatin...
Given an 1111-parameter family of maps over 𝔽𝔽\mathbb{F}blackboard_F, determine if it is parametrically invertible over 𝔽𝔽\mathbb{F}blackboard_F. It is also shown in this paper that the compositional inverse of a 1111-parameter family of permutation polynomials is also a 1111-parameter family of permutation polynom...
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over 𝔽𝔽\mathbb{F}b...
C
A particular challenge of the aforementioned joint classification and view selection problem is its inherent trade-off between accuracy and sparsity. For example, the most accurate model may not perform the best in terms of view selection. In fact, the prediction-optimal amount of regularization causes the lasso to sel...
For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu \BOthers., \APACyear2012). An exam...
Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expressi...
In terms of view selection, each of the 10×10101010\times 1010 × 10 fitted models is associated with a set of selected views. However, quantities like TPR, FPR and FDR cannot be computed since the true status of the views is unknown. We therefore report the number of selected views, since this allows assessment of mode...
Another relevant factor is interpretability of the set of selected views. Although sparser models are typically considered more interpretable, a researcher may be interested in interpreting not only the model and its coefficients, but also the set of selected views. For example, one may wish to make decisions on which...
D
When applying a proximity-based method to this dataset, it may incorrectly label object a2subscript𝑎2a_{2}italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT as an anomaly. Proximity-based methods tend to report objects that lie in sparsely populated regions as anomalies, which is why a2subscript𝑎2a_{2}italic_a start_P...
This example highlights the fundamental difference between proximity-based and dependency-based methods. Dependency-based methods focus on identifying anomalies based on underlying relationships between variables, whereas proximity-based methods rely on object similarity in terms of proximity. In cases like this, where...
The proximity-based approach is mainstream in anomaly detection [8, 9, 10, 11], and operates on the assumption that anomalies are objects that exhibit significant distance or sparsity in their neighborhood compared to other objects. The anomalousness of an object is determined by its proximity to neighboring objects. P...
Dependency-based approach is fundamentally different from proximity-based approach because it considers the relationship among variables, while proximity-based approach examines the relationship among objects. We use an example to explain the difference between the two approaches.
The dependency-based approach works under the assumption that anomalies deviate from the normal dependency among variables, and the extend of anomalousness is evaluated based on this deviation. While the proximity-based approach that focuses on relationships among objects, the dependency-based approach emphasizes on t...
A
Comparison with Filippi et al. [2010] Our setting is different from the standard generalized linear bandit of Filippi et al. [2010]. In our setting, the reward due to an action (assortment) can be dependent on up to K𝐾Kitalic_K variables (θ∗⋅xt,i,i∈𝒬t⋅subscript𝜃subscript𝑥𝑡𝑖𝑖subscript𝒬𝑡\theta_{*}\cdot x_{t,i},\...
In this section we compare the empirical performance of our proposed algorithm CB-MNL with the previous state of the art in the MNL contextual bandit literature: UCB-MNL[Oh & Iyengar, 2021] and TS-MNL[Oh & Iyengar, 2019] on artificial data. We focus on performance comparison for varying values of parameter κ𝜅\kappait...
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
Algorithm 1 follows the template of in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. Technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a m...
D
Cross-scale graph pyramid network (xGPN). From Table 3 and 4, we can see that xGPN obviously improves the performance of short actions as well as the overall performance. On the one hand, xGPN utilizes long-range correlations in multi-level features and benefits actions of various lengths. On the other hand, xGPN enabl...
Multi-scale input. The magnification process may inevitably impair the information in the clip, thus the original video clip, which contains the original intact information, is also necessary. To take advantage of the complementary properties of both scales, we design a video stitching technique to piece them together...
We compare the performance of our proposed VSGN to recent representative methods in the literature on the two datasets in Table 1 and Table 2, respectively. On both datasets, VSGN achieves state-of-the-art performance, reaching mAP 52.4% at tIoU 0.5 on THUMOS and average mAP 35.07% on ActivityNet. It significantly outp...
Cross-scale correlations. The original clip and the magnified clip, albeit different, are highly correlated since they contain the same video content. If we can utilize their correlations and draw connections between their features, then the impaired information in the magnified clip can be rectified by the original cl...
Clip O and Clip U. In Table 5, we compare the performance when generating predictions only from Clip O, only from Clip U, and from both with the same well-trained VSGN model. We can see that the two clips still result in different performance even after their features are aggregated throughout the network. Clip O is be...
D
(2) project the models into a hyperparameter embedding according to the previous overall performance using DR methods; (3) compare the mean performance of all algorithms and models vs. a selection of models for every metric; and (4) analyze the predictive results for each instance and for all models against a selection...
Afterwards, in Section 3, we describe the analytical requirements and design goals for attaching VA to evolutionary optimization and combining VA with ensemble learning. Section 4 presents the functionalities of the tool and, at the same time, describes the first use case with the goal of selecting a composition of mod...
G2: Migration of powerful and alternative models to the majority-voting ensemble. In continuation of the preceding goal, our VA tool should allow the users to pick the best (and most diverse) models for the ensemble from different areas in the projection (R2).
an implementation of the aforementioned conceptual proposal, our VA tool called VisEvol, that consists of a novel combination of interactive coordinated views—which control the crossover and mutation processes—and supports the visual exploration of the most performant/diverse models for the creation of a powerful majo...
In this paper, we presented VisEvol, a VA tool with the aim to support hyperparameter search through evolutionary optimization. With the utilization of multiple coordinated views, we allow users to generate new hyperparameter sets and store the already robust hyperparameters in a majority-voting ensemble. Exploring th...
B
In [8], the Metropolis-Hastings algorithm is extended to incorporate safety upper bound constraints on the probability vector. This paper includes numerical simulations that demonstrate the application of the extension in a probabilistic swarm guidance problem. In order to enhance convergence rates, [9] introduces a co...
However, all feedback-based algorithms mentioned above require global feedback on the state of the density distribution. Communication between all agents has to be established to estimate the density distribution in the probabilistic swarm guidance problem.
In [10], the approach presented in [9] is enhanced by incorporating state-feedback to further improve the convergence rate. These works are also extended to impose density upper bounds and density rate constraints in [11] and density flow and density diffusivity constraints in [12].
For the probabilistic swarm guidance application, removing the assumption that agents have access to density values of their own and neighboring bins will be the subject of future studies. A useful extension of this research may involve imposing safety constraints on the density distribution of the swarm, such as densi...
This algorithm treats the spatial distribution of swarm agents, called the density distribution, as a probability distribution and employs the Metropolis-Hastings (M-H) algorithm to synthesize a Markov chain that guides the density distribution toward a desired state. The probabilistic guidance algorithm led to the dev...
B
While (near)-isometric shape matching has been studied extensively for the case of matching a pair of shapes, the isometric multi-shape matching problem, where an entire collection of (near-isometric) shapes is to be matched, is less explored. Important applications of isometric multi-shape matching include learning lo...
Despite the exponential size of the search space, there exist efficient polynomial-time algorithms to solve the LAP [11]. A downside of the LAP is that the geometric relation between points is not explicitly taken into account, so that the found matchings lack spatial smoothness. To compensate for this, a correspondenc...
In principle, any pairwise shape matching method can be used for matching a shape collection. To do so, one can select one of the shapes as reference, and then solve a sequence of pairwise shape matching problems between each of the remaining shapes and the reference. However, a major disadvantage is that such an appr...
Most pipelines for partial matching include the full reference shape to resolve some of the complexity. Although our optimisation does not need any information about the complete geometry, we use a partiality-adjusted version of ZoomOut to obtain the shape-to-universe initialisation for IsoMuSh. In this case, the optim...
Alternatively, one could solve pairwise shape matching problems between all pairs of shapes in the shape collection. Although this way there is no bias, in general the resulting correspondences are not cycle-consistent. As such, matching shape A via shape B to shape C, may lead to a different correspondence than matchi...
B
The recognition algorithm RecognizePG for path graph is mainly built on path graphs’ characterization in [1]. This characterization decomposes the input graph G𝐺Gitalic_G by clique separators as in [18], then at the recursive step one has to find a proper vertex coloring of an antipodality graph satisfying some parti...
On the side of directed path graphs, at the state of art, our algorithm is the only one that does not use the results in [4], in which it is given a linear time algorithm able to establish whether a path graph is a directed path graph too (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ...
interval graphs ⊂ rooted path graphs ⊂ directed path graphs ⊂ path graphs ⊂ chordal graphs.interval graphs ⊂ rooted path graphs ⊂ directed path graphs ⊂ path graphs ⊂ chordal graphs\text{interval graphs $\subset$ rooted path graphs $\subset$ directed path % graphs $\subset$ path graphs $\subset$ chordal graphs}.interva...
The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prov...
Directed path graphs are characterized by Gavril [9], in the same article he also gives the first recognition algorithms that has O⁢(n4)𝑂superscript𝑛4O(n^{4})italic_O ( italic_n start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT ) time complexity. In the above cited article, Monma and Wei [18] give the second characterizati...
C
In experiments 1(a) and 1(b), we study how the fraction of pure nodes affects the behaviors of these mixed membership community detection methods under MMSB and DCMM, respectively. We fix (x,ρ)=(0.4,0.1)𝑥𝜌0.40.1(x,\rho)=(0.4,0.1)( italic_x , italic_ρ ) = ( 0.4 , 0.1 ) and let n0subscript𝑛0n_{0}italic_n start_POSTSUB...
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both two approaches perform better than OCCAM and GeoNMF under the MMSB setting....
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as ρ𝜌\rhoitalic_ρ increases they all perform poorer. Under the DCMM model, the mixed Humming ...
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. the proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both two methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha...
D
For any functional F:ℳ→ℝ:𝐹→ℳℝF\colon\mathcal{M}\rightarrow\mathbb{R}italic_F : caligraphic_M → blackboard_R, we let grad⁡Fgrad𝐹\operatorname{{\mathrm{grad}}}Froman_grad italic_F denote the functional gradient of F𝐹Fitalic_F with respect to the Riemannian metric g𝑔gitalic_g.
Here the statistical error is incurred in estimating the Wasserstein gradient by solving the dual maximization problem using functions in a reproducing kernel Hilbert space (RKHS) with finite data, which converges sublinearly to zero as the number of particles goes to infinity. Therefore, in this scenario, variational ...
we prove that variational transport constructs a sequence of probability distributions that converges linearly to the global minimizer of the objective functional up to a statistical error due to estimating the Wasserstein gradient with finite particles. Moreover, such a statistical error converges to zero as the numbe...
Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to certain statistical...
To study optimization problems on the space of probability measures, we first introduce the background knowledge of the Riemannian manifold and the Wasserstein space. In addition, to analyze the statistical estimation problem that arises in estimating the Wasserstein gradient, we introduce the reproducing kernel Hilber...
D
Mixedl. The mixedl is a mixed low traffic flow with a total flow of 2550 in one hour, to simulate a light peak. The arrival rate changes every 10 minutes, which is used to simulate the uneven traffic flow distribution in the real world, the details of the vehicle arrival rate and cumulative traffic flow are shown in F...
Mixedh. The mixedh is a mixed high traffic flow with a total flow of 4770 in one hour, in order to simulate a heavy peak. The difference from the mixedl setting is that the arrival rate of vehicles during 1200-1800s increased from 0.33 vehicles/s to 4.0 vehicles/s. The data statistics are listed in Tab. II.
Mixedl. The mixedl is a mixed low traffic flow with a total flow of 2550 in one hour, to simulate a light peak. The arrival rate changes every 10 minutes, which is used to simulate the uneven traffic flow distribution in the real world, the details of the vehicle arrival rate and cumulative traffic flow are shown in F...
Real. The traffic flows of Hangzhou (China), Jinan (China) and New York (USA) are from the public datasets444https://traffic-signal-control.github.io/, which are processed from multiple sources. The traffic flow of Shenzhen (China) is made by ourselves generated based on the traffic trajectories collected from 80 red-...
We run the experiments under three traffic flow configurations: real traffic flow, mixed low traffic flow and mixed high traffic flow. The real traffic flow is real-world hourly statistical data with slight variance in vehicle arrival rates, as shown in Tab. I. Since the real-world strategies tend to break down during ...
A
at a certain 𝐳jsubscript𝐳𝑗\mathbf{z}_{j}bold_z start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT with ϕ⁢(𝐳j)=𝐱ˇjitalic-ϕsubscript𝐳𝑗subscriptˇ𝐱𝑗\phi(\mathbf{z}_{j})\,=\,\check{\mathbf{x}}_{j}italic_ϕ ( bold_z start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) = overroman_ˇ start_ARG bold_x end_ARG start_POSTSUBSCRI...
ϕ𝐳⁢(𝐳j)⁢ϕ𝐳⁢(𝐳j)†⁢𝐱j−ϕ⁢(𝐳j)‖𝐱j−ϕ⁢(𝐳j)‖2=ϕ𝐳⁢(𝐳j)‖𝐱j−ϕ⁢(𝐳j)‖2⁢(ϕ𝐳⁢(𝐳j)†⁢(𝐱j−ϕ⁢(𝐳j)))=𝟎.subscriptitalic-ϕ𝐳subscript𝐳𝑗subscriptitalic-ϕ𝐳superscriptsubscript𝐳𝑗†subscript𝐱𝑗italic-ϕsubscript𝐳𝑗subscriptnormsubscript𝐱𝑗italic-ϕsubscript𝐳𝑗2subscriptitalic-ϕ𝐳subscript𝐳𝑗subscriptnormsubscript𝐱𝑗it...
‖𝐱j−𝐱ˇj‖2=min𝐳∈Δ⁡‖𝐱j−ϕ⁢(𝐳)‖2=‖𝐱j−ϕ⁢(𝐳j)‖2subscriptnormsubscript𝐱𝑗subscriptˇ𝐱𝑗2subscript𝐳Δsubscriptnormsubscript𝐱𝑗italic-ϕ𝐳2subscriptnormsubscript𝐱𝑗italic-ϕsubscript𝐳𝑗2\|\mathbf{x}_{j}-\check{\mathbf{x}}_{j}\|_{2}~{}~{}=~{}~{}\min_{\mathbf{z}\in% \Delta}\|\mathbf{x}_{j}-\phi(\mathbf{z})\|_{2}~{}~{}=~{...
≤μ⁢(‖𝐱j−1−𝐱j‖2+‖𝐱j−𝐱j+1‖2+‖𝐱j+1−𝐱j+2‖2+⋯)absent𝜇subscriptnormsubscript𝐱𝑗1subscript𝐱𝑗2subscriptnormsubscript𝐱𝑗subscript𝐱𝑗12subscriptnormsubscript𝐱𝑗1subscript𝐱𝑗22⋯\displaystyle~{}\leq~{}\mu\,\big{(}\|\mathbf{x}_{j-1}-\mathbf{x}_{j}\|_{2}+\|% \mathbf{x}_{j}-\mathbf{x}_{j+1}\|_{2}+\|\mathbf{x}_{j+1}-\mat...
at a certain 𝐳jsubscript𝐳𝑗\mathbf{z}_{j}bold_z start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT with ϕ⁢(𝐳j)=𝐱ˇjitalic-ϕsubscript𝐳𝑗subscriptˇ𝐱𝑗\phi(\mathbf{z}_{j})\,=\,\check{\mathbf{x}}_{j}italic_ϕ ( bold_z start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) = overroman_ˇ start_ARG bold_x end_ARG start_POSTSUBSCRI...
A
((1+ϵ′)2<1+3⁢ϵ′superscript1superscriptitalic-ϵ′213superscriptitalic-ϵ′(1+\epsilon^{\prime})^{2}<1+3\epsilon^{\prime}( 1 + italic_ϵ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ) start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT < 1 + 3 italic_ϵ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT)
We will now use Lemma 2 to prove a more general result that incorporates the prediction error into the analysis. To this end, we will relate the cost of the packing of ProfilePacking to the packing that the algorithm would output if the prediction were error-free, which will allow us to apply the result of Lemma 2. Spe...
A second approach could be along the lines of (?), which describe a general method for combining an optimistic algorithm that trusts the prediction (in our context, ProfilePacking) and a pessimistic algorithm that ignores the prediction (in our context, the online algorithm A𝐴Aitalic_A). The optimistic and pessimisti...
Online bin packing was recently studied under an extension of the advice complexity model, in which the advice may be untrusted (?). Here, the algorithm’s performance is evaluated only at the extreme cases in which the advice is either error-free or adversarially generated, namely with respect to its consistency and i...
We first show that in the ideal setting of error-free prediction, ProfilePacking is near-optimal (Lemma 2). This result will be very useful in the analysis of the more realistic setting of erroneous predictions (Theorem 3). We denote by ϵitalic-ϵ\epsilonitalic_ϵ any fixed constant less than 0.5, and in order to achiev...
A
Table 2: Shape auto-encoding on the ShapeNet dataset. The best results are highlighted in bold. CD is multiplied by 104superscript10410^{4}10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT, and EMD is multiplied by 102superscript10210^{2}10 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. (HC) denotes the HyperCloud autoencod...
In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Thro...
In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar...
For the point cloud representation, the crucial step is to define reconstruction loss that can be used in the autoencoding framework. In the literature, two distance measures are successively applied: Earth Mover’s (Wasserstein) Distance (Rubner et al., 2000), and Chamfer pseudo-distance (Tran, 2013).
We examine the generative capabilities of the provided LoCondA model compared to the existing reference approaches. In this experiment, we follow the evaluation protocol provided in (Yang et al., 2019). We use standard measures for this task like Jensen-Shannon Divergence (JSD), coverage (COV), and minimum matching dis...
D
O⁢(n2ε⁢n⁢ln⁡n⁢maxi,j⁡Ci⁢j2⁢χ).𝑂superscript𝑛2𝜀𝑛𝑛subscript𝑖𝑗superscriptsubscript𝐶𝑖𝑗2𝜒O\left(\frac{n^{2}}{\varepsilon}\sqrt{n\ln n}\max_{i,j}C_{ij}^{2}\chi\right).italic_O ( divide start_ARG italic_n start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_ε end_ARG square-root start_ARG italic_n ro...
We comment on the complexity of the DMP algorithm compared to the existing state-of-the-art methods: the iterative Bregman projections (IBP) algorithm, its accelerated versions and primal dual algorithm (ADCWB), see Table 1. All of these methods use entropic regularization of Wasserstein metric with parameter γ𝛾\gamma...
parameter γ𝛾\gammaitalic_γ to solve the WB problem. We ran the IBP and the ADCWB algorithms with different values of the regularization parameter γ𝛾\gammaitalic_γ starting from γ=0.1𝛾0.1\gamma=0.1italic_γ = 0.1 and gradually decreasing its value to γ=10−4𝛾superscript104\gamma=10^{-4}italic_γ = 10 start_POSTSUPERSCR...
We demonstrate the performance of the DMP algorithm on different network architectures with different conditional number χ𝜒\chiitalic_χ: complete graph, star graph, cycle graph and the Erdős-Rényi random graphs with the probability of edge creation p=0.5𝑝0.5p=0.5italic_p = 0.5 and p=0.4𝑝0.4p=0.4italic_p = 0.4 under...
Finally, we show how the proposed method can be applied to prominent problem of computing Wasserstein barycenters to tackle the problem of instability of regularization-based approaches under a small value of regularizing parameter. The idea is based on the saddle point reformulation of the Wasserstein barycenter probl...
A
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class.
The remainder of this section is dedicated to express the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describe an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
where L^=D^t⁢D^^𝐿superscript^𝐷𝑡^𝐷\hat{L}=\hat{D}^{t}\hat{D}over^ start_ARG italic_L end_ARG = over^ start_ARG italic_D end_ARG start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT over^ start_ARG italic_D end_ARG is the lower right |V|−1×|V|−1𝑉1𝑉1|V|-1\times|V|-1| italic_V | - 1 × | italic_V | - 1 submatrix of the ...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric...
D
For any simplicial complex K𝐾Kitalic_K and integers b≥1𝑏1b\geq 1italic_b ≥ 1 and m>μ⁢(K)𝑚𝜇𝐾m>\mu(K)italic_m > italic_μ ( italic_K ), there exists an integer t=t⁢(b,K,m)𝑡𝑡𝑏𝐾𝑚t=t(b,K,m)italic_t = italic_t ( italic_b , italic_K , italic_m ) with the following property: If ℱℱ\mathcal{F}caligraphic_F is an m𝑚mita...
In this paper we are concerned with generalizations of Helly’s theorem that allow for more flexible intersection patterns and relax the convexity assumption. A famous example is the celebrated (p,q)𝑝𝑞(p,q)( italic_p , italic_q )-theorem [3], which asserts that for a finite family of convex sets in ℝdsuperscriptℝ𝑑\ma...
We first prove, in Section 3, that complexes with a forbidden simplicial homological minor also have a forbidden grid-like homological minor. The proof uses the stair convexity of Bukh et al. [8] to build, in a systematic way, chain maps from simplicial complexes to cubical complexes. We then adapt, in Section 4, the m...
a positive fraction of the m𝑚mitalic_m-tuples to have a nonempty intersection, where for dimK>1dimension𝐾1\dim K>1roman_dim italic_K > 1, m𝑚mitalic_m is some hypergraph Ramsey number depending on b𝑏bitalic_b and K𝐾Kitalic_K. So in order to prove Corollary 1.3 it suffices to show that if a positive fraction of the ...
The proof of Theorem 2.1 is quite involved and builds on the method of constrained chain maps developed in [18, 35] to study intersection patterns via homological minors [37]. This technique, which we briefly outline here, was specifically designed for complete intersection patterns. A major part of this paper, all of...
D
Another possible improvement is to utilize parallel processing on powerful cloud servers. Progressive VA and data science workflows [103, 104] could also be effective. Moreover, alternative feature selection techniques for computing feature importance could be incorporated in our tool (e.g., SHAP [105]).
Figure 1: Selecting important features, transforming them, and generating new features with FeatureEnVi: (a) the horizontal beeswarm plot for manually slicing the data space (which is sorted by predicted probabilities) and continuously checking the migration of data instances throughout the process; (b) the table heat...
G5: Reassessment of the instances’ predicted probabilities and performance, computed with appropriate validation metrics. In the end, users’ interactions should be tracked in order to preserve a history of modifications in the features, and the performance should be monitored with validation metrics (T5). At all stages...
Visualization and interaction. E1 and E2 were surprised by the promising results we managed to achieve with the assistance of our VA system in the red wine quality use case of Section 4. Initially, E1 was slightly overwhelmed by the number of statistical measures mapped in the system’s glyphs. However, after the interv...
A customized beeswarm plot could facilitate selecting groups of instances and then explaining why some instances migrated. DR methods could also be helpful here, as noted by E3. Also, he proposed to include additional filtering options for all metrics.
D
The goal is to tune the parameters of the MPC-based planning unit without introducing any modification in the structure of the underlying control system. We leverage the repeatability of the system, which is higher than the integrated encoder error of 3⁢μ⁢m3𝜇𝑚3\mu m3 italic_μ italic_m,
The physical system is a 2-axis gantry stage for (x,y)𝑥𝑦(x,y)( italic_x , italic_y ) positioning with industrial grade actuators and sensors [14]. The plant can be modeled as a mass-spring-damper system with two masses linked with a damper and a spring for capturing imperfection and friction in the transmitting movem...
MPC accounts for the real behavior of the machine and the axis drive dynamics can be excited to compensate for the contour error to a big extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following variou...
To bring the model close to the real system, we unify the terms required for the contour control formulation with the velocity and acceleration for each axis from the identified, discretized state-space model from (4). Also, we include the path progress sksubscript𝑠𝑘s_{k}italic_s start_POSTSUBSCRIPT italic_k end_POST...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi...
A
An interesting observation was that a weaker architecture, CNNs, were able to ignore position bias, whereas a more powerful architecture, CoordConv, resorted to exploiting this bias resulting in worse performance. While the community has largely focused on training procedures for bias mitigation, an exciting avenue fo...
We have pointed to issues with the existing bias mitigation approaches, which alter the loss or use resampling. An orthogonal avenue for attacking bias mitigation is to use alternative architectures. Neuro-symbolic and graph-based systems could be created that focus on learning and grounding predictions on structured c...
An interesting observation was that a weaker architecture, CNNs, were able to ignore position bias, whereas a more powerful architecture, CoordConv, resorted to exploiting this bias resulting in worse performance. While the community has largely focused on training procedures for bias mitigation, an exciting avenue fo...
Deep learning systems are trained to minimize their loss on a training dataset. However, datasets often contain spurious correlations and hidden biases which result in systems that have low loss on the training data distribution, but then fail to work appropriately on minority groups because they exploit and even ampli...
Without bias mitigation mechanisms, standard models (StdM) often use spurious bias variables for inference, rather than developing invariance to them, which often results in their inability to perform well on minority patterns [27, 11, 3, 61]. To address this, several bias mitigation mechanisms have been proposed, and ...
A
They require a time-consuming data collection for the specific subject. To reduce the number of training samples, Williams et al. introduce semi-supervised gaussian process regression methods [33]. Sugano et al. propose a method that combines gaze estimation with saliency [35].
2) A robust regression function to learn the mappings from appearance feature to human gaze. It is non-trivial to map the high-dimensional eye appearance to the low-dimensional gaze. Many regression functions have been used to regress gaze from appearance, e.g., local linear interpolation [21] and adaptive linear regre...
To address the performance degradation across subjects, Funes et al. present a cross-subject training method [36]. However, the reported mean error is larger than 10 degrees. Sugano et al. introduce a learning-by-synthesis method [37]. They use a large number of synthetic cross-subject data to train their model. Lu et...
Appearance-based gaze estimation suffers from many challenges, including head motion and subject differences, particularly in the unconstrained environment. Traditional appearance-based methods often struggle to effectively address these challenges due to their limited fitting ability.
Lu et al. propose an adaptive linear regression method to select an optimal set of sparsest training sample for interpolation [19]. However, these methods only show reasonable performance in a constrained environment, i.e., fixed head pose and the specific subject. Their performance significantly degrades when tested o...
D
Inspired by the high performance of CNN based methods that have strong robustness to illumination, facial expression, and facial occlusion changes, we propose in this paper an occlusion removal approach and deep CNN based model to address the problem of masked face recognition during the COVID-19 pandemic. Motivations...
Real-World-Masked-Face-Dataset wang2020masked is a masked face dataset devoted mainly to improve the recognition performance of the existing face recognition technology on the masked faces during the COVID-19 pandemic. It contains three types of images namely, Masked Face Detection Dataset (MFDD), Real-world Masked F...
The obtained high accuracy compared to other face recognizers is achieved due to the best features extracted from the last convolutional layers of the pre-trained models, and the high efficiency of the proposed BoF paradigm that gives a lightweight and more discriminative power comparing to classical CNN with softmax f...
To tackle these problems, we distinguish two different tasks namely: face mask recognition and masked face recognition. The first one checks whether the person is wearing a mask or no. This can be applied in public places where the mask is compulsory. Masked face recognition, on the other hand, aims to recognize a face...
Experimental results are carried out on Real-world Masked Face Recognition Dataset (RMFRD) and Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (...
D
Note: this is an extended version of an eponymous paper that appeared in FSCD 2022 that includes further examples (Examples 1, 1, and 1), a more straightforward presentation of the metatheory (Section 4) based on Kripke logical relations [Plo73], and a representative set of the corresponding proofs (Sections 3 and 4).
Sized types are a type-oriented formulation of size-change termination [LJBA01] for rewrite systems [TG03, BR09]. Sized (co)inductive types [BFG+04, Bla04, Abe08, AP16] gave way to sized mixed inductive-coinductive types [Abe12, AP16]. In parallel, linear size arithmetic for sized inductive types [CK01, Xi01, BR06] was...
Adding (co)inductive types and terminating recursion (including productive corecursive definitions) to any programming language is a non-trivial task, since only certain recursive programs constitute valid applications of (co)induction principles. Briefly, inductive calls must occur on data smaller than the input and, ...
One solution that avoids syntactic checks is to track the flow of (co)data size at the type level with sized types, as pioneered by Hughes et al. [HPS96] and further developed by others [BFG+04, Bla04, Abe08, AP16]. Inductive and coinductive types are indexed by the height and observable depth of their data and codata...
Moreover, some prior work, which is based on sequential functional languages, encodes recursion via various fixed point combinators that make both mixed inductive-coinductive programming [Bas18] and substructural typing difficult, the latter requiring the use of the ! modality [Wad12]. Thus, like Fωcopsuperscriptsubsc...
B
where 𝐆¯=𝐁m⁢𝐆¯𝐆superscript𝐁𝑚𝐆{\bar{\mathbf{G}}}={{\mathbf{B}}^{m}}{\mathbf{G}}over¯ start_ARG bold_G end_ARG = bold_B start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT bold_G. It is clear from Eq. (3) that the fingerprint 𝐛ksubscript𝐛𝑘\mathbf{b}_{k}bold_b start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT has b...
Judge. The judge is a trusted entity who is only responsible for arbitration in the case of illegal redistribution, as in existing traitor tracing systems [10, 11, 12, 13, 14, 3]. After receiving the owner’s request for arbitration, the judge makes a fair judgment based on the evidence provided by the owner. Although o...
The whole FairCMS-I scheme is summarized as follows. First, suppose an owner rents the cloud’s resources for media sharing, the owner and the cloud execute Part 1 as shown in Fig. 2. Then, suppose the k𝑘kitalic_k-th user makes a request indicating that he/she wants to access one of the owner’s media content 𝐦𝐦\mathb...
Once a copyright dispute occurs between the owner and the user, they delegate a judge that is credible for both parties to make a fair arbitration. Due to the possible noise effect during data transmission, the received suspicious media content copy is assumed to be contaminated by the an additive noise 𝐧𝐧\mathbf{n}b...
Upon the detection of a suspicious media content copy 𝐦~ksuperscript~𝐦𝑘\tilde{\mathbf{m}}^{k}over~ start_ARG bold_m end_ARG start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT, the owner resorts to the judge for violation identification. To this end, the proofs that the owner needs to provide the judge includes the o...
C
Neural Factorization Machines (NFM) He and Chua (2017) design a bi-interaction layer to learn the pairwise feature interaction and apply DNN to learn the higher-order ones. Wide&Deep Cheng et al. (2016) introduces a hybrid architecture containing both shallow and deep components to jointly learn low-order and high-orde...
One of the main limitations of FM is that it is not able to capture higher-order feature interactions, which are interactions between three or more features. While higher-order FM (HOFM) has been proposed Rendle (2010, 2012) as a way to address this issue, it suffers from high complexity due to the combinatorial expans...
To address these issues, some recent studies have attempted to identify beneficial feature interactions automatically. AutoFIS Liu et al. (2020) is a two-stage algorithm that uses a gate operation to search and model beneficial feature interactions, but there is a loss of information between the stages, and the modelin...
At their core, GNNs learn node embeddings by iteratively aggregating features from the neighboring nodes, layer by layer. This allows them to explicitly encode high-order relationships between nodes in the embeddings. GNNs have shown great potential for modeling high-order feature interactions for click-through rate pr...
In addition to not being able to effectively capture higher-order feature interactions, FM is also suboptimal because it considers the interactions between every pair of features, even if some of these interactions may not be beneficial for prediction Zhang et al. (2016); Su et al. (2020). These unhelpful feature inter...
D
This means that Theorems 2.4 and 2.6 effectively bound the number of ZOO, FOO, DO, and LMO oracle calls needed to achieve a target primal gap or Frank-Wolfe gap accuracy ε𝜀\varepsilonitalic_ε as a function of Tνsubscript𝑇𝜈T_{\nu}italic_T start_POSTSUBSCRIPT italic_ν end_POSTSUBSCRIPT and ε𝜀\varepsilonitalic_ε; note...
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of 𝒪⁢(1/t)𝒪1𝑡\mathcal{O}(1/t)caligraphic_O ( 1 / italic_t ). The idea of the proof is...
The results are shown in Figure 7. On both of these instances, the simple step progress is slowed down or even seems stalled in comparison to the stateless version because a lot of halving steps were done in the early iterations for the simple step size, which penalizes progress over the whole run.
In practice, a halving strategy for the step size is preferred for the implementation of the Monotonic Frank-Wolfe algorithm, as opposed to the step size implementation shown in Algorithm 1. This halving strategy, which is shown in Algorithm 2, helps
We note that the LBTFW-GSC algorithm from Dvurechensky et al. [2022] is in essence the Frank-Wolfe algorithm with a modified version of the backtracking line search of Pedregosa et al. [2020]. In the next section, we provide improved convergence guarantees for various cases of interest for this algorithm, which we refe...
C
However, to be considered an efficient approximation algorithm in theory, ideally the dependence on all relevant parameters should be polynomial. Indeed, this has been a key property in the qualification of efficiency in parametrized complexity. The question whether there is a (1+ε)1𝜀(1+\varepsilon)( 1 + italic_ε )-ap...
Table 1: A summary of the running times in several different models, compared to the previous state-of-the-art, for computing a (1+ε)1𝜀(1+\varepsilon)( 1 + italic_ε )-approximate maximum matching. In the distributed setting, “running time” refers to the round complexity, while in the streaming setting it refers to th...
In a distributed/parallel setting, the aforementioned “time” should be understood as the number of rounds. All the times listed above are a function of G𝐺Gitalic_G and ε𝜀\varepsilonitalic_ε, but for the sake of brevity we drop these parameters in the rest of this section.
It is known that finding an exact matching requires linear space in the size of the graph and hence it is not possible to find an exact maximum matching in the semi-streaming model [FKM+04], at least for sufficiently dense graphs. Nevertheless, this result does not apply to computing a good approximation to the maximu...
Instantiating our framework with state-of-the-art results for computing an O⁢(1)𝑂1O(1)italic_O ( 1 )-approximate maximum matching in CONGEST and MPC, we obtain the results outlined in Table 1. In particular, our framework exponentially improves the dependence on 1/ε1𝜀1/\varepsilon1 / italic_ε in these models, hence ...
A
Specifically, the push-sum based subgradient method in [18] can be implemented over time-varying directed graphs, and linear convergence rates were achieved in [19, 20] for minimizing strongly convex and smooth objective functions by applying the push-sum technique to EXTRA.
Specifically, the methods proposed in [12, 21, 22, 23] employ gradient tracking to achieve linear convergence for strongly convex and smooth objective functions, where the work in [21, 23, 22] particularly considered combining gradient tracking with the push-sum technique to accommodate directed graphs. The methods can...
In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP...
Specifically, the push-sum based subgradient method in [18] can be implemented over time-varying directed graphs, and linear convergence rates were achieved in [19, 20] for minimizing strongly convex and smooth objective functions by applying the push-sum technique to EXTRA.
For strongly convex and smooth objective functions, [57] first considered a linearly convergent gradient tracking method based on a specific quantizer. More recently, the paper [52] introduced LEAD that works with a general class of compression operators and still enjoys linear convergence. Some recent developments can...
A
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms and derives the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile...
We present a new SPP formulation of the PFL problem (1) as the decentralized min-max mixing model. This extends the classical PFL problem to a broader class of problems beyond the classical minimization problem. It furthermore covers various communication topologies and hence goes beyond the centralized setting.
Note that in the proposed formulation (1) we consider both the centralized and decentralized cases. In the decentralized setting, all nodes are connected within a network, and each node can communicate/exchange information only with their neighbors in the network. While the centralized architecture consists of master-s...
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low...
We propose a lower bounds both on the communication and the number of local oracle calls for a general algorithms class (that satisfy Assumption 3). The bounds naturally depend on the communication matrix W𝑊Witalic_W (as in the minimization problem), but our results apply to SPP (see ”Lower” rows in Table 1 for variou...
A
There has been significant recent interest in solving the equilibrium selection problem (Ortiz et al., 2007; Omidshafiei et al., 2019). This paper provides a novel approach which is computationally tractable, supports general-support solutions, and has favourable scaling properties when the solution is full-support.
The new solution concept MG(C)CE is rooted in the powerful principles of entropy and margin maximisation. Therefore it is a simple solution that makes limited assumptions, and is robust to many possible counter strategies (Jaynes, 1957). The MG(C)CE defines a family of unique solutions parameterized by ϵitalic-ϵ\epsil...
An important area of related work is α𝛼\alphaitalic_α-Rank (Omidshafiei et al., 2019) which also aims to provide a tractable alternative solution in normal form games. It gives similar solutions to NE in the two-player, constant-sum setting, however it is not directly related to NE or (C)CE. α𝛼\alphaitalic_α-Rank has...
Figure 1: The solution landscape for the traffic lights game. The solid polytope shows the space of CE joint strategies, and the dotted surface shows factorizable joint strategies. NEs are where the surface and polytope intersect. There are three unsatisfying NEs: mixed spends most of its time waiting and does not avoi...
It is worth emphasizing a set of particularly interesting solutions within this family. Firstly the standard MG(C)CE, with ϵ=0italic-ϵ0\epsilon=0italic_ϵ = 0, provides a weak equilibrium for non-trivial games (Theorem 4). Secondly, an edge case with positive ϵitalic-ϵ\epsilonitalic_ϵ is max⁡(A⁢b)𝐴𝑏\max(Ab)roman_max (...
A
Given η>0𝜂0\eta>0italic_η > 0 and a query q𝑞qitalic_q, the Gaussian mechanism with noise parameter η𝜂\etaitalic_η returns its empirical mean q⁢(s)𝑞𝑠{q}\left(s\right)italic_q ( italic_s ) after adding a random value, sampled from an unbiased Gaussian distribution with variance η2superscript𝜂2\eta^{2}italic_η start...
In order to leverage Lemma 3.5, we need a stability notion that implies Bayes stability of query responses in a manner that depends on the actual datasets and the actual queries (not just the worst case). In this section we propose such a notion and prove several key properties of it. Missing proofs from this section ...
Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific q𝑞qitalic_q and K⁢(⋅,v)𝐾⋅𝑣{K}\left(\cdot,v\right)italic_K ( ⋅ , italic_v ) as discussed in Section 6. The second part of this Lemma implies that bounding the appropriate divergence is necessary and sufficient...
Since achieving posterior accuracy is relatively straightforward, guaranteeing Bayes stability is the main challenge in leveraging this theorem to achieve distribution accuracy with respect to adaptively chosen queries. The following lemma gives a useful and intuitive characterization of the quantity that the Bayes sta...
In this section, we give a clean, new characterization of the harms of adaptivity. Our goal is to bound the distribution error of a mechanism that responds to queries generated by an adaptive analyst. This bound will be achieved via a triangle inequality, by bounding both the posterior accuracy and the Bayes stability ...
D
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized...
We therefore propose the following novel research direction: to investigate how preprocessing algorithms can decrease the parameter value (and hence search space) of FPT algorithms, in a theoretically sound way. It is nontrivial to phrase meaningful formal questions in this direction. To illustrate this difficulty, not...
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
A substantial theoretical framework has been built around the definition of kernelization [17, 22, 27, 29, 31]. It includes deep techniques for obtaining kernelization algorithms [10, 28, 39, 43], as well as tools for ruling out the existence of small kernelizations [11, 19, 23, 30, 32] under complexity-theoretic hypot...
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized...
C
Zhou et al. [208] established the bijection between random vector and positive composite image. Moreover, they reformulated object placement as a graph completion task. In particular, background nodes have both content features and placements, while the inserted foreground node only has content feature, giving rise to ...
Discriminative approaches: Liu et al. [94] proposed a discriminative approach named SimOPA to verify whether a composite image is rational in terms of the foreground object placement. Particularly, they feed the concatenation of composite image and foreground mask into a binary classification network to predict a rati...
Analogous to [5] using Retinex theory, Guo et al. [45] also developed a model to disentangle a composite image into reflectance map and illumination map, in which the illumination map is harmonized by transferring lighting information from background to foreground. Another work [44] also adopted the similar decompositi...
Zhang et al. [202] proposed to make sequential decisions to produce a reasonable placement by using reinforcement learning. Azadi et al. [2] employed STN to warp the foreground and relative appearance flow network to change the viewpoint of foreground. Additionally, they investigated on self-consistency constraint, tha...
In the previous section, image harmonization methods could adjust the foreground appearance to make it compatible with the background, but they ignore the fact that the inserted object may also have impact on the background (e.g., reflection, shadow). For example, if background objects cast shadows on the ground but th...
C
To the best of our knowledge, CityNet is the first multi-modal urban dataset that aggregates and aligns sub-datasets from various tasks and cities. Using CityNet, we have provided a wide range of benchmarking results to inspire further research in areas such as spatio-temporal predictions, transfer learning, reinforcem...
Interrelationship: We have classified the sub-datasets into two categories: service data and context data, as depicted in Fig. 1(c). Service data pertains to the status of urban service providers (e.g. taxi companies), while context data refers to the urban environment (e.g. weather). Based on this categorization, we h...
In the present study, we have introduced CityNet, a multi-modal dataset specifically designed for urban computing in smart cities, which incorporates spatio-temporally aligned urban data from multiple cities and diverse tasks. To the best of our knowledge, CityNet is the first dataset of its kind, which provides a comp...
Our analyses and experiments on CityNet have yielded valuable insights for researchers. Our studies have confirmed the correlations among sub-datasets and have demonstrated that urban modeling and analyses can be enhanced by appropriately utilizing the mutual knowledge among correlated sub-datasets. To this end, we hav...
The paper is structured as follows. Section II outlines the pre-processing procedure of all sub-datasets in CityNet, along with their basic statistics. In Section III, we employ data mining tools to reveal and elucidate the correlations between contexts and service data. In Section IV, we conduct machine learning exper...
C
\big{(}\nabla_{\mathbf{x}}\mathcal{L}(\mathbf{x},y;\theta)\big{)}\,,bold_x start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT := bold_x + bold_italic_η ⊙ sgn ( ∇ start_POSTSUBSCRIPT bold_x end_POSTSUBSCRIPT caligraphic_L ( bold_x , italic_y ; italic_θ ) ) ,
By differentiating the argument on the right-hand side with respect to q𝑞qitalic_q and equating it to 0, one obtains definition (19) of the α𝛼\alphaitalic_α-quantile. The pinball loss (26) is then simply the loss function for the sample α𝛼\alphaitalic_α-quantile, i.e. the α𝛼\alphaitalic_α-quantile of the empirical ...
where ⊙direct-product\odot⊙ denotes elementwise multiplication. This allows the constant 𝜼𝜼\boldsymbol{\eta}bold_italic_η to be a vector as to accompany features with different ranges. The modified update rule is obtained by replacing the loss function in the gradient descent step with the total loss
To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average ...
The idea behind deep ensembles lakshminarayanan2017simple is the same as for any ensemble technique: training multiple models to obtain a better and more robust prediction. The loss functions of most (deep) models have multiple local minima and by aggregating multiple models one hopes to take into account all these mi...
B
For symbolic-domain melody extraction, initial methodologies predominantly adopted rule-based approaches. These rule-based methods encompassed techniques such as utilising pitch contour characteristics \parencitemelody1, as well as the implementation of the “skyline" algorithm \parencitechia01skyline. In recent years, ...
Table 3: Testing metrics (in %) of “our model (performance) +CP” and other baseline methods for the two-class “melody versus non-melody” classification task using POP909, viewing vocal melody and instrumental melody as “melody” and accompaniment as “non-melody”.
Specifically, we consider two formulations of the task. Firstly, we adhere to the original configuration of POP909 and perform three-class melody classification, classifying each Pitch into three categories: vocal melody, instrumental melody or accompaniment. Secondly, we merge vocal melody and instrumental melody into...
POP909 comprises piano covers of 909 pop songs compiled by \textcitepop909.555https://github.com/music-x-lab/POP909-Dataset It is the only dataset among the five that provides melody, non-melody labels for each note. Specifically, each note is labelled with one of the following three classes: vocal melody (piano notes ...
Similar to \textcitesimonettaCNW19, we regard melody extraction as a task that identifies the melody notes in a single-track 101010It is common for MIDI files to consist of multiple tracks. We refer to “single-track” as MIDI files containing only one track, which is in contrast to multi-track MIDI files that have multi...
D
Let G𝐺Gitalic_G be a graph on n𝑛nitalic_n vertices and H𝐻Hitalic_H its spanning subgraph. Then λ⁢(χ⁢(H)−1)+1≤B⁢B⁢Cλ⁢(G,H)≤λ⁢(χ⁢(H)−1)+n−χ⁢(H)+1𝜆𝜒𝐻11𝐵𝐵subscript𝐶𝜆𝐺𝐻𝜆𝜒𝐻1𝑛𝜒𝐻1\lambda(\chi(H)-1)+1\leq BBC_{\lambda}(G,H)\leq\lambda(\chi(H)-1)+n-\chi(H)+1italic_λ ( italic_χ ( italic_H ) - 1 ) + 1 ≤ italic_B ...
Additionally, [16] proved for comparability graphs we can find a partition of V⁢(G)𝑉𝐺V(G)italic_V ( italic_G ) into at most k𝑘kitalic_k sets which induce semihamiltonian subgraphs in the complement of G𝐺Gitalic_G (i.e. it contains a Hamiltonian path) and from that it follows that B⁢B⁢C2⁢(Kn,G)𝐵𝐵subscript𝐶2subscr...
An obvious extension would be an analysis for a class of split graphs, i.e. graphs whose vertices can be partitioned into a maximum clique C𝐶Citalic_C (of size ω⁢(G)=χ⁢(G)𝜔𝐺𝜒𝐺\omega(G)=\chi(G)italic_ω ( italic_G ) = italic_χ ( italic_G )) and an independent set I𝐼Iitalic_I. A simple application of Theorem 2.18 gi...
The λ𝜆\lambdaitalic_λ-backbone coloring problem was studied for several classes of graphs, for example split graphs [5], planar graphs [3], complete graphs [6], and for several classes of backbones: matchings and disjoint stars [5], bipartite graphs [6] and forests [3]. For a special case λ=2𝜆2\lambda=2italic_λ = 2 i...
Moreover, it was proved before in [4] that there exists a 2222-approximate algorithm for complete graphs with bipartite backbones and a 3/2323/23 / 2-approximate algorithm for complete graphs with connected bipartite backbones. Both algorithms run in linear time. As a corollary, it was proved that we can compute B⁢B⁢C...
C