Dataset columns: context (string, lengths 100 to 4.5k), A (string, 100 to 3.31k), B (string, 100 to 3.4k), C (string, 100 to 4.85k), D (string, 100 to 3.48k), label (string, 4 classes).
$\Delta x = -\frac{f(x)}{f'(x)} \Big/ \left[1 + \frac{1}{2h_{2}(x)}\,\cdots\,\left(h_{0}(x)\frac{f(x)}{f'(x)} + h_{1}(x)\right)\right].$
from $f/f'$ [17, 33, 39], which means the update
(i) fast calculation of $f''/f'$ from $f/f'$,
Structure relations [24] relate the ratio $f/f'$
Installation of $f/f'$ in (1) progresses by dividing $R_{n}^{m} \cong x^{m}F$...
C
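The snippets above concern root-finding updates built from the ratios $f/f'$ and $f''/f'$. As a hedged illustration only (a classical Halley step, not necessarily the specific scheme of the paper), one iteration using exactly these two ratios can be written as:

def halley_step(x, f, fp, fpp):
    # One Halley update: x_new = x - (f/f') / (1 - (f/f') * (f''/f') / 2).
    r = f(x) / fp(x)    # the ratio f/f'
    s = fpp(x) / fp(x)  # the ratio f''/f'
    return x - r / (1.0 - 0.5 * r * s)

# Example: approximate sqrt(2) as the root of f(x) = x^2 - 2.
x = 1.5
for _ in range(4):
    x = halley_step(x, lambda t: t * t - 2.0, lambda t: 2.0 * t, lambda t: 2.0)
print(x)  # ~1.4142135623730951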
In practice, the MSLP should be constructed in such a way that the ‘input’ of each of the subroutines (Algorithms 4–7) is stored in memory when the subroutine is called and the ‘output’ is kept in memory for the subsequent stage of Algorithm 3.
In practice, the MSLP should be constructed in such a way that the ‘input’ of each of the subroutines (Algorithms 4–7) is stored in memory when the subroutine is called and the ‘output’ is kept in memory for the subsequent stage of Algorithm 3.
There exists a $b$-MSLP, $S$, of length at most $\lambda$ such that if $S$ is evaluated with memory containing the input of Algorithm 4 then $S$ returns memory containing the output of Algorithm 4.
There exists a $b$-MSLP, $S$, of length at most $\lambda$ such that if $S$ is evaluated with memory containing the input of Algorithm 5 then $S$ returns memory containing the output of Algorithm 5.
The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
D
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85...
Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case, our method is similar to that of [MR3591945], with some differences. First, we consider that T...
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computa...
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov...
The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis...
B
In particular, two of them (called legs) have their midpoints touched by $P$, whereas the remaining one is called the base.
Moreover, one of the following holds: (1) The base is flush with (i.e. contains an edge of) $P$.
(2) One of the legs is flush with an edge of $P$ and has as its midpoint a vertex of this edge.
The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and floating-point issues in both programs.
In particular, two of them (called legs) have their midpoints touched by $P$, whereas the remaining one is called the base.
A
Table 5: Importance ranking of CreditScore, CrowdWisdom and PolarityScores over time; 0 indicates the best rank.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We trade this off by debunking at single tweet lev...
In this work, we propose an effective cascaded rumor detection approach using deep neural networks at tweet level in the first stage and wisdom of the “machines”, together with a variety of other features in the second stage, in order to enhance rumor detection performance in the early phase of an event. The proposed a...
We showcase here a study of the Munich shooting. We first show the event timeline at an early stage. Next, we discuss some examples of misclassifications by our “weak” classifier and show some analysis on the strength of some highlighted features. The rough event timeline looks as follows.
the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, which can capture more hidden meaningful signals than enquiries alone to debunk rumor...
C
We define $\psi(t) = z(t) + h(t)$, and
$\phi^{2}(t+1) \leq z(t) + h(t)\phi(t) + \phi^{2}(t)$
$\leq z(t) + h(t)\max\left[1, \phi^{2}(t)\right] + \phi^{2}(t)$
$\phi^{2}(t+1)$
$\phi^{2}(t+1) \leq z(t) + h(t)\phi(t) + \phi^{2}(t)$
C
At 17:52 CEST, a shooter opened fire in the vicinity of the Olympia shopping mall in Munich. 10 people, including the shooter, were killed and 36 others were injured.
At 17:52 CEST, a shooter opened fire in the vicinity of the Olympia shopping mall in Munich. 10 people, including the shooter, were killed and 36 others were injured.
At 18:22 CEST, the first tweet was posted. There might be a certain delay, as we retrieve only tweets in English and the very first tweets were probably in German. The tweet is: ”Sadly, i think there’s something terrible happening in #Munich #Munchen. Another Active Shooter in a mall. #SMH”.
At 18:31 CEST, the first misclassified tweet is posted. It was a tweet with shock sentiment and swear words: ”there’s now a shooter in a Munich shopping centre.. What the f*** is going on in the world. Gone mad”. It is classified as rumor-related.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We trade this off by debunking at single tweet lev...
B
March 31st, 2017, on a browser with a clean history.
frequency of the pre-event aspect stays high. We witness a similar phenomenon with the same event in 2017 in the Google query logs. We therefore postulate that (1) long-term salience should provide good ranking results for the
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall...
While the precise methods employed by the search engine for its recommendations remain undisclosed, the subpar performance could potentially be attributed to the influence of aspect salience (in this case, query popularity) and the occurrence of the rich-get-richer phenomenon: the salience of an aspect is
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather them from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
C
Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; Li et al., 2016].
Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; Li et al., 2016].
from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
Riquelme et al. [2018] benchmarked some of these techniques, and reported that neural networks with approximate inference, even if successful for supervised learning, under-perform in the MAB setting.
for the successful performance of SMC methods for inference of linear dynamical states in practice [Urteaga et al., 2017; Urteaga and Djurić, 2016a, b].
C
For example, Patient 8 prefers to work out at 20:00 every day, and the level of working out is reduced on weekends.
Most of the glucose measurements after the meals, on the other hand, are logged after at least four hours for most of the patients.
For example, Patient 8 prefers to work out at 20:00 every day, and the level of working out is reduced on weekends.
Among all patients, Patient 12 seems to enjoy working out the least, and the period during which she burns the most calories is around noon.
For activities, we observe that certain patients have a favorite time of “working out” during the day, and it does not change much across the days.
C
between predictions and targets. The best results are marked in bold and models are sorted in descending order of their cumulative rank across a subset of weakly correlated evaluation measures within each group.
Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity and the superscript † represents pre-train...
Table 1: Quantitative results of our model for the MIT300 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone) ...
Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone)...
Table 7: A list of the four image categories from the CAT2000 validation set that showed the largest average improvement by the ASPP architecture based on the cumulative rank across a subset of weakly correlated evaluation measures. Arrows indicate whether the metrics assess similarity
C
We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21, ...
Pathwidth and cutwidth are classical graph parameters that play an important role for graph algorithms, independent from our application for computing the locality number. Therefore, it is the main purpose of this section to translate the reduction from MinCutwidth to MinPathwidth that takes MinLoc as an intermediate s...
We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21, ...
One of the main results of this section is a reduction from the problem of computing the locality number of a word $\alpha$ to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr...
In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into grap...
A
Based on these features they then train an ensemble of regularized multi-layer perceptrons and an RF classifier to predict the pathological target class.
In [143] the authors created a semi-supervised learning method, in which a segmentation network for LV/RV and myocardium was trained from labeled and unlabeled data.
Isensee et al.[141] used an ensemble of a 2D and a 3D u-net for segmentation of the LV/RV cavity and the LV myocardium on each time instance of the cardiac cycle.
Patravali et al.[140] trained a model based on u-net using Dice combined with cross entropy as a metric for LV/RV and myocardium segmentation.
There are also cardiology applications that used CRFs with deep learning as a segmentation refinement step in fundus photography[171, 174], and in LV/RV[143].
A
To match the assumed prior and the approximate posterior, we use the Kullback–Leibler divergence term as an additional loss term (Babaeizadeh et al., 2017a).
We noticed two major issues with the above model. First, the weight of the KL divergence loss term is game dependent, which is not practical if one wants to deal with a broad portfolio of Atari games. Second, this weight is usually a very small number in the range of $[10^{-3}, 10^{-5}]$...
Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster...
Figure 2: Architecture of the proposed stochastic model with discrete latent. The input to the model is four stacked frames (as well as the action selected by the agent) while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-...
A stochastic model can be used to deal with the limited horizon of past observed frames as well as sprite occlusion and flickering, which results in higher quality predictions. Inspired by Babaeizadeh et al. (2017a), we tried a variational autoencoder (Kingma & Welling, 2014) to model the stochasticity of the environment. ...
A
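The snippets above describe adding a Kullback–Leibler term between the approximate posterior and the assumed prior, weighted by a small, game-dependent coefficient. A minimal sketch, assuming a diagonal-Gaussian posterior and a standard-normal prior (not the authors' exact model), is:

import torch

def gaussian_kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims, averaged over the batch.
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=-1).mean()

def total_loss(recon_loss, mu, logvar, beta=1e-4):
    # beta plays the role of the small KL weight discussed above (e.g. in the 1e-3 to 1e-5 range).
    return recon_loss + beta * gaussian_kl_to_standard_normal(mu, logvar)

mu, logvar = torch.zeros(8, 32), torch.zeros(8, 32)
print(total_loss(torch.tensor(1.0), mu, logvar))  # tensor(1.) when q equals the prior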
Using this definition we can also derive that most previous methods for EEG classification use non-trainable S2Is and that no previous study has compared trainable with non-trainable S2Is.
In this paper we have shown empirical evidence that 1D ‘base model’ variations and trainable S2Is (especially the one layer CNN) perform better than non-trainable S2Is.
Using this definition we can also derive that most previous methods for EEG classification use non-trainable S2Is and that no previous study has compared trainable with non-trainable S2Is.
In this paper we compare non-trainable and trainable S2Is combined with well known ‘base models’ neural network architectures along with the 1D and depth-wise variations of the latter.
$B$ includes the following $b_{d}$ along with their depth-wise variations and their equivalent 1D architectures for $d=1$ (for a complete list refer to the first two rows of Table I):
C
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result...
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result...
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal...
The implementation of the energy criterion strategy has proven effective in facilitating autonomous locomotion mode transitions for the Cricket robot when negotiating steps of varying heights. Compared to step negotiation purely in rolling locomotion mode, the proposed strategy demonstrated significant enhancements in ...
The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we p...
C
$(1.5625, 3.75)$-competitive. Similarly, for $\alpha = 0.868$, we get a $(1.5783, 3.56)$-algorithm, whose consistency is the same as the best existing online algorithms. Therefore, our results are useful for values of $\alpha > 0.86$...
In contrast, for $\alpha < 0.868$, the best-known competitive algorithms without predictions dominate our proposed solution.
Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online algorithms with advice can be of practical interest in settings in which it is feasible to run multiple algorithms...
$(1.5625, 3.75)$-competitive. Similarly, for $\alpha = 0.868$, we get a $(1.5783, 3.56)$-algorithm, whose consistency is the same as the best existing online algorithms. Therefore, our results are useful for values of $\alpha > 0.86$...
is 1.7. Many other algorithms with improved competitive ratios have been studied. The best known algorithm was introduced by Balogh et al. [6] and has a competitive ratio of at most 1.5783. Moreover, it is known that no online algorithm can achieve a competitive ratio better than 1.54278 [7].
A
Where $sg$ and $sn$ are functions of the form $f: W \times C \mapsto [0,1]$.
As we will see, the former decreases $lv$ in relation to the global significance of $w$, and the latter sanctions it, in relation to the number of categories for which $w$ is significant.
Our approach to calculating $gv$, as we will see later, tries to overcome some problems arising from the valuation of words only based on local information to a category. This is carried out by, firstly, computing a word local value ($lv$) for every category, and secondly, co...
Where $gv(w,c) = v$ is read as “$w$ has a global value of $v$ in $c$” or, alternatively, “the global value of $w$ in $c$ is $v$”.
Finally, we need to define $sn$, the sanction function, which will proportionally decrease the global value of $w$, in relation to the number of categories for which $w$ is significant. Hence $sn$ should be a function such that: (a) when $w$ is sign...
A
The RCC of DMSGD is 100% (no compression). Here, all numbers have the same unit (float value).
Table 1 shows the empirical results of different methods under IID data distribution. Figure 3 shows the training curves under IID data distribution. We can observe that each method achieves comparable RCC. As for test accuracy, GMC and DGC (w/ mfm) exhibit comparable performance and outperform the other three methods.
Table 2 and Figure 4 show the performance under non-IID data distribution. We can find that GMC can achieve much better test accuracy and faster convergence speed compared to other methods. Furthermore, we can find that the momentum factor masking trick will severely impair the performance of DGC under non-IID data dis...
We adopt two popular deep models: ResNet20 (He et al., 2016) and Vision Transformer (ViT) (Lee et al., 2021) with four Transformer blocks. Although Batch Normalization (BN) in ResNet20 is effective in practice, it is known to be problematic in the non-IID setting due to its dependence on the estimated mean and variance...
We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021).
D
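The snippets above partition training data across workers with a Dirichlet distribution (parameter 0.1) for the non-IID setting. A minimal sketch of this standard partitioning scheme, assuming integer class labels (not the exact experimental code), is:

import numpy as np

def dirichlet_partition(labels, num_workers, alpha=0.1, seed=0):
    # For each class, draw per-worker proportions from Dirichlet(alpha) and split its indices accordingly.
    rng = np.random.default_rng(seed)
    parts = [[] for _ in range(num_workers)]
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        cuts = (np.cumsum(rng.dirichlet(alpha * np.ones(num_workers)))[:-1] * len(idx)).astype(int)
        for w, chunk in enumerate(np.split(idx, cuts)):
            parts[w].extend(chunk.tolist())
    return parts

labels = np.random.randint(0, 10, size=1000)
print([len(p) for p in dirichlet_partition(labels, num_workers=8)])  # highly skewed sizes for small alpha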
A limitation of SANs is the use of varying amplitude-only kernels, which are not sufficient for more complex data and also do not fully utilize the compressibility of the data.
The Extrema-Pool indices activation function (defined at Algorithm 2) keeps only the index of the activation with the maximum absolute amplitude from each region outlined by a grid as granular as the kernel size $m^{(i)}$ and zeros out the ...
It is interesting to note that in some cases SANs reconstructions, such as for the Extrema-Pool indices, performed even better than the original data.
The majority of domains where machine learning is applied, including critical areas such as healthcare [26], require models to be interpretable and explainable before considering them as a solution.
A possible solution would be using a grid sampler [45] on the kernel allowing it to learn more general transformations (such as scale) than simple amplitude variability.
D
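The Extrema-Pool indices activation described above keeps, within each grid cell of kernel size $m^{(i)}$, only the activation with the maximum absolute amplitude and zeros out the rest. A minimal 1D NumPy sketch of that rule (input and cell size are illustrative) is:

import numpy as np

def extrema_pool_indices(x, m):
    # Keep only the max-absolute-amplitude activation in each cell of size m; zero the rest.
    y = np.zeros_like(x)
    for start in range(0, len(x), m):
        cell = x[start:start + m]
        k = start + int(np.argmax(np.abs(cell)))
        y[k] = x[k]
    return y

x = np.array([0.1, -0.9, 0.3, 0.5, 0.2, -0.1])
print(extrema_pool_indices(x, m=3))  # [ 0.  -0.9  0.   0.5  0.   0. ]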
Consider several UAVs in a UAV ad-hoc network game with potential function $\phi: S \rightarrow R$. When all UAVs adhere to SPBLLA, if $m$ is large enough, the stochastically stable strategies are maximizers of the potential function, which are PSNEs.
Definition 3 indicates that the change of the utility function equals the change of the potential function, which gives the potential game an ideal property.
Let each UAV alter its strategy as much as possible so that the utility function changes significantly. By calculating the largest difference that a utility function can make in an iteration, we can learn the range of $m$.
However, we have to recognize that the strategy-altering probability $\omega$ severely impacts the efficiency of SPBLLA. If Theorem 5 limits $m$ to be a large value, the probability will decrease. When $m$ is too large, UAVs can hardly move, and the learning rate will decrease. To some ...
According to Appendix B, to make SPBLLA converge, $m$ should be at least twice the largest change in each UAV’s utility function.
D
(a), 9 μs (b), 18 μs (c), 45 μs (d), 65 μs
at 0 μs (a), 9 μs (b), 45 μs (c), 65 μs (d)
(a), 9 μs (b), 18 μs (c), 45 μs (d), 65 μs
9 μs (b), 18 μs (c), 45 μs (d), 65 μs (e),
at 0 μs (a), 9 μs (b), 45 μs (c), 65 μs (d).
C
One of the motivations for our work draws from a collaboration with an industrial partner specialized in cold chain, refrigeration and conditioning (CEMAFROID).
Within this collaboration, a finer consideration of equality was a key notion to select relevant data in SQL in order to get better results.
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A}, x_{A}) = 1_{A}$...
Interestingly, while the results we present in the body of this paper are unchanged by reflexivity, we show in C that reflexivity is a key property to ensure completeness of (extended) Armstrong axioms.
There is ongoing work on SQL queries based on these principles within the context of a collaboration with CEMAFROID.
A
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation between the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect of variance before applying Dropout (DQN) and afte...
The findings indicate that Dropout can effectively reduce the variance and overestimation issues in DQN, leading to more stable learning curves and notably enhanced performance.
Figure 5 demonstrates that using Dropout methods in DQN reduces the overestimation from the optimal policy. Although the Gridworld environment does not suffer from tangible overestimation that can distort the overall cumulative rewards, reducing overestimation leads to more accurate predictions.
In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Reinf...
A
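The Dropout-DQN snippets above do not spell out the network; a minimal PyTorch sketch of a Q-network with Dropout layers (layer sizes and dropout rate are assumptions, not the study's exact architecture) is:

import torch
import torch.nn as nn

class DropoutQNetwork(nn.Module):
    def __init__(self, state_dim, num_actions, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, num_actions),
        )

    def forward(self, state):
        return self.net(state)  # one Q-value per action

q_net = DropoutQNetwork(state_dim=4, num_actions=2)
q_net.eval()  # dropout disabled when acting greedily; enabled during training updates
print(q_net(torch.zeros(1, 4)).shape)  # torch.Size([1, 2])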
Chartsias et al. (2017) used a conditional GAN to generate cardiac MR images from CT images. They showed that utilizing the synthetic data increased the segmentation accuracy and that using only the synthetic data led to only a marginal decrease in the segmentation accuracy. Similarly, Zhang et al. (2018c) proposed a G...
 Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving almost similar performance to full supervision for segmentation of cardiac images. Afshari et al. (2019) used a fully convol...
Collecting large-scale accurate pixel-level annotation is time-consuming and financially expensive. However, unlabeled and weakly-labeled images can be collected in large amounts in a relatively fast and cheap manner. As shown in Figure 2, varying levels of supervision are possible when training deep segmentation model...
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic...
The scarcity of richly annotated medical images is limiting supervised deep learning-based solutions to medical image analysis tasks (Perone and Cohen-Adad, 2019), such as localizing discriminatory radiomic disease signatures. Therefore, it is desirable to leverage unsupervised and weakly supervised models.
B
Importantly, when the solution of the spectral algorithm becomes worse than the random cut, the MAXCUT upper bound is close to 0.5.
In every example, when $\lambda^{s}_{\text{max}}$ becomes lower than $1-\tau$ the solution of the spectral algorithm is still larger than the cut induced by the random parti...
Therefore, when the spectral cut is lower than 0.5 it is possible to return the random partition instead, which yields a nearly-optimal solution.
Therefore, when the spectral cut is lower than 0.5 it is possible to return the random partition instead, which yields a nearly-optimal solution.
In every example, when $\lambda^{s}_{\text{max}}$ becomes lower than $1-\tau$ the solution of the spectral algorithm is still larger than the cut induced by the random parti...
B
Welbl: Welbl (2014) and Biau et al. (2019) present a similar mapping with subsequent fine-tuning. The authors introduce two training modes: independent and joint. The first optimizes each small network individually, while the latter joins all mapped decision trees into one network. Additionally, the authors evaluate a ...
Network splitting (Massiceti et al., 2017) slightly improves the number of parameters of the networks.
Massiceti: Massiceti et al. (2017) present a network splitting strategy to reduce the number of network parameters. The decision trees are divided into subtrees and mapped individually while sharing common split nodes. The optimal depth of the subtrees is determined by evaluating all possible values.
Massiceti et al. (2017) extend this approach and introduce a network splitting strategy by dividing each decision tree into multiple subtrees. The subtrees are mapped individually and share common neurons for evaluating the split decision.
Network splitting proposed by Massiceti et al. (2017) maps multiple subtrees while sharing common split nodes and reduces the average number of network parameters to 748 000.
B
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ...
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We propose an optimistic variant of the proximal policy optimization algorithm, dubbed OPPO, which incorporates the principle of “optimism in the face of uncertainty” into pol...
To answer this question, we propose the first policy optimization algorithm that incorporates exploration in a principled manner. In detail, we develop an Optimistic variant of the PPO algorithm, namely OPPO. Our algorithm is also closely related to NPG and TRPO. At each update, OPPO solves a Kullback-Leibler (KL)-regu...
The policy improvement step defined in (3.2) corresponds to one iteration of NPG (Kakade, 2002), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017). In particular, PPO solves the same KL-regularized policy optimization subproblem as in (3.2) at each iteration, while TRPO solves an equivalent KL-constrained s...
step with $\alpha \rightarrow \infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K = H$ episodes and hence equivalently induces...
B
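As a hedged illustration of the KL-regularized policy improvement step that the snippets above attribute to NPG/TRPO/PPO-style updates (the exact objective (3.2) is not reproduced here), the generic subproblem and its standard closed-form solution are:
\[
\max_{\pi}\ \bigl\langle Q_{k}(s,\cdot),\,\pi(\cdot\mid s)\bigr\rangle-\frac{1}{\eta}\,\mathrm{KL}\bigl(\pi(\cdot\mid s)\,\big\|\,\pi_{k}(\cdot\mid s)\bigr)
\quad\Longrightarrow\quad
\pi_{k+1}(\cdot\mid s)\ \propto\ \pi_{k}(\cdot\mid s)\,\exp\bigl(\eta\,Q_{k}(s,\cdot)\bigr).
\]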
In this section we compare these specialized forms of compression on their respective hardware in terms of absolute performance to identify the most promising compute concepts for DNNs.
Figure 6 shows test accuracy over throughput of the FINN data-flow architectures mapped to a XILINX Ultra96 FPGA using different bit combinations.
Figure 8: Throughput-accuracy trade-off of different compression methods for different processor architectures (CPU, FPGA, GPU) on the CIFAR-10 task.
Notably, whilst fundamentally different in architecture, from a system-level view these three processors, namely ARM Cortex-A57 CPU, NVIDIA Nano GPU, and XILINX Ultra96 FPGA, are comparable as they all exhibit a power consumption in the range of about 5 Watts.
We evaluate the inference throughput of the compressed models on an ARM CPU (Section 5.2.1), Xilinx FPGA (Section 5.2.2) and an embedded NVIDIA GPU (Section 5.2.3).
C
By Lemma 2.3, $\mathcal{U}_{r}$ is a good cover of $B_{r}(X,E)$. Hence, by the nerve lemma (see [49, Corollary 4G.3]), $B_{r}(X,E)$...
One main contribution of this paper is establishing a precise relationship (i.e. a filtered homotopy equivalence) between the Vietoris-Rips simplicial filtration of a metric space and a more geometric (or extrinsic) way of assigning a persistence module to a metric space, which consists of first isometrically embedding...
In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori no...
In this section we consider a certain strong variant of the filling radius satisfying equation (11) which arises from the notion of persistent homology.
One of the insights leading to the notion of persistent homology associated to metric spaces was considering neighborhoods of a metric space in a nice (for example Euclidean) embedding [71]. In this section we formalize this idea in a categorical way.
D
Other recent approaches include DimReader [45], where the authors create so-called generalized axes for non-linear DR methods, but besides explaining a single dimension at a time, it is currently unclear how exactly it can be used in an interactive exploration scenario; and
We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are quite different and ...
FocusChanger [50] empowers users to perform local analyses by setting Points of Interest (POIs) in a linear projection, which is then updated to enhance the representation of the selected POIs. When hovering over specific points, the information of true neighborhood of other points is mapped to the saturation of the co...
Praxis [46], with two methods—backward and forward projection—but it requires fast out-of-sample extensions which are not available for the original t-SNE.
After the analysis, we decided on GEP mainly because it has a good overlap of functionalities with t-viSNE, is well-known, available online, and works correctly with user-provided data. VisCoDeR [22], for example, also provides an overlap of features, but the focus of the tool and the tasks it supports—the comparison o...
C
As previously mentioned, an ever-growing amount of new bio-inspired optimization techniques has been proposed in recent decades (see Figure 1). This overwhelming number of alternatives could make it difficult to choose an appropriate option for a given optimization problem. The vast number of proposals not only casts d...
Particular reasons aside, some algorithms are not created to solve problems and provide a practical advantage, but mainly to be published and gain notoriety without any consideration for their lack of algorithmic novelty and innovation. Examples of this controversy can be found in [14], as the authors state this problem ev...
In [24], the authors claim that grey wolf, moth-flame, whale, firefly, bat, and antlion algorithms are not novel algorithms, and their inspiration has been in the literature for years. To assert this, the authors present a rigorous, component-based analysis of each algorithm that reveals evidence about them: these alg...
We further elaborate on the above statement: our literature analysis revealed that the majority of proposals (more than half, 60%) generate new solutions based on differential vector forces over existing ones, as in the classical PSO or DE. A complementary analysis can be done by departing from this observation towar...
A critical point of reflection associated with this explosion of proposals has been that novel metaphors do not lead to new solvers, and that comparisons undergo serious methodological problems. Although there are increasingly more bio-inspired algorithms, many of them rely on so-claimed novel metaphors that do not cre...
A
The high-level information exploitation can also be regarded as a promotion of the GAE with a shallow architecture.
It should be emphasized that a large $k_{0}$ frequently leads to capturing the wrong information.
Classical clustering models work poorly on large scale datasets. Instead, DEC and SpectralNet work better on the large scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph type datasets, they fail on the general datasets, which is probably caused by the fact that the graph...
(2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective st...
2) It helps to correct the wrong links among samples that are caused by the low-level relationships.
D
Domain-scan and IPv4-scan both show that the number of spoofable ASes grows with the overall number of the ASes in the Internet, see Figure 1. Furthermore, there is a correlation between fraction of scanned domains and ASes. Essentially the more domains are scanned, the more ASes are covered, and more spoofable ASes ar...
Domain-scan and IPv4-scan both show that the number of spoofable ASes grows with the overall number of the ASes in the Internet, see Figure 1. Furthermore, there is a correlation between fraction of scanned domains and ASes. Essentially the more domains are scanned, the more ASes are covered, and more spoofable ASes ar...
Further, to avoid a single point of failure, it is recommended that the name servers of a domain be hosted in multiple networks. This is also our observation when correlating between domains and ASes. Essentially, we find that when testing one domain for each server we can obtain different results, depending on the AS tha...
There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th...
Figure 8. Fraction of domains hosted in multiple ASes. We check how many ASes host services of one domain: 70% of the domains are hosted in one or two ASes.
B
This paper also presents the NN ensemble created in the same way as with SVMs. In the NN ensemble, $T-1$ skill networks are trained using one batch each for training. Each model is assigned a weight $\beta_{i}$ equal to its accuracy on ...
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data a...
This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ...
The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer....
The context processing pathway utilizes the sequential structure of the dataset via recurrent processing. This pathway is incorporated with a feedforward component to define the context+skill model as described above.
C
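The ensemble described above assigns each of the $T-1$ skill networks a weight $\beta_{i}$ equal to its accuracy. A minimal sketch of such an accuracy-weighted vote over class probabilities (how the weights are normalized is an assumption) is:

import numpy as np

def weighted_ensemble_predict(prob_list, betas):
    # prob_list: one (num_samples, num_classes) array per model; betas: one weight per model.
    stacked = np.stack(prob_list)                                        # (num_models, N, C)
    weighted = np.tensordot(np.asarray(betas, dtype=float), stacked, axes=1)  # (N, C)
    return weighted.argmax(axis=1)

probs_a = np.array([[0.7, 0.3], [0.4, 0.6]])
probs_b = np.array([[0.2, 0.8], [0.9, 0.1]])
print(weighted_ensemble_predict([probs_a, probs_b], betas=[0.9, 0.5]))  # [0 0]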
Note that in the final iteration, when $i = t+1$, we take $B = \emptyset$. Now
12:         if $M'$ and $M$ are compatible then
$M$ and $M'$ are compatible if and only if the union of the corresponding path covers
6:                  if $M'$ and $M$ are compatible then
6:              if $M'$ and $M$ are compatible then
B
With this terminology, all states in $Q$ ignore open gates, closed gates, and unmarked and circled letters, so the inductive hypothesis holds trivially for these (in particular, $c \cdot w$ and $c \cdot \tilde{w}$ are always defined in th...
Thus, let $w =_{\mathcal{A}} \tilde{w}$ for $w, \tilde{w} \in P^{+}$. We only show the i...
We obtain the cross diagram depicted in Figure 10 and an analogous diagram for $\tilde{w}$ (compare to the action of the adding machine in 2). Thus, we have
For the remaining types of symbols in $C$ we have the following cross-diagrams, and analogous ones for $\tilde{w}$:
for $1 \leq i < k$. Therefore, we can still apply the claim (1) and obtain an analogous cross diagram for $\tilde{w}$ if $\alpha \cdot \tilde{u}_{k}$...
C
This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any spon...
Following Selvaraju et al. (2019), we train HINT on the subset with human-based attention maps (Das et al., 2017), which are available for 9% of the VQA-CPv2 train and test sets. The same subset is used for VQAv2 too. The learning rate is set to $2 \times 10^{-5}$...
We compare four different variants of HINT and SCR to study the causes behind the improvements including the models that are fine-tuned on: 1) relevant regions (state-of-the-art methods) 2) irrelevant regions 3) fixed random regions and 4) variable random regions. For all variants, we fine-tune a pre-trained UpDn, whic...
Our regularization method, which is a binary cross entropy loss between the model predictions and a zero vector, does not use additional cues or sensitivities and yet achieves near state-of-the-art performance on VQA-CPv2. We set the learning rate to $\frac{2 \times 10^{-6}}{r}$ ...
We compare the baseline UpDn model with HINT and SCR-variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Fu...
B
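The regularizer described above is a binary cross-entropy loss between the model's answer predictions and an all-zero target vector. A minimal PyTorch sketch (how it is weighted against the main VQA loss is an assumption) is:

import torch
import torch.nn.functional as F

def zero_target_regularizer(answer_logits):
    # BCE between the predicted answer scores and a zero vector.
    return F.binary_cross_entropy_with_logits(answer_logits, torch.zeros_like(answer_logits))

answer_logits = torch.randn(32, 3000)  # batch of answer scores (vocabulary size is illustrative)
print(zero_target_regularizer(answer_logits))  # scalar term added to the training loss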
A privacy policy is a legal document that an organisation uses to disclose how they collect, analyze, share, and protect users’ personal information. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users, and laws such as General Data Protection Regul...
Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague word...
Prior collections of privacy policy corpora have led to progress in privacy research. Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categories...
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application ...
Natural language processing (NLP) provides an opportunity to automate the extraction of salient details from privacy policies, thereby reducing human effort and enabling the creation of tools for internet users to understand and control their online privacy. Existing research has achieved some success using expert anno...
D
We have then several options to manipulate this point as shown in Figure 3(c.3): we can remove the point’s instance entirely from the data set or merge a set of points into a new one, which receives either their mean or median values per feature.
and (v) we track the history of the previously stored stacking ensembles in panel (b) of the StackGenVis overview figure and compare their performances against the active stacking ensemble (the one not yet stored in the history) in the StackGenVis overview figure...
Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis. (a) presents the selection of appropriate validation metrics for the specification of the data set. (b) aggregates the information after the exploration of different models and shows the active ones which will be used for th...
The history manager saves the aforementioned manipulations or restores the previous saved step on demand.
Analysts might also want to step back to a specific previous stage in case they reached a dead end in the exploration of algorithms and models (G2).
C
$(v', [323])$ is adjacent to $(v', f')$, to
$(v', [323])$ is adjacent to $(v', f')$, to
We have that $(v, [010])$, $(v, [323])$, and $(v, [313])$
$\overline{3}$, and to $(v, [323])$, and so
$p(v', [323])$ is $2$.
C
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personaliz...
In text classification experiments, we use the CNN proposed in [Bao et al., 2020] as the base model and follow the hyperparameter settings.
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In Persona, we use pre-trained Glove embedding [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019].
Some works use MAML for few-shot text classification, such as relation classification [Obamuyide and Vlachos, 2019] and topic classification [Bao et al., 2020].
A
$\left[\dots,\ e^{j\frac{2\pi}{\lambda_{\text{c}}}\left(\frac{(M-1)d_{\text{cyl}}}{2}\cos\alpha\sin\beta\right)}\right]^{T},$
Based on the designed CCA codebook, the joint subarray partition and AWV selection (SPAS) algorithm is developed in this section to solve the beam tracking problem in (13).
Tracking the AOAs and AODs is essential for beam tracking, which is closely connected with the position and attitude of the t-UAVs and r-UAV. The position and attitude compose the UAV’s motion state information (MSI). In this section, the MSI prediction based AOAs and AODs estimation scheme and the protocol for beam tr...
The rest of this paper is as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Section...
The CCA codebook based SPAS algorithm is proposed in the previous section to solve the joint CCA subarray partition and AWV selection problem. In this section, the TE-aware beam tracking problem is addressed based on the CCA codebook based SPAS algorithm.
C
Presburger formulas that capture all possible sizes of complete simple $A|B$-biregular graphs,
on the matrices that specify the graph constraints. The restriction is that they are “simple matrices”.
In this section we will show how to reduce the non-simple matrices to simple matrices for biregular graphs.
For a pair of simple matrices $A|B$ (with the same number of rows),
where the matrices $A$ and $B$ may have multiple colors, but are what we call simple matrices,
B
$Q^{\ddagger}(x) = \int \sigma(x;\theta)\,\mathrm{d}\underline{\nu}(\theta)$. We assume that $D_{\chi^{2}}(\underline{\nu}\,\|\,\nu$...
Under Assumptions 4.1, 4.2, and 6.1, it holds for $\eta = \alpha^{-2}$ that
Upon telescoping (5.5) and setting $\eta = \alpha^{-2}$, we obtain that
Under Assumptions 4.1 and 4.2, it holds for any $k \leq T/\epsilon\ (k \in \mathbb{N})$ that
Under Assumptions 4.1, 4.2, and 6.3, it holds for $\eta = \alpha^{-2}$ that
D
Our approach with the Transformer base setting brings about more improvements on the English-German task than on the English-French task. We conjecture that this may be because the performance on the English-French task, which uses a large dataset (~36M sentence pairs), may rely more on the capacity of the...
Considering that the layer stacks of the 6-layer Transformer are not that deep and vanilla RNNs can play a similar role as LSTMs, is it possible to train the model with a depth-wise RNN rather than the depth-wise LSTM? We first study using different approaches (Transformer, the depth-wise RNN and the depth-wise LSTM) f...
The encoder layer with the depth-wise LSTM unit, as shown in Figure 2, first performs the self-attention computation, then the depth-wise LSTM unit takes the self-attention results and the output and the cell state of the previous layer to compute the output and the cell state of the current layer.
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transforme...
We show that the 6-layer Transformer using depth-wise LSTM can bring significant improvements in both WMT tasks and the challenging OPUS-100 multilingual NMT task. We show that depth-wise LSTM also has the ability to support deep Transformers with up to 24 layers, and that the 12-layer Transformer using depth-wis...
A
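The depth-wise LSTM layer described above feeds the self-attention output into an LSTM cell whose hidden and cell states come from the previous layer. A minimal PyTorch sketch (dimensions, normalization, and masking are assumptions, not the paper's exact layer) is:

import torch
import torch.nn as nn

class DepthwiseLSTMEncoderLayer(nn.Module):
    def __init__(self, d_model=512, nhead=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.lstm_cell = nn.LSTMCell(d_model, d_model)  # connects layers along the depth dimension

    def forward(self, x, prev_hidden, prev_cell):
        attn_out, _ = self.self_attn(x, x, x)            # (batch, seq, d_model)
        b, t, d = attn_out.shape
        h, c = self.lstm_cell(attn_out.reshape(b * t, d),
                              (prev_hidden.reshape(b * t, d), prev_cell.reshape(b * t, d)))
        return h.view(b, t, d), c.view(b, t, d)           # output and cell state passed to the next layer

layer = DepthwiseLSTMEncoderLayer()
x = torch.zeros(2, 5, 512)
h, c = layer(x, torch.zeros(2, 5, 512), torch.zeros(2, 5, 512))
print(h.shape, c.shape)  # torch.Size([2, 5, 512]) torch.Size([2, 5, 512])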
$\mathsf{FO}$-interpretation that is surjective and continuous from $X$ to $Y$,
$f: \langle X, \uptau, \mathcal{L}\rangle \to \langle Y, \uptheta, \mathcal{L}'\rangle$ is
1.2.2]. A map $f\colon (X,\uptau)\to(Y,\uptheta)$
Recall that $(Y,\uptheta)$ is a pre-spectral subspace of $(X,\uptau)$
whenever $(Y,\uptheta)$ is a pre-spectral space such that $Y \subseteq X$,
C
To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images based on the PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and the proposed MDLD (mean distortion level deviation). All the comparison methods are used to conduct the distortion recti...
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, achieving the highest PSNR and SSIM as well as the lowest MDLD. Specifically, compared with the traditional methods [23, 24] based on hand-crafted features, our approach overcomes the scene li...
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify ...
To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images based on the PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and the proposed MDLD (mean distortion level deviation). All the comparison methods are used to conduct the distortion recti...
In this part, we compare our approach with the state-of-the-art methods in both quantitative and qualitative evaluations, in which the compared methods can be classified into traditional methods [23][24] and learning methods [8][11][12]. Note that our approach only requires a patch of the input distorted image to estim...
A
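As a pointer to how the PSNR/SSIM part of the quantitative comparison above can be computed, here is a minimal scikit-image sketch; MDLD is the paper's own metric and is not reproduced here, and the image arrays are placeholders. Depending on the scikit-image version, `channel_axis` may need to be replaced by the older `multichannel` argument.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_rectification(rectified: np.ndarray, ground_truth: np.ndarray):
    """Return (PSNR, SSIM) of a rectified image against its ground truth,
    assuming uint8 RGB images with values in [0, 255]."""
    psnr = peak_signal_noise_ratio(ground_truth, rectified, data_range=255)
    ssim = structural_similarity(ground_truth, rectified,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```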
All experiments are performed using the PyTorch platform on a server with eight NVIDIA Tesla V100 GPU cards.
We consider three common deep learning tasks: image classification, natural language processing (NLP), and click-through rate (CTR) prediction for large-batch training evaluation.
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
Table 7 shows the training time per epoch of SNGM with different batch sizes. We can observe that larger batch sizes can reduce the training time, which is similar to the results of image classification tasks.
We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset (https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/).
A
11-approximation for inhomogeneous 2S-MatSup-Poly, with $|\mathcal{S}|\leq 2^{m}$.
3-approximation for homogeneous 2S-Sup-Poly with $|\mathcal{S}|\leq(n+1)!$.
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ...
The 3-approximation for 2S-Sup-Poly is presented in Section 3, based on a novel LP rounding technique; notably, its approximation ratio matches the lower bound of the non-stochastic counterpart (Knapsack Supplier).
Here (1) captures the budget constraint, and (2) captures the radius covering constraint. If the instance is feasible for the given 2S-Sup-Poly instance, we can solve the LP. The rounding algorithm appears in Algorithm 3.
C
The graph with a generalized weighted adjacency matrix is often used to describe the competitive and cooperative interaction behaviors that arise in some application scenarios.
So, it is also worth studying the distributed stochastic optimization over the network with the generalized weighted adjacency matrix in the future.
In most of the existing works on distributed convex optimization, it is assumed that the subgradients are bounded if the local cost
The graph with a generalized weighted adjacency matrix is often used to describe the competitive and cooperative interaction behaviors that arise in some application scenarios.
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as being independent and identically distributed, Markovian ...
A
Observing from Figure 7(a), the information loss of MuCo increases with the decrease of the parameter $\delta$. According to Corollary 3.2, each QI value in the released table corresponds to more records with the reduction of $\delta$, so that more records have to be involved for covering on the QI v...
In this experiment, we use the approach of aggregate query answering [37] to check the information utility of MuCo. We randomly generate 1,000 queries and calculate the average relative error rate for comparison. The query is expressed in the following form
In this work, we propose a novel technique, called the Mutual Cover (MuCo), to protect the privacy for microdata publication. The rationale is to make similar records to cover for each other at the minimal cost by perturbing the original QI values according to the random output tables. In this way, MuCo can achieve gre...
We observe that the results of MuCo are much better than that of Mondrian and Anatomy. The primary reason is that MuCo retains the most distributions of the original QI values and the results of queries are specific records rather than groups. Consequently, the accuracy of query answering of MuCo is much better and mor...
Specifically, the query condition contains four random QI attributes, and the sum of salary is the result. We use the same parameters for MuCo and perform Mondrian and Anatomy complying with $l$-diversity for comparison. Since the generated query conditions are strongly stochastic, we report the average values an...
A
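To make the aggregate-query evaluation described above concrete, the sketch below draws random queries over four QI attributes and measures the relative error of a SUM(salary) answer on a released table against the original microdata. Column names, the number of QI attributes drawn, and the equality-style conditions are illustrative assumptions, not the paper's exact query generator.

```python
import numpy as np
import pandas as pd

def avg_relative_error(original: pd.DataFrame, released: pd.DataFrame,
                       qi_cols, target="salary", n_queries=1000, seed=0):
    """Average relative error of SUM(target) over random QI-attribute queries."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_queries):
        cols = rng.choice(qi_cols, size=4, replace=False)   # four random QI attributes
        mask_orig = pd.Series(True, index=original.index)
        mask_rel = pd.Series(True, index=released.index)
        for c in cols:
            v = rng.choice(original[c].unique())             # random condition value
            mask_orig &= original[c] == v
            mask_rel &= released[c] == v
        true_sum = original.loc[mask_orig, target].sum()
        est_sum = released.loc[mask_rel, target].sum()
        if true_sum != 0:
            errors.append(abs(est_sum - true_sum) / abs(true_sum))
    return float(np.mean(errors)) if errors else float("nan")
```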
To fully understand which components contribute to PointRend’s performance, we construct our own validation set by randomly selecting 3000 images from original training data to evaluate offline. We will show the step-by-step improvements adopted on PointRend.
Table 2: PointRend’s step-by-step performance on our own validation set (split from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “...
In the following, we refer to the model in the last row (74.3 mAP) of Table 2 as the PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on the validation and testing sets respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP...
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62....
D
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. May...
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$...
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma so...
D
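For reference, the quantity alluded to above, the entropy of the spectral distribution $\{|\hat f(A)|^{2}\}_{A\subseteq[n]}$ of a function with $L_{2}$ norm $1$, is standardly written as follows (stated here as the standard definition for convenience, not quoted from the note):

```latex
% Spectral (Fourier) entropy of f with \|f\|_2 = 1, so that
% {\hat f(A)^2}_{A \subseteq [n]} is a probability distribution:
H\!\left[\hat f^{\,2}\right] \;=\; \sum_{A\subseteq[n]} |\hat f(A)|^{2}\,
  \log_{2}\frac{1}{|\hat f(A)|^{2}}, \qquad 0\log 0 := 0.
```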
We conduct numerical experiments on synthetic nonstationary linear MDPs to demonstrate the effectiveness of our proposed algorithms.
To make the environment challenging for exploration, our construction falls into the category of combination lock (Koenig & Simmons, 1993). For each of these 5 linear MDPs, there is only one good (and different) chain that contains a huge reward at the end, but 0 reward for the rest of the chain. Further, any sub-optim...
However, all of the aforementioned empirical and theoretical works on RL with function approximation assume the environment is stationary, which is insufficient to model problems with time-varying dynamics. For example, consider online advertising. The instantaneous reward is the payoff when viewers are redirected to a...
We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ...
Bandit problems can be viewed as a special case of MDP problems with unit planning horizon. It is the simplest model that captures the exploration-exploitation tradeoff, a unique feature of sequential decision-making problems. There are several ways to define nonstationarity in the bandit literature. The first one is p...
D
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a...
D
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attemp...
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attemp...
We conduct experiments to explore the impact of the numbers of unseen entities on the performance in open-world entity alignment. We present the results on the ZH-EN datasets in Figure 6. Clearly, the performance gain achieved by leveraging our method significantly increases when there are more unseen entities. For exa...
In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct compr...
Unlike many inductive methods that are solely evaluated on datasets with unseen entities, our method aims to produce high-quality embeddings for both seen and unseen entities across various downstream tasks. To our knowledge, decentRL is the first method capable of generating high-quality embeddings for different downs...
D
In this section, we conduct experiments to compare the proposed VDM with several state-of-the-art model-based self-supervised exploration approaches. We first describe the experimental setup and implementation detail. Then, we compare the proposed method with baselines in several challenging image-based RL tasks. The c...
We evaluate the proposed method on several challenging image-based tasks from OpenAI Gym (http://gym.openai.com/) and Retro (https://retro.readthedocs.io), including
We demonstrate the setup of the experiment in Fig. 10. The equipment mainly includes an RGB-D camera that provides the image-based observations, a UR5 robot arm that interacts with the environment, and different objects in front of the robot arm. An example of the RGB-D image is shown in Fig. 11. We develop a robot env...
In this section, we conduct experiments to compare the proposed VDM with several state-of-the-art model-based self-supervised exploration approaches. We first describe the experimental setup and implementation detail. Then, we compare the proposed method with baselines in several challenging image-based RL tasks. The c...
Upon fitting VDM, we propose an intrinsic reward by an upper bound of the negative log-likelihood, and conduct self-supervised exploration based on the proposed intrinsic reward. We evaluate the proposed method on several challenging image-based tasks, including 1) Atari games, 2) Atari games with sticky actions, which...
A
To do so, we sample 100 random nodes $P\subseteq\Omega$, $|P|=100$, independently generated for each degree, but identical for all methods, and determine $\max_{q\in P}|f(q)-Q_{f}(q)|\approx\|f-Q_{f}\|_{C^{0}(\Omega)}$ ...
Chebfun, and MIP are the only methods that converge down to machine precision (32-bit double-precision arithmetics). The convergence rate is as stated in
However, this does not mean that efficient algorithms to evaluate the resulting interpolants to machine precision are known.
The error bound in Eq. (1.4) only guarantees a polynomial convergence rate, but no exponential convergence;
Consequently, as we demonstrate in Section 8, this allows approximating highly varying functions, such as the Runge function, to machine precision.
A
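A minimal numpy sketch of the error estimate described above, sampling random test nodes and taking the maximum absolute deviation as a proxy for the $C^{0}(\Omega)$ norm; the interpolant `Q_f`, the target `f`, and the assumption $\Omega=[-1,1]^{d}$ are placeholders.

```python
import numpy as np

def sup_norm_error(f, Q_f, dim, n_points=100, seed=0):
    """Approximate ||f - Q_f||_{C^0(Omega)} by the maximum pointwise error
    over randomly sampled nodes (here Omega is assumed to be [-1, 1]^dim)."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-1.0, 1.0, size=(n_points, dim))   # reused for all methods
    return max(abs(f(q) - Q_f(q)) for q in P)
```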
[31, 6] find the worst-case direction that maximizes the Wasserstein distance between projected sample points in one-dimension.
Recently, [32, 33, 34] naturally extend this idea by projecting data points into a $k$-dimensional linear subspace with $k>1$ such that the 2-Wasserstein distance after projection is maximized.
In contrast, the power of the PW test decreases slower since it operates by projecting high-dimensional data points into a low-dimensional subspace.
It is intuitive to understand the differences between two collections of high-dimensional samples by projecting those samples into low-dimensional spaces in some proper directions [29, 30, 31, 6, 32, 33, 34].
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized.
A
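To illustrate the max-sliced construction above, the sketch below approximates the worst-case one-dimensional projection by searching over random unit directions, which is a crude stand-in for the exact maximization used in the cited works; `scipy.stats.wasserstein_distance` provides the 1-D (order-1) Wasserstein distance between empirical samples.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def max_sliced_w1(X, Y, n_directions=500, seed=0):
    """Approximate max-sliced 1-Wasserstein distance between samples
    X, Y of shape (n, d) by maximizing over random projection directions."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    best = 0.0
    for _ in range(n_directions):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)                 # random unit direction
        best = max(best, wasserstein_distance(X @ theta, Y @ theta))
    return best
```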
VAE-type DGMs use amortized variational inference to learn an approximate posterior $q_{\phi}(H|x)$ by maximizing an evidence lower bound (ELBO) on the log-marginal likelihood of the data under the mode...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above mentioned VAEs (in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervised...
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and corre...
Amortization of the inference is achieved by parameterising the variational posterior with another deep neural network (called the encoder or the inference network) that outputs the variational posterior parameters as a function of $X$. Thus, after jointly training the encoder and decoder, a VAE model can perf...
Deep generative models (DGMs) such as variational autoencoders (VAEs) [dayan1995helmholtz, vae, rezende2014stochastic] and generative adversarial networks (GANs) [gan] have enjoyed great success at modeling high dimensional data such as natural images. As the name suggests, DGMs leverage deep learning to model a data g...
C
We examine the inputs through 18 test cases to see whether the circuit is acceptable. Next, DFS verifies that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
Exploration based on previous experiments and graph theory found errors in structural computers with electricity as a medium. The cause of these errors is the basic nature of electric charges: ‘flowing from high potential to low’. In short, the direction of current, which is the flow of electricity, is determined only ...
We examine the inputs through 18 test cases to see whether the circuit is acceptable. Next, DFS verifies that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
To simulate the aforementioned structural computer theory, a device in the form of a USB connection was used. However, as the circuit grows in size, a number of USB-connected simulation devices are required, resulting in cost problems. We decided to verify that the structural computer theory presented so far is actually working...
However, this circuit can confirm that circuit discovery errors occur in Y-shaped grinding (C3 to G3, D3 to G3 / E1 to H1 / I1 to K3, J1 to K3) because the electricity is unconditionally moving to low potential.
D
Hence any function $x^{n}$ with $\gcd(n,q-1)\neq 1$, under the action of $\mathbf{K}$, settles down to the function $x^{q-1}$...
In this section, we aim to compute the possible cycle lengths of the PP through the linear representation defined in (10). As discussed in Section 1.3, given a polynomial $f(x)$, we associate a dynamical system through a difference equation of the form
In this section, we provide examples of estimating the possible orbit lengths of permutation polynomials in the form of Dickson polynomials $D_{n}(x,\alpha)$ [10] of degree $n$ through the linear representatio...
The work [19] also provides a computational framework to compute the cycle structure of the permutation polynomial $f$ by constructing a matrix $A(f)$, of dimension $q\times q$, through the coefficients of the (algebraic) powers of $f^{k}$...
The paper is organized as follows. Section 2 focuses on linear representation for maps over finite fields $\mathbb{F}$, develops conditions for invertibility, computes the compositional inverse of such maps and estimates the cycle structure of permutation polynomials. In Section 3, this linear representat...
A
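As a small companion to the cycle-structure discussion above, the sketch below computes cycle lengths of a permutation polynomial by direct iteration over a prime field $\mathbb{Z}/p$ (the text works over general $\mathbb{F}_q$ and via the linear representation, so this brute-force version is only an illustration).

```python
def cycle_lengths(p, f):
    """Cycle lengths of the map x -> f(x) mod p on {0, ..., p-1},
    assuming f induces a permutation of Z/p."""
    seen = set()
    lengths = []
    for start in range(p):
        if start in seen:
            continue
        x, length = start, 0
        while x not in seen:
            seen.add(x)
            x = f(x) % p
            length += 1
        lengths.append(length)
    return lengths

# Example: x -> x^5 over Z/7 (a permutation since gcd(5, 6) = 1)
# cycle_lengths(7, lambda x: pow(x, 5, 7)) -> [1, 1, 2, 2, 1]
```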
In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of vi...
Excluding the interpolating predictor, nonnegative ridge regression produced the least sparse models. This is not surprising considering it performs view selection only through its nonnegativity constraints. Its high FPR in view selection appeared to negatively influence its test accuracy, as there was generally at lea...
In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of vi...
The nonnegative elastic net, with its additional $L_{1}$ penalty compared with ridge regression, is one such method. In our simulations it produced sparser models than nonnegative ridge regression, usually with better or comparable accuracy. These sparser mode...
In this article we investigate how the choice of meta-learner affects the view selection and classification performance of MVS. We compare the following meta-learners: (1) the interpolating predictor of Breiman (\APACyear1996), (2) nonnegative ridge regression (Hoerl \BBA Kennard, \APACyear1970; Le Cessie \BBA Van Houw...
A
Table 8: $p$-values of the Wilcoxon Signed Ranks Test on DepAD algorithms paired with the benchmark methods.
Wilcoxon signed ranks tests are conducted on the results of each of the two DepAD algorithms, i.e., FBED-CART-PS and FBED-CART-Sum, pairwise with each of the nine benchmark methods. The alternative hypothesis is that a DepAD algorithm is better than the comparison method. The $p$-values are shown in Table 8, w...
Effectiveness: The two DepAD algorithms, FBED-CART-PS, and FBED-CART-Sum, demonstrate superior performance over nine state-of-the-art anomaly detection methods in the majority of cases. The two DepAD methods do not outperform wkNN. However, they show advantages over wkNN in higher dimensional datasets in terms of both ...
According to Figure 7 and Table 8, the two DepAD algorithms are significantly better than all benchmark methods except for wkNN and iForest in terms of ROC AUC. With wkNN, the results are similar. With iForest, the $p$-values are very close to 0.05. In terms of AP, the two DepAD algorithms yield significantly...
In this subsection, we answer the question: how do the instantiated DepAD algorithms perform compared with state-of-the-art anomaly detection methods? We choose the two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, and compare them with the nine state-of-the-art anomaly detection methods shown in Ta...
C
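The paired one-sided tests described above can be reproduced with SciPy (recent versions expose the `alternative` argument); the score arrays below are placeholders for per-dataset results of one DepAD algorithm and one benchmark method.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-dataset ROC AUC scores (same datasets, paired).
depad_scores = np.array([0.91, 0.88, 0.95, 0.83, 0.90, 0.87])
benchmark_scores = np.array([0.89, 0.84, 0.93, 0.85, 0.86, 0.82])

# One-sided Wilcoxon signed-rank test; H1: the DepAD algorithm is better.
stat, p_value = wilcoxon(depad_scores, benchmark_scores, alternative="greater")
print(f"statistic={stat:.3f}, p-value={p_value:.4f}")
```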
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
choice model for capturing consumer purchase behavior in assortment selection models (see Flores et al. [2019] and Avadhanula [2019]). Recently, large-scale field experiments at Alibaba [Feldman et al., 2018] have demonstrated the efficacy of the MNL model in boosting revenues. Rusmevichientong et al. [2010] and Sauré ...
Our result is still $\mathrm{O}(\sqrt{d})$ away from the minimax lower bound of Chu et al. [2011] known for the linear contextual bandit. In the case of logistic bandits, Li et al. [2017] makes an i.i.d. assumption on the contexts to bridge the gap (however, they ...
In summary, our work establishes strong worst-case regret guarantees by carefully accounting for local gradient information and using second-order function approximation for the estimation error.
where pessimism is the additive inverse of the optimism (difference between the payoffs under true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that $\theta_{*}\in C_{t}(\delta)$ ...
C
Table 1: Action detection results on validation set of THUMOS-14, measured by mAP (%) at different tIoU thresholds. Our VSGN achieves the highest mAP at tIoU threshold 0.5 (commonly adopted criteria), significantly outperforming all other methods.
∗ Re-implementation with the same features as ours. We replace 3D convolutions with 1D convolutions to adapt to the feature dimension.
∗ Re-implementation with the same features as ours. We replace 3D convolutions with 1D convolutions to adapt to the feature dimension.
We compare the inference time of different methods on the ActivityNet validation set on a 1080ti GPU in Table 8. Compared to end-to-end frameworks such as PBRNet, the methods using pre-extracted features such as BMN, G-TAD and VSGN can re-use the features extracted for other tasks, and these methods do not introduce co...
Cross-scale graph network. The xGN module contains a temporal branch to aggregate features in a temporal neighborhood, and a graph branch to aggregate features from intra-scale and cross-scale locations. Then it pools the aggregated features into a smaller temporal scale. Its architecture is illustrated in Fig. 4. The ...
A
(ii) in the next exploration phase, compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c–e));
R4: Contrast the results of all model-generation stages and update the majority-voting ensemble. In evolutionary optimization, a crossover and mutation phase leads to a propagation of more crossover and mutation phases with exponential growth (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolu...
(iii) during the detailed examination phase, zoom in into interesting clusters already explored in the previous phase, and focus on indications that confirm either their approval in the ensemble or their need for transformation through the evolutionary process (cf. VisEvol: Visual Analytics to Support Hyperparameter Se...
(ii) in the next exploration phase, compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c–e));
After another hyperparameter space search (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d)) with the help of supporter views (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c, f, and g)), out of the 290 models generated in...
B
This algorithm treats the spatial distribution of swarm agents, called the density distribution, as a probability distribution and employs the Metropolis-Hastings (M-H) algorithm to synthesize a Markov chain that guides the density distribution toward a desired state.
The probabilistic guidance algorithm led to the development of numerous Markov chain synthesis algorithms involving specific objectives and constraints [8, 9, 10, 11, 12, 13, 14, 15, 16, 17].
In this section, we apply the DSMC algorithm to the probabilistic swarm guidance problem and provide numerical simulations that show the convergence rate of the DSMC algorithm is considerably faster as compared to the previous Markov chain synthesis algorithms in [7] and [14].
The current literature covers a broad spectrum of methodologies for Markov chain synthesis, incorporating both heuristic approaches and optimization-based techniques [4, 5, 6]. Each method provides specialized algorithms tailored to the synthesis of Markov chains in alignment with specific objectives or constraints.
Markov chain synthesis plays a central role in probabilistic swarm guidance, which has led to the development of various algorithms incorporating additional transition and safety constraints [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17].
A
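A minimal sketch of the Metropolis-Hastings construction described above: given a desired density $\pi$ over bins and a symmetric bin-adjacency graph, a uniform random-walk proposal combined with the M-H acceptance rule yields a column-stochastic transition matrix whose stationary distribution is $\pi$. The proposal choice and the requirement that $\pi$ be strictly positive on a connected bin graph are assumptions of this illustration, not of the cited algorithms.

```python
import numpy as np

def mh_transition_matrix(pi, adjacency):
    """Markov matrix M with M[i, j] = P(next bin i | current bin j) and
    stationary distribution pi, built via Metropolis-Hastings on a bin graph."""
    n = len(pi)
    deg = adjacency.sum(axis=0)          # neighbors per bin (symmetric 0/1 adjacency)
    M = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j and adjacency[i, j]:
                proposal = 1.0 / deg[j]                                  # uniform proposal
                accept = min(1.0, (pi[i] * deg[j]) / (pi[j] * deg[i]))   # M-H acceptance
                M[i, j] = proposal * accept
        M[j, j] = 1.0 - M[:, j].sum()    # rejected proposals keep the agent in place
    return M
```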
We use the registration subset with 10 poses for each class and downsample each shape to 2,000 faces.
In contrast, HiPPI and our method require shape-to-universe representations. To obtain these, we use synchronisation to extract the shape-to-universe representation from the pairwise transformations. By doing so, we obtain the initial $U$ and $Q$. We refer to this method of synchronising the ZoomOut r...
While the PCK curves between ours, ZoomOut+Sync and HiPPI in Fig. 2 are close, the AUC in Tab. 2 shows that our performance is still superior by a small margin. Qualitative results can be found in the supplementary material.
Partial functional maps are rectangular and low-rank [58], and this experiment shows that our method can also handle this more general case. More details can be found in the supplementary material.
Our method shows state-of-the-art results and surpasses all competitors on this dataset, see Fig. 2 and Tab. 2.
B
On the side of directed path graphs, at the current state of the art, our algorithm is the only one that does not use the results in [4], which give a linear time algorithm to establish whether a path graph is also a directed path graph (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ...
The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prov...
On the side of directed path graphs, we first extend the characterization in [1] for path graphs to directed path graphs, and then we adapt the recognition algorithm for path graphs to directed path graphs, obtaining algorithm RecognizeDPG.
In this section we report the characterization of path graphs and directed path graphs described in [18]. We start with a formal definition of these classes of graphs.
In this way, we do not improve the time complexity, but we unify and strictly simplify the study of path graphs and directed path graphs from the algorithmic point of view.
D
Conflict of interest/Competing interests (check journal-specific guidelines for which heading to use) None
We report the averaged mixed Hamming error rates for our methods and the other three competitors in Table 4. Mixed-$\mathrm{SLIM}_{\tau appro}$ outperforms the other three Mixed-SL...
Authors’ contributions. Qing mainly worked on the algorithm and theoretical properties. Wang mainly worked on the algorithm and whole paper organization.
In this section, we first introduce the main algorithm mixed-SLIM which can be taken as a natural extension of the SLIM (SLIM, ) to the mixed membership community detection problem. Then we discuss the choice of some tuning parameters in the proposed algorithm.
http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the original authors, and they are regarded as the “ground truth” to investigate the performances of Mixed-SLIM methods in this paper.
B
For instance, $\mathcal{X}$ can be a torus $\mathbb{T}^{d}$, which can be viewed as the $d$-dimensional hypercube $[0,1)^{d}$
To study optimization problems on the space of probability measures, we first introduce the background knowledge of the Riemannian manifold and the Wasserstein space. In addition, to analyze the statistical estimation problem that arises in estimating the Wasserstein gradient, we introduce the reproducing kernel Hilber...
We specialize to such a structure only for rigorous theoretical analysis, which also appears in other works involving the Wasserstein space (Gräf and Hielscher, 2015).
artifacts adopted only for theoretical analysis. We present the details of such a modified algorithm in Algorithm 2 in §A.
over the Wasserstein space $\mathcal{P}_{2}(\mathcal{X})$. Such an optimization problem
B
To learn effective decentralized policies, there are two main challenges. Firstly, it is impractical to learn an individual policy for each intersection in a city or a district containing thousands of intersections. Parameter sharing may help. However, each intersection has a different traffic pattern, and a simple sha...
may make learning non-stationary because the agent may receive different rewards and observation transitions for the same action at the same observation. In this case, the received rewards and observation transitions of the current agent could not be well predicted only conditioned on its own observations and performe...
The observation-action history of agent $i$ at time $t$ is denoted as $\tau_{i,:t}$. $\mathcal{R}=\{\mathcal{R}_{i}\}_{i=1}^{N}$ ...
Before formulating the problem, we first design the learning paradigm by analyzing the characteristics of traffic signal control (TSC). Due to the coordination among different signals, the most direct paradigm may be centralized learning. However, the global state information in TSC is not only highly redundant a...
Secondly, even for a specific task, the received rewards and observations are uncertain to the agent, as illustrated in Fig. 1, which makes policy learning unstable and non-convergent. Even if the agent performs the same action on the same observation at different timesteps, the agent may receive different rewards a...
D
   >> J = @(lambda,X,lambda0,X0,G,S) G*X-lambda*X0-lambda0*X-X*S;   % enter the Jacobian
   >> domain = {'1+x+x^2','1+x+x^2+x^3', '1+x'};   % representation of the domain for the mapping f
$\mathpzc{dim}_{\mathbf{f}}(\mathbf{x}_{*})+\mathpzc{rank}\left(\,\mathbf{f}_{\mathbf{x}}(\mathbf{x}_{*})\,\right)=$ the dimension of the domain of $\mathbf{f}$.
   >> domain = ones(4,1); parameter = {P,J,v};   % domain (space of 4x1 vectors) and parameters
   >> domain = {1,ones(n,k)};   % representation of the domain for the mapping g
D
Last, suppose that for some size $x$, it is $f_{x}>0$, whereas its prediction is $f^{\prime}_{x}=0$. In ...
implementation of ProfilePacking, we use the algorithm FirstFitDecreasing (?) to compute the profile packing, instead of an optimal algorithm. Specifically, FirstFitDecreasing first sorts items in the non-increasing order of their sizes and then packs the sorted sequence using FirstFit. Using FirstFitDecreasing helps r...
ProfilePacking packs these special items separately from others, using FirstFit. Algorithm 1 describes ProfilePacking in pseudocode.
14:      use FirstFit to pack $\sigma[i]$   ▷ $x$ is a special item
As stated in Section 2, we assume a discrete model in which items have integral sizes in $[1,k]$. While this is a natural model for many AI applications, our algorithms can also handle fractional item sizes in $[1,k]$, by treating them as “special” items, in the sense that th...
B
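Since ProfilePacking above uses FirstFitDecreasing as a subroutine, here is a compact, self-contained sketch of that classical heuristic for the discrete model described above (integral item sizes in $[1,k]$, bins of capacity $k$); it is a generic textbook version, not the authors' code.

```python
def first_fit_decreasing(items, k):
    """Pack integral item sizes in [1, k] into bins of capacity k:
    sort items in non-increasing order, then place each item into the
    first open bin with enough remaining capacity (FirstFit)."""
    free = []       # remaining capacity of each open bin
    packing = []    # items assigned to each bin
    for size in sorted(items, reverse=True):
        for i, cap in enumerate(free):
            if size <= cap:
                free[i] -= size
                packing[i].append(size)
                break
        else:
            free.append(k - size)       # open a new bin
            packing.append([size])
    return packing

# Example: first_fit_decreasing([4, 8, 1, 4, 2, 1], k=10) -> [[8, 2], [4, 4, 1, 1]]
```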
Although these modifications improve the quality of obtained results, their objective is to fix the deformations after patches’ stitching.
The proposed framework overcomes the limitations of previous methods. First, we theoretically solve the problem of stitching partial meshes since every chart is informed about its local neighborhood. Second, our method can easily fill the missing spaces in the final mesh by adding a new mapping for the region of intere...
To mitigate the issue of the discrete atlas, we define Continuous Atlas, a novel paradigm for meshing any object with an atlas that is leveraged in our method. In the first step, we construct a mapping that models a local structure of the object $S$. By Continuous Atlas ($\mathcal{CA}$...
In this paper we propose a different approach to solve such a problem - we reformulate the classical definition of atlas to obtain maps which are correctly connected. Therefore, our method tries to suppress the issue before it even occurs in the first place.
In this paper, we introduced a novel approach for generating high-quality 3D meshes composed of 2D patches directly from raw point clouds. We presented a Continuous Atlas paradigm that allows our model, Locally Conditioned Atlas, to produce an arbitrary number of patches to form a watertight mesh. The empirical evaluat...
C
$\nu^{*}\in\operatorname*{Arg\,max}_{\|\nu\|_{2}\leq R}\varphi(\nu)$ and $\max_{\nu\in\mathbb{R}\ldots}$
$\geq h(\tilde{\theta})+\langle\nu^{*},\mathbf{A}\tilde{\theta}-b\rangle$
Also note that $h(\tilde{\theta})=\min_{\theta\in\Theta}\psi(\theta)$
$h(\tilde{\theta})+\langle\nu^{*},\mathbf{A}\tilde{\theta}-b\rangle$
we obtain that $(\tilde{\theta},\nu^{*})$
B
The inequality follows since $d(u)-2-\epsilon(u,h)\geq 0$.
By intrinsic tree invariant we denote a map $f:\mathscr{T}\rightarrow\mathbb{R}$ on the set of all trees. Of particular interest
follows: suppose that there exists an intrinsic tree invariant $f:\mathscr{T}\rightarrow\mathbb{R}$ such that for every graph $G$
Let $G=(V,E)$ be a directed connected graph and $w:E\rightarrow\mathbb{R}$ be an edge function. We call $w$ a discrete 1-form on $G$. Integrating $w$ is the problem of finding a vertex function $x:V\rightarrow\mathbb{R}$ ...
$\cap_{G}:\mathscr{T}_{G}\rightarrow\mathbb{N}$
A
$\sigma,\tau\in K,\ \sigma\cap\tau=\emptyset\implies\{g_{\bullet}(\sigma),g_{\bullet}(\tau)\}$ is generic.
If we use Lemma 4.8 in place of Lemma 4.6 in the proof of Theorem 2.1, the hypothesis on the $m$-colored family $\mathcal{F}$ can be weakened. This “improved” Theorem 2.1 can in turn be applied in the proof of Theorem 1.2, yielding the following:
Let $K$ be a simplicial complex on $n$ vertices. For any $m>\mu(K)$ there exists a generic nontrivial chain map from $C_{\bullet}(K)$ to $C_{\bullet}(G[n]^{m})$...
Roughly speaking, the following “Picasso Lemma” asserts that any simplicial complex can be realized within a cubical complex via a generic chain map. (See Figure 2.)
Figure 2. The graph $K_{5}$ (considered as a 1-dimensional simplicial complex) realized as a subcomplex of the grid complex $G[5]^{3}$ via the generic chain map given in L...
C
At this stage, we concentrated on generating new and more effective features from the existing ones. FeatureEnVi provides assistance in this procedure by highlighting only the strong correlation of features (with the current value for the Fs COR) in gray lines (cf. Fig. 7(e)). We tested all old links between pairs of f...
To verify each of our interactions, we continuously monitor the process through the punchcard, as shown in Fig. 6(c). From this visualization, we acknowledge that when F16 was excluded, we reached a better result. The feature generation process (described previously) led to the best predictive result we managed to acco...
This technique is referred to as Ranking-based FS [85] in our VA system. We would like to include further techniques in the future, however, the current selection is specifically assembled to contain one candidate for each of the high-level categories of feature selection methods introduced in Section 1. For every meth...
Using our approach, we managed to achieve the same accuracy as before, 89%, compared to 83% reported by Mansouri et al. [94] for the additional external data set. For precision and recall, we always use macro-average, which is identical to Mansouri et al. [94]. On the one hand, the precision was 4% lower in both test a...
Throughout the aforementioned phases, we utilized feature engineering to improve the most powerful XGBoost model found through hyperparameter tuning. To verify whether our findings were reliable, we applied the resulting ML model to the same test and external validation sets as Mansouri et al. [94], see Table IV. For t...
D
Note that $k_{\mathrm{tot}}$ depends implicitly on the vector of parameters $\theta$; accordingly, we define the cost function as $g_{0}(\theta):=k_{\mathrm{tot}}$...
We model the system as two uncoupled axes with identical parameters. According to (1), the plant can be described by the transfer function $G(s)$, from the force input to the position of the system, $p$, defined as
For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using a Latin hypercube design of experiments. The BO progress is shown in Figure 5, right panel, for the optimization with constraints on the jerk and on the tracking error. Af...
For the second goal, aiming to minimize deviations and oscillations in the system, we introduce two constraints as
The cost function of MPCC is designed to match the goals of the contouring controller for the biaxial system. We penalize the squared longitudinal error $\hat{e}_{l}^{2}$ ...
C
We use the GQA visual question answering dataset [33] to highlight the challenges of using bias mitigation methods on real-world tasks. It has multiple sources of biases including imbalances in answer distribution, visual concept co-occurrences, question word correlations, and question type/answer distribution. It is u...
We first present the mean per group accuracy for all eight methods on all three datasets in Table. 1 to see if any method does consistently well across benchmarks. For this, we used class and gender labels as explicit biases for CelebA. For Biased MNISTv1, there are multiple ways to define explicit biases, but for this...
For each dataset, we assess all bias mitigation methods with the same neural network architecture. For CelebA, we use ResNet-18 [29]. For Biased MNISTv1, we use a convolutional neural network with four ReLU layers consisting of a max pooling layer attached after the first convolutional layer. For GQA-OOD, we employ the...
For each dataset, we use the class label y𝑦yitalic_y and the explicit bias variables be⁢x⁢p⁢l.subscript𝑏𝑒𝑥𝑝𝑙b_{expl.}italic_b start_POSTSUBSCRIPT italic_e italic_x italic_p italic_l . end_POSTSUBSCRIPT to define explicit groups for Up Wt, GDRO and IRMv1. For instance, for CelebA, hair color and gender result in f...
We compare seven state-of-the-art bias mitigation methods on classification tasks using Biased MNISTv1 and CelebA, measuring generalization to minority patterns, scalability to multiple sources of biases, sensitivity to hyperparameters, etc. We ensure fair comparisons by using the same architecture, optimizer, and perf...
B
In addition, some methods use generative adversarial networks (GAN) to pre-process eye images to handle specific environment factors.
Besides deep learning-based gaze estimation methods, we also summarize the practices of gaze estimation.
Besides the supervised approaches for extracting gaze features, unannotated eye images have also been used for learning gaze representations.
Facial landmarks have also been used as additional features to model the head pose and eye position.
Two streams of CNN are used for extracting individual features from left/right eye images, the other two streams are used for extracting joint features of two eye images.
B
COVID-19 can be spread through contact and contaminated surfaces; therefore, classical biometric systems based on passwords or fingerprints are no longer safe. Face recognition is safer since it does not require touching any device. Recent studies on coronavirus have proven that wearing a face mask by a healthy and in...
Occlusion is a key limitation of real-world 2D face recognition methods. Generally, it arises from wearing hats, eyeglasses, masks, or any other objects that can occlude a part of the face while leaving the rest unaffected. Thus, wearing a mask is considered the most difficult facial occlusion challenge since i...
However, wearing a face mask causes the following problems: 1) fraudsters and thieves take advantage of the mask, stealing and committing crimes without being identified; 2) community access control and face authentication become very difficult tasks when a large part of the face is hidden by a mask; 3) existing...
To tackle these problems, we distinguish two different tasks, namely face mask recognition and masked face recognition. The first one checks whether a person is wearing a mask or not. This can be applied in public places where the mask is compulsory. Masked face recognition, on the other hand, aims to recognize a face...
Real-World-Masked-Face-Dataset wang2020masked is a masked face dataset devoted mainly to improving the recognition performance of existing face recognition technology on masked faces during the COVID-19 pandemic. It contains three types of images, namely the Masked Face Detection Dataset (MFDD), Real-world Masked Fa...
B
$\Gamma,y:A_{k}\vdash^{\omega}x^{\mathrm{W}}.k\,y::(x:\oplus\{\ell:A_{\ell}\}_{\ell\in S})$
$\&\mathrm{R}^{\omega}$
$\exists\mathrm{R}^{\omega}$
$\oplus\mathrm{R}^{\omega}$
$\land\mathrm{R}^{\omega}$
C
In this section, we bring forward two cloud media sharing schemes, namely FairCMS-I and FairCMS-II. FairCMS-I essentially delegates the re-encryption management of LUTs to the cloud, thus significantly reducing the overhead of the owner side. Nevertheless, FairCMS-I cannot achieve IND-CPA security for the media content...
The owner-side efficiency and scalability of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Compared with FairCMS-I, it is easy to see that in FairCMS-II the cloud's overhead is increased considerably due to the ado...
Thirdly, there are also studies that deal with both privacy-protected access control and traitor tracing. Xia et al. [26] introduced the watermarking technique to privacy-protected content-based ciphertext image retrieval in the cloud, which can prevent the user from illegally distributing the retrieved images. However...
In the user-side embedding AFP, since the encrypted media content shared with different users is the same, the encryption of the media content is only executed once. In contrast, due to the personalization of D-LUTs, once a new user initiates a request, the owner must interact with this user to securely distribute the ...
In FairCMS-I and FairCMS-II, on the one hand, although the user generates the fingerprint $\mathbf{b}_{k}$ on his/her own, the user is unable to know the sequence $\bar{\mathbf{G}}\mathbf{w}_{k}$ ...
C
The selected feature interactions of order-3 and order-4 are mostly not overlapped in the correctly predicted instance (a). In instance (a), our model selects relevant feature fields (Gender, Age, ReleaseTime, WatchTime) for Genre in order-3, while it selects the other two feature fields (Occupation, Gender) in order-4.
The selected feature interactions of order-3 and order-4 are mostly not overlapped in the correctly predicted instance (a). In instance (a), our model selects relevant feature fields (Gender, Age, ReleaseTime, WatchTime) for Genre in order-3, while it selects the other two feature fields (Occupation, Gender) in order-4.
However, in the wrongly predicted instances (b), the feature interactions of order-3 and order-4 are mostly not overlapped.
However, not all feature interactions are beneficial, and GNNs rely on the assumption that neighboring nodes share similar features, which may not always hold in the context of feature interaction modeling.
Since the features along with the selected beneficial feature interactions are treated as a graph, it can provide human-readable interpretations of the prediction. Here we visualize heat maps of estimated edge weights of two cherry-picked instances on the MovieLens-1M dataset in Fig. 4. We show the measured edge weights of each ...
B
$\mathbf{y}+\gamma(\mathbf{x}-\mathbf{y})+\gamma(1-\gamma)\cdot\kappa\|\mathbf{x}-\mathbf{y}\|^{q}\,\mathbf{z}\in\mathcal{X}.$
The next lemma that will be presented is an extension of the one used in [Kerdreux et al., 2021, Lemma A.1] (see also Temlyakov [2015]), and allows us to go from per-iteration contractions to convergence rates.
The previous definition allows us to obtain a scaling inequality very similar to the one shown in Proposition 2.10, which is key to proving the following convergence rates, and can be implicitly found in Kerdreux et al. [2021] and Garber & Hazan [2016].
Next, we recall the definition of uniformly convex sets, used in Kerdreux et al. [2021], which will allow us to obtain improved convergence rates for the FW algorithm over uniformly convex feasible regions.
One of the key inequalities used in the proof is a scaling inequality from Lacoste-Julien & Jaggi [2015] very similar to the one shown in Proposition 2.10 and Proposition 2.13, which we state next:
B
This property is formalized in Observation 4.2 and the process for finding these odd cycles is formalized in Definition 4.3 and Lemma 4.4.
The primary goal of Extend-Active-Paths is to extend active paths of a maximal (not necessarily maximum) number of distinct free nodes with respect to a given ordering of arcs. Algorithm 7 does not achieve the same guarantee. As a consequence of such behavior of Algorithm 7, Backtrack-Stuck-Structures potentially reduces...
      Extend-Active-Paths($\mathcal{P}$)
A very convenient property of odd cycles is that as soon as they are discovered by the algorithm, their arcs can never belong to two distinct structures of the free vertices.
Our main challenge is that on the path $\alpha-\beta$, there can be many events by active paths of many distinct free vertices, where some active paths are blocked by other active paths and others form odd cycles.
D
In practice, the parameters $\beta,\gamma,\eta$ and stepsize $\alpha$ can be chosen in the same way as in CPP.
The Push-Pull/$\mathcal{AB}$ method introduced in [24, 25] modified the gradient tracking methods to deal with directed network topologies without the push-sum technique.
In this section, we compare the numerical performance of CPP and B-CPP with the Push-Pull/$\mathcal{AB}$ method [24, 25].
Figure 4: Performance of CPP and Push-Pull/$\mathcal{AB}$ with different communication networks under both quantization and Rand-$k$ compressors.
The performance of Push-Pull/$\mathcal{AB}$, CPP and B-CPP is illustrated in Fig. 1.
B
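As a small illustration of the Rand-$k$ compressor mentioned in the figure caption above, the sketch below keeps $k$ uniformly chosen coordinates of a vector and rescales them so the operator is unbiased; this is the standard construction, stated as an assumption rather than the paper's exact operator.

```python
import numpy as np

def rand_k(x: np.ndarray, k: int, rng=None):
    """Rand-k compression: keep k random coordinates of x, zero out the rest,
    and scale by d/k so that E[rand_k(x)] = x (unbiasedness)."""
    if rng is None:
        rng = np.random.default_rng()
    d = x.size
    out = np.zeros_like(x, dtype=float)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out
```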
SPPs cover a wider range of problems than minimization ones and have numerous important practical applications [6].
These include well-known examples from game theory and optimal control [7]. In recent years, saddle point problems have become popular in several other respects.
One can note a branch of recent work devoted to solving non-smooth problems by reformulating them as saddle point problems [8, 9], as well as applying such approaches to image processing
Furthermore, there are a lot of personalized federated learning problems utilize saddle point formulation. In particular, Personalized Search Generative Adversarial Networks (PSGANs) [22]. As mentioned in examples above, saddle point problems often arise as an auxiliary tool for the minimization problem. It turns out t...
Saddle Point Problems. While all previous results in the personalized setting focus on the minimization problem, we consider Saddle Point Problems (SPPs).
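For reference, a generic saddle point problem can be written as the min-max problem below; this standard formulation is included only to fix notation and is not quoted from the cited works:
\[
\min_{x\in\mathcal{X}}\max_{y\in\mathcal{Y}} f(x,y),
\qquad\text{where a saddle point }(x^{*},y^{*})\text{ satisfies } f(x^{*},y)\leq f(x^{*},y^{*})\leq f(x,y^{*})\ \text{ for all } x\in\mathcal{X},\ y\in\mathcal{Y}.
\]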
A
MG(C)CE can provide solutions in general-support and, similar to MECE, MG(C)CE permits a scalable representation when the solution is full-support. Under this scenario, the distribution inequality constraint variables, $\beta$, are inactive (equal to zero) and can be dropped, and the $\alpha$ variable...
The Gini impurity is defined as $1-\sigma^{T}\sigma$, and the MG(C)CE is denoted $\sigma^{*}$. We use an equivalent standard form objective $-\frac{1}{2}\sigma^{T}\sigma$...
\[
L_{\sigma}^{\alpha_{p},\beta,\lambda}
=\tfrac{1}{2}\sigma^{T}\sigma+\alpha^{T}(A\sigma-\epsilon)-\beta^{T}\sigma+\lambda(e^{T}\sigma-1)
=\tfrac{1}{2}\sigma^{T}\sigma+\alpha^{T}(A\sigma-\epsilon)+\lambda(e^{T}\sigma-1)
\]
The MG(C)CE, $\sigma^{*}$, has the following forms:
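To make the objective concrete, the short check below (illustrative only, not taken from the paper) verifies numerically that the Gini impurity $1-\sigma^{T}\sigma$ is maximised over the probability simplex by the uniform distribution, which is what motivates using it as an equilibrium-selection criterion.

    import numpy as np

    def gini_impurity(sigma):
        # Gini impurity of a probability vector sigma: 1 - sigma^T sigma
        sigma = np.asarray(sigma, dtype=float)
        return 1.0 - sigma @ sigma

    rng = np.random.default_rng(0)
    n = 4
    uniform = np.full(n, 1.0 / n)
    samples = rng.dirichlet(np.ones(n), size=10_000)       # random points on the simplex
    assert all(gini_impurity(s) <= gini_impurity(uniform) + 1e-12 for s in samples)
    print(gini_impurity(uniform))                           # 1 - 1/n = 0.75 for n = 4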
D
$\delta$, then $\delta$ is essentially a function of $\epsilon$. Given such a function $\delta:\mathbb{R}^{+}\rightarrow[0,1]$, we will say the mechanism is $(\epsilon,\delta(\epsilon))$...
We note that the first part of this definition can be viewed as a refined version of zCDP (Definition B.18), where the bound on the Rényi divergence (Definition B.5) is a function of the sample sets and the query. As for the second part, since the bound depends on the queries, which themselves are random variables, it ...
data elements to the posterior induced by some view $v$. The degree to which a query $q$ overfits to the dataset is expressed by the correlation between the query and that Bayes factor. This simple lemma is at the heart of the progress that we make in this paper, both in our intuitive understanding of...
One small extension of the present work would be to consider queries with range $\mathbb{R}^{d}$. It would also be interesting to extend our results to handle arbitrary normed spaces, using appropriate noise such as perhaps the Laplace mechani...
We start by introducing a particular family of queries known as linear queries, which will be used to state the main results in this paper, but it should be noted that many of the claims extend to arbitrary queries as discussed in Section C.2.
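As a concrete, deliberately generic illustration of these objects, the sketch below evaluates a bounded linear query on a dataset and answers it through the Gaussian mechanism with the classical calibration $\sigma=\frac{\Delta}{\epsilon}\sqrt{2\ln(1.25/\delta)}$; the query, dataset, and parameters are illustrative assumptions, not part of the paper's analysis.

    import numpy as np

    def linear_query(phi, data):
        # q(S) = (1/n) * sum_i phi(x_i), with phi mapping each record into [0, 1]
        return float(np.mean([phi(x) for x in data]))

    def gaussian_mechanism(true_answer, n, eps, delta, rng):
        # a [0,1]-bounded linear query changes by at most 1/n when one record changes,
        # so the classical (eps, delta)-DP noise calibration applies
        sensitivity = 1.0 / n
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        return true_answer + rng.normal(0.0, sigma)

    rng = np.random.default_rng(1)
    data = rng.uniform(size=1000)              # hypothetical dataset of scalar records in [0, 1]
    phi = lambda x: float(x > 0.5)             # an illustrative bounded predicate query
    print(gaussian_mechanism(linear_query(phi, data), len(data), eps=1.0, delta=1e-5, rng=rng))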
D
As a corollary to this theorem, we obtain a new type of parameterized-tractability result for Feedback Vertex Set. For an integer $z$, let the $z$-antler complexity of $G$ be the minimum number $k$ for which there exists a (potentially long) sequence $C_{1},F_{1},\ldots,C_{t},F_{t}$...
The first type of safety ensures that finding vertices that belong to an optimal FVS of $G^{\prime}$ leads to finding vertices that belong to an optimal FVS of $G$. The second type of safety ensures that if, in the original graph $G$, t...
Intuitively, Corollary 6.11 states that optimal solutions can be found efficiently when they are composed out of small pieces, each of which has a low-complexity certificate for belonging to some optimal solution.
$2^{\mathcal{O}(k^{5}z^{2})}\,\mathrm{fvs}(G)^{\mathcal{O}(z)}+\mathcal{O}(n^{5})$
As the first step of our proposed research program into parameter reduction (and thereby, search space reduction) by a preprocessing phase, we present a graph decomposition for Feedback Vertex Set which can identify vertices $S$ that belong to an optimal solution; and which therefore facilitate a reduction fro...
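To ground the terminology, the helper below (illustrative only, not part of the proposed algorithm) checks whether a vertex set S is a feedback vertex set of an undirected graph, i.e. whether deleting S leaves a forest.

    from collections import defaultdict

    def is_feedback_vertex_set(edges, vertices, S):
        # S is a feedback vertex set iff the graph induced on vertices \ S is a forest
        S = set(S)
        remaining = set(vertices) - S
        adj, m = defaultdict(set), 0
        for u, v in edges:
            if u in remaining and v in remaining:
                adj[u].add(v)
                adj[v].add(u)
                m += 1
        # a graph on n vertices with c connected components is a forest iff m == n - c
        seen, components = set(), 0
        for start in remaining:
            if start in seen:
                continue
            components += 1
            stack = [start]
            seen.add(start)
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
        return m == len(remaining) - components

    # example: a 4-cycle plus a pendant vertex; removing one cycle vertex suffices
    print(is_feedback_vertex_set([(1, 2), (2, 3), (3, 4), (4, 1), (4, 5)], [1, 2, 3, 4, 5], {1}))  # True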
B
To evaluate the quality of generated composite images, previous object placement works usually adopt the following three schemes: 1) Some works measure the similarity between real images and composite images. For example, Tan et al. [145] score the correlation between the distributions of predicted boxes and ground-tru...
2) Some works [154, 29] utilize the performance improvement of downstream tasks (e.g., object detection) to evaluate the quality of composite images, where the training sets of the downstream tasks are augmented with generated composite images. However, the evaluation cost is quite high and the improvement in downstrea...
To evaluate the quality of generated composite images, previous object placement works usually adopt the following three schemes: 1) Some works measure the similarity between real images and composite images. For example, Tan et al. [145] score the correlation between the distributions of predicted boxes and ground-tru...
In some previous works [154, 29], object placement is used as a data augmentation strategy to facilitate downstream tasks (e.g., object detection, instance segmentation). Therefore, they make use of existing object detection and instance segmentation datasets [89, 28, 21, 38]. In particular, the foregrounds are cropp...
Image composition has a broad spectrum of applications in the realm of entertainment, virtual reality, artistic creation, E-commerce [12, 170, 204] and data augmentation [26, 126, 116] for downstream tasks. For example, people can replace the backgrounds of self-portraits and make the obtained images more realistic usi...
A
In the present study, we have introduced CityNet, a multi-modal dataset specifically designed for urban computing in smart cities, which incorporates spatio-temporally aligned urban data from multiple cities and diverse tasks. To the best of our knowledge, CityNet is the first dataset of its kind, which provides a comp...
CityNet’s comprehensive and correlated data make it a valuable resource for machine learning tasks in urban computing. These tasks include spatio-temporal predictions and its multi-task variant, spatio-temporal transfer learning, and reinforcement learning. In this paper, we present extensive benchmarking results for t...
As depicted in Table V, deep learning models can generate highly accurate predictions when provided with ample data. However, the level of digitization varies significantly among cities, and it is likely that many cities may not be able to construct accurate deep learning prediction models due to a lack of data. One ef...
Transfer learning: Firstly, it can serve as an ideal testbed for transfer learning algorithms, including meta-learning [5], AutoML [23], and transfer learning on spatio-temporal graphs under homogeneous or heterogeneous representations. In the field of urban computing, it is highly probable that the knowledge required ...
To the best of our knowledge, CityNet is the first multi-modal urban dataset that aggregates and aligns sub-datasets from various tasks and cities. Using CityNet, we have provided a wide range of benchmarking results to inspire further research in areas such as spatio-temporal predictions, transfer learning, reinforcem...
C
In this study several types of prediction interval estimators for regression problems were reviewed and compared. Two main properties were taken into account: the coverage degree and the average width of the prediction intervals. It was found that without post-hoc calibration the methods derived from a probabilistic mo...
To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average ...
In this study several types of prediction interval estimators for regression problems were reviewed and compared. Two main properties were taken into account: the coverage degree and the average width of the prediction intervals. It was found that without post-hoc calibration the methods derived from a probabilistic mo...
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by natu...
In the context of classification problems, where especially the former issue plays a role [guo2017calibration], a wide variety of calibration methods is available: Platt scaling, temperature scaling, isotonic regression, etc. In general these methods take the output distribution of the trained predictor and modify it su...
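The following sketch shows one generic way to produce and evaluate calibrated prediction intervals with a training/calibration split, in the spirit of the split-conformal procedures discussed above; the model interface (scikit-learn style fit/predict), the miscoverage level alpha, and the quantile rule are illustrative assumptions rather than the exact estimators compared in the study.

    import numpy as np

    def split_conformal_intervals(model, X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
        # fit on the training part, then use calibration residuals to pick a half-width
        # targeting (1 - alpha) coverage
        model.fit(X_train, y_train)
        residuals = np.abs(y_cal - model.predict(X_cal))
        n = len(residuals)
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        q = np.quantile(residuals, level, method="higher")
        pred = model.predict(X_test)
        return pred - q, pred + q

    def coverage_and_width(lower, upper, y_test):
        coverage = float(np.mean((y_test >= lower) & (y_test <= upper)))  # coverage degree
        width = float(np.mean(upper - lower))                             # average interval width
        return coverage, width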
C
Fig. 1(a) shows that, except for Bar, the other tokens in a REMI sequence always occur consecutively in groups, in the order of Sub-bar, Pitch, Duration. We can further differentiate Bar(new) and Bar(cont), representing respectively the beginning of a new bar and a continuation of the current bar and always have one of...
Moreover, we use the Tempo token to specify the tempo information of the songs. It is placed after the Sub-bar token to indicate from which point the song is performed at that tempo. We only add a Tempo token at the beginning of the song and at the times when the tempo changes. For MIDI scores, the Velocity and Tempo tokens are simply dro...
by dropping velocity and tempo information and temporally quantising the onset time and duration of each of the notes to semiquaver resolution.
For MIDI performances, six tokens are grouped together, including Velocity and Tempo. Following the logic of the Bar token, if there is no tempo change, we simply repeat the tempo value.
Figure 1: An example of a piece of score encoded using the proposed simplified version of the (a) REMI and (b) CP representations, using seven types of tokens, Bar, Sub-bar, Pitch, Velocity, Duration, Tempo and Pad (not shown here), for piano-only MIDI performance.
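As a small illustration of the grouping described above (a sketch under assumed token spellings, not the paper's implementation), the snippet below folds a flat REMI-style token list into CP-style groups, repeating the last Tempo value when no tempo change occurs and marking later groups within a bar as Bar(cont).

    # assumed token spellings: "Bar(new)", "Bar(cont)", "Sub-bar_i", "Pitch_p",
    # "Velocity_v", "Duration_d", "Tempo_t", "Pad"
    REMI = [
        "Bar(new)", "Sub-bar_1", "Tempo_120", "Pitch_60", "Velocity_80", "Duration_4",
        "Sub-bar_5", "Pitch_64", "Velocity_72", "Duration_2",
    ]

    def to_cp_groups(tokens, pad="Pad"):
        groups, current_bar, last_tempo = [], None, pad
        i = 0
        while i < len(tokens):
            tok = tokens[i]
            if tok.startswith("Bar"):
                current_bar, i = tok, i + 1
                continue
            # a Sub-bar token opens a group; collect the fields that follow it
            group = {"Bar": current_bar or "Bar(cont)", "Sub-bar": tok,
                     "Pitch": pad, "Velocity": pad, "Duration": pad, "Tempo": last_tempo}
            i += 1
            while i < len(tokens) and not tokens[i].startswith(("Bar", "Sub-bar")):
                field = tokens[i].split("_")[0]
                group[field] = tokens[i]
                i += 1
            last_tempo = group["Tempo"]        # repeat the tempo value when it does not change
            current_bar = "Bar(cont)"          # later groups in the same bar are continuations
            groups.append(group)
        return groups

    for g in to_cp_groups(REMI):
        print(g)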
C
Note that it has a natural interpretation as a labeling problem: how to assign different labels to all vertices such that on every backbone edge the difference between labels is at least $\lambda$.
And by the definition of the backbone coloring, $R$ and $B$ have to be independent sets in $T$.
Finally, we prove that we are left with the part of the set $R^{*}\cup B^{*}$ that can be colored using the remaining colors (see Figure 1 for the assignment of colors to the sets of v...
Since all vertices in $c$ have different colors, it is true that $|Y|\leq l$. Moreover, the optimality of $c$ implies that both $R$ and $B$ are non-empty. From the fact that $c$ is a coloring of $K_{n}$ i...
This description draws a comparison, e.g., to the $L(k,1)$-labeling problem (see e.g. [10] for a survey), where the colors of any two adjacent vertices have to differ by at least $k$ and the colors of any two vertices within distance 2 have to be distinct.
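As a quick illustration of the constraint just described (an illustrative helper, not taken from the paper), the following checks whether an assignment of labels is a valid backbone coloring in this complete-graph setting: all labels must be pairwise distinct, and on every backbone edge the labels must differ by at least lambda.

    def is_backbone_coloring(labels, backbone_edges, lam):
        # labels: dict vertex -> integer colour; colours must be pairwise distinct
        # and differ by at least lam across every backbone edge
        if len(set(labels.values())) != len(labels):
            return False
        return all(abs(labels[u] - labels[v]) >= lam for u, v in backbone_edges)

    # toy example: a path backbone 1-2-3 inside K_3 with lambda = 2
    print(is_backbone_coloring({1: 1, 2: 3, 3: 5}, [(1, 2), (2, 3)], lam=2))  # True
    print(is_backbone_coloring({1: 1, 2: 2, 3: 4}, [(1, 2), (2, 3)], lam=2))  # False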
D