| context | A | B | C | D | label |
|---|---|---|---|---|---|
| $a\equiv-(n-m)/2;\quad b\equiv(D+n+m)/2;\quad c\equiv m+D/2,$ | …$+\frac{(n-m)(n-m-2)(D+n+m)(D+n+m+2)}{8(D+2m)(D+2m+2)}x^{4}+\cdots\Big].$ | …$(n-m)^{2}+(m+\frac{D}{2})(n-m)+m+\frac{D}{2}-1\big]R_{n}^{m}(x).$ $-(1+\frac{n-m}{2})(1-n-\frac{D}{2})\frac{n+m+D}{2}R$… | $\left[\left(n(n+D)-\frac{m(D-2+m)}{x^{2}}\right)\frac{R_{n}^{m}(x)}{{R_{n}^{m}}'(x)}+\frac{D-1-(D+1)x^{2}}{x}\right]$… | $R_{n}^{m}(x)=(-1)^{a}\binom{(n+m+D)/2-1}{(n-m)/2}x^{m}\Big[1-\frac{(n-m)(D+n+m)}{2(D+2m)}x^{2}$… | D |
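The truncated series in options A and D of the row above can be checked numerically. A minimal sketch, assuming the series form shown in those cells; the example values of `n`, `m`, and `D_dim` are hypothetical, and `n - m` is taken even so the binomial argument is an integer:

```python
from math import comb

def R_nm_truncated(x, n, m, D_dim):
    """First terms of R_n^m(x) as given in options A/D of the row above:
    (-1)^a * binom((n+m+D)/2 - 1, (n-m)/2) * x^m *
    [1 - (n-m)(D+n+m)/(2(D+2m)) x^2
       + (n-m)(n-m-2)(D+n+m)(D+n+m+2)/(8(D+2m)(D+2m+2)) x^4 + ...]."""
    a = -(n - m) // 2                    # a = -(n-m)/2, from the context cell
    prefactor = (-1) ** a * comb((n + m + D_dim) // 2 - 1, (n - m) // 2) * x ** m
    t2 = (n - m) * (D_dim + n + m) / (2 * (D_dim + 2 * m)) * x ** 2
    t4 = ((n - m) * (n - m - 2) * (D_dim + n + m) * (D_dim + n + m + 2)
          / (8 * (D_dim + 2 * m) * (D_dim + 2 * m + 2))) * x ** 4
    return prefactor * (1 - t2 + t4)

print(R_nm_truncated(0.3, n=4, m=2, D_dim=3))
```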
| 19: $S:=$ composition of $S$ and the MSLP of $h_{\ell}^{-1}h_{r}^{-1}$ | First we describe the preprocessing phase during which we initialize the memory of the MSLP to encode particular matrices which will be useful for expressing diagonal matrices as words independently of the given diagonal matrix. The constructed matrices can be reused for all diagonal matrices, and so further diagonal m… | The first step of the algorithm is the one-off computation of $T_{2}$ from the LGO standard generators of $\mathrm{SL}(d,q)$. The length and memory requirement of an MSLP for this step is as follows. | The following lemma shows how to compute the matrices of the preprocessing step. Recall that $\omega$ is a primitive element of $\mathbb{F}_{q}=\mathbb{F}_{p^{f}}$… | We now compute upper bounds for the length and memory quota of an MSLP for expressing an arbitrary diagonal matrix $h\in\mathrm{SL}(d,q)$ as a word in the LGO generators, i.e. the computation phase of the algorithm. | A |
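An MSLP (memory straight-line program) of the kind these cells measure can be modeled as a list of multiply instructions over a fixed set of memory slots: its length is the instruction count and its memory quota is the slot count. A minimal evaluation sketch; the `(dst, src1, src2)` encoding is an illustrative assumption, not the paper's format:

```python
import numpy as np

def evaluate_mslp(program, memory, p):
    """Evaluate an MSLP over matrices mod p: each instruction
    (dst, src1, src2) overwrites memory[dst] with memory[src1] @ memory[src2].
    Length = len(program); memory quota = len(memory)."""
    for dst, src1, src2 in program:
        memory[dst] = (memory[src1] @ memory[src2]) % p
    return memory

# Hypothetical example in SL(2, 5): slots 0-1 hold generators, slot 2 is scratch.
x = np.array([[1, 1], [0, 1]])
y = np.array([[1, 0], [1, 1]])
mem = evaluate_mslp([(2, 0, 1), (2, 2, 2)], [x, y, np.eye(2, dtype=int)], p=5)
print(mem[2])        # (x @ y)^2 mod 5
```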
| where $\Omega\subset\mathbb{R}^{d}$ with $d=2$ or $3$ for simplicity, and is an open bounded domain with polyhedral boundary $\partial\Omega$, the symmetric tensor $\mathcal{A}\in[L^{\infty}(\Omega)]^{d\times d}_{\mathrm{sym}}$… | In [MR2718268] it is shown that the number of very large eigenvalues is related to the number of connected sub-regions of $\bar{\tau}\cup\bar{\tau}'$ with large coefficien… | It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85… | As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput… | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method … | B |
| Similarly, from a $P$-stable triangle $A'B'C'$, we can also construct $\triangle ABC$… | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is called “involved” by its authors, as it contains complicated subroutines for handling many subcases. | Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), and so does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | Our experiment shows that the running time of Alg-A is roughly one eighth that of Alg-K, or one tenth that of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Our algorithm given in Section 4 (denoted Alg-One) is different from Alg-DS. First, step 1 of Alg-One sets the initial value of $(r,s,t)$ differently from the initial value $(1,2,3)$ used by Alg-DS. | C |
| $SVM_{static}$+SpikeM [17] | For analyzing the employed features, we rank them by importance using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of new… | … As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit afte… | We tested all models using 10-fold cross-validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; $TS$-$SVM$… | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesired at critical moments. We trade this off by debunking at the single-tweet le… | B |
| $\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$ | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz… | In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated | where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the $K$-class SVM: | where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen… | B |
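These fragments appear to come from the implicit-bias literature on gradient descent for separable data (as in Soudry et al.; this framing is general knowledge, not taken from the row itself). There, the iterate decomposes into a log-growing max-margin direction plus a bounded residual, which matches cells C and D:

```latex
\mathbf{w}(t) = \hat{\mathbf{w}} \log t + \boldsymbol{\rho}(t),
\qquad
\left\| \frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}
      - \frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|} \right\|
  = O\!\left( \sqrt{\frac{\log\log t}{\log t}} \right),
```

where $\hat{\mathbf{w}}$ is the hard-margin SVM solution referenced in the cells above.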
| $\mathsf{L}(x^{(i)},y^{(i)})=1\{y^{(i)}=y_{rumor}\}\log(\tilde{y}_{rumor}^{(i)})+1\{y^{(i)}=y_{news}\}\log(\tilde{y}_{news}^{(i)})$ | As observed in (madetecting; ma2015detect), rumor features are very prone to change during an event's development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in (ma2015detect). W… | In the lower part of the pipeline, we extract features from tweets and combine them with the CreditScore to construct the feature vector in a time series structure called the Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification. | The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi… | The effective cascaded model that engages both low- and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402). The model uses the time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to… | A |
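The DSTS representation referenced in cells A and B can be sketched as follows; a minimal interpretation that averages each feature over fixed time intervals and concatenates the interval vectors (the full DSTS model in (ma2015detect) also adds inter-interval differences, omitted here):

```python
import numpy as np

def dsts_vector(tweets_by_interval, extract_features, n_features):
    """Dynamic Series-Time Structure (DSTS) sketch: concatenate the mean
    feature vector of each time interval so the classifier sees how the
    features evolve over the event's lifetime."""
    parts = []
    for tweets in tweets_by_interval:          # one bucket per time interval
        if tweets:
            feats = np.array([extract_features(t) for t in tweets])
            parts.append(feats.mean(axis=0))
        else:                                  # empty interval: zero vector
            parts.append(np.zeros(n_features))
    return np.concatenate(parts)
```

A Random Forest classifier, as in cell C, would then be trained on one such vector per event.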
| We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res… | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type … | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall… | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res… | Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of factors. We address the two major factors that are assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we… | A |
| In this case, the agent must sequentially learn both the underlying dynamics ($L_{a},\Sigma_{a};\ \forall a$) and the conditional reward function's variance … | If the support of $q(\cdot)$ includes the support of the distribution of interest $p(\cdot)$, one computes the IS estimator of a test function based on the normalized weights $w^{(m)}$, | We observe noticeable (almost linear) regret increases when the dynamics of the parameters swap the identity of the optimal arm. However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters, | We now describe in detail how to use the SMC-based posterior random measure $p_{M}(\theta_{t+1,a}\mid\mathcal{H}_{1:t})$… | For the more interesting case of unknown parameters, we marginalize the parameters $L_{a}$ and $\Sigma_{a}$ of the transition distributions | B |
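Cell A describes the standard self-normalized importance sampling estimator; a minimal sketch, assuming samples drawn from a proposal $q$ and unnormalized log-densities available pointwise (the example densities are hypothetical):

```python
import numpy as np

def snis_estimate(test_fn, samples, log_p, log_q):
    """Self-normalized importance sampling: with samples x^(m) ~ q,
    weights w^(m) proportional to p(x^(m))/q(x^(m)) are normalized to sum
    to one, and the estimator is sum_m w^(m) f(x^(m))."""
    log_w = log_p(samples) - log_q(samples)
    log_w -= log_w.max()                  # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()                          # normalized weights w^(m)
    return np.sum(w * test_fn(samples))

# Hypothetical example: estimate E_p[x] for p = N(1,1) using q = N(0,2).
rng = np.random.default_rng(0)
xs = rng.normal(0.0, 2.0, size=100_000)
log_p = lambda x: -0.5 * (x - 1.0) ** 2          # unnormalized log N(1,1)
log_q = lambda x: -0.5 * (x / 2.0) ** 2          # unnormalized log N(0,2)
print(snis_estimate(lambda x: x, xs, log_p, log_q))  # close to 1.0
```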
| Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t… | The insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes are earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning. | Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | A |
The images presented during the acquisition of saliency maps in all aforementioned datasets are largely based on natural scenes. Stimuli of CAT2000 additionally fall into predefined categories such as Action, Fractal, Object, or Social. Together with the corresponding fixation patterns, they constituted the input and ... | To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met... | Various measures are used in the literature and by benchmarks to evaluate the performance of fixation models. In practice, results are typically reported for all of them to include different notions about saliency and allow a fair comparison of model predictions Kümmerer et al. (2018); Riche et al. (2013). A set of nin... | Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba... | We normalized the model output such that all values are non-negative with unit sum. The estimation of saliency maps can hence be regarded as a probability distribution prediction task as formulated by Jetley et al. (2016). To determine the difference between an estimated and a target distribution, the Kullback-Leibler ... | B |
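Cell D describes normalizing the predicted saliency map to a probability distribution and scoring it with the Kullback-Leibler divergence; a minimal sketch (the epsilon smoothing is an assumption to avoid division by zero, in the spirit of common saliency-benchmark implementations):

```python
import numpy as np

def saliency_kld(pred, target, eps=1e-7):
    """KL divergence between predicted and ground-truth saliency maps.
    Both maps are made non-negative with unit sum, so they can be
    compared as probability distributions."""
    p = np.clip(pred, 0, None);   p = p / (p.sum() + eps)
    t = np.clip(target, 0, None); t = t / (t.sum() + eps)
    return float(np.sum(t * np.log(eps + t / (p + eps))))

print(saliency_kld(np.random.rand(32, 32), np.random.rand(32, 32)))
```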
| In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). Next, in Section 3, we discuss the concept of the locality number with some examples and some word-combinatorial considerations. The purpose of this section is to develop a better under… | One of the main results of this section is a reduction from the problem of computing the locality number of a word $\alpha$ to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr… | The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local… | Our strongest positive result about the approximation of the locality number will be derived from the reduction mentioned above (see Section 5.2). However, we shall first investigate in Section 5.1 the approximation performance of several obvious greedy strategies to compute the locality number (with “greedy strategie… | As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss the consequences of this connection in more detail in Section 6.… | B |
| Montoya et al. [206] trained a 30-layer ResNet to generate 3D cerebral angiograms from contrast-enhanced images using three tissue types (vasculature, bone and soft tissue). They created the annotations using thresholding and connected components in 3D space, having a combined dataset of 13790 images. | The method of Lessman et al. [195] for coronary calcium scoring utilizes three independently trained CNNs to estimate a bounding box around the heart, in which connected components above a Hounsfield unit threshold are considered candidates for CACs. Classification of extracted voxels was performed by feeding two-dimensional p… | In [198] the authors created a method to identify and quantify CAC without a need for coronary artery extraction. The heart bounding-box detection method employs three CNNs, where each detects the heart in the axial, sagittal and coronal plane. | Every ROI was identified using a combination of three CNNs, each analyzing one orthogonal image plane. While a single CNN predicted the presence of a specific ROI in the given plane, the combination of their results provided a 3D bounding box around it. | Similar work has been done by the same authors in [233], in which they use three CNNs to detect a bounding box around the LV and perform LV voxel classification within the bounding box. Commandeur et al. [208] used a combination of two deep networks to quantify epicardial and thoracic adipose tissue in CT from 250 patients w… | D |
| A stochastic model can be used to deal with the limited horizon of past observed frames as well as sprite occlusion and flickering, which results in higher-quality predictions. Inspired by Babaeizadeh et al. (2017a), we tried a variational autoencoder (Kingma & Welling, 2014) to model the stochasticity of the environment. … | Figure 2: Architecture of the proposed stochastic model with discrete latent variables. The input to the model is four stacked frames (as well as the action selected by the agent) while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-… | Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster… | We noticed two major issues with the above model. First, the weight of the KL divergence loss term is game dependent, which is not practical if one wants to deal with a broad portfolio of Atari games. Second, this weight is usually a very small number in the range of $[10^{-3},10^{-5}]$… | As visualized in Figure 2, the proposed stochastic model with discrete latent variables discretizes the latent values into bits (zeros and ones) while training an auxiliary LSTM-based (Hochreiter & Schmidhuber, 1997) recurrent network to predict these bits autoregressively. At inference time, the latent bits will be gen… | C |
| Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine, achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data, making them scalable and genera… | For the purposes of this paper and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para… | This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for … | Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine, achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data, making them scalable and genera… | For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals. | B |
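Cell A defines a Signal2Image (S2I) module as anything placed between the raw 1D signal and a 2D "base model". A minimal non-trainable S2I sketch; the spectrogram choice and parameter values are assumptions for illustration, and the 178-sample segment length comes from cell D:

```python
import torch
import torch.nn as nn

class SpectrogramS2I(nn.Module):
    """Non-trainable Signal2Image module: turn a 1D EEG segment into a
    2D time-frequency image via the short-time Fourier transform."""
    def __init__(self, n_fft=32, hop_length=4):
        super().__init__()
        self.n_fft, self.hop_length = n_fft, hop_length

    def forward(self, x):                      # x: (batch, 178) raw signal
        spec = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop_length,
                          return_complex=True)
        img = spec.abs().log1p()               # (batch, freq, time) magnitudes
        return img.unsqueeze(1)                # add channel dim for a 2D CNN

x = torch.randn(8, 178)
print(SpectrogramS2I()(x).shape)               # e.g. torch.Size([8, 1, 17, 45])
```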
This paper presents a novel methodology for achieving autonomous locomotion mode transitions in quadruped wheel/track-legged hybrid robots, taking into account both internal states of the robot and external environmental conditions. Our emphasis is on the “articulated wheel/track robot” [15], where the wheels or tracks... |
The implementation of the energy criterion strategy has proven effective in facilitating autonomous locomotion mode transitions for the Cricket robot when negotiating steps of varying heights. Compared to step negotiation purely in rolling locomotion mode, the proposed strategy demonstrated significant enhancements in... |
The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we ... | Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... | In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal... | B |
| All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat… | Furthermore, we show an interesting difference between the standard advice model and the model we introduce: in the former, an advice bit can be at least as powerful as a random bit, since an advice bit can effectively simulate any efficient choice of a random bit. In contrast, we show that in our model, there are situ… | the former can simulate the efficient choice of the latter, and thus provide a “no-loss” derandomization. However, in the setting of untrusted advice, the interplay between advice and randomization is much more intricate. This is because random bits, unlike advice bits, are assumed to be trusted. | A second issue we address in this section is related to the comparison of random bits and advice bits as resources. More specifically, in the standard model in which advice is always trustworthy, an advice bit can be at least as powerful as a random bit since | We show, using online bidding as an example, that there are situations in which a deterministic algorithm with $L+1$ advice bits is Pareto-incomparable to a randomized algorithm with 1 random bit and $L$ advice bits. In particular, we focus on the bounded online bidding problem, | A |
| As said earlier, each chunk contained 10% of the subject's writing history, a value that for some subjects could be just a single post while for others hundreds or even thousands of them. Furthermore, the use of chunks assumes we know all the subject's posts in advance, which is not the case in real-life scenarios, in whic… | Since the dataset was highly unbalanced, we optimized the penalty parameter $C$ $(C>0)$ and the class weight parameter $w$ $(w\geq 1)$ for SVM and LOGREG; for MNB only the class weight $w$ was varied, while for $K$NN the $K$ param… | We could make use of this “dynamic information” to apply certain policies to decide when to classify subjects as depressed. For example, one such policy would be “classify a subject as positive when the accumulated positive value becomes greater than the negative one”, in which case, note that our subject would be… | from the first chunk on, the cumulative confidence value of one of the classes (negative in this case) stays above the other and always grows faster. In this example, correctly, it was not possible to classify this subject as depressed after reading all its chunks. | Given that we do not have previous results available from other participants under this new scenario, for comparison we had to perform experiments not only with SS3 but also with other standard classifiers: Logistic Regression (LOGREG), Support Vector Machine (SVM), Multinomial Naive Bayes (MNB) and $K$-Neare… | D |
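Cell B states a decision policy over cumulative per-class confidence values; a minimal sketch of exactly that policy (the streaming interface of one `(pos, neg)` score pair per chunk is an assumption):

```python
def early_depression_flag(chunk_scores):
    """Policy from cell B: accumulate per-chunk positive and negative
    confidence values and flag the subject as soon as the cumulative
    positive mass exceeds the cumulative negative mass."""
    pos, neg = 0.0, 0.0
    for i, (p, n) in enumerate(chunk_scores):   # one (pos, neg) pair per chunk
        pos, neg = pos + p, neg + n
        if pos > neg:
            return True, i                      # flagged after chunk i
    return False, None                          # never flagged

print(early_depression_flag([(0.2, 0.5), (0.4, 0.2), (0.6, 0.1)]))  # (True, 2)
```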
| We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distributions. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use a Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021). We ado… | We run DMSGD, DGC (w/ mfm), DGC (w/o mfm) and GMC respectively to solve the optimization problem $\min_{\mathbf{w}\in\mathbb{R}^{d}}F(\mathbf{w})$… | In the experiments of (Lin et al., 2018), DGC gets far better performance on both accuracy and communication cost than quantization methods. Hence, we do not compare with quantization methods in this paper. We don't use the warm-up strategy in the experiments. The momentum coefficient $\beta$ is set to $0.9$… | Since the server is typically the busiest node in a parameter server architecture, we consider the communication cost on the server in our experiments. For DMSGD, which doesn't use any communication compression techniques, the communication cost on the server includes receiving vectors from the $K$ workers and se… | We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distributions. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use a Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021). We ado… | B |
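DGC-style communication compression, as compared in this row, keeps only the largest-magnitude gradient coordinates and accumulates the rest locally. A minimal sketch; the error-feedback residual is a common pattern in this family, not necessarily the exact DGC/GMC formulation:

```python
import numpy as np

def topk_compress(grad, residual, k):
    """Top-k gradient sparsification with local error feedback: send only
    the k largest-magnitude entries of (gradient + residual); keep the
    rest in `residual` so nothing is lost, just delayed."""
    acc = grad + residual
    idx = np.argpartition(np.abs(acc), -k)[-k:]    # indices of top-k entries
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]                         # what gets communicated
    new_residual = acc - sparse                    # error kept for next round
    return sparse, new_residual

g = np.array([0.1, -2.0, 0.3, 1.5, -0.05])
sparse, res = topk_compress(g, np.zeros_like(g), k=2)
print(sparse)   # only the -2.0 and 1.5 coordinates survive
```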
| These results suggest that reconstruction error by itself is not a sufficient metric for decomposing data into interpretable components. Trying to solely achieve lower reconstruction error (as is the case for the Identity activation function) produces noisy learned kernels, while using the combined measure of reconstru… | During validation we selected the models with the kernel size that achieved the best $\bar{\varphi}$ out of all epochs. During testing we feed the test data into the selected model and calculate $CR^{-1}$… | The three separate clusters depicted in Fig. 3 and the aggregated density plot in Fig. 4 between the Identity activation function, the ReLU and the rest show the effect of a sparser activation function on the representation. | These results suggest that reconstruction error by itself is not a sufficient metric for decomposing data into interpretable components. Trying to solely achieve lower reconstruction error (as is the case for the Identity activation function) produces noisy learned kernels, while using the combined measure of reconstru… | Comparing the differences in $\bar{\varphi}$ between the Identity, the ReLU and the remaining sparse activation functions in Fig. 4, we notice that the latter produce a minimum region in which we observe interpretable kernels. | D |
| In post-disaster scenarios, a great many UAVs are required to support users [4]. Therefore, we introduce aggregative game theory into such scenarios and permit UAVs to learn in constrained strategy sets. Because the aggregative game can integrate the impact of all other UAVs on one UAV, it reduces the complexity o… | In the literature, most works search for a PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, this algorithm has limitations. In BLLA, each UAV can calculate and predict its utility for any $s_{i}\in S_{i}$… | A new algorithm which can learn from previous experiences is required, and an algorithm with a faster learning speed is more desirable. Existing algorithms learn by prediction: a UAV knows the current strategies with their corresponding payoff, and it can randomly select another strategy and calc… | The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) was proposed in [14]. However, in this algorithm, only one UAV is allowed to change its strategy in one iteration based on the current game state, and then another UAV ch… | The essence of PBLLA is selecting an alternative UAV randomly in each iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of the two strategies and $\tau$. A UAV prefers to select the power and altitude which provide higher utility. Neve… | B |
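Cell D says the switching probability is determined by the utilities of the two strategies and the temperature $\tau$; in binary log-linear learning that probability is typically the two-strategy softmax (stated here in its standard form, an assumption about the exact variant used):

```python
import math, random

def log_linear_choice(u_current, u_candidate, tau):
    """Binary log-linear rule: pick the candidate strategy with probability
    exp(u_candidate/tau) / (exp(u_current/tau) + exp(u_candidate/tau)).
    Small tau -> nearly greedy; large tau -> nearly uniform."""
    m = max(u_current, u_candidate) / tau          # stabilize the exponentials
    p_candidate = math.exp(u_candidate / tau - m) / (
        math.exp(u_current / tau - m) + math.exp(u_candidate / tau - m))
    return random.random() < p_candidate, p_candidate

print(log_linear_choice(1.0, 1.5, tau=0.1)[1])     # about 0.993: higher utility wins
```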
| $\frac{1}{\mu_{0}}\Big[\widehat{dV}^{T}\ast\Big\{\bar{f}\,\Big(\widehat{\overline{\nabla}}\cdot\big(\widehat{\mathbf{B}}_{\theta}\,\widehat{\boldsymbol{\omega}}\big)\Big)\Big\}\Big]$… | $\frac{1}{\mu_{0}}\Big[\widehat{dV}^{T}\ast\Big\{\bar{f}\,\Big(\widehat{\overline{\nabla}}\cdot\big(\widehat{\mathbf{B}}_{\theta}\,\widehat{\boldsymbol{\omega}}\big)\Big)\Big\}\Big]$… | $\widehat{\mathbf{P}}=\big(\widehat{\mathbf{B}}_{\theta}\,\widehat{\boldsymbol{\omega}}\big)$ | as $\widehat{\overline{\nabla}}\cdot\widehat{\mathbf{P}}$, where $\widehat{\mathbf{P}}=\overline{\widehat{\nabla}}\,\overline{U}$… | …$+\Big[\frac{1}{\mu_{0}}\,\overline{dV}^{T}\ast\Big\{\bar{\omega}\,\Big(\widehat{\overline{\nabla}}\cdot\big(\widehat{\mathbf{B}}_{\theta}\,\widehat{\boldsymbol{\omega}}\big)\Big)\Big\}\Big]$ | B |
| When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$… | Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly | $f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\ v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$ | Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality) by $h_{A}$… | When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$… | A |
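The piecewise comparability function in cell B translates directly into code; a minimal sketch where the abstract truth values $a$ and $b$ are represented simply as strings (an illustrative choice, not the framework's actual lattice type):

```python
def comparability(u, v, a="a", b="b", null=None):
    """Piecewise comparability function from cell B:
    1 if u = v != null, `a` if both non-null but different,
    `b` if both are null, 0 otherwise (exactly one side is null)."""
    if u == v and u is not null:
        return 1
    if u is not null and v is not null:   # both present but different
        return a
    if u is null and v is null:
        return b
    return 0

print(comparability(3, 3), comparability(3, 4),
      comparability(None, None), comparability(3, None))  # -> 1 a b 0
```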
| The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation between the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect of variance before applying Dropout (DQN) and aft… | Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein… | The Gridworld problem (Figure 4) is a common RL benchmark. Its relatively small state space permits the Experience Replay (ER) buffer to store all possible state-action pairs. Moreover, this setup allows for the precise computation of the optimal action value function. | Q-learning is among the most widely used reinforcement learning (RL) algorithms [4]. It is based on an incremental dynamic programming technique because of the step-by-step look-up table representation in which it determines the optimal policy [22]. The Q-learning algorithm employs a table to estimate the optimal action v… | where $s_{t+1}$ is the resulting state after applying action $a$ in state $s$, $r$ is the immediate reward observed for action $a$ at state $s$, $\gamma$ is the discount factor, and $\alpha$ is the learning rate. | B |
| Deep CNNs are heavily reliant on big data to avoid overfitting and class imbalance issues, and therefore this section focuses on data augmentation, a data-space solution to the problem of limited data. Apart from standard online image augmentation methods such as geometric transformations (LeCun et al., 1998; Simard et… | Deep CNNs are heavily reliant on big data to avoid overfitting and class imbalance issues, and therefore this section focuses on data augmentation, a data-space solution to the problem of limited data. Apart from standard online image augmentation methods such as geometric transformations (LeCun et al., 1998; Simard et… | Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving almost similar performance to full supervision for segmentation of cardiac images. Afshari et al. (2019) used a fully convol… | Neff et al. (2018) trained a Wasserstein GAN with gradient penalty (Gulrajani et al., 2017) to generate labeled image data in the form of image-segmentation mask pairs. They evaluated their approach on a dataset of chest X-ray images and the Cityscapes dataset, and found that the WGAN-GP was able to generate images wit… | Chartsias et al. (2017) used a conditional GAN to generate cardiac MR images from CT images. They showed that utilizing the synthetic data increased the segmentation accuracy and that using only the synthetic data led to only a marginal decrease in the segmentation accuracy. Similarly, Zhang et al. (2018c) proposed a G… | C |
| Problems such as graph classification and graph regression are characterized by samples of graphs that, generally, have a variable number of vertices. In order to apply MP and pooling operations when training a GNN on mini-batches, one solution is to perform zero-padding and obtain all graphs with $N_{\text{max}}$… | Problems such as graph classification and graph regression are characterized by samples of graphs that, generally, have a variable number of vertices. In order to apply MP and pooling operations when training a GNN on mini-batches, one solution is to perform zero-padding and obtain all graphs with $N_{\text{max}}$… | To train the GNN on mini-batches of graphs with a variable number of nodes, we consider the disjoint union of the graphs in each mini-batch and train the GNN on the combined Laplacians and graph signals. See the supplementary material for an illustration. | However, this solution is particularly inefficient in terms of memory cost, especially when there are many graphs with fewer than $N_{\text{max}}$ vertices. A more efficient solution is to build the disjoint union of the graphs in each mini-batch and trai… | However, this solution is particularly inefficient in terms of memory cost, especially when there are many graphs with fewer than $N_{\text{max}}$ vertices. A more efficient solution is to build the disjoint union of the graphs in each mini-batch and trai… | C |
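The disjoint-union batching described in cells B-D amounts to stacking node features and placing each graph's adjacency on a block diagonal; a minimal sketch using scipy's sparse `block_diag`:

```python
import numpy as np
from scipy.sparse import block_diag, csr_matrix

def batch_graphs(adjs, feats):
    """Batch variable-size graphs as one disjoint union: block-diagonal
    adjacency plus vertically stacked node features. No zero-padding, so
    memory scales with the total number of nodes, not n_graphs * N_max."""
    big_adj = block_diag([csr_matrix(a) for a in adjs])
    big_x = np.vstack(feats)
    # batch vector: which graph each node belongs to (useful for pooling)
    batch = np.repeat(np.arange(len(adjs)), [a.shape[0] for a in adjs])
    return big_adj, big_x, batch

a1 = np.array([[0, 1], [1, 0]]);                   x1 = np.ones((2, 3))
a2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]);  x2 = np.zeros((3, 3))
adj, x, batch = batch_graphs([a1, a2], [x1, x2])
print(adj.shape, x.shape, batch)                   # (5, 5) (5, 3) [0 0 1 1 1]
```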
| For generating examples with varying confidence, i.e., where the predictions of the individual decision trees diverge, we select a subset of $n_{\text{sub}}$ decision trees $\operatorname{RF}_{\text{sub}}\subseteq\operatorname{RF}$… | For generating examples with varying confidence, i.e., where the predictions of the individual decision trees diverge, we select a subset of $n_{\text{sub}}$ decision trees $\operatorname{RF}_{\text{sub}}\subseteq\operatorname{RF}$… | In the next step, we extend the method to generate data from random forests. Random forests consist of $n_{T}$ decision trees $\operatorname{RF}=\{T_{1},\dots,T_{n_{T}}\}$… | The following analyses are shown exemplarily on the Soybean dataset. This dataset has 35 features and 19 classes. First, we analyze the generated data with a fixed number of decision trees, i.e., the number of sampled decision trees in $\operatorname{RF}_{\text{sub}}$… | All decision trees in $\operatorname{RF}_{\text{sub}}$ are processed in random order to generate a data sample. For each decision tree, the presented method modifies the data sample based on the target class. | D |
| Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understanding of policy opt… | for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al.… | A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient… | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p… | Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;… | B |
| Sparse attention mechanisms and approximations have been proposed to address this issue and improve the efficiency of transformers for longer sequences. We refer to the work of Tay et al. (2022), which provides an overview of various transformer-based architectures that focus on efficiency, reduced memory footprint and … | The user specifies the loss $\mathcal{L}$ as a computation graph and the gradient $\nabla_{\mathbf{W}}\mathcal{L}$ is calculated automatically by the framework using the backpropagation algorithm (Rumelhart et al., 1986). | Let $f(w)$ be some non-differentiable operation within the computation graph of $\mathcal{L}$ such that the partial derivative $\partial\mathcal{L}/\partial w$ is not defined. The STE then approximates the gradient $\partial\mathcal{L}/\partial w$… | In the forward pass, the solid red line is followed, which passes the two piecewise constant functions $Q$ and $\operatorname{sign}$ whose gradient is zero almost everywhere (red boxes). During backpropagation, the dashed green line is followed, which avoids these piecewise constant functions an… | Many recently developed methods for resource efficiency in DNNs incorporate components in the computation graph of the loss function $\mathcal{L}$ that are non-differentiable or whose gradient is zero almost everywhere, such as piecewise constant quantizers. | D |
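Cells B-D describe the straight-through estimator (STE); a minimal PyTorch sketch of the standard detach trick, using a sign quantizer as a hypothetical stand-in for the $\operatorname{sign}$ function mentioned in cell C:

```python
import torch

def sign_ste(x):
    """Straight-through estimator: the forward pass uses sign(x), whose
    true gradient is zero almost everywhere; the backward pass pretends
    the op was the identity, letting gradients flow through unchanged."""
    return x + (torch.sign(x) - x).detach()

x = torch.tensor([-0.7, 0.2, 1.3], requires_grad=True)
y = sign_ste(x).sum()
y.backward()
print(x.grad)        # tensor([1., 1., 1.]) - identity gradient, not zero
```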
| Let $M$ be an $n$-dimensional metric manifold. Then, note that we have $\mathrm{FillRad}_{n}(M,G,[M])=\mathrm{FillRad}(M)$… | A priori, one can define the generalized filling radius for any metric space $X$. However, we believe that the context of ANR metric spaces is the right level of generalization for our purposes because of the following proposition, analogous to Proposition 1. | The goal of this section is to provide some partial results regarding the structure of $\mathrm{barc}^{\mathrm{VR}}_{\ast}(\cdot)$ for non-smooth spaces; see Figure 12. In ord… | Let $(X,E)$ be a metric pair where $X$ is a compact ANR metric space. For any integer $k\geq 1$, any abelian group $G$, and any $\omega\in\mathrm{H}_{k}(X;G)$… | In this section, we recall the notions of spread and filling radius, as well as their relationship. In particular, we prove a number of statements about the filling radius of a closed connected manifold. Moreover, we consider a generalization of the filling radius and also define a strong notion of filling radius whic… | A |
Linear DR methods, such as PCA, are easier to understand and to explain, since the remaining axes are linear combinations of the original dimensions, which establishes a direct relationship between the low-dimensional and the high-dimensional data set.
When the specific constraints of being simple and easily explainabl... | Although non-linear DR methods have also been around for quite some time (e.g., Sammon Mapping [4]), they have gained popularity in the past few years—due to increasingly better performance—with techniques such as Isomap [5], LLE [6], or LAMP [7]; a few comparative review papers on general DR exist already, see the sur... |
Other than the ones discussed so far, some interactive tools have been designed with either specific DR methods in mind, such as SIRIUS [49], and FocusChanger [50], or for specific domains, such as Cytosplore [11]. t-SNE can also be used to explore and judge different clustering partitions of the same data set, as in ... | Although our main design goal was to support the investigation of t-SNE projections, most of our views and interaction techniques are not strictly confined to the t-SNE algorithm. For example, the Dimension Correlation view could, in theory, be applied to any projection generated by any other algorithm. Its motivation,... | Linear DR methods, such as PCA, are easier to understand and to explain, since the remaining axes are linear combinations of the original dimensions, which establishes a direct relationship between the low-dimensional and the high-dimensional data set.
When the specific constraints of being simple and easily explainabl... | A |
| In this context, complexity is not unusual in Nature: a plethora of complex systems, processes and behaviors have shown a surprising ability to efficiently address intricate optimization tasks. The clearest example can be found in the different animal species, which have developed over generations very special… | Disregarding their source of inspiration, there is clear evidence of the increasing popularity and notoriety gained by nature- and bio-inspired optimization algorithms in the last two decades. This momentum finds its reason in the capability of these algorithms to learn, adapt, and provide good solutions to complex pro… | In this context, complexity is not unusual in Nature: a plethora of complex systems, processes and behaviors have shown a surprising ability to efficiently address intricate optimization tasks. The clearest example can be found in the different animal species, which have developed over generations very special… | Going deeper into the creation of Machine Learning (ML) and Deep Learning (DL) models: although most algorithms have been developed in recent years, the impact of EAs, a classical family of algorithms, has risen in the last few years. Their use in ML has been widely studied both for the design of models [615] and also… | In the last update of this report, which is herein released 4 years after its original version, we note that there has been an evolution within the nature- and bio-inspired optimization field. There is an excessive use of the biological approach as opposed to the real problem-solving approach to tackle real and complex… | A |
| As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which is not the case for $k$-means. Therefore,… | Network embedding is a fundamental task for graph-type data such as recommendation systems, social networks, etc. The goal is to map the nodes of a given graph into latent features (namely, an embedding) such that the learned embedding can be utilized for node classification, node clustering, and link prediction. | Roughly speaking, network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node, while the latter learns to distinguish directly whether an edge exists between two nodes. In recent y… | As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which is not the case for $k$-means. Therefore,… | To apply graph convolution to unsupervised learning, GAE was proposed [20]. GAE first transforms each node into a latent representation (i.e., an embedding) via a GCN, and then aims to reconstruct some part of the input. The GAEs proposed in [20, 29, 22] intend to reconstruct the adjacency via a decoder, while the GAEs developed in [21… | A |
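Cell D describes the GAE pattern: a GCN encoder produces node embeddings $Z$ and a decoder reconstructs the adjacency. A minimal sketch of the inner-product decoder side, with the encoder stubbed out as an assumption:

```python
import torch

def gae_reconstruct(z):
    """Inner-product GAE decoder: reconstructed adjacency
    A_hat = sigmoid(Z Z^T), trained to match the input adjacency."""
    return torch.sigmoid(z @ z.t())

z = torch.randn(4, 16)            # embeddings from a (stubbed) GCN encoder
a_hat = gae_reconstruct(z)
print(a_hat.shape)                # torch.Size([4, 4]), entries in (0, 1)
```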
| A range of studies analysed network traces for ingress filtering using IP address characteristics (Moore et al., 2006; Barford et al., 2006; Chen et al., 2008; Czyz et al., 2014; Dainotti et al., 2013), or by inspecting on-path network equipment reaction to unwanted traffic (Yao et al., 2014). In addition to a limited… | Domain-scan and IPv4-scan both show that the number of spoofable ASes grows with the overall number of ASes in the Internet, see Figure 1. Furthermore, there is a correlation between the fraction of scanned domains and ASes. Essentially, the more domains are scanned, the more ASes are covered, and more spoofable ASes a… | Identifying DNS resolvers. The main challenge here is to locate the DNS resolvers within a domain/network and to trigger a DNS request to our Name servers. We use the email service in the target networks (retrieved via an MX-type request in the target domain) to find the DNS resolvers. We send an email to the target domain's… | SMap first collects the dataset of services. Our dataset is constructed as follows: we periodically download the entire IPv4 scan from the Sonar Project (son, [n. d.]). We use the scan results on UDP port 53 as input for Name servers and DNS resolvers, scan data on TCP port 25 for Mail servers and scan results on TCP port … | The SMap architecture consists of two parts: a dataset scan and an ingress filtering scan. The dataset scan collects the popular services using two methods: a domain-based scan and an IPv4-based scan. In the IPv4 scan, to locate the services, SMap probes every IP, checking for open ports that correspond to the services that we need; for i… | D |
| While context did introduce more parameters to the model (7,575 parameters without context versus 14,315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer … | One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regio… | The estimation of context by learned temporal patterns should be most effective when the environment results in recurring or cyclical patterns, such as in cyclical variations of temperature and humidity and regular patterns of human behavior generating interferents. In such cases, the recurrent pathway can identify use… | The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design… | This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The … | B |
| Let $t^{+}_{i}\in\mathcal{T}^{+}$, and let $q_{1}$ be a poi… | Not shown is the property that the neighbour of $q_{2}$ is a point in $P[1,\mathrm{st}(i)-5k^{2}]$. | Case (ii): $p\in P[\mathrm{st}^{+}(i-1)-5k^{2}+1,\ \mathrm{st}(i)-5k^{2}]$ | $A^{(3)}[i,q_{1},q_{2}]:=$ the length of the shortest path from $q_{1}$ to $q_{2}$ that visits all points in $P[1,\mathrm{st}(i)]$, such that the neighbour of $q_{2}$ is a point in $P[1,\mathrm{st}(i)-5k^{2}]$. | $A^{(3)}[i,q_{1},q_{2}]:=$ the length of the shortest path from $q_{1}$ to $q_{2}$ that visits all points in $P[1,\mathrm{st}(i)]$, such that the neighbour of $q_{2}$ is a point in $P[1,\mathrm{st}(i)-5k^{2}]$. | C |
| Let $S$ be a (completely) self-similar semigroup and let $T$ be a finite or free semigroup. Then $S\star T$ is (completely) self-similar. If, furthermore, $S$ is a (complete) automaton semigroup, then so is $S\star T$. | While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question of whether these semigroup classes are closed under free product. It is possible that there … | from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups (footnote 1: "Note that the c…") | By Corollaries 10 and 11, we have to look into idempotent-free automaton semigroups without length functions in order to find a pair of self-similar (or automaton) semigroups not satisfying the hypothesis of Theorem 6 (or 8), which would be required in order to either relax the hypothesis even further (possibly with a … | The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing … | C |
It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model were actually visually grounded, then we would expect it to improve performance on both the train and test sets. We do not observe such behavior in ...
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende... |
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea... | While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented... | Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization Grand and Belinkov (2019); Ramakrishnan et al. (2018) or to re-scale the loss based on the difficulty o... | C |
We found that two LDA topics contained vocabulary corresponding with the OPP-115 category First Party Collection/Use, one dealing with purpose and information type collected and the other dealing with collection method. Two LDA topics corresponded with the OPP-115 category Third Party Sharing and Collection, one detai... |
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ... |
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da... |
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)... |
We found that two LDA topics contained vocabulary corresponding with the OPP-115 category First Party Collection/Use, one dealing with purpose and information type collected and the other dealing with collection method. Two LDA topics corresponded with the OPP-115 category Third Party Sharing and Collection, one detai... | A |
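The LDA topic analysis referenced throughout this row can be reproduced in spirit with off-the-shelf tooling. A minimal sketch, assuming scikit-learn and a toy corpus standing in for privacy-policy segments (not the PrivaSeer data):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder documents standing in for policy segments.
docs = [
    "we collect your email address for account purposes",
    "third parties may receive aggregated usage data",
    "cookies are used to track browsing behavior",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")  # inspect vocabulary per topic, as in the row
```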
Limitations. Efficiency and scalability were the major concerns raised by all the experts. The inherent computational burden of stacking multiple models remains, as such complex ensemble learning methods need substantial resources. Also, the use of VA between levels makes this even worse.
We believe that, with ... | Considering all that, E3 noted that our system could be useful in solving competition problems, e.g., on Kaggle, and for her team to run tests before applying specific models to their huge data sets.
Progressive VA workflows [53] could also be useful for improving the scalability of our approach for larger data sets. | Workflow. E1, E2, and E3 agreed that the workflow of StackGenVis made sense.
They all suggested that data wrangling could happen before the algorithms’ exploration, but also that it is usual to first train a few algorithms and then, based on their predictions, wrangle the data. | Thus, it is considered an iterative process: the expert might start with the algorithms’ exploration and move to the data wrangling, or vice versa. “The former approach is even more suitable for your VA system, because you use the accuracy of the base ML models as feedback/guidance to the expert in order to understand ... |
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model leads us to stable design goals and analytical tasks that were realized by StackGenVis. With the c... | A |
We thus have 3 cases, depending on the value of the tuple
$(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))$ | $p(v,[013])=p(v,[313])=p(v,[113])=1$.
Similarly, when $f=[112]$, | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | $\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],
[112],[003],[113]\}.$ | C |
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation.
Although in the early training stage the performance improves, benefiting from the pre-trained general language model, if the languag... | In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | A |
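For the metrics named in this row, a minimal sketch of computing BLEU and perplexity (Python; nltk's BLEU implementation, with a placeholder cross-entropy value standing in for what a language model would report):

```python
import math
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["i", "like", "playing", "the", "guitar"]
candidate = ["i", "like", "playing", "guitar"]

# BLEU between one reference and a generated response (token lists).
bleu = sentence_bleu([reference], candidate,
                     smoothing_function=SmoothingFunction().method1)

# Perplexity is exp of the mean token-level cross-entropy (in nats).
mean_nll = 2.3  # placeholder value
ppl = math.exp(mean_nll)
print(f"BLEU={bleu:.3f}, PPL={ppl:.1f}")
```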
A conceptual frame structure is designed which contains two types of time slots. One is the exchanging slot (e-slot) and the other is the tracking slot (t-slot). Let us first focus on the e-slot. It is assumed that UAVs exchange MSI every $T$ t-slots, i.e., in an e-slot, to save resources for payload transmissi... | The GP-based MSI prediction is proposed to solve the problem in [31].
Specifically, the r-UAV/t-UAV's historical MSI is first exchanged with the t-UAV/r-UAV over a lower-frequency band and then the t-UAV will predict the future MSI of the r-UAV based on the historical MSI by using the GP-based MSI prediction model. | A conceptual frame structure is designed which contains two types of time slots. One is the exchanging slot (e-slot) and the other is the tracking slot (t-slot). Let us first focus on the e-slot. It is assumed that UAVs exchange MSI every $T$ t-slots, i.e., in an e-slot, to save resources for payload transmissi...
Moreover, the data block of MSI is set as $B_{\text{MSI}}=n_{\text{MSI}}\times T\times B_{\text{MSI}}$... | The tracking error of beam angles has a negative influence on the beam gain obtained by CCA. The proposed tracking error bounding algorithm uses the position/attitude prediction error of the GP-based MSI prediction to obtain the beam angle tracking error, wherein the geometry relationship between UAVs and the Monte-Ca... | D |
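A hedged sketch of GP-based trajectory prediction in the spirit of the MSI prediction above (scikit-learn's GP regressor; the 1-D toy signal, time indexing, and kernel choice are assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Historical 1-D position samples of the other UAV at past t-slot indices.
t_hist = np.arange(10).reshape(-1, 1)
x_hist = np.sin(0.3 * t_hist).ravel() + 0.01 * np.random.randn(10)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(t_hist, x_hist)

# Predict MSI over the next T t-slots; the predictive std can feed a
# tracking-error bounding step like the one described in the row.
t_future = np.arange(10, 15).reshape(-1, 1)
mean, std = gp.predict(t_future, return_std=True)
```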
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument... | This will be bootstrapped to the multi-color case in later sections. Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on
the left must be connected, via the unique edge relation, to every node on the ri... | We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument... | To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | The requirement that $\bar{M}|\bar{N}$ is extra big enough ensures that we have enough edges to perform the edge swapping.
This completes the proof for case 2 when the assumptions (a1) and (a2) hold. | A |
$\leq 1/2\cdot(1-\gamma)^{-1}\cdot\bar{D}^{2}\cdot T^{-1}+C_{*}\cdot(1-\gamma)^{-1}\cdot\alpha^{-1},$ ...
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe... | In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
|
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et... |
We first introduce the assumptions for our analysis. In §4.1, we establish the global optimality and convergence of the PDE solution $\rho_{t}$ in (3.4). In §4.2, we further invoke Proposition 3.1 to establish the global optimality and convergence of ... | B |
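For readers unfamiliar with the TD update underlying this row, a minimal tabular TD(0) sketch on a toy chain MDP (illustrative only; the cited analyses concern neural and mean-field settings, not a lookup table):

```python
import numpy as np

# Tabular TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)
rng = np.random.default_rng(0)

for _ in range(5000):
    s = rng.integers(n_states)
    s_next = min(s + 1, n_states - 1)          # deterministic "right" move
    r = 1.0 if s_next == n_states - 1 else 0.0
    td_error = r + gamma * V[s_next] - V[s]    # the Bellman residual
    V[s] += alpha * td_error
print(V)
```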
Considering that the layer stacks of the 6-layer Transformer are not that deep and vanilla RNNs can play a similar role as LSTMs, is it possible to train the model with a depth-wise RNN rather than the depth-wise LSTM? We first study using different approaches (Transformer, the depth-wise RNN and the depth-wise LSTM) f... |
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transform... | Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t... | Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de... | Considering that the layer stacks of the 6-layer Transformer are not that deep and vanilla RNNs can play a similar role as LSTMs, is it possible to train the model with a depth-wise RNN rather than the depth-wise LSTM? We first study using different approaches (Transformer, the depth-wise RNN and the depth-wise LSTM) f... | A |
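A structural sketch of the depth-wise LSTM idea from this row (PyTorch; note the stock encoder layer used here keeps its internal residual connections, so this only illustrates how an LSTMCell can carry hidden/cell state vertically through the layer stack, not the paper's exact layer design):

```python
import torch
import torch.nn as nn

d_model, depth, seq, batch = 512, 6, 10, 2
layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model, nhead=8) for _ in range(depth)]
)
depth_lstm = nn.LSTMCell(d_model, d_model)  # state flows across *depth*

x = torch.randn(seq, batch, d_model)
h = torch.zeros(seq * batch, d_model)
c = torch.zeros(seq * batch, d_model)
out = x
for layer in layers:
    out = layer(out)                                   # sub-layer computations
    h, c = depth_lstm(out.reshape(-1, d_model), (h, c))
    out = h.reshape(seq, batch, d_model)               # feeds the next layer
y = out
```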
Consider the logical product space $Z$ of the family $(X_{i})_{i\in I}$, thus using the signature $\upsigma=\{\upepsilon_{i}\mid i\in I\}$...
| Consider the logical product space $Z$ of the family $(X_{i})_{i\in I}$, thus using the signature $\upsigma=\{\upepsilon_{i}\mid i\in I\}$... | $\psi_{\supseteq C_{n}}\triangleq\exists x_{0},\ldots,x_{n-1}.\ \bigwedge_{i<j}\neg(x_{i}=x_{j})\wedge$... | $\psi_{\supseteq P_{n}}\triangleq\exists x_{0},\ldots,x_{n-1}.\ \bigwedge_{i<j}\neg(x_{i}=x_{j})\wedge\bigwedge_{0\leq i<n-1}E(x_{i},x_{i+1})$ | $\psi_{i}\triangleq\exists x.\exists y.\ \varepsilon_{i}(x)\wedge\varepsilon_{i}(y)\wedge\neg(x=y)$ for $i\in I$ and $\theta_{i,j}\triangleq\exists x.\exists y$... | D |
Figure 1: Method Comparisons. (a) Previous learning methods, (b) Our proposed approach. We aim to transfer the traditional calibration objective into a learning-friendly representation. Previous methods roughly feed the whole distorted image into their learning models and directly estimate the implicit and heterogeneo... |
1. The proposed ordinal distortion is a learning-friendly representation for neural networks, which is explicit and homogeneous compared with the implicit and heterogeneous distortion parameters. Thus, our learning model gains sufficient distortion perception of features and shows faster convergence. Moreover, this re... | The proposed learning representation offers three unique advantages. First, the ordinal distortion is directly perceivable from a distorted image, and it solves a more straightforward estimation problem than the implicit metric regression. As we can observe, the farther the pixel is away from the principal point, the l... | (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed o... |
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimate... | B |
First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being $B/8$. If $B/8\geq 128$, we will use the gradient accumulation [28]
with the batch size being 128. ... | We can observe that for almost all batch sizes, the methods that adopt normalized gradients, including LARS, CLARS, and SNGM, achieve better performance than others.
Compared to LARS and CLARS, SNGM achieves better test accuracy for different batch sizes. | Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b... | We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework.
Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings. | Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD.
In large-batch training, SNGM achieves better training loss and test accuracy than the fou... | A |
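A minimal sketch of the gradient-accumulation plus gradient-normalization pattern discussed in this row (PyTorch; the per-parameter normalization stands in for the normalized-gradient idea of LARS/SNGM-style methods, not the exact SNGM update rule):

```python
import torch

# A global batch is split into micro-batches of size 128; gradients are
# accumulated before one optimizer step, then normalized block-wise.
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = torch.nn.MSELoss()

micro_batches = [(torch.randn(128, 10), torch.randn(128, 1)) for _ in range(4)]

opt.zero_grad()
for x, y in micro_batches:                    # gradient accumulation
    loss = loss_fn(model(x), y) / len(micro_batches)
    loss.backward()

for p in model.parameters():                  # block-wise normalization
    if p.grad is not None:
        p.grad /= p.grad.norm() + 1e-12
opt.step()
```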
Given a newly arriving scenario $A$, we can set $(H_{A},\pi^{A})\leftarrow$ GreedyCluster$(A,R,-R)$,... | We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a... | There is a polynomial-time 3-approximation for homogeneous RW-MatSup. There is a 3-approximation algorithm for RW-MuSup, with runtime $\operatorname{poly}(n,m,\Lambda)$.
| 5-approximation for homogeneous 2S-MuSup-Poly, with $|\mathcal{S}|\leq 2^{m}$ and runtime $\operatorname{poly}(n,m,\Lambda)$.
|
In this section we tackle the simplest problem setting, designing an efficiently-generalizable 3-approximation algorithm for homogeneous 2S-Sup-Poly. To begin, we are given a list of scenarios $Q$ together with their probabilities $p_{A}$,... | B |
This together with the convergence of $\{\|X(k,\omega)-\mathbf{1}_{N}\otimes z^{*}(\omega)\|,k\geq 0\}$... |
From the definition of $\Gamma_{2}$, we know that $\Gamma_{2}\subseteq\Gamma_{1}$. Then, similar to the proof of Theorem 2 in [25], we get ... | $\Gamma_{1}=\big\{\{\mathcal{G}(k),k\geq 0\}\,\big|\,E\big[\mathcal{A}_{\mathcal{G}(k)}\big|\mathcal{F}(k-1)\big]\succeq O_{N\times N}$ a.s., $\mathcal{G}(k|k-1)$ is balanced a.s., $k\geq 0\big\}$.
At first, we suppose $\{\mathcal{G}(k),k\geq 0\}$ is a Markov chain with countable state space. For this case, Condition (b.1) of Theorem III.1 becomes more intuitive and Condition (b.2) is weakened. | The proof of Theorem III.2 is similar to that of Theorem III.1 and is omitted here. For details, see Appendix A. The only difference is that
by the independence between $\mathcal{L}_{\mathcal{G}(i)}$ and $\mathcal{L}_{\mathcal{G}(j)}$... | C |
Additionally, differing from traditional principles that directly confine the values in microdata, we propose a $\delta$-probability principle to control random output tables so as to limit the probability of any QI value being used to re-identify a target person. For instance, the random output tables in Fig... | However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, so that the equivalence groups only preserve... | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i... | Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ...
For instance, suppose that we add another QI attribute of gender as shown in Figure 4. The mutual cover strategy first divides the records into groups in which the records in the same group cover for each other by perturbing their QI values. Then, the mutual cover strategy calculates a random output table on each QI a... | B |
Table 2: PointRend's step-by-step performance on our own validation set (split from the original training set). “MP Train” means more-points training and “MP Test” means more-points testing. “P6 Feature” indicates adding P6 to the default P2-P5 levels of FPN for both the coarse prediction head and the fine-grained point head. “... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
| As shown in Figure 2, we compare HTC, SOLOv2 and PointRend by visualizing their predictions. It can be seen that PointRend generates much finer and smoother segmentation boundaries than HTC and SOLOv2, and it also handles overlapped instances gracefully (see top-left corner in Figure 2). Meanwhile, PointRend succeeds in disti... | C |
$I(f)<1,\ \ \text{and}\ \ H(|\hat{f}|^{2})>\frac{n}{n+1}\log n.$ | For the significance of this conjecture we refer to the original paper [FK], and to Kalai's blog [K] (embedded in Tao's blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | ($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base 2. Note that if $f$ has $L_{2}$ norm 1 then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$... | C |
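The quantities $I(f)$ and $H(|\hat{f}|^{2})$ in this row can be computed by brute force for small $n$. A sketch (Python; the example function is arbitrary, chosen only so the numbers are easy to check by hand):

```python
import numpy as np
from itertools import product

# Fourier coefficients of f: {-1,1}^n -> R via f̂(S) = E_x[f(x)·χ_S(x)],
# then total influence I(f) = Σ_S |S|·f̂(S)² and spectral entropy H(|f̂|²).
n = 3
f = lambda x: x[0] * x[1]                      # a degree-2 parity
points = list(product([-1, 1], repeat=n))

coeffs = {}
for S in product([0, 1], repeat=n):            # subsets as indicator tuples
    chi = lambda x: np.prod([x[i] for i in range(n) if S[i]])
    coeffs[S] = np.mean([f(x) * chi(x) for x in points])

influence = sum(sum(S) * c**2 for S, c in coeffs.items())
weights = [c**2 for c in coeffs.values()]
entropy = -sum(p * np.log2(p) for p in weights if p > 0)  # 0·log 0 := 0
print(influence, entropy)                      # 2.0 and 0.0 for this f
```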
Corollary 1 shows that if local variations are known, we can achieve near-optimal dependency on the total variation $B_{\bm{\theta}},B_{\bm{\mu}}$ and time horizo... | Motivated by the empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhov... | The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and... | The definition of total variation $B$ is related to the misspecification error defined by Jin et al. (2020). One can apply the Cauchy-Schwarz inequality to show that our total variation bound implies that the misspecification in Eq. (4) of Jin et al. is also bounded (but not vice versa). However, the regret analys... | Reinforcement learning (RL) is a core control problem in which an agent sequentially interacts with an unknown environment to maximize its cumulative reward (Sutton & Barto, 2018). RL finds enormous applications in real-time bidding in advertisement auctions (Cai et al., 2017), autonomous driving (Shalev-Shwartz et al.... | C |
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio... |
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... | 75 of the 104 responses fulfilled the criterion of having respondents who were currently based in Singapore. This set was extracted for further analysis and will be henceforth referred to as ‘SG-75’. The details on the participant demographics of SG-75 are shown in Table 1. From SG-75, two more subsets were formed via ... |
The survey was written in English and made available to anyone with the hyperlink. Participation was fully voluntary. For dissemination, various channels were employed including a mailing list of students from a local Singapore university, an informal Telegram supergroup joined by students, alumni, and faculty of the ... | D |
Table 4 presents the results of conventional entity alignment. decentRL achieves state-of-the-art performance, surpassing all others in Hits@1 and MRR. AliNet [39], a hybrid method combining GCN and GAT, performs better than the methods solely based on GAT or GCN on many metrics. Nonetheless, across most metrics and da... | In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct compr... |
Although GCN and GAT are generally regarded as inductive models for graph representation learning, our analysis in previous sections suggests their limited applicability to relational KG embedding. In further validation of this, we compare the performance of decentRL with AliNet and GAT on datasets containing new enti... | Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg... | GNN-based methods [13, 37, 38, 39, 40, 41, 42] introduce relation-specific composition operations to combine neighbors and their corresponding relations before performing neighborhood aggregation. They usually leverage existing GNN models, such as GCN and GAT [43, 44], to aggregate an entity's neighbors. It is worth no... | B |
To further investigate the capability of our method in coping with highly stochastic environments, we conduct experiments on games where both the agent and its opponent are controlled by self-supervised exploratory policies. The stochasticity of the transition dynamics is much higher for both sides of the game since th... |
The complete procedure of self-supervised exploration with VDM is summarized in Algorithm 1. In each episode, the agent interacts with the environment to collect the transition $s_{t},a_{t},s_{t+1}$... | To further investigate the capability of our method in coping with highly stochastic environments, we conduct experiments on games where both the agent and its opponent are controlled by self-supervised exploratory policies. The stochasticity of the transition dynamics is much higher for both sides of the game since th... | We observe that our method performs the best in most of the games, in both the sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are ...
We illustrate the results in Fig. 9. We observe that the episode length becomes longer over training time with the intrinsic reward estimated from VDM, as anticipated. We observe that our method reaches the episode length of $10^{4}$ with the minimum iterati... | D |
Several improvements have been presented, including Floater–Hormann interpolation [16, 38], which reach better approximation quality than splines.
However, all of them share the above weaknesses (A, B, C), as we demonstrate in the numerical experiments of Section 8. | that these approaches are prevented from approximating a generic class of functions, but are limited to well-behaved bounded analytic or holomorphic functions
occurring, for instance, as solutions of elliptic PDEs. In these scenarios, reasonable uniform approximations of the function $f$ can be | Therefore, alternative interpolation schemes with better numerical conditioning and lower computational complexity are desirable.
While previous approaches to addressing this problem relied on tensorial interpolation schemes [33, 48, 59, 75], we here propose a different approach. | Though approximations of lower accuracy might be reached faster than by polynomial interpolation, this makes these approaches incapable of answering Question 1 when higher-precision
approximations are required. The multivariate polynomial interpolation method presented here reaches this goal. | reached by sparse samples that avoid the curse of dimensionality in high dimensions $m\in\mathbb{N}$, $m\leq 16$.
However, when asking such approaches to deliver approximations to machine precision, or to leave the tight class of well-behaved functions, | C |
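The conditioning issues discussed in this row are easy to reproduce: node choice dominates interpolation quality. A sketch contrasting equispaced and Chebyshev nodes on Runge's function (NumPy; the degree is chosen arbitrarily):

```python
import numpy as np
from numpy.polynomial import chebyshev as C, polynomial as P

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)        # Runge's example
deg = 14
xs = np.linspace(-1, 1, 1001)

x_eq = np.linspace(-1, 1, deg + 1)              # equispaced nodes
x_ch = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))

c_eq = P.polyfit(x_eq, f(x_eq), deg)            # monomial basis
c_ch = C.chebfit(x_ch, f(x_ch), deg)            # Chebyshev basis and nodes

print(np.abs(P.polyval(xs, c_eq) - f(xs)).max(),   # grows with degree
      np.abs(C.chebval(xs, c_ch) - f(xs)).max())   # stays small
```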
Classical tests (see, e.g., [12]) mainly follow the parametric approaches, which are designed based on prior information about the distributions under each class.
Examples in classical tests include the Hotelling’s two-sample test [13] and the Student’s t-test [14]. | In this paper, we consider non-parametric two-sample testing, in which no prior information about the unknown distribution is available.
Two-sample tests for non-parametric settings are usually constructed based on some metrics quantifying the distance between two distributions. | Given collected samples $x^{n}$ and $y^{m}$, a non-parametric two-sample test is usually constructed based on IPMs, which quantify the discrepancy between the associated em... | Classical tests (see, e.g., [12]) mainly follow the parametric approaches, which are designed based on prior information about the distributions under each class.
Examples in classical tests include the Hotelling’s two-sample test [13] and the Student’s t-test [14]. | Several data-efficient two-sample tests [20, 21, 22] are constructed based on Maximum Mean Discrepancy (MMD), which quantifies the distance between two distributions by introducing test functions in a Reproducing Kernel Hilbert Space (RKHS).
However, it is pointed out in [23] that when the bandwidth is chosen based on ... | A |
VAE-type DGMs use amortized variational inference to learn an approximate posterior $q_{\phi}(H|x)$ by maximizing an evidence lower bound (ELBO) on the log-marginal likelihood of the data under the mod... | Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$... | Amortization of the inference is achieved by parameterising the variational posterior with another deep neural network (called the encoder or the inference network) that outputs the variational posterior parameters as a function of $X$. Thus, after jointly training the encoder and decoder, a VAE model can perf... | Deep generative models (DGMs) such as variational autoencoders (VAEs) [dayan1995helmholtz, vae, rezende2014stochastic] and generative adversarial networks (GANs) [gan] have enjoyed great success at modeling high dimensional data such as natural images. As the name suggests, DGMs leverage deep learning to model a data g...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs\footnote{In this exposition we use unsupervised-trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise...} | B |
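A minimal VAE/ELBO sketch matching the amortized-inference description above (PyTorch; a standard Gaussian VAE with illustrative sizes, not the two-part model of this row):

```python
import torch
import torch.nn as nn

# Negative ELBO = reconstruction loss + KL(q_phi(h|x) || N(0, I)),
# minimized jointly over encoder (inference network) and decoder.
class VAE(nn.Module):
    def __init__(self, d_x=784, d_h=20):
        super().__init__()
        self.enc = nn.Linear(d_x, 2 * d_h)    # outputs mu and log-variance
        self.dec = nn.Linear(d_h, d_x)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        h = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        recon = self.dec(h)
        rec = ((recon - x) ** 2).sum(-1)                       # -log p(x|h) up to const.
        kl = 0.5 * (mu**2 + logvar.exp() - 1 - logvar).sum(-1)
        return (rec + kl).mean()

loss = VAE()(torch.randn(8, 784))
loss.backward()
```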
The graph described in Fig. 4 is an implementation of an XOR gate combining NAND and OR, expressed in 33 vertices and 46 mains. Graphs are expressed with red and blue numbers for the cases where the main line has no direction (a main line that can be passed in both directions) and where the main line has a direction (the ma...
The structural computer used an inverted signal pair to implement the reversal of a signal (the NOT operation) as a structural transformation, i.e. a twist, and four pins were used for the AND and OR operations, as series and parallel connections were required. However, one can consider whether the four-pin designs are the...
Fig. 3 shows the AND and OR gates consisting of 3-pin based logic; Fig. 3 also shows the connection status of the output pin when A=0, B=1 is entered into the AND gate. When A=0 and B=1, i.e., A is connected as 0 and B is connected as 1, output C is connected only to the following two pins, and this is the correct result for the AND operation.
DFS (Depth First Search) verifies that the output is possible for the actual pin connection state. As described above, the output is determined by the 3-pin input, so we will enter 1 with the A2 and A1 connections and the B2 and B1 connections (the reverse is treated as 0), and the corresponding output will be recognize... | We will look at the inputs through 18 test cases to see if the circuit is acceptable. Next, it verifies with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab... | C |
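A minimal DFS reachability check in the spirit of the verification step above (Python; the pin graph and names are hypothetical, not the paper's data):

```python
def dfs_reachable(graph: dict, start: str) -> set:
    """Iterative depth-first search returning all vertices reachable
    from `start` in an undirected adjacency-list graph."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(graph.get(v, ()))
    return seen

# Hypothetical connection state after applying inputs A=0, B=1.
pins = {"A1": ["n1"], "n1": ["A1", "C1"], "C1": ["n1"], "B1": []}
print("C1" in dfs_reachable(pins, "A1"))  # True: output pin is connected
```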
Given a group $G$ of permutations over a finite set, the (group) representation represents the group action in terms of invertible matrices over a finite-dimensional vector space, and the group operation is replaced by matrix multiplication. Such representations are imperative in studying abstract groups as it... | The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over $\mathbb{F}$...
A finite group, $G_{F}$, can be generated from $F_{i}$ using composition as the group operation. In this section, we devise a procedure to compute the linear representation of the gro... | Given a group $G$ of permutations over a finite set, the (group) representation represents the group action in terms of invertible matrices over a finite-dimensional vector space, and the group operation is replaced by matrix multiplication. Such representations are imperative in studying abstract groups as it...
A finite field, by definition, is a finite set, and the set of all permutation polynomials over the finite field forms a group under composition. Given a finite subset of such permutations, we can compute a group generated by this set. In this paper, we propose a representation of such a group using the concept of lin... | D |
The NNFS algorithm performed surprisingly well in our simulations given its simple and greedy nature, showing performance very similar to that of the adaptive lasso. However, in both gene expression data sets it was among the two worst performing methods, both in terms of accuracy and view selection stability. If one ... | In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of vi... |
The false discovery rate in view selection for each of the meta-learners can be observed in Figure 4. Note that the FDR is particularly sensitive to variability since its denominator is the number of selected views, which itself is a variable quantity. In particular, when the number of selected views is small, the add... | For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu et al., 2012).
An exam... | Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expressi... | D |
For LOF, iForest, FastABOD, OCSVM and SOD, we use the implementations in the dbscan [80] R package, IsolationForest [81] R package, abodOutlier [82] R package, e1071 [83] R package and HighDimOut [84] R package respectively. MBOM, ALSO and COMBN are implemented by ourselves based on the bnlearn [85] R package. All the... |
Thirty-two real-world datasets are used for the evaluation. These datasets cover diverse domains, e.g., spam detection, molecular bioactivity detection, and image object recognition, as shown in Table 4. The AID362, Backdoor, MNIST and caltech16 datasets are obtained from the Kaggle data repository [72]. The Pima, WBC... |
The overall running time of the two DepAD algorithms and the nine benchmark methods are presented in Table 11. In general, the two DepAD algorithms have high efficiency. In the nine benchmark methods, FastABOD, ALSO, SOD and COMBN could not finish in four hours on some datasets. | In the experiments, if a method is unable to produce a result within four hours, we stop the experiments. The stopped methods and data sets include 1) FastABOD and SOD on datasets Backdoor and Census; 2) ALSO on datasets Backdoor, CalTech16, Census, Secom, MNIST, CalTech28, Fashion and Ads; 3) COMBN on datasets Backdoo... | The running times on the 32 datasets and their average values are shown in Table 10. Comparing the five methods, FBED is the most efficient, with an average running time of 2.7 seconds, followed by MI at 23 seconds, HITON-PC at 26 seconds, DC at 133 seconds, and IEPC being the most time-consuming at 1538 seconds. Notab... | C |
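Python analogues of two of the benchmarked detectors, for readers without the R stack (scikit-learn's LOF and IsolationForest, not the dbscan/IsolationForest R packages the row's experiments used):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 5)),
               rng.normal(6, 1, size=(5, 5))])   # 5 injected outliers

lof_scores = -LocalOutlierFactor(n_neighbors=20).fit(X).negative_outlier_factor_
if_scores = -IsolationForest(random_state=0).fit(X).score_samples(X)

# Higher score = more anomalous under both conventions used here.
print(np.argsort(lof_scores)[-5:], np.argsort(if_scores)[-5:])
```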
$\|\theta-\theta_{*}\|_{\mathbf{H}_{t}(\theta_{*})}=\tilde{\mathrm{O}}(\sqrt{d\log(t)})$... | In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL, for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
In this section we compare the empirical performance of our proposed algorithm CB-MNL with the previous state of the art in the MNL contextual bandit literature: UCB-MNL [Oh & Iyengar, 2021] and TS-MNL [Oh & Iyengar, 2019] on artificial data. We focus on performance comparison for varying values of the parameter $\kappa$... | In this work, we proposed an optimistic algorithm for learning under the MNL contextual bandit framework. Using techniques from Faury et al. [2020], we developed an improved technical analysis to deal with the non-linear nature of the MNL reward function. As a result, the leading term in our regret bound does not suffe...
Comparison with Oh & Iyengar [2019]: The Thompson-Sampling-based approach is inherently different from our optimism-in-the-face-of-uncertainty (OFU) style algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence-set-based analysis along the lines of Filippi et al. [2010] but has a m... | B |
3) VSGN shows obvious improvement on short actions over other concurrent methods, and also achieves new state-of-the-art overall performance. On THUMOS-14, VSGN reaches 52.4% mAP@0.5, compared to the previous best score of 40.4% under the same features. On ActivityNet-v1.3, VSGN reaches an average mAP of 35.07%, compared to t...
In this paper, to tackle the challenging problem of large action scale variation in the temporal action localization (TAL) problem, we target short actions and propose a multi-level cross-scale solution called video self-stitching graph network (VSGN). It contains a video self-stitching (VSS) component that generates ... |
Inspired by FPN [22], which computes multi-scale features with different levels, we propose a cross-scale graph pyramid network (xGPN). It progressively aggregates features from cross scales as well as from the same scale at multiple network levels via a hybrid module of a temporal branch and a graph branch. As shown ... | Fig. 2 demonstrates the overall architecture of our proposed Video self-Stitching Graph Network (VSGN). It is comprised of three components: video self-stitching (VSS), cross-scale graph pyramid network (xGPN), scoring and localization (SoL), which will be elaborated in Sec. 3.2, 3.3, and 3.4, respectively. Before delv... |
Figure 2: Architecture of the proposed video self-stitching graph network (VSGN). It takes a video sequence and generates detected actions with start/end time as well as their categories. It has three components: video self-stitching (VSS), cross-scale graph pyramid network (xGPN), and scoring and localization (SoL)... | D |
Li et al. [LCW∗18] found that once the ML expert has acquired all the results from an execution stage, he/she should analyze them from various perspectives and decide if the previously explored models’ performance matches his/her needs. If not, then more stages should be involved in the process until his/her expectations... | Thus, groups of points represent clusters of models that perform similarly according to all the metrics.
The plot uses the Viridis colormap [LH18] to show the average performance of each model according to all selected metrics. This view provides the user with an overview of the hyperparameter space and ability to look... | (2) project the models into a hyperparameter embedding according to the previous overall performance using DR methods; (3) compare the mean performance of all algorithms and models vs. a selection of models for every metric; and (4) analyze the predictive results for each instance and for all models against a selection... |
At this phase, we want to confirm precisely the cluster affiliation and the relationship with the overall performance (here, the average of 4 validation metrics) for all the models. To achieve that, the beeswarm plots in Figure 2(b.1 and b.2) arrange the models according to the distinct algorithms in the x-axis and so... | Automatic ML commonly delivers to users a model with the highest performance according to a single metric (e.g., accuracy), but fails to take into account other characteristics of models [PNKC21].
In practice, users want to consider several model features and validation metrics for selecting a model (or models). | D |
Note that $\{i,j\}\in I_{n_{p},n}$ if and only if $e[\ell]<0$ for all $\ell\in\{i,j\}$... |
In Proposition 2, it is proven that the dynamics of the error vector in Algorithm 1 are identical to the dynamics of the value vector in Theorem 1. Condition 1 is used in Theorem 1 to prove that the value vector exponentially converges to $\bm{0}$. In Proposition 3, it is proven that Algorithm 1 satisfies Condi... | Let the error vector $e(k)$ be the difference between the probability distribution at time $k$ and the desired steady-state distribution, $e(k)=x(k)-v$.
The DSMC algorithm is designed to ensure that the dyna... | The main idea of the probabilistic swarm guidance is to drive the propagation of density distribution vector $x(k)$, instead of individual agent positions $\{r_{l}(k)\}_{l=1}^{N}$... | Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transi... | A |
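The error dynamics $e(k)=x(k)-v$ above can be simulated directly. A toy sketch (NumPy; the Markov matrix is an arbitrary doubly stochastic example, not one synthesized by the DSMC algorithm):

```python
import numpy as np

# Density propagation x(k+1) = M^T x(k) toward a stationary distribution v;
# the error e(k) = x(k) - v contracts for this ergodic chain.
M = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])        # row-stochastic, doubly stochastic

x = np.array([1.0, 0.0, 0.0])          # all agents start in bin 0
v = np.array([1/3, 1/3, 1/3])          # stationary distribution of this M
for k in range(50):
    x = M.T @ x
print(x, np.linalg.norm(x - v))        # e(k) shrinks toward 0
```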
Cycle consistency is a natural property and constitutes a necessary condition for the pairwise matchings to correspond to the ground truth. As such, cycle consistency can serve as an additional constraint in order to better restrict the space of solutions in multi-matching problems.
| In this case, cycle consistency holds implicitly without having to enforce the constraints (2) in the problem formulation, and without having to develop a customised solution strategy.
The union of all distinct points across all $k$ shapes are called universe points, and we use $d$ to denote the total...
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both ... | Most pipelines for partial matching include the full reference shape to resolve some of the complexity. Although our optimisation does not need any information about the complete geometry, we use a partiality-adjusted version of ZoomOut to obtain the shape-to-universe initialisation for IsoMuSh.
In this case, the optim... | The main idea of the shape-to-universe representation is that each point in each of the $k$ shapes is brought into correspondence with exactly one of the universe points. Then, all points across the $k$ shapes that are in correspondence with the same universe point are said to be in correspondence wit... | A |
Path graphs and directed path graphs are classes of graphs between interval graphs and chordal graphs. A graph is a chordal graph if it does not contain a hole as an induced subgraph, where a hole is a chordless cycle of length at least four. Gavril [8] proves that a graph is chordal if and only if it is the intersect... |
Path graphs and directed path graphs are classes of graphs between interval graphs and chordal graphs. A graph is a chordal graph if it does not contain a hole as an induced subgraph, where a hole is a chordless cycle of length at least four. Gavril [8] proves that a graph is chordal if and only if it is the intersect... | A graph is an interval graph if it is the intersection graph of a family of intervals on the real line; or, equivalently, the intersection graph of a family of subpaths of a path. Interval graphs are characterized by Lekkerkerker and Boland [15] as chordal graphs with no asteroidal triples, where an asteroidal triple i... |
We now introduce a last class of intersection graphs. A rooted path graph is the intersection graph of directed paths in a rooted tree. Rooted path graphs can be recognized in linear time by using the algorithm by Dietz [7]. All inclusions between the introduced classes of graphs are summarized in the following: | We denote by $G=(V,E)$ a finite connected undirected graph, where $V$, $|V|=n$, is a set of vertices and $E$, $|E|=m$, is a collection of pairs of vertices called edges. Let $P$ be a fin... | B |
The development of the Internet not only changes people’s lifestyles but also produces and records a large number of network structure data. Therefore, networks are often associated with our life, such as friendship networks and social networks, and they are also essential in science, such as biological networks (2002F... |
The ego-networks dataset contains more than 1000 ego-networks from Facebook, Twitter, and GooglePlus. In an ego-network, all the nodes are friends of one central user and the friendship groups or circles (depending on the platform) set by this user can be used as ground truth communities. The SNAP ego-networks are ope... | The development of the Internet not only changes people’s lifestyles but also produces and records a large number of network structure data. Therefore, networks are often associated with our life, such as friendship networks and social networks, and they are also essential in science, such as biological networks (2002F... |
The stochastic blockmodel (SBM) (SBM, ) is one of the most used models for community detection, in which all nodes in the same community are assumed to have equal expected degrees. Some recent developments of SBM can be found in (abbe2017community, ) and references therein. Since in empirical network data sets, the deg... | In this paper, we consider the degree-corrected mixed membership (DCMM) model (mixedSCORE, ). For a mixed membership network, nodes could belong to multiple clusters. To measure how likely each node belongs to a certain community, DCMM assumes that node $i$ belongs to cluster $V^{(k)}$... | C |
Our Contribution. Our contribution is twofold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation.
In each iteration, variati... | To showcase these advantages, we consider an instantiation of variational transport where the objective functional $F$ satisfies the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) with respect to the Wasserstein distance and the variational problem associated with $F$ is solved via kernel methods.
I... | Here the statistical error is incurred in estimating the Wasserstein gradient by solving the dual maximization problem using functions in a reproducing kernel Hilbert space (RKHS) with finite data, which converges sublinearly to zero as the number of particles goes to infinity.
Therefore, in this scenario, variational ... | we prove that variational transport constructs a sequence of probability distributions that converges linearly to the global minimizer of the objective functional up to a statistical error due to estimating the Wasserstein gradient with finite particles. Moreover, such a statistical error converges to zero as the numbe... | Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to certain statistical... | D |
Real. The traffic flows of Hangzhou (China), Jinan (China) and New York (USA) are from the public datasets (https://traffic-signal-control.github.io/), which are processed from multiple sources. The traffic flow of Shenzhen (China) was generated by ourselves, based on the traffic trajectories collected from 80 red-...
The method is evaluated in two modes: (1) Common Testing Mode: the model trained on one scenario with one traffic flow configuration is tested on the same scenario with the same configuration. It is used to validate the ability of the RL algorithm to find the optimal policy. |
Mixedl. The mixedl is a mixed low traffic flow with a total flow of 2550 in one hour, to simulate a light peak. The arrival rate changes every 10 minutes, which is used to simulate the uneven traffic flow distribution in the real world; the details of the vehicle arrival rate and cumulative traffic flow are shown in F...
Mixedh. The mixedh is a mixed high traffic flow with a total flow of 4770 in one hour, in order to simulate a heavy peak. The difference from the mixedl setting is that the arrival rate of vehicles during 1200-1800s increased from 0.33 vehicles/s to 4.0 vehicles/s. The data statistics are listed in Tab. II. | We run the experiments under three traffic flow configurations: real traffic flow, mixed low traffic flow and mixed high traffic flow. The real traffic flow is real-world hourly statistical data with slight variance in vehicle arrival rates, as shown in Tab. I. Since the real-world strategies tend to break down during ... | B |
$\mathbf{f}(\mathbf{x}_{j})+J(\mathbf{x}_{j})\,(\mathbf{x}-\mathbf{x}_{j})=\mathbf{0}.$ | $\ldots\|\hat{\mathbf{x}}-\check{\mathbf{x}}\|_{2}+\zeta\,\|\mathbf{f}(\hat{\mathbf{x}},\hat{\mathbf{y}})\|_{2}\big)$, where $\mathbf{f}_{\mathbf{x}}(\hat{\mathbf{x}},\hat{\mathbf{y}})_{\text{rank-}r}^{\dagger}$... | $\mathbf{f}(\mathbf{x}_{j})=J(\hat{\mathbf{x}})(\mathbf{x}_{j}-\hat{\mathbf{x}})+O\big(\|\mathbf{x}_{j}-\hat{\mathbf{x}}\|_{2}^{2}\big)$ | Since $J(\hat{\mathbf{x}})$ is rank-deficient at the nonisolated zero $\hat{\mathbf{x}}$ and $\mathbf{x}_{j}$ is near $\hat{\mathbf{x}}$... | $\big(I-J(\hat{\mathbf{x}})J(\hat{\mathbf{x}})^{\dagger}\big)\big(\mathbf{f}(\mathbf{x}_{j})-\mathbf{f}(\hat{\mathbf{x}})\big)=O\big(\|\mathbf{x}_{j}-\hat{\mathbf{x}}\|_{2}^{2}\big)$ | C
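To make the linearization in this row concrete: a minimal numerical sketch (not the paper's actual routine; the toy system is invented) of one Newton step that solves the linearized system in the least-squares sense, since an ordinary solve breaks down when the Jacobian is rank-deficient near a nonisolated zero.

```python
import numpy as np

def newton_step(f, J, x_j):
    """One Newton step: solve f(x_j) + J(x_j)(x - x_j) = 0 for x.

    Near a nonisolated zero the Jacobian is rank-deficient, so instead
    of np.linalg.solve we take the minimum-norm least-squares solution
    (equivalent to applying the Moore-Penrose pseudoinverse)."""
    dx, *_ = np.linalg.lstsq(J(x_j), -f(x_j), rcond=None)
    return x_j + dx

# Toy system whose zero set {x : x0^2 + x1^2 = 1} is nonisolated (a circle).
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, 0.0])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [0.0, 0.0]])
x = np.array([2.0, 0.5])
for _ in range(8):
    x = newton_step(f, J, x)
print(x, f(x))  # x lands on the circle, f(x) is ~0
```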
$=(1+\epsilon)\big((1+(2+5\epsilon)\eta k+\epsilon)\lambda+c_{A}(1-\lambda)\big)\,|\textsc{Opt}(\sigma)|,$ | In order to analyze the performance of an online algorithm, we will rely on the well-established framework of competitive analysis, which provides strict, theoretical performance guarantees against worst-case scenarios. In fact, as stated in (?), bin packing has served as “an early proving ground for this type of analy... | In this setting, the objective is to minimize the expected loss, defined as the difference between the number of bins opened by the algorithm, and the total size of all items normalized by the bin capacity.
Ideally, one aims for a loss that is as small as $o(n)$, where $n$ is the nu... | These algorithms are variants of the classic Harmonic algorithm (?), which places items of approximately equal sizes, according to a harmonic sequence, in the same bin.
The currently best algorithm is the Advanced Harmonic (AH) algorithm, which has a competitive ratio of 1.57829 (?), whereas the best-known lower bound ... | To obtain the best theoretical performance, we can choose $A$ as the algorithm with the best known competitive ratio, that is, the Advanced Harmonic algorithm (?). However, as discussed in Section 2, such algorithms belong to a class that is tailored to worst-case competitive analysis, and do not tend to perform well... | D
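For illustration, a simplified sketch of the harmonic classification idea behind this family of algorithms (invented helper; the real Harmonic algorithm packs the smallest size class more cleverly, and Advanced Harmonic adds many further refinements): an item of size $s$ with $i=\lfloor 1/s\rfloor$ goes to a bin reserved for class $i$, which holds up to $i$ such items, so every bin is feasible by construction since $i\cdot s\leq 1$.

```python
def harmonic_pack(items, k=5):
    """Simplified Harmonic(k)-style online packing: an item of size s is
    assigned class i = floor(1/s), capped at k; a class-i bin holds up to
    i items, which always fit because i * s <= 1."""
    open_bins = {}   # class -> (current open bin, item capacity)
    bins = []        # every bin ever opened
    for s in items:
        i = min(int(1.0 // s), k)
        bin_, cap = open_bins.get(i, (None, 0))
        if bin_ is None or len(bin_) == cap:
            bin_, cap = [], i          # open a fresh bin for this class
            bins.append(bin_)
            open_bins[i] = (bin_, cap)
        bin_.append(s)
    return bins

print(len(harmonic_pack([0.6, 0.3, 0.3, 0.45, 0.15, 0.2])))  # bins opened
```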
Table 2: Shape auto-encoding on the ShapeNet dataset. The best results are highlighted in bold. CD is multiplied by $10^{4}$, and EMD is multiplied by $10^{2}$. (HC) denotes the HyperCloud autoencod...
For the point cloud representation, the crucial step is to define a reconstruction loss that can be used in the autoencoding framework. In the literature, two distance measures are successively applied: Earth Mover's (Wasserstein) Distance (Rubner et al., 2000), and Chamfer pseudo-distance (Tran, 2013). | We examine the generative capabilities of the provided LoCondA model compared to the existing reference approaches. In this experiment, we follow the evaluation protocol provided in (Yang et al., 2019). We use standard measures for this task like Jensen-Shannon Divergence (JSD), coverage (COV), and minimum matching dis... | In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar... | In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Thro... | B
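As a concrete instance of the second loss mentioned in this row, a minimal NumPy sketch of the Chamfer pseudo-distance between two point clouds (brute-force O(nm) pairing; real training code would use batched GPU kernels):

```python
import numpy as np

def chamfer_distance(P, Q):
    """Chamfer pseudo-distance between point clouds P (n x 3) and Q (m x 3):
    for each point, the squared distance to its nearest neighbor in the
    other cloud, averaged within each cloud and summed over both directions."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # (n, m) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

P = np.random.rand(128, 3)
Q = np.random.rand(128, 3)
print(chamfer_distance(P, Q))
```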
$\bar{\mathcal{X}}\times\mathcal{P}_{i}\times\bar{\mathcal{Y}}\times\mathcal{Q}_{i}$ ($i=1,\ldots,m$).
| Paper organization. This paper is organized as follows. Section 2 presents a saddle point problem of interest along with its decentralized reformulation. In Section 3, we provide the main algorithm of the paper to solve such problems. In Section 4, we present the lower complexity bounds for saddle point problem... | Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem and solve it by the DMP algorithm. Similarly to Section (3), we consider an SPP in the proximal setup and introduce Lagrangian multipliers for the common variables. However, in t... | Our technique can be generalized to non-smooth problems by using another variant of the sliding procedure [34, 15, 23]. By using a batching technique, the results can be generalized to stochastic saddle-point problems [15, 23]. Instead of the smooth convex-concave saddle-point problem we can consider general sum-type s...
We proposed a decentralized method for saddle point problems based on the non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ... | B
The set of cycles of a graph has a vector space structure over $\mathbb{Z}_{2}$, in the case of undirected graphs, and over $\mathbb{Q}$, in the case of directed graphs [5]. A basis of such a vector space is called a cycle basis, and its dimensio... | In the case that we can find some non-star spanning tree $T$ of
$G$ such that $\cap(T)<\cap(T_{s})$, then we can “simplify” the instance by removing the interbranch cycle-edges with respect to $T$...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric... |
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with the sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class. | D |
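To make the spanning-tree view of cycle bases concrete, a small self-contained sketch (a hypothetical helper, not taken from the cited references) that builds a strictly fundamental cycle basis: each non-tree edge closes exactly one cycle with the BFS tree, giving the $m-n+1$ basis cycles of a connected graph.

```python
from collections import deque

def fundamental_cycle_basis(n, edges):
    """Strictly fundamental cycle basis of a connected undirected graph:
    pick a BFS spanning tree; every non-tree edge (u, v) closes exactly
    one cycle with the tree path between u and v."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    parent, order = {0: None}, deque([0])
    while order:                          # BFS spanning tree rooted at 0
        u = order.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                order.append(v)
    tree = {(u, parent[u]) for u in parent if parent[u] is not None}
    tree |= {(v, u) for u, v in tree}     # store both orientations

    def path_to_root(u):
        path = []
        while u is not None:
            path.append(u); u = parent[u]
        return path

    basis = []
    for u, v in edges:
        if (u, v) not in tree:            # non-tree edge -> one cycle
            pu, pv = path_to_root(u), path_to_root(v)
            common = set(pu) & set(pv)
            lca = next(x for x in pu if x in common)
            cu = [x for x in pu if x not in common]
            cv = [x for x in pv if x not in common]
            basis.append(cu + [lca] + cv[::-1])
    return basis

# Square with one diagonal: m - n + 1 = 5 - 4 + 1 = 2 basis cycles.
print(fundamental_cycle_basis(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```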
Fix a simplicial complex $K$, a value $\delta\in(0,1]$, and integers $b\geq 1$ and $m>\mu(K)$. If $\mathcal{F}$ is a sufficiently large $(K,b)$-free cover such that $\pi_{m}(\mathcal{F})\geq\delta\binom{|\mathcal{F}|}{m}$...
It is known that the Helly number of a $(K,b)$-free cover is bounded from above in terms of $K$ and $b$ [18] (the bound on the Helly number of a $(K,b)$-free cover follows directly from a combination of Proposition 30 and Lemma 26 in [18]), as is the Radon number [35, Proposit... | One immediate application of Theorem 1.2 is the reduction of fractional Helly numbers. For instance, it easily improves a theorem of Patáková [35, Theorem 2.3] (which was not phrased in terms of $(K,b)$-free covers but readily generalizes to that setting; see Section 1.4.1) in...
Through a series of papers [18, 35, 22], the Helly numbers, Radon numbers, and fractional Helly numbers for $(\lceil d/2\rceil,b)$-covers in $\mathbb{R}^{d}$ were bounded in terms of $d$ and...
Note that the constant number of points given by the $(p,q)$-theorem in this case depends not only on $p$, $q$, and $d$, but also on $b$. For the setting of $(1,b)$-covers in surfaces (by a surface we mean a compact 2-dimensional ... | B
Various visualization techniques have been proposed for the task of feature selection, including correlation matrices [42, 43], radial visualizations [44, 45, 46], scatterplots [47], scatterplot matrices [48], feature ranking [49, 50, 51, 52, 53, 54, 55, 56], feature clustering [57], and dimensionality reduction (DR) ... | Figure 7: Engineering features for improved predictive performance. From the pre-training phase, we detect that most of the instances belong to the Best slice (a.4), then the Worst slice (a.1), followed by the remaining slices (a.3 and a.2). In view (b), we validate every feature by working in synergy with the table he... | Visual support for the task of feature subset selection requires displaying information on different levels of granularity; highly detailed views are not optimal because they do not scale well with many features. For instance, the tool by Hohman et al. [63] facilitates the visual comparison of feature distributions for... | A use case of a visual diagnosis tool revealed that feature generation involving the combination of two features can yield a slight increase in performance [30]. The authors tested the same mathematical operations as in our system (i.e., addition, subtraction, multiplication, and division), but the generati...
Figure 1: Selecting important features, transforming them, and generating new features with FeatureEnVi: (a) the horizontal beeswarm plot for manually slicing the data space (which is sorted by predicted probabilities) and continuously checking the migration of data instances throughout the process; (b) the table heat... | B |
$\theta:=[\gamma_{c},\gamma_{l},\gamma_{\ddot{x}},\gamma_{\ddot{y}},\ldots]^{\mathsf{T}}.$ | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low-level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi... | To explore these trade-offs we formulate a high-level optimization problem with a cost function and constraints defined based on the entire position and velocity trajectory, which indicate respectively the overall performance of the control scheme and the operation limits.
| This paper demonstrated a hierarchical contour control implementation for increasing productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low-level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe... | For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using a Latin hypercube design of experiments. The BO progress is shown in Figure 5, right panel, for the optimization with constraints on the jerk and on the tracking error. Af... | B
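A minimal sketch of the Latin hypercube initialization described in this row, using SciPy's quasi-Monte Carlo module; the parameter dimension and bounds below are invented placeholders rather than the actual MPC parameter ranges.

```python
import numpy as np
from scipy.stats import qmc  # requires scipy >= 1.7

# Draw 20 space-filling initial samples over a hypothetical 3D box of
# MPC/controller parameters (bounds are illustrative assumptions).
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=20)                    # points in [0, 1]^3
lower = np.array([0.1, 1.0, 0.01])
upper = np.array([10.0, 100.0, 1.0])
theta_init = qmc.scale(unit_samples, lower, upper)     # scaled to bounds
print(theta_init.shape)  # (20, 3): 20 initial parameter vectors for the GPs
```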
While this is a toy problem, in the real world, hidden minority patterns are common and failing on them can have dire consequences. Systems designed to aid human resources, help with medical diagnosis, determine probation, or loan qualification could be biased against minority groups based on age, gender, religion, sex... |
Recently, many methods have been proposed to make neural networks bias-resistant. These methods can be grouped into two types: 1) those that assume the bias variables, e.g., the gender label in CelebA, are explicitly annotated and can be accessed during training [55, 69, 37] and, 2) those that do not require expli... | Explicit bias mitigation techniques directly access the bias variables $b_{expl.}$ during training to develop invariance to them. Based on the way these variables are utilized during training, we choose five d...
It is unknown how well the methods scale up to multiple sources of bias and a large number of groups, even when they are explicitly annotated. To study this, we train the explicit methods with multiple explicit variables for Biased MNISTv1 and individual variables that lead to hundreds and thousands of groups for GQA ... | Without bias mitigation mechanisms, standard models (StdM) often use spurious bias variables for inference, rather than developing invariance to them, which often results in their inability to perform well on minority patterns [27, 11, 3, 61]. To address this, several bias mitigation mechanisms have been proposed, and ... | A
Figure 14: We illustrate the relation between gaze directions and PoG. Gaze directions originate from an origin $\boldsymbol{o}$ and intersect the screen at the PoG. The PoG is usually denoted as a 2D coordinate $\boldsymbol{p}$. It can be converted to a 3D coordinate $\boldsymbol{t}$... | Two kinds of evaluation protocols are commonly used for deep-learning-based gaze estimation methods, including within-dataset and cross-dataset evaluation.
The within-dataset evaluation assesses the model performance on unseen subjects from the same dataset. The dataset is divided into training and test sets accordi... | The second row in Tab. VII shows the results of PoG estimation methods. AFF-Net [57] and EFE [187] show better performance than the other compared methods.
The third and fourth rows show the converted results. The compared methods are designed for gaze direction estimation, and we convert their results into PoG. |
TABLE V: Benchmark of within-dataset evaluation. We use the provided source codes or re-implement (†) the methods for comparison. The underlines indicate the top three best performances. Note that the methods in the last row are proposed for point-of-gaze estimation; we convert the results using the post-processi... | We also convert the two definitions with post-processing methods following Sec. 4.2.2.
We respectively conduct benchmarks for 2D PoG and 3D gaze estimation. The 3D gaze estimation benchmarks are further divided into within-dataset and cross-dataset evaluation. We mark the top three performances in all benchmarks with underlines. | C
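The conversion between gaze directions and PoG discussed in this row is a ray-plane intersection. A minimal sketch, assuming purely for illustration that the screen is the plane $z=0$ of the shared coordinate system:

```python
import numpy as np

def gaze_to_pog(o, d):
    """Intersect a gaze ray with the screen plane to obtain the PoG.

    Assumes (illustrative convention only) that the screen lies in the
    plane z = 0; o is the 3D gaze origin, d the 3D gaze direction.
    Returns the 2D on-screen coordinate p and the 3D point t."""
    s = -o[2] / d[2]      # ray parameter where the z component reaches 0
    t = o + s * d         # 3D point of gaze on the screen plane
    return t[:2], t

o = np.array([0.0, 0.0, 0.6])       # eye 60 cm in front of the screen
d = np.array([0.1, -0.05, -1.0])    # gaze direction toward the screen
p, t = gaze_to_pog(o, d)
print(p)  # 2D PoG, here [0.06, -0.03]
```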
Table 1 reports the classification rates on the RMFRD dataset using four different sizes of the codebook (i.e., the number of codewords in the RBF layer): 50, 60, 70, and 100 term vectors per image. We can see that the best recognition rate is obtained using the third FMs in the last convolutional layer from VGG-16 with 60... | Table 1 reports the classification rates on the RMFRD dataset using four different sizes of the codebook (i.e., the number of codewords in the RBF layer): 50, 60, 70, and 100 term vectors per image. We can see that the best recognition rate is obtained using the third FMs in the last convolutional layer from VGG-16 with 60...
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieved a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner. It consists of applying a TL technique to fine-tune the ... |
The efficiency of each pre-trained model depends on its architecture and the abstraction level of the extracted features. When dealing with real masked faces, VGG-16 has achieved the best recognition rate, while ResNet-50 outperformed both VGG-16 and AlexNet on the simulated masked faces. This behavior can be explaine... |
Table 2 reports the classification rates on the SMFRD dataset. The highest recognition rate, 88.9%, is achieved by ResNet-50 through the quantization of DRF features. This performance is achieved using 70 codewords that feed an MLP classifier. The AlexNet model achieved good recognition rates compared to VGG-16 ... | D
The phrases $\phi\land\ldots$ and $\phi\Rightarrow\ldots$ are constrained types, so that the $\operatorname{succ}$ branch of $\operatorname{nat}[i]$ produces a $\operatorname{nat}$ at height $i-1$ ...
Postponing the details of our typing judgment for the moment, the signature below describes definitions that project the even- and odd-indexed substreams (referred to by $y$) of some input stream (referred to by $x$) at half of the original depth. Note that indexing begins at zero.
The even-indexed substream retains the head of the input, but its tail is the odd-indexed substream of the input’s tail. The odd-indexed substream, on the other hand, is simply the even-indexed substream of the input’s tail. Operationally, the heads and tails of both substreams are computed on demand similar to a lazy... |
For space, we omit the process terms. Of importance is the instance of the call rule for the recursive call to eat: the check $i-1<i$ verifies that the process terminates, and the loop $[(i-1)/i][z/x]D$...
One solution that avoids syntactic checks is to track the flow of (co)data size at the type level with sized types, as pioneered by Hughes et al. [HPS96] and further developed by others [BFG+04, Bla04, Abe08, AP16]. Inductive and coinductive types are indexed by the height and observable depth of their data and codata... | A |
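The even/odd substream definitions in this row have a direct operational reading as mutually recursive lazy streams. A small generator-based sketch (illustrative only; the row's signature lives in a typed process calculus with sized types, which Python does not capture):

```python
from itertools import count, islice

def evens(stream):
    """Even-indexed substream: keep the head of the input; the tail is
    the odd-indexed substream of the input's tail, computed on demand."""
    it = iter(stream)
    yield next(it)        # retain the head
    yield from odds(it)   # tail: odds of the input's tail

def odds(stream):
    """Odd-indexed substream: the even-indexed substream of the input's tail."""
    it = iter(stream)
    next(it)              # drop the head
    yield from evens(it)

print(list(islice(evens(count()), 5)))  # [0, 2, 4, 6, 8]
print(list(islice(odds(count()), 5)))   # [1, 3, 5, 7, 9]
```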
Ensure efficiency gains and scalability. For one thing, we need to carefully control the owner-side overhead to ensure that the owner can gain significant local resource savings from cloud media sharing. For another, we need to ensure that the two proposed schemes are scalable to handle real-time requests from users.
| Thirdly, there are also studies that deal with both privacy-protected access control and traitor tracing. Xia et al. [26] introduced the watermarking technique to privacy-protected content-based ciphertext image retrieval in the cloud, which can prevent the user from illegally distributing the retrieved images. However... | TTP-free. The two proposed schemes should not require any TTP to participate in the media sharing process. The TTP mentioned here does not cover the judge, who is only responsible for handing down sentences in cases of suspected redistribution and is not involved in the media sharing process. The judge is an indispensa... | Finally, the comparison between the two proposed schemes and the existing relevant schemes is summarized in Table I. As can be seen therein, the two proposed schemes FairCMS-I and FairCMS-II have advantages over the existing works. In addition, the two proposed schemes offer owners the flexibility to choose. If the sec... | Judge. The judge is a trusted entity who is only responsible for arbitration in the case of illegal redistribution, as in existing traitor tracing systems [10, 11, 12, 13, 14, 3]. After receiving the owner’s request for arbitration, the judge makes a fair judgment based on the evidence provided by the owner. Although o... | B |
The high-order relations between nodes can be modeled explicitly by stacking layers.
Gated Graph Neural Networks (GGNN) Li et al. (2015) use a GRU Cho et al. (2014) to update the node representations based on the aggregated neighborhood feature information.
Graph Neural Networks (GNN) Kipf and Welling (2017); Hamilton et al. (2017); Veličković et al. (2018) have recently emerged as an effective class of models for capturing high-order relationships between nodes in a graph and have achieved state-of-the-art results on a variety of tasks such as computer vision...
To capture the diversified polysemy of feature interactions in different semantic subspaces Li et al. (2020) and also stabilize the learning process Vaswani et al. (2017); Veličković et al. (2018), we extend our mechanism to employ multi-head attention. | Due to the strength in modeling relations on graph-structured data, GNN has been widely applied to various applications like neural machine translation Beck et al. (2018), semantic segmentation Qi et al. (2017), image classification Marino et al. (2017), situation recognition Li et al. (2017), recommendation Wu et al. ... | Though based on graph spectral theory Bruna et al. (2013), the learning process of graph convolutional networks (GCN) Kipf and Welling (2017) can also be considered as a mean-pooling neighborhood aggregation.
GraphSAGE Hamilton et al. (2017) concatenates the node features and introduces three | D |
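A minimal NumPy sketch of the mean-pooling reading of neighborhood aggregation mentioned in this row (the spectral GCN of Kipf and Welling actually uses symmetric normalization of the adjacency matrix; plain mean-pooling is shown here only to illustrate the aggregation view):

```python
import numpy as np

def mean_pool_layer(A, H, W):
    """One layer viewed as mean-pooling neighborhood aggregation: each
    node averages its neighbors' features (including itself via a
    self-loop), then applies a shared linear map followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # neighborhood sizes
    H_agg = (A_hat @ H) / deg               # mean over each neighborhood
    return np.maximum(H_agg @ W, 0.0)       # linear map + ReLU

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # adjacency
H = np.random.rand(3, 4)                                      # node features
W = np.random.rand(4, 2)                                      # weights
print(mean_pool_layer(A, H, W).shape)                         # (3, 2)
```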
where $Q$ is a symmetric positive definite matrix with log-normally distributed eigenvalues and $\varphi_{\mathbb{R}_{+}}(\cdot)$ | The results are shown in Figure 7. On both of these instances, the progress of the simple step size is slowed down or even seems stalled in comparison to the stateless
version, because many halving steps were performed in the early iterations for the simple step size, which penalizes progress over the whole run.
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is... | In practice, a halving strategy for the step size is preferred for the
implementation of the Monotonic Frank-Wolfe algorithm, as opposed to the step size implementation shown in Algorithm 1. This halving strategy, which is shown in Algorithm 2, helps | The stateless step-size does not suffer from this problem, however, because the halvings have to be performed at multiple iterations when using the stateless step-size strategy,
the per-iteration cost of the stateless step-size is about three times that of the simple step-size. | A
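For intuition, a toy sketch of a Frank-Wolfe iteration with a halving step-size rule in the spirit of this row: start from the open-loop step $2/t$ and halve until the objective does not increase, which keeps the iterates monotone. This is a schematic reading only; the Algorithms 1-2 referenced in the text differ in their exact bookkeeping.

```python
import numpy as np

def monotonic_fw(f, grad_f, lmo, x0, iters=100):
    """Frank-Wolfe with a halving step-size guard: halve the default
    step 2/t until the candidate point does not increase f, so the
    objective values are monotone over the run."""
    x = x0
    for t in range(2, iters + 2):
        v = lmo(grad_f(x))              # linear minimization oracle
        gamma = 2.0 / t                 # default open-loop step size
        while gamma > 1e-12 and f(x + gamma * (v - x)) > f(x):
            gamma /= 2.0                # halve until monotone progress
        x = x + gamma * (v - x)
    return x

# Example: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.1, 0.7, 0.2])
f = lambda x: np.sum((x - b) ** 2)
grad_f = lambda x: 2 * (x - b)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]   # best simplex vertex
x = monotonic_fw(f, grad_f, lmo, np.array([1.0, 0.0, 0.0]))
print(np.round(x, 3))   # close to b
```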
Furthermore, we make some important observations about invariants that are preserved by operations of our algorithm which we will use later.
In Section 4, we prove the correctness of our algorithm. The approximation analysis as well as the proof of the pass complexity can be found in Section 5. In Section 6 we provide ... | The basic building block in the search for augmenting paths is to find semi-matchings between the vertices and their matched neighbors such that each vertex has a small number of neighbors in the semi-matching.
In the case of bipartite graphs, they show that their method of searching for augmenting paths in a graph def... | Furthermore, we make some important observations about invariants that are preserved by operations of our algorithm which we will use later.
In Section 4, we prove the correctness of our algorithm. The approximation analysis as well as the proof of the pass complexity can be found in Section 5. In Section 6 we provide ... |
In the first pass, we apply a simple greedy algorithm to find a maximal matching, hence a 2-approximation. This 2-approximate maximum matching is our starting matching. The rest of our algorithm is divided into multiple phases. In each phase, we iteratively improve the approximation ratio of our current matchin... | In this section, we give a brief outline of our approach and discuss the challenges we overcome.
As the basic building block, we follow the classic approach by Hopcroft and Karp [HK73] of iteratively finding short augmenting paths to improve a 2-approximate matching that can easily be found by a greedy algorithm. | D
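The starting point described in this row, a one-pass greedy maximal matching, is a 2-approximation because every matched edge can block at most two edges of a maximum matching. A minimal sketch:

```python
def greedy_maximal_matching(edges):
    """One-pass greedy maximal matching: take an edge whenever both of
    its endpoints are still free. A maximal matching has at least half
    the size of a maximum matching, hence a 2-approximation."""
    matched = set()
    matching = []
    for u, v in edges:                 # edges arrive in stream order
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u); matched.add(v)
    return matching

print(greedy_maximal_matching([(1, 2), (2, 3), (3, 4), (4, 5)]))
# [(1, 2), (3, 4)] -- maximal, within a factor 2 of the maximum
```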
$\tilde{\bm{d}}^{k}_{1:4}\leq\sigma^{\prime}\rho(\tilde{\bm{A}})^{k}\overline{\bm{v}}^{\prime}_{1:4},$ | In this section, we compare the numerical performance of CPP and B-CPP with the Push-Pull/$\mathcal{AB}$ method [24, 25].
In the experiments, we equip CPP and B-CPP with different compression operators and consider different graph topologies. |
We consider an asynchronous broadcast version of CPP (B-CPP). B-CPP further reduces the communicated data per iteration and is also provably linearly convergent over directed graphs for minimizing strongly convex and smooth objective functions. Numerical experiments demonstrate the advantages of B-CPP in saving commun... | In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP... | In this paper, we consider decentralized optimization over general directed networks and propose a novel Compressed Push-Pull method (CPP) that combines Push-Pull/$\mathcal{AB}$ with a general class of unbiased compression operators. CPP enjoys large flexibility in both the com... | A
20: $u^{k+1}_{x_{m}}=\delta^{k}u^{k}_{x_{m}}+(1-\delta^{k})x_{m}^{k+1}$ | Our first two methods make several iterations between communications when $\lambda$ is small (or, vice versa, for big $\lambda$, make several communications between local iterations). The following method (Algorithm 3) is also built around the alternation of local iterations and communications, but it m...
We adapt the proposed algorithm for training neural networks. We compare our algorithms: the sliding type (Algorithm 1) and the local-method type (Algorithm 3). To the best of our knowledge, this is the first work that compares these approaches in the scope of neural networks, as previous studies were limited to simpler... | To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms and derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile... | Unlike (2), the formulation (1) penalizes not the difference with the global average, but the sameness with other connected local nodes. Thereby the decentralized case can be artificially created in a centralized architecture, e.g., if we want to create the network and $W$ matrix to connect only some clients bas... | A
Trade Comm is a two-player, common-payoff trading game, where players attempt to coordinate on a compatible trade. This game is difficult because it requires searching over a large number of policies to find a compatible mapping, and can easily fall into a sub-optimal equilibrium. Figure 2(b) shows a remarkable domina... |
Measuring convergence to NE (NE Gap, Lanctot et al. (2017)) is suitable in two-player, constant-sum games. However, it is not rich enough in cooperative settings. We propose to measure convergence to (C)CE ((C)CE Gap in Section E.4) in the full extensive-form game. A gap, $\Delta$, of zero implies convergence t... | Sheriff (Farina et al., 2019b) is a two-player, general-sum negotiation game. It consists of bargaining rounds between a smuggler, who is motivated to import contraband without getting caught, and a sheriff, who is motivated to find contraband or accept bribes. Figure 2(c) shows that JPSRO is capable of finding the opt...
Trade Comm is a two-player, common-payoff trading game, where players attempt to coordinate on a compatible trade. This game is difficult because it requires searching over a large number of policies to find a compatible mapping, and can easily fall into a sub-optimal equilibrium. Figure 2(b) shows a remarkable domina... | PSRO has proved to be a formidable learning algorithm in two-player, constant-sum games, and JPSRO, with (C)CE MSs, is showing promising results on n-player, general-sum games. The secret to the success of these methods seems to lie in (C)CEs ability to compress the search space of opponent policies to an expressive an... | B |
Our Covariance Lemma (3.5) shows that there are two possible ways to avoid adaptivity-driven overfitting: by bounding the Bayes factor term, which induces a bound on $|q(D^{v})-q(D)|$... | Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific $q$ and $K(\cdot,v)$ as discussed in Section 6. The second part of this lemma implies that bounding the appropriate divergence is necessary and sufficient...
These results extend to the case where the variance (or variance proxy) of each query $q_{i}$ is bounded by a unique value $\sigma_{i}^{2}$... | Our Covariance Lemma (3.5) shows that there are two possible ways to avoid adaptivity-driven overfitting: by bounding the Bayes factor term, which induces a bound on $|q(D^{v})-q(D)|$... | This work was supported in part by a gift to the McCourt School of Public Policy and Georgetown University, Simons Foundation Collaboration 733792, Israel Science Foundation (ISF) grant 2861/20, and a grant from the Israeli Council for Higher Education. Shenfeld's work was also partly supported by the Apple Scholars in... | D
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter $k$... | We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an $\mathsf{NP}$-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized... | We therefore propose the following novel research direction: to investigate how preprocessing algorithms can decrease the parameter value (and hence search space) of FPT algorithms, in a theoretically sound way. It is nontrivial to phrase meaningful formal questions in this direction. To illustrate this difficulty, not... |
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter $k$... | C
Similar to FOPA [111], Zhu et al. [211] proposed to predict the rationality scores of all scales and locations, based on the interaction output between foreground and background using a transformer [158]. Zhu et al. [211] also explored using unlabeled images with deliberately designed loss functions for object placement ... | In the previous section, image harmonization methods could adjust the foreground appearance to make it compatible with the background, but they ignore the fact that the inserted object may also have an impact on the background (e.g., reflection, shadow). For example, if background objects cast shadows on the ground but th... | Object placement aims to paste the foreground on the background with suitable location, size, and shape. As shown in Fig. 4, the cases of unreasonable object placement include but are not limited to: a) the foreground object has an inappropriate size (e.g., the dog is too large); b) the foreground object has unreasonabl...
In the end, we briefly discuss the occlusion issue. Most of the above methods seek reasonable placements to avoid the occurrence of occlusion, i.e., the inserted foreground is not occluded by background objects. In contrast, a few methods [2, 190, 147] attempt to address unreasonable occlusion when it occurs. ... | Object placement [2, 24, 65, 154, 197] tends to seek reasonable location, size, and shape by predicting the foreground transformation to avoid the abovementioned inconsistencies. Previous object placement methods [197, 154] mainly predict a simple form of spatial transformation, that is, shifting and scaling the fore... | C
Multi-task or Not: Out of the 22 tasks examined, multi-task models exhibit the lowest RMSE in 15 (68.2%) tasks and the lowest MAE in 19 (86.4%) tasks. Our findings suggest that a simple multi-task learning approach, utilizing weight sharing, can enhance taxi service predictions by establishing connections among divers... | TABLE VII: The results of inter-city transfer learning from source domains (Beijing, Shanghai, and Xi’an) to target domains (Shenzhen, Chongqing, and Chengdu). The lowest RMSE/MAE using limited target data is highlighted in bold. The results under full data and 3-day data represent the lower and upper bounds for the er... | To address this problem, we utilize LSTM as the base model, which is similar to ST-net in MetaST [5], and adopt a multi-task learning approach. We select Beijing and Shanghai as the source cities for transfer learning tasks in cities with large map sizes, and Xi’an as the source city for the transfer learning tasks in ... |
Multi-task or Not: Out of the 22 tasks examined, multi-task models exhibit the lowest RMSE in 15 (68.2%) tasks and the lowest MAE in 19 (86.4%) tasks. Our findings suggest that a simple multi-task learning approach, utilizing weight sharing, can enhance taxi service predictions by establishing connections among divers... |
Graph Models or Not: Among the 16 tasks conducted in Beijing, Shanghai, Shenzhen, and Chongqing, GNN models exhibit the lowest RMSE in 8 (50%) tasks and the lowest MAE in all tasks, except for Xi'an and Chengdu, where CNN models outperform all other models in all tasks. Our analysis, as presented in Table II, reveals that X... | D
$\mathcal{W}(\Gamma,P)\approx\frac{1}{|\mathcal{D}^{*}|}\sum_{(\mathbf{x},y)\in\mathcal{D}^{*}}|u(\mathbf{x})-l(\mathbf{x})|\,.$
Aside from the above quality measures, some other quantities might be of importance depending on the application. In the experiments only the predictive power is considered. The benefit of working with models that are built upon or include a point predictor is that one also gets a direct estimate of the response varia... |
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat... | In Fig. 1, both the coverage degree, average width and $R^{2}$-coefficient are shown. For each model, the data sets are sorted according to increasing $R^{2}$-coefficient (averaged over th...
Meinshausen meinshausen2006quantile modified the random forest prediction method from the previous section to be able to directly estimate quantiles. Ordinary regression forests estimate the conditional mean of the response variable by taking a (weighted) average over the training instances in the leaves where the ne... | A |
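The $\mathcal{W}(\Gamma,P)$ estimate displayed at the start of this row is simply the average interval width over a held-out set. A minimal sketch with hypothetical lower/upper interval bounds:

```python
import numpy as np

def mean_interval_width(lower, upper):
    """Estimate W(Gamma, P) on a held-out set D*: the average width
    |u(x) - l(x)| of the prediction intervals over its instances."""
    return np.mean(np.abs(np.asarray(upper) - np.asarray(lower)))

l = np.array([0.1, -0.2, 1.0])   # l(x) on three held-out instances
u = np.array([0.9, 0.4, 1.5])    # u(x) on the same instances
print(mean_interval_width(l, u))  # (0.8 + 0.6 + 0.5) / 3 = 0.633...
```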
We evaluate PTMs on four piano music classification tasks.
These include two note-level classification tasks, i.e., melody extraction [simonettaCNW19, note-affinity] and velocity prediction [widmer94aaai, jeongKKLN19ismir, jeongKKN19icml], and two sequence-level classification tasks, i.e., style classificat...
Table 2: The testing classification accuracy (in %) of different combinations of MIDI token representations and models for four downstream tasks: three-class melody classification, velocity prediction, style classification and emotion classification. "CNN" represents the ResNet50 model used by [lee20ismirLBD], ... | For fine-tuning, we create training, validation and test splits for each of the three datasets of the downstream tasks with the 8:1:1 ratio at the piece level (i.e., all the 512-token sequences from the same piece are in the same split).
With the same batch size of 12, we fine-tune our pre-trained model for each ta... | As the major contribution of this article, we report a performance study of variants of PTM for this diverse set of classification tasks, comparing the proposed approach (Section 6) with recurrent neural network (RNN)-based baselines (Section 5).
Results reported in Section 7 show that the “pre-train and fine-tune” str... | CNN-based baselines [simonettaCNW19] for the two-class “melody versus non-melody” melody classification task.
As the dataset is highly unbalanced (i.e., the melody notes are much fewer than the accompaniment notes), we also report the precision, recall and F1 scores. It turns out that our model greatly outperfo... | C |
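A minimal sketch of the piece-level 8:1:1 splitting described in this row, which keeps all 512-token sequences of a piece in the same split; names and the ratio handling are illustrative assumptions:

```python
import random

def piece_level_split(piece_ids, ratios=(8, 1, 1), seed=0):
    """Split at the piece level so every sequence from the same piece
    lands in the same split (default 8:1:1 train/valid/test)."""
    pieces = sorted(set(piece_ids))
    random.Random(seed).shuffle(pieces)
    total = sum(ratios)
    n_train = len(pieces) * ratios[0] // total
    n_valid = len(pieces) * ratios[1] // total
    train = set(pieces[:n_train])
    valid = set(pieces[n_train:n_train + n_valid])
    test = set(pieces[n_train + n_valid:])
    return train, valid, test

train, valid, test = piece_level_split(range(100))
print(len(train), len(valid), len(test))  # 80 10 10
```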
Let $G$ be a graph on $n$ vertices and $H$ its spanning subgraph. Then $\lambda(\chi(H)-1)+1\leq BBC_{\lambda}(G,H)\leq\lambda(\chi(H)-1)+n-\chi(H)+1$ | Additionally, [16] proved that for comparability graphs we can find a partition of $V(G)$ into at most $k$ sets which induce semihamiltonian subgraphs in the complement of $G$ (i.e. each contains a Hamiltonian path), and from that it follows that $BBC_{2}(K_{n},G)$... | The $\lambda$-backbone coloring problem was studied for several classes of graphs, for example split graphs [5], planar graphs [3], complete graphs [6], and for several classes of backbones: matchings and disjoint stars [5], bipartite graphs [6] and forests [3].
For a special case $\lambda=2$ i...
Moreover, it was proved before in [4] that there exists a 2-approximate algorithm for complete graphs with bipartite backbones and a 3/2-approximate algorithm for complete graphs with connected bipartite backbones. Both algorithms run in linear time. As a corollary, it was proved that we can compute $BBC$... | An obvious extension would be an analysis for the class of split graphs, i.e. graphs whose vertices can be partitioned into a maximum clique $C$ (of size $\omega(G)=\chi(G)$) and an independent set $I$.
A simple application of Theorem 2.18 gi... | B