Dataset Viewer
Auto-converted to Parquet
Columns:
  context  string  (250 – 5.39k chars)
  A        string  (250 – 8.2k chars)
  B        string  (250 – 7.25k chars)
  C        string  (250 – 4.17k chars)
  D        string  (250 – 3.2k chars)
  label    string  (4 classes)
$a\equiv-(n-m)/2;\quad b\equiv(D+n+m)/2;\quad c\equiv m+D/2,$
$\cdots+\frac{(n-m)(n-m-2)(D+n+m)(D+n+m+2)}{8(D+2m)(D+2m+2)}\,x^{4}+\cdots\Big].$
$\cdots(n-m)^{2}+\left(m+\tfrac{D}{2}\right)(n-m)+m+\tfrac{D}{2}-1\Big]R_{n}^{m}(x)-\left(1+\tfrac{n-m}{2}\right)\left(1-n-\tfrac{D}{2}\right)\tfrac{n+m+D}{2}\,R\ldots$
$\cdots\left[\left(n(n+D)-\frac{m(D-2+m)}{x^{2}}\right)\frac{R_{n}^{m}(x)}{{R_{n}^{m}}'(x)}+\frac{D-1-(D+1)x^{2}}{x}\right].$
$R_{n}^{m}(x)=(-1)^{a}\binom{(n+m+D)/2-1}{(n-m)/2}\,x^{m}\Big[1-\frac{(n-m)(D+n+m)}{2(D+2m)}\,x^{2}\cdots$
D
19: $S :=$ composition of $S$ and the MSLP of $h_{\ell}^{-1}h_{r}^{-1}$ ...
First we describe the preprocessing phase during which we initialize the memory of the MSLP to encode particular matrices which will be useful for expressing diagonal matrices as words independently of the given diagonal matrix. The constructed matrices can be reused for all diagonal matrices, and so further diagonal m...
The first step of the algorithm is the one-off computation of $T_{2}$ from the LGO standard generators of $\mathrm{SL}(d,q)$. The length and memory requirement of an MSLP for this step is as follows.
The following lemma shows how to compute the matrices of the preprocessing step. Recall that $\omega$ is a primitive element of $\mathbb{F}_{q}=\mathbb{F}_{p^{f}}$ ...
We now compute upper bounds for the length and memory quota of an MSLP for expressing an arbitrary diagonal matrix $h\in\mathrm{SL}(d,q)$ as a word in the LGO generators, i.e., the computation phase of the algorithm.
A
where $\Omega\subset\mathbb{R}^{d}$ with $d=2$ or $3$ for simplicity, and is an open bounded domain with polyhedral boundary $\partial\Omega$, the symmetric tensor $\mathcal{A}\in[L^{\infty}(\Omega)]^{d\times d}_{\mathrm{sym}}$ ...
In [MR2718268] it is shown that the number of very large eigenvalues is related to the number of connected sub-regions of $\bar{\tau}\cup\bar{\tau}'$ with large coefficien...
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85...
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decay properties, only local comput...
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ...
B
Similarly, from a $P$-stable triangle $A'B'C'$, we can also construct $\triangle ABC$ ...
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases.
Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), as does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.)
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Our algorithm given in section 4 (denoted by Alg-One) is different from Alg-DS. First, step 1 of Alg-One sets the initial value of $(r,s,t)$ differently from the initial value $(1,2,3)$ used by Alg-DS.
C
$SVM_{static}$ + SpikeM [17]
For analyzing the employed features, we rank them by importance using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of new...
As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the results of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore fluctuates a bit afte...
We tested all models by using 10-fold cross validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; $TS\text{-}SVM$ ...
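A minimal sketch of this evaluation protocol, assuming scikit-learn; the feature matrix and labels below are random placeholders, and a fixed `random_state` stands in for the shared shuffled fold sequence.

```python
# 10-fold cross validation where every compared model sees the same
# shuffled sequence of folds (fixed random_state).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.RandomState(0)
features = rng.rand(200, 20)          # placeholder ensemble feature vectors
labels = rng.randint(0, 2, size=200)  # placeholder rumor(1)/news(0) labels

cv = KFold(n_splits=10, shuffle=True, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)
scores = cross_val_score(model, features, labels, cv=cv)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```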
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We trade this off by debunking at the single-tweet le...
B
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\| = O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$ ...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated
where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the K-class SVM:
where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen...
B
$\mathsf{L}(x^{(i)},y^{(i)}) = 1\{y^{(i)}=y_{rumor}\}\log(\tilde{y}_{rumor}^{(i)}) + 1\{y^{(i)}=y_{news}\}\log(\tilde{y}_{news}^{(i)})$
As observed in (madetecting; ma2015detect), rumor features are very prone to change during an event's development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in (ma2015detect). W...
In the lower part of the pipeline, we extract features from tweets and combine them with the CreditScore to construct the feature vector in a time series structure called the Dynamic Series-Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification.
The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi...
The effective cascaded model that engages both low and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402, ). The model uses time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to...
A
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather annotations from the studied times with regard to the event times mentioned previously. We compare the results of the cascaded model with non-cascaded logistic regression. The res...
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate feature performance, i.e., salience and timeliness, with time and type ...
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall...
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather annotations from the studied times with regard to the event times mentioned previously. We compare the results of the cascaded model with non-cascaded logistic regression. The res...
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event, which is driven by a great variety of factors. We address the two major factors that are assumed to have the most influence on the dynamics of events at aspect level, i.e., time and event type. Thus, we...
A
In this case, the agent must sequentially learn both the underlying dynamics ($L_{a},\Sigma_{a};\ \forall a$) and the conditional reward function's variance ...
If the support of $q(\cdot)$ includes the support of the distribution of interest $p(\cdot)$, one computes the IS estimator of a test function based on the normalized weights $w^{(m)}$,
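A small illustration of the self-normalized IS estimator described above; the Gaussian target and proposal are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def normalized_is_estimate(samples, test_fn, log_p, log_q):
    """Self-normalized IS estimate of E_p[test_fn(x)] from samples x^(m) ~ q."""
    log_w = log_p(samples) - log_q(samples)  # unnormalized log-weights
    log_w -= log_w.max()                     # shift for numerical stability
    w = np.exp(log_w)
    w /= w.sum()                             # normalized weights w^(m)
    return np.sum(w * test_fn(samples))

# Example: estimate E_p[x] for p = N(1,1) using proposal q = N(0,2).
# Normalizing constants cancel, so unnormalized log-densities suffice.
rng = np.random.default_rng(0)
xs = rng.normal(0.0, 2.0, size=10_000)
log_p = lambda x: -0.5 * (x - 1.0) ** 2
log_q = lambda x: -0.5 * (x / 2.0) ** 2
print(normalized_is_estimate(xs, lambda x: x, log_p, log_q))  # ~1.0
```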
We observe noticeable (almost linear) regret increases when the dynamics of the parameters swap the identity of the optimal arm. However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters,
We now describe in detail how to use the SMC-based posterior random measure $p_{M}(\theta_{t+1,a}|\mathcal{H}_{1:t})$ ...
For the more interesting case of unknown parameters, we marginalize the parameters $L_{a}$ and $\Sigma_{a}$ of the transition distributions
B
Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t...
Insulin intake tends to be higher in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes occur earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning.
Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
A
The images presented during the acquisition of saliency maps in all aforementioned datasets are largely based on natural scenes. Stimuli of CAT2000 additionally fall into predefined categories such as Action, Fractal, Object, or Social. Together with the corresponding fixation patterns, they constituted the input and ...
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met...
Various measures are used in the literature and by benchmarks to evaluate the performance of fixation models. In practice, results are typically reported for all of them to include different notions about saliency and allow a fair comparison of model predictions Kümmerer et al. (2018); Riche et al. (2013). A set of nin...
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba...
We normalized the model output such that all values are non-negative with unit sum. The estimation of saliency maps can hence be regarded as a probability distribution prediction task as formulated by Jetley et al. (2016). To determine the difference between an estimated and a target distribution, the Kullback-Leibler ...
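A minimal sketch of this normalization and of the KL divergence between a target and an estimated saliency map; the epsilon regularizer and the random maps are assumptions for illustration.

```python
import numpy as np

def normalize(saliency):
    """Shift to non-negative values and rescale to unit sum, so the
    map can be treated as a probability distribution."""
    s = saliency - saliency.min()
    return s / s.sum()

def kld(target, estimate, eps=1e-12):
    """KL divergence KL(target || estimate) between two saliency maps
    interpreted as probability distributions."""
    t = normalize(target).ravel()
    e = normalize(estimate).ravel()
    return float(np.sum(t * np.log(eps + t / (e + eps))))

rng = np.random.default_rng(0)
pred = rng.random((48, 64))   # toy predicted saliency map
gt = rng.random((48, 64))     # toy ground-truth fixation density
print(kld(gt, pred))
```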
B
In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). In Section 3, we discuss the concept of the locality number with some examples and some word-combinatorial considerations. The purpose of this section is to develop a better under...
One of the main results of this section is a reduction from the problem of computing the locality number of a word $\alpha$ to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr...
The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local...
Our strongest positive result about the approximation of the locality number will be derived from the reduction mentioned above (see Section 5.2). However, we shall first investigate in Section 5.1 the approximation performance of several obvious greedy strategies to compute the locality number (with “greedy strategie...
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection....
B
Montoya et al. [206] trained a 30-layer ResNet to generate 3D cerebral angiograms from contrast-enhanced images using three tissue types (vasculature, bone and soft tissue). They created the annotations using thresholding and connected components in 3D space, with a combined dataset of 13,790 images.
The method of Lessman et al. [195] for coronary calcium scoring utilizes three independently trained CNNs to estimate a bounding box around the heart, in which connected components above a Hounsfield-unit threshold are considered candidates for CACs. Classification of extracted voxels was performed by feeding two-dimensional p...
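A rough sketch of the thresholding-plus-connected-components candidate extraction described here, using SciPy; the 130 HU threshold and the toy volume are assumptions for illustration, not values from [195].

```python
import numpy as np
from scipy import ndimage

def candidate_components(volume_hu, threshold=130):
    """Label connected components of voxels above a Hounsfield-unit
    threshold in a 3D CT volume; each component is a CAC candidate."""
    mask = volume_hu >= threshold
    labels, n = ndimage.label(mask)  # 6-connectivity in 3D by default
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels, sizes

volume = np.random.randint(-1000, 1000, size=(32, 64, 64))  # toy CT volume
labels, sizes = candidate_components(volume)
print(f"{len(sizes)} candidate components")
```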
In [198], the authors created a method to identify and quantify CAC without the need for coronary artery extraction. The bounding-box detection around the heart employs three CNNs, where each detects the heart in the axial, sagittal and coronal plane.
Every ROI was identified using a combination of three CNNs, each analyzing one orthogonal image plane. While a single CNN predicted the presence of a specific ROI in the given plane, the combination of their results provided a 3D bounding box around it.
Similar work has been done by the same authors in [233], in which they use three CNNs to detect a bounding box around the LV and perform LV voxel classification within the bounding box. Commandeur et al. [208] used a combination of two deep networks to quantify epicardial and thoracic adipose tissue in CT from 250 patients w...
D
A stochastic model can be used to deal with the limited horizon of past observed frames as well as sprite occlusion and flickering, which results in higher-quality predictions. Inspired by Babaeizadeh et al. (2017a), we tried a variational autoencoder (Kingma & Welling, 2014) to model the stochasticity of the environment. ...
Figure 2: Architecture of the proposed stochastic model with discrete latent. The input to the model is four stacked frames (as well as the action selected by the agent) while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-...
Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster...
We noticed two major issues with the above model. First, the weight of the KL divergence loss term is game dependent, which is not practical if one wants to deal with a broad portfolio of Atari games. Second, this weight is usually a very small number in the range of $[10^{-3},10^{-5}]$ ...
As visualized in Figure 2, the proposed stochastic model with discrete latent variables discretizes the latent values into bits (zeros and ones) while training an auxiliary LSTM-based recurrent network (Hochreiter & Schmidhuber, 1997) to predict these bits autoregressively. At inference time, the latent bits will be gen...
C
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data making them scalable and genera...
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of a S2I is whether it consists of trainable para...
This is achieved with the use of multilayer networks, consisting of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ...
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data making them scalable and genera...
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals.
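A small sketch of the segmentation described above, splitting a raw trace into non-overlapping 178-sample windows; the 4097-sample toy recording is an assumption for illustration.

```python
import numpy as np

def segment(signal, window=178):
    """Split a 1D signal into non-overlapping windows of `window` samples,
    dropping any trailing remainder."""
    n = (len(signal) // window) * window
    return signal[:n].reshape(-1, window)

recording = np.random.randn(4097)  # toy raw EEG trace
segments = segment(recording)
print(segments.shape)              # (23, 178): 23 segments of 178 samples
```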
B
This paper presents a novel methodology for achieving autonomous locomotion mode transitions in quadruped wheel/track-legged hybrid robots, taking into account both internal states of the robot and external environmental conditions. Our emphasis is on the “articulated wheel/track robot” [15], where the wheels or tracks...
The implementation of the energy criterion strategy has proven effective in facilitating autonomous locomotion mode transitions for the Cricket robot when negotiating steps of varying heights. Compared to step negotiation purely in rolling locomotion mode, the proposed strategy demonstrated significant enhancements in...
The cornerstone of our transition criterion combines energy-consumption data with the geometric heights of the steps encountered. These threshold values are determined from energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we ...
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result...
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal...
B
All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat...
Furthermore, we show an interesting difference between the standard advice model and the model we introduce: in the former, an advice bit can be at least as powerful as a random bit, since an advice bit can effectively simulate any efficient choice of a random bit. In contrast, we show that in our model, there are situ...
the former can simulate the efficient choice of the latter, and thus provide a “no-loss” derandomization. However, in the setting of untrusted advice, the interplay between advice and randomization is much more intricate. This is because random bits, unlike advice bits, are assumed to be trusted.
A second issue we address in this section is related to the comparison of random bits and advice bits as resource. More specifically, in the standard model in which advice is always trustworthy, an advice bit can be at least as powerful as a random bit since
We show, using online bidding as an example, that there are situations in which a deterministic algorithm with $L+1$ advice bits is Pareto-incomparable to a randomized algorithm with 1 random bit and $L$ advice bits. In particular we focus on the bounded online bidding problem,
A
As said earlier, each chunk contained 10% of the subject’s writing history, a value that for some subjects could be just a single post while for others hundreds or even thousands of them. Furthermore, the use of chunks assumes we know in advance all subject’s posts, which is not the case in real life scenarios, in whic...
Since the dataset was highly unbalanced we optimized the penalty parameter $C$ ($C>0$) and the class weight parameter $w$ ($w\geq 1$) for SVM and LOGREG; for MNB only the class weight $w$ was varied, while for $K$NN the $K$ param...
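A hedged sketch of this hyperparameter search for the SVM case, assuming scikit-learn; the grids and the synthetic imbalanced data are illustrative, not the study's actual values.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Penalty C > 0 and class weight w >= 1 on the positive (minority) class.
param_grid = {
    "C": [0.1, 1, 10, 100],
    "class_weight": [{1: w} for w in (1, 2, 4, 8)],
}
X, y = make_classification(n_samples=300, weights=[0.9], random_state=0)
search = GridSearchCV(SVC(kernel="linear"), param_grid, scoring="f1", cv=5)
search.fit(X, y)
print(search.best_params_)
```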
We could make use of this “dynamic information” to apply certain policies to decide when to classify subjects as depressed. For example, one such policy would be “classify a subject as positive when the accumulated positive value becomes greater than the negative one” —in which case, note that our subject would be...
from the first chunk on, the cumulative confidence value of one of the classes (negative in this case) stays above the other and always grows faster. In this example, the subject was, correctly, not classified as depressed after reading all its chunks.
Given that we do not have previous results available from other participants under this new scenario, for comparison we had to perform experiments not only with SS3 but also with other standard classifiers: Logistic Regression (LOGREG), Support Vector Machine (SVM), Multinomial Naive Bayes (MNB) and $K$-Neare...
D
We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021). We ado...
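A minimal sketch of the Dirichlet-based non-IID partition with parameter 0.1, in the spirit of Hsu et al. (2019); the label array and worker count below are placeholders.

```python
import numpy as np

def dirichlet_partition(labels, n_workers, alpha=0.1, seed=0):
    """For each class, draw worker shares from Dirichlet(alpha) and split
    that class's sample indices proportionally across workers."""
    rng = np.random.default_rng(seed)
    workers = [[] for _ in range(n_workers)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        shares = rng.dirichlet(alpha * np.ones(n_workers))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for w, part in enumerate(np.split(idx, cuts)):
            workers[w].extend(part.tolist())
    return workers

labels = np.random.randint(0, 10, size=50_000)  # toy CIFAR10-like labels
parts = dirichlet_partition(labels, n_workers=8)
print([len(p) for p in parts])  # skewed per-worker sample counts
```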
We run DMSGD, DGC (w/ mfm), DGC (w/o mfm) and GMC respectively to solve the optimization problem: $\min_{\mathbf{w}\in\mathbb{R}^{d}} F(\mathbf{w})$ ...
In the experiments of (Lin et al., 2018), DGC gets far better performance on both accuracy and communication cost than quantization methods. Hence, we do not compare with quantization methods in this paper. We do not use the warm-up strategy in the experiments. The momentum coefficient $\beta$ is set to $0.9$ ...
Since the server is typically the busiest node in the parameter server architecture, we consider the communication cost on the server in our experiments. For DMSGD, which does not use any communication compression techniques, the communication cost on the server includes receiving vectors from the $K$ workers and se...
We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021). We ado...
B
These results suggest that reconstruction error by itself is not a sufficient metric for decomposing data in interpretable components. Trying to solely achieve lower reconstruction error (such as the case for the Identity activation function) produces noisy learned kernels, while using the combined measure of reconstru...
During validation we selected the models with the kernel size that achieved the best $\bar{\varphi}$ out of all epochs. During testing we feed the test data into the selected model and calculate $CR^{-1}$ ...
The three separate clusters which are depicted in Fig. 3 and the aggregated density plot in Fig. 4 between the Identity activation function, the ReLU and the rest show the effect of a sparser activation function on the representation.
These results suggest that reconstruction error by itself is not a sufficient metric for decomposing data in interpretable components. Trying to solely achieve lower reconstruction error (such as the case for the Identity activation function) produces noisy learned kernels, while using the combined measure of reconstru...
Comparing the differences in $\bar{\varphi}$ between the Identity, the ReLU and the remaining sparse activation functions in Fig. 4, we notice that the latter produce a minimum region in which we observe interpretable kernels.
D
In post-disaster scenarios, a great many UAVs are required to support users [4]. Therefore, we introduce aggregative game theory into such scenarios and permit UAVs to learn within constrained strategy sets. Because the aggregative game can integrate the impact of all other UAVs on one UAV, it reduces the complexity o...
In the literature, most works search for a PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, there are limitations to this algorithm. In BLLA, each UAV can calculate and predict its utility for any $s_{i}\in S_{i}$ ...
A new algorithm which can learn from previous experiences is required, and an algorithm with faster learning speed is more desirable. Existing algorithms learn by prediction: a UAV knows the current strategies with their corresponding payoffs, and it can randomly select another strategy and calc...
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on current game state, and then another UAV ch...
The essence of PBLLA is selecting an alternative UAV randomly in one iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of the two strategies and $\tau$. A UAV prefers to select the power and altitude which provide higher utility. Neve...
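A small sketch of the log-linear selection rule this description suggests: the probability of adopting a strategy is its Boltzmann (softmax) weight at temperature $\tau$. The utilities below are placeholders.

```python
import math
import random

def loglinear_choice(u_current, u_candidate, tau):
    """Return True if the candidate strategy is adopted, with probability
    exp(u_candidate/tau) / (exp(u_current/tau) + exp(u_candidate/tau))."""
    m = max(u_current, u_candidate)          # subtract max for stability
    e_cur = math.exp((u_current - m) / tau)
    e_cand = math.exp((u_candidate - m) / tau)
    return random.random() < e_cand / (e_cur + e_cand)

# Small tau makes the choice near-greedy; large tau approaches uniform.
print(loglinear_choice(1.0, 1.5, tau=0.1))
```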
B
$\frac{1}{\mu_{0}}\left[\widehat{dV}^{T}\ast\left\{\cdots\,\overline{f}\,\left(\widehat{\overline{\nabla}}\cdot\left(\widehat{\mathbf{B}}_{\theta}\,\widehat{\omega}\right)\right)\right\}\right]$ ...
$\frac{1}{\mu_{0}}\left[\widehat{dV}^{T}\ast\left\{\cdots\,\overline{f}\,\left(\widehat{\overline{\nabla}}\cdot\left(\widehat{\mathbf{B}}_{\theta}\,\widehat{\omega}\right)\right)\right\}\right]$ ...
$\widehat{\mathbf{P}}=\left(\widehat{\mathbf{B}}_{\theta}\,\widehat{\omega}\right)$
as $\widehat{\overline{\nabla}}\cdot\widehat{\mathbf{P}}$, where $\widehat{\mathbf{P}}=\overline{\widehat{\nabla}}\,\overline{U}$ ...
$+\left[\frac{1}{\mu_{0}}\,\overline{dV}^{T}\ast\left\{\overline{\omega}\left(\cdots\left(\widehat{\mathbf{B}}_{\theta}\,\widehat{\omega}\right)\right)\right\}\right]$ ...
B
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$ ...
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
$f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1 & \text{if } u=v\neq\texttt{null}\\ a & \text{if } u\neq\texttt{null},\ v\neq\texttt{null}\text{ and } u\neq v\\ b & \text{if } u=v=\texttt{null}\\ 0 & \text{otherwise.}\end{cases}$
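A direct transcription of this comparability function, with null modeled as Python's None and the intermediate truth values a and b left symbolic.

```python
NULL = None  # stands in for the null value in the data

def f(u, v, a="a", b="b"):
    """Comparability function: 1 for equal non-null values, a for
    comparable-but-unequal values, b when both are null, 0 otherwise."""
    if u == v and u is not NULL:
        return 1
    if u is not NULL and v is not NULL and u != v:
        return a
    if u is NULL and v is NULL:
        return b
    return 0

print(f(3, 3), f(3, 4), f(NULL, NULL), f(3, NULL))  # 1 a b 0
```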
Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality) by $h_{A}$ ...
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$ ...
A
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation between the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect of variance before applying Dropout (DQN) and aft...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein...
The Gridworld problem (Figure 4) is a common RL benchmark. Its relatively small state space permits the Experience Replay (ER) buffer to store all possible state-action pairs. Moreover, this setup allows for the precise computation of the optimal action value function.
Q-learning is among the most widely used reinforcement learning (RL) algorithms [4]. It is based on an incremental dynamic programming technique, using a step-by-step look-up table representation with which it determines the optimal policy [22]. The Q-learning algorithm employs a table to estimate the optimal action v...
where $s_{t+1}$ is the resulting state after applying action $a$ in the state $s$, $r$ is the immediate reward observed for action $a$ at state $s$, $\gamma$ is the discount factor, and $\alpha$ is the learning rate.
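The standard tabular update rule these symbols belong to, as a runnable one-step sketch; the Gridworld dimensions are placeholders.

```python
# Tabular Q-learning update:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

Q = np.zeros((16, 4))   # toy Gridworld: 16 states, 4 actions
q_update(Q, s=0, a=2, r=1.0, s_next=1)
print(Q[0, 2])          # 0.1
```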
B
Deep CNNs are heavily reliant on big data to avoid overfitting and class imbalance issues, and therefore this section focuses on data augmentation, a data-space solution to the problem of limited data. Apart from standard online image augmentation methods such as geometric transformations (LeCun et al., 1998; Simard et...
Deep CNNs are heavily reliant on big data to avoid overfitting and class imbalance issues, and therefore this section focuses on data augmentation, a data-space solution to the problem of limited data. Apart from standard online image augmentation methods such as geometric transformations (LeCun et al., 1998; Simard et...
Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving performance almost similar to full supervision for segmentation of cardiac images. Afshari et al. (2019) used a fully convol...
Neff et al. (2018) trained a Wasserstein GAN with gradient penalty (Gulrajani et al., 2017) to generate labeled image data in the form of image-segmentation mask pairs. They evaluated their approach on a dataset of chest X-ray images and the Cityscapes dataset, and found that the WGAN-GP was able to generate images wit...
Chartsias et al. (2017) used a conditional GAN to generate cardiac MR images from CT images. They showed that utilizing the synthetic data increased the segmentation accuracy and that using only the synthetic data led to only a marginal decrease in the segmentation accuracy. Similarly, Zhang et al. (2018c) proposed a G...
C
Problems such as graph classification and graph regression are characterized by samples of graphs that, generally, have a variable number of vertices. In order to apply MP and pooling operations when training a GNN on mini-batches, one solution is to perform zero-padding and obtain all graphs with $N_{\text{max}}$ ...
Problems such as graph classification and graph regression are characterized by samples of graphs that, generally, have a variable number of vertices. In order to apply MP and pooling operations when training a GNN on mini-batches, one solution is to perform zero-padding and obtain all graphs with $N_{\text{max}}$ ...
To train the GNN on mini-batches of graphs with a variable number of nodes, we consider the disjoint union of the graphs in each mini-batch and train the GNN on the combined Laplacians and graph signals. See the supplementary material for an illustration.
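A minimal sketch of this mini-batching scheme: stacking per-graph Laplacians into a block-diagonal matrix and concatenating node features, so message passing never mixes nodes from different graphs. The random sparse matrices below are placeholders for real graph Laplacians.

```python
import numpy as np
import scipy.sparse as sp

sizes = (5, 8, 3)  # numbers of nodes in the three graphs of the batch
laplacians = [sp.random(n, n, density=0.3, format="csr") for n in sizes]
signals = [np.random.randn(n, 16) for n in sizes]  # 16 features per node

# Disjoint union: block-diagonal Laplacian + vertically stacked signals.
L_batch = sp.block_diag(laplacians, format="csr")  # shape (16, 16)
X_batch = np.vstack(signals)                       # shape (16, 16)
print(L_batch.shape, X_batch.shape)
```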
However, this solution is particularly inefficient in terms of memory cost, especially when there are many graphs with less than $N_{\text{max}}$ vertices. A more efficient solution is to build the disjoint union of the graphs in each mini-batch and trai...
However, this solution is particularly inefficient in terms of memory cost, especially when there are many graphs with less than $N_{\text{max}}$ vertices. A more efficient solution is to build the disjoint union of the graphs in each mini-batch and trai...
C
For generating examples with varying confidence, i.e., examples on which the predictions of the individual decision trees diverge, we select a subset of $n_{\text{sub}}$ decision trees $\operatorname{RF}_{\text{sub}}\subseteq\operatorname{RF}$ ...
For generating examples with varying confidence, i.e., examples on which the predictions of the individual decision trees diverge, we select a subset of $n_{\text{sub}}$ decision trees $\operatorname{RF}_{\text{sub}}\subseteq\operatorname{RF}$ ...
In the next step, we extend the method to generate data from random forests. Random forests consist of $n_{T}$ decision trees $\operatorname{RF}=\{T_{1},\dots,T_{n_{T}}\}$ ...
The following analyses are shown exemplarily on the Soybean dataset. This dataset has 35 features and 19 classes. First, we analyze the generated data with a fixed number of decision trees, i.e., the number of sampled decision trees in $RF_{\text{sub}}$ ...
All decision trees in $\operatorname{RF}_{\text{sub}}$ are processed in random order to generate a data sample. For each decision tree, the presented method modifies the data sample based on the target class.
D
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt...
for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al....
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
B
Sparse attention mechanisms and approximations have been proposed to address this issue and improve the efficiency of transformers for longer sequences. We refer to the work of Tay et al. (2022) which provides an overview of various transformer-based architectures that focus on efficiency, reduced memory-footprint and ...
The user specifies the loss $\mathcal{L}$ as a computation graph and the gradient $\nabla_{\mathbf{W}}\mathcal{L}$ is calculated automatically by the framework using the backpropagation algorithm (Rumelhart et al., 1986).
Let $f(w)$ be some non-differentiable operation within the computation graph of $\mathcal{L}$ such that the partial derivative $\partial\mathcal{L}/\partial w$ is not defined. The STE then approximates the gradient $\partial\mathcal{L}/\partial w$ ...
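A minimal straight-through estimator sketch in PyTorch (the framework choice is an assumption; the text does not name one): the forward pass applies the piecewise-constant quantizer, while the backward pass treats it as the identity so gradients flow through unchanged.

```python
import torch

def ste_round(x):
    # Forward: round(x). Backward: identity, since the detached term
    # contributes no gradient.
    return x + (torch.round(x) - x).detach()

w = torch.tensor([0.2, 0.7, 1.4], requires_grad=True)
loss = ste_round(w).sum()
loss.backward()
print(w.grad)  # tensor([1., 1., 1.]): identity gradient despite rounding
```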
In the forward pass, the solid red line is followed which passes the two piecewise constant functions Q𝑄Qitalic_Q and signsign\operatorname*{sign}roman_sign whose gradient is zero almost everywhere (red boxes). During backpropagation, the dashed green line is followed which avoids these piecewise constant functions an...
Many recently developed methods for resource efficiency in DNNs incorporate components in the computation graph of the loss function ℒℒ\mathcal{L}caligraphic_L that are non-differentiable or whose gradient is zero almost everywhere, such as piecewise constant quantizers.
D
Let $M$ be an $n$-dimensional metric manifold. Then, note that we have $\mathrm{FillRad}_{n}(M,G,[M])=\mathrm{FillRad}(M)$ ...
A priori, one can define the generalized filling radius for any metric space $X$. However, we believe that the context of ANR metric spaces is the right level of generalization for our purposes because of the following proposition analogous to Proposition 1.
The goal of this section is to provide some partial results regarding the structure of $\mathrm{barc}^{\mathrm{VR}}_{\ast}(\cdot)$ for non-smooth spaces; see Figure 12. In ord...
Let $(X,E)$ be a metric pair where $X$ is a compact ANR metric space. For any integer $k\geq 1$, any abelian group $G$, and any $\omega\in\mathrm{H}_{k}(X;G)$ ...
In this section, we recall the notions of spread and filling radius, as well as their relationship. In particular, we prove a number of statements about the filling radius of a closed connected manifold. Moreover, we consider a generalization of the filling radius and also define a strong notion of filling radius whic...
A
Linear DR methods, such as PCA, are easier to understand and to explain, since the remaining axes are linear combinations of the original dimensions, which establishes a direct relationship between the low-dimensional and the high-dimensional data set. When the specific constraints of being simple and easily explainabl...
Although non-linear DR methods have also been around for quite some time (e.g., Sammon Mapping [4]), they have gained popularity in the past few years—due to increasingly better performance—with techniques such as Isomap [5], LLE [6], or LAMP [7]; a few comparative review papers on general DR exist already, see the sur...
Other than the ones discussed so far, some interactive tools have been designed with either specific DR methods in mind, such as SIRIUS [49], and FocusChanger [50], or for specific domains, such as Cytosplore [11]. t-SNE can also be used to explore and judge different clustering partitions of the same data set, as in ...
Although our main design goal was to support the investigation of t-SNE projections, most of our views and interaction techniques are not strictly confined to the t-SNE algorithm. For example, the Dimension Correlation view could, in theory, be applied to any projection generated by any other algorithm. Its motivation,...
Linear DR methods, such as PCA, are easier to understand and to explain, since the remaining axes are linear combinations of the original dimensions, which establishes a direct relationship between the low-dimensional and the high-dimensional data set. When the specific constraints of being simple and easily explainabl...
A
In this context, complexity is not unusual in Nature: a plethora of complex systems, processes and behaviors have shown a surprising capability to efficiently address intricate optimization tasks. The clearest example can be found in the different animal species, which have developed over generations very special...
Disregarding their source of inspiration, there is clear evidence of the increasing popularity and notoriety gained by nature- and bio-inspired optimization algorithms in the last two decades. This momentum finds its reason in the capability of these algorithms to learn, adapt, and provide good solutions to complex pro...
In this context, complexity is not unusual in Nature: a plethora of complex systems, processes and behaviors have shown a surprising capability to efficiently address intricate optimization tasks. The clearest example can be found in the different animal species, which have developed over generations very special...
Going deeper into the creation of Machine Learning (ML) and Deep Learning (DL) models: Although most algorithms have been developed in recent years, the impact of EAs, a classical family of algorithms, has risen in the last few years. Their use in ML has been widely studied both for the design of models [615] and also...
In the last update of this report, which is herein released 4 years after its original version, we note that there has been an evolution within the nature and bio-inspired optimization field. There is an excessive use of the biological approach as opposed to the real problem-solving approach to tackle real and complex...
A
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is a representative kind of clustering method. Graph-based clustering methods can capture manifold information and are therefore applicable to non-Euclidean data, which $k$-means cannot handle. Therefore,...
Network embedding is a fundamental task for graph-type data arising in recommendation systems, social networks, etc. The goal is to map the nodes of a given graph into latent features (namely an embedding) such that the learned embedding can be utilized for node classification, node clustering, and link prediction.
Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node while the latter learns to distinguish whether an edge exists between two nodes directly. In recent y...
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is a representative kind of clustering method. Graph-based clustering methods can capture manifold information and are therefore applicable to non-Euclidean data, which $k$-means cannot handle. Therefore,...
To apply graph convolution to unsupervised learning, GAE was proposed [20]. GAE first transforms each node into a latent representation (i.e., embedding) via a GCN, and then aims to reconstruct some part of the input. The GAEs proposed in [20, 29, 22] reconstruct the adjacency via a decoder, while the GAEs developed in [21...
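A toy forward pass in the spirit of GAE [20], assuming a one-layer GCN encoder and an inner-product decoder; the weights are random and untrained, so this only illustrates the data flow, not a trained model.

```python
import numpy as np

def gae_forward(A, X, W):
    """One-layer GCN encoder Z = ReLU(A_norm X W), inner-product decoder
    A_rec = sigmoid(Z Z^T) reconstructing the adjacency."""
    A_hat = A + np.eye(len(A))                  # add self-loops
    d = A_hat.sum(1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))    # symmetric normalization
    Z = np.maximum(A_norm @ X @ W, 0)           # node embeddings
    A_rec = 1 / (1 + np.exp(-Z @ Z.T))          # reconstructed adjacency
    return Z, A_rec

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
X = np.eye(3)                                   # featureless: identity input
Z, A_rec = gae_forward(A, X, np.random.randn(3, 2))
print(A_rec.round(2))
```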
A
A range of studies analysed network traces for ingress filtering using IP address characteristics (Moore et al., 2006; Barford et al., 2006; Chen et al., 2008; Czyz et al., 2014; Dainotti et al., 2013), or by inspecting on-path network equipment reaction to unwanted traffic, (Yao et al., 2014). In addition to a limited...
Domain-scan and IPv4-scan both show that the number of spoofable ASes grows with the overall number of the ASes in the Internet, see Figure 1. Furthermore, there is a correlation between fraction of scanned domains and ASes. Essentially the more domains are scanned, the more ASes are covered, and more spoofable ASes a...
Identifying DNS resolvers. The main challenge here is to locate the DNS resolvers within a domain/network and to trigger a DNS request to our Name servers. We use the email service in the target networks (retrieved via an MX-type request for the target domain) to find the DNS resolvers. We send an email to the target domain's...
SMap first collects the dataset of services. Our dataset is constructed as follows: we periodically download the entire IPv4 scan from the Sonar Project (son, [n. d.]). We use the scan results on UDP port 53 as input for Name servers and DNS resolvers, scan data on TCP port 25 for Mail servers and scan results on TCP port ...
The SMap architecture consists of two parts: a dataset scan and an ingress filtering scan. The dataset scan collects the popular services using two methods: a domain-based scan and an IPv4-based scan. In the IPv4 scan, to locate the services, SMap probes every IP, checking for open ports that correspond to the services that we need; for i...
D
While context did introduce more parameters to the model (7,575 parameters without context versus 14,315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer ...
One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regio...
The estimation of context by learned temporal patterns should be most effective when the environment results in recurring or cyclical patterns, such as in cyclical variations of temperature and humidity and regular patterns of human behavior generating interferents. In such cases, the recurrent pathway can identify use...
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design...
This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ...
B
Let $t^{+}_{i}\in\mathcal{T}^{+}$, and let $q_{1}$ be a poi...
Not shown is the property that the neighbour of $q_{2}$ is a point in $P[1,\mathrm{st}(i)-5k^{2}]$.
Case (ii): $p\in P[st^{+}(i-1)-5k^{2}+1,\,\mathrm{st}(i)-5k^{2}]$ ...
$A^{(3)}[i,q_{1},q_{2}]:=$ the length of the shortest path from $q_{1}$ to $q_{2}$ that visits all points in $P[1,\mathrm{st}(i)]$, such that the neighbour of $q_{2}$ is a point in $P[1,\mathrm{st}(i)-5k^{2}]$.
$A^{(3)}[i,q_{1},q_{2}]:=$ the length of the shortest path from $q_{1}$ to $q_{2}$ that visits all points in $P[1,\mathrm{st}(i)]$, such that the neighbour of $q_{2}$ is a point in $P[1,\mathrm{st}(i)-5k^{2}]$.
C
Let $S$ be a (completely) self-similar semigroup and let $T$ be a finite or free semigroup. Then $S\star T$ is (completely) self-similar. If furthermore $S$ is a (complete) automaton semigroup, then so is $S\star T$.
While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there ...
from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups (footnote: Note that the c...)
By Corollaries 10 and 11, we have to look into idempotent-free automaton semigroups without length functions in order to find a pair of self-similar (or automaton) semigroups not satisfying the hypothesis of Theorem 6 (or 8), which would be required in order to either relax the hypothesis even further (possibly with a ...
The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing ...
C
It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model were actually visually grounded, then we would expect it to improve performance on both train and test sets. We do not observe such behavior in ...
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende...
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea...
While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented...
Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization Grand and Belinkov (2019); Ramakrishnan et al. (2018) or to re-scale the loss based on the difficulty o...
C
We found that two LDA topics contained vocabulary corresponding with the OPP-115 category First Party Collection/Use, one dealing with purpose and information type collected and the other dealing with collection method. Two LDA topics corresponded with the OPP-115 category Third Party Sharing and Collection, one detai...
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ...
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)...
We found that two LDA topics contained vocabulary corresponding with the OPP-115 category First Party Collection/Use, one dealing with purpose and information type collected and the other dealing with collection method. Two LDA topics corresponded with the OPP-115 category Third Party Sharing and Collection, one detai...
A
Limitations. Efficiency and scalability were the major concerns raised by all the experts. The inherent computational burden of stacking multiple models still remains, as such complex ensemble learning methods need sufficient resources. Also, the use of VA in between levels makes this even worse. We believe that, with ...
Considering all that, E3 noted that our system could be useful in solving competition problems, e.g., on Kaggle, and for her team to run tests before applying specific models to their huge data sets. Progressive VA workflows [53] could also be useful for improving the scalability of our approach for larger data sets.
Workflow. E1, E2, and E3 agreed that the workflow of StackGenVis made sense. They all suggested that data wrangling could happen before the algorithms’ exploration, but also that it is usual to first train a few algorithms and then, based on their predictions, wrangle the data.
Thus, it is considered an iterative process: the expert might start with the algorithms’ exploration and move to the data wrangling, or vice versa. “The former approach is even more suitable for your VA system, because you use the accuracy of the base ML models as feedback/guidance to the expert in order to understand ...
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model leads us to stable design goals and analytical tasks that were realized by StackGenVis. With the c...
A
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]),\,p(v,[323]),\,p(v,[313]),\,p(v,[003]))$ ...
$p(v,[013])=p(v,[313])=p(v,[113])=1$. Similarly, when $f=[112]$,
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
$\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}$.
C
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy: RQ1. Since the parameter initialization lear...
The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation. Although in the early training stage the performance improves, benefiting from the pre-trained general language model, if the languag...
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
A
A conceptual frame structure is designed which contains two types of time slots. One is the exchanging slot (e-slot) and the other is the tracking slot (t-slot). Let us first focus on the e-slot. It is assumed that UAVs exchange MSI every $T$ t-slots, i.e., in an e-slot, to save resources for payload transmissi...
The GP-based MSI prediction is proposed to solve the problem in [31]. Specifically, the r-UAV/t-UAV’s historical MSI is first exchanged with the t-UAV/r-UAV over a lower-frequency band and then the t-UAV will predict the future MSI of the r-UAV based on the historical MSI by using the GP-based MSI prediction model.
A conceptual frame structure is designed which contains two types of time slots. One is the exchanging slot (e-slot) and the other is the tracking slot (t-slot). Let us first focus on the e-slot. It is assumed that UAVs exchange MSI every $T$ t-slots, i.e., in an e-slot, to save resources for payload transmissi...
Moreover, the data block of MSI is set as $B_{\text{MSI}}=n_{\text{MSI}}\times T\times B_{\text{MSI}}$ ...
The tracking error of beam angles has a negative influence on the beam gain obtained by CCA. The proposed tracking error bounding algorithm uses the position/attitude prediction error of the GP-based MSI prediction to obtain the beam angle tracking error, wherein the geometry relationship between UAVs and the Monte-Ca...
D
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument...
This will be bootstrapped to the multi-color case in later sections. Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on the left must be connected, via the unique edge relation, to every node on the ri...
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument...
To conclude this section, we stress that although the 1111-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
The requirement that $\bar{M}|\bar{N}$ is extra big enough ensures that we have enough edges to perform the edge swapping. This completes the proof for case 2 when the assumptions (a1) and (a2) hold.
A
$\leq 1/2\cdot(1-\gamma)^{-1}\cdot\bar{D}^{2}\cdot T^{-1}+C_{*}\cdot(1-\gamma)^{-1}\cdot\alpha^{-1}$ ...
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe...
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et...
We first introduce the assumptions for our analysis. In §4.1, we establish the global optimality and convergence of the PDE solution $\rho_{t}$ in (3.4). In §4.2, we further invoke Proposition 3.1 to establish the global optimality and convergence of ...
B
Considering that the layer stacks of the 6-layer Transformer are not that deep and vanilla RNNs can play a similar role as LSTMs, is it possible to train the model with a depth-wise RNN rather than the depth-wise LSTM? We first study using different approaches (Transformer, the depth-wise RNN and the depth-wise LSTM) f...
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transform...
Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t...
Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de...
Considering that the layer stacks of the 6-layer Transformer are not that deep and vanilla RNNs can play a similar role as LSTMs, is it possible to train the model with a depth-wise RNN rather than the depth-wise LSTM? We first study using different approaches (Transformer, the depth-wise RNN and the depth-wise LSTM) f...
A
Consider the logical product space $Z$ of the family $(X_{i})_{i\in I}$, thus using the signature $\upsigma=\{\upepsilon_{i}\mid i\in I\}$ ...
Consider the logical product space $Z$ of the family $(X_{i})_{i\in I}$, thus using the signature $\upsigma=\{\upepsilon_{i}\mid i\in I\}$ ...
$\psi_{\supseteq C_{n}}\triangleq\exists x_{0},\ldots,x_{n-1}.\ \bigwedge_{i<j}\neg(x_{i}=x_{j})\wedge\cdots$ ...
$\psi_{\supseteq P_{n}}\triangleq\exists x_{0},\ldots,x_{n-1}.\ \bigwedge_{i<j}\neg(x_{i}=x_{j})\wedge\bigwedge_{0\leq i<n-1}E(x_{i},x_{i+1})$ ...
$\psi_{i}\triangleq\exists x.\exists y.\ \upepsilon_{i}(x)\wedge\upepsilon_{i}(y)\wedge\neg(x=y)$ for $i\in I$, and $\theta_{i,j}\triangleq\exists x.\exists y$ ...
D
Figure 1: Method Comparisons. (a) Previous learning methods, (b) Our proposed approach. We aim to transfer the traditional calibration objective into a learning-friendly representation. Previous methods roughly feed the whole distorted image into their learning models and directly estimate the implicit and heterogeneo...
1. The proposed ordinal distortion is a learning-friendly representation for neural networks, which is explicit and homogeneous compared with the implicit and heterogeneous distortion parameters. Thus, our learning model gains sufficient distortion perception of features and shows faster convergence. Moreover, this re...
The proposed learning representation offers three unique advantages. First, the ordinal distortion is directly perceivable from a distorted image, and it solves a more straightforward estimation problem than the implicit metric regression. As we can observe, the farther the pixel is away from the principal point, the l...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimate...
B
First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being $B/8$. If $B/8\geq 128$, we will use the gradient accumulation [28] with the batch size being 128. ...
We can observe that for almost all batch sizes, the methods that adopt normalized gradients, including LARS, CLARS, and SNGM, achieve better performance than others. Compared to LARS and CLARS, SNGM achieves better test accuracy for different batch sizes.
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy, the default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
A
Given a newly arriving scenario $A$, we can set $(H_{A},\pi^{A})\leftarrow$ GreedyCluster$(A,R,-R)$, ...
We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a...
There is a polynomial-time 3-approximation for homogeneous RW-MatSup. There is a 3-approximation algorithm for RW-MuSup, with runtime $\operatorname{poly}(n,m,\Lambda)$.
5-approximation for homogeneous 2S-MuSup-Poly, with $|\mathcal{S}|\leq 2^{m}$ and runtime $\operatorname{poly}(n,m,\Lambda)$.
In this section we tackle the simplest problem setting, designing an efficiently-generalizable 3-approximation algorithm for homogeneous 2S-Sup-Poly. To begin, we are given a list of scenarios $Q$ together with their probabilities $p_{A}$, ...
B
This together with the convergence of $\{\|X(k,\omega)-\mathbf{1}_{N}\otimes z^{*}(\omega)\|,\ k\geq 0\}$ ...
From the definition of $\Gamma_{2}$, we know that $\Gamma_{2}\subseteq\Gamma_{1}$. Then, similar to the proof of Theorem 2 in [25], we get ...
$\Gamma_{1}=\big\{\{\mathcal{G}(k),\ k\geq 0\}\ \big|\ E\big[\mathcal{A}_{\mathcal{G}(k)}\big|\mathcal{F}(k-1)\big]\succeq O_{N\times N}$ a.s., $\mathcal{G}(k|k-1)$ is balanced a.s., $k\geq 0\big\}$.
First, we suppose $\{\mathcal{G}(k),\ k\geq 0\}$ is a Markov chain with countable state space. For this case, Condition (b.1) of Theorem III.1 becomes more intuitive and Condition (b.2) is weakened.
The proof of Theorem III.2 is similar to that of Theorem III.1 and is omitted here. For details, see Appendix A. The only difference is that by the independence between $\mathcal{L}_{\mathcal{G}(i)}$ and $\mathcal{L}_{\mathcal{G}(j)}$ ...
C
Additionally, differing from traditional principles that directly confine the values in microdata, we propose a $\delta$-probability principle to control random output tables so as to limit the probability of any QI value being used to re-identify a target person. For instance, the random output tables in Fig...
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, so that the equivalence groups only preserv...
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i...
Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records to the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ...
For instance, suppose that we add another QI attribute of gender, as shown in Figure 4. The mutual cover strategy first divides the records into groups in which the records in the same group cover for each other by perturbing their QI values. Then, the mutual cover strategy calculates a random output table on each QI a...
B
Table 2: PointRend’s step-by-step performance on our own validation set (split from the original training set). “MP Train” means more-points training and “MP Test” means more-points testing. “P6 Feature” indicates adding P6 to the default P2–P5 levels of FPN for both the coarse prediction head and the fine-grained point head. “...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
As shown in Figure 2, we compare HTC, SOLOv2 and PointRend by visualizing their predictions. It can be seen that PointRend generates much finer and smoother segmentation boundaries than HTC and SOLOv2; it also handles overlapped instances gracefully (see top-left corner in Figure 2). Meanwhile, PointRend succeeds in disti...
C
$I(f)<1,\quad\text{and}\quad H(|\hat{f}|^{2})>\frac{n}{n+1}\log n.$
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$ ...
C
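Each block of five passages above is one dataset row: a context field, four candidate fields (A, B, C, D), and a one-letter label naming one of the candidates. Below is a minimal sketch of loading and inspecting the rows with the Hugging Face `datasets` library; the repository ID is a hypothetical placeholder, since the preview does not show one.

```python
# Minimal sketch: load the corpus and inspect one row.
# "user/dataset-name" is a hypothetical placeholder for the actual hub ID.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")

row = ds[0]
print(row["context"][:200])          # the context passage (truncated)
for option in ("A", "B", "C", "D"):  # the four candidate passages
    print(option, "->", row[option][:80])
print("label:", row["label"])        # one of "A", "B", "C", "D"
```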