Dataset Viewer
context: string (lengths 250 – 7.19k)
A: string (lengths 250 – 4.85k)
B: string (lengths 250 – 5.1k)
C: string (lengths 250 – 8.2k)
D: string (lengths 250 – 5.11k)
label: string (4 classes)
The two ratios of derivatives are obtained by setting $R_n^m(x) = 0$ in (29) and (30), then dividing both equations through ${R_n^m}'(x)$ …
computed from $R_n^m(x)/{R_n^m}'(x) = f(x)/f'(x)$ …
to the weight such that a Gauss–Legendre integration for moments $x^{D+m-1}$ is engaged and the wiggly remainder of $R_n^m$ …
Since $R_n^m(x)$ is a polynomial of order $n$, the $(n+1)$st derivatives
Newton’s method with third-order convergence is implemented for the Zernike polynomials $R_n^m$ by computing the ratios
D
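The third-order Newton scheme mentioned above is, in essence, Halley's method driven by the two ratios $R/R'$ and $R''/R'$. Below is a minimal sketch of such an iteration for the radial polynomial $R_n^m$ (our own illustration under that reading, not the paper's code; the helper names are ours, and $n - m$ is assumed even and nonnegative):

from math import factorial
import numpy as np

def zernike_radial_coeffs(n, m):
    """Coefficients of R_n^m(x), in descending powers of x."""
    coeffs = np.zeros(n + 1)
    for k in range((n - m) // 2 + 1):
        c = (-1) ** k * factorial(n - k) / (
            factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        coeffs[2 * k] = c          # index 2k holds the x^(n-2k) term
    return coeffs

def halley_root(n, m, x0, tol=1e-14, max_iter=50):
    p = np.polynomial.Polynomial(zernike_radial_coeffs(n, m)[::-1])
    dp, ddp = p.deriv(), p.deriv(2)
    x = x0
    for _ in range(max_iter):
        r1 = p(x) / dp(x)                  # R / R'
        r2 = ddp(x) / dp(x)                # R'' / R'
        dx = r1 / (1.0 - 0.5 * r1 * r2)    # cubically convergent step
        x -= dx
        if abs(dx) < tol:
            break
    return x

print(halley_root(6, 0, 0.35))             # a root of R_6^0 near the guess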
… $T_i := \{ t_{i(i-1)}(\omega^{\ell}) \mid \ell = 0, \ldots, f-1 \}$ for $i = 2, \ldots, d$.
The first step of the algorithm is the one-off computation of $T_2$ from the LGO standard generators of $\mathrm{SL}(d,q)$. The length and memory requirement of an MSLP for this step are as follows.
The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
A total of four MSLP instructions (group multiplications or inversions) are required, and only one memory slot is needed in addition to the two memory slots used to permanently store the input elements $g, h$. In other words, there exists an MSLP $S$ with memory quota $b = 3$ …
This adds only one extra MSLP instruction, in order to form and store the element $xv^{-1}$ needed in the conjugate on the right-hand side of (2) (this element can later be overwritten and so does not add to the overall maximum memory quo…
B
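To make the MSLP bookkeeping concrete, here is a toy evaluator for memory straight-line programs (our own illustration; it does not reproduce the four-instruction program $S$ from the text). The inputs $g, h$ occupy slots 0 and 1 permanently, and a single working slot gives memory quota $b = 3$:

import numpy as np

def evaluate_mslp(program, memory):
    """Run MSLP instructions (multiplications/inversions) over memory slots."""
    for instr in program:
        if instr[0] == "mul":       # ("mul", i, j, dst): m[dst] = m[i] * m[j]
            _, i, j, dst = instr
            memory[dst] = memory[i] @ memory[j]
        elif instr[0] == "inv":     # ("inv", i, dst): m[dst] = m[i]^(-1)
            _, i, dst = instr
            memory[dst] = np.linalg.inv(memory[i])
    return memory

g = np.array([[1.0, 1.0], [0.0, 1.0]])     # example inputs in SL(2, R)
h = np.array([[1.0, 0.0], [1.0, 1.0]])

# Conjugation g^(-1) h g using only the working slot 2, i.e. quota b = 3.
program = [("inv", 0, 2), ("mul", 2, 1, 2), ("mul", 2, 0, 2)]
print(evaluate_mslp(program, [g, h, np.eye(2)])[2])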
where $\Omega \subset \mathbb{R}^d$, with $d = 2$ or $3$ for simplicity, is an open bounded domain with polyhedral boundary $\partial\Omega$, and the symmetric tensor $\mathcal{A} \in [L^{\infty}(\Omega)]^{d \times d}_{\mathrm{sym}}$ …
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ...
In [MR2718268] it is shown that the number of very large eigenvalues is related to the number of connected sub-regions of $\bar{\tau} \cup \bar{\tau}'$ with large coefficien…
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85…
D
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its authors, as it contains complicated subroutines for handling many subcases.
The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and floating-point issues in both programs. Our implementations of Alg-K and Alg-CM differ logically in their handling of degenerate cases.
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$’s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code length for this step is 1:7 between Alg-A and Alg-CM.
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
B
For analyzing the employed features, we rank them by importance using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of new…
CrowdWisdom: Similar to [18], the core idea is to leverage the public’s common sense for rumor detection: if more people deny or doubt the truth of an event, the event is more likely to be a rumor. For this purpose, [18] uses an extensive list of bipolar sentiments with a set of combinational rules. In…
It has to be noted here that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable only when the user gains more knowledge about the event.
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha…
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We trade this off by debunking at the single-tweet le…
B
The convergence of the direction of gradient descent updates to the maximum $L_2$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile to continue optimizing long after we have zero training …
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set, for which $\exists\,\mathbf{x} \in \mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}} < 0$ …
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a…
We should not rely on plateauing of the training loss, or on the loss (logistic or exp or cross-entropy) evaluated on validation data, as measures to decide when to stop. Instead, we should look at the 0–1 error on the validation dataset. We might improve the validation and test errors even when the decrease …
D
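The stopping-criterion point above is easy to reproduce empirically. The following self-contained sketch (ours, with synthetic separable data) shows the logistic training loss still shrinking by orders of magnitude while the direction of $w$ barely moves:

import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)) + 2.0,
               rng.normal(size=(200, 2)) - 2.0])
y = np.hstack([np.ones(200), -np.ones(200)])    # linearly separable

w, lr = np.zeros(2), 0.1
for t in range(1, 100001):
    margins = y * (X @ w)
    weights = np.exp(-np.logaddexp(0.0, margins))       # sigmoid(-margins), stable
    w -= lr * -(X * (y * weights)[:, None]).mean(axis=0)
    if t in (10, 100, 1000, 10000, 100000):
        loss = np.logaddexp(0.0, -margins).mean()       # mean logistic loss
        print(t, f"loss={loss:.2e}", "dir=", np.round(w / np.linalg.norm(w), 4))
# The loss keeps dropping long after the 0-1 training error is zero, while
# the printed direction drifts only slowly toward the max-margin separator.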
As we can see in Figure 9, the best result on average over 48 hours is BestSet. The second is All features. Apart from those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we …
For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even…
As shown in Table 11, CreditScore is the best feature in general. Figure 10 shows the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, significantly for the first 8-10 hours. The performance of all-but-CreditScore fluctuates a bit afte…
The text feature set contains 16 features in total. The feature ranking is shown in Table 7. The best one is NumOfChar, the average number of different characters in tweets. PolarityScores was the best feature when we tested the single-tweet model, but its performance in the time-series model is not ideal. It is true …
As we can see in Figure 9, the best result on average over 48 hours is BestSet. The second is All features. Apart from those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we …
C
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res…
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall…
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res…
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event, which is driven by a great variety of factors. We address the two major factors that are assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we…
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during, and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type …
D
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular.
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018], and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular.
The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015].
with Bernoulli and contextual linear Gaussian reward functions [Kaufmann et al., 2012; Garivier and Cappé, 2011; Korda et al., 2013; Agrawal and Goyal, 2013b], as well as for context-dependent binary rewards modeled with the logistic reward function Chapelle and Li [2011]; Scott [2015] —Appendix A.3.
C
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level, and insulin units varies across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
Insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes occur earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning.
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level, and insulin units varies across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17), at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t…
Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
D
In this work, we adopted KLD as an objective function and produced fixation density maps as output from our proposed network. This training setup is particularly sensitive to false negative predictions and is thus the appropriate choice for applications aimed at salient target detection Bylinskii et al. (2018). Defining …
A prerequisite for the successful application of deep learning techniques is a wealth of annotated data. Fortunately, the growing interest in developing and evaluating fixation models has led to the release of large-scale eye tracking datasets such as MIT1003 Judd et al. (2009), CAT2000 Borji and Itti (2015), DUT-OMRO…
A quantitative comparison of results on independent test datasets was carried out to characterize how well our proposed network generalizes to unseen images. Here, we were mainly interested in estimating human eye movements and regarded mouse tracking measurements merely as a substitute for attention. The final outcome...
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba...
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met...
B
For example, the path decomposition $(\{u,w,x\}, \{u,v,x\}, \{v,y,z\})$ for the graph $H$ can be represented as a pd-marking scheme as illustrated in Figure 3 (for…
The locality number is rather new and we shall discuss it in more detail. A word is $k$-local if there exists an order of its symbols such that, if we mark the symbols in the respective order (which is called a marking sequence), at each stage there are at most $k$ contiguous blocks of marked symbols …
We use $G_{\alpha}$ as a unique graph representation for words, and whenever we talk about a path decomposition for $\alpha$, we actually refer to a path decomposition of $G_{\alpha}$ …
Both the locality number of a word and the pathwidth of a graph are defined via markings. In order to avoid confusion, we therefore use different terminology to distinguish between these two concepts (see also the terminology defined in Section 2.2): the markings for words are called marking sequences, while the marking…
In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way in which a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into grap…
C
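Since the marking-sequence definition above is compact, a brute-force reference implementation helps fix ideas. This sketch (ours; exponential in the alphabet size and purely illustrative) minimises, over all marking sequences, the maximum number of contiguous marked blocks:

from itertools import permutations

def locality_number(word):
    best = len(word)
    for order in permutations(set(word)):     # candidate marking sequences
        marked, worst = set(), 0
        for symbol in order:
            marked.add(symbol)
            blocks, prev = 0, False           # count contiguous marked blocks
            for c in word:
                cur = c in marked
                if cur and not prev:
                    blocks += 1
                prev = cur
            worst = max(worst, blocks)
        best = min(best, worst)
    return best

print(locality_number("abab"))   # 2: every order creates two marked blocks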
A graph was then constructed from the retinal vascular network where the nodes are defined as the vessel branches and each edge is assigned a cost that evaluates whether the two branches should have the same label. The CNN classification was propagated through the minimum spanning tree of the graph.
Lekadir et al. [224] used a patch-based four-layer CNN for characterization of plaque composition in carotid ultrasound images. Experiments done by the authors showed that the model achieved better pixel-based accuracy than single-scale and multi-scale SVMs.
They applied the mean of a series of Gabor filters with varying frequencies and sigma values to the output of the network to determine whether a pixel represents a vessel or not. Besides finding that the optimal filters vary between channels, the authors also state the ‘need’ to enforce that the networks align with hum…
In [175] the authors used a CNN to learn the features and a PCA-based nearest-neighbor search to estimate the local structure distribution. Besides demonstrating good results, they argue that it is important for the CNN to incorporate information regarding the tree structure in terms of accuracy.
Amongst their experiments they found that rotational and scaling data augmentations did not help increase accuracy, attributing this to interpolation altering pixel intensities, which is problematic due to the sensitivity of CNNs to pixel distribution patterns.
D
While SimPLe is able to learn more quickly than model-free methods, it does have limitations. First, the final scores are on the whole lower than the best state-of-the-art model-free methods. This can be improved with better dynamics models and, while generally common with model-based RL algorithms, suggests an import...
Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a...
Oh et al. (2015) and Chiappa et al. (2017) show that learning predictive models of Atari 2600 environments is possible using appropriately chosen deep learning architectures. Impressively, in some cases the predictions maintain low $L_2$ error over timespans…
Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster...
In this paper our focus was to demonstrate the capability and generality of SimPLe only across a suite of Atari games, however, we believe similar methods can be applied to other environments and tasks which is one of our main directions for future work. As a long-term challenge, we believe that model-based reinforcem...
D
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of a S2I is whether it consists of trainable para...
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals.
A high-level overview of these combined methods is shown in Fig. 1. Although we choose the EEG epileptic seizure recognition dataset from the University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could be generalized to any kind of signal classification problem.
Architectures of all $b_d$ remained the same, except for the number of output nodes of the last linear layer, which was set to five to correspond to the number of classes of $D$. An example of the respective outputs of some of the $m$…
Figure 1: High-level overview of a feed-forward pass of the combined methods. $x_i$ is the input, $m$ is the Signal2Image module, $b_d$ is the 1D or 2D architecture ‘base …
B
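As an illustration of the S2I definition above, here is a minimal non-trainable module in PyTorch (our own sketch, not one of the paper's modules) that maps a batch of 1D signals to square one-channel images via a spectrogram, ready for a 2D 'base model':

import torch

class SpectrogramS2I(torch.nn.Module):
    """A Signal2Image module without trainable parameters."""
    def __init__(self, n_fft=32, size=64):
        super().__init__()
        self.n_fft, self.size = n_fft, size

    def forward(self, x):                       # x: (batch, samples)
        spec = torch.stft(x, n_fft=self.n_fft,
                          window=torch.hann_window(self.n_fft),
                          return_complex=True).abs()
        img = spec.unsqueeze(1)                 # add a channel dimension
        return torch.nn.functional.interpolate( # square input for 2D models
            img, size=(self.size, self.size), mode="bilinear",
            align_corners=False)

signal = torch.randn(8, 178)                    # e.g. UCI EEG segments
print(SpectrogramS2I()(signal).shape)           # torch.Size([8, 1, 64, 64])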
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ...
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or by an autonomous locomotion mode transition approach, where robots switch modes on their own based on pre-set criteria [8]. However, the execution of supervised con…
In the realm of mobile robotics research, the motion control of terrestrial robots across varied terrains is a complex endeavor. To enhance locomotion efficacy and elevate mobility, hybrid robots have been actively developed in the past decade [1]. These robots astutely choose the most suitable locomotion mode from a s...
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ...
A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measure...
A
Our solution uses an algorithm introduced by Boyar et al. [12] which achieves a competitive ratio of 1.5 using $O(\log n)$ bits of advice. We refer to this algorithm as Reserve-Critical in this paper and describe it briefly. See Figure 2 for an illustration.
Intuitively, Rrc works similarly to Reserve-Critical except that it might not open as many critical bins as suggested by the advice. The algorithm is more “conservative” in the sense that it does not keep two thirds of so many (critical) bins open for critical items that might never arrive. The smaller the value of $\alpha$…
Formally, on the arrival of a critical item, the algorithm places it in a critical bin, opening a new one if necessary. Each arriving tiny item x𝑥xitalic_x is packed in the first critical bin which has enough space, with the restriction that the tiny items do not exceed a fraction 1/3 in these bins. If this fails, the...
bins include two items of weight 1/2 (except possibly the last one), which gives a total weight of 1 for the bin. Critical bins all include a critical item of weight 1. So, if $w_{\ell}$, $w_{s}$…
The algorithm classifies items according to their size. Tiny items have size in the range $(0, 1/3]$, small items in $(1/3, 1/2]$, critical items in $(1/2, 2/3]$, and large items in $(2/3, 1]$. In addition, the algorithm…
D
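The size classification just described translates directly into code (our transcription of the stated thresholds):

def classify(size):
    """Classify a bin-packing item by its size in (0, 1]."""
    assert 0 < size <= 1
    if size <= 1/3:
        return "tiny"
    if size <= 1/2:
        return "small"
    if size <= 2/3:
        return "critical"
    return "large"

print([classify(s) for s in (0.2, 0.4, 0.6, 0.9)])
# ['tiny', 'small', 'critical', 'large']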
In Section 4 the proposed framework is compared to state-of-the-art methods used in a recent early depression detection task. Section 5 goes into details of the main contributions of our approach by analyzing quantitative and qualitative aspects of the proposed framework. Finally, Section 6 summarizes the main conclusi...
A scenario that is gaining increasing interest in the classification of sequential data is the one referred to as “early classification”, in which, the problem is to classify the data stream as early as possible without having a significant loss in terms of accuracy.
The analysis of sequential data is a very active research area that addresses problems where data is processed naturally as sequences or can be better modeled that way, such as sentiment analysis, machine translation, video analytics, speech recognition, and time series processing.
To put the previous points in context, it is important to note that ERD is essentially a problem of analysis of sequential data. That is, unlike traditional supervised learning problems where learning and classification are done on “complete” objects, here classification (or both) must be done on “partial” objects whic...
Note that this algorithm can be massively parallelized since it naturally follows the Big Data programming model MapReduce [Dean & Ghemawat, 2008], giving the framework the capability of effectively processing very large volumes of data. Algorithm 2 shows the training process described earlier. Note that the line…
B
Due to the presence of compression error, naively compressing the communicated vectors in DSGD or DMSGD will damage convergence, especially when the compression ratio is high. The most representative technique designed to tackle this issue is error feedback (Stich et al., 2018; Karimireddy et al., 2019), also called…
Among existing error feedback based sparse communication methods, most are for vanilla DSGD (Aji and Heafield, 2017; Alistarh et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al., 2019). There is one error feedback based sparse communication method for DMSGD, called Deep Gradient Compression (…
The error feedback technique accumulates the compression error in an error residual on each worker and incorporates the residual into the next update. Error feedback based sparse communication methods have been widely adopted by recent communication compression methods and have achieved better performance than quantizatio…
GMC combines error feedback and momentum to achieve sparse communication in distributed learning. But unlike existing sparse communication methods such as DGC, which adopt local momentum, GMC adopts global momentum. To the best of our knowledge, this is the first work to introduce global momentum into sparse commun…
Sparsification methods, which are also called sparse communication methods, select only a few components of the vector for communicating with the server or the other workers. The most widely used sparsification compressor adopted in sparse communication methods is top-$s$, where each worker selects $s$…
B
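A compact sketch of a top-$s$ compressor with error feedback as described above (our own illustration, not GMC itself): each worker adds its residual before compressing and keeps the compression error for the next round.

import numpy as np

def top_s(vec, s):
    """Keep the s largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -s)[-s:]
    out[idx] = vec[idx]
    return out

class ErrorFeedbackWorker:
    def __init__(self, dim, s):
        self.residual = np.zeros(dim)
        self.s = s

    def compress(self, grad):
        corrected = grad + self.residual     # incorporate past error
        sparse = top_s(corrected, self.s)    # the communicated vector
        self.residual = corrected - sparse   # keep the compression error
        return sparse

worker = ErrorFeedbackWorker(dim=10, s=2)
g = np.random.default_rng(0).normal(size=10)
print(worker.compress(g))    # only 2 nonzero entries; the rest is retained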
We then defined SANs, which have minimal structure and, through the use of sparse activation functions, learn to compress data without losing important information. Using Physionet datasets and MNIST, we demonstrated that SANs are able to create high-quality representations with interpretable kernels.
We then defined SANs, which have minimal structure and, through the use of sparse activation functions, learn to compress data without losing important information. Using Physionet datasets and MNIST, we demonstrated that SANs are able to create high-quality representations with interpretable kernels.
During supervised learning the weights of the kernels are frozen and a one-layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs. The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as…
Applying dropout at the activations could correct weights that have overshot, especially when they are initialized with high values. However, the effect of dropout on SANs would generally be negative, since SANs have far fewer weights than DNNs and thus need less regularization.
From the point of view of Sparse Dictionary Learning, SAN kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input, the kernels of SANs are ECG beats) and the sparse activation map as the representation. The fact that SANs are wide…
C
where $\kappa$ is the index which decides the influence of overlap. Since $\bar{D}_i$ must satisfy $\bar{D}_i > 0$…
Coverage is another factor which determines the performance of each UAV. As presented in Fig. 1 (c), the altitude of a UAV plays an important role in coverage adjustment. The higher the altitude, the larger the coverage area of a UAV. A large coverage area means a substantial opportunity to support more users, but a hi…
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR, and coverage size are key factors in the performance of a UAV network [6]. Power control determines the energy consumption and the signal-to-noise ratio (SNR) …
When there are a number of UAVs in the network, it is possible for the coverage areas of different UAVs to overlap. When a UAV overlaps with another, they will not each support all users but will share the mission. The users in the overlaps will be served randomly with equal probability by each UAV. Fig. 2 presents the overlaps b…
In order to support as many users as possible, UAVs are required to enlarge their coverage area, which is equivalent to enlarging the coverage proportion in the mission area. Higher altitude implies a larger coverage area, as shown in Fig. 1 (c). The utility of coverage is denoted as
D
are fired at $t = t_{comp} = 45\,\upmu$s, and the total compression current in the external coils rises over $\sim 20\,\upmu$s to its peak, for
is evaluated along the same chords, producing a reasonable match to the experimental data. For this shot (and simulation), with $V_{comp} = 12\,$kV,
$V_{comp} = 12\,$kV, of around 850 kA, so that the total combined levitation and compression current is around 1 MA at the time of peak compression,
and $\psi_{comp}(\mathbf{r},\,t)$, which pertain to the peak levitation/compression currents, are scaled over time according to the experimentally measured
is shown in figure 20. For this shot (and simulation), $V_{comp} = 12\,$kV and $t_{comp} = 45\,\upmu$…
B
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_A, x_A) = 1_A$ …
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_A, x_A) = 1_A$ …
Intuitively, if an abstract value $x_A$ of $\mathcal{L}_A$ is interpreted as $1$ (i.e., equality) by $h_A$…
$$f_A(u,v) = f_B(u,v) = \begin{cases} 1 & \text{if } u = v \neq \texttt{null}\\ a & \text{if } u \neq \texttt{null},\ v \neq \texttt{null} \text{ and } u \neq v\\ b & \text{if } u = v = \texttt{null}\\ 0 & \text{otherwise.} \end{cases}$$
A
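The case analysis above translates literally into code; in this sketch (ours) Python's None stands for null, and a and b stand for the designated intermediate values:

NULL = None

def f(u, v, a="a", b="b"):
    if u == v and u is not NULL:
        return 1        # equal non-null values
    if u is not NULL and v is not NULL and u != v:
        return a        # distinct non-null values
    if u is NULL and v is NULL:
        return b        # both values missing
    return 0            # exactly one value missing

print(f(3, 3), f(3, 5), f(NULL, NULL), f(3, NULL))   # 1 a b 0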
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on variance and on the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout, because in such an environment the optim…
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein…
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Class…
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and extremely different predictions on the learning trajectory across different episodes b…
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on variance and on the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout, because in such an environment the optim…
B
Creating large 2D and 3D publicly available medical benchmark datasets for semantic image segmentation such as the Medical Segmentation Decathlon (Simpson et al., 2019). Medical imaging datasets are typically much smaller in size than natural image datasets (Jin et al., 2020), and the curation of larger public dataset...
A possible solution to address the paucity of sufficient annotated medical data is the development and use of physics based imaging simulators, the outputs of which can be used to train segmentation models and augment existing segmentation datasets. Several platforms (Marion et al., 2011; Glatard et al., 2013) as well...
Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr...
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic...
Because of the large number of imaging modalities, the significant signal noise present in imaging modalities such as PET and ultrasound, and the limited amount of medical imaging data mainly because of high acquisition cost compounded by legal, ethical, and privacy issues, it is difficult to develop universal solutio...
A
From Fig. 9(b) we notice that the graphs $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ in GRACLUS have additional nodes that are disconnected. As discussed in Sect. V, these are …
From Fig. 9(b) we notice that the graphs $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ in GRACLUS have additional nodes that are disconnected. As discussed in Sect. V, these are …
Fig. 12 shows the result of the NDP coarsening procedure on the 6 types of graphs. The first column shows the subset of nodes of the original graph that are selected ($\mathcal{V}^{+}$, in red) and discarded ($\mathcal{V}^{-}$…
Fig. 9(c) shows that NMF produces graphs that are very dense, as a consequence of the multiplication with the dense soft-assignment matrix to construct the coarsened graph. Finally, Fig. 9(d) shows that NDP produces coarsened graphs that are sparse and preserve the topology of the original graph well.
Fig. 12 shows the result of the NDP coarsening procedure on the 6 types of graphs. The first column shows the subset of nodes of the original graph that are selected ($\mathcal{V}^{+}$, in red) and discarded ($\mathcal{V}^{-}$…
C
SVM: Support vector machine (Chang & Lin, 2011) is a popular classifier that tries to find the best hyperplane that maximizes the margin between the classes. As evaluated by Fernández-Delgado et al. (2014), the best performance is achieved with a radial basis function kernel.
In contrast to neural networks, random forests are very robust to overfitting due to their ensemble of multiple decision trees. Each decision tree is trained on randomly selected features and samples. Random forests have demonstrated remarkable performance in many domains (Fernández-Delgado et al., 2014).
Decision trees learn rules by splitting the data. The rules are easy to interpret and additionally provide an importance score of the features. Random forests (Breiman, 2001) are an ensemble method consisting of multiple decision trees, with each decision tree being trained using a random subset of samples and features...
RF: Random forest (Breiman, 2001) is an ensemble-based method consisting of multiple decision trees. Each decision tree is trained on a different randomly selected subset of features and samples. The classifier follows the same overall setup, i.e., 500 decision trees and a maximum depth of ten.
Random forests are trained with 500 decision trees, a value commonly used in practice (Fernández-Delgado et al., 2014; Olson et al., 2018). The decision trees are constructed up to a maximum depth of ten. For splitting, the Gini impurity is used and $\sqrt{N}$ features …
C
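For reference, the stated configuration maps directly onto scikit-learn (our sketch; the excerpt does not name the authors' toolchain):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

clf = RandomForestClassifier(
    n_estimators=500,      # 500 decision trees
    max_depth=10,          # maximum depth of ten
    criterion="gini",      # Gini impurity for splitting
    max_features="sqrt",   # sqrt(N) features considered per split
    random_state=0,
)
clf.fit(X, y)
print(clf.score(X, y))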
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
Assuming the transition dynamics are known but only the bandit feedback of the received rewards is available, the work of Neu et al. (2010a, b); Zimin and Neu (2013) establishes an $H^{2}\sqrt{|\mathcal{A}|T}/\beta$ …
Our work is closely related to another line of work (Even-Dar et al., 2009; Yu et al., 2009; Neu et al., 2010a, b; Zimin and Neu, 2013; Neu et al., 2012; Rosenberg and Mansour, 2019a, b) on online MDPs with adversarially chosen reward functions, which mostly focuses on the tabular setting.
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
C
They formulate an expected loss with respect to the distribution over the stochastic binary gates. By incorporating an expected $\ell^0$-norm regularizer over the weights, the probability parameters associated with these gates are encouraged to be close …
To enable the use of the reparameterization trick, a continuous relaxation of the binary gates using a modified binary Gumbel-softmax distribution is used (Jang et al., 2017). They show that their approach can be used for structured sparsity by associating the stochastic gates to entire structures such as channels.
Wu et al. (2018a) performed mixed-precision quantization using similar NAS concepts to those used by Liu et al. (2019a) and Cai et al. (2019). They introduce gates for every layer that determine the number of bits used for quantization, and they perform continuous stochastic optimization of probability parameters assoc...
By injecting additive noise to the deterministic weights before rounding, one can compute probabilities of the weights being rounded to specific values in a predefined discrete set. Subsequently, these probabilities are used to differentiably round the weights using the Gumbel-softmax approximation (Jang et al., 2017).
They formulate an expected loss with respect to the distribution over the stochastic binary gates. By incorporating an expected $\ell^0$-norm regularizer over the weights, the probability parameters associated with these gates are encouraged to be close …
A
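A minimal sketch of the relaxation described above (ours): a binary Gumbel-softmax ("binary concrete") gate per channel whose probability parameters receive gradients, together with an expected-$\ell^0$-style penalty on the gate-open probabilities.

import torch

def relaxed_bernoulli_gate(logits, temperature=0.5):
    """Differentiable sample of approximately binary gates in (0, 1)."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    gumbel = torch.log(u) - torch.log1p(-u)     # logistic noise
    return torch.sigmoid((logits + gumbel) / temperature)

logits = torch.zeros(8, requires_grad=True)     # one gate per channel
gates = relaxed_bernoulli_gate(logits)
channels = torch.randn(4, 8)                    # a toy activation batch
masked = channels * gates                       # gated structures
penalty = torch.sigmoid(logits).sum()           # expected number of open gates
(masked.sum() + 0.01 * penalty).backward()
print(gates.detach().round(), logits.grad is not None)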
Take any embedding of $\mathbb{S}^1$ into $\mathbb{R}^4$ and let $\epsilon > 0$ be small. Consider the boundary $C_{\epsilon}$ …
The reader familiar with concepts from applied algebraic topology will have noticed that the definition of strong filling radius of an $n$-dimensional metric manifold coincides with (one half of) the maximal persistence of its associated Vietoris-Rips persistence module. In fact, for each nonnegative integer $k$…
Given a closed connected $n$-dimensional metric manifold $M$ and a field $\mathbb{F}$, we define the strong filling radius $\mathrm{sFillRad}(M;\mathbb{F})$ as half the length of the largest interval in the $n$-t…
In this section, we recall the notions of spread and filling radius, as well as their relationship. In particular, we prove a number of statements about the filling radius of a closed connected manifold. Moreover, we consider a generalization of the filling radius and also define a strong notion of filling radius whic...
By invoking the relationship between the Vietoris-Rips persistent homology and the strong filling radius, one can verify that the strong filling radii of two $n$-dimensional metric manifolds $M$ and $N$ are close if these two manifolds are similar in the Gromov-Hausdorff distance sense.
D
Some attempts to enrich scatterplots with automatically-derived statistical descriptions of patterns [38, 39, 40] have shown that static mappings may be useful in simple scenarios, but the complex relations between low- and high-dimensional space in non-linear projections cannot be well represented.
Other than the ones discussed so far, some interactive tools have been designed with either specific DR methods in mind, such as SIRIUS [49], and FocusChanger [50], or for specific domains, such as Cytosplore [11]. t-SNE can also be used to explore and judge different clustering partitions of the same data set, as in ...
Cytosplore [11] is an example of tools that use t-SNE for visual data exploration within a specific domain: single-cell analysis with mass cytometry data. Apart from showing a t-SNE projection of the data, Cytosplore is also supported by a domain-specific clustering technique which serves as the base for the rest of th...
In such cases, interactive visual interfaces are necessary, as noted by Sacha et al. [15] in their survey on interaction techniques for DR. Interactive solutions for specific domains such as text [19, 20] and images [41, 7] use inherent characteristics of the data in order to explain layouts; however, they are not easi…
Labels   In order to better explain the contribution of t-viSNE, the data sets used in our use cases contain predefined labels, which is not the case in general when using unsupervised learning techniques, such as t-SNE. There is no restriction, however, to having labels when using t-viSNE; one might use the results of...
C
The combining method can be specific to the problem to be solved or, instead, be conceived for a more general family of problems. In fact, combining methods are usually devised to be adaptable to many different solution representations. As mentioned before, the most popular algorithm in this category is GA [98]. Howeve…
The second and third most influential algorithms are GA, a very classic algorithm, and DE, a well-known algorithm whose natural inspiration resides only in the evolution of a population. Both have been used by around 5% of all reviewed nature-inspired algorithms, and they are the most representative approach in the Evo...
Another popular option of creating new solutions relies on stigmergy, namely, an indirect communication and coordination between the different solutions or agents used to create new solutions. This communication is usually done using an intermediate structure, with information obtained from the different solutions, us...
Differential Vector Movement, in which new solutions are produced by a shift or a mutation performed on a previous solution. The newly generated solution could compete against previous ones, or against other solutions in the population, to gain a place and remain therein in subsequent search iterations. This soluti…
Solution creation, in which new solutions are not generated by mutation/movement of a single reference solution, but instead by combining several solutions (so there is not only a single parent solution) or another similar mechanism. Two approaches can be utilized for creating new solutions. The first one is by combinat…
B
In this paper, matrices and vectors are represented by uppercase and lowercase letters, respectively. A graph is represented as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{W})$ and $|\cdot|$ denotes the size of a set. Vectors whose …
In recent years, GCNs have been studied extensively to extend neural networks to graph-type data. How to design a graph convolution operator is a key issue and has attracted considerable attention. Most approaches can be classified into 2 categories, spectral methods [24] and spatial methods [25].
However, the existing methods are limited to graph-type data, while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on graphs, it is reasonable to consider how to employ GCNs to promote the performance of graph-based clustering methods. In this paper, we propo…
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which is not handled by $k$-means. Therefore,…
Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node while the latter learns to distinguish whether an edge exists between two nodes directly. In recent y...
A
Path Maximum Transmission Unit Discovery (PMTUD) determines the MTU size on the network path between two IP hosts. The process starts by setting the Don’t Fragment (DF) bit in IP headers. Any router along the path whose MTU is smaller than the packet will drop the packet, and send back an ICMP Fragmentation Needed / P...
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the...
Methodology. The core idea of the Path MTU Discovery (PMTUD) based tool is to send the ICMP Packet Too Big (PTB) message from a spoofed source IP address belonging to the tested network, and to insert in the 8-byte payload of the ICMP message the real IP address of the prober. If the network does not enforce ingres…
Path Maximum Transmission Unit Discovery (PMTUD) determines the MTU size on the network path between two IP hosts. The process starts by setting the Don’t Fragment (DF) bit in IP headers. Any router along the path whose MTU is smaller than the packet will drop the packet, and send back an ICMP Fragmentation Needed / P...
Methodology. We send a DNS request to the tested network from a spoofed IP address belonging to the tested network. If the network does not enforce ingress filtering, the request will arrive at the DNS resolver on that network. A query from a spoofed source IP address will cause the response to be sent to the IP addres...
B
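The PMTUD mechanics described above can be illustrated with a simple probe (our sketch using scapy; it needs raw-socket privileges and deliberately omits the spoofing component, demonstrating only the DF-bit/ICMP interplay):

from scapy.all import IP, ICMP, sr1   # pip install scapy

def probe_path_mtu(dst, sizes=(1500, 1400, 1300, 1200)):
    for size in sorted(sizes, reverse=True):
        pkt = IP(dst=dst, flags="DF") / ICMP() / ("X" * (size - 28))
        reply = sr1(pkt, timeout=2, verbose=False)
        if reply is None:
            continue                       # lost probe or silent drop
        if reply.haslayer(ICMP) and reply[ICMP].type == 3:
            print(f"{size}: Fragmentation Needed from {reply.src}")
        elif reply.haslayer(ICMP) and reply[ICMP].type == 0:
            print(f"{size}: echo reply, path MTU is at least {size}")
            return size
    return None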
Two processing steps were applied to the data used by all models included in this paper. The first preprocessing step was to remove all samples taken for gas 6, toluene, because there were no toluene samples in batches 3, 4, and 5. The data was too incomplete to draw meaningful conclusions. Also, with such data missin…
The first model in this domain [7] employed SVMs with one-vs-one comparisons between all classes. SVM classifiers project the data into a higher-dimensional space using a kernel function and then find a linear separator in that space that gives the largest distance between the two classes compared while minimizing the …
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data a…
This paper also presents an NN ensemble created in the same way as with SVMs. In the NN ensemble, $T-1$ skill networks are trained using one batch each for training. Each model is assigned a weight $\beta_i$ equal to its accuracy on…
While SVMs are standard machine learning, NNs have recently proven more powerful, so the first step is to use them on this task instead of SVMs. In the classification task, the networks are evaluated by the similarity between the odor class label (1-5) and the network’s output class label prediction given the unlab…
A
The goal would be to obtain an algorithm with running time $2^{O(f(\delta)\sqrt{n})}$, where $f(n) = O(n^{1/6})$ …
It would be interesting to see whether a direct proof can be given for this fundamental result. We note that the proof of Theorem 2.1 can easily be adapted to point sets of which the $x$-coordinates of the points need not be integer, as long as the difference between $x$-coordinates of any two consecu…
We believe that our algorithm can serve as the basis of an algorithm solving such a problem, under the assumption that the point sets are dense enough to ensure that the solution will generally follow these curves / segments. Making this precise, and investigating how the running time depends on the number of line segm...
In the second step, we therefore describe a method to generate the random point set in a different way, and we show how to relate the expected running times in these two settings. In the third step, we will explain which changes are made to the algorithm.
First of all, the $\Delta_i$ are now independent. Second, as we will prove next, the expected running time of an algorithm on a uniformly distributed point set can be bounded by the expected running time of that algorithm on a point set generated this …
B
Note that it is not known whether the class of automaton semigroups is closed under taking the opposite semigroup [3, Question 13]. In defining automaton semigroups, we make a choice as to whether states act on strings on the right (as in this paper) or the left,
During the research and writing for this paper, the second author was previously affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment...
from one to the other, then their free product S ⋆ T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups¹ (footnote 1: Note that the c...
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then S ⋆ T is an automaton semigroup. For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups:
C
Table A4 shows VQA accuracy for each answer type on VQACPv2’s test set. HINT/SCR and our regularizer show large gains on ‘Yes/No’ questions. We hypothesize that these methods help the model forget linguistic priors, which improves test accuracy on such questions. In the train set of VQACPv2, the answer ‘no’ is more frequent than t...
We test our regularization method on random subsets of varying sizes. Fig. A6 shows the results when we apply our loss to 1-100% of the training instances. Clearly, the ability to regularize the model does not vary much with respect to the size of the train subset, with the best performance o...
Our regularization method, which is a binary cross entropy loss between the model predictions and a zero vector, does not use additional cues or sensitivities and yet achieves near state-of-the-art performance on VQA-CPv2. We set the learning rate to 2×10⁻⁶/r...
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible...
As shown in Table 1, we present results when this loss is used on: a) a fixed subset covering 1% of the dataset, b) a varying subset covering 1% of the dataset, where a new random subset is sampled every epoch, and c) 100% of the dataset. Confirming our hypothesis, all varian...
A
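The regularizer described in these rows is concrete enough to sketch: binary cross entropy between the model's answer scores and an all-zero target, applied to a subset of instances, with the stated learning-rate scaling 2×10⁻⁶/r. A hedged PyTorch sketch; the function names and tensor shapes are our assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def zero_target_bce(logits, subset_mask):
    """BCE between sigmoid predictions and a zero vector, on chosen instances.

    logits: (batch, n_answers) raw scores; subset_mask: (batch,) bool tensor
    marking the randomly sampled instances the loss applies to.
    """
    target = torch.zeros_like(logits)
    per_instance = F.binary_cross_entropy_with_logits(
        logits, target, reduction="none").mean(dim=1)
    return (per_instance * subset_mask.float()).mean()

def scaled_lr(r):
    # Learning-rate scaling stated above: 2e-6 divided by the subset ratio r.
    return 2e-6 / r
```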
We crawled the 3.9 million selected URLs using Scrapy (https://scrapy.org/) for about 48 hours between the 4th and 10th of August 2019, for a few hours each day. 3.2 million URLs were successfully crawled, henceforth referred to as candidate privacy policies, while 0.4 million led to error pages and 0.3 million URLs w...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies (Amos et al., 2020)...
In order to address the requirement of a language model for the privacy domain, we created PrivBERT. BERT is a contextualized word representation model that is pretrained using bidirectional transformers (Devlin et al., 2019). It was pretrained on the masked language modelling and the next sentence prediction tasks an...
The complete set of documents was divided into 97 languages and an unknown language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.
Language Detection. We focused on privacy policies written in the English language, to enable comparisons with prior corpora of privacy policies. To identify the natural language of each candidate document, we used the open-source Python package Langid (Lui and Baldwin, 2012). Langid is a Naive Bayes-based classifier ...
D
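Since the rows above name the real `langid` package, a minimal sketch of the English-filtering step they describe; the helper function is ours, and we assume no confidence threshold beyond the predicted language:

```python
import langid  # pip install langid

def filter_english(candidates):
    """Keep candidate policy texts whose detected language is English."""
    english = []
    for text in candidates:
        lang, score = langid.classify(text)  # e.g. ('en', -1042.3); score is a log-prob
        if lang == "en":
            english.append(text)
    return english
```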
Another positive opinion from E3 was that, with a few adaptations to the performance metrics, StackGenVis could work with regression or even ranking problems. E3 also mentioned that supporting feature generation in the feature selection phase might be helpful. Finally, E1 suggested that the circular barcharts could onl...
As in the data space, each point of the projection is an instance of the data set. However, instead of its original features, the instances are characterized as high-dimensional vectors where each dimension represents the prediction of one model. Thus, since there are currently 174 models in S...
In numerous Kaggle competitions [20], stacking ensembles have led to award-winning results. However, when studying such ensembles, it is very hard to understand why specific instances, features, algorithms, and models were selected instead of others. Indeed, one of the major challenges in stacking is to select the best combinat...
Limitations. Efficiency and scalability were the major concerns raised by all the experts. The inherent computational burden of stacking multiple models still remains, as such complex ensemble learning methods need sufficient resources. Also, the use of VA in between levels makes this even worse. We believe that, with ...
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model led us to stable design goals and analytical tasks that were realized by StackGenVis. With the c...
C
By using the pairwise adjacency of (v,[112]), (v,[003]), and (v,[113]), we can confirm that in the 3 cases, these
By using the pairwise adjacency of (v,[112]), (v,[003]), and (v,[113]), we can confirm that in the 3 cases, these
cannot be adjacent to 2¯ nor 3¯, and so f′ is [013] or [010].
Then, by using the adjacency of (v,[013]) with each of (v,[010]), (v,[323]), and (v,[112]), we can confirm that
(E𝐂, (2¯, (u2,[013]))), (E𝐂, ((u1,[112]), (u2,[010])))...
C
where ℒ_{D_i^{train}}(θ) and ℒ_{D_i^{v...
Model-Agnostic Meta-Learning (MAML) [Finn et al., 2017] is one of the most popular meta-learning methods. It is trained on many tasks (i.e., small data sets) to obtain a parameter initialization that is easy to adapt to target tasks with a few samples. As a model-agnostic framework, MAML has been successfully employed in d...
In Experiment I: Text Classification, we use FewRel [Han et al., 2018] and Amazon [He and McAuley, 2016]. They are datasets for 5-way 5-shot classification, which means 5 classes are randomly sampled from the full dataset for each task, and each class has 5 samples. FewRel is a relation classification dataset with 65/...
Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks differ from each other. We shuffle the samples and randomly divide tasks to construct the setting in which tasks are similar to each other. For a fair comparison, each task in this setting also has 120 and 1200 utterances o...
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personali...
B
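The rows above describe MAML's two-level objective over ℒ_{D_i^{train}} and a validation loss. A minimal first-order sketch (FOMAML) in PyTorch, which drops the second-order term of full MAML for brevity; all names and the single inner step are our illustrative assumptions:

```python
import copy
import torch

def fomaml_step(model, tasks, loss_fn, inner_lr=0.01, outer_lr=0.001):
    """One meta-update. tasks: list of ((x_tr, y_tr), (x_val, y_val)) pairs."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for (x_tr, y_tr), (x_val, y_val) in tasks:   # D_i^train, D_i^valid
        fast = copy.deepcopy(model)
        # Inner loop: adapt the copy on the task's training split.
        inner_loss = loss_fn(fast(x_tr), y_tr)
        grads = torch.autograd.grad(inner_loss, list(fast.parameters()))
        with torch.no_grad():
            for p, g in zip(fast.parameters(), grads):
                p -= inner_lr * g
        # Outer loss on the validation split, gradient at the adapted weights.
        outer_loss = loss_fn(fast(x_val), y_val)
        grads = torch.autograd.grad(outer_loss, list(fast.parameters()))
        for mg, g in zip(meta_grads, grads):
            mg += g / len(tasks)
    with torch.no_grad():
        for p, mg in zip(model.parameters(), meta_grads):
            p -= outer_lr * mg                   # meta-update of initialization
    return model
```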
subject to ∥f_k∥ = 1, ∥w_k∥ = 1.
Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing new and efficient beam tracking solutions for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-base...
For both static and mobile mmWave networks, codebook design is of vital importance for enabling feasible beam tracking and driving the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac...
Note that some mobile mmWave beam tracking schemes exploiting position or motion state information (MSI) based on conventional ULA/UPA have been proposed recently. For example, beam tracking is achieved by directly predicting the AOD/AOA through improved Kalman filtering [26]; however, the work of [26] only targe...
In addition, the AOAs and AODs should be tracked in the highly dynamic UAV mmWave network. To this end, in Section IV we will further propose a novel predictive AOA/AOD tracking scheme in conjunction with tracking error treatment to address the high-mobility challenge, and then we integrate these operations into the codebo...
A
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
After the merging, the total degree of each vertex increases by tδ(A_0,B_0)². We perform the...
C
Deep reinforcement learning achieves phenomenal empirical successes, especially in challenging applications where an agent acts upon rich observations, e.g., images and texts. Examples include video gaming (Mnih et al., 2015), visuomotor manipulation (Levine et al., 2016), and language generation (He et al., 2015). Suc...
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Moreover, soft Q-learning is equivalent to a variant of policy gradient (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). Hence, Proposition 6.4 also characterizes the global optimality and convergence of such a variant of policy gradient.
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
A
Table 6 shows that though the BLEU improvements start saturating with deep depth-wise LSTM Transformers of more than 12 layers, depth-wise LSTM is able to ensure convergence of up to 24-layer Transformers. The experiments also show that the size differences between these datasets did not lead to differences...
Our approach with the Transformer base setting brings about more improvements on the English-German task than on the English-French task. We conjecture that this may be because the performance on the English-French task, which uses a large dataset (∼36M sentence pairs), may rely more on the capacity of th...
We show that the 6-layer Transformer using depth-wise LSTM can bring significant improvements in both WMT tasks and the challenging OPUS-100 multilingual NMT task. We show that depth-wise LSTM also has the ability to support deep Transformers with up to 24 layers, and that the 12-layer Transformer using depth-wis...
Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de...
Notably, on the En-De task, the 12-layer Transformer with depth-wise LSTM already outperforms the 24-layer vanilla Transformer, suggesting efficient use of layer parameters. On the Cs-En task, the 12-layer model with depth-wise LSTM performs on a par with the 24-layer baseline. Unlike in the En-De task, increasing dep...
D
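The rows above describe connecting Transformer layers with a depth-wise LSTM. A minimal sketch of one plausible reading, in which a shared LSTM cell plays the role of the residual connection by treating successive layer outputs as a sequence over depth; this is our illustration under that assumption, not the paper's implementation:

```python
import torch
import torch.nn as nn

class DepthWiseLSTMStack(nn.Module):
    """Stack of sub-layers whose outputs are fused by an LSTM cell over depth."""

    def __init__(self, d_model, n_layers, layer_factory):
        super().__init__()
        self.layers = nn.ModuleList([layer_factory() for _ in range(n_layers)])
        self.cell = nn.LSTMCell(d_model, d_model)  # shared across all depths

    def forward(self, x):                  # x: (batch * tokens, d_model)
        h, c = x, torch.zeros_like(x)      # hidden state initialized from input
        for layer in self.layers:
            out = layer(h)                 # sub-layer computation at this depth
            h, c = self.cell(out, (h, c))  # depth-wise recurrence instead of residual
        return h
```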
τ_{⊆_i} ∩ ⟦FO[σ]⟧_{Struct(σ)} ⊆ 𝒦∘(Struct(σ)).
Consider a monotone sentence φ ∈ FO[σ]_{Struct(σ)}. Let (U_i)_{i∈I}...
Because ⟦φ⟧_{Struct(σ)} is an open set of Struct(σ) for τ_{FO}...
⟦FO[σ]⟧_{Struct(σ)} and ⟦EFO[σ]⟧_{Struct(σ)}...
τ_{⊆_i} ∩ ⟦FO[σ]⟧_{Struct(σ)}
A
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimate...
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimate...
Distortion Learning Evaluation: Then, we introduce three key elements for evaluating the learning representation: training data, convergence, and error. Supposing that settings such as the network architecture and optimizer are the same, a better learning representation is indicated by requiring less training da...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even when the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
To exhibit the performance fairly, we employ three common network architectures, VGG16, ResNet50, and InceptionV3, as the backbone networks of the learning model. The proposed MDLD metric is used to express the distortion estimation error due to its unique and fair measurement of the distortion distribution. To be specific...
B
The momentum coefficient is set as 0.9 and the weight decay is set as 0.001. The initial learning rate is selected from {0.001, 0.01, 0.1} according to the performance on the validation set. We do not adopt any learning rate decay or warm-up strategies. The model is tra...
Hence, with the same number of gradient computations, SNGM can adopt a larger batch size than MSGD to converge to the ϵ-stationary point. Empirical results on deep learning further verify that SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods...
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
showed that existing SGD methods with a large batch size will lead to a drop in the generalization accuracy of deep learning models. Figure 1 shows a comparison of training loss and test accuracy between MSGD with a small batch size and MSGD with a large batch size. We can find that large-batch training indeed
A
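The rows above compare SNGM with MSGD on large-batch training. A minimal sketch of a stochastic normalized gradient step with momentum consistent with that description; the hyperparameters follow the text where given, and the exact SNGM update rule may differ from this reading:

```python
import torch

@torch.no_grad()
def sngm_step(params, momenta, lr=0.1, beta=0.9, eps=1e-12):
    """One normalized-gradient-with-momentum update over a list of parameters."""
    # Global gradient norm across all parameters, guarded against division by 0.
    norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params)) + eps
    for p, u in zip(params, momenta):
        u.mul_(beta).add_(p.grad / norm)   # u = beta * u + g / ||g||
        p.add_(u, alpha=-lr)               # p = p - lr * u
```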
An outbreak is an instance from 𝒟, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions. To continue this example, there may be further constraints on F_I...
There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality |𝒮|. Our polynomial-scenarios algorithms are carefully designed to make |𝒮|...
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ...
The most general way to represent the scenario distribution 𝒟 is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios A according to 𝒟. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the ...
Our main goal is to develop algorithms for the black-box setting. As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms ...
D
Besides, the network graphs may change randomly with spatial and temporal dependence (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than forming i.i.d. graph sequences as in [12]-[15], and additive and...
Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
I. The local cost functions in this paper are not required to be differentiable, and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably appears in the recursive inequality of the conditi...
We have studied the distributed stochastic subgradient algorithm for stochastic optimization by networked nodes cooperatively minimizing a sum of convex cost functions. We have proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditio...
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independence with identical distribution, Markovian...
A
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, so that the equivalence groups only preserv...
Although the generalization for k-anonymity provides enough protection for identities, it is vulnerable to attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by mat...
As observed in Figure 7(a), the information loss of MuCo increases as the parameter δ decreases. According to Corollary 3.2, each QI value in the released table corresponds to more records as δ is reduced, so that more records have to be involved in the covering on the QI ...
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, so that the equivalence groups only preserv...
Moreover, the level of protection for identities may be compelled to increase to meet the condition of l-diversity. For example, the generalized table in Figure 1(c) must comply with at least 5-anonymity to satisfy 5-diversity even if the demand for protecting identities is not that high. ...
D
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. A mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
Table 2: PointRend’s step-by-step performance on our own validation set (split from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to the default P2-P5 levels of FPN for both the coarse prediction head and the fine-grained point head. “...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
D
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}^n which have modulus 1 fails. This solves a question raised by Gady Kozma s...
We denote by ε_i : {−1,1}^n → {−1,1} the projection onto the i-th coordinate: ε_i(δ_1,…,δ_n) = δ_i...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
A
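For orientation, the standard Fourier-Walsh conventions behind the projections ε_i above, together with the inequality the Fourier-Entropy-Influence conjecture of [FK] asserts; these are well-known background facts, not claims taken from the note itself:

```latex
% Fourier--Walsh expansion on the discrete cube:
f \;=\; \sum_{S \subseteq [n]} \hat f(S)\, w_S,
\qquad w_S \;=\; \prod_{i \in S} \varepsilon_i .
% For Boolean f (so that \sum_S \hat f(S)^2 = 1), the conjecture asserts a
% universal constant C with
\sum_{S \subseteq [n]} \hat f(S)^2 \log \frac{1}{\hat f(S)^2}
\;\le\; C \sum_{S \subseteq [n]} |S|\, \hat f(S)^2 .
```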
Figure 1: Comparisons of different methods on cumulative reward under two different environments. The results are averaged over 10 trials and the error bars show the standard deviations. The environment changes abruptly in the left subfigure, whereas the environment changes gradually in the right subfigure.
For the case when the environment changes abruptly L times, our algorithm enjoys an Õ(L^{1/3}T^{2/3}) dy...
From Figure 1, we find that the restart strategy works better under abrupt changes than under gradual changes, since the gap between our algorithms and the baseline algorithms designed for stationary environments is larger in this setting. The reason is that the algorithms designed to explore in stationary MDPs are gen...
From Figure 1, we see LSVI-UCB-Restart with the knowledge of global variation drastically outperforms all other methods designed for stationary environments, in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the Q function with knowledge of the total variatio...
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. They are much lower than those of MASTER, OPT-WLSVI, LSVI-UCB, and Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th...
B
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,...
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a...
B
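Given that the rows above follow a fixed schema (a context excerpt, four candidate continuations A-D, and a single-letter label), a minimal sketch of loading such a Parquet-backed dataset with the Hugging Face `datasets` library; the dataset identifier and split name are placeholders, since the page does not display the real ones:

```python
from datasets import load_dataset

# "user/dataset-name" and "train" are hypothetical; substitute the real id/split.
ds = load_dataset("user/dataset-name", split="train")

row = ds[0]
print(row["context"][:200])           # truncated paper excerpt
print([row[k][:80] for k in "ABCD"])  # the four candidate continuations
print(row["label"])                   # one of "A", "B", "C", "D"
```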