Dataset Viewer (auto-converted to Parquet)

Columns: context (string, 250–5.19k chars) · A (string, 250–8.2k) · B (string, 250–4.17k) · C (string, 250–3.6k) · D (string, 250–4.66k) · label (string, 4 classes)
to the weight such that a Gauss-Legendre integration for moments $x^{D+m-1}$ is engaged and the wiggly remainder of $R_n^m$ …
$\displaystyle R_{n}^{m}(x)=\sum_{s=0}^{(n-m)/2}(-1)^{s}\binom{\frac{n-m}{2}}{s}\binom{\frac{D}{2}+n-s-1}{\frac{n-m}{2}}x^{n-2s}.$
that adds the results of $1+(n-m)/2$ Gaussian integrations for moments $x^{D-1+n-2s}$. The disadvantage
to the weight such that a Gauss-Legendre integration for moments $x^{D+m-1}$ is engaged and the wiggly remainder of $R_n^m$ …
Gaussian integration rules for integrals $\int_{0}^{1}x^{D-1}R_{n}^{m}(x)f(x)\,dx$ …
B
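The moment-wise quadrature described in the excerpt above (expand $R_n^m$ via its binomial sum, then add $1+(n-m)/2$ Gaussian integrations for the moments $x^{D-1+n-2s}$) can be sketched as follows; all function names are illustrative, and a generalized binomial handles the possibly half-integer argument $D/2+n-s-1$:

```python
import math
import numpy as np

def gen_binom(a, k):
    """Generalized binomial coefficient C(a, k): real a, integer k >= 0."""
    out = 1.0
    for i in range(k):
        out *= (a - i)
    return out / math.factorial(k)

def radial_coeffs(n, m, D):
    """Coefficients c_s of R_n^m(x) = sum_s c_s x^(n-2s) (expansion above)."""
    half = (n - m) // 2
    return [(-1) ** s * math.comb(half, s) * gen_binom(D / 2 + n - s - 1, half)
            for s in range(half + 1)]

def integrate_weighted(f, n, m, D, nodes=32):
    """Approximate ∫_0^1 x^(D-1) R_n^m(x) f(x) dx as a sum of 1+(n-m)/2
    Gauss-Legendre moment integrals for x^(D-1+n-2s)."""
    x, w = np.polynomial.legendre.leggauss(nodes)
    x, w = 0.5 * (x + 1.0), 0.5 * w  # map [-1, 1] onto [0, 1]
    return sum(c * np.sum(w * x ** (D - 1 + n - 2 * s) * f(x))
               for s, c in enumerate(radial_coeffs(n, m, D)))
```

For $f\equiv 1$ the result can be checked against the analytic moments $1/(D+n-2s)$.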
For example, computing the Bruhat decomposition of a random matrix in $\mathrm{GL}(250,2)$ resulted in an SLP of length 353 969. During the evaluation, our MSLP required 32 memory slots and it was easily possible to evaluate this MSLP on the standard generators of…
This adds only one extra MSLP instruction, in order to form and store the element $xv^{-1}$ needed in the conjugate on the right-hand side of (2) (this element can later be overwritten and so does not add to the overall maximum memory quo…
We note that after applying the function SlotUsagePattern, the resulting SLP only required 12 memory slots and could be evaluated in the same time as our MSLP. This is because SlotUsagePattern was handed a well-designed SLP. When faced with an SLP not designed to be memory efficient, one might not ex…
does not yield an upper bound for the memory requirement in a theoretical analysis. Moreover, the result of SlotUsagePattern improves the memory usage but it is not necessarily optimized overall and, hence, the number of slots can still be greater than the number of slots of a carefully computed MSLP. It should also be...
The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
B
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85…
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov…
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis…
Of course, the numerical scheme and the estimates developed in Section 3.1 still hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case our method is similar to that of [MR3591945], with some differences. First we consider that T…
B
We remark that the previously best known algorithms for finding the minimum area / perimeter all-flush triangle take nearly linear time [6, 1, 2, 3, 23], that is, $O(n\log n)$ or $O(n\log^{2}n)$ …
The inclusion / circumscribing problems usually admit the property that the set of locally optimal solutions is pairwise interleaving [6]. Once this property is admitted and $k=3$, we show that an iteration process (also referred to as Rotate-and-Kill) can be applied for searching all the locally optim…
Using a Rotate-and-Kill process (which is shown in Algorithm 5), we find all the edge pairs and vertex pairs in $\mathsf{U}_{r,s,t}$ that are not G-dead.
Then, during the Rotate-and-Kill process, the pair $(e_{b},e_{c})$ will meet all pairs that are not DEAD, which implies that the algorithm finds the minimum perimeter (all-…
in the Rotate-and-Kill process, and we are at the beginning of another iteration $(b',c')$ satisfying (2).
A
Most relevant for our work is the work presented in [20], where a time series model is used to capture the time-based variation of social-content features. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys…
As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the results of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit afte…
In the lower part of the pipeline, we extract features from tweets and combine them with the CreditScore to construct the feature vector in a time series structure called the Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. news (non-rumor) classification.
The processing pipeline of our classification approach is shown in Figure 2. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Cred…
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We trade this off by debunking at single tweet le…
C
The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile continuing to optimize long after we have zero training …
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set, for which $\exists\,\mathbf{x}\in\mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}}<0$…
We should not rely on plateauing of the training loss, or on the loss (logistic, exp, or cross-entropy) evaluated on validation data, as measures to decide when to stop. Instead, we should look at the 0–1 error on the validation dataset. We might improve the validation and test errors even when the decrease …
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a…
C
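The behavior described in the excerpts above (the 0–1 error hits zero long before the loss plateaus, so training loss is a poor stopping criterion) can be illustrated on a toy separable problem; the data, step size, and iteration count are arbitrary choices, not from the cited papers:

```python
import numpy as np

def logistic_loss_grad(w, X, y):
    """Average logistic loss and its gradient for labels y in {-1, +1}."""
    margins = y * (X @ w)
    loss = np.mean(np.log1p(np.exp(-margins)))
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
    return loss, grad

# Linearly separable toy data.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-2.0, -1.0], [-1.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
zero_err_step = None
for t in range(20000):
    loss, grad = logistic_loss_grad(w, X, y)
    w -= 0.5 * grad
    if zero_err_step is None and np.all(np.sign(X @ w) == y):
        zero_err_step = t  # 0-1 error reaches zero almost immediately...

# ...while the loss keeps shrinking and the direction w/|w| keeps drifting.
print(zero_err_step, loss, w / np.linalg.norm(w))
```

The loss never reaches zero on separable data (the norm of `w` must grow forever), which is exactly why the direction converges so slowly.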
The effective cascaded model that engages both low- and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402). The model uses the time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to…
The performance of the user features is similar to that of the Twitter features; they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours of the user feature group is UserTweetsPerDays; it is the best feature overall in the first 4 hours, but its rank decreases with …
In this work, we present a deep analysis of the feature variants over 48 hours for the rumor detection task. The results show that the low-level hidden representation of tweets is at least the second-best feature over time. We also derive explanations for the low performance of supposed-to-be-strong high-level…
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We trade this off by debunking at single tweet le…
We investigate how the performance of different types of low and high-level features changes over time (during the spreading of rumors); improving the understanding of feature impact and model design for rumor detection at different points in time.
D
$\mathrm{score}(\bar{a})=\sum_{m\in M}P(\mathcal{C}_{k}\mid e,t)\,P(\mathcal{T}\ldots)\,\mathsf{f}^{*}_{m}(\bar{a})$ …
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the previously mentioned event times. We compare the result of the cascaded model with non-cascaded logistic regression. The res…
We propose two sets of features, namely, (1) salience features (taking into account the general importance of candidate aspects), which are mainly mined from Wikipedia, and (2) short-term interest features (capturing a trend or timely change), which are mined from the query logs. In addition, we also leverage click-flow relatednes…
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall…
to add additional features from $\mathcal{M}^{1}$. The feature vector of $\mathcal{M}_{LR}^{2}$ consists of …
B
$R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\},$
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ...
Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many. TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018].
one uses $p(\theta_{t}\mid\mathcal{H}_{1:t})$ to compute the probability of an arm being optimal, i.e., $\pi(A\mid x_{t+1},\mathcal{H}_{1:t})=\mathbb{P}(A=a^{*}_{t+1}\mid x_{t+1},\theta_{t},$…
C
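The probability-matching behavior of TS described above can be sketched for Bernoulli arms with Beta posteriors; the arm means, horizon, and seed are illustrative, not from the excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.2, 0.5, 0.8]             # assumed toy arm probabilities
alpha = np.ones(3)                       # Beta(1, 1) priors per arm
beta = np.ones(3)
counts = np.zeros(3, dtype=int)

for t in range(2000):
    theta = rng.beta(alpha, beta)        # sample one plausible mean per arm
    a = int(np.argmax(theta))            # play the arm that looks best
    reward = rng.random() < true_means[a]
    alpha[a] += reward                   # conjugate Bernoulli/Beta update
    beta[a] += 1 - reward
    counts[a] += 1

print(counts)                            # the best arm dominates the pulls
```

Each round draws one sample from every posterior and plays the argmax, so an arm is chosen exactly with the posterior probability of it being optimal.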
Table 2 gives an overview of the number of different measurements that are available for each patient. (Footnote 1: For patient 9, no data is available.) The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14.
Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i…
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t…
A
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met...
Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity and the superscript † represents pre-trai…
Table 1: Quantitative results of our model for the MIT300 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone)…
Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone…
We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In case a reimplementation was not possible, we faithfully estimated a lowe...
A
We next formally define the computational problems of computing the parameters defined above. By Loc, Cutwidth and Pathwidth, we denote the problems to check for a given word $\alpha$ or graph $G$ and integer $k\in\mathbb{N}$, whether $\operatorname{loc}(\alpha)\le k$…
The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local...
In this section, we discuss some examples that illustrate the concepts of marking sequences and the locality number, and we also discuss some word combinatorial properties related to the locality number. Note that for illustration purposes, the example words considered in this section are not necessarily condensed.
In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). In the next Section 3, we discuss the concept of the locality number with some examples and some word combinatorial considerations. The purpose of this section is to develop a better under...
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection....
B
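For small words, the locality number and marking sequences discussed in the excerpts above can be computed by brute force; this sketch assumes the standard definition (minimize, over enumerations of the alphabet, the maximum number of marked blocks seen while marking letters one at a time):

```python
from itertools import permutations

def locality(word):
    """Brute-force locality number of a word: try every marking order of the
    alphabet, track the worst (maximum) number of marked blocks along the way,
    and return the best achievable worst case."""
    alphabet = sorted(set(word))
    best = len(word)
    for order in permutations(alphabet):
        marked = set()
        worst = 0
        for ch in order:
            marked.add(ch)
            # count maximal contiguous blocks of marked positions
            blocks, prev = 0, False
            for c in word:
                cur = c in marked
                if cur and not prev:
                    blocks += 1
                prev = cur
            worst = max(worst, blocks)
        best = min(best, worst)
    return best
```

For example, `locality("aba")` is 1 (mark `b` first, then `a`), while `locality("abab")` is 2 (every marking order leaves two marked blocks at some stage). The factorial enumeration is only viable for small alphabets, which is exactly why the reductions to cutwidth and pathwidth in the excerpt matter.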
Application/Notes (Footnote 13: In parentheses, the databases used by the paper or by papers in the subsection. In the ‘PCG/Physionet 2016 Challenge’ subtable all papers use PHY besides [113], and in the ‘Other signals’ subtable all papers use private databases besides [118].)
Accuracy (Footnote 14: There is a wide variability in results reporting. [109] reports specificity, [115] reports results for SBP and DBP, [117] reports sensitivity and specificity, [118] reports positive predictive value, [119] reports AUC for diabetes; results are also reported for high cholesterol, sleep apnea and high BP.)
In [119] the authors trained a semi-supervised, multi-task bi-directional LSTM on data from 14011 users of the Cardiogram app for detecting diabetes, high cholesterol, high BP, and sleep apnoea. Their results indicate that the heart’s response to physical activity is a salient biomarker for predicting the onset of a dis…
Accuracy (Footnote 12: There is a wide variability in results reporting. The result of [77] is for ventricular/supraventricular ectopic beats, [78] for three types of arrhythmias, [82] for five types of arrhythmias; [84] reports precision, [90] reports SNR and multiple results depending on added noise, the result of [91] …)
DBNs have also been used in combination with structured data, besides RNNs and AEs. In [73] the authors first performed a statistical analysis of a dataset with 4244 records to find variables related to cardiovascular disease from demographics and lifestyle data (age, gender, cholesterol, high-density lipoprotein, SBP, D…
A
Our work advances the state-of-the-art in model-based reinforcement learning by introducing a system that, to our knowledge, is the first to successfully handle a variety of challenging games in the ALE benchmark. To that end, we experiment with several stochastic video prediction techniques, including a novel model b...
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ...
In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in the low-data regime of 100k samples, on more than half of the games, our method achieves a score…
We evaluate our method on 26 games selected on the basis of being solvable with existing state-of-the-art model-free deep RL algorithms (Footnote 2: Specifically, for the final evaluation we selected games which achieved non-random results using our method or the Rainbow algorithm using 100K interactions.), which in…
and for a summary see Figure 3. It can be seen that our method is more sample-efficient than a highly tuned Rainbow baseline on almost all games, requires less than half of the samples on more than half of the games and, on Freeway, is more than 10x more sample-efficient. Our method outperforms PPO by an even larger ma...
B
However, more work needs to be done on fully replacing non-trainable S2Is, not only with the aim of achieving higher accuracy but also of increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level…
This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for …
However, more work needs to be done on fully replacing non-trainable S2Is, not only with the aim of achieving higher accuracy but also of increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level…
Future work could include testing this hypothesis by initializing a ‘base model’ using transfer learning or other initialization methods. Moreover, trainable S2Is and 1D ‘base model’ variations could also be used for other physiological signals besides EEG such as Electrocardiography, Electromyography and Galvanic Skin...
For the purposes of this paper, and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para…
C
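A minimal non-trainable S2I in the sense defined above is a spectrogram-style transform; this sketch (names and window parameters are illustrative) maps a 1D signal to a 2D log-magnitude array that a 2D ‘base model’ such as a CNN could consume:

```python
import numpy as np

def signal2image(sig, win=64, hop=32):
    """Non-trainable Signal2Image sketch: windowed FFT (log-magnitude STFT).
    Returns an array of shape (frequency bins, time frames)."""
    frames = []
    for start in range(0, len(sig) - win + 1, hop):
        frame = sig[start:start + win] * np.hanning(win)  # taper the window
        frames.append(np.abs(np.fft.rfft(frame)))         # magnitude spectrum
    return np.log1p(np.array(frames).T)                   # compress dynamics

t = np.linspace(0, 1, 512, endpoint=False)
img = signal2image(np.sin(2 * np.pi * 50 * t))
print(img.shape)  # (33, 15): 33 frequency bins, 15 time frames
```

Because it has no parameters, such a module cannot adapt to the signal; the trainable S2Is discussed in the excerpt replace exactly this fixed transform with learned layers.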
This section describes the primary locomotion modes, rolling and walking locomotion of our hybrid track-legged robot named Cricket shown in Fig. 2. It also introduces two proposed gaits designed specifically for step negotiation in quadrupedal wheel/track-legged robots.
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal...
The Cricket robot [20], a fully autonomous track-legged quadruped robot, forms the basis of this study. Its design embodies fully autonomous behaviors, and its locomotion system showcases a unique combination of four rotational joints in each leg, which can be seen in Fig. 3…
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result...
Figure 2: The Cricket robot (left) and its leg joints layout (right). The Cricket robot [20] is a hybrid locomotion system that utilizes four revolute joints on each leg. The outermost leg segment is equipped with a drivable track that encircles it, enabling the robot to move like traditional skid-steer tank robots.
B
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ...
Second, our model considers the size of advice and its impact on the algorithm’s performance, which is the main focus of the advice complexity field. For all problems we study, we parameterize advice by its size, i.e., we allow advice of a certain size $k$. Specifically, the advice need not necessarily encode…
In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size $k$, let $\eta$ denote the number of erroneous bits (which may not be known to the algorithm). In this setting, the objective would…
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with a larger number of …
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ...
B
It is also worth mentioning that this is a vital and very relevant aspect: if we value these specific words, as is usual, only by their local probability (Footnote 24: which is the case, for instance, with Multinomial Naive Bayes) or frequency, as shown in (b), they will always have almost “no value” since, naturally, their …
From the previous analysis, it is clear that useful information can be obtained from the study of those cases where our approach was not able to correctly predict a class. With this goal in mind, we also carried out an error analysis and identified four common error cases which could be divided into two groups: those t...
Additionally, in order to better understand the good results obtained, another important aspect to analyze is how the early classification was actually carried out using the simplest policy to decide when to positively classify subjects. Figure 6 shows four subjects from the test set that illustrate four types o…
We trained the classifiers using the optimized parameters on the whole training dataset and then analyzed the incremental post-by-post classification. Now, writings are processed sequentially; that is, early classification evaluation was carried out, as mentioned, one writing at a time. Additionally, we decided…
Additionally, in order to compare the results among the different participants, the entire dataset was split into two sets: a training set and a test set. The details of the dataset are presented in Table 1. Note that the dataset is highly unbalanced, namely, only 17% of the subjects in the training set are labeled as ...
B
$\frac{1}{T}\sum_{t\in[T]}\mathbb{E}\|\nabla F(\mathbf{w}_{t})\|^{2}\le\mathcal{O}\!\left(\frac{1}{\sqrt{KT}}\right)$
Note that the convergence guarantee of DEF-A and its momentum variant for non-convex problems is lacking in (Xu and Huang, 2022). We provide the convergence analysis for GMC+, which can be seen as a global momentum variant of DEF-A. We eliminate the assumption of ring-allreduce compatibility from (Xu and Huang, 2022) a...
DEF-A achieves its best performance when $\lambda=0.3$. In comparison, GMC+ outperforms DEF-A across different $\lambda$ values and shows a preference for a larger $\lambda$ (e.g., 0.5). In the following experiments, we set $\lambda$ to 0.3 for DEF-A and 0.5 for GMC+. $\lambda=$…
Due to the larger compression error introduced by RBGS compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge. Xu and Huang (2022) propose DEF-A to solve the convergence problem by using detached error fee…
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mo...
A
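The interplay of momentum, top-$s$ sparsification, and error feedback described in the excerpts above can be sketched as a single worker-side step; this is a schematic reading with illustrative names, not the exact GMC/GMC+ update from the paper:

```python
import numpy as np

def topk_sparsify(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def step_with_error_feedback(grad, momentum, error, lr=0.1, beta=0.9, k=2):
    """One worker-side step: momentum is accumulated on the full (global)
    gradient before compression, and the compression residual is carried
    over to the next round (error feedback)."""
    momentum = beta * momentum + grad    # momentum on the uncompressed signal
    update = lr * momentum + error       # re-inject last round's residual
    sent = topk_sparsify(update, k)      # sparse communication payload
    error = update - sent                # store what compression dropped
    return sent, momentum, error
```

Error feedback guarantees nothing is lost permanently: `sent + error` always reconstructs the intended dense update, which is what lets such schemes recover the dense convergence rate.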
For the same task as the previous one but in 2D, we use MNIST, which consists of a training dataset of 60,000 greyscale images of handwritten digits and a test dataset of 10,000 images, each of size 28×28.
The first two fully connected layers are followed by a ReLU while the last one produces the predictions. The CNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as the loss function.
During supervised learning the weights of the kernels are frozen and a one-layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs. The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as…
From the point of view of Sparse Dictionary Learning, SANs kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input the kernels of SANs are ECG beats) and the sparse activation map as the representation. The fact that SANs are wide...
Using backpropagation [2] the gradient of each weight w.r.t. the error of the output is efficiently calculated and passed to an optimization function such as Stochastic Gradient Descent or Adam [3] which updates the weights making the output of the network converge to the desired output. DNNs were successful in utilizi...
B
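The “kernels as dictionary atoms, sparse activation map as representation” view in the excerpt above can be illustrated with a single kernel: correlate it with the signal, keep a few strong non-overlapping activations, and reconstruct the signal as shifted copies of the atom. The non-overlapping selection rule here is an assumption for illustration, not the exact SAN mechanism:

```python
import numpy as np

def sparse_activation_map(signal, kernel, k=3):
    """Correlate one atom with the signal, keep the k strongest
    non-overlapping activations, and reconstruct from them."""
    acts = np.correlate(signal, kernel, mode="valid")
    sparse = np.zeros_like(acts)
    taken = []
    for i in np.argsort(-np.abs(acts)):            # strongest peaks first
        if all(abs(i - j) >= len(kernel) for j in taken):
            sparse[i] = acts[i]
            taken.append(i)
            if len(taken) == k:
                break
    recon = np.zeros_like(signal)
    for i in taken:                                # sum of shifted atoms
        recon[i:i + len(kernel)] += sparse[i] * kernel
    return sparse, recon
```

When the signal really is a few shifted copies of the atom (as with ECG beats in the excerpt), the sparse map pinpoints their positions and the reconstruction is exact.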
Fig. 12 shows the effect of $m$ on the behavior of SPBLLA. Setting $\tau=0.01$ and $m>0.028$, we choose 5 values from 0.03 to 0.05. As $m$ gets higher, SPBLLA needs more time for convergence. Since higher $m$ …
For power selection of $\mathrm{UAV}_{i}$, a large power does not necessarily result in high utility due to the large interference that comes with it. Taking energy saving and longer lifetime into consideration, choosing the right amount of power that bal…
The essence of PBLLA is selecting an alternative UAV randomly in one iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of the two strategies and τ. A UAV prefers to select the power and altitude which provide higher utility. Neve...
When UAVs need to communicate, the signal-to-noise ratio (SNR) mainly determines the quality of service. Each UAV's power and inherent noise act as interference to the others. Since there are hundreds of UAVs in the system, each UAV is unable to sense all the other UAVs' power explicitly, but can only sense and measure aggreg...
Fig. 12 presents the sketch diagram of a UAV's utility as power is altered. The altitudes of the UAVs are fixed. When other UAVs' power profiles change, the interference increases and the curve moves down. The high interference will reduce the utility of the UAV. Fig. 12 also shows that utility decreases and increase...
D
where r̂ and ẑ are the unit vectors in the radial and axial directions, and P̄_r
radial coordinates of the nodes. Similarly, the axial centroid coordinates are defined as ẑ = ⟨z̄⟩^e. The vector of nodal
and P̄_z are the nodal representations of the r and z components of the continuous vector field p(r).
where r̂ and ẑ are the unit vectors in the radial and axial directions, and P̄_r
poloidal field is calculated at the boundary nodes by calculating t_r and t_z, the r and z components of the unit tangents
B
When using the framework, one can further require reflexivity on the comparability functions, i.e. f(x_A, x_A) = 1_A...
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
Intuitively, if an abstract value x_A of ℒ_A is interpreted as 1 (i.e., equality) by h_A...
When using the framework, one can further require reflexivity on the comparability functions, i.e. f(x_A, x_A) = 1_A...
f_A(u,v) = f_B(u,v) = { 1 if u = v ≠ null; a if u ≠ null, v ≠ null and u ≠ v; b if u = v = null; 0 otherwise. }
A
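The piecewise comparability function above can be sketched in Python. Here null is modeled as Python's None, and the lattice values a and b are kept as symbolic placeholders, since their concrete interpretation is domain-dependent (an assumption of this sketch, not part of the original definition):

```python
NULL = None  # the null value is modeled as Python's None

def comparability(u, v, a="a", b="b"):
    """Piecewise comparability function:
    1 if u = v and neither is null,
    a if both are non-null but unequal,
    b if both are null,
    0 otherwise (exactly one of them is null)."""
    if u is not NULL and v is not NULL:
        return 1 if u == v else a
    if u is NULL and v is NULL:
        return b
    return 0
```

Note that reflexivity holds for non-null values (f(x, x) = 1), while f(null, null) = b, matching the relaxed-reflexivity discussion above.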
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CartPole problem from the Class...
For the experiments, a fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons each, with two Dropout layers: one between the input layer and the first hidden layer and one between the two hidden layers. To minimize the DQN loss, the ADAM optimizer was used [25].
Standard Dropout is the original Dropout method, introduced in 2012. It provides a simple technique for avoiding over-fitting in fully connected neural networks [12]. During each training phase, each neuron is excluded from the network with probability p. Once trained, in the testing phase the full network is u...
A fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons each, with two Dropout layers: one between the input layer and the first hidden layer and one between the two hidden layers. The ADAM optimizer was used for the minimization [25].
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation across the variants indicates. In Table 1, the Wilcoxon Signed-Rank test was used to analyze the effect of variance before applying Dropout (DQN) and aft...
A
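The Standard Dropout scheme described above (drop each neuron with probability p during training; use the full network at test time, scaled by 1 − p to match the training-time expectation) can be sketched as follows; the list-based layer and function name are illustrative assumptions, not from the paper:

```python
import random

def dropout_forward(activations, p, training=True):
    """Standard (non-inverted) dropout.
    Training: each unit is zeroed independently with probability p.
    Testing: the full network is used, with outputs scaled by (1 - p)
    so the expected activation matches training time."""
    if training:
        return [0.0 if random.random() < p else a for a in activations]
    return [a * (1.0 - p) for a in activations]
```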
Multi-task learning (Caruana, 1997) refers to a machine learning approach where multiple tasks are learned simultaneously, and the learning efficiency and the model performance on each of the tasks are improved because of the existing commonalities across the tasks. For visual recognition tasks, it has been shown that...
Bischke et al. (2019) proposed a cascaded multi-task loss to preserve boundary information from segmentation masks for segmenting building footprints and achieved state-of-the-art performance on an aerial image labeling task. He et al. (2017) extended Faster R-CNN (Ren et al., 2015) by adding a new branch to predict th...
Khosravan et al. (2019) proposed an adversarial training framework for pancreas segmentation from CT scans. Son et al. (2017) applied GANs for retinal image segmentation. Xue et al. (2018) used a fully convolutional network as a segmenter in the generative adversarial framework to segment brain tumors from MRI images....
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, ...
Mask R-CNN has also been used for segmentation tasks in medical image analysis such as automatically segmenting and tracking cell migration in phase-contrast microscopy (Tsai et al., 2019), detecting and segmenting nuclei from histological and microscopic images (Johnson, 2018; Vuola et al., 2019; Wang et al., 2019a, b...
A
Black line: the threshold from [28] indicating the value of λ^s_max/2 below which one should switch to the random cut to obtain a solution ≥ 0.53...
Fig. 4 illustrates how the size of the cut γ(z) induced by the spectral partition z changes as more edges are added and the original structure of the graph is corrupted (blue line). The figure also reports the size of the random cut (orange line) and the...
We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges. Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yield...
We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges. Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yield...
Black line: the threshold from [28] indicating the value of λ^s_max/2 below which one should switch to the random cut to obtain a solution ≥ 0.53...
B
In this work, we present an imitation learning approach to generate neural networks from random forests, which results in very efficient models. We introduce a method for generating training data from a random forest that creates any amount of input-target pairs. With this data, a neural network is trained to imitate t...
The proposed method generates data from a random forest and trains a neural network that imitates the random forest. The goal is that the neural network approximates the same function as the random forest. This also implies that the network reaches the same accuracy if successful.
Experiments demonstrate that the accuracy of the imitating neural network equals, or due to better generalization even slightly exceeds, that of the random forest, while the network is significantly smaller. To summarize, our contributions are as follows:
Our method significantly reduces the number of parameters of the generated networks while reaching the same or even slightly better accuracy. The current best-performing methods generate networks with an average number of parameters of either 142 000, if sparse processing is available, or 748 000...
Compared to state-of-the-art methods, the presented implicit transformation significantly reduces the number of parameters of the networks while achieving the same or even slightly improved accuracy due to better generalization. Our approach has shown that it scales very well and is able to imitate highly complex class...
B
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt...
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ...
step with α → ∞ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy π* within K = H episodes and hence equivalently induces...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
B
Lin et al. (2016) consider fixed-point quantization of pre-trained full-precision DNNs. They formulate a convex optimization problem to minimize the total number of bits required to store the weights and the activations under the constraint that the total output signal-to-quantization noise ratio is larger than a certa...
In Wu et al. (2018b), weights, activations, weight gradients, and activation gradients are subject to customized quantization schemes that allow for variable bit widths and facilitate integer arithmetic during training and testing. In contrast to Zhou et al. (2016), the work of Wu et al. (2018b) accumulates weight chan...
The challenge is to reduce the number of bits as much as possible while at the same time keeping the prediction accuracy close to that of a well-tuned full-precision DNN. Subsequently, we provide a literature overview of approaches that train reduced-precision DNNs, and, in a broader view, we also consider methods that...
At test-time, the full-precision weights are abandoned and only the quantized reduced-precision weights are kept. We term this scheme quantization-aware training since quantization is an essential part during forward-propagation and it is intuitive to think of the real-valued weights as becoming robust to quantization.
In recent years, the STE (Bengio et al., 2013) (see Section 2.6) became the method of choice to compute an approximate gradient for training DNNs with weights that are represented using a very small number of bits. Such methods typically maintain a set of full-precision weights that are quantized during forward propaga...
D
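As a minimal illustration of the quantization-aware training scheme described above, the sketch below binarizes the weight in the forward pass while the gradient update is applied to the maintained full-precision weight as if quantization were the identity (the straight-through estimator). The one-parameter least-squares loss, function names, and binary (sign) quantizer are simplifying assumptions of this sketch, not the exact method of any cited work:

```python
def quantize(w, w_max=1.0):
    """Binary (sign) quantization: the simplest reduced-precision weight,
    mapping the full-precision weight to +/- w_max."""
    return w_max if w >= 0 else -w_max

def train_step(w_fp, x, y, lr=0.1):
    """One SGD step on the toy loss (q(w) * x - y)**2.
    Forward pass uses the quantized weight q(w); the gradient w.r.t. q(w)
    is passed straight through to the full-precision weight w_fp (STE)."""
    w_q = quantize(w_fp)
    grad = 2 * (w_q * x - y) * x  # d loss / d w_q, applied to w_fp via STE
    return w_fp - lr * grad
```

At test time only the quantized weight `quantize(w_fp)` would be kept, mirroring the scheme in the text.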
Hence, the diameter of the subspace {x_1, …, x_k, a_1, …, a_k}...
Note that whereas the proof of Lemma 1 in [54] takes place at the level of L^∞(X), the proof of Proposition 9.1 given above takes place at the level of simplicial complexes and simplicial maps.
In [80, Theorem 8.10], Z. Virk provided a proof of the Corollary below which takes place at the simplicial level. The proof we give below exploits the hyperconvexity properties of L^∞(X) and also our isomorphism theorem, Theorem...
The complete characterization of the different homotopy types of VR_r(S^1) as r > 0 grows was obtained by Adamaszek and Adams in ...
The following corollary was already established by Gromov (who attributes it to Rips) in [47, Lemma 1.7.A]. The proof given by Gromov operates at the simplicial level. By invoking Proposition 8.1 we obtain an alternative proof which, instead of operating at the simplicial level, exploits the isometric embedding of X...
A
For Task 3, Evaluating Original Space Distances, participants had to judge the quality of the distance preservation in the projection. Most participants from the t-viSNE group chose answer 3—good (but not perfect) distance preservation—which seems to align well with the Shepard Heatmap from Figure 6(b), for example. T...
Task 4, Extracting Patterns from the Projection, consisted simply of determining the number of clusters in the projection. The results from both groups were quite similarly distributed, with most participants choosing 2 clusters (as expected, see e.g. Figure 6(a)). One difference is that 4 participants from the GEP gro...
In Task 2, Deciding About (Ir-)Relevant Sizes of Clusters, the goal was to determine the relative density (or, conversely, the sparsity) of the clusters. The expected answer—see the visualization in Figure 6(c), for example—is that the benign cluster is denser (even though it may appear less dense, when no extra infor...
For Task 3, Evaluating Original Space Distances, participants had to judge the quality of the distance preservation in the projection. Most participants from the t-viSNE group chose answer 3—good (but not perfect) distance preservation—which seems to align well with the Shepard Heatmap from Figure 6(b), for example. T...
For Task 5, Observing and Exploring Shapes, participants were asked to determine the least important dimension that affected the shape of the clusters. All participants from the t-viSNE group chose answer 4, mitoses, in agreement with our own observations for this data set (e.g., Figure 6(d)) and previous work (e.g., ...
A
One possible criterion is to use all the individuals in the population to generate the movement of each solution. In these algorithms, all individuals have a degree of influence on the movement of the other solutions. Such a degree is usually weighted according to the fitness difference and/or distance between solution...
In this group (the most populated in this second taxonomy), the movement of each solution is influenced only by a small group of representative solutions. It is often the case that these representative solutions are selected to be the best solutions found by the algorithm (as per the objective of the problem...
Bearing the above criteria in mind, Figure 5 shows the classification reached after our literature analysis. The plot indicates, for the 518 reviewed algorithms, the number and ratio of proposals classified in each category and subcategory. It can be observed that in most nature- and bio-inspired algorithms, new solut...
Differential Vector Movement, in which new solutions are produced by a shift or a mutation performed onto a previous solution. The newly generated solution could compete against previous ones, or against other solutions in the population to achieve a space and remain therein in subsequent search iterations. This soluti...
Algorithms within this category do not resort to representative solutions of the entire population (such as the current best), but they only consider solutions of a subset or group of the solutions in the population. When the differential movement considers both a group and a representative of all the population, the ...
A
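A common instance of the differential vector movement described above is the Differential Evolution mutation, where a donor vector is built by shifting a base individual by the weighted difference of two others, v = x_r1 + F · (x_r2 − x_r3). The sketch below is illustrative; the function name and the default scale factor F = 0.5 are assumptions:

```python
import random

def de_mutation(population, f=0.5):
    """Differential vector movement, DE-style: pick three distinct
    individuals r1, r2, r3 and return the donor vector
    v = x_r1 + f * (x_r2 - x_r3)."""
    r1, r2, r3 = random.sample(range(len(population)), 3)
    x1, x2, x3 = population[r1], population[r2], population[r3]
    return [a + f * (b - c) for a, b, c in zip(x1, x2, x3)]
```

The donor would then compete against (or be recombined with) existing solutions, as the text describes.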
To study the impact of different parts of the loss in Eq. (12), the performance with different λ is reported in Figure 4. From it, we find that the second term (corresponding to problem (7)) plays an important role, especially on UMIST. If λ is set to a large value, we may get the trivi...
To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the i-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, T, is set to 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo...
It should be emphasized that a large k_0 frequently leads to capturing wrong information. After the transformation of GAE, the nearest neighbors are more likely to belong to the same cluster
Figure 1: Framework of AdaGAE. k_0 is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update ...
(1) Via extending the generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec...
B
Our techniques are not susceptible to false positives, that is, classification of the tested network as filtering spoofed packets when in fact it does not do so. This is a side effect of our methodology - only when spoofing is not filtered will the “test action” be triggered.
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the...
We define the result of SMap evaluation successful (i.e., true positive) if at least one of the three tests outputs that the tested network does not filter spoofed packets: either the IPID value on the server in the tested network was incremented as expected (IPID test) or we receive a query at our domain (DNS test) o...
The challenge here is to accurately probe the increment rate of the IPID value (caused by the packets from other sources not controlled by us), in order to be able to extrapolate the value that will have been assigned to our second probe from a real source IP. This allows us to infer whether the spoofed packets incremente...
IPID technique. When spoofing is not filtered, the counter on the server will be incremented, which is the test action. At the probing phase the counter's value will be equal to or larger than the expected value after the increment phase. The repeated measurements ensure that we do not accidentally interpret noise (i.e., pac...
D
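The IPID test logic described above can be roughly sketched as follows. The 16-bit wraparound handling, the function names, and the pass threshold (extrapolated background increment plus the number of injected packets) are simplifying assumptions of this sketch, not the authors' exact procedure:

```python
def ipid_delta(a, b):
    """Increment from IPID a to IPID b on a 16-bit wrapping counter."""
    return (b - a) % (1 << 16)

def spoofing_not_filtered(ipid_start, ipid_end, background_rate, elapsed, n_spoofed):
    """Return True if the observed IPID increment is at least the background
    increment extrapolated from the measured rate plus the n_spoofed packets
    we injected, i.e. the spoofed packets reached the server."""
    expected_background = background_rate * elapsed
    return ipid_delta(ipid_start, ipid_end) >= expected_background + n_spoofed
```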
s = ReLU(W_xs · x + b_s), …, d = …(… · h_{p−1} + b_d), ŷ = W_dy · d + b_y.
TABLE I: Mean generalization accuracy. Listed is the classification accuracy (correct / total) of various models evaluated on the unseen testing data, i.e., batch T. The values represent the average accuracy over 30 trials. The final column lists the mean of the values for batches 3 through 10. A bolded valu...
The second comparison is between the weighted ensembles of SVMs, i.e., the state of the art [7], and the weighted ensembles of neural networks. For each batch, an SVM and a neural network were trained with that batch as the training set. Weighted ensembles were constructed for each batch T by assigning weig...
For each batch T from 3 through 10, the batches 1, 2, …, T−1 were used to train skill NN and context+skill NN models for 30 random initializations of the starting weights. The accuracy was measured classifying examples from batch T (Fig. 3A, Table 1, Skill...
Figure 3: Generalization accuracy. The generalization accuracy of each model was evaluated on batch T. For each model type and every batch, 30 models were trained. The line represents the average over the 30 trials, and the error bar is the 95% confidence interval. (A.) The skill and context+skill models are...
A
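A weighted-vote ensemble of the kind described above can be sketched as follows; the function name and the tie-breaking behavior (first class reaching the maximum total weight wins) are assumptions of this sketch:

```python
def weighted_ensemble_predict(predictions, weights):
    """Weighted-vote ensemble: each member model casts its class prediction
    with a weight (e.g. its accuracy on held-out batches); the class with
    the largest total weight wins."""
    totals = {}
    for pred, w in zip(predictions, weights):
        totals[pred] = totals.get(pred, 0.0) + w
    return max(totals, key=totals.get)
```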
Now we can define the tables A^(1), A^(2) and A^(3) that our algorithm uses. Recall that for...
A[i, B] := a representative set containing pairs (M, x), where M is a perfect matching on B ∈ ℬ_i and x is a real number equal to the minimum total length of a path cover of P_0 ∪ ⋯ ∪ P_{i−1} ∪ B realizing the matching M.
A^(1)[i, B] := a representative set containing pairs (M, x), where M is a perfect matching on B ∈ ℬ_i^(1) and x is a real number equal to the minimum total length of a path cover of P_0 ∪ ⋯ ∪ P_{i−1} ∪ B realizing the matching M.
A[i, B] := a representative set containing pairs (M, x), where M is a perfect matching on B and x is a real number equal to the minimum total length of a path cover of P_0 ∪ ⋯ ∪ P_{i−1} ∪ B realizing the matching M.
A^(2)[i, B] := a representative set containing pairs (M, x), where M is a perfect matching on B ∈ ℬ_i^(2) and x is a real number equal to the minimum total length of a path cover of P_0 ∪ ⋯ ∪ P_{i−1} ∪ B realizing the matching M.
A
Note that it is not known whether the class of automaton semigroups is closed under taking the opposite semigroup [3, Question 13]. In defining automaton semigroups, we make a choice as to whether states act on strings on the right (as in this paper) or the left,
During the research and writing for this paper, the second author was previously affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment...
idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then S ⋆ T is an automaton semigroup. For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups:
from one to the other, then their free product S ⋆ T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups¹. Note that the c...
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
D
We compare the training accuracies to analyze the regularization effects. As shown in Table 1, the baseline method has the highest training results, while the other methods cause 6.0-14.0% and 3.3-10.5% drops in the training accuracy on VQA-CPv2 an...
We compare four different variants of HINT and SCR to study the causes behind the improvements including the models that are fine-tuned on: 1) relevant regions (state-of-the-art methods) 2) irrelevant regions 3) fixed random regions and 4) variable random regions. For all variants, we fine-tune a pre-trained UpDn, whi...
As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the p...
We compare the training accuracies to analyze the regularization effects. As shown in Table 1, the baseline method has the highest training results, while the other methods cause 6.0-14.0% and 3.3-10.5% drops in the training accuracy on VQA-CPv2 an...
We compare the baseline UpDn model with HINT and SCR variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Fu...
B
Prior work in privacy and human-computer interaction establishes the motivation for studying these documents. Although most internet users are concerned about privacy (Madden, 2017), Rudolph et al. (2018) reports that a significant number do not make the effort to read privacy notices because they perceive them to be ...
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus PrivaSeer Corpus on privacy policies ...
We selected those URLs which had the word “privacy” or the words “data” and “protection” from the Common Crawl URL archive. We were able to extract 3.9 million URLs that fit this selection criterion. Informal experiments suggested that this selection of keywords was optimal for retrieving the most privacy policies with...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)...
To build the PrivaSeer corpus, we create a pipeline concentrating on focused crawling (Chakrabarti et al., 1999; Diligenti et al., 2000) of privacy policy documents. We used Common Crawl (https://commoncrawl.org/), described below, to gather seed URLs to privacy policies on the web. We filtered the Common Crawl URLs to...
D
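The URL keyword selection described above (keep URLs containing "privacy", or both "data" and "protection") can be sketched as a simple filter; the function name is an assumption of this sketch:

```python
def is_candidate_privacy_url(url):
    """Keyword filter for candidate privacy-policy URLs: keep a URL if it
    contains 'privacy', or both 'data' and 'protection'."""
    u = url.lower()
    return "privacy" in u or ("data" in u and "protection" in u)
```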
The selection of S2 leads us to 170 models, cf. Figure 7(d). By selecting these models, we get a new prediction space projection, shown in Figure 7(b). While some predictions are clearly in the positive or negative class, we focus on the unclear cases and select them using the lasso tool. Th...
Evaluation of the Results with the Test Data Set. To confirm that our findings are solid, we applied the resulting metamodel to the same test data as Skeppstedt et al. [51], see Table 1. For the hypotheticality category, the reported f1-score for the baseline approach was 66%.
The training data set was collected using the tool by Kucher et al. [23] and it consists of 2,095 instances of annotated training samples. The 300 feature vectors are based on the counts of the most frequent words in the corpus. The data set is very imbalanced, with most cases being on the absence side. Skeppstedt et a...
Using our approach, we managed to achieve an f1-score of approximately 82% compared to 54% reported by Skeppstedt et al. [51] for the baseline approach. Finally, it is important to note that, while our approach seems to perform very well for both applications described in this paper, the gain does not come only from th...
In this section, we describe how StackGenVis can be used to improve the results of sentiment/stance detection in texts from social media, when compared to previous work from Skeppstedt et al. [51]. The authors studied the automatic detection of seven stance categories: certainty, uncertainty, hypotheticality, predicti...
A
By using the pairwise adjacency of (v, [112]), (v, [003]), and (v, [113]), we can confirm that in the 3 cases, these
(E^C, (2̄, (u_2, [013]))), (E^C, ((u_1, [112]), (u_2, [010])))...
cannot be adjacent to 2̄ nor 3̄, and so f′ is [013] or [010].
Then, by using the adjacency of (v, [013]) with each of (v, [010]), (v, [323]), and (v, [112]), we can confirm that
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
C
We use Transformer [Vaswani et al., 2017] as the base model in the dialogue generation experiment. In Persona, we use pre-trained GloVe embeddings [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019].
To answer RQ2, we find, for each task in Persona, the fine-tuning epoch at which its BLEU and C Score respectively reach their best, in order to assess the impact of data quantity and the task profile (persona description) on fine-tuning (Table 1). We cluster the tasks with similar best fine-tuning epoch numbers and calculate the aver...
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
We use Transformer [Vaswani et al., 2017] as the base model in the dialogue generation experiment. In Persona, we use pre-trained GloVe embeddings [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019].
B
Unmanned aerial vehicles (UAVs) have many critical applications in civilian and military areas [1, 2], such as fire fighting, plant protection, remote monitoring, etc. Therein, a UAV acts as an Internet of Things (IoT) node or a peripheral node to assist IoT. For instance, it is very attractive to deploy UAVs to assis...
In such mission-driven UAV networks, high-data-rate inter-UAV communications play a pivotal role. MmWave band has abundant spectrum resource, and is considered as a potential avenue to support high-throughput data transmission for UAV networks [9, 10, 7]. If the Line-of-Sight (LoS) propagation is available, mmWave comm...
When considering UAV communications with UPA or ULA, a UAV is typically modeled as a point in space without considering its size and shape. Actually, the size and shape can be utilized to support more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduce...
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da...
In this paper, we consider a dynamic mission-driven UAV network with UAV-to-UAV mmWave communications, wherein multiple transmitting UAVs (t-UAVs) simultaneously transmit to a receiving UAV (r-UAV). In such a scenario, we focus on inter-UAV communications in UAV networks, and the UAV-to-ground communications are not in...
A
Thus, $\bar{a}|\bar{b}$-regular digraphs with size $\bar{M}$ can be characterized as $\bar{a}|\bar{b}$-biregula...
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
This will be bootstrapped to the multi-color case in later sections. Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on the left must be connected, via the unique edge relation, to every node on the ri...
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument...
C
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe...
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
A
In this paper, we replace residual connections of the Transformer with depth-wise LSTMs, to selectively manage the representation aggregation of layers benefiting performance while ensuring convergence of the Transformer. Specifically, we show how to integrate the computation of multi-head attention networks and feed-...
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transform...
We show that the 6-layer Transformer using depth-wise LSTM can bring significant improvements in both WMT tasks and the challenging OPUS-100 multilingual NMT task. We show that depth-wise LSTM also has the ability to support deep Transformers with up to 24 layers, and that the 12-layer Transformer using depth-wis...
Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de...
Notably, on the En-De task, the 12-layer Transformer with depth-wise LSTM already outperforms the 24-layer vanilla Transformer, suggesting efficient use of layer parameters. On the Cs-En task, the 12-layer model with depth-wise LSTM performs on a par with the 24-layer baseline. Unlike in the En-De task, increasing dep...
C
$\upsigma_{1}$-structure $A\in X$ is the $\upsigma_{2}$-structure $f(A)$ with domain $|f(A)|\triangleq\{a\in|A|\mid A\models\delta(a)\}$...
$(a_{1},\dots,a_{n})\in\mathbf{R}^{A}$, $(f(a_{1}),\dots,f(a_{n}))\in\mathbf{R}^{B}$...
$(a_{1},\dots,a_{n})\in|A|^{n}$, $(f(a_{1}),\dots,f(a_{n}))\in\mathbf{R}^{B}$...
by $|f_{i}(A)|\triangleq|A|$ and $(a_{1},\dots,a_{n})\in\mathbf{R}^{f_{i}(A)}$...
and such that $(a_{1},\dots,a_{n})\in\mathbf{R}^{f(A)}\iff A\models\rho_{R}(a_{1},\dots,a_{n})$...
D
Figure 1: Method Comparisons. (a) Previous learning methods, (b) Our proposed approach. We aim to transfer the traditional calibration objective into a learning-friendly representation. Previous methods roughly feed the whole distorted image into their learning models and directly estimate the implicit and heterogeneo...
1. The proposed ordinal distortion is a learning-friendly representation for neural networks, which is explicit and homogeneous compared with the implicit and heterogeneous distortion parameters. Thus, our learning model gains sufficient distortion perception of features and shows faster convergence. Moreover, this re...
The proposed learning representation offers three unique advantages. First, the ordinal distortion is directly perceivable from a distorted image, and it solves a more straightforward estimation problem than the implicit metric regression. As we can observe, the farther the pixel is away from the principal point, the l...
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimate...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
B
Apart from these empirical findings, there have been some theoretical studies on large-batch training. For example, the convergence analyses of LARS have been reported in [34]. The work in [37] analyzed the inconsistency bias in decentralized momentum SGD and proposed DecentLaM for decentralized large-batch training.
Furthermore, researchers in [19] argued that the extrapolation technique is suitable for large-batch training and proposed EXTRAP-SGD. However, experimental implementations of these methods still require additional training tricks, such as warm-up, which may make the results inconsistent with the theory.
We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33] proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11]
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
A
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ...
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions. To continue this example, there may be further constraints on $F_{I}$...
There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality $|\mathcal{S}|$. Our polynomial-scenarios algorithms are carefully designed to make $|\mathcal{S}|$...
We are given a set of clients $\mathcal{C}$ and a set of facilities $\mathcal{F}$, in a metric space with a distance function $d$. We let $n=|\mathcal{C}|$ and $m=|\mathcal{F}|$. Our paradigm unfolds in two stages...
For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here, $\mathcal{F}$ and $\mathcal{C}$...
D
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent), rather than being i.i.d. graph sequences as in [12]-[15], and additive and...
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi...
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independency with identical distribution, Markovian...
We have studied the distributed stochastic subgradient algorithm for the stochastic optimization by networked nodes to cooperatively minimize a sum of convex cost functions. We have proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditio...
A
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata, while the approaches of the $k$-anonymity family sanitize the origi...
D
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
Table 2: PointRend’s step-by-step performance on our own validation set (splitted from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an...
B
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info...
C
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inh...
In practice, the transition function $\mathbb{P}$ is unknown, and the state space might be so large that it is impossible for the learner to fully explore all states. If we parametrize the action-value function in a linear form as $\langle\bm{\phi}(\cdot,\cdot),\bm{w}\rangle$...
After showing the action-value function estimate is the optimistic upper bound of the optimal action-value function, we can derive the dynamic regret bound within one epoch via recursive regret decomposition. The dynamic regret within one epoch for Algorithm 1 with the knowledge of $B_{\bm{\theta},\mathcal{E}}$...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202...
Our proposed algorithm LSVI-UCB-Restart has two key ingredients: least-squares value iteration with upper confidence bound to properly handle the exploration-exploitation trade-off (Jin et al., 2020), and restart strategy to adapt to the unknown nonstationarity. Our algorithm is summarized in Algorithm 1. From a high-...
D
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
75 of the 104 responses fulfilled the criterion of having respondents who were currently based in Singapore. This set was extracted for further analysis and will be henceforth referred to as ‘SG-75’. The details on the participant demographics of SG-75 are shown in Table 1. From SG-75, two more subsets were formed via ...
The survey was written in English and made available to anyone with the hyperlink. Participation was fully voluntary. For dissemination, various channels were employed including a mailing list of students from a local Singapore university, an informal Telegram supergroup joined by students, alumni, and faculty of the ...
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio...
C
Conventional KG embedding approaches broadly fall into two types: Triplet-based and GNN-based methods. Triplet-based methods include translational methods [11, 27, 28], semantic matching methods [29, 30, 31, 32], and neural methods [33, 34]. For a detailed understanding, interested readers can refer to surveys [3, 35, ...
Although GCN and GAT are generally regarded as inductive models for graph representation learning, our analysis in previous sections suggests their limited applicability on relational KG embedding. In further validation of this, we compare the performance of decentRL with AliNet and GAT on datasets containing new enti...
The proposed DAN is compatible with most existing GNN-based methods, allowing these methods to leverage our DAN as the GNN module for entity encoding. Furthermore, the computational cost is comparable to that of existing methods. Therefore, we offer an efficient and general GNN architecture for KG embedding.
GNN-based methods [13, 37, 38, 39, 40, 41, 42] introduce relation-specific composition operations to combine neighbors and their corresponding relations before performing neighborhood aggregation. They usually leverage existing GNN models, such as GCN and GAT [43, 44], to aggregate an entity’s neighbors. It is worth no...
The existing methods for KG embedding and word embedding exhibit even more similarities. As shown in Figure 1, the KG comprises three triplets conveying similar information to the example sentence. Triplet-based KG embedding models like TransE [11] transform the embedding of each subject entity and its relation into a ...
C
Figure 5: Result of VDM in ‘Noisy-Mnist’. (a) When we input an image of digit ‘0’, we sample 10 latent variables $\{\mathbf{z}_{1},\dots,\mathbf{z}_{10}\}$ and generate...
As an example, we model the transition dynamics in the MDP of ‘Noisy-Mnist’ in Fig. 2. We first use an ensemble-based model that contains three individual encoder-decoder networks as a baseline. According to recent research in model-based RL [48], the ensemble model with probabilistic neural networks achieves the state-o...
The ensemble-based baseline contains three individual encoder-decoder networks. As shown in Fig. 4, three images are generated from each model with the same input. We do not average the outputs of the three models. In (a), we use the image of digit ‘0’ as the input and generate a prediction from each network in the en...
Figure 4: Result of the probabilistic-ensemble dynamic model in ‘Noisy-Mnist’. (a) When we input an image of the digit ‘0’, three images are generated from different models. Different models all generate the correct prediction of image class but lacks the diversity of writing styles. (b) When we input an image of the d...
We analyze the possible reasons in the following. (i) The probabilistic-ensemble model proposed in [48] is used in continuous control tasks, where the state is low-dimensional and unstructured. However, Noisy-Mnist has high-dimensional image-based observations. The probabilistic ensemble may not be suitable for this setti...
A
Until today, the classic Gauss quadrature formula is the best approach to approximating integrals $I_{\mathrm{Gauss}}(f)\approx\int_{\Omega}f(x)\,\mathrm{d}x$...
We complement the established notion of unisolvent nodes by the dual notion of unisolvence. That is: for given arbitrary nodes $P$, determine the polynomial space $\Pi$ such that $P$ is unisolvent with respect to $\Pi$. In doing so, we revisit earlier results by Carl de Boor and Amon Ros...
However, we only use the $P_{A}$, $A=A_{m,n,p}$, $p=1,2$, unisolvent nodes to determine the interpolants, whereas Tr...
convergence rates for the Runge function, as a prominent example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality....
Leslie Greengard, Christian L. Mueller, Alex Barnett, Manas Rachh, Heide Meissner, Uwe Hernandez Acosta, and Nico Hoffmann are deeply acknowledged for their inspiring hints and helpful discussions. Further, we are grateful to Michael Bussmann and thank the whole CASUS institute (Görlitz, Germany) for hosting stimulatin...
D
3(a)) Illustration of the projection mapping trained on two collections of samples generated from two different target distributions with $m=n=100$. Here the red and blue points are generated from Gaussian distributions with two different covariance matrices.
The computation of projected Wasserstein distance was recently studied in [43, 32, 34]. We use the Riemannian gradient method discussed in [32, Algorithm 3] to compute the projected Wasserstein distance, where the details of the corresponding algorithm are summarized in Appendix B.
While the Wasserstein distance has wide applications in machine learning, the finite-sample convergence rate of the Wasserstein distance between empirical distributions is slow in high-dimensional settings. We propose the projected Wasserstein distance to address this issue.
The finite-sample convergence of general IPMs between two empirical distributions was established. Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance has a minor dependence on the dimension of target distributions, which alleviates the curse of dimensionality.
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized. The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by ...
A
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$...
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
We introduce the DS-VAE framework for learning DR without compromising on the reconstruction quality. DS-VAE can be seamlessly applied to existing DGM-based DR learning methods, therefore, allowing them to learn a complete representation of the data.
Figure 1: Image reconstruction using $\beta$-TCVAE (Figure 1b) and DS-VAE (Figure 1d). DS-VAE is able to take the blurry output of the underlying $\beta$-TCVAE model and learn to render a much better approximation to the target (Figure 1a). Figure 1c shows the effect of perturbing $Z$. DS-VA...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs. In this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervise...
B
To simulate the aforementioned structural computer theory, a device in the form of a USB connection was used. However, as the circuit grows in size, a number of USB-connected simulation devices are required, resulting in cost problems. We decided to verify that the structural computer theory presented so far is actually working...
If a pair of lines of the same color is connected, the state is 1; if broken, 0. The sequence pair of states of the red line ($\alpha$) and blue line ($\beta$) determines the transmitted digital signal. Thus, signal cables require one transistor for switching action at the end. When introducing the concept of an inve...
The graph described in Fig. 4 is an implementation of an XOR gate combining NAND and OR, expressed in 33 vertices and 46 main lines. Graphs are expressed in red and blue numbers in cases where there is no direction of the main line (the main line that can be passed in both directions) and the direction of the main line (the ma...
Optical logic aggregates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the number shown in Fig. 5 can be applied to the ...
We will look at the inputs through 18 test cases to see if the circuit is acceptable. Next, it verifies with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
B