| context | A | B | C | D | label |
|---|---|---|---|---|---|
…$\frac{\frac{(a+1)(c-b)z}{c(c+1)}}{\frac{(a+1-b)z}{c+1}+1-\cdots}\,\frac{\frac{(a+2)(c+1-b)z}{(c+1)(c+2)}}{\frac{(a+2-b)z}{c+2}+1-\cdots}$, from the continued fraction $\frac{F(a,b;c;z)}{F(a+1,b+1;c+1;z)}\equiv\frac{-bz}{\cdots}$... | $\Delta x=-\frac{f(x)}{f'(x)}\Big/\left(1-\frac{f(x)}{2f'(x)}\,\frac{f''(x)}{f'(x)}\right)$... | This already suffices to implement the standard Newton iteration, i.e., to approximate (1) by $\Delta x=-f(x)/f'(x)$. | to not exist because $R_n^m$ changes sign over the integration interval.
(i) (14) suggests splitting $R_n^m$... | $f(x+\Delta x)\approx f(x)+\Delta x\,f'(x)+\frac{(\Delta x)^{2}}{2!}f''(x)+\frac{(\Delta x)^{3}}{3!}f'''(x)\approx 0$. | B |
This is achieved by using specific upper and lower triangular transvections to avoid using a discrete logarithm oracle. Building on Lemma 3.2 we construct transvections which are upper triangular matrices.
Here, as per Section 3.1, $\omega$ denotes a primitive element of $\mathbb{F}_q$... |
The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left... |
The idea is to eliminate all other entries in the $c$th column, namely to apply elementary row operations to make the entries in rows $i=r+1,\ldots,d$ of column $c$ equal to zero. Specifically, $g$ is multiplied on the left by the transvec... | Let $i\in\{1,\dotsc,d-1\}$. Getting the diagonal entry of $h$ at position $(i,i)$ to $1$ requires the following number of operations. We start by adding the column $i+1$ to column $i$ as in Line 5. We alre... | Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$.
Then the elementary column operations can be used to reduce the other entries in row $r$ to zero. | A |
It then follows from Lemma 1 that $1\leq\alpha_{i}^{F}\leq\alpha$ for all the local eigenvalues. Thus, $\tilde{\Lambda}_{h}^{\triangle}=\tilde{\Lambda}_{h}^{f}$... |
The key to approximating (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That al... | The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis... | Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case, our method is similar to that of [MR3591945], with some differences. First we consider that T... |
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | C |
Moreover, Alg-A is more stable than the alternatives.
During the iterations of Alg-CM, the coordinates of three corners and two midpoints of a P-stable triangle (see Figure 37) are maintained. These coordinates are computed numerically, so their true values can differ from the values stored in the computer. Alg-CM uses a... | Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted) whereas Alg-CM computes at most $5n$ triangles (proved in [8]), as does Alg-K.
(By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]),
Alg-A is conceptually simpler. Alg-CM is called “involved” by its own authors, as it contains complicated subroutines for handling many subcases. |
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$'s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code lengths for this step is 1:7 between Alg-A and Alg-CM. |
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | B |
For the evaluation, we developed two kinds of classification models: traditional classifiers with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 4.2... | Single Tweet Model Settings. For the evaluation, we shuffle the 180 selected events and split them into 10 subsets which are used for 10-fold cross-validation (we make sure to include near-balanced folds in our shuffle). We implement the 3 non-neural-network models with Scikit-learn (scikit-learn.org). Furthermore, ne... | Single Tweet Classification Results. The experimental results are shown in Table 2. The best performance is achieved by the CNN+LSTM model with a good accuracy of 81.19%. The non-neural-network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy and the other two non-neural models are eve... |
For the evaluation, we developed two kinds of classification models: traditional classifiers with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 4.2... |
Rumor Detection Model Settings. For the time series classification model, we only report the best performing classifiers, SVM and Random Forest, here. The parameters of SVM with RBF kernel are tuned via grid search to $C=3.0$, $\gamma=0.2$. For Random Forest, the number of t... | A |
The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile
continuing to optimize long after we have zero training ... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | We should not rely on plateauing of the training loss, or on the loss (logistic or exp or cross-entropy) evaluated on validation data, as measures to decide when to stop. Instead, we should look at the 0–1 error on the validation dataset. We might improve the validation and test errors even when the decrease ... | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail
part does not affect the bias. The bias is a... | Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set for which there exists $\mathbf{x}\in\mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}}<0$... | B |
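The stopping rule advocated in the excerpt above (monitor the 0-1 validation error rather than the ever-decreasing loss on separable data) can be sketched as follows; the synthetic data, learning rate, and patience threshold are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: gradient descent on the logistic loss, stopped on the
# 0-1 validation error instead of the loss value.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200) * 2 - 1      # labels in {-1,+1}
Xval, yval = rng.normal(size=(50, 5)), rng.integers(0, 2, size=50) * 2 - 1

w = np.zeros(5)
best_err, patience = 1.0, 0
for t in range(10_000):
    margins = y * (X @ w)
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)   # logistic-loss gradient
    w -= 0.1 * grad
    val_err = np.mean(yval * (Xval @ w) <= 0)                # 0-1 error, not loss
    if val_err < best_err:
        best_err, patience = val_err, 0
    else:
        patience += 1
    if patience > 500:   # stop only once the 0-1 error stops improving
        break
```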
$\mathsf{L}(x^{(i)},y^{(i)})=\mathbb{1}\{y^{(i)}=y_{rumor}\}\log(\tilde{y}_{rumor}^{(i)})+\mathbb{1}\{y^{(i)}=y_{news}\}\log(\tilde{y}_{news}^{(i)})$ | In the lower part of the pipeline, we extract features from tweets and combine them with the CreditScore to construct the feature vector in a time series structure called the Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification.
| The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline,
we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi... |
As observed in (madetecting; ma2015detect), rumor features are very prone to change during an event's development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in (ma2015detect). W... | The effective cascaded model that engages both low- and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402). The model uses the time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to... | C |
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather annotations from the studied times with regard to the previously mentioned event times. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ... | Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event, which is driven by a great variety of factors. We address two major factors that are assumed to have the most influence on the dynamics of events at aspect level, i.e., time and event type. Thus, we... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather annotations from the studied times with regard to the previously mentioned event times. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | B |
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018],
and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular. | The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018],
and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular. | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains,
from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models,
and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015]. | with Bernoulli and contextual linear Gaussian reward functions [Kaufmann et al., 2012; Garivier and Cappé, 2011; Korda et al., 2013; Agrawal and Goyal, 2013b],
as well as for context-dependent binary rewards modeled with the logistic reward function [Chapelle and Li, 2011; Scott, 2015] (Appendix A.3). | C |
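As a concrete instance of the Bernoulli-reward setting cited in this row, here is a minimal sketch of Thompson sampling with Beta posteriors; the arm means and horizon are illustrative assumptions, not taken from the cited papers:

```python
import numpy as np

# Hedged sketch: Thompson sampling for a 3-armed Bernoulli bandit.
rng = np.random.default_rng(1)
true_means = np.array([0.3, 0.5, 0.7])   # unknown to the learner
alpha = np.ones(3)                       # Beta posterior: successes + 1
beta = np.ones(3)                        # Beta posterior: failures + 1

for t in range(1000):
    theta = rng.beta(alpha, beta)        # sample one plausible mean per arm
    a = int(np.argmax(theta))            # play the arm that looks best
    r = rng.random() < true_means[a]     # Bernoulli reward
    alpha[a] += r
    beta[a] += 1 - r

print(alpha / (alpha + beta))            # posterior means concentrate on the 0.7 arm
```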
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients.
The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | This very low threshold for now serves to measure very basic movements and to check the validity of the data.
Patients 11 and 14 are the most active, both having a median of more than 50 active intervals per day (corresponding to more than 8 hours of activity). | The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients.
For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | B |
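The activity measure described in this row (10-minute intervals with at least 10 steps) reduces to a simple windowed count. A minimal sketch on synthetic per-minute step data (the Poisson stand-in is an assumption, not the patients' data):

```python
import numpy as np

# Hedged sketch: count 10-minute intervals with >= 10 steps in one day
# of per-minute step counts.
rng = np.random.default_rng(2)
steps_per_minute = rng.poisson(1.5, size=24 * 60)          # one day, per minute

windows = steps_per_minute.reshape(144, 10)                # 144 ten-minute bins
active_intervals = int(np.sum(windows.sum(axis=1) >= 10))  # threshold of 10 steps
print(active_intervals, "active 10-minute intervals")
```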
For related visual tasks such as semantic segmentation, information distributed over convolutional layers at different levels of the hierarchy can aid the preservation of fine spatial details Hariharan et al. (2015); Long et al. (2015). The prediction of fixation density maps does not require accurate class boundaries ... |
This representation constitutes the input to an Atrous Spatial Pyramid Pooling (ASPP) module Chen et al. (2018). It utilizes several convolutional layers with different dilation factors in parallel to capture multi-scale image information. Additionally, we incorporated scene content via global average pooling over the... |
Our proposed encoder-decoder model clearly demonstrated competitive performance for visual saliency prediction on two datasets. The ASPP module incorporated multi-scale information and global context based on semantic feature representations, which significantly improved the results both qualitatively and quantita... | Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer... | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that result... | A |
Since a marking sequence is just a linear arrangement of the symbols of the input word, computing marking sequences seems to be well tailored to greedy algorithms: until all symbols are marked, we choose an unmarked symbol according to some greedy strategy and mark it. Unfortunately, we can formally show that many nat... | These strategies are – except for the LeftRight strategy – nondeterministic, since there are in general several valid choices of the next symbol to mark. However, we will show poor performance for these strategies independent of the nondeterministic choices (i.e., the approximat... | This proposition points out that even for simple words, all optimal marking sequences can fail to be block-extending. In terms of greedy strategies, however, Proposition 5.4 only shows a lower bound of roughly 2 for the approximation ratio of any greedy algorithm that employs some block-extending greedy strategy (... |
We call a marking sequence $\sigma$ for a word $\alpha$ block-extending if every symbol that is marked, except the first one, has at least one block-extending occurrence. This definition leads to the general combinatorial question of whether every word has an optimal marking sequence that is block-ext... |
Our strongest positive result about the approximation of the locality number will be derived from the reduction mentioned above (see Section 5.2). However, we shall first investigate in Section 5.1 the approximation performance of several obvious greedy strategies to compute the locality number (with “greedy strategie... | A |
In [175] the authors used a CNN to learn the features, and a PCA-based nearest neighbor search was utilized to estimate the local structure distribution.
Besides demonstrating good results, they argue that it is important for accuracy that the CNN incorporate information regarding the tree structure. | Convolutional Neural Networks (CNNs), as shown in Fig. 2, consist of a convolutional part where hierarchical feature extraction takes place (low-level features such as edges and corners and high-level features such as parts of objects) and a fully connected part for classification or regression, depending on the nature... | They argue that the learnt features of their model are more robust to pathology, noise and different imaging conditions, because the learning process exploits the characteristics of vessels in all training images.
In [177] the authors employed unsupervised hierarchical feature learning using a two-level ensemble of sp... | In [90] the authors added noise signals from the NSTDB to the MITDB and then used scale-adaptive thresholding WT to remove most of the noise and a denoising AE to remove the residual noise.
Their experiments indicated that when increasing the number of training samples to 1000, the signal-to-noise ratio increases dramatically aft... | Their model consisted of two parallel parts: statistical learning and rule inference.
In the statistical learning part, the ECGs are preprocessed using bandpass and lowpass filters, then fed to two parallel lead-CNNs, and finally Bayesian fusion is employed to combine the probability outputs. | B |
Notable exceptions are the works of
Oh et al. (2017), Sodhani et al. (2019), Ha & Schmidhuber (2018), Holland et al. (2018), Leibfried et al. (2018) and Azizzadenesheli et al. (2018). Oh et al. (2017) use a model of rewards to augment model-free learning with good results on a number of Atari games. However, this metho... | The structure of the model-based RL algorithm that we employ consists of alternating between learning a model, and then using this model to optimize a policy with model-free reinforcement learning. Variants of this basic algorithm have been proposed in a number of prior works, starting from Dyna-Q (Sutton, 1991) to more... | Notable exceptions are the works of
Oh et al. (2017), Sodhani et al. (2019), Ha & Schmidhuber (2018), Holland et al. (2018), Leibfried et al. (2018) and Azizzadenesheli et al. (2018). Oh et al. (2017) use a model of rewards to augment model-free learning with good results on a number of Atari games. However, this metho... | Sodhani et al. (2019) propose learning a model consistent with the RNN policy, which helps to train policies that are more powerful than their model-free baseline.
Ha & Schmidhuber (2018) present a way to compose a variational autoencoder with a recurrent neural network into an architecture | Using models of environments, or informally giving the agent ability to predict its future, has a fundamental appeal for reinforcement learning. The spectrum of possible applications is vast, including learning policies
from the model (Watter et al., 2015; Finn et al., 2016; Finn & Levine, 2017; Ebert et al., 2017; Haf... | C |
One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz).
Truong et al. [9] used Short... | For the CNN modules with one and two layers, $x_i$ is converted to an image using learnable parameters instead of some static procedure.
The one-layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels). | One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz).
Truong et al. [9] used Short... | Architectures of all $b_d$ remained the same, except for the number of output nodes of the last linear layer, which was set to five to correspond to the number of classes of $D$.
An example of the respective outputs of some of the $m$... | Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification.
Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke. | D |
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... | Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... |
The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we ... | In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal... |
The implementation of the energy criterion strategy has proven effective in facilitating autonomous locomotion mode transitions for the Cricket robot when negotiating steps of varying heights. Compared to step negotiation purely in rolling locomotion mode, the proposed strategy demonstrated significant enhancements in... | D |
Our solution uses an algorithm introduced by Boyar et al. [12] which achieves a competitive ratio of 1.5 using $O(\log n)$ bits of advice. We refer to this algorithm as Reserve-Critical in this paper and describe it briefly. See Figure 2 for an illustration. | The algorithm classifies items according to their size. Tiny items have their size in the range $(0,1/3]$, small items in $(1/3,1/2]$, critical items in $(1/2,2/3]$, and large items in $(2/3,1]$. In addition, the algorithm... | bins
include two items of weight 1/2 (except possibly the last one), which gives a total weight of 1 for the bin. Critical bins all include a critical item of weight 1. So, if $w_{\ell}$, $w_{s}$... | Formally, on the arrival of a critical item, the algorithm places it in a critical bin, opening a new one if necessary. Each arriving tiny item $x$ is packed in the first critical bin which has enough space, with the restriction that the tiny items do not exceed a fraction 1/3 in these bins. If this fails, the... |
Intuitively, Rrc works similarly to Reserve-Critical except that it might not open as many critical bins as suggested by the advice. The algorithm is more “conservative” in the sense that it does not keep two thirds of many (critical) bins open for critical items that might never arrive. The smaller the value of $\alpha$... | A |
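The size classes used by the Reserve-Critical algorithm, as quoted in this row, can be captured in a few lines; this sketch shows only the classification step, not the packing logic:

```python
from fractions import Fraction

# Hedged sketch of the item-size classification described above.
def classify(size: Fraction) -> str:
    if size <= Fraction(1, 3):
        return "tiny"      # (0, 1/3]
    if size <= Fraction(1, 2):
        return "small"     # (1/3, 1/2]
    if size <= Fraction(2, 3):
        return "critical"  # (1/2, 2/3]
    return "large"         # (2/3, 1]

print(classify(Fraction(3, 5)))  # -> "critical"
```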
In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical details and we will study how the local and global value of a term is actual... | In Subsection 4.2 we will introduce the time-aware metric used to evaluate the effectiveness of the classifiers, in relation to the time taken to make the decision. Finally, Subsection 4.4 describes the different types of experiments carried out and the obtained results.
| This subsection describes how classification is carried out.
However, before we illustrate the overall process, and for the sake of simplicity, we are going to assume there exists a function $gv(w,c)$ to value words in relation to categories, and whose formal defini... | In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical details and we will study how the local and global value of a term is actual... | Note that this allows us to compare words across different categories since their values are all normalized in relation to stop words, which should have a similar frequency across all the categories. (Note that we are assuming here that we are working with textual information in which there exist highly frequent ele...) | B |
Sparsification methods, which are also called sparse communication methods, select only a few components of the vector for communicating with the server or the other workers. The most widely used sparsification compressor adopted in sparse communication methods is top-$s$, where each worker selects $s$... | Each worker computes stochastic gradients locally and communicates with the server or other workers to obtain the aggregated stochastic gradients for updating the model parameter. Recently, more and more large-scale deep learning models, such as large language models (Devlin et al., 2019; Brown et al., 2020; Touvron et... | Among existing error-feedback-based sparse communication methods, most are for vanilla DSGD (Aji and Heafield, 2017; Alistarh et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al., 2019).
One error-feedback-based sparse communication method has appeared for DMSGD, called Deep Gradient Compression (... | Due to the presence of compression error, naively compressing the communicated vectors in DSGD or DMSGD will damage convergence, especially when the compression ratio is high.
The most representative technique designed to tackle this issue is error feedback (Stich et al., 2018; Karimireddy et al., 2019), also called... | Researchers have proposed two main categories of communication compression methods for reducing communication cost: quantization (Wen et al., 2017; Alistarh et al., 2017; Jiang and Agrawal, 2018) and sparsification (Aji and Heafield, 2017; Alistarh et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al.... | C |
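A minimal sketch of the top-s compressor with error feedback described in this row: the residual that the compressor drops is stored and re-injected into the next gradient. The vector size, s, and the random gradients are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: top-s sparsification with per-worker error feedback.
def top_s(v: np.ndarray, s: int) -> np.ndarray:
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -s)[-s:]  # s largest-magnitude entries
    out[idx] = v[idx]
    return out

error = np.zeros(1000)              # residual memory of one worker
for step in range(100):
    grad = np.random.randn(1000)    # stand-in for a stochastic gradient
    corrected = grad + error        # error feedback: re-inject residual
    sparse = top_s(corrected, s=10) # communicate only 10 coordinates
    error = corrected - sparse      # remember what was not transmitted
    # ... server aggregates `sparse` from all workers and updates the model
```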
Previous literature has also demonstrated the increased biological plausibility of sparseness in artificial neural networks [24].
Spike-like sparsity on activation maps has been thoroughly researched on the more biologically plausible rate-based network models [25], but it has not been thoroughly explored as a design o... | The increased number of weights and non-zero activations make DNNs more complex, and thus more difficult to use in problems that require corresponding causality of the output with a specific set of neurons.
The majority of domains where machine learning is applied, including critical areas such as healthcare [26], requ... | Using backpropagation [2] the gradient of each weight w.r.t. the error of the output is efficiently calculated and passed to an optimization function such as Stochastic Gradient Descent or Adam [3] which updates the weights making the output of the network converge to the desired output.
DNNs were successful in utilizi... | Previous work by Blier et al. [31] demonstrated the ability of DNNs to losslessly compress the input data and the weights, but without considering the number of non-zero activations.
In this work we relax the lossless requirement and also consider neural networks purely as function approximators instead of probabilist ... | φ𝜑\varphiitalic_φ could be seen as an alternative formalization of Occam’s razor [38] to Solomonov’s theory of inductive inference [39] but with a deterministic interpretation instead of a probabilistic one.
The cost of the description of the data could be seen as proportional to the number of weights and the number o... | A |
Game theory provides an efficient tool for cooperation through resource allocation and sharing [20, 21]. A computation offloading game has been designed in order to balance the UAV's tradeoff between execution time and energy consumption [25]. A sub-modular game is adopted in the scheduling of beaconing periods fo... | Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied to the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit... | In the literature, most works search for PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, this algorithm has limitations. In BLLA, each UAV can calculate and predict its utility for any $s_i\in S_i$... |
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on the current game state, and then another UAV ch... |
Compared with other algorithms, the novel SPBLLA algorithm has advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely used algorithm, LLA, is an ideal method for NE approachin... | D |
$=2\,[\,\overline{dV}^{T}*\{\overline{\mathbf{P}_{1}}\cdot(\widehat{\overline{\nabla}}(\widehat{\mu}\,\widehat{r}^{2}(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{P}_{1}})))\}+\widehat{dV}^{T}*\{\widehat{\mu}\,\widehat{r}^{2}(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{P}_{1}})^{2}\}\,]$ | ...$\widehat{\mu}\,\widehat{r}^{2}\,(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{P}_{3}})^{2}\}\,]+[\,\overline{dV}^{T}*\{\overline{\mathbf{P}_{3}}\cdot(\widehat{\overline{\nabla}}$... | ...$(\overline{\widehat{\nabla}}\,\overline{\omega})^{2}=\widehat{\overline{W}}*[\,\widehat{\mu}\,\{2(\overline{\widehat{Dr}}*\overline{v}_{r})$... | ...$\widehat{\mu}\,\widehat{r}^{2}\,(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{P}_{1}})^{2}\}\,]=2\,[\,\overline{dV}^{T}*\{\overline{\mathbf{P}_{1}}\cdot(\widehat{\overline{\nabla}}$... | ...$\widehat{\mu}\,\widehat{r}^{2}\,(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{P}_{2}})^{2}\}\,]+2\,[\,\overline{dV}^{T}*\{\overline{\mathbf{P}_{2}}\cdot(\widehat{\overline{\nabla}}$... | C |
Let $r$ be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12.
Its abstract lattice $\mathcal{L}_{r}$ is represented to the right. | For convenience we give in Table 7 the list of all possible realities
along with the abstract tuples which will be interpreted as counter-examples to $A\rightarrow B$ or $B\rightarrow A$. | The tuples $t_{1}$, $t_{4}$ represent a counter-example to $BC\rightarrow A$ for $g_{1}$... | First, remark that both $A\rightarrow B$ and $B\rightarrow A$ are possible.
Indeed, if we set $g=\langle b,a\rangle$ or $g=\langle a,1\rangle$, then $r\models_{g}A\rightarrow$... | If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use
$\leq,\land,\lor$ instead of $\leq_{R},\land_{R},\lor_{R}$. | A |
Figure 6 shows the loss metrics of the three algorithms in the CartPole environment; this implies that the Dropout-DQN methods introduce more accurate gradient estimation of policies across different learning trials than DQN. The rate of convergence of one of the Dropout-DQN methods has done more iterations t... | In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e... | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to examine the effect of Dropout on variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to examine the effect of Dropout, because in such an environment the optim... | In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene... |
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation between the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and aft... | A |
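A hedged sketch of the kind of Q-network a Dropout-DQN variant might use: a standard DQN MLP with Dropout between hidden layers. Layer sizes and the dropout rate are illustrative assumptions, not the architecture from the excerpted paper:

```python
import torch
import torch.nn as nn

# Hedged sketch: a DQN-style Q-network with Dropout between hidden layers.
class DropoutQNetwork(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, p: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, n_actions),  # one Q-value per action
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

q = DropoutQNetwork(obs_dim=4, n_actions=2)  # CartPole-sized example
print(q(torch.zeros(1, 4)).shape)            # -> torch.Size([1, 2])
```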
Dice coefficient, $\mathrm{Dice}(\mathcal{A},\mathcal{B})=\frac{2\left|\mathcal{A}\cap\mathcal{B}\right|}{\left|\mathcal{A}\right|+\left|\mathcal{B}\right|}$, and, | where $\bm{\theta}_{s}$ and $\bm{\theta}_{a}$ denote the parameters of the segmentation and adversarial model, respectively. $l_{bce}$... | The quantitative evaluation of segmentation models can be performed using pixel-wise and overlap-based measures. For binary segmentation, pixel-wise measures involve the construction of a confusion matrix to calculate the number of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) pix... |
Figure 14: A $5\times 5$ overlap scenario with (a) the ground truth, (b) the predicted binary masks, and (c) the overlap. In (a) and (b), black and white pixels denote the foreground and the background respectively. In (c), green, grey, blue, and red pixels denote TP, TN, FP, and FN pixels respectively. |
Figure 13: Comparison of cross entropy and Dice losses for segmenting small and large objects. The red pixels show the ground truth and the predicted foregrounds in the left and right columns respectively. The striped and the pink pixels indicate false negatives and false positives, respectively. For the top row (i.e., ... | C |
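The Dice coefficient defined in this row translates directly into code for binary masks; the tiny example masks are illustrative:

```python
import numpy as np

# Hedged sketch of Dice(A, B) = 2|A ∩ B| / (|A| + |B|) for binary masks.
def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt   = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
pred = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
print(dice(gt, pred))  # 2*2 / (3+3) ≈ 0.667
```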
Computing all eigenvectors has a cost $\mathcal{O}(N^{3})$, where $N$ is the number of nodes. However, computing only the eigenvector corresponding to the largest eigenvalue is fast when using the power method [29], whic... | We propose a graph sparsification procedure that reduces the computational cost of MP operations applied after pooling and has a small impact on the representations learned by the GNN.
In particular, we show both analytically and empirically that many edges can be removed without significantly altering the graph struct... | Computing all eigenvectors has a cost $\mathcal{O}(N^{3})$, where $N$ is the number of nodes. However, computing only the eigenvector corresponding to the largest eigenvalue is fast when using the power method [29], whic... | To train the GNN on mini-batches of graphs with a variable number of nodes, we consider the disjoint union of the graphs in each mini-batch and train the GNN on the combined Laplacians and graph signals.
See the supplementary material for an illustration. | We notice that the coarsened graphs are pre-computed before training the GNN.
Therefore, the computational time of graph coarsening is much lower compared to training the GNN for several epochs, since each MP operation in the GNN has a cost $\mathcal{O}(N^{2})$... | D |
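A minimal sketch of the power method referenced in this row, which approximates only the leading eigenvector at O(N^2) per iteration instead of the O(N^3) full eigendecomposition; the test matrix is illustrative:

```python
import numpy as np

# Hedged sketch: power iteration for the leading eigenvector of a
# symmetric matrix A (each iteration is one O(N^2) matrix-vector product).
def power_method(A: np.ndarray, iters: int = 1000, tol: float = 1e-10):
    v = np.random.default_rng(0).normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v, v @ A @ v  # eigenvector and its Rayleigh quotient

A = np.array([[2.0, 1.0], [1.0, 3.0]])
vec, val = power_method(A)
print(val)  # ≈ 3.618, the largest eigenvalue
```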
The input data is normalized to $[-1,1]$.
For generating a wide variety of data, the prioritization of the current path $w_{\text{path}}\sim 1+\lvert\mathcal{N}(0,5)\rvert$... | In all our experiments, stochastic gradient descent with Nesterov momentum as optimizer and cross-entropy loss are used.
The initial learning rate is set to 0.1, momentum to 0.9, and weight decay to 0.0005. The batch size is set to 128 and 512, respectively, for gen... | Figure 6:
Analyzing the influence of training with original data, NRFI data, and combinations of both for different numbers of samples per class. Using only NRFI data ($w_{\text{gen}}=100\%$) achieves better results than using only... | A new random forest is trained every 100 epochs to average the influence of the stochastic process, and the generated data samples are mixed.
In the following, training on generated data will be denoted as NRFI (gen) and training on generated and original data as NRFI (gen+ori). The fraction of NRFI data is se... | fraction of NRFI data $w_{\text{gen}}$ is varied, which weights the loss of the generated data. Accordingly, the weight for the original data is set to $w_{\text{ori}}=1-w_{\text{gen}}$... | C |
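The loss weighting described in this row (generated data weighted by w_gen, original data by w_ori = 1 - w_gen) can be sketched as follows; the model and batches are illustrative placeholders, not the NRFI setup itself:

```python
import torch
import torch.nn.functional as F

# Hedged sketch: mixing the losses of generated and original batches
# with weights w_gen and w_ori = 1 - w_gen.
def mixed_loss(model, x_gen, y_gen, x_ori, y_ori, w_gen: float = 0.5):
    loss_gen = F.cross_entropy(model(x_gen), y_gen)
    loss_ori = F.cross_entropy(model(x_ori), y_ori)
    return w_gen * loss_gen + (1.0 - w_gen) * loss_ori

model = torch.nn.Linear(10, 3)
x_gen, y_gen = torch.randn(8, 10), torch.randint(0, 3, (8,))
x_ori, y_ori = torch.randn(8, 10), torch.randint(0, 3, (8,))
print(mixed_loss(model, x_gen, y_gen, x_ori, y_ori, w_gen=0.8))
```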
In a more practical setting, the agent sequentially explores the state space and, meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ... | step with $\alpha\rightarrow\infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K=H$ episodes and hence equivalently induces... |
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po... | To answer this question, we propose the first policy optimization algorithm that incorporates exploration in a principled manner. In detail, we develop an Optimistic variant of the PPO algorithm, namely OPPO. Our algorithm is also closely related to NPG and TRPO. At each update, OPPO solves a Kullback-Leibler (KL)-regu... | The policy improvement step defined in (3.2) corresponds to one iteration of NPG (Kakade, 2002), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017). In particular, PPO solves the same KL-regularized policy optimization subproblem as in (3.2) at each iteration, while TRPO solves an equivalent KL-constrained s... | C |
Molchanov et al. (2017) exploited this freedom to optimize individual weight dropout rates $w_{\alpha}$ such that weights $w$ can be safely pruned if their dropout rate $w_{\alpha}$... | In the following, we present methods that determine dynamically in the course of forward propagation which structures should be computed or, equivalently, which structures should be pruned.
The intuition behind this idea is to vary the time spent for computing predictions based on the difficulty of the given input samp... | They introduce gates that determine how many recursive quantization steps should be performed which in turn determines the number of used bits.
While the quantization itself is subject to the STE, they propose to train gate probabilities using the Bayesian variational inference framework. | In this section, we start with the unstructured case which includes many of the earlier approaches and continue with structured pruning that has been the focus of more recent works.
Then we review approaches that relate to Bayesian principles before we discuss approaches that prune structures dynamically during forward... | A weight-magnitude-based decision using trainable threshold parameters determines which operation should be performed, allowing for gradient-based training of both the weight parameters and the architecture.
Again, the STE is employed to backpropagate through the threshold function. | A |
In Section 7, we prove a number of results concerning the homotopy types of Vietoris-Rips filtrations of spheres and complex projective spaces. We also fully compute the homotopy types of the Vietoris-Rips filtration of spheres with the $\ell^{\infty}$-norm. | Of central interest in topological data analysis has been the question of providing a complete characterization of the Vietoris-Rips persistence barcodes of spheres of different dimensions. Despite the existence of a complete answer to the question for the case of $\mathbb{S}^{1}$... | In Section 8, we reprove Rips and Gromov's result about the contractibility of the Vietoris-Rips complex of hyperbolic geodesic metric spaces, by using our method consisting of isometric embeddings into injective metric spaces. As a result, we will be able to bound the length of intervals in Vietoris-Rips persistence b... |
In Section 7, we prove a number of results concerning the homotopy types of Vietoris-Rips filtrations of spheres and complex projective spaces. We also fully compute the homotopy types of the Vietoris-Rips filtration of spheres with the $\ell^{\infty}$-norm. | The simplicial complex nowadays referred to as the Vietoris-Rips complex was originally introduced by Leopold Vietoris in the early 1900s in order to build a homology theory for metric spaces [79]. Later, Eliyahu Rips and Mikhail Gromov [47] both utilized the Vietoris-Rips complex in their study of hyperbolic groups.
| B |
Figure 9: Results of the comparative study: the top charts show completion time and tool supportiveness (as judged by participants) for all the tasks of the study, and the bottom row includes the histograms of the participants’ responses in all questions/tasks. The completion times between the two groups were very sim... | The goals of the comparative study presented in this paper were to provide initial evidence of the acceptance of t-viSNE by analysts, the consistency of their results when exploring a t-SNE projection using our tool, and the improvement over another state-of-the-art tool.
The tasks of the study were designed to test ho... |
Figure 9: Results of the comparative study: the top charts show completion time and tool supportiveness (as judged by participants) for all the tasks of the study, and the bottom row includes the histograms of the participants’ responses in all questions/tasks. The completion times between the two groups were very sim... | Study Design
Each participant took part individually (i.e., the study was performed asynchronously for each subject, in a silent room), using the same hardware, and the study was organized into four main steps, which were identical for both groups except that each interacted with the corresponding group’s tool (GEP o... | Finally, the goal of Task 6, Interpreting and Assessing Local Topology, was to find and interpret “unusual” patterns in the projection, more specifically formations that are known to happen in this data set because of identical points, i.e., data points which have the same values for all dimensions. This corresponded t... | C |
Does the physical analogue exist?: The inspiration for several bio-inspired algorithms does not strictly follow the rules of a phenomenon. An example is Cat Swarm Optimization, in which cats form a swarm, but in real life they do not seem to cooperate in any way. The authors show more examples (Coyote Optimization Algorith... | In [18, 19], the authors analyze the algorithm called Intelligent Water Drops, providing several proofs that “all main algorithmic components of Intelligent Water Drops are simplifications or special cases of ant colony optimization (ACO)”. They also examine the natural metaphor of “water drops flowing in rivers remov... |
Algorithms under this category are characterized by the fact that they imitate the behavior of physical or chemical phenomena, such as gravitational forces, electromagnetism, electric charges and water movement (in relation to physics-based approaches), and chemical reactions and gas particle movement as for chemis... |
Nature inspired optimization algorithms or simply variations of metaheuristics? - 2021 [15]: This overview focuses on the frequency of new proposals that are no more than variations of old ones. The authors critique a large set of algorithms based on three criteria: (1) whether there is a physical analogy... | Similar inspiration or duplicate methods?: The authors analyze several classes of bio-inspired algorithms, such as those based on gravitational forces, water phenomena, bees, penguins, wolves, and bacteria, and conclude that not all the different variations are real contributions.
| D |
Figure 1: Framework of AdaGAE. $k_0$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update ... | (1) By extending the generative graph models to general-type data, GAE is naturally employed as the basic representation learning model, and weighted graphs can be applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec... | As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method.
Graph-based clustering methods can capture manifold information so that they are applicable to non-Euclidean data, which is not provided by $k$-means. Therefore,... |
In recent years, GCNs have been studied extensively to extend neural networks to graph-type data. How to design a graph convolution operator is a key issue and has attracted a great deal of attention. Most approaches can be classified into 2 categories: spectral methods [24] and spatial methods [25]. |
In this paper, we propo... | D |
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, which we call the Spoofing Mapper (SMap). We apply SMap to scanning ingress filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that ... |
• Consent of the scanned. It is often impossible to request permission from the owners of all the tested networks in advance; this challenge similarly applies to other Internet-wide studies (Lyon, 2009; Durumeric et al., 2013, 2014; Kührer et al., 2014). Like the other studies (Durumeric et al., 2013, 2014), we ... |
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, which we call the Spoofing Mapper (SMap). We apply SMap to scanning ingress filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that ... | Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 20... |
• Limited coverage. Previous studies infer spoofability based on measurements of a limited set of networks, e.g., those that operate servers with a faulty network stack (Kührer et al., 2014) or networks with volunteers that execute the measurement software (Beverly and Bauer, 2005; Beverly et al., 2009; Mauch, ... | C |
Natural systems need to adapt to a changing world continuously; seasons change, food sources and shelter opportunities vary, and cooperation and competition with other animals evolve over time. Moreover, their embodiment also changes over their lifetime. Young animals experience a period of growth where their size increa... |
Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a b... | While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape... | It is common to try to avoid such changes in artificial agents, machines, and industrial processes. When something changes, the entire system is taken offline and modified to fit the new situation. This process is costly and disruptive; adaptation similar to that in nature might make such systems more reliable and long... | An alternative approach is to emulate adaptation in natural sensor systems. The system expects and automatically adapts to sensor drift, and is thus able to maintain its accuracy for a long time. In this manner, the lifetime of sensor systems can be extended without recalibration.
| C |
The values $\Delta_i$ play an important role in the analysis of the algorithm, and it will be convenient to assume that the $\Delta_i$ are independent.
However, when the $x$-c... | First of all, the $\Delta_i$ are now independent.
Second, as we will prove next, the expected running time of an algorithm on a uniformly distributed point set can be bounded by the expected running time of that algorithm on a point set generated this ... | In the second step, we therefore describe a method to generate the random point set in a different way, and we show how to relate the expected running times in these two settings.
In the third step, we will explain which changes are made to the algorithm. | In the first step, we will show that long edges are unlikely to be viable.
For the second step, recall the definition of the spacing of $p_i$ (in $P$) as $\Delta_i=x_{i+1}-x_i$... | The proof also gives a way to relate the expected running times of algorithms for any problem on two different kinds of random point sets:
a version where the $x$-coordinates of the points are taken uniformly at random from $[0,n]$, and a version where the differences between two consecut... | B |
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
| idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then $S \star T$ is an automaton semigroup.
For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups: | The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
| The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem... |
During the research and writing for this paper, the second author was previously affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment... | D |
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende... |
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende... |
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the p... | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... | We test our regularization method on random subsets of varying sizes. Fig. A6 shows the results when we apply our loss to 1–100% of the training instances. Clearly, the ability to regularize the model does not vary much with respect to the size of the train subset, with the best performance o... | C |
We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page. We trained three random forest models and fine-tuned a transformer based pretrained language model, namely RoBERTa (Liu et al., 2019). The three random forest mod... | To train the RoBERTa model on the privacy policy classification task, we used the sequence classification head of the pretrained language model from HuggingFace (Wolf et al., 2019). We used the pretrained RoBERTa tokenizer to tokenize text extracted from the documents. Since RoBERTa accepts a maximum of 512 tokens as i...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)... | We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page. We trained three random forest models and fine-tuned a transformer based pretrained language model, namely RoBERTa (Liu et al., 2019). The three random forest mod... |
For the URL model, the words in the URL path were extracted and the tf-idf of each term was recorded to create the features (Baykan et al., 2009). As privacy policy URLs tend to be shorter and have fewer path segments than typical URLs, length and the number of path segments were added as features. Since the classes w... | D |
Pie charts on top of projections show probability distributions of action classes. Although this work is not similar to StackGenVis in general, we use a gradient color scale to map the performance of each model in the projected space.
EnsembleMatrix [55] linearly fuses multiple models with the help of a confusion matri... | In our VA system, the user can explore how models perform on each class of the data set, and the performance metrics are instilled into a combined user-driven value. Manifold [66] generates pairs of models and compares them over all classes of a data set, including feature selection. We adopt a similar approach, but in... | Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis. (a) presents the selection of appropriate validation metrics for the specification of the data set. (b) aggregates the information after the exploration of different models and shows the active ones which will be used for th...
| To illustrate how to choose different metrics (and with which weights), we start our exploration by selecting the heart disease data set in StackGenVis (a). Knowing that the data set is balanced, we pick accuracy (weight...
Selection of Algorithms and Models. Similar to the workflow described in section 4, we start by setting the most appropriate parameters for the problem (see Figure 6(a)). As the data set is very imbalanced, we emphasize g-mean over accuracy, and ROC AUC over precision and recall. Log loss is disabled because the inves... | A |
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]),\,p(v,[323]),\,p(v,[313]),\,p(v,[003]))$... | $\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}$... | Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these | $p(v,[013])=p(v,[313])=p(v,[113])=1$. Similarly, when $f=[112]$, | C |
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation.
Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the languag... |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... | In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | B |
...$\ldots,\,e^{j\frac{2\pi}{\lambda_{\mathrm{c}}}\left(\frac{(M-1)d_{\mathrm{cyl}}}{2}\cos\alpha\sin\beta\right)}\big]^{T}$, ... | The CCA codebook based SPAS algorithm is proposed in the previous section to solve the joint CCA subarray partition and AWV selection problem. In this section, the TE-aware beam tracking problem is addressed based on the CCA codebook based SPAS algorithm.
Tracking the AOAs and AODs is essential for beam tracking, which... |
A CCA-enabled UAV mmWave network is considered in this paper. Here, we first establish the DRE-covered CCA model in Section II-A. Then the system setup of the considered UAV mmWave network is described in Section II-B. Finally, the beam tracking problem for the CCA-enabled UAV mmWave network is modeled in Section II-C. | $\mathcal{F}$ and $\mathcal{W}$ are the sets of all analog beamforming vectors and combining vectors satisfying the hardware constraints, respectively.
In fact, solving the above problem (13) requires the new codebook design and codeword selection/processing strategy. Noting the interdependent... |
The rest of this paper is as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Sectio... | D |
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges.
The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from | The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | After the merging the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$.
We perform the... | B |
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... |
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe... | In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
| C |
Though Zhang et al. (2019); Xu et al. (2020b) suggest using a large batch size which may lead to improved performance, we only used a batch size of 25k target tokens (through gradient accumulation of small batches) to fairly compare with previous work Vaswani et al. (2017); Xu et al. (2020a). |
We implemented our approach based on the Neutron implementation of the Transformer Xu and Liu (2019). To show the effects of depth-wise LSTMs on the 6-layer Transformer, we first conducted experiments on the WMT 14 English to German and English to French news translation tasks to compare with the Transformer baseline ... | We used a beam size of 4 for decoding, and evaluated tokenized case-sensitive BLEU with the averaged model of the last 5 checkpoints for the Transformer Base setting and 20 checkpoints for the Transformer Big setting saved at intervals of 1,500 training steps. We also conducted significance ... |
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transform... |
Notably, on the En-De task, the 12-layer Transformer with depth-wise LSTM already outperforms the 24-layer vanilla Transformer, suggesting efficient use of layer parameters. On the Cs-En task, the 12-layer model with depth-wise LSTM performs on a par with the 24-layer baseline. Unlike in the En-De task, increasing dep... | B |
For all $A\in\operatorname{Fin}(\upsigma)$, let $\psi_{A}^{\mathsf{EFO}}$ be the diagram sentence such that $\llbracket\psi_{A}^{\mathsf{EFO}}\rrbracket_{\operatorname{Struct}(\upsigma)}$... | ...$(a^{\prime},y^{\prime})\in V_{1}^{(a^{\prime},y^{\prime})}\subseteq f^{-1}(U)$... | we can write $F=(U^{c}\cap F)\cup(V^{c}\cap F)$ and conclude that $F$ is the disjoint union of two no... | then $\{C\}$ is open in $(\mathcal{C},\uptau_{|C|})$ and therefore $f^{-1}(\{C\})$ is open in $X$. Sinc... | $F\subseteq U^{c}\cup V^{c}$, but $F\subsetneq U^{c}$ and $F\subsetneq V^{c}$... | D |
Qualitative Comparison: To qualitatively show the performance of different learning representations, we visualize the 3D distortion distribution maps (3D DDM) derived from the ground truth and these two schemes in Fig. 8, in which each pixel value of the distortion distribution map represents the distortion level. Sinc... | Figure 13: Qualitative evaluations of the rectified distorted images on real-world scenes. For each evaluation, we show the distorted image and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left ... | Figure 12: Qualitative evaluations of the rectified distorted images on people (left) and challenging (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified re... |
Figure 11: Qualitative evaluations of the rectified distorted images on indoor (left) and outdoor (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified resul... | We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen... | C |
Table 3 shows the training time per epoch of SNGM with different batch sizes. When $B=128$, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with the size of 16, which can not fully utilize the computation power. Hence, compared to other results, SNGM r... |
A direct corollary is that the batch size is constrained by the smoothness constant $L$, i.e., $B\leq\mathcal{O}(1/L)$. Hence, we cannot increase the batch size casually in these SGD based methods. Otherwise, it may slow down the convergence rate, and ... | Table 3 shows the training time per epoch of SNGM with different batch sizes. When $B=128$, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with the size of 16, which can not fully utilize the computation power. Hence, compared to other results, SNGM r... | Please note that EXTRAP-SGD has two learning rates for tuning and needs to compute two mini-batch gradients in each iteration. EXTRAP-SGD requires more time than other methods to tune hyperparameters and train models.
Similarly, CLARS needs to compute extra mini-batch gradients to estimate the layer-wise learning rate ... | argued that SGD with a large batch size needs to increase the number of iterations. Further, authors in [32]
observed that gradients at different layers of deep neural networks vary widely in the norm and proposed the layer-wise adaptive rate scaling (LARS) method. A similar method that updates the model parameter in a... | C |
When the algorithm terminates with $C_{s}=\emptyset$, Lemma 5.2 ensures the solution $z^{\text{final}}$ is integral. By Lemma 5.5, any client $j$ with $d(j,S)>$... | $F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}\mid j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$... | Brian Brubach was supported in part by NSF awards CCF-1422569 and CCF-1749864, and by research awards from Adobe. Nathaniel Grammel and Leonidas Tsepenekas were supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. Aravind Srinivasan was supported in part by NSF awa... | For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here,
$\mathcal{F}$ and $\mathcal{C}$... |
do $F_{A}\leftarrow\{i^{A}_{j}\mid j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$... | B |
In real networked systems, the information exchange among nodes is often affected by communication noises, and the structure of the network often changes randomly due to packet dropouts, link/node failures and recreations, which are studied in [8]-[10].
| such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), et al. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function... | Besides, the network graphs may change randomly with spatial and temporal dependency (i.e. Both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent.) rather than i.i.d. graph sequences as in [12]-[15],
and additive and... | However, a variety of random factors may co-exist in practical environment.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d... |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... | C |
Compared to generalization, the bucketization technique [33, 18] maintains excellent information utility because it preserves all the original QI values. However, most existing approaches cannot prevent identity disclosure, and the existence of individuals in the published table is likely to be disclosed [27]. Furthermore, t... | Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics... | In recent years, the massive digital information of individuals has been collected by numerous organizations. The data holders, also known as curators, use the data for data mining tasks, meanwhile they also exchange or publish microdata for further comprehensive research. However, the publication of microdata poses cr... | Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of the $k$-anonymity family sanitize the origi... | A |
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.... | HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in the original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an... | Table 2: PointRend’s step-by-step performance on our own validation set (split from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | A |
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| We denote by $\varepsilon_{i}\colon\{-1,1\}^{n}\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_{i}(\delta_{1},\ldots,\delta_{n})=\delta_{i}$...
| In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | B |
Corollary 1 shows that if local variations are known, we can achieve near-optimal dependency on the total variation $B_{\bm{\theta}},B_{\bm{\mu}}$ and time horizo... | Motivated by empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhov... | The definition of total variation $B$ is related to the misspecification error defined by Jin et al. (2020). One can apply the Cauchy-Schwarz inequality to show that our total variation bound implies that misspecification in Eq. (4) of Jin et al. is also bounded (but not vice versa). However, the regret analys... | The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and... | Reinforcement learning (RL) is a core control problem in which an agent sequentially interacts with an unknown environment to maximize its cumulative reward (Sutton & Barto, 2018). RL finds enormous applications in real-time bidding in advertisement auctions (Cai et al., 2017), autonomous driving (Shalev-Shwartz et al.... | B |
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio... |
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... |
The survey was written in English and made available to anyone with the hyperlink. Participation was fully voluntary. For dissemination, various channels were employed including a mailing list of students from a local Singapore university, an informal Telegram supergroup joined by students, alumni, and faculty of the ... | 75 of the 104 responses fulfilled the criterion of having respondents who were currently based in Singapore. This set was extracted for further analysis and will be henceforth referred to as ‘SG-75’. The details on the participant demographics of SG-75 are shown in Table 1. From SG-75, two more subsets were formed via ... | C |
where $\mathcal{S}^{+}$, $\mathcal{S}^{-}$ represent the positive entity pair set (i.e., the training set) and sampled negative entity pair set, respectively. The term $\|\cdot\|$... |
In Table 8, we present more detailed entity prediction results on open-world FB15K-237, considering the influence of different decoders. Our observations indicate that decentRL consistently outperforms the other methods across most metrics when using TransE and DistMult as decoders. Furthermore, we provide results on ... | We employ different adaptation strategies for various KG embedding tasks. In entity alignment, we follow the existing GNN-based methods [12, 39] to concatenate the output embeddings from each layer to form the final representation. This process can be written as follows:
| Similarly, for entity prediction, we leverage a decoder to predict missing entities [13]. In our experiments, we employ ComplEx [30] and DistMult [29] as the decoders due to their superior performance without compromising efficiency. We initialize the input entity embeddings, relation embeddings, and weight matrices us... | In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct compr... | C |
To validate the effectiveness of our method, we compare the proposed method with the following self-supervised exploration baselines. Specifically, we conduct experiments to compare the following methods: (i) VDM. The proposed self-supervised exploration method. (ii) ICM [10]. ICM first builds an inverse dynamics mode... |
(i) For the network architecture, the important hyper-parameters include the dimensions of latent space $Z$, the dimensions of state features $d$, and the use of skip-connection between the prior and generative networks. We add an ablation study in Tab. IV to perform a grid search. The result shows t... |
The related exploration methods aim to remove the stochasticity of the dynamics rather than modeling it. For example, Inverse Dynamics [10], Random Features [11], and EMI [30] learn a feature space to remove the task-irrelevant information in feature space such as white-noise. Curiosity-Bottleneck [31] and Dynamic Bot... | We compare the model complexity of all the methods in Table I. VDM, RFM, and Disagreement use a fixed CNN for feature extraction. Thus, the trainable parameters of feature extractor are 0. ICM estimates the inverse dynamics for feature extraction with 2.21M parameters. ICM and RFM use the same architecture for dynamics... |
To validate the effectiveness of our method, we compare the proposed method with the following self-supervised exploration baselines. Specifically, we conduct experiments to compare the following methods: (i) VDM. The proposed self-supervised exploration method. (ii) ICM [10]. ICM first builds an inverse dynamics mode... | C |
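The sample rows above also show the record layout: each row pairs a source excerpt with four candidate continuations and a single gold label (one of A, B, C, D). Below is a minimal, hedged sketch of loading and sanity-checking such a corpus with the Hugging Face `datasets` library; the repository ID, split name, and column names are placeholder assumptions, since the preview does not spell them out.

```python
# Minimal sketch, not the dataset's official loader. Assumes the columns
# suggested by the preview: "context", "A".."D", and "label".
from collections import Counter

from datasets import load_dataset

# "org/dataset-name" and "train" are hypothetical placeholders.
ds = load_dataset("org/dataset-name", split="train")

# Inspect one record: a paper excerpt plus four candidate continuations.
row = ds[0]
print(row["context"][:200])
for option in ("A", "B", "C", "D"):
    print(option, "->", row[option][:80].replace("\n", " "))
print("gold:", row["label"])

# Majority-class baseline: with four roughly balanced classes this should
# sit near 25%, a quick sanity check before fitting any real model.
counts = Counter(ds["label"])
majority_label, majority_count = counts.most_common(1)[0]
print(f"majority class {majority_label}: {majority_count / len(ds):.1%}")
```

Since the label column holds one of four strings, plain accuracy against the gold label is the natural evaluation metric for models trained on this task.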