| context | A | B | C | D | label |
|---|---|---|---|---|---|
$(-1)^{a}\binom{b-1}{-a}\Big[\frac{d^{3}}{dx^{3}}x^{m}F(a,b;c;z)+3\frac{d^{2}}{dx^{2}}x^{m}\frac{d}{dx}F(a,b;c;z)$ ... | $+x^{m}\frac{d^{2}}{dx^{2}}F(a,b;c;z)\Big];$ | $2\frac{d}{dx}x^{m}\frac{d}{dx}F(a,b;c;z)$ ... | $+3\frac{d}{dx}x^{m}\frac{d^{2}}{dx^{2}}F(a,b;c;z)+x^{m}\frac{d^{3}}{dx^{3}}F(a,b;c;z)\Big].$ | $(-1)^{a}\binom{b-1}{-a}\Big[\frac{d^{3}}{dx^{3}}x^{m}F(a,b;c;z)+3\frac{d^{2}}{dx^{2}}x^{m}\frac{d}{dx}F(a,b;c;z)$ ... | C |
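The fragments in this row appear to instantiate the third-order Leibniz (product) rule, which would explain the coefficient pattern 1, 3, 3, 1 across the options; for reference:

```latex
\frac{d^{3}}{dx^{3}}\!\left(x^{m}F\right)
  = \frac{d^{3}x^{m}}{dx^{3}}\,F
  + 3\,\frac{d^{2}x^{m}}{dx^{2}}\,\frac{dF}{dx}
  + 3\,\frac{dx^{m}}{dx}\,\frac{d^{2}F}{dx^{2}}
  + x^{m}\,\frac{d^{3}F}{dx^{3}}
```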
19: $S :=$ composition of $S$ and the MSLP of $h_{\ell}^{-1}h_{r}^{-1}$ ... | The following lemma shows how to compute the matrices of the preprocessing step. Recall that $\omega$ is a primitive element of $\mathbb{F}_{q}=\mathbb{F}_{p^{f}}$ ... | The first step of the algorithm is the one-off computation of $T_{2}$ from the LGO standard generators of $\mathrm{SL}(d,q)$. The length and memory requirement of an MSLP for this step is as follows. | First we describe the preprocessing phase during which we initialize the memory of the MSLP to encode particular matrices which will be useful for expressing diagonal matrices as words independently of the given diagonal matrix. The constructed matrices can be reused for all diagonal matrices, and so further diagonal m... | We now compute upper bounds for the length and memory quota of an MSLP for expressing an arbitrary diagonal matrix $h\in\mathrm{SL}(d,q)$ as a word in the LGO generators, i.e. the computation phase of the algorithm. | C |
The key to approximating (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That al... | mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov... | Solving (22) efficiently is crucial for the good performance of the method, since it is the only large-dimensional system of (21), in the sense that its size grows with order $h^{-d}$. | It is essential for a performant method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it would be reasonable to solve them locally using patches of elements. We note that the ide... | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ... | C |
We think Alg-A is better in almost every aspect, essentially because it is simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity $O(n)$ than the others: | Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), as does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$'s vertices and (2) searching for the next candidate from a given one is much easier; the ratio of code length for this part is 1:7 between Alg-A and Alg-CM. | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is called "involved" by its own authors, as it contains complicated subroutines for handling many subcases. | A |
Single Tweet Model Settings. For the evaluation, we shuffle the 180 selected events and split them into 10 subsets which are used for 10-fold cross-validation (we make sure to include near-balanced folds in our shuffle). We implement the 3 non-neural-network models with Scikit-learn (scikit-learn.org). Furthermore, ne... | Most relevant for our work is the work presented in [20], where a time series model is used to capture the time-based variation of social-content features. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys... | We tested all models by using 10-fold cross-validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; TS-SVM it... | . As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit afte... | Single Tweet Classification Results. The experimental results are shown in Table 2. The best performance is achieved by the CNN+LSTM model with a good accuracy of 81.19%. The non-neural-network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy and the other two non-neural models are eve... | D |
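The context cell describes shuffling 180 events into 10 near-balanced folds with Scikit-learn; a minimal sketch of that setup, assuming hypothetical placeholder arrays `X` (features) and `y` (labels), not the paper's actual features:

```python
# Sketch only: StratifiedKFold shuffles once and keeps folds near-balanced across classes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

X = np.random.rand(180, 16)            # placeholder event features
y = np.random.randint(0, 2, size=180)  # placeholder rumor/non-rumor labels

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in skf.split(X, y):
    clf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(f"10-fold mean accuracy: {np.mean(scores):.3f}")
```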
$\lim_{u\to\infty}\ell(u)=\lim_{u\to\infty}\ell^{\prime}(u)=0$), a $\beta$-smooth function, i.e. its derivative is $\beta$-Lipsch... | loss function (Assumption 1) with an exponential tail (Assumption 3), any stepsize $\eta<2\beta^{-1}\sigma_{\max}^{-2}(\mathbf{X})$ ... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | Assumption 1 includes many common loss functions, including the logistic and exp-loss (footnote: the exp-loss does not have a global $\beta$-smoothness parameter; however, if we initialize with $\eta<1/\mathcal{L}(\mathbf{w}(0))$ then it is straightforward to...) | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a... | C |
To overcome this issue, we set a threshold of 72 hours. We only consider the first candidate within 72 hours before or after the beginning time of the event as the timestamp of humans confirming the rumor. On average the human editors of Snopes need 25.49 hours to verify a rumor and post it. Our system already achieves 87% ... | The time period of a rumor event is sometimes fuzzy and hard to define. One reason is that a rumor may have been triggered long ago and kept existing, but did not attract public attention. However, it can be re-triggered by other events after an uncertain time and suddenly spread as a bursty event. E.g., a rumor (htt...) | At 18:22 CEST, the first tweet was posted. There might be some delay, as we retrieve only tweets in English and the very first tweets were probably in German. The tweet is "Sadly, i think there's something terrible happening in #Munich #Munchen. Another Active Shooter in a mall. #SMH". | the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, which could capture more hidden meaningful signals than enquiries to debunk rumors alone... | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade this off by debunking at the single-tweet le... | B |
$score(\bar{a})=\sum_{m\in M}P(\mathcal{C}_{k}\mid e,t)\,P(\mathcal{T}_{\cdots}\mid\mathcal{C}_{k})\,\mathsf{f^{*}}_{m}(\bar{a})$ ... | We propose two sets of features, namely, (1) salience features (taking into account the general importance of candidate aspects) that are mainly mined from Wikipedia and (2) short-term interest features (capturing a trend or timely change) that are mined from the query logs. In addition, we also leverage click-flow relatednes... | to add additional features from $\mathcal{M}^{1}$. The feature vector of $\mathcal{M}_{LR}^{2}$ consists of ... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | A |
The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018],
and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular. | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains,
from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | with Bernoulli and contextual linear Gaussian reward functions [Kaufmann et al., 2012; Garivier and Cappé, 2011; Korda et al., 2013; Agrawal and Goyal, 2013b],
as well as for context-dependent binary rewards modeled with the logistic reward function Chapelle and Li [2011]; Scott [2015] —Appendix A.3. | The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models,
and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015]. | The special case of piecewise-stationary, or abruptly changing environments, has attracted a lot of interest in general [Yu and Mannor, 2009; Luo et al., 2018],
and for UCB [Garivier and Moulines, 2011] and Thompson sampling [Mellor and Shapiro, 2013] algorithms, in particular. | C |
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | Table 2 gives an overview of the number of different measurements that are available for each patient. (For patient 9, no data is available.) The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14. | For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal... | The insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only difference is for patients 10 and 12, whose intakes are earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning. | In order to have a broad overview of different patients' patterns over the one-month period, we first show the figures illustrating measurements aggregated by day-in-week. For consistency, we only consider the data recorded from 01/03/17 to 31/03/17 where the observations are most stable. | D |
Table 5: Details regarding the hardware and software specifications used throughout our evaluation of computational efficiency. The system ran under the Debian 9 operating system and we minimized usage of the computer during the experiments to avoid interference with measurements of inference speed. |
Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity and the superscript $^{\dagger}$ represents pre-trai... |
Table 5: Details regarding the hardware and software specifications used throughout our evaluation of computational efficiency. The system ran under the Debian 9 operating system and we minimized usage of the computer during the experiments to avoid interference with measurements of inference speed. | Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba... |
We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In case a reimplementation was not possible, we faithfully estimated a lowe... | D |
For example, the path decomposition $(\{u,w,x\},\{u,v,x\},\{v,y,z\})$ for graph $H$ can be represented as a pd-marking scheme as illustrated in Figure 3 (for... | The locality number is rather new and we shall discuss it in more detail. A word is $k$-local if there exists an order of its symbols such that, if we mark the symbols in the respective order (which is called a marking sequence), at each stage there are at most $k$ contiguous blocks of marked symbols ... | In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way in which a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into grap... | We use $G_{\alpha}$ as a unique graph representation for words and whenever we talk about a path decomposition for $\alpha$, we actually refer to a path decomposition of $G_{\alpha}$... | Both the locality number of a word and the pathwidth of a graph are defined via markings. In order to avoid confusion, we therefore use different terminology to distinguish between these two concepts (see also the terminology defined in Section 2.2): the markings for words are called marking sequences, while the marking... | D |
In their article, Luo et al. [79] utilized quality assessment to remove low-quality heartbeats, and two median filters to remove power line noise, high-frequency noise and baseline drift. Then, they used a derivative-based algorithm to detect R-peaks and time windows to segment each heartbeat. | Most of the methods convert PCGs to images using spectrogram techniques. Rubin et al. [108] used a logistic regression hidden semi-Markov model to segment the start of each heartbeat; the heartbeats were then transformed into spectrograms using Mel-Frequency Cepstral Coefficients (MFCCs). | Modified frequency slice WT was used to calculate the spectrogram of each heartbeat and an SDAE to extract features from the spectrogram. Then, they created a classifier for four arrhythmias from the encoder of the SDAE and a softmax, achieving an overall accuracy of 97.5%. | Each spectrogram was classified into normal or abnormal using a two-layer CNN which had a modified loss function that maximizes sensitivity and specificity, along with a regularization parameter. The final classification of the signal was the average probability of all segment probabilities. | In [111] the authors used Adaboost, which was fed with spectrogram features from PCG, and a CNN which was trained using cardiac cycles decomposed into four frequency bands. Finally, the outputs of the Adaboost and the CNN were combined to produce the final classification result using a simple decision rule. | B |
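Option A of this row mentions transforming segmented heartbeats into MFCC spectrograms; a minimal sketch of that step, assuming a mono PCG segment (the file name and sampling rate are hypothetical, and librosa is one common tool, not necessarily what the cited papers used):

```python
import librosa

# Hypothetical single-heartbeat segment; sr=2000 Hz is an assumed PCG sampling rate.
y, sr = librosa.load("pcg_segment.wav", sr=2000, mono=True)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, n_frames) image-like array
print(mfcc.shape)
```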
Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using... | Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster... | Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using... | have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control.
Our video models of Atari en... | Oh et al. (2015) and Chiappa et al. (2017) show that learning predictive models of Atari 2600 environments is possible using appropriately chosen deep learning architectures. Impressively, in some cases the predictions maintain low $L_{2}$ error over timespans... | D |
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine, achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data, making them scalable and genera... | This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ... | Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine, achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data, making them scalable and genera... | For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments of 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals. | For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a 'base model', which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para... | A |
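Option D defines an S2I module only abstractly; one possible (hypothetical) instantiation is a magnitude-STFT module that turns a raw 1-D signal into a single-channel image before a 2-D base model. The `n_fft` value and shapes here are assumptions:

```python
import torch
import torch.nn as nn

class SpectrogramS2I(nn.Module):
    """Sketch of a Signal2Image module: (batch, 1, samples) -> (batch, 1, freq, frames)."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.stft(x.squeeze(1), n_fft=64, return_complex=True).abs()
        return spec.unsqueeze(1)  # add a channel axis for a 2-D base model

img = SpectrogramS2I()(torch.randn(8, 1, 178))  # 178-sample segments, as in option C
print(img.shape)
```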
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or the autonomous locomotion mode transition approach, where robots autonomously swap the modes predicated on pre-set criteria [8]. However, the execution of supervised con... | There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ... | The Cricket robot, as referenced in [20], forms the basis of this study, being a fully autonomous track-legged quadruped robot. Its design specificity lies in embodying fully autonomous behaviors, and its locomotion system showcases a unique combination of four rotational joints in each leg, which can be seen in Fig. 3... | A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measure... |
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or the autonomous locomotion mode transition approach, where robots autonomously swap the modes predicated on pre-set criteria [8]. However, the execution of supervised con... | C |
We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem.
Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size. | We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem.
Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size. |
All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat... |
Second, our model considers the size of advice and its impact on the algorithm's performance, which is the main focus of the advice complexity field. For all problems we study, we parameterize advice by its size, i.e., we allow advice of a certain size $k$. Specifically, the advice need not necessarily encode... | We first show how to find a Pareto-optimal strategy, when the advice encodes the hidden value, and thus can have unbounded size. Moreover, we study the competitiveness of the problem with only $k$ bits of advice, for some fixed $k$, and | D |
Note that this algorithm can be massively parallelized since it naturally follows the Big Data programming model MapReduce [Dean & Ghemawat, 2008], giving the framework the capability of effectively processing very large volumes of data. Algorithm 2 shows the training process described earlier. Note that the line... | Otherwise, it can be omitted since, during classification, $gv$ can be dynamically computed based on the frequencies stored in the dictionaries. It is worth mentioning that this algorithm could be easily parallelized by following the MapReduce model as well; for instance, all training documents co... | It is worth mentioning that with this simple mechanism it would be fairly straightforward to justify, when needed, the reasons for the classification by using the values of the confidence vectors in the hierarchy, as will be illustrated with a visual example at the end of Section 5. Additionally, the classification is also i... | Note that with this simple training method there is no need either to store all documents or to re-train from scratch every time a new training document is added, making the training incremental. (Even new categories could be dynamically added.) Additionally, there is no need to compute the document-term matrix be... | This brief subsection describes the training process, which is trivial. Only a dictionary of term-frequency pairs is needed for each category. Then, during training, dictionaries are updated as new documents are processed, i.e. unseen terms are added and the frequencies of already-seen terms are updated. | A |
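A minimal sketch of the training process described in option D: one term-frequency dictionary per category, updated incrementally per document (the whitespace tokenization is a placeholder assumption):

```python
from collections import defaultdict

# category -> term -> frequency: the only state this training scheme needs
category_tf = defaultdict(lambda: defaultdict(int))

def train_document(category: str, text: str) -> None:
    """Incremental update: unseen terms are added, seen terms get their counts bumped."""
    for term in text.lower().split():  # placeholder tokenization
        category_tf[category][term] += 1

train_document("sports", "late goal decides the match")
train_document("sports", "the match ended in a draw")
print(category_tf["sports"]["match"])  # 2
```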
Stochastic gradient descent (SGD) and its variants (Robbins and Monro, 1951; Bottou, 2010; Johnson and Zhang, 2013; Zhao et al., 2018, 2020, 2021) have been the dominating optimization methods for solving (1).
In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameter... | Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model trai... | GMC can be easily implemented on the all-reduce distributed framework in which each worker sends the sparsified vector $\mathcal{C}(\mathbf{e}_{t+\frac{1}{2},k})$... | With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training.
These methods can be implemented on distributed frameworks like parameter server and al... | Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework.
In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-red... | C |
Using backpropagation [2], the gradient of each weight w.r.t. the error of the output is efficiently calculated and passed to an optimization function such as Stochastic Gradient Descent or Adam [3], which updates the weights, making the output of the network converge to the desired output. DNNs were successful in utilizi... | Previous literature has also demonstrated the increased biological plausibility of sparseness in artificial neural networks [24]. Spike-like sparsity on activation maps has been thoroughly researched in the more biologically plausible rate-based network models [25], but it has not been thoroughly explored as a design o... | Previous literature addressing this problem has focused on weight pruning from trained DNNs [11] and weight pruning during training [12]. Pruning minimizes the model capacity for use in environments with low computational capabilities or low inference-time requirements, and helps reduce co-adaptation between neurons,... | In neural networks, sparseness can be applied on the connections between neurons or in the activation maps [14]. Although sparseness in the activation maps is usually enforced in the loss function by adding an $L_{1,2}$ regularization or Kullback-Leibler... | After training, we consider $\bm{\alpha}^{(i)}$ (which is calculated during the feed-forward pass from Eq. 11) and $\bm{w}^{(i)}$ (which is calculat... | B |
When UAVs need communications, the signal-to-noise ratio (SNR) mainly determines the quality of service. UAVs' power and inherent noise are interferences for each other. Since there are hundreds of UAVs in the system, each UAV is unable to sense all the other UAVs' power explicitly, but can only sense and measure aggreg... | Suppose that a UAV covers a round area below it with a field angle $\theta$, as shown in Fig. 1 (b). Then the coverage of $\mathrm{UAV}_{i}$ is $D_{i}=\pi(h_{i}\tan\theta)^{2}$ ... | Coverage is another factor which determines the performance of each UAV. As presented in Fig. 1 (c), the altitude of a UAV plays an important role in adjusting coverage. The higher the altitude, the larger the coverage size of the UAV. A large coverage size means a substantial opportunity of supporting more users, but a hi... | In order to support as many users as possible, UAVs are required to enlarge their coverage size, which is equivalent to enlarging the coverage proportion in the mission area. Higher altitude indicates larger coverage size, as shown in Fig. 1 (c). The utility of coverage size is denoted as | To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR and coverage size are key points that decide the performance of a UAV network [6]. Respectively, power control determines the energy consumption and signal-to-noise ratio (SNR) ... | B |
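A one-line check of the coverage relation quoted in option A, $D_{i}=\pi(h_{i}\tan\theta)^{2}$; the altitude and angle values below are arbitrary assumptions:

```python
import math

def coverage_area(h: float, theta: float) -> float:
    """Ground disc covered by a UAV at altitude h with field angle theta (radians)."""
    return math.pi * (h * math.tan(theta)) ** 2

print(coverage_area(h=100.0, theta=math.radians(30)))  # larger h => quadratically larger area
```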
$-[m_{i}v_{\phi}r\nabla\cdot(n\mathbf{v})+\rho\mathbf{v}\cdot\nabla(rv_{\phi})]-\nabla\cdot(\cdots+r\pi_{z\phi}\hat{\mathbf{z}})+\frac{1}{\mu_{0}}\mathbf{B}\cdot\nabla f$ ... | $-\nabla\cdot\big(\rho v_{\phi}r\mathbf{v}+r\pi_{r\phi}\hat{\mathbf{r}}+r\pi_{z\phi}\hat{\mathbf{z}}-\frac{f}{\mu_{0}}\mathbf{B}\big)$ ... | $(\nabla\cdot\underline{\boldsymbol{\pi}})_{\phi}=\frac{1}{r}\frac{\partial}{\partial r}(\cdots)+\frac{1}{r^{2}}\frac{\partial}{\partial z}(r^{2}\pi_{z\phi})=\frac{1}{r}\nabla\cdot(r\pi_{r\phi}\hat{\mathbf{r}}+r\pi_{z\phi}\hat{\mathbf{z}})$ ... | $\dot{P}_{\phi}=\frac{\partial}{\partial t}\Big(\int(\rho rv_{\phi})\,dV\Big)=-\int(\rho v_{\phi}\cdots)\cdots\boldsymbol{\Gamma}$ ... | $-[m_{i}v_{\phi}r\nabla\cdot(n\mathbf{v})+\rho\mathbf{v}\cdot\nabla(rv_{\phi})]-\nabla\cdot(\cdots+r\pi_{z\phi}\hat{\mathbf{z}})+\frac{1}{\mu_{0}}\mathbf{B}\cdot\nabla f$ ... | A |
Let $r$ be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12. Its abstract lattice $\mathcal{L}_{r}$ is represented to the right. | For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to $A\rightarrow B$ or $B\rightarrow A$. | The tuples $t_{1}$, $t_{4}$ represent a counter-example to $BC\rightarrow A$ for $g_{1}$... | If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use $\leq,\land,\lor$ instead of $\leq_{R},\land_{R},\lor_{R}$ | First, remark that both $A\rightarrow B$ and $B\rightarrow A$ are possible. Indeed, if we set $g=\langle b,a\rangle$ or $g=\langle a,1\rangle$, then $r\models_{g}A\rightarrow$... | A |
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We have evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Class... | A fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers, one between the input layer and the first hidden layer and one between the two hidden layers. The ADAM optimizer was used for the minimization [25]. | The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation between the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and aft... | This is the original Dropout method, introduced in 2012. Standard Dropout provides a simple technique for avoiding over-fitting in fully connected neural networks [12]. During each training phase, each neuron is excluded from the network with a probability p. Once trained, in the testing phase the full network is u... | For the experiments, a fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers, one between the input layer and the first hidden layer and one between the two hidden layers. To minimize the DQN loss, the ADAM optimizer was used [25]. | D |
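A minimal PyTorch sketch of the network described in options A and D. The input size 4 and the two output actions match CartPole; the dropout probability is an assumption, as the row does not state it:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Dropout(p=0.5),               # dropout between input and first hidden layer (p assumed)
    nn.Linear(4, 128), nn.ReLU(),    # first hidden layer, 128 neurons
    nn.Dropout(p=0.5),               # dropout between the two hidden layers
    nn.Linear(128, 128), nn.ReLU(),  # second hidden layer, 128 neurons
    nn.Linear(128, 2),               # Q-values for CartPole's two actions
)
optimizer = optim.Adam(model.parameters(), lr=1e-3)  # ADAM minimizes the DQN loss
```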
$\mathrm{FL}(p,\hat{p})=-\big(\alpha(1-\hat{p})^{\gamma}p\log(\hat{p})+(1-\alpha)\hat{p}^{\gamma}(1-p)\log(1-\hat{p})\big).$ |
Two popular overlap-based measures used to evaluate segmentation performance are the Sørensen–Dice coefficient (also known as the Dice coefficient) and the Jaccard index (also known as the intersection over union or IoU). Given two sets $\mathcal{A}$ and $\mathcal{B}$, these metrics are def... | In medical image segmentation works, researchers have converged toward using classical cross-entropy loss functions along with a second, distance- or overlap-based function. Incorporating domain/prior knowledge (such as coding the location of different organs explicitly in a deep model) is more sensible in the medical d... | A significant problem in image segmentation (particularly in medical images) is to overcome class imbalance, for which overlap-measure-based methods have shown reasonably good performance. In Section 5, we summarize the approaches which use new loss functions, particularly for medical image s... | Another popular loss function for image segmentation tasks is based on the Dice coefficient, which is essentially a measure of overlap between two samples and is equivalent to the F1 score. This measure ranges from 0 to 1, where a Dice coefficient of 1 denotes perfect and complete overlap. The Dice coefficient (DC) is ... | D |
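Option A truncates before the definitions; the standard forms of the two overlap measures it names are:

```latex
\mathrm{DC}(\mathcal{A},\mathcal{B})
  = \frac{2\,\lvert\mathcal{A}\cap\mathcal{B}\rvert}{\lvert\mathcal{A}\rvert+\lvert\mathcal{B}\rvert},
\qquad
\mathrm{IoU}(\mathcal{A},\mathcal{B})
  = \frac{\lvert\mathcal{A}\cap\mathcal{B}\rvert}{\lvert\mathcal{A}\cup\mathcal{B}\rvert}.
```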
At each level $l$, two vertices $i$ and $j$ are clustered together in a new vertex $k$. Then, a standard pooling operation (average or max pool) is applied to compute the node feature $\mathbf{x}^{(l+1)}_{k}$... | Similarly to pooling operations in Convolutional Neural Networks (CNNs) that compute local summaries of neighboring pixels, we propose a pooling procedure that provides an effective coverage of the whole graph and reduces the number of nodes approximately by a factor of 2. This can be achieved by partitioning nodes in ... | Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d) the edges of the Laplacians at coarsening level 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP. | Second, GRACLUS pooling adds "fake" nodes so that they can be exactly halved at each pooling step; this not only injects noisy information in the graph signal, but also increases the computational complexity in the GNN. Finally, clustering depends on the initial ordering of the nodes, which hampers stability and reprod... | From Fig. 9(b) we notice that the graphs $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ in GRACLUS have additional nodes that are disconnected. As discussed in Sect. V, these are ... | C |
Random forests and neural networks share some similar characteristics, such as the ability to learn arbitrary decision boundaries; however, both methods have different advantages. Random forests are based on decision trees. Various tree models have been presented – the most well-known are C4.5 (Quinlan, 1993) and CART ... | In contrast to neural networks, random forests are very robust to overfitting due to their ensemble of multiple decision trees. Each decision tree is trained on randomly selected features and samples. Random forests have demonstrated remarkable performance in many domains (Fernández-Delgado et al., 2014). | RF: Random forest (Breiman, 2001) is an ensemble-based method consisting of multiple decision trees. Each decision tree is trained on a different randomly selected subset of features and samples. The classifier follows the same overall setup, i.e., 500 decision trees and a maximum depth of ten. | Decision trees learn rules by splitting the data. The rules are easy to interpret and additionally provide an importance score of the features. Random forests (Breiman, 2001) are an ensemble method consisting of multiple decision trees, with each decision tree being trained using a random subset of samples and features... | Random forests are trained with 500 decision trees, which are commonly used in practice (Fernández-Delgado et al., 2014; Olson et al., 2018). The decision trees are constructed up to a maximum depth of ten. For splitting, the Gini impurity is used and $\sqrt{N}$ features ... | C |
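A minimal scikit-learn sketch of the forest configuration stated in options B and D (500 trees, maximum depth ten, Gini splitting, $\sqrt{N}$ candidate features per split); the dataset here is a placeholder:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=25, random_state=0)  # placeholder data
clf = RandomForestClassifier(
    n_estimators=500,     # 500 decision trees
    max_depth=10,         # maximum depth of ten
    criterion="gini",     # Gini impurity for splitting
    max_features="sqrt",  # sqrt(N) candidate features per split
    random_state=0,
).fit(X, y)
print(clf.score(X, y))
```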
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient... | In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ... | step with $\alpha\rightarrow\infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K=H$ episodes and hence equivalently induces... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... | Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt... | A |
Machine learning is a key technology in the 21st century and the main contributing factor for many recent performance boosts in computer vision, natural language processing, speech recognition and signal processing.
Today, the main application domain and comfort zone of machine learning applications is the “virtual wor... | However, in real-world applications the computing infrastructure during the operation phase is typically limited, which effectively rules out most of the current resource-hungry machine learning approaches.
There are several key challenges—illustrated in Figure 1—which have to be jointly considered to facilitate machin... | Machine learning is a key technology in the 21st century and the main contributing factor for many recent performance boosts in computer vision, natural language processing, speech recognition and signal processing.
Today, the main application domain and comfort zone of machine learning applications is the “virtual wor... | We furthermore point out that hardware properties and the corresponding computational efficiency form a large fraction of resource efficiency.
This highlights the need to consider particular hardware targets when searching for resource-efficient machine learning models. | However, we are currently witnessing a transition of machine learning moving into “the wild”, where most prominent examples are autonomous navigation for personal transport and delivery services, and the Internet of Things (IoT).
Evidently, this trend opens several real-world challenges for machine learning engineers. | D |
$\{v_{0},v_{1}\}+\{v_{1},v_{2}\}+\cdots+\{v_{4},v_{5}\}+\{v_{5},v_{0}\},$ ... | $\omega_{0}$ is the degree-1 homology class induced by | $\omega_{2}$ is the degree-1 homology class induced by | and seeks the infimal $r>0$ such that the map induced by $\iota_{r}$ at the $n$-th homology level annihilates the fundamental class $[M]$ of $M$. This infimal value defines $\mathrm{FillRad}(M)$... | $\omega_{1}$ is the degree-1 homology class induced by | B |
After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ... | After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ... | We give extra support to the user by providing the results of 5 quality measures for each representative projection: neighborhood hit (NH), trustworthiness (T), continuity (C), normalized stress (S), and Shepard diagram correlation (SDC), accompanied by the quality metrics average (QMA). They are shown as a grayscale h... | Significantly-different t-SNE projections can be generated from the same data set, due to its well-known sensitivity to hyper-parameter settings [14]. We propose to support users in finding a good t-SNE projection for their data by using visual exploration, as follows. A Grid Search mode (Figure 1(a)) initiates a syste... |
Figure 2: Hyper-parameter exploration (presented in a dialog at the beginning of an analytical session), with 25 representative projections from a pool of 500 alternatives obtained through a grid search. Five quality metrics, plus their Quality Metrics Average (QMA), are also displayed to support the visual analysis. ... | D |
Another popular option of creating new solutions relies on stigmergy, namely, an indirect communication and coordination between the different solutions or agents used to create new solutions. This communication is usually done using an intermediate structure, with information obtained from the different solutions, us... | Table 31 lists the reviewed algorithms that employ stigmergy when creating new solutions. This is a reduced list when comparing with preceding categories, with the majority of the algorithms relying on Swarm Intelligence among insects (similarly to ACO). However, other algorithms inspired in physics have also a stigmer... |
It has not been until relatively recent times that the community has embraced the need for arranging the myriad of existing bio-inspired algorithms and classifying them under principled, coherent criteria. In 2013, [74] presented a classification of meta-heuristic algorithms as per their biological inspiration that di... | The combining method can be specific for the problem to be solved or instead, be conceived for a more general family of problems. In fact, combining methods are usually devised to be adaptable to many different solution representations. As mentioned before, the most popular algorithm in this category is GA [98]. Howeve... |
When inspecting the influential approaches from a higher perspective, two are the categories whose algorithms have been more frequently used to create new nature-based algorithms. The first one is Swarm Intelligence: about 14% of all studied nature-inspired algorithms are variations of SI algorithms (PSO, ACO, and ABC... | A |
where $\varphi(\cdot)$ is a certain activation function, $\hat{A}=\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}$ ... | Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update ... | To apply graph convolution to unsupervised learning, GAE is proposed [20]. GAE first transforms each node into a latent representation (i.e., embedding) via GCN, and then aims to reconstruct some part of the input. GAEs proposed in [20, 29, 22] intend to reconstruct the adjacency via the decoder, while GAEs developed in [21... | Network embedding is a fundamental task for graph-type data such as recommendation systems, social networks, etc. The goal is to map the nodes of a given graph into latent features (namely the embedding) such that the learned embedding can be utilized for node classification, node clustering, and link prediction. | (1) Via extending the generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec... | B |
The challenge here is to accurately probe the increment rate of the IPID value (caused by the packets from other sources not controlled by us), in order to be able to extrapolate the value that will have been assigned to our second probe from a real source IP. This allows us to infer if the spoofed packets incremente... |
Measuring IPID increment rate. The traffic to the servers is stable and hence can be predicted, (Wessels et al., 2003). We validate this by sampling the IPID value at the servers which we use for running the test. One example evaluation of IPID sampling on one of the busiest servers is plotted in Figure 3. In this eva... | Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping and using requests/responses to Name servers and we apply the suitable test depending on the server that we identify on the tested networ... | There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th... |
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the... | B |
Two processing steps were applied to the data used by all models included in this paper. The first preprocessing step was to remove all samples taken for gas 6, toluene, because there were no toluene samples in batches 3, 4, and 5. Data was too incomplete for drawing meaningful conclusions. Also, with such data missin... | Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data a... | The first model in this domain [7] employed SVMs with one-vs-one comparisons between all classes. SVM classifiers project the data into a higher dimensional space using a kernel function and then find a linear separator in that space that gives the largest distance between the two classes compared while minimizing the ... | While SVMs are standard machine learning, NNs have recently turned out more powerful, so the first step is to use them on this task instead of SVMs. In the classification task, the networks are evaluated by the similarity between the odor class label (1-5) and the network's output class label prediction given the unlab... | This paper also presents the NN ensemble created in the same way as with SVMs. In the NN ensemble, $T-1$ skill networks are trained using one batch each for training. Each model is assigned a weight $\beta_{i}$ equal to its accuracy on... | B |
Now we can define the tables $A^{(1)}$, $A^{(2)}$ and $A^{(3)}$ that our algorithm uses. Recall that for... | $A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | $A^{(1)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | $A^{(2)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(2)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | $A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | A |
Let $S$ be a (completely) self-similar semigroup and let $T$ be a finite or free semigroup. Then $S\star T$ is (completely) self-similar. If furthermore $S$ is a (complete) automaton semigroup, then so is $S\star T$.
| By Corollaries 10 and 11, we have to look into idempotent-free automaton semigroups without length functions in order to find a pair of self-similar (or automaton) semigroups not satisfying the hypothesis of Theorem 6 (or 8), which would be required in order to either relax the hypothesis even further (possibly with a ... | from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups (note that the c... | While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there ... | The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing ... | A
Our regularization method, which is a binary cross entropy loss between the model predictions and a zero vector, does not use additional cues or sensitivities and yet achieves near state-of-the-art performance on VQA-CPv2. We set the learning rate to $\frac{2\times 10^{-6}}{r}$... | We compare the baseline UpDn model with HINT and SCR-variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Fu...
In order to truly assess if existing methods are using relevant regions to produce correct answers, we use our proposed metric: Correctly Predicted but Improperly Grounded (CPIG). Large CPIG values imply that a large portion of the correctly predicted samples were not properly grounded. Fig. A4 shows %...
Following Selvaraju et al. (2019), we report Spearman’s rank correlation between the network’s sensitivity scores and human-based scores in Table A3. For HINT and our zero-out regularizer, we use human-based attention maps. For SCR, we use textual explanation-based scores. We find that HINT trained on human attention maps... | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... | C
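The row above reports Spearman rank correlations between network sensitivity scores and human-based scores. A minimal sketch with `scipy.stats.spearmanr`; the score vectors here are made up:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-region scores for one image.
network_sensitivity = np.array([0.10, 0.40, 0.35, 0.90, 0.05])
human_attention     = np.array([0.20, 0.30, 0.50, 0.80, 0.10])

rho, pval = spearmanr(network_sensitivity, human_attention)
print(f"Spearman rho={rho:.2f}, p={pval:.3f}")
```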
Prior work in privacy and human-computer interaction establishes the motivation for studying these documents. Although most internet users are concerned about privacy (Madden, 2017), Rudolph et al. (2018) report that a significant number do not make the effort to read privacy notices because they perceive them to be ... | To build the PrivaSeer corpus, we create a pipeline concentrating on focused crawling Chakrabarti et al. (1999); Diligenti et al. (2000) of privacy policy documents. We used Common Crawl (https://commoncrawl.org/), described below, to gather seed URLs to privacy policies on the web. We filtered the Common Crawl URLs to...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)... |
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus the PrivaSeer Corpus on privacy policies ... | We selected those URLs from the Common Crawl URL archive which contained the word “privacy” or the words “data” and “protection”. We were able to extract 3.9 million URLs that fit this selection criterion. Informal experiments suggested that this selection of keywords was optimal for retrieving the most privacy policies with... | A
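The row above filters Common Crawl URLs by keywords. A sketch of that filter under the stated rule ("privacy", or both "data" and "protection"); the helper name is hypothetical:

```python
def looks_like_privacy_policy(url: str) -> bool:
    """Keyword filter described in the row: keep URLs containing
    'privacy', or both 'data' and 'protection'."""
    u = url.lower()
    return "privacy" in u or ("data" in u and "protection" in u)

urls = [
    "https://example.com/privacy-policy",
    "https://example.org/data-protection/notice",
    "https://example.net/blog/cookies",
]
print([u for u in urls if looks_like_privacy_policy(u)])
```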
Figure 3(a) is a t-SNE projection [61] of the instances (MDS [22] and UMAP [31] are also available, giving users multiple perspectives on the same problem, following the DR guidelines from Schneider et al. [47]).
The point size is based on the predictive accuracy calculated using all the chosen mode... | Figure 3: The data space projection with the importance of each instance measured by the accuracy achieved by the stack models (a). The parallel coordinates plot view for the exploration of the values of the features (b); a problematic case is highlighted in red with values being null (‘4’ has no meaning for Ca). (c.1)... |
Selection of Algorithms and Models. Similar to the workflow described in section 4, we start by setting the most appropriate parameters for the problem (see Figure 6(a)). As the data set is very imbalanced, we emphasize g-mean over accuracy, and ROC AUC over precision and recall. Log loss is disabled because the inves... |
The Ca attribute, for example, has a range of 0–3, but by selection we can see five points with Ca values of ‘4’, see Figure 3(b). These values can be considered unknown and should be further examined. One of these points belongs to the healthy class (due to the olive color) but is very small in Figure 3(c.1)—meani... | As in the data space, each point of the projection is an instance of the data set. However, instead of its original features, the instances are characterized as high-dimensional vectors where each dimension represents the prediction of one model. Thus, since there are currently 174 models in S... | C
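The row above plots a t-SNE projection with point size tied to predictive accuracy. A minimal sketch with scikit-learn and matplotlib, using random stand-in data:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

X = np.random.rand(200, 13)          # stand-in for the instances
accuracy = np.random.rand(200)       # per-instance accuracy across models

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
plt.scatter(emb[:, 0], emb[:, 1], s=10 + 60 * accuracy, alpha=0.6)
plt.title("Instances, point size = predictive accuracy")
plt.show()
```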
We thus have 3 cases, depending on the value of the tuple
$(p(v,[010]),\,p(v,[323]),\,p(v,[313]),\,p(v,[003]))$ | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these | $\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}.$ | $p(v,[013])=p(v,[313])=p(v,[113])=1$.
Similarly, when $f=[112]$, | Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | A
To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML:
Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor... | Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2).
When the data quantity i... | To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML:
Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor... | Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct the setting that tasks are similar to each other. For a fair comparison, each task on this setting also has 120 and 1200 utterances o... | Model-Agnostic Meta-Learning (MAML) [Finn et al., 2017] is one of the most popular meta-learning methods. It is trained on plenty of tasks (i.e. small data sets) to get a parameter initialization which is easy to adapt to target tasks with a few samples. As a model-agnostic framework, MAML is successfully employed in d... | A |
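The row above compares fine-tuning baselines with MAML. A toy first-order MAML loop on 1-D linear-regression tasks; all constants are illustrative, and real MAML uses batched tasks and second-order gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A toy 1-D regression task y = a*x with task-specific slope a."""
    a = rng.uniform(-2, 2)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x

def mse_grad(w, x, y):
    """Gradient of mean((w*x - y)^2) with respect to the scalar w."""
    return 2 * np.mean((w * x - y) * x)

w, inner_lr, outer_lr = 0.0, 0.1, 0.01
for step in range(2000):                              # meta-training
    x, y = sample_task()
    w_adapted = w - inner_lr * mse_grad(w, x, y)      # inner loop (1 step)
    w -= outer_lr * mse_grad(w_adapted, x, y)         # first-order outer step
print("meta-initialization w =", w)
```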
The CCA codebook-based multi-UAV beam tracking scheme with TE awareness. Based on the designed codebook and exploiting the Gaussian process (GP) tool, both the position and attitude of UAVs can be rapidly tracked, enabling fast multiuser beam tracking along with dynamic TE estimation. Moreover, the estimated TE is leveraged to...
Note that some mobile mmWave beam tracking schemes exploiting position or motion state information (MSI) based on conventional ULA/UPA have been proposed recently. For example, beam tracking is achieved by directly predicting the AOD/AOA through improved Kalman filtering [26]; however, the work of [26] only targe...
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da... | For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac... | Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-base... | A |
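The row above mentions Kalman-filter-based prediction of AoA/AoD for beam tracking. A minimal 1-D constant-velocity Kalman filter over noisy angle measurements; the matrices and numbers are toy values, not the cited scheme:

```python
import numpy as np

# Constant-velocity model for a UAV's azimuth angle (toy units).
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (angle, rate)
H = np.array([[1.0, 0.0]])               # we observe the angle only
Q = 1e-4 * np.eye(2)                     # process noise
R = np.array([[1e-2]])                   # measurement noise

x = np.zeros((2, 1)); P = np.eye(2)
for z in [0.10, 0.22, 0.31, 0.38, 0.52]:               # noisy AoA readings
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)              # update
    P = (np.eye(2) - K @ H) @ P
    print(f"tracked angle: {x[0, 0]:.3f}")
```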
There are other logics, incomparable in expressiveness with $\mathsf{FO}^2_{\text{Pres}}$, where periodicity of the spectrum has been proven [17]. The | Related one-variable fragments in which we have only a unary relational vocabulary and the main quantification is $\exists^{S}x\,\phi(x)$ are known to be decidable (see, e.g. [2]), and their decidability ... | There are other logics, incomparable in expressiveness with $\mathsf{FO}^2_{\text{Pres}}$, where periodicity of the spectrum has been proven [17]. The | In addition, to make the main line of argument clearer, we consider only the finite graph case in the body of the paper, which already implies decidability of the finite satisfiability of $\mathsf{FO}^2_{\text{Pres}}$ ... | The paper [4] shows decidability for a logic with incomparable expressiveness: the quantification allows a more powerful quantitative comparison, but must be guarded – restricting the counts only of sets of elements that are adjacent to a given element. | D
Related Work. When the value function approximator is linear, the convergence of TD is extensively studied in both continuous-time (Jaakkola et al., 1994; Tsitsiklis and Van Roy, 1997; Borkar and Meyn, 2000; Kushner and Yin, 2003; Borkar, 2009) and discrete-time (Bhandari et al., 2018; Lakshminarayanan and | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... |
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et... | A |
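The row above surveys TD convergence. A self-contained TD(0) value-estimation loop on a toy random-walk chain:

```python
import numpy as np

# TD(0) on a 5-state random-walk chain; reward 1 when exiting on the right.
n, alpha, gamma = 5, 0.1, 0.95
V = np.zeros(n)
rng = np.random.default_rng(0)

for _ in range(5000):
    s = n // 2                                          # start in the middle
    while True:
        s_next = s + rng.choice([-1, 1])
        if s_next < 0 or s_next >= n:                   # episode ends
            r = 1.0 if s_next >= n else 0.0
            V[s] += alpha * (r - V[s])                  # terminal TD update
            break
        V[s] += alpha * (gamma * V[s_next] - V[s])      # TD(0) update, r = 0
        s = s_next
print(V.round(2))
```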
Directly replacing residual connections with LSTM units would introduce a large number of additional parameters and considerable additional computation. Given that computing the LSTM hidden state plays a role similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the ne... | The encoder layer with the depth-wise LSTM unit, as shown in Figure 2, first performs the self-attention computation, then the depth-wise LSTM unit takes the self-attention results and the output and the cell state of the previous layer to compute the output and the cell state of the current layer.
| We also study the merging operations, concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers, to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4.
|
Different from encoder layers, decoder layers involve two multi-head attention sub-layers: a masked self-attention sub-layer to attend the decoding history and a cross-attention sub-layer to attend information from the source side. Given that the depth-wise LSTM unit only takes one input, we introduce a merging layer ... | Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t... | A |
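The row above replaces the feed-forward sub-layer with a depth-wise LSTM that carries (h, c) across layers. A hedged PyTorch sketch of one such encoder layer; the layer sizes and placement of LayerNorm are guesses, not the paper's configuration:

```python
import torch
import torch.nn as nn

class DepthwiseLSTMEncoderLayer(nn.Module):
    """Self-attention followed by an LSTM step *across depth*: the cell
    consumes the attention output as input, plus (h, c) carried over from
    the previous layer, replacing the residual + feed-forward sub-layer."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cell = nn.LSTMCell(d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x, h, c):
        a, _ = self.attn(x, x, x)                      # self-attention
        B, T, D = a.shape
        h, c = self.cell(a.reshape(B * T, D), (h, c))  # one step along depth
        return self.norm(h.view(B, T, D)), h, c

layer = DepthwiseLSTMEncoderLayer()
x = torch.randn(2, 7, 512)
h = c = torch.zeros(2 * 7, 512)
out, h, c = layer(x, h, c)   # pass (h, c) on to the next layer
print(out.shape)             # torch.Size([2, 7, 512])
```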
$\varphi$ is closed under homomorphisms. Therefore $\widetilde{B}\models\varphi$ because $\widetilde{A}$ and $\widetilde{B}$ are
$n$-elementary equivalent. Finally, $\widetilde{B}\to B\to C\to$... | the corresponding Alexandroff topologies:
$X\triangleq\langle X,\uptau_{\to},\mathsf{FO}[\upsigma]\rangle$ and for $n\in\mathbb{N}$, l... | Let $(\langle X_i,\uptau_i,\mathsf{FO}[\upsigma_i]\rangle)_{i\in I}$... | that $\llbracket\mathsf{F}\rrbracket_X$ is a base of
$\langle\uptau_{\leq}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_X\rangle$... | of the topological spaces
$Y_n\triangleq\langle X,\uptau_{\to},\mathsf{FO}_n[\upsigma]\rangle$ | D
Accurately estimating the distortion parameters derived from a specific camera is a crucial step in distortion rectification. However, two main limitations make learning the distortion parameters challenging. (i) The distortion parameters are not observable and hard to learn from a single distorted image, such as...
To overcome the above limitations, previous methods exploit more guided features such as the semantic information and distorted lines [9, 10], or introduce the pixel-wise reconstruction loss [11, 12, 13]. However, the extra features and supervisions impose increased memory/computation cost. In this work, we would like... | The proposed learning representation offers three unique advantages. First, the ordinal distortion is directly perceivable from a distorted image, and it solves a more straightforward estimation problem than the implicit metric regression. As we can observe, the farther the pixel is away from the principal point, the l... |
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify... | Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and causes the insufficient distortion perception. To bridge the gap between image feature and calibration objective... | A |
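The row above contrasts regressing distortion parameters with learning an ordinal distortion signal. A sketch of the standard polynomial radial-distortion model and the monotone ratio it induces at increasing radii; the coefficients here are made up, and the paper's exact camera model may differ:

```python
import numpy as np

def distortion_ratio(r, k1=-0.3, k2=0.05):
    """Polynomial radial model: r_d = r * (1 + k1*r^2 + k2*r^4).
    The ratio r_d / r at increasing radii gives the ordinal pattern."""
    return 1 + k1 * r**2 + k2 * r**4

radii = np.linspace(0.2, 1.0, 5)   # normalized distances from principal point
print(distortion_ratio(radii).round(3))   # monotone: the 'ordinal' signal
```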
We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset (https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/).
We set aside 20% of the samples as the test set and divide the rema... | If we avoid these tricks, these methods may suffer from severe performance degradation.
For LARS and its variants, the layer-wise update strategy was proposed primarily on the basis of empirical observations; its soundness and necessity remain unclear from an optimization perspective. | We compare SNGM with four baselines: MSGD, LARS [34], EXTRAP-SGD [19] and CLARS [12]. For LARS, EXTRAP-SGD and CLARS, we adopt the open
source code (https://github.com/NUS-HPC-AI-Lab/LARS-ImageNet-PyTorch, http://proceedings.mlr.press/v119/lin20b.html, https://github.com/slowbull/largebatch) | We use a pre-trained ViT (https://huggingface.co/google/vit-base-patch16-224-in21k) [4] model and fine-tune it on the CIFAR-10/CIFAR-100 datasets.
The experiments are implemented based on the Transformers (https://github.com/huggingface/transformers) framework. We fine-tune the model with 20 epochs. | We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD.
The experiments are implemented based on the DeepCTR (https://github.com/shenweichen/DeepCTR-Torch) framework. | D
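The row above benchmarks SNGM against layer-wise methods such as LARS and LAMB. A sketch of the LARS-style trust ratio that scales the learning rate per layer; hyperparameters are illustrative, and SNGM's own block-wise update differs:

```python
import numpy as np

def lars_local_lr(w, g, eta=1e-2, weight_decay=1e-4, trust=1e-3):
    """Layer-wise learning rate: scale the global LR by
    ||w|| / (||g|| + wd*||w||), one ratio per layer."""
    wn, gn = np.linalg.norm(w), np.linalg.norm(g)
    ratio = trust * wn / (gn + weight_decay * wn + 1e-12)
    return eta * ratio

w = np.random.randn(256, 128)
g = np.random.randn(256, 128) * 0.01
print(lars_local_lr(w, g))
```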
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any $\rho$-approximation algorithm for the robust outlier problem into a $(\rho+2)$-approximation algorithm for the corresponding two-stage sto... | We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a...
In this section we tackle the simplest problem setting, designing an efficiently-generalizable 3-approximation algorithm for homogeneous 2S-Sup-Poly. To begin, we are given a list of scenarios $Q$ together with their probabilities $p_A$,... | The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any $\rho$-approximation algorithm for the robust outlier problem into a $(\rho+2)$-approximation algorithm for the corresponding two-stage sto...
We follow up with 3-approximations for the homogeneous robust outlier MatSup and MuSup problems, which are slight variations on algorithms of [6] (specifically, our approach in Section 4.1 is a variation on their solve-or-cut methods). In Section 5, we describe a 9-approximation algorithm for an inhomogeneous MatSu... | D
Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic convex optimization, in which networked local optimizers cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... | As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error, which leads the nonnegative supermartingale converg... | That is, the mean square error at the next time can be controlled by that at the
previous time and the consensus error. However, this cannot be obtained for the case with linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded and the inequality (28) in [15] does n... | I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition.
The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi... | (Lemma 3.1).
To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (... | C |
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics... | Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ... | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i... | Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata, while the approaches of the $k$-anonymity family sanitize the origi... | D
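The row above mentions randomized response. A minimal sketch with the standard debiasing estimator, assuming a binary attribute and a keep-probability p:

```python
import random

def randomized_response(truth: bool, p_keep: float = 0.75) -> bool:
    """Report the true bit with probability p_keep, else flip it."""
    return truth if random.random() < p_keep else not truth

def estimate_fraction(reports, p_keep=0.75):
    """Debias the aggregate: E[report] = p*f + (1-p)*(1-f)."""
    y = sum(reports) / len(reports)
    return (y - (1 - p_keep)) / (2 * p_keep - 1)

truths = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]
print(estimate_fraction(reports))   # close to the true fraction 0.3
```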
Table 2: PointRend’s step-by-step performance on our own validation set (split from the original training set). “MP Train” means more-points training and “MP Test” means more-points testing. “P6 Feature” indicates adding P6 to the default P2-P5 levels of FPN for both the coarse prediction head and the fine-grained point head. “... | As shown in Figure 2, we compare HTC, SOLOv2 and PointRend by visualizing their predictions. It can be seen that PointRend generates much finer and smoother segmentation boundaries than HTC and SOLOv2, and it also handles overlapped instances gracefully (see top-left corner in Figure 2). Meanwhile, PointRend succeeds in disti... | In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
| PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | B
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_2$ norm $1$ then the sequence $\{|\hat{f}(A)|^2\}_{A\subseteq[n]}$...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^n$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.

Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info... | B
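The row above concerns the Fourier(-Walsh) coefficients of functions on $\{-1,1\}^n$ and the entropy of their squared coefficients. A brute-force sketch for small n (exponential in n, illustration only):

```python
import numpy as np
from itertools import product

def fourier_entropy(f_vals, n):
    """Fourier-Walsh coefficients of f: {-1,1}^n -> R and the entropy
    of the squared-coefficient distribution (0 log 0 := 0)."""
    points = list(product([-1, 1], repeat=n))
    weights = []
    for A in range(2**n):        # bitmask encodes the subset A of [n]
        chi = [np.prod([x[i] for i in range(n) if A >> i & 1]) for x in points]
        coeff = np.mean(np.array(f_vals) * np.array(chi))
        weights.append(coeff**2)
    w = np.array(weights)
    w = w[w > 0]                 # drop zero weights (0 log 0 := 0)
    return -np.sum(w * np.log2(w))

# Majority on 3 bits (outputs in {-1,1}, so the squared weights sum to 1).
maj = [np.sign(sum(x)) for x in product([-1, 1], repeat=3)]
print(fourier_entropy(maj, 3))   # 2.0 bits: four weights of 1/4 each
```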
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same, and much lower than those of MASTER, OPT-WLSVI, LSVI-UCB, and Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202... | In this section, we perform empirical experiments on synthetic datasets to illustrate the effectiveness of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart. We compare the cumulative rewards of the proposed algorithms with five baseline algorithms: Epsilon-Greedy (Watkins, 1989), Random-Exploration, LSVI-UCB (Jin et al., 2020... | We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ... | We develop the LSVI-UCB-Restart algorithm and analyze the dynamic regret bound for both cases that local variations are known or unknown, assuming the total variations are known. We define local variations (Eq. (2)) as the change in the environment between two consecutive epochs instead of the total changes over the en... | A |
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... |
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures 1 and 2) which is statistically significant ($r(9)=-0.81$, $p<.005$). Trust is built on transparency and truthfulness, and t...
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio... | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | 75 of the 104 responses fulfilled the criterion of having respondents who were currently based in Singapore. This set was extracted for further analysis and will be henceforth referred to as ‘SG-75’. The details on the participant demographics of SG-75 are shown in Table 1. From SG-75, two more subsets were formed via ... | B |
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attem... |
Unlike many inductive methods that are solely evaluated on datasets with unseen entities, our method aims to produce high-quality embeddings for both seen and unseen entities across various downstream tasks. To our knowledge, decentRL is the first method capable of generating high-quality embeddings for different down... |
We conduct experiments to explore the impact of the number of unseen entities on performance in open-world entity alignment. We present the results on the ZH-EN datasets in Figure 6. Clearly, the performance gain achieved by leveraging our method significantly increases when there are more unseen entities. For ex...
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attem... | A |
Figure 13: Result comparison of different settings of $k$ when calculating the intrinsic reward $r^i_k$ in sticky Atari games.
One may be curious whether other variational models can be used in exploration. In this section, we discuss the basic latent-variable model, i.e., the variational auto-encoder (VAE), and applying the Conditional VAE (CVAE) to model the multimodality and stochasticity of dynamics. Considering a typical VAE in modeling a...
In this section, we introduce VDM for exploration. In Section III-A, we introduce the theory of VDM based on conditional variational inference. In Section III-B, we present the details of the optimization process. In Section III-C, we analyze the result of VDM on ‘Noisy-Mnist’, which models the multimodality and stoch... | In Introduction, the two-roads MDP and Noisy-Mnist illustrate what latent variables can be in specific MDPs. The next state relies on the latent variables contained in the underlying dynamics. However, we do not need to choose or describe the latent variables manually in practice. Considering the learning objective of ...
In this paper, we propose the Variational Dynamic Model (VDM), which models the multimodality and stochasticity of the dynamics explicitly based on conditional variational inference. VDM considers the environmental state-action transition as a conditional generative process by generating the next-state prediction unde... | A |
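The row above describes VDM, which treats the transition as a conditional generative process. A hedged PyTorch sketch of a conditional VAE over next states; the architecture and sizes are guesses, not VDM itself:

```python
import torch
import torch.nn as nn

class NextStateCVAE(nn.Module):
    """Conditional VAE p(s' | s, a): the latent z captures multimodal
    and stochastic outcomes of the transition (sketch, toy sizes)."""
    def __init__(self, s_dim=4, a_dim=2, z_dim=8, h=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * s_dim + a_dim, h), nn.ReLU(),
                                 nn.Linear(h, 2 * z_dim))     # mu, logvar
        self.dec = nn.Sequential(nn.Linear(s_dim + a_dim + z_dim, h),
                                 nn.ReLU(), nn.Linear(h, s_dim))

    def forward(self, s, a, s_next):
        mu, logvar = self.enc(torch.cat([s, a, s_next], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        pred = self.dec(torch.cat([s, a, z], -1))
        recon = ((pred - s_next) ** 2).mean()
        kl = -0.5 * (1 + logvar - mu**2 - logvar.exp()).mean()
        return recon + kl   # negative ELBO; its novelty can drive exploration

model = NextStateCVAE()
loss = model(torch.randn(32, 4), torch.randn(32, 2), torch.randn(32, 4))
loss.backward()
print(float(loss))
```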