| context | A | B | C | D | label |
|---|---|---|---|---|---|
that adds the results of $1+(n-m)/2$
Gaussian integrations for moments $x^{D-1+n-2s}$. The disadvantage | $x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x)$. ... |
Gaussian integration rules for integrals $\int_{0}^{1}x^{D-1}R_{n}^{m}(x)f(x)\,dx$ ... | rules for
the lifted integrals $\int_{0}^{1}x^{D-1}[1+R_{n}^{m}(x)]f(x)\,dx$ ... | $\int_{0}^{1}x^{D-1}R_{n}^{m}(x)R_{n^{\prime}}^{m}(x)\,dx=\dots\,\delta_{n,n^{\prime}}$ ... | C |
This is achieved by using specific upper and lower triangular transvections to avoid using a discrete logarithm oracle. Building on Lemma 3.2 we construct transvections which are upper triangular matrices.
Here, as per Section 3.1, $\omega$ denotes a primitive element of $\mathbb{F}_{q}$... | Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$.
Then the elementary column operations can be used to reduce the other entries in row $r$ to zero. |
The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left... |
The idea is to eliminate all other entries in the $c$th column, namely to apply elementary row operations to make the entries in rows $i=r+1,\ldots,d$ of column $c$ equal to zero. Specifically, $g$ is multiplied on the left by the transvec... | Let $i\in\{1,\dotsc,d-1\}$. Getting the diagonal entry of $h$ at position $(i,i)$ to $1$ requires the following number of operations. We start by adding column $i+1$ to column $i$ as in Line 5. We alre... | B |
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85... | Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case, our method is similar to that of [MR3591945], with some differences. First we consider that T... |
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov... | The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis... | B |
We remark that the previously best known algorithms for finding the minimum area / perimeter all-flush triangle
take nearly linear time [6, 1, 2, 3, 23], that is, $O(n\log n)$ or $O(n\log^{2}n)$... | in the Rotate-and-Kill process,
and we are at the beginning of another iteration $(b^{\prime},c^{\prime})$ satisfying (2). | Then, during the Rotate-and-Kill process, the pair $(e_{b},e_{c})$ will meet all pairs that are not DEAD, which implies that the algorithm finds the minimum perimeter (all-... | The inclusion / circumscribing problems usually admit the property that the set of locally optimal solutions is pairwise interleaving [6]. Once this property is admitted and $k=3$, we show that
an iteration process (also referred to as Rotate-and-Kill) can be applied for searching all the locally optim... | Using a Rotate-and-Kill process (which is shown in Algorithm 5),
we find out all the edge pairs and vertex pairs in $\mathsf{U}_{r,s,t}$ that are not G-dead. | C |
Single Tweet Model Settings. For the evaluation, we shuffle the 180 selected events and split them into 10 subsets which are used for 10-fold cross-validation (we make sure to include near-balanced folds in our shuffle). We implement the 3 non-neural network models with Scikit-learn (scikit-learn.org). Furthermore, ne... | Single Tweet Classification Results. The experimental results are shown in Table 2. The best performance is achieved by the CNN+LSTM model with an accuracy of 81.19%. The non-neural network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy and the other two non-neural models are eve... |
We tested all models by using 10-fold cross validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; TS-SVM... |
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys... | . As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit afte... | A |
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$ ... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... |
where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the K-class SVM: | In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6)
of the SVM problem (eq. 4) and the associated | where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen... | C |
The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline,
we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi... | In the lower part of the pipeline, we extract features from tweets and combine them with the CreditScore to construct the feature vector in a time series structure called Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification.
| The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline,
we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi... | The effective cascaded model that engages both low and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402). The model uses the time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to...
As observed in (madetecting; ma2015detect), rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in (ma2015detect). W... | A |
$\mathrm{score}(\bar{a})=\sum_{m\in M}P(\mathcal{C}_{k}\mid e,t)\,P(\mathcal{T}_{\dots}\mid\mathcal{C}_{k})\,\mathsf{f}^{*}_{m}(\bar{a})$ ... | to add additional features from $\mathcal{M}^{1}$. The feature vector
of $\mathcal{M}_{LR}^{2}$ consists of ... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... |
We propose two sets of features, namely, (1) salience features (taking into account the general importance of candidate aspects), mainly mined from Wikipedia, and (2) short-term interest features (capturing a trend or timely change), mined from the query logs. In addition, we also leverage click-flow relatednes... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | C |
where $\Theta_{t_{0}:t_{1},a}=[\theta_{t_{0},a}\dots$ | the combination of Bayesian neural networks with approximate inference has also been investigated.
Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ... | The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models,
and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015]. | —i.e., the dependence on past samples decays exponentially, and is negligible after a certain lag—
one can establish uniform-in-time convergence of SMC methods for functions that depend only on recent states, see [Kantas et al., 2015] and references therein. | More broadly, one can establish uniform-in-time convergence for path functionals that depend only on recent states,
as the Monte Carlo error of $p_{M}(\theta_{t-\tau:t}\mid\mathcal{H}_{1:t})$ ... | C |
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients.
The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients.
For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | This very low threshold for now serves to measure very basic movements and to check the validity of the data.
Patients 11 and 14 are the most active, both having a median of more than 50 active intervals per day (corresponding to more than 8 hours of activity). | D |
Table 7: A list of the four image categories from the CAT2000 validation set that showed the largest average improvement by the ASPP architecture based on the cumulative rank across a subset of weakly correlated evaluation measures. Arrows indicate whether the metrics assess similarity |
The categorical organization of the CAT2000 database also allowed us to quantify the improvements by the ASPP module with respect to individual image classes. Table 7 lists the four categories that benefited the most from multi-scale information across the subset of evaluation metrics on the validation set: Noisy, Sat... |
Table 7: A list of the four image categories from the CAT2000 validation set that showed the largest average improvement by the ASPP architecture based on the cumulative rank across a subset of weakly correlated evaluation measures. Arrows indicate whether the metrics assess similarity |
Figure 3: A visualization of four example images from the CAT2000 validation set with the corresponding fixation heat maps, our best model predictions, and estimated maps based on the ablated network. The qualitative results indicate that multi-scale information augmented with global context enables a more accurate es... | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that result... | A |
Many existing algorithms constructing path decompositions are of theoretical interest only, and this disadvantage carries over to the possible algorithms computing the locality number or cutwidth (see Section 6) based on them. However, the reduction of 5.7 is also applicable in a purely practical scenario, since any ki... |
In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). Next, in Section 3, we discuss the concept of the locality number with some examples and some word-combinatorial considerations. The purpose of this section is to develop a better under... | The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local... |
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection.... |
In this section, we introduce polynomial-time reductions from the problem of computing the locality number of a word to the problem of computing the cutwidth of a graph, and vice versa. This establishes a close relationship between these two problems (and their corresponding parameters), which lets us derive several u... | C |
In [148] the authors adopt a 3D multi-scale CNN to identify pixels that belong to the RV.
The network has two convolutional pathways and their inputs are centered at the same image location, but the second segment is extracted from a down-sampled version of the image. | In their article Hong et al.[201] trained a DBN using image patches for the detection, segmentation and severity classification of Abdominal Aortic Aneurysm region in CT images.
Liu et al.[202] used an FCN with twelve layers for left atrium segmentation in 3D CT volumes and then refined the segmentation results of the ... | In their article Tran et al.[142] trained a four layer FCN model for LV/RV segmentation on SUN09, STA11.
They compared previous state-of-the-art methods along with two initializations of their model: a fine-tuned version of their model using STA11 and a Xavier initialized model with the former performing best in almost... | In their article Acharya et al.[85] trained a four layer CNN on AFDB, MITDB and CREI, to classify between normal, AF, atrial flutter and ventricular fibrillation.
Without detecting the QRS they achieved comparable performance with previous state-of-the-art methods that were based on R-peak detection and feature enginee... | This model compared with vanilla conv-deconv and u-net performs better by an average of 5% in terms of Dice.
Patravali et al.[140] trained a model based on u-net using Dice combined with cross entropy as a metric for LV/RV and myocardium segmentation. | B |
Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using... | Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster... | Oh et al. (2015) and Chiappa et al. (2017) show that learning predictive models of Atari 2600 environments is possible using appropriately chosen deep learning architectures. Impressively, in some cases the predictions maintain low $L_{2}$ error over timespans... | Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using... | have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control.
Our video models of Atari en... | B |
Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification.
Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke. | Figure 1: High level overview of a feed-forward pass of the combined methods.
$x_{i}$ is the input, $m$ is the Signal2Image module, $b_{d}$ is the 1D or 2D architecture ‘base ... | For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems.
An important property of an S2I is whether it consists of trainable para... | The spectrogram S2I results are contrary to the expectation that the interpretable time-frequency representation would help in finding good features for classification.
We hypothesize that the spectrogram S2I was hindered by its lack of trainable parameters. | The names of the classes are depicted at the right along with the predictions for this example signal.
The image between $m$ and $b_{d}$ depicts the output of the one layer CNN Signal2Image module, while the ‘signal as image’ and spectrogram h... | B |
The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To ensure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial along with their first and second derivatives. The trajectory design took into account six constra... | The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful desi... |
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas... | Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ... |
The whole-body climbing gait involves utilizing the entire body movement of the robot, swaying forwards and backwards to enlarge the stability margins before initiating gradual leg movement to overcome a step. This technique optimizes stability during the climbing process. To complement this, the rear-body climbing ga... | D |
Our solution uses an algorithm introduced by Boyar et al. [12] which achieves a competitive ratio of 1.5 using $O(\log n)$ bits of advice. We refer to this algorithm as Reserve-Critical in this paper and describe it briefly. See Figure 2 for an illustration. | Formally, on the arrival of a critical item, the algorithm places it in a critical bin, opening a new one if necessary. Each arriving tiny item $x$ is packed in the first critical bin which has enough space, with the restriction that the tiny items do not exceed a fraction 1/3 in these bins. If this fails, the... | bins
include two items of weight 1/2 (except possibly the last one) which gives a total weight of 1 for the bin. Critical bins all include a critical item of weight 1. So, if $w_{\ell}$, $w_{s}$ ... |
Intuitively, Rrc works similarly to Reserve-Critical except that it might not open as many critical bins as suggested by the advice. The algorithm is more “conservative” in the sense that it does not keep two thirds of many (critical) bins open for critical items that might never arrive. The smaller the value of $\alpha$... | The algorithm classifies items according to their size. Tiny items have their size in the range $(0,1/3]$, small items in $(1/3,1/2]$, critical items in $(1/2,2/3]$, and large items in $(2/3,1]$. In addition, the algorithm... | D |
In the context of online environments such as social media, an ADD scenario that is gaining interest, as we will see in Subsection 2.2, is the one known as early depression detection (EDD). In EDD the task is, given users’ data stream, to detect possibly depressive people as soon and as accurately as possible.
| Algorithms capable of dealing with this scenario are said to support incremental learning and/or incremental classification.
In the present article we will focus on incremental classification since, so far, it is the only EDD scenario we have data to compare to, as we will see in Subsection 2.2. | However, EDD poses really challenging aspects to the “standard” machine learning field.
The same as with any other ERD task, we can identify at least three of these key aspects: incremental classification of sequential data, support for early classification and, explainability111Having the ability to explain its ration... | In this article, we proposed SS3, a novel text classifier that can be used as a framework to build systems for early risk detection (ERD).
The SS3’s design aims at dealing, in an integrated manner, with three key challenging aspects of ERD: incremental classification of sequential data, support for early classification... | At this point, it should be clear that any attempt to address ERD problems, in a realistic fashion, should take into account 3 key requirements: incremental classification, support for early classification, and explainability.
Unfortunately, to the best of our knowledge, there is no text classifier able to support thes... | B |
We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is ... | process. As for global momentum, the momentum term $-(\mathbf{w}_{t}-\mathbf{w}_{t-1})/\eta$ contains global information from all the workers. Since we are... | We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is ... | We can find that both local momentum and global momentum implementations of DMSGD are equivalent to the serial MSGD if no sparse communication is adopted. However, when it comes to adopting sparse communication, things become different. In the later sections, we will demonstrate that global momentum is better than loca... |
We find that due to the momentum factor masking (mfm) in DGC (Lin et al., 2018), DGC (w/ mfm) will degenerate to DSGD rather than DMSGD if sparse communication is not adopted, while GMC will degenerate to DMSGD if sparse communication is not adopted. | D |
The Extrema-Pool indices activation function (defined in Algorithm 2) keeps only the index of the activation with the maximum absolute amplitude from each region outlined by a grid as granular as the kernel size $m^{(i)}$ and zeros out the ... |
The Extrema activation function (defined in Algorithm 3) detects candidate extrema using zero crossings of the first derivative, then sorts them in descending order and gradually eliminates those extrema that have smaller absolute amplitude than a neighboring extremum within a predefined minimum extrema distance ($med$... | The sparser an activation function is, the more it compresses, sometimes at the expense of reconstruction error.
However, by visual inspection of Fig. 5 one could confirm that the learned kernels of the SAN with sparser activation maps (Extrema-Pool indices and Extrema) correspond to the reoccurring patterns in the data... | Imposing a $med$ on the extrema detection algorithm makes $\bm{\alpha}$ sparser than in the previous cases and solves the problem of double extrema activations that Extrema-Pool indices have (as shown in Fig. 1).
The sparsity parameter in this case ... | Figure 1: Visualization of the activation maps of five activation functions (Identity, ReLU, top-k absolutes, Extrema-Pool indices and Extrema) for 1D and 2D input in the top and bottom row respectively.
The 1D input to the activation functions is denoted with the continuous transparent green line using an example from... | A |
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change its strategy in one iteration based on the current game state, and then another UAV ch... |
In this part, we investigate the influence of environment dynamics on the network states. With different scenarios’ dynamic degrees $\tau\in(0,\infty)$, PBLLA and SPBLLA will converge to the maximizer of the goal function with different strategy-altering probabilities. Fig. 6 presents the influence... | We organize this paper as follows. In Section II, we introduce the related works. In Section III, we first introduce the UAV’s power control in the multi-channel communication and coverage problems, then form a system model in highly dynamic scenarios. Moreover, in Section IV, we formulate our work as an aggregative ga... |
We establish a multi-factor system model based on large-scale UAV networks in highly dynamic post-disaster scenarios. Considering the limitations in existing algorithms, we devise a novel algorithm which is capable of updating strategies simultaneously to fit the highly dynamic environments. The main contributions of ... | We propose a novel UAV ad-hoc network model with the aggregative game which is compatible with the large-scale highly dynamic environments, in which several influences are coupled together. In the aggregative game, the interference from other UAVs can be regarded as the integral influence, which makes the model more pr... | C |
nodal locations), is equal to one. This property also holds for the pyramid side functions, i.e., ∑_{n=1}^{N_n} ϕ_n(𝐫) = ∑_{n=1}^{N_n} ψ_n(𝐫) = 1. Each nodal function is assembled from the side functions of its e_n adjacent elements, ϕ_n(𝐫) = ∑^{e_n} ψ_n^e(𝐫), and the unknown field is approximated as u(𝐫) ≈ U(𝐫) = ∑_{n=1}^{N_n} U_n ϕ_n(𝐫), where K = ∑_{n=1}^{N_n} e_n = 3N_e ...
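To make the partition-of-unity and interpolation properties concrete, here is a 1D analogue using standard piecewise-linear hat functions (the mesh, nodal values, and test field are illustrative assumptions, not the paper's pyramid construction):

```python
import numpy as np

nodes = np.array([0.0, 1.0, 3.0, 4.0])   # assumed 1D mesh
x = np.linspace(0.0, 4.0, 9)             # evaluation points

# phi_n: piecewise-linear "hat" equal to 1 at node n and 0 at all others
phi = np.array([np.interp(x, nodes, np.eye(len(nodes))[n])
                for n in range(len(nodes))])

assert np.allclose(phi.sum(axis=0), 1.0)  # partition of unity, as in the text

U_nodal = 2.0 * nodes + 1.0               # nodal values of u(r) = 2r + 1
U = phi.T @ U_nodal                       # U(r) = sum_n U_n phi_n(r)
assert np.allclose(U, 2.0 * x + 1.0)      # linear fields reproduced exactly
```
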
When using the framework, one can further require reflexivity on the comparability functions, i.e. f(x_A, x_A) = 1_A...
a&\text{if }u\neq\texttt{null},v\neq\texttt{null}... | Intuitively, if an abstract value xAsubscript𝑥𝐴x_{A}italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT of ℒAsubscriptℒ𝐴\mathcal{L}_{A}caligraphic_L start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT is interpreted as 1111 (i.e., equality)
by h_A...
Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
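A sketch of such a comparability function with reflexivity relaxed on null, mirroring the case analysis above (None plays the role of null; the abstract values `a` and `b` are illustrative placeholders):

```python
def comparability(u, v, a="a", b="b"):
    """Comparability of two values where None stands for null.

    Returns 1 for equal known values, `a` for distinct known values,
    `b` for two nulls (possibly equal, possibly not), and 0 when exactly
    one value is missing.
    """
    if u is not None and u == v:
        return 1
    if u is not None and v is not None:
        return a
    if u is None and v is None:
        return b
    return 0
```
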
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to assess the effect of Dropout on variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to assess the effect of Dropout, because in such an environment the optim...
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and widely differing predictions along the learning trajectory across different episodes b...
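One way to observe this variance empirically is to treat dropout masks as an ensemble of Q-estimates; a hedged numpy sketch (architecture, shapes, and dropout rate are illustrative, not the paper's Dropout-DQN implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def q_forward(s, W1, W2, p_drop=0.1, train=True):
    """Forward pass of a tiny Q-network with inverted dropout on the hidden
    layer; repeated stochastic passes yield a spread of Q-estimates."""
    h = np.maximum(0.0, s @ W1)               # ReLU hidden layer
    if train:
        mask = rng.random(h.shape) >= p_drop  # inverted dropout
        h = h * mask / (1.0 - p_drop)
    return h @ W2                             # one Q-value per action

W1 = rng.normal(size=(4, 32))                 # state dim 4 (e.g. CartPole)
W2 = rng.normal(size=(32, 2))                 # two actions
s = rng.normal(size=4)
qs = np.stack([q_forward(s, W1, W2) for _ in range(100)])
q_mean, q_std = qs.mean(axis=0), qs.std(axis=0)  # spread ~ estimation variance
```
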
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CartPole problem from the Class...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a fundamentally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein...
Mask R-CNN has also been used for segmentation tasks in medical image analysis such as automatically segmenting and tracking cell migration in phase-contrast microscopy (Tsai et al., 2019), detecting and segmenting nuclei from histological and microscopic images (Johnson, 2018; Vuola et al., 2019; Wang et al., 2019a, b... | The quantitative evaluation of segmentation models can be performed using pixel-wise and overlap based measures. For binary segmentation, pixel-wise measures involve the construction of a confusion matrix to calculate the number of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) pix... |
As one of the first high-impact CNN-based segmentation models, Long et al. (2015) proposed fully convolutional networks for pixel-wise labeling. Their networks up-sample (deconvolve) the output activation maps, from which the pixel-wise output can be calculated. The overall architecture of the network is visualized ...
Figure 13: Comparison of cross entropy and Dice losses for segmenting small and large objects. The red pixels show the ground truth and the predicted foregrounds in the left and right columns respectively. The striped and the pink pixels indicate false negative and false positive, respectively. For the top row (i.e., ... |
Two popular overlap-based measures used to evaluate segmentation performance are the Sørensen–Dice coefficient (also known as the Dice coefficient) and the Jaccard index (also known as the intersection over union or IoU). Given two sets 𝒜 and ℬ, these metrics are def...
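Both measures take only a few lines for binary masks; a sketch (note the identity IoU = Dice / (2 − Dice), useful as a sanity check):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice = 2|A∩B| / (|A| + |B|); Jaccard/IoU = |A∩B| / |A∪B|,
    for binary masks `pred` and `gt` (nonzero = foreground)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return dice, iou

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
dice, iou = dice_and_iou(pred, gt)
```
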
We use a graph that encodes the similarity of all words in the vocabulary.
Each graph signal represents a review and consists of a binary vector with size equal to the vocabulary, which takes value 1 for each word that appears at least once in the review, and 0 otherwise.
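This construction can be sketched as follows, with an illustrative toy vocabulary (the real vocabulary and the similarity graph come from the corpus):

```python
import numpy as np

vocab = ["good", "bad", "plot", "acting", "boring"]   # illustrative vocabulary
word_index = {w: i for i, w in enumerate(vocab)}

def review_to_signal(review):
    """Binary graph signal: entry n is 1 iff vocabulary word n occurs."""
    x = np.zeros(len(vocab))
    for w in review.lower().split():
        if w in word_index:
            x[word_index[w]] = 1.0
    return x

signal = review_to_signal("Good acting but boring plot")
```
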
In each experiment we adopt a fixed network architecture, MP(32)-P(2)-MP(32)-P(2)-MP(32)-AvgPool-Softmax, where MP(32) stands for an MP layer as described in (1), configured with 32 hidden units and ReLU activations, P...
The other baseline (TCN) is a network where the hidden layers are 1D convolutions with different dilation rates [56]. | Then, we train a simple classifier consisting of a word embedding layer [53] of size 200, followed by a dense layer with a ReLU activation, a dropout layer [54] with probability 0.5, and a dense layer with sigmoid activation.
After training, we extract the embedding vector of each word in the vocabulary and construct a... | D |
Sparse connectivity maintains the tree structures and has fewer weights to train. In practice, sparse weights require a special differentiable implementation, which can drastically decrease performance, especially when training on a GPU. Full connectivity optimizes all parameters of the fully connected network.
Massive...
Additionally, many weights are set to zero, creating an inefficient representation. For both reasons, the mappings do not scale and are only applicable to sim...
In this work, we address this issue by proposing an imitation learning-based me... | In this work, we present an imitation learning approach to generate neural networks from random forests, which results in very efficient models.
We introduce a method for generating training data from a random forest that creates any amount of input-target pairs. With this data, a neural network is trained to imitate t... | For training, we generate input-target pairs (x,y)𝑥𝑦(x,y)( italic_x , italic_y ) as described in the last section.
These training examples are fed into the training process to teach the network to predict the same results as the random forest. To avoid overfitting, the data is generated on-the-fly so that each traini... | B |
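The on-the-fly pair generation can be sketched as follows; `toy_forest_predict` is an illustrative stand-in for a trained random forest, and the sampling ranges are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_forest_predict(X):
    """Stand-in for a trained random forest (illustrative axis-aligned
    rules, not a real ensemble): one class label per row."""
    return ((X[:, 0] > 0.5).astype(int) + (X[:, 1] > 0.3).astype(int) > 1).astype(int)

def generate_imitation_data(predict_fn, n_samples, n_features, low=0.0, high=1.0):
    """Sample random inputs over the feature ranges and label them with
    the forest's own predictions, as in the on-the-fly scheme above."""
    X = rng.uniform(low, high, size=(n_samples, n_features))
    return X, predict_fn(X)

X, y = generate_imitation_data(toy_forest_predict, 1000, 2)
# X, y can now be fed to any network trainer as imitation targets
```
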
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... |
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... | Assuming the transition dynamics are known but only the bandit feedback of the received rewards is available, the work of Neu et al. (2010a, b); Zimin and Neu (2013) establishes an H2|𝒜|T/βsuperscript𝐻2𝒜𝑇𝛽H^{2}\sqrt{|\mathcal{A}|T}/\betaitalic_H start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT square-root start_ARG |... |
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient... |
Our work is closely related to another line of work (Even-Dar et al., 2009; Yu et al., 2009; Neu et al., 2010a, b; Zimin and Neu, 2013; Neu et al., 2012; Rosenberg and Mansour, 2019a, b) on online MDPs with adversarially chosen reward functions, which mostly focuses on the tabular setting. | D |
The computational cost of performing inference should match the (usually limited) resources in deployed systems and exploit the available hardware optimally in terms of time and energy.
Computational efficiency, in particular, also includes mapping the representational efficiency to available hardware structures. | While the previous section indicates that quantized DNNs do not provide throughput improvements on general-purpose processors without explicit hardware support, there are other hardware platforms where quantization is mandatory.
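As one concrete instance of representational efficiency, uniform symmetric quantization stores a weight tensor as low-bit integers plus a single scale factor; this is a generic sketch, not a method proposed in the text:

```python
import numpy as np

def quantize_uniform(w, n_bits=8):
    """Map float weights to signed n_bits integers with a single scale
    (int8 storage assumes n_bits <= 8; w must not be all zeros)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.linspace(-1.0, 1.0, 11).astype(np.float32)
q, scale = quantize_uniform(w)
```

The round-trip error is bounded by half a quantization step, scale/2.
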
Data-flow architectures, as typically found on FPGAs, where the main objective is to keep a...
Note that this requires observing overall constraints such as pre... | A pre-trained full-precision DNN is then quantized using these bit widths and fine-tuned for one epoch to obtain a reward signal that is subsequently used to update the controller.
Their method incorporates hardware-specific constraints, such as latency and energy consumption, that must be met by the controller. | This is in contrast to theoretical inference costs, such as numbers of parameters and required mathematical operations, that often do not reflect inference running time on real hardware well.
Furthermore, power constraints are key for autonomous and embedded systems, as the device lifetime for a given battery charge ne... | D |
Choose the canonical inclusion map X ↪ X′ as ρ, the identity map on Λ as π, and apply Theorem 5.
∎ | By Azumaya’s theorem [10], persistence barcodes, whenever they exist, are unique: any two persistence barcodes associated to a given V∗subscript𝑉V_{*}italic_V start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT must agree (up to reordering). The most important existence result for persistence barcodes is Crawley-Boevey’s theorem ... | Finally, recently we became aware of [81, Lemma 5.1], which is similar to Theorem 5. The author considers spaces with numerable covers (i.e. the spaces admit locally finite partition of unity subordinate to the covers), whereas in our version that condition is automatically satisfied since we only consider paracompact ... | In [80, Theorem 8.10], Z. Virk provided a proof of the Corollary below which takes place at the simplicial level. The proof we give below exploits the hyperconvexity properties of L∞(X)superscript𝐿𝑋L^{\infty}(X)italic_L start_POSTSUPERSCRIPT ∞ end_POSTSUPERSCRIPT ( italic_X ) and also our isomophism theorem, Theorem... |
A result similar to Corollary 4.1 was already proved in [24, Lemma 3.4] for finite index sets, whereas in our version index sets can have arbitrary cardinality. In [75, Theorem 25, Theorem 26], the authors prove a simplicial-complex version of Corollary 4.1 for finite index sets and invoke a certain functorial v...
Some attempts to enrich scatterplots with automatically-derived statistical descriptions of patterns [38, 39, 40] have shown that static mappings may be useful in simple scenarios, but the complex relations between low- and high-dimensional space in non-linear projections cannot be well represented. | Labels
In order to better explain the contribution of t-viSNE, the data sets used in our use cases contain predefined labels, which is not the case in general when using unsupervised learning techniques such as t-SNE. Labels are not required by t-viSNE, however; one might use the results of...
Beyond the ones discussed so far, some interactive tools have been designed either with specific DR methods in mind, such as SIRIUS [49] and FocusChanger [50], or for specific domains, such as Cytosplore [11]. t-SNE can also be used to explore and judge different clustering partitions of the same data set, as in ...
Interactive solutions for specific domains such as text [19, 20] and images [41, 7] use inherent characteristics of the data in order to explain layouts, however, they are not easi... | D |
Aquatic animals: This type of meta-heuristic algorithm focuses on aquatic animals. The aquatic ecosystem in which they live has inspired different exploration mechanisms. It contains popular algorithms such as Krill Herd (KH, [259]), Whale Optimization Algorithm (WOA, [380]), and algorithms based on the echolocation u... |
50 years of metaheuristics - 2024 [40]: This overview traces the last 50 years of the field, starting from the roots of the area to the latest proposals to hybridize metaheuristics with machine learning. The revision encompasses constructive (GRASP and ACO), local search (iterated local search, Tabu search, variable n... | Foraging: Rather than the movement strategy, in some other algorithmic variants it is the mechanism used to obtain their food what drives the behavior of the animal and, ultimately, the design of the meta-heuristic algorithm. This foraging behavior can in turn be observed in many flavors, from the tactics used by the a... |
Microorganisms: Meta-heuristics based on microorganisms relate to the food search performed by bacteria. A bacterial colony moves to search for food; once the bacteria have found and consumed it, they split to search again in the environment. Other types of meta-heuristics that can be part of this category ar...
Terrestrial animals: Meta-heuristics in this category are inspired by foraging or movements of terrestrial animals. The most renowned approach within this category is the classical ACO meta-heuristic [115], which replicates the stigmergic mechanism used by ants to locate food sources and inform others of the existence of the...
Figure 1: Framework of AdaGAE. k₀ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update ...
Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, a capability that k-means does not provide. Therefore,...
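A common building block of such methods is a sparse k-nearest-neighbor graph; a hedged sketch (this is the generic construction, not AdaGAE's generative model of Eq. (7)):

```python
import numpy as np

def knn_graph(X, k):
    """Connect each sample to its k nearest neighbors (Euclidean metric),
    then symmetrize; returns a binary adjacency matrix without self-loops."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # exclude self-loops
    A = np.zeros_like(d)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbors per node
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, nn.ravel()] = 1.0
    return np.maximum(A, A.T)                  # symmetrize
```
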
In this paper, we propo... |
In recent years, GCNs have been studied extensively to extend neural networks to graph-structured data. How to design a graph convolution operator is a key issue and has attracted much attention. Most operators can be classified into two categories: spectral methods [24] and spatial methods [25].
The downside of this approach is that the Spoofer Project requires users to download, compile, and execute software, which also needs administrative privileges to run, once per measurement. This requires not only technically knowledgeable volunteers who agree to run untrusted code, but also networks which agree to...
|
Our work provides the first comprehensive view of ingress filtering in the Internet. We showed how to improve the coverage of ingress filtering measurements to include many more ASes that were previously not studied. Our techniques allow us to cover more than 90% of the Internet ASes, in contrast to best ...
For each batch T from 3 through 10, batches 1, 2, …, T−1 were used to train skill NN and context+skill NN models for 30 random initializations of the starting weights. Accuracy was measured by classifying examples from batch T (Fig. 3A, Table 1, Skill...
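This evaluation protocol can be sketched as follows; the callback stands in for training and testing the actual skill / context+skill networks:

```python
# Hedged sketch of the protocol described above: for each batch T, train on
# batches 1..T-1 and test on batch T. Batches are 1-indexed in the text.
def drift_evaluation(batches, train_and_eval, first_test=3):
    """batches: list of per-batch datasets;
    train_and_eval(train_batches, test_batch) -> accuracy."""
    scores = {}
    for T in range(first_test, len(batches) + 1):
        scores[T] = train_and_eval(batches[:T - 1], batches[T - 1])
    return scores
```
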
|
Second, skill NN and context+skill NN models were compared. The context-based network extracts features from preceding batches in sequence in order to model how the sensors drift over time. When added to the feedforward NN representation, such contextual information resulted in improved ability to compensate for senso... |
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal... | While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape... | C |
Unfortunately, T_𝒜(P_λ) ⩽ T_𝒜(P)...
Together, these two types of waiting ensure that (i) the time needed by ℬ is monotone in |Q| and (ii) the total expected time needed by ℬ equals the calculated upper bound for 𝒜...
To be precise, there are two types of waiting, best explained by an example. | Unfortunately, T𝒜(Pλ)⩽T𝒜(P)subscript𝑇𝒜subscript𝑃𝜆subscript𝑇𝒜𝑃T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)italic_T start_POSTSUBSCRIPT caligraphic_A end_POSTSUBSCRIPT ( italic_P start_POSTSUBSCRIPT italic_λ end_POSTSUBSCRIPT ) ⩽ italic_T start_POSTSUBSCRIPT caligraphic_A end_POSTSUBSCRIPT ( italic... | 3:Compute the sets ℬ1(1),…,ℬ|𝒯|+1(1)superscriptsubscriptℬ11…superscriptsubscriptℬ𝒯11\mathcal{B}_{1}^{(1)}\!,\ldots,\mathcal{B}_{|\mathcal{T}|+1}^{(1)}caligraphic_B start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT , … , caligraphic_B start_POSTSUBSCRIPT | caligraphic_T | + 1 end_... | B |
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
| The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
|
During the research and writing of this paper, the second author was affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment...
For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups: | The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem... | B |
P(a | 𝒬, ℐ) = f_VQA(v, 𝒬).
As expected of any real-world dataset, VQA datasets also contain dataset biases (Goyal et al., 2017). The VQA-CP dataset (Agrawal et al., 2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea...
Without additional regularization, existing VQA models, such as the baseline model used in this work, UpDn (Anderson et al., 2018), tend to rely on the linguistic priors P(a | 𝒬) to answer questions. Such models fail on VQA-CP, because the priors in ...
The VQA-CP dataset (Agrawal et al., 2018) showcases this phenomenon by incorporating different question-type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende...
We crawled the 3.9 million selected URLs using Scrapy (https://scrapy.org/) for about 48 hours between the 4th and 10th of August 2019, for a few hours each day. 3.2 million URLs were successfully crawled, henceforth referred to as candidate privacy policies, while 0.4 million led to error pages and 0.3 million URLs w...
In order to address the requirement of a language model for the privacy domain, we created PrivBERT. BERT is a contextualized word representation model that is pretrained using bidirectional transformers (Devlin et al., 2019). It was pretrained on the masked language modelling and the next sentence prediction tasks an... | The complete set of documents was divided into 97 languages and an unknown language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.
|
Language Detection. We focused on privacy policies written in the English language, to enable comparisons with prior corpora of privacy policies. To identify the natural language of each candidate document, we used the open-source Python package Langid (Lui and Baldwin, 2012). Langid is a Naive Bayes-based classifier ... |
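The filtering step can be sketched as follows; `detect` stands in for `langid.classify`, which returns a (language, confidence) pair:

```python
# Hedged sketch of the language-filtering step of the pipeline; the detector
# is injected so the snippet stays self-contained.
def filter_english(docs, detect):
    """Keep only documents whose detected language is English."""
    return [doc for doc in docs if detect(doc)[0] == "en"]
```
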
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English-language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies (Amos et al., 2020)...
Furthermore, we allow the users to define specific weights for each metric and focus on the models that perform well for both the entire data space and specific instances.
Finally, StackGenVis does not support direct manipulation of model ensembles [47], as it focuses on exploration of a large solution space before nar... | Stacking methods (or stacked generalizations) refer to a group of ensemble learning methods [45] where several base models are trained and combined into a metamodel with improved predictive power [63]. In particular, stacked generalization can reduce the bias and decrease the generalization error when compared to the u... | Visualization systems have been developed for the exploration of diverse aspects of bagging, boosting, and further strategies such as “bucket of models”.
Stacking, however, has so far not received comparable attention from the InfoVis/VA communities; indeed, we have not found any literature describing the construction ...
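For concreteness, stacked generalization in its simplest regression form can be sketched as follows (the data, the fixed base predictors, and the least-squares metamodel are illustrative stand-ins; a real stack would fit the metamodel on held-out predictions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

# "Base models": fixed predictors standing in for trained level-0 learners
base_models = [
    lambda X: X[:, 0],
    lambda X: X[:, 1],
    lambda X: X @ np.array([0.5, -1.0, 0.25]),
]

Z = np.column_stack([m(X) for m in base_models])  # level-1 feature matrix
w, *_ = np.linalg.lstsq(Z, y, rcond=None)         # metamodel = linear combiner
y_hat = Z @ w
```
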
Chen et al. [6] utilize a bucket of latent Dirichlet allocation (LDA) mod... | Predictions’ Space.
The goal of the predictions’ space visualization (view (f) of the StackGenVis interface) is to show an overview of the performance of all models of the current stack for different instances.
We thus have 3 cases, depending on the value of the tuple (p(v,[010]), p(v,[323]), p(v,[313]), p(v,[003]))...
(v,[010]), (v,[323]), and (v,[112]), we can confirm that
(v,[113]), we can confirm that in the 3 cases, these
Similarly, when f=[112], | \{\overline{0}, \overline{1}, \overline{2}, \overline{3}, [013], [010], [323], [313], [112], [003], [113]\}. | B |
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personali... | We use Transformer [Vaswani et al., 2017] as the base model in the dialogue generation experiment.
In Persona, we use pre-trained Glove embedding [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019]. |
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as ”tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla ... | In the field of Natural Language Processing (NLP), the abundance of training data plays a crucial role in the performance of deep learning models [Dodge et al., 2021]. However, numerous NLP applications face a substantial challenge due to the scarcity of annotated data [Schick and Schütze, 2021]. For example, in person... | In text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | A |
i \in \mathcal{I}_{n_{s}} = \left[1, I = \lceil \frac{2\pi}{BW_{\text{a}}} \rceil\right], j \in \mathcal{J}_{m_{s}} = \left[1, J = \lceil \frac{\pi}{BW_{\text{e}}} \rceil\right] | After the discussion on the characteristics of CCA, in this subsection, we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the ... | For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac...
Given the maximum resolution of the codebook, we continue to discuss the characteristic of the multi-resolution and the beamwidth with the CCA codebook. For the multi-resolution codebook, the variable resolution is tuned by the beamwidth, which is determined by the number of the activated elements [12]. Note that the ... |
The conventional UPA/ULA codebook design mainly controls the beamwidth by the subarray activation/deactivation with different numbers of elements. In contrast, the codebook for DRE-covered CCA focuses on both the number of subarray elements and the specific subarray localization. The number of subarray elements determ... | D |
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | After the merging the total degree of each vertex increases by t\delta(A_{0},B_{0})^{2}.
We perform the... | The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges.
The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from | To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | C |
Related Work. When the value function approximator is linear, the convergence of TD is extensively studied in both continuous-time (Jaakkola et al., 1994; Tsitsiklis and Van Roy, 1997; Borkar and Meyn, 2000; Kushner and Yin, 2003; Borkar, 2009) and discrete-time (Bhandari et al., 2018; Lakshminarayanan and |
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... | B |
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ... | Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de... | Table 6 shows that though the BLEU improvements start saturating with deep depth-wise LSTM Transformers of more than 12 layers, depth-wise LSTM is able to ensure convergence of up to 24-layer Transformers. The experiments also show that the size differences between these datasets did not lead to differences... | In our deep Transformer experiments, Table 6 shows that our depth-wise LSTM Transformer with fewer layers, parameters and computations can lead to competitive/better performance and faster decoding speed than vanilla Transformers with more layers but a similar BLEU score, and the depth-wise LSTM Transformer is in fact ... | We show that the 6-layer Transformer using depth-wise LSTM can bring significant improvements in both WMT tasks and the challenging OPUS-100 multilingual NMT task. We show that depth-wise LSTM also has the ability to support deep Transformers with up to 24 layers, and that the 12-layer Transformer using depth-wis... | B |
Y_{n} for some n, hence g^{-1}(U) = g_{n}^{-1}(U)... | commute. As I is nonempty, consider some i \in I; we have
g_{i} = \mathrm{id}_{i} \circ g = \mathrm{id}_{i} \circ g^{\prime} | f_{i} \circ g = g_{i} for every i \in I. In particular, consider
a compact open set of X: it can be written as f_{i}^{-1}(K)... | We finally prove that X is the limit in \mathbf{PreSpec}. Consider \{g_{i} \colon Y \to X_{i}\}_{i \in I}... | Assume that \{g_{i} \colon Z \to Y_{i}\}_{i \in I} is a collection
o... | A |
As mentioned above, most previous learning methods correct the distorted image based on the distortion parameters estimation. However, due to the implicit and heterogeneous representation, the neural network suffers from the insufficient learning problem and imbalance regression problem. These problems seriously limit... |
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify... | The ordinal distortion represents the image feature in terms of the distortion distribution, which is jointly determined by the distortion parameters and location information. We assume that the camera model is the division model, and the ordinal distortion \mathcal{D} can be defined as
| (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed o... | In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl... | B |
First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being B/8. If B/8 \geq 128, we will use the gradient accumulation [28]
with the batch size being 128. ... | Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve performance similar to MSGD in terms of training loss and test accuracy.
In large-batch training, SNGM achieves better training loss and test accuracy than the fou... | Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b... | We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework.
Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings. | We can observe that for almost all batch sizes, the methods that adopt normalized gradients, including LARS, CLARS, and SNGM, achieve better performance than others.
Compared to LARS and CLARS, SNGM achieves better test accuracy for different batch sizes. | D |
There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality |\mathcal{S}|. Our polynomial-scenarios algorithms are carefully designed to make |\mathcal{S}|...
There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality |\mathcal{S}|. Our polynomial-scenarios algorithms are carefully designed to make |\mathcal{S}|...
For this guess R, the polynomial-scenarios algorithm returns a stage-I set F_{I}, and a stage-II set F_{A} for each A \in Q. Our polynomial-sce...
Following the lines of [26], it may be possible to replace this dependence with a notion of dimension of the underlying convex program. However, such general bounds would lead to significantly larger complexities, consisting of very high order polynomials of n, m. | An outbreak is an instance from \mathcal{D}, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions.
To continue this example, there may be further constraints on F_{I}... | C |
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than i.i.d. graph sequences as in [12]-[15],
and additive and... | I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition.
The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi... |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... |
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independency with identical distribution, Markovian... | We have studied the distributed stochastic subgradient algorithm for the stochastic optimization by networked nodes to cooperatively minimize a sum of convex cost functions.
We have proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditio... | B |
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ... | Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ... | Note that, the application scenarios of differential privacy and the models of k-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of k-anonymity family sanitize the origi...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics... | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i... | C |
3D-FUTURE dataset is a recently public large-scale indoor dataset with 34 categories. Following the official splits, we adopt 12,144 images for training, 2,024 for validation and 6,072 for testing. From the size distribution of bounding boxes in 3D-FUTURE and COCO shown in Figure 1, the medium object size of 3D-FUTURE ... | Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.... | PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | Table 2: PointRend’s step-by-step performance on our own validation set (splitted from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “... | Table 3: PointRend’s performance on testing set (trackB). “EnrichFeat” means enhance the feature representation of coarse mask head and point head by increasing the number of fully-connected layers or its hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvements, we guess... | C |
I(f) < 1, \ \mbox{and} \ H(|\hat{f}|^{2}) > \frac{n}{n+1}\log n. | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... |
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on \{-1,1\}^{n} which have modulus 1 fails. This solves a question raised by Gady Kozma s... | (0 \log 0 := 0). The base of the \log does not really matter here. For concreteness we take the \log to base 2. Note that if f has L_{2} norm 1 then the sequence \{|\hat{f}(A)|^{2}\}_{A \subseteq [n]}... | B |
The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and... |
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202... |
The rest of the paper is organized as follows. Section 2 presents our problem definition. Section 3 establishes the minimax regret lower bound for nonstationary linear MDPs. Section 4 and Section 5 present our algorithms LSVI-UCB-Restart, Ada-LSVI-UCB-Restart and their dynamic regret bounds. Section 6 shows our experi... | In this section, we derive minimax regret lower bounds for nonstationary linear MDPs in both inhomogeneous and homogeneous settings, which quantify the fundamental difficulty when measured by the dynamic regret in nonstationary linear MDPs. More specifically, we consider inhomogeneous setting in this paper, where the t... |
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inh... | B |
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... | Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst... | Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a... | D |
Although GCN and GAT are generally regarded as inductive models for graph representation learning, our analysis in previous sections suggests their limited applicability on relational KG embedding. In further validation of this, we compare the performance of decentRL with AliNet and GAT on datasets containing new enti... | Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg... |
We conduct experiments to explore the impact of the number of unseen entities on the performance in open-world entity alignment. We present the results on the ZH-EN datasets in Figure 6. Clearly, the performance gain achieved by leveraging our method significantly increases when there are more unseen entities. For ex...
The results in Table 10 demonstrate that all variants of decentRL achieve state-of-the-art performance on Hits@1, empirically proving the superiority of using neighbor context as the query vector for aggregating neighbor embeddings. The proposed decentRL outperforms both decentRL w/ infoNCE and decentRL w/ L2, provid... | Table 4 presents the results of conventional entity alignment. decentRL achieves state-of-the-art performance, surpassing all others in Hits@1 and MRR. AliNet [39], a hybrid method combining GCN and GAT, performs better than the methods solely based on GAT or GCN on many metrics. Nonetheless, across most metrics and da... | A |