| context | A | B | C | D | label |
|---|---|---|---|---|---|
| After insertion of these three formulas into (2.2)–(2.3), the derivatives of $R_{n}^{m}\cong x^{m}F$ ... | ${R_{n}^{m}}''/{R_{n}^{m}}'$ ... | ${R_{n}^{m}}''(x)\cong$ ... | ${R_{n}^{m}}'''(x)\cong$ ... | ${R_{n}^{m}}'(x)\cong$ ... | D |
| This is achieved by using specific upper and lower triangular transvections to avoid using a discrete logarithm oracle. Building on Lemma 3.2 we construct transvections which are upper triangular matrices. Here, as per Section 3.1, $\omega$ denotes a primitive element of $\mathbb{F}_{q}$ ... | Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$. Then the elementary column operations can be used to reduce the other entries in row $r$ to zero. | Let $i\in\{1,\dotsc,d-1\}$. Getting the diagonal entry of $h$ at position $(i,i)$ to $1$ requires the following number of operations. We start by adding the column $i+1$ to column $i$ as in Line 5. We alre... | The idea is to eliminate all other entries in the $c$th column, namely to apply elementary row operations to make the entries in rows $i=r+1,\ldots,d$ of column $c$ equal to zero. Specifically, $g$ is multiplied on the left by the transvec... | The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left... | D |
| where $\Omega\subset\mathbb{R}^{d}$, with $d=2$ or $3$ for simplicity, is an open bounded domain with polyhedral boundary $\partial\Omega$, and the symmetric tensor $\mathcal{A}\in[L^{\infty}(\Omega)]^{d\times d}_{\mathrm{sym}}$ ... | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ... | As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85... | In [MR2718268] it is shown that the number of eigenvalues that are very large is related to the number of connected sub-regions on $\bar{\tau}\cup\bar{\tau}'$ with large coefficien... | C |
| Similarly, from a $P$-stable triangle $A'B'C'$, we can also construct $\triangle ABC$ ... | Our algorithm given in Section 4 (denoted by Alg-One) is different from Alg-DS. First, step 1 of Alg-One sets the initial value of $(r,s,t)$ differently from the initial value $(1,2,3)$ used by Alg-DS. | Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is called “involved” by its authors, as it contains complicated subroutines for handling many subcases. | Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted) whereas Alg-CM computes at most $5n$ triangles (proved in [8]), and so does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | B |
| Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys... | In the lower part of the pipeline, we extract features from tweets and combine them with the creditscore to construct the feature vector in a time series structure called the Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification. | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesired at critical moments. We trade this off by debunking at single tweet le... | As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit afte... | The processing pipeline of our classification approach is shown in Figure 2. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Cred... | D |
| The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile continuing to optimize long after we have zero training ... | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a... | We should not rely on plateauing of the training loss, or on the loss (logistic or exp or cross-entropy) evaluated on validation data, as measures to decide when to stop. Instead, we should look at the $0$–$1$ error on the validation dataset. We might improve the validation and test errors even when the decrease ... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set, for which $\exists\,\mathbf{x}\in\mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}}<0$ ... | B |
| Text Features are derived from a tweet’s text content. We consider 16 text features including lengthOftweet and smile (contains :->, :-), ;->, ;-) ...), sad, exclamation, I-you-heshe (contains first-, second-, and third-person pronouns). In addition, we use the Natural Language Toolkit ... | The performance of user features is similar to that of the Twitter features; they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours of the user feature group is UserTweetsPerDays and it is the best feature overall in the first 4 hours, but its rank decreases with ... | For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even... | In this section, we compare the performance of our model with the human rumor debunking websites snopes.com and urbanlegend.com. Snopes has its own Twitter account (https://twitter.com/snopes). They regularly post tweets via this account about rumors which they collected and verified. We consider the creation time o... | Twitter Features refer to basic Twitter features, such as hashtags, mentions, retweets. In addition, we derive three more URL-based features. The first is the WOT (trustworthy-based) score, which is crawled from the APIs of WOT.com (https://www.mywot.com/en/api). The second is domain categories, which we have collected fr... | D |
| $\mathsf{f}^{*}=\arg\min_{f}\sum_{\forall a}\mathcal{L}\big(\sum_{k}P(\mathcal{C}_{k}\mid a,t)\sum_{l=1}^{m}P(\mathcal{T}_{l}\mid a,t,\mathcal{C}_{k})\,\hat{y_{a}},\;y_{a}\big)$ ... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at aspect level, i.e., time and event type. Thus, we... | Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming an independent loss function that does not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is learning a linear... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ... | C |
| ... where $\Theta_{t_{0}:t_{1},a}=[\theta_{t_{0},a},\dots$ ... | More broadly, one can establish uniform-in-time convergence for path functionals that depend only on recent states, as the Monte Carlo error of $p_{M}(\theta_{t-\tau:t}\mid\mathcal{H}_{1:t})$ ... | The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015]. | the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ... | (i.e., the dependence on past samples decays exponentially, and is negligible after a certain lag) one can establish uniform-in-time convergence of SMC methods for functions that depend only on recent states, see [Kantas et al., 2015] and references therein. | D |
| These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | Table 2 gives an overview of the number of different measurements that are available for each patient. (For patient 9, no data is available.) The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14. | In order to have a broad overview of different patients’ patterns over the one-month period, we first show the figures illustrating measurements aggregated by days-in-week. For consistency, we only consider the data recorded from 01/03/17 to 31/03/17, where the observations are most stable. | For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal; in such cases it might not make sense to predict the glucose directly after the meal... | The insulin intakes tend to be more in the evening, when basal insulin is used by most of the patients. The only difference is for patients 10 and 12, whose intakes are earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning. | B |
| In this work, we laid out three convolutional layers with kernel sizes of $3\times 3$ and dilation rates of 4, 8, and 12 in parallel, together with a $1\times 1$ convolutional layer that could not learn new spatial dependencies but nonlinearly combined existing feature maps. Image-level context was rep... | To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architect... | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that result... | Image-to-image learning problems require the preservation of spatial features throughout the whole processing stream. As a consequence, our network does not include any fully-connected layers and reduces the number of downsampling operations inherent to classification models. We adapted the popular VGG16 architecture S... | In this work, we laid out three convolutional layers with kernel sizes of $3\times 3$ and dilation rates of 4, 8, and 12 in parallel, together with a $1\times 1$ convolutional layer that could not learn new spatial dependencies but nonlinearly combined existing feature maps. Image-level context was rep... | A |
| $\operatorname{loc}(Z_{i})=\frac{\lvert Z_{i}\rvert+1}{4}=2^{i-2}$ ... | It is easy to see that $1$-locality implies some sort of palindromic structure of a word. For example, palindromes like the English words radar, refer and rotator are obviously $1$-local, while the palindrome $\mathtt{abababa}$ ... | Notice that both Zimin words and $1$-local words have an obvious palindromic structure. However, in the Zimin words, the letters occur multiple times, but not in large blocks, while in $1$-local words there are at most $2$ blocks of each letter. With respect to palindromes, we can show the following general result ... | Regarding the locality of $Z_{i}$, note that marking $x_{2}$ leads to $2^{i-2}$ marked blocks; further, marking $x_{1}$ ... | Observation 2.1 justifies that in the following, we are only concerned with condensed words (and therefore words with at most $2\operatorname{loc}(\alpha)$ occurrences per symbol and total length of at most $\lvert X\rvert\cdot 2\operatorname{loc}(\alpha)$ ... | B |
| It consists of a three-layer CNN that extracts cardiac representations, then two parallel LSTM-based RNNs for modeling the temporal dynamics of cardiac sequences. Finally, there is a Bayesian framework capable of learning multitask relationships and a softmax classifier for classification. | Zubair et al. [75] detected the R-peak using a non-linear transformation and formed a beat segment around it. Then, they used the segments to train a three-layer 1D CNN with a variable learning rate depending on the mean square error, and achieved better results than the previous state of the art. | In [148] the authors adopt a 3D multi-scale CNN to identify pixels that belong to the RV. The network has two convolutional pathways and their inputs are centered at the same image location, but the second segment is extracted from a down-sampled version of the image. | They used multi-scale discrete WT to facilitate the extraction of MI features at specific frequency resolutions and softmax regression to build a multi-class classifier based on the learned features. Their validation experiments show that their method performed better than previous methods in terms of sensitivity and s... | Extensive comparisons with the state of the art show the effectiveness of this method in terms of mean absolute error. In [161] the authors created an unsupervised cardiac image representation learning method using multi-scale convolutional RBM and a direct bi-ventricular volume estimation using RF. | D |
| In search of an effective world model we experimented with various architectures, both new and modified versions of existing ones. This search resulted in a novel stochastic video prediction model (visualized in Figure 2) which achieved superior results compared to other previously proposed models. In this section, we... | In our experiments, we varied details of the architecture above. In most cases, we use a stack of four convolutional layers with $64$ filters followed by three dense layers (the first two have $1024$ neurons). The dense layers are concatenated with a $64$-dimensional vector with a learnable action embed... | Our basic architecture, presented as part of Figure 2, resembles the convolutional feedforward network from Oh et al. (2015). The input $X$ consists of four consecutive game frames and an action $a$. Stacked convolution layers process the visual input. The actions are one-hot-encoded and embedded in a... | A stochastic model can be used to deal with the limited horizon of past observed frames as well as sprite occlusion and flickering, which results in higher-quality predictions. Inspired by Babaeizadeh et al. (2017a), we tried a variational autoencoder (Kingma & Welling, 2014) to model the stochasticity of the environment. ... | Figure 2: Architecture of the proposed stochastic model with discrete latent. The input to the model is four stacked frames (as well as the action selected by the agent) while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-... | B |
| One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz). Truong et al. [9] used Short... | For the CNN modules with one and two layers, $x_{i}$ is converted to an image using learnable parameters instead of some static procedure. The one-layer module consists of one 1D convolutional layer (kernel size of $3$ with $8$ channels). | Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification. Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke. | One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz). Truong et al. [9] used Short... | Architectures of all $b_{d}$ remained the same, except for the number of output nodes of the last linear layer, which was set to five to correspond to the number of classes of $D$. An example of the respective outputs of some of the $m$ ... | B |
| In this equation, $T$ is the time taken to negotiate, $E_{i}$ is the energy used by the actuated joint $i$, $n$ denotes the number of actuated joints, and $dt$ signifies the time step. | In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal... | Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ... | Fig. 7 illustrates the hierarchical control design for the autonomous locomotion mode transition. The decision-making process for this transition is accomplished in MATLAB, whereas the control of each separate locomotion mode is enacted in CoppeliaSim. The connection between MATLAB and the physical robot model in Copp... | Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... | A |
| Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with a larger number of ... | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-augmented online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... | All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat... | The above observations were recently made in the context of online algorithms with machine-learned predictions. Lykouris and Vassilvitskii [24] and Purohit et al. [29] show how to use predictors to design and analyze algorithms with two properties: (i) if the predictor is good, then the online algorithm should perform ... | As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation. Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online alg... | D |
| It is important to note that, as described in Section 2.2 of [Losada & Crestani, 2016], to construct the depression group, the authors first collected users by doing specific searches on Reddit (e.g. “I was diagnosed with depression”) to obtain self-expressions of depression diagnoses, and then they manually reviewed... | In this pilot task, classifiers must decide, as early as possible, whether each user is depressed or not based on his/her writings. In order to accomplish this, during the test stage and in accordance with the pilot task definition, the subject’s writings were divided into 10 chunks; thus each chunk contained 10% of th... | As said earlier, each chunk contained 10% of the subject’s writing history, a value that for some subjects could be just a single post while for others hundreds or even thousands of them. Furthermore, the use of chunks assumes we know in advance all the subject’s posts, which is not the case in real-life scenarios, in whic... | Another important aspect of this incremental approach is that since this confidence vector is a value that “summarizes the past history”, keeping track of how this vector changes over time should allow us to derive simple and clear rules to decide when the system should make an early classification. As an example of th... | In the first one, we performed experiments in accordance with the original eRisk pilot task definition, using the described chunks. However, since this definition assumes, by using chunks, that the total number of user’s writings is known in advance (which is not true when working with a dynamic environment, such a... | A |
| Considering the computation overhead of the top-$s$ selection, we use an approximate way to implement it (Lin et al., 2018): given a vector $\mathbf{a}\in\mathbb{R}^{d}$, we first randomly choose a subset $S\subset[d]$ ... | In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mo... | We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021). We ado... | Table 1 shows the empirical results of different methods under IID data distribution. Figure 3 shows the training curves under IID data distribution. We can observe that each method achieves comparable RCC. As for test accuracy, GMC and DGC (w/ mfm) exhibit comparable performance and outperform the other three methods... | Table 2 and Figure 4 show the performance under non-IID data distribution. We can find that GMC can achieve much better test accuracy and faster convergence speed compared to other methods. Furthermore, we can find that the momentum factor masking trick will severely impair the performance of DGC under non-IID data dis... | C |
| Although ReLU creates exact zeros (unlike its predecessors sigmoid and tanh), its activation map consists of sparsely separated but still dense areas (Fig. 1) instead of sparse spikes. The same a... | Recently, in $k$-Sparse Autoencoders [21] the authors used an activation function that applies thresholding until the $k$ most active activations remain; however, this non-linearity covers a limited area of the activation map by creating sparsely disconnected dense areas (Fig. 1)... | Although ReLU creates exact zeros (unlike its predecessors sigmoid and tanh), its activation map consists of sparsely separated but still dense areas (Fig. 1) instead of sparse spikes. The same a... | The three separate clusters which are depicted in Fig. 3 and the aggregated density plot in Fig. 4 between the Identity activation function, the ReLU and the rest show the effect of a sparser activation function on the representation. | $\phi=\mathrm{ReLU}(s)$. The ReLU activation function produces sparsely disconnected but internally dense areas as shown in Fig. 1 instead of sparse spikes. | A |
| A new algorithm which can learn from previous experiences is required, and an algorithm with a faster learning speed is more desirable. Existing algorithms’ learning method is learning by prediction. It means that the UAV knows current strategies with corresponding payoff and it can randomly select another strategy and calc... | Compared with other algorithms, the novel algorithm SPBLLA has more advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely seen algorithm, LLA, is an ideal method for NE approachin... | In the literature, most works search for PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, there are limitations to this algorithm. In BLLA, each UAV can calculate and predict its utility for any $s_{i}\in S_{i}$ ... | The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on the current game state, and then another UAV ch... | A new algorithm which can learn from previous experiences is required, and an algorithm with a faster learning speed is more desirable. Existing algorithms’ learning method is learning by prediction. It means that the UAV knows current strategies with corresponding payoff and it can randomly select another strategy and calc... | C |
| $\mu$, in the viscous terms in the momentum equation (see equation 3.18), as $\mu_{num}=m_{i}n_{0}\,\nu_{num}$ ... | $\mu$, in the viscous terms in the momentum equation (see equation 3.18), as $\mu_{num}=m_{i}n_{0}\,\nu_{num}$ ... | a reduced value $\mu_{phys}=m_{i}n_{0}\,\nu_{phys}$ ... | for $\nu_{num}$ and $\nu_{phys}$ would be avoided because it breaks the conservation of system energy pertaining to the ... | $\nu_{num}=700$ m$^2$/s and $\nu_{phys}=410$ m$^2$/s: $\nu_{num}$ ... | B |
| Let $r$ be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12. Its abstract lattice $\mathcal{L}_{r}$ is represented to the right. | If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use $\leq,\land,\lor$ instead of $\leq_{R},\land_{R},\lor_{R}$ ... | The tuples $t_{1}$, $t_{4}$ represent a counter-example to $BC\rightarrow A$ for $g_{1}$ ... | For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to $A\rightarrow B$ or $B\rightarrow A$. | First, remark that both $A\rightarrow B$ and $B\rightarrow A$ are possible. Indeed, if we set $g=\langle b,a\rangle$ or $g=\langle a,1\rangle$, then $r\models_{g}A\rightarrow$ ... | C |
| $Q^{*}(s,a)=\max_{\pi}Q^{\pi}(s,a)$ ... | To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We have evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Class... | Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein... | Q-learning is among the most widely used reinforcement learning (RL) algorithms [4]. It is based on an incremental dynamic programming technique because of the step-by-step look-up table representation in which it determines the optimal policy [22]. The Q-learning algorithm employs a table to estimate the optimal action v... | The Gridworld problem (Figure 4) is a common RL benchmark. Its relatively small state space permits the Experience Replay (ER) buffer to store all possible state-action pairs. Moreover, this setup allows for the precise computation of the optimal action value function. | C |
| ADE20K: The ADE20K dataset (Zhou et al., 2017) contains 25,210 images from other existing datasets, e.g., the LabelMe (Russell et al., 2008), the SUN (Xiao et al., 2010), and the Places (Zhou et al., 2014) datasets. The images are annotated with labels belonging to 150 classes for “scenes, objects, parts of objects, and... | ADE20K: The ADE20K dataset (Zhou et al., 2017) contains 25,210 images from other existing datasets, e.g., the LabelMe (Russell et al., 2008), the SUN (Xiao et al., 2010), and the Places (Zhou et al., 2014) datasets. The images are annotated with labels belonging to 150 classes for “scenes, objects, parts of objects, and... | CamVid: The Cambridge-driving Labeled Video Database (CamVid) (Brostow et al., 2008, 2009) contains 10 minutes of video captured at 30 frames per second from a driving automobile’s perspective, along with pixel-wise semantic segmentation annotations for 701 frames and 32 object classes. | Cityscapes: The Cityscapes dataset (Cordts et al., 2016) contains annotated images of urban street scenes. The data was collected during daytime from 50 cities and exhibits variance in the season of the year and traffic conditions. Semantic, instance-wise, and dense pixel-wise annotations are provided, with ‘fine’ anno... | Collecting large-scale accurate pixel-level annotation is time-consuming and financially expensive. However, unlabeled and weakly-labeled images can be collected in large amounts in a relatively fast and cheap manner. As shown in Figure 2, varying levels of supervision are possible when training deep segmentation mode... | B |
| The red line indicates the number of edges that remain in $\bar{\mathbf{A}}$ after sparsification. It is possible to see that for small increments of $\epsilon$ the spectral distance increases linearly, while the number of edges in the graph drops exponentially. | The reason can be once again attributed to the low information content of the individual node features and to the sparsity of the graph signal (most node features are 0), which makes it difficult for the feature-based pooling methods to infer global properties of the graph by looking at local sub-structures. | The proposed spectral algorithm is not designed to handle very dense graphs; an intuitive explanation is that $\mathbf{v}^{s}_{\max}$ can be interpreted as the graph signal with the... | We notice that the coarsened graphs are pre-computed before training the GNN. Therefore, the computational time of graph coarsening is much lower compared to training the GNN for several epochs, since each MP operation in the GNN has a cost $\mathcal{O}(N^{2})$ ... | The GNN is then trained to fit its node representations to these pre-determined structures. Pre-computing graph coarsening not only makes the training much faster by avoiding performing graph reduction at every forward pass, but it also provides a strong inductive bias that prevents degenerate solutions, such as entire... | D |
| Neural random forest imitation enables an implicit transformation of random forests into neural networks. Usually, data samples are propagated through the individual decision trees and the split decisions are evaluated during inference. We propose a method for generating input-target pairs by reversing this process and... | Random forests and neural networks share some similar characteristics, such as the ability to learn arbitrary decision boundaries; however, both methods have different advantages. Random forests are based on decision trees. Various tree models have been presented; the most well-known are C4.5 (Quinlan, 1993) and CART ... | The generalization performance has been widely studied. Zhang et al. (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data. Bornschein et al. (2020) analyze the performance across different dataset sizes. Olson et al. (2018) evaluate the performance of modern... | In contrast to neural networks, random forests are very robust to overfitting due to their ensemble of multiple decision trees. Each decision tree is trained on randomly selected features and samples. Random forests have demonstrated remarkable performance in many domains (Fernández-Delgado et al., 2014). | We now compare the proposed method to state-of-the-art methods for mapping random forests into neural networks, and to classical machine learning classifiers such as random forests and support vector machines with a radial basis function kernel, which have been shown to be the best two classifiers across all UCI datasets (Fernán... | A |
| In the latter two settings with unknown transition dynamics, all the existing algorithms (Neu et al., 2012; Rosenberg and Mansour, 2019a, b) follow the gradient direction with respect to the visitation measure, and thus differ from most practical policy optimization algorithms. In comparison, OPPO is not restricted to... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... | Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... | A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient... | for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al.... | B |
| $\mathbb{E}_{q(\mathbf{W}\mid\nu)}[\mathcal{L}(\mathbf{W};\mathcal{D})]$ for the variational parameters $\nu$ with gradient-based optimization using the local reparameterization trick (Kingma et al., 2015). After training ha... | Since their approach is limited to the ReLU activation function, Peters and Welling (2018) extended their work to the $\operatorname{sign}$ activation function. This involves several non-trivial changes since the sign activation, due to its zero derivative, requires that the local reparameterization tr... | where $\tilde{f}(w)$ is an arbitrary differentiable function with a similar functional shape as $f(w)$. For instance, in the case of the sign activation function $f(w)=\operatorname{sign}(w)$ ... | While most activation binarization methods use the sign function, which can be seen as an approximation of the $\tanh$ function, Cai et al. (2017) proposed a half-wave Gaussian quantization that more closely resembles the predominant ReLU activation function. | In the forward pass, the solid red line is followed, which passes the two piecewise constant functions $Q$ and $\operatorname{sign}$ whose gradient is zero almost everywhere (red boxes). During backpropagation, the dashed green line is followed, which avoids these piecewise constant functions an... | A |
| We thank Prof. Henry Adams and Dr. Johnathan Bush for very useful feedback about a previous version of this article. We also thank Prof. Mikhail Katz and Prof. Michael Lesnick for explaining to us some aspects of their work. We thank Dr. Qingsong Wang for bringing to our attention the paper [76] which was critical for ... | In [80, Theorem 8.10], Z. Virk provided a proof of the Corollary below which takes place at the simplicial level. The proof we give below exploits the hyperconvexity properties of $L^{\infty}(X)$ and also our isomorphism theorem, Theorem... | Albeit for the notation $\mathrm{sFillRad}_{k}$, the above stability result should be well known to readers familiar with applied algebraic topology concepts; we state and prove it here, however, to provide some background for those r... | In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori no... | In this section we cover the background needed for proving our main results. We alert readers that, in this paper, the same notation can mean either a simplicial complex itself or its geometric realization, interchangeably. The precise meaning will be made clear in each context. | D |
| One initial observation is that the overall Completion Time for both groups was remarkably similar. With the exception of Tasks 1 and 5, where t-viSNE users performed faster than GEP, in general the results have not shown any statistically significant difference. To answer RQ1, we detected no statistically significant... | Figure 9: Results of the comparative study: the top charts show completion time and tool supportiveness (as judged by participants) for all the tasks of the study, and the bottom row includes the histograms of the participants’ responses in all questions/tasks. The completion times between the two groups were very simi... | On the other hand, t-viSNE obtained consistently higher scores for Tool Supportiveness, with a higher average in all the proposed tasks. The bulk of the distributions of the supportiveness scores from the two groups overlap little, mostly near outliers (the “N/A” option was chosen three times, all in the GEP group). Wh... | Overall Accuracy. We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q... | A quick visual inspection of the two tables already hints at t-viSNE having higher scores than GEP in all components, with all cells being green-colored (as opposed to GEP’s table, which contains many red-colored cells). Indeed, the smallest score for t-viSNE was 4.75, while GEP got many scores under 4 (or even unde... | B |
| After reviewing the algorithms and both taxonomies, we have identified several key learned lessons which serve as recommendations for forthcoming years for those working on nature- and bio-inspired optimization. The learned lessons gained from the taxonomies and research outlined in [1] form the foundati... | The prior related work reviewed above indicates that the community widely acknowledges (with more emphasis in recent times) the need for properly organizing the plethora of bio- and nature-inspired algorithms in a coherent taxonomy. However, the majority of them are only focused on the natural inspiration of the algor... | In summary, although in the last years many nature-inspired algorithms have been proposed by the community and their number grows steadily every year, more than half of the proposals reviewed in our work are incremental, minor versions of only three very classical algorithms (PSO, DE, and GA). We, therefore, conclude t... | The behavior is more relevant than the natural inspiration: As was exposed in Section 5, the current literature is flooded with a huge number of nature- and bio-inspired algorithms. However, as has been spotted by our proposed taxonomies, several algorithms belonging to categories with different sources of inspiration ... | Considering the previous examples, it is clear that the real behavior of the algorithm is much more informative than its natural or biological inspiration. Even more, we have observed that in our first proposed taxonomy, built upon the review of 518 proposals, the huge diversity of inspirational sources does not corres... | C |
| TABLE IV: $k_{0}$: initial sparsity; $\gamma$: learning rate; $t_{i}$: number of iterations to update GAE; struct: neurons of each layer used in AdaGAE. | Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update ... | Figure 2: Visualization of the learning process of AdaGAE on USPS. Figures 2(b)-2(i) show the embedding learned by AdaGAE at the $i$-th epoch, while the raw features and the final results are shown in Figures 2(a) and 2(j), respectively. An epoch corresponds to an update of the graph. | In our experiments, the encoder consists of two GCN layers. If the input dimension is 1024, the first layer has 256 neurons and the second layer has 64 neurons. Otherwise, the two layers have 128 neurons and 64 neurons respectively. The activation function of the first layer is set as ReLU while the other one employs t... | To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of the GAE and an update of the graph. The maximum number of epochs, $T$, is set as 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo... | D |
| Although the agents provide the optimal setup for testing filtering, with control over the packets that can be crafted and sent from both sides, as we explain in the Related Work (Section 2), this approach is limited only to networks that deploy agents on their networks. In contrast, SMap provides better coverage since it i... | Since the Open Resolver and the Spoofer Projects are the only two infrastructures providing vantage points for measuring spoofing, their importance is immense, as they facilitated many research works analysing the spoofability of networks based on the datasets collected by these infrastructures. Nevertheless, the studi... | Agents Active Measurements. Agents with active probes found 608 ASes that were found not to be enforcing ingress filtering using the agents approach of the Spoofer Project (these include duplicates with the traceroute loops measurements). Those contain some of the duplicates from traceroute measurements: together both ... | These findings show that SMap offers benefits over the existing methods, providing better coverage of the ASes in the Internet and not requiring agents or conditions for obtaining traceroute loops, hence improving visibility of networks not enforcing ingress filtering. | SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, which we call the Spoofing Mapper (SMap). We apply SMap for scanning ingress filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that ... | C |
| All neural networks in this section were trained using stochastic gradient descent with momentum [24] on the loss function $\mathcal{L}$. The learning rate was set to $10^{-3}$ and the momentum factor to $0.9$. Networks were trained fo... | First, the effect of sensor drift on classification accuracy is demonstrated using classifiers trained on a single batch. For each batch $1$ through $10$, a feedforward model was trained on that batch. Training of a new model was repeated 30 times on each batch. The accuracy of all classifiers was evaluated on ev... | In order to improve performance, Vergara et al. [7] employed an ensemble technique on the SVM classifiers (Fig. 2B). The same technique was reimplemented and tested on the modified dataset in this paper. The ensemble meant to generalize to batch $T$ was constructed by training a collection of single-batch cla... | The skill network approach incorporates all available data into a single training set, disregarding the sequential structure between batches of the dataset. For each batch $T$, a network was trained using batches $1$ through $T-1$ as the training set and evaluated on batch $T$. | Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data a... | C |
| For the second change, we need to take another look at how we place the separators $t_{i}$. We previously placed these separators in every second nonempty drum $\sigma_{i}:=[i\delta,(i+1)\delta]\times\mathrm{Ball}^{d-1}(\delta/2)$ ... | It would be interesting to see whether a direct proof can be given for this fundamental result. We note that the proof of Theorem 2.1 can easily be adapted to point sets of which the $x$-coordinates of the points need not be integer, as long as the difference between $x$-coordinates of any two consecu... | Finally, we will show that the requirements for Lemma 5.7 hold, where we take $\mathcal{A}$ to be the algorithm described above. The only nontrivial requirement is that $T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)$ ... | We generalize the case of integer $x$-coordinates to the case where the drum $[x,x+1]\times\mathrm{Ball}^{d-1}(\delta/2)$ contains $O(1)$ ... | However, in order for our algorithm to meet the requirements of Lemma 5.7, we would like to avoid having a point enter a drum after the $x$-coordinates are multiplied by some factor $\lambda>1$. Furthermore, since the proof of Lemma 4.3 requires every drum to be at least $\delta$ wide, ... | D |
Then mapping all elements of $S_i$ to $e$ for $i<n$ gives the homomorphisms
$\phi_i$ in the hypothesis of 12. | While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question of whether these semigroup classes are closed under free product. It is possible that there ... | from one to the other, then their free product $S \star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups (footnote: Note that the c... |
There is quite an interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions presenting free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. While t... | The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing ... | D |
It is also interesting to note that the drop in training accuracy is lower with this regularization scheme than with the state-of-the-art methods. Of course, if any model were actually visually grounded, then we would expect it to improve performance on both train and test sets. We do not observe such behavior in ... | While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented... |
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende... | Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization Grand and Belinkov (2019); Ramakrishnan et al. (2018) or to re-scale the loss based on the difficulty o... |
As expected of any real-world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea... | A |
Readability. Readability of a text can be defined as the ease of understanding or comprehension due to the style of writing (Klare et al., 1963). Along with length, readability plays a role in internet users’ decisions to either read or ignore a privacy policy (Ermakova et al., 2015). While prior studies on readability... |
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)... |
Topic Modelling. Topic modelling is an unsupervised machine learning method that extracts the most probable distribution of words into topics through an iterative process (Wallach, 2006). We used topic modelling to explore the distribution of themes of text in our corpus. Topic modelling using a large corpus such as P... | Topic modelling showed the distribution of themes of privacy practices in policies, corresponding to the expectations of legal experts in some ways, but differing in others. The positive relationship between PageRank of a domain and the number of topics covered in its policy indicates that more popular domains have a s... |
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ... | B |
T3: Manage the performance metrics for enhancing trust in the results. Many performance or validation metrics are used in the field of ML. For each data set, there might be a different set of metrics to measure the best-performing stacking. Controlling the process by alternating these metrics and observing their influe... |
G4: Facilitate human interaction and offer guidance. During development of any VA tool, it is key to decide on concrete visual representations and interaction technologies between multiple coordinated views. It is not uncommon to find gaps between visualization design guidelines and their applicability in implemented ... | and (v) we track the history of the previously stored stacking ensembles in StackGenVis (b) and compare their performances against the active stacking ensemble—the one not yet stored in the history—in StackGenVis ... |
T4: Compare the results of two stages and receive feedback to guide interaction. To assist knowledge generation, a comparison of the currently active stack against previously stored versions is important. In general, this includes monitoring the historical process of the stacking ensemble, facilitating intera... |
Figure 1: Knowledge generation model for ensemble learning with VA derived from the model by Sacha et al. [44]. On the left, it illustrates how a VA system can enable the exploration of the data and the models with the use of visualization. On the right, a number of design goals assist the human in the exploration, ve... | C |
We thus have 3 cases, depending on the value of the tuple
$(p(v,[010]), p(v,[323]), p(v,[313]), p(v,[003]))$ ... | $\{\overline{0}, \overline{1}, \overline{2}, \overline{3}, [013], [010], [323], [313], [112], [003], [113]\}$. | $p(v,[013]) = p(v,[313]) = p(v,[113]) = 1$.
Similarly, when $f = [112]$, | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | C |
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation.
Although the performance improves in the early training stage, benefiting from the pre-trained general language model, if the languag... | In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | D |
where $\text{SNR}_{\text{th}}$ is a certain SNR threshold. If an arbitrary link between the t-UAV and the r-UAV is in outage, there is an outage in the network. Hence, the outage probability is determined by the minimum SNR among $K$ t-UAVs. Fig... | Given the received signal in (4), the signal-to-interference-plus-noise ratio (SINR) (footnote: When the transmit signals of t-UAVs have similar AOAs at the r-UAV, the combiners may not separate them correctly and inter-UAV interference exists in the UAV networks. Due to the constraint on the minimum distance between UAVs,...) | and the CCA scheme clearly achieves higher SE than the UPA scheme with different t-UAV numbers $K$. The main reason is that the UPA with DREs can only receive/transmit the signal within a limited angular range at a certain time slot while the CCA does not have such a limitation. It is also shown that the gap be... | As shown in Fig. 11, the SE of the CCA codebook scheme and the traditional codebook scheme is compared. The proposed DRE-covered CCA codebook is used in the CCA codebook scheme. In the traditional codebook scheme, the codebook without subarray partition is used. The CCA on the r-UAV is equally partitioned into $K$ ... |
In this paper, we mainly focus on analog beam tracking without considering the inter-UAV interference. The sum SE calculated by (11) and (28) with different numbers of t-UAVs and the given transmit power is shown in Fig. 10, respectively, to verify the influence of the inter-UAV interference. It is shown that the ... | D |
There are other logics, incomparable
in expressiveness with $\mathsf{FO}^{2}_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The | The paper [4] shows decidability for a logic with incomparable expressiveness: the quantification allows a more powerful
quantitative comparison, but must be guarded – restricting the counts only of sets of elements that are adjacent to a given element. | There are other logics, incomparable
in expressiveness with $\mathsf{FO}^{2}_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The | In addition, to make the main line of argument clearer, we consider only the finite graph case in the body of the paper,
which already implies decidability of the finite satisfiability of $\mathsf{FO}^{2}_{\textup{Pres}}$ ... | Related one-variable fragments in which we have only a
unary relational vocabulary and the main quantification is $\exists^{S}x\,\phi(x)$ are known to be decidable (see, e.g. [2]), and their decidability ... | A |
$\leq 1/2 \cdot (1-\gamma)^{-1} \cdot \bar{D}^{2} \cdot T^{-1} + C_{*} \cdot (1-\gamma)^{-1} \cdot \alpha^{-1}$ ... |
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et... |
We first introduce the assumptions for our analysis. In §4.1, we establish the global optimality and convergence of the PDE solution $\rho_t$ in (3.4). In §4.2, we further invoke Proposition 3.1 to establish the global optimality and convergence of ... |
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe... | In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
| D |
We implemented our approach based on the Neutron implementation of the Transformer Xu and Liu (2019). To show the effects of depth-wise LSTMs on the 6-layer Transformer, we first conducted experiments on the WMT 14 English to German and English to French news translation tasks to compare with the Transformer baseline ... |
To test the effectiveness of depth-wise LSTMs in the multilingual setting, we conducted experiments on the challenging massively many-to-many translation task on the OPUS-100 corpus Tiedemann (2012); Aharoni et al. (2019); Zhang et al. (2020). We tested the performance of 6-layer models following the experiment settin... | For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et... |
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ... | We applied joint Byte-Pair Encoding Sennrich et al. (2016) with 32k merging operations on all data sets to address the unknown word issue. We only kept sentences with a maximum of 256 subword tokens for training. For fair comparison, we did not tune any hyperparameters but followed Vaswani e... | D |
$\psi_{\supseteq P_{n}} \triangleq \exists x_{0},\dots,x_{n-1}.\ \bigwedge_{i<j}\neg(x_{i}=x_{j}) \wedge \bigwedge_{0\leq i<n-1}E(x_{i},x_{i+1})$ | $X \triangleq \left\{\vec{x} \in \prod_{i\in I}X_{i} \mid \forall i\leq j\in I,\ x_{j}=f_{i,j}(x_{i})\right\}$ | $\neg(x_{i}=x_{j}) \wedge \bigwedge_{0\leq i<n-1}E(x_{i},x_{i+1})$ | $\forall x\in X_{1}, y\in X_{2},\ f(x,y)\models\varphi \Leftrightarrow \beta\left((x\models\psi_{i}^{1})_{1\leq i\leq n};(y\models\psi_{i}^{2})_{1\leq i\leq n}\right)=1$. | $\psi_{\supseteq C_{n}} \triangleq \exists x_{0},\dots,x_{n-1}.\ \bigwedge_{i<j}\neg(x_{i}=x_{j}) \wedge \dots$ | B |
Qualitative Comparison: To qualitatively show the performance of different learning representations, we visualize the 3D distortion distribution maps (3D DDM) derived from the ground truth and these two schemes in Fig. 8, in which each pixel value of the distortion distribution map represents the distortion level. Sinc... | Figure 13: Qualitative evaluations of the rectified distorted images on real-world scenes. For each evaluation, we show the distorted image and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left ... | Figure 12: Qualitative evaluations of the rectified distorted images on people (left) and challenging (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified re... | We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen... |
Figure 11: Qualitative evaluations of the rectified distorted images on indoor (left) and outdoor (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified resul... | D |
Furthermore, researchers in [19] argued that the extrapolation technique is suitable for large-batch training and proposed EXTRAP-SGD.
However, experimental implementations of these methods still require additional training tricks, such as warm-up, which may make the results inconsistent with the theory. | In this paper, we first review the convergence property of MSGD, one of the most widely used variants of SGD, and analyze the failure of MSGD in large-batch training from an optimization perspective. Then, we propose a novel method, called
stochastic normalized gradient descent with momentum (SNGM), for large-batch tra... | We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD.
The experiments are implemented based on the DeepCTR framework (https://github.com/shenweichen/DeepCTR-Torch). | If we avoid these tricks, these methods may suffer from severe performance degradation.
For LARS and its variants, the proposal of the layer-wise update strategy is primarily based on empirical observations. Its reasonableness and necessity remain doubtful from an optimization perspective. | SGD and its variants are iterative methods. In the $t$-th iteration, these methods randomly
choose a subset (also called a mini-batch) $\mathcal{I}_{t} \subset \{1,2,\ldots,n\}$ and compute the | C |
5-approximation for homogeneous 2S-MuSup-Poly, with $|\mathcal{S}| \leq 2^{m}$ and runtime $\operatorname{poly}(n,m,\Lambda)$.
|
We follow up with 3-approximations for the homogeneous robust outlier MatSup and MuSup problems, which are slight variations on algorithms of [6] (specifically, our approach in Section 4.1 is a variation on their solve-or-cut methods). In Section 5, we describe a 9-approximation algorithm for an inhomogeneous MatSu... | If we have a $\rho$-approximation algorithm for AlgRW for given $\mathcal{C},\mathcal{F},\mathcal{M},R$, then we can get an efficiently-generalizable $(\rho+2)$-approximation algorithm for the corresponding problem $\mathcal{P}$ ... | The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any $\rho$-approximation algorithm for the robust outlier problem into a $(\rho+2)$-approximation algorithm for the corresponding two-stage sto... | We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a... | C |
That is, the mean square error at the next time can be controlled by that at the
previous time and the consensus error. However, this cannot be obtained for the case with linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded and the inequality (28) in [15] does n... | I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition.
The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi... | (Lemma 3.1).
To this end, we first estimate the upper bound of the mean square increasing rate of the local optimizers’ states (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (... | As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error, which leads the nonnegative supermartingale converg... |
III. The co-existence of random graphs, subgradient measurement noises, and additive and multiplicative communication noises is considered. Compared with the case with only a single random factor, the coupling terms of different random factors inevitably affect the mean square difference between optimizers’ states and an... | C |
$\sum_{i=1}^{m}p_{ij} > k \cdot \max(p_{j}\dots)$ ... $\lceil 1/\delta\rceil$ ... | For instance, since the random output tables in Figure 3 comply with $\frac{1}{2}$-probability, for any QI value whose corresponding column has at least one probability greater than 0, there are at least 2 records that can carry the QI value.
| Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ... |
Additionally, differing from traditional principles that directly confine the values in microdata, we propose a $\delta$-probability principle to control random output tables so as to limit the probability of any QI value being used to re-identify a target person. For instance, the random output tables in Fig... | This experiment measures the effectiveness of privacy preservation for MuCo. We assume that an adversary has all the QI values of the target person, and each QI value is combined to match the target person with the probability of $P_{match}$ ... | A |
We implement PointRend using MMDetection Chen et al. (2019b) and adopt the modifications and tricks mentioned in Section 3.3. Both X101-64x4d and Res2Net101 Gao et al. (2019) are used as our backbones, pretrained on ImageNet only. SGD with momentum 0.9 and weight decay 1e-4 is adopted. The initial learning rate is set... | As shown in Table 3, all PointRend models achieve promising performance. Even without ensemble, our PointRend baseline, which yields 77.38 mAP, has already achieved 1st place on the test leaderboard. Note that several attempts, like BFP Pang et al. (2019) and EnrichFeat, give no improvements against the PointRend baseline,... | Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62.... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | Table 3: PointRend’s performance on the testing set (trackB). “EnrichFeat” means enhancing the feature representation of the coarse mask head and point head by increasing the number of fully-connected layers or their hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvement; we guess... | A |
($0\log 0 := 0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base 2. Note that if $f$ has $L_{2}$ norm 1 then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$ ... |
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma s... |
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | C |
From Figure 1, we see that LSVI-UCB-Restart with knowledge of the global variation drastically outperforms all other methods designed for stationary environments, in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the $Q$ function with knowledge of the total variatio... |
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. They are much lower than those of MASTER, OPT-WLSVI, LSVI-UCB, and Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th... | The last relevant line of work is on dynamic regret analysis of nonstationary MDPs, mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and... |
Compared to OPT-WLSVI and MASTER, our proposed algorithms achieve comparable empirical performance. More specifically, MASTER outperforms our proposed algorithm, which agrees with its dynamic regret upper bound. However, the variance of MASTER is larger due to the random scheduling of multiple base algorithms. Our algo... | Using the master algorithm to select the window size is reminiscent of model selection approaches to online RL (Agarwal et al., 2017; Pacchiano et al., 2020; Lee et al., 2021; Abbasi-Yadkori et al., 2020). Typically model selection approaches achieve worse rates compared to the best base algorithm. However, in our sett... | C |
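Each row above pairs a truncated passage with four candidate continuations and a gold label naming the correct one. Below is a minimal sketch of how such a four-way continuation-choice dataset could be consumed with the Hugging Face `datasets` library. It assumes the split exposes columns named context, A, B, C, D, and label; the repository id `user/continuation-choice` is a placeholder (this card does not name one), and the token-overlap baseline is purely illustrative, not a claim about the dataset's intended evaluation protocol.

```python
from datasets import load_dataset

# Hypothetical repository id -- substitute the actual path of this dataset.
ds = load_dataset("user/continuation-choice", split="train")

example = ds[0]
context = example["context"]                # truncated passage from a paper
options = {k: example[k] for k in "ABCD"}   # the four candidate continuations
gold = example["label"]                     # one of "A", "B", "C", "D"

def overlap_score(passage: str, option: str) -> int:
    """Count how many whitespace tokens of the option also occur in the passage."""
    passage_tokens = set(passage.split())
    return sum(tok in passage_tokens for tok in option.split())

# Illustrative baseline: pick the continuation with the greatest lexical
# overlap with the context, then check it against the gold label.
prediction = max(options, key=lambda k: overlap_score(context, options[k]))
print(prediction, prediction == gold)
```

A stronger baseline would score each continuation with a language model rather than lexical overlap, but the loop structure over the A–D columns stays the same.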