| context | A | B | C | D | label |
|---|---|---|---|---|---|
$\frac{d}{dx}R_{n}^{m}(x)=$ | $(-1)^{a}\binom{b-1}{-a}\Big[\frac{d^{3}}{dx^{3}}x^{m}F(a,b;c;z)+3\frac{d^{2}}{dx^{2}}x^{m}\frac{d}{dx}F(a,b;c;z)\,\ldots$ | $\ldots\,2\frac{d}{dx}x^{m}\frac{d}{dx}F(a,b;c;z)\,\ldots$ | $+3\frac{d}{dx}x^{m}\frac{d^{2}}{dx^{2}}F(a,b;c;z)+x^{m}\frac{d^{3}}{dx^{3}}F(a,b;c;z)\Big].$ | $(-1)^{a}\binom{b-1}{-a}\Big[\frac{d}{dx}x^{m}F(a,b;c;z)+x^{m}\frac{d}{dx}F(a,b;c;z)\Big];$ | D |
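The options in this row are instances of the general Leibniz rule for derivatives of a product; for reference, a short statement of the first- and third-order cases that generate the 1, 3, 3, 1 coefficient pattern seen above:

```latex
% General Leibniz rule: (fg)^{(n)} = \sum_{k=0}^{n} \binom{n}{k} f^{(k)} g^{(n-k)}.
\frac{d}{dx}\bigl(x^{m}F\bigr)
  = \frac{d x^{m}}{dx}\,F + x^{m}\,\frac{dF}{dx},
\qquad
\frac{d^{3}}{dx^{3}}\bigl(x^{m}F\bigr)
  = \frac{d^{3}x^{m}}{dx^{3}}\,F
  + 3\,\frac{d^{2}x^{m}}{dx^{2}}\,\frac{dF}{dx}
  + 3\,\frac{d x^{m}}{dx}\,\frac{d^{2}F}{dx^{2}}
  + x^{m}\,\frac{d^{3}F}{dx^{3}}.
```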
The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in... |
Therefore, we decided to base the procedures we present on a set of generators very close to the LGO standard generators. Note that the choice of the generating set has no impact on the results, as it is always possible to determine an MSLP which computes the LGO standard generators given an arbitrary generating set a... |
There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and application... | The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in... |
The first step of the algorithm is the one-off computation of $T_{2}$ from the LGO standard generators of $\mathrm{SL}(d,q)$. The length and memory requirement of an MSLP for this step is as follows. | A |
$\tilde{\lambda}_{h}^{f}=-P(T\lambda^{0}+T\tilde{\lambda}^{0}_{h}+\tilde{T}g).$ |
It is essential for the performance of the method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it would be reasonable to solve them locally using patches of elements. We note that the ide... | To show the existence and uniqueness of solutions for (21), we proceed by parts. The existence of a solution for the first equation follows from Lemma LABEL:l:lrmsystem. Solving the second equation is equivalent to (22), and such a system is well-posed due to the coercivity of $(\cdot,T\cdot)_{\partial\mathcal{T}_{H}}$... | We start by recasting the continuous problem in a weak formulation that depends on a polyhedral regular mesh $\mathcal{T}_{H}$, and let $\mathcal{F}_{H}$ be the set of faces...
Solving (22) efficiently is crucial for the good performance of the method, since it is the only large-dimensional system in (21), in the sense that its size grows with order $h^{-d}$. | D |
We think Alg-A is better in almost every aspect. This is because it is essentially simpler.
Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity $O(n)$ than the others: | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]),
Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases. |
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$’s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code length for this is 1:7 between Alg-A and Alg-CM. |
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), and so does Alg-K.
(By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | D |
Single Tweet Classification Results. The experimental results are shown in Table 2. The best performance is achieved by the CNN+LSTM model with an accuracy of 81.19%. The non-neural network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy, and the other two non-neural models are eve... | CrowdWisdom: Similar to [18], the core idea is to leverage the public’s common sense for rumor detection: if there are more people denying or doubting the truth of an event, this event is more likely to be a rumor. For this purpose, [18] use an extensive list of bipolar sentiments with a set of combinational rules. In... | For analyzing the employed features, we rank them by importance using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of new...
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade this off by debunking at the single-tweet le... | As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit afte... | B |
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$... |
where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the K-class SVM: | where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6)
of the SVM problem (eq. 4) and the associated | D |
At 18:22 CEST, the first tweet was posted. There might be a certain delay, as we retrieve only tweets in English and the very first tweets were probably in German. The tweet is ”Sadly, i think there’s something terrible happening in #Munich #Munchen. Another Active Shooter in a mall. #SMH”. |
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade this off by debunking at the single-tweet le... | For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even...
In this work, we present a deep analysis of the feature variants over 48 hours for the rumor detection task. The results show that the low-level hidden representation of tweets feature is at least the second best feature over time. We also derive explanations for the low performance of supposed-to-be-strong high-level... | the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, which could capture more hidden meaningful signals than only enquiries to debunk rumor... | A |
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and types (Breaking and Anticipate) set of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ... | Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | A |
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains,
from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | the fundamental operation in the proposed SMC-based MAB Algorithm 1
is to sequentially update the random measure $p_{M}(\theta_{t,a}\mid\mathcal{H}_{1:t})$... | SMC weights are updated based on the likelihood of the observed rewards:
$w_{t,a}^{(m)}\propto p_{a}(y_{t}\mid x_{t},\theta_{t,a}^{(m)})$... | The techniques used in these success stories are grounded on statistical advances on sequential decision processes and multi-armed bandits.
The MAB crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making. | we propagate forward the sequential random measure $p_{M}(\theta_{t,a}\mid\mathcal{H}_{1:t})$... | C |
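A minimal sketch of the SMC weight-update step described in this row, assuming a Gaussian per-arm reward likelihood for illustration (the actual likelihood $p_a$ is whatever the bandit's reward model specifies):

```python
import numpy as np

def smc_step(theta_particles, y_t, x_t, noise_std=1.0):
    """One sequential Monte Carlo step for a single arm: reweight the
    parameter particles by the likelihood of the observed reward, then
    resample to propagate the random measure forward.

    theta_particles: (M, d) samples approximating p_M(theta_{t,a} | H_{1:t}).
    Assumes y ~ N(x^T theta, noise_std^2) as an illustrative likelihood.
    """
    means = theta_particles @ x_t                     # per-particle reward prediction
    log_w = -0.5 * ((y_t - means) / noise_std) ** 2   # Gaussian log-likelihood
    w = np.exp(log_w - log_w.max())                   # stabilize before normalizing
    w /= w.sum()                                      # w_{t,a}^{(m)} ∝ p_a(y_t | x_t, theta^{(m)})
    idx = np.random.choice(len(w), size=len(w), p=w)  # multinomial resampling
    return theta_particles[idx]
```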
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal... | For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal... | Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients.
For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients.
The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | C |
Table 4: The results after evaluating our model with respect to its computational efficiency. We tested five versions trained on different eye tracking datasets, each receiving input images of their preferred sizes in pixels (px). After running each network on 10,000 test set instances from the ImageNet database for 10... |
Table 5: Details regarding the hardware and software specifications used throughout our evaluation of computational efficiency. The system ran under the Debian 9 operating system and we minimized usage of the computer during the experiments to avoid interference with measurements of inference speed. |
We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In case a reimplementation was not possible, we faithfully estimated a lowe... | To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met... | The proposed encoder-decoder model was evaluated on five publicly available eye tracking datasets that yielded qualitative and quantitative results. First, we provide a brief description of the images and empirical measurements utilized in this study. Second, the different metrics commonly used to assess the predictive... | A |
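Counting trainable parameters from a replicated architecture, as described in this row, reduces to summing tensor sizes; a minimal PyTorch sketch (the stand-in model is hypothetical):

```python
import torch.nn as nn

def count_trainable_params(model: nn.Module) -> int:
    """Number of trainable parameters, as reported in model-complexity tables."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Stand-in network; the real counts come from each paper's official
# repository or a replication of the described architecture.
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 1, 1))
print(count_trainable_params(model))
```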
Finally, we have to show that in this pd-marking scheme, the maximum number of active positions is bounded by $2k+1$. This is obviously true at step $p_{1}$. Now let $s$ with $1\leq s\leq|\alpha|-1$... | $j$ joins two blocks of size $1$: the number of active positions increases by $1$.
This is due to the fact that by setting $j$ to active, we do not create any internal active position... | We first prove $\operatorname{pw}(G_{\alpha})\leq 2\operatorname{loc}(\alpha)$. Intuitively speaking, we will translate the stages of a marking sequence $\sigma$ for $\alpha$... | This completes the definition of the marking scheme. Figure 7 contains an example of how step $p_{s+1}$ is obtained from step $p_{s}$. In this example, we first set extending po...
In the first phase of the marking scheme, i.e., the phase where we only set extending positions to active, the following different situations can arise whenever we set some position $j$ to active (see Figure 7 for an illustration)... | D |
Xia et al.[88] compared two CNNs, with three and two layers, that were fed with spectrograms of signals from AFDB using Short-Term Fourier Transform and stationary WT respectively.
Their experiments concluded that the use of stationary WT achieves a slightly better accuracy for this task. | Then, they segmented the RR intervals to 30 samples each and fed them to a network with two layers followed by a pooling layer and an LSTM layer with 100 units.
The method was validated on MITDB and NSRDB, achieving an accuracy that indicates its generalizability. | They trained a five-layer CNN on a sequence of short windows with movement artifacts, and its output was combined with features calculated based on beat-to-beat variability and the signal quality index.
An accuracy of 91.8% in AF detection was achieved by the method and in combination with its computational efficiency i... | Gotlibovych et al. [117] trained a one-layer CNN followed by an LSTM using 180h of PPG wearable data to detect AF.
Use of the LSTM layer allows the network to learn variable-length correlations, in contrast with the fixed length of the convolutional layer. | Experiments by the authors showed that the three-layer 1D CNN produced better and more stable results.
In [101] the authors trained a network with one convolutional layer with dropout followed by two RNNs to identify stress using short-term ECG data. | A |
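A minimal PyTorch sketch of the one-layer-CNN-plus-LSTM pattern described above for PPG-based AF detection; the layer sizes and window length are illustrative assumptions, not the configuration from [117]:

```python
import torch
import torch.nn as nn

class CnnLstmAF(nn.Module):
    """One 1D conv layer to extract local features, then an LSTM to model
    variable-length temporal correlations, then a binary AF/non-AF head."""
    def __init__(self, channels=32, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(1, channels, kernel_size=16, stride=2)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):              # x: (batch, 1, time)
        h = torch.relu(self.conv(x))   # (batch, channels, time')
        h = h.transpose(1, 2)          # (batch, time', channels) for the LSTM
        out, _ = self.lstm(h)
        return self.head(out[:, -1])   # classify from the last time step

logits = CnnLstmAF()(torch.randn(4, 1, 1024))  # 4 PPG windows of 1024 samples
```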
Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using... | Oh et al. (2015) and Chiappa et al. (2017) show that learning predictive models of Atari 2600 environments is possible using appropriately chosen deep learning architectures. Impressively, in some cases the predictions maintain low L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error over timespans... | Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster... | Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using... | have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control.
Our video models of Atari en... | A |
Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification.
Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke. | Figure 1: High-level overview of a feed-forward pass of the combined methods.
$x_{i}$ is the input, $m$ is the Signal2Image module, $b_{d}$ is the 1D or 2D architecture ‘base ... | The names of the classes are depicted at the right along with the predictions for this example signal.
The image between $m$ and $b_{d}$ depicts the output of the one-layer CNN Signal2Image module, while the ‘signal as image’ and spectrogram h... | For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems.
An important property of an S2I is whether it consists of trainable para... | The spectrogram S2I results are contrary to the expectation that the interpretable time-frequency representation would help in finding good features for classification.
We hypothesize that the spectrogram S2I was hindered by its lack of trainable parameters. | C |
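A minimal PyTorch sketch of the S2I idea: a module $m$ that maps a raw 1D signal to an image consumed by a 2D ‘base model’ $b_d$. A trainable one-layer-CNN S2I is shown; the sizes and the stand-in base model are illustrative assumptions:

```python
import torch
import torch.nn as nn

class OneLayerCnnS2I(nn.Module):
    """Trainable Signal2Image module: lifts a raw signal (batch, 1, T)
    into a single-channel image (batch, 1, height, T) for a 2D base model."""
    def __init__(self, height=64):
        super().__init__()
        self.conv = nn.Conv1d(1, height, kernel_size=3, padding=1)

    def forward(self, x):                              # (batch, 1, T)
        return torch.relu(self.conv(x)).unsqueeze(1)   # (batch, 1, height, T)

s2i = OneLayerCnnS2I()
base_2d = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                        nn.Flatten(), nn.Linear(8, 5))  # stand-in base model
signal = torch.randn(2, 1, 178)   # e.g. one-second EEG segments
logits = base_2d(s2i(signal))
```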
This paper presents a novel methodology for achieving autonomous locomotion mode transitions in quadruped wheel/track-legged hybrid robots, taking into account both internal states of the robot and external environmental conditions. Our emphasis is on the “articulated wheel/track robot” [15], where the wheels or tracks... |
The implementation of the energy criterion strategy has proven effective in facilitating autonomous locomotion mode transitions for the Cricket robot when negotiating steps of varying heights. Compared to step negotiation purely in rolling locomotion mode, the proposed strategy demonstrated significant enhancements in... |
The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we ... | Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... | In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal... | B |
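A minimal sketch of the energy-criterion transition rule described in this row, with hypothetical names; the thresholds come from the prior energy evaluations of the walking-mode climbing gaits at each step height:

```python
def should_switch_to_walking(energy_rolling_so_far: float,
                             step_height: float,
                             walking_energy_thresholds: dict) -> bool:
    """Trigger a rolling -> walking transition when the energy already spent
    negotiating the step in rolling mode exceeds the pre-evaluated energy of
    completing it in walking mode (threshold keyed by step height)."""
    threshold = walking_energy_thresholds[step_height]
    return energy_rolling_so_far > threshold
```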
It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, like all information, are prone to transmission errors. In addition, the known advice models often allow
information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution.... |
All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat... | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... | As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation.
Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online alg... | The above observations were recently made in the context of online algorithms with machine-learned predictions.
Lykouris and Vassilvitskii [24] and Purohit et al. [29] show how to use predictors to design and analyze algorithms with two properties: (i) if the predictor is good, then the online algorithm should perform ... | D |
Since $\oplus_{1}$ is the addition, instead of processing the whole document again, we could update the already computed vector, $(0.15,3.65,2.0,0.15)$, by adding it to the new sentence confidence v... | However, this is a vital aspect, especially when the task involves sensitive or risky decisions in which, usually, people are involved. Figure 9 shows an example of a piece of what could be a visual description of the classification process for subject 9579 (note that this is the same subject who was prev... | Another important aspect of this incremental approach is that since this confidence vector is a value that “summarizes the past history”, keeping track of how this vector changes over time should allow us to derive simple and clear rules to decide when the system should make an early classification. As an example of th... | In this pilot task, classifiers must decide, as early as possible, whether each user is depressed or not based on his/her writings.
In order to accomplish this, during the test stage and in accordance with the pilot task definition, the subject’s writings were divided into 10 chunks —thus each chunk contained 10% of th... | We could make use of this “dynamic information” to apply certain policies to decide when to classify subjects as depressed.
For example, one such policy would be “classify a subject as positive when the accumulated positive value becomes greater than the negative one” —in which case, note that our subject would be... | B |
Due to the larger compression error introduced by RBGS compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge. Xu and Huang (2022) propose DEF-A to solve the convergence problem by using detached error fee... |
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mo... | We improve DEF-A by changing its local momentum to global momentum, getting a new method called GMC+. The detail of GMC+ is shown in Algorithm 2.
We also adopt parameter server architecture for illustration. GMC+ can also be easily implemented on all-reduce frameworks. | We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is ... | Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework.
In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-red... | B |
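A minimal sketch of one worker's sparse-communication step with momentum and error feedback, in the spirit of GMC; the top-$s$ selection and the local stand-in for the global momentum are assumptions for illustration, not the paper's exact update:

```python
import numpy as np

def top_s_sparsify(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    keep = np.argsort(np.abs(v))[-s:]
    out = np.zeros_like(v)
    out[keep] = v[keep]
    return out

def worker_step(grad, momentum, error, lr=0.01, beta=0.9, s=10):
    """One sparse-communication step with momentum and error feedback.
    In GMC the momentum is *global* (formed from the aggregated updates
    broadcast by the server); a local buffer stands in for it here."""
    momentum = beta * momentum + grad   # momentum direction
    update = lr * momentum + error      # add back the residual (error feedback)
    sent = top_s_sparsify(update, s)    # communicate only s components
    error = update - sent               # accumulate what was not sent
    return sent, momentum, error
```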
For the same task as the previous one but for 2D, we use MNIST, which consists of a training dataset of 60000 greyscale images with handwritten digits and a test dataset of 10000 images, each one having size 28×28. | The first two fully connected layers are followed by a ReLU while the last one produces the predictions.
The CNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as the loss function. | During supervised learning the weights of the kernels are frozen and a one-layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs.
The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as... | Using backpropagation [2] the gradient of each weight w.r.t. the error of the output is efficiently calculated and passed to an optimization function such as Stochastic Gradient Descent or Adam [3], which updates the weights, making the output of the network converge to the desired output.
DNNs were successful in utilizi... | From the point of view of Sparse Dictionary Learning, SANs kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input the kernels of SANs are ECG beats) and the sparse activation map as the representation.
The fact that SANs are wide... | B |
We organize this paper as follows. In section II, we introduce the related works. In section III, we first introduce the UAV’s power control in the multi-channel communication and coverage problems, then form a system model in highly dynamic scenarios. Moreover, in section IV, we formulate our work as an aggregative ga... | To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR and coverage size are key points to decide the performance of a UAV network [6]. Respectively, power control determines the signal to energy consumption and noise ratio (SNR) ... |
With the rapid commercialization of UAVs, a lot of research has emerged in this field [16]. To efficiently deploy UAVs, studies have been made to find out UAV distribution on network graph [9] and a graphical model has been proposed for channels reuse [17]. The resource allocation of channel and time is also a hot are... |
In post-disaster scenarios, a great many UAVs are required to support users [4]. Therefore, we introduce aggregative game theory into such scenarios and permit UAVs to learn in the constrained strategy sets. Because the aggregative game can integrate the impact of all other UAVs on one UAV, it reduces the complexity o... | When UAVs need communications, the signal-to-noise ratio (SNR) mainly determines the quality of service. UAVs’ power and inherent noise are interferences for each other. Since there are hundreds of UAVs in the system, each UAV is unable to sense all the other UAVs’ power explicitly, but can only sense and measure aggreg... | B |
…, $\overline{\mathbf{P}_{2}}=\left(\overline{v}_{z}\,/\,\overline{r}\right)\widehat{\mathbf{z}}$… | …$\left(\overline{\widehat{\nabla}}\,\overline{f}\right)\,/\,\left(\mu_{0}\,\overline{r}\,\overline{\rho}\right)$… | $\overline{\mathbf{P}_{3}}=\left(\overline{v}_{z}\,/\,\overline{r}\right)\widehat{\mathbf{r}}+\left(\overline{v}_{r}\,/\,\ldots\right)$… | …$\Bigl[-2\,\overline{\widehat{Dz}}*\bigl(\widehat{\mu}\,\widehat{r}\,(\overline{\widehat{Dz}}\ldots*\overline{v}_{r})\bigr)\Bigr]\,/\,\overline{r}$ | …$\bigl(\overline{\widehat{\nabla}}\,\overline{\omega}\bigr)^{2}=\overline{\widehat{W}}*\Bigl[\widehat{\mu}\Bigl\{2\bigl(\overline{\widehat{Dr}}*\overline{v}_{r}\bigr)^{2}\ldots\Bigr\}\Bigr]$… | B |
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$... | $f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\,v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$ | Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality)
by $h_{A}$... | Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it.
Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly | When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$... | C |
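A minimal Python sketch of the comparability function defined above, representing null as None; the truth values a and b from the underlying lattice are left as parameters:

```python
def comparability(u, v, a=0.5, b=0.5):
    """Comparability function f_A = f_B from the case definition above:
    1 for equal non-null values, a for distinct non-null values,
    b for two nulls, 0 otherwise (one null, one non-null)."""
    if u == v and u is not None:
        return 1
    if u is not None and v is not None:  # both present but different
        return a
    if u is None and v is None:          # both missing
        return b
    return 0
```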
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene... |
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation across the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and aft... |
Deep neural networks are the state-of-the-art learning models used in artificial intelligence. The large number of parameters in neural networks makes them very good at modelling and approximating any arbitrary function. However, the large number of parameters also makes them particularly prone to over-fitting, requirin... |
It’s the original Dropout method, introduced in 2012. Standard Dropout provides a simple technique for avoiding over-fitting in fully connected neural networks [12]. During each training phase, each neuron is excluded from the network with probability p. Once trained, in the testing phase the full network is u... |
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein... | B |
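A minimal PyTorch sketch of a Q-network with standard dropout, as described in this row; the layer sizes and dropout rate are illustrative assumptions, not the paper's exact architecture:

```python
import torch.nn as nn

class DQNWithDropout(nn.Module):
    """Q-network with standard dropout: during training each hidden unit is
    dropped with probability p; at test time the full network is used
    (PyTorch rescales activations by 1/(1-p) during training instead)."""
    def __init__(self, obs_dim, n_actions, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)  # Q-values per action

q = DQNWithDropout(obs_dim=4, n_actions=2)
q.train()   # dropout active while fitting targets
q.eval()    # dropout disabled when acting greedily
```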
We group the semantic image segmentation literature into six different categories based on the nature of their contributions: architectural improvements, optimization function based improvements, data synthesis based improvements, weakly supervised models, sequenced models, and multi-task models. Figure 1 indicates th... |
In the following sections, we discuss deep semantic image segmentation improvements under different categories visualized in Figure 1. For each category, we first review the improvements on non-medical datasets, and in a subsequent section, we survey the improvements for medical images. | In contrast to natural images, it is difficult to tabulate and summarize the performance of medical image segmentation methods because of the vast number of (a) medical imaging modalities and (b) medical image segmentation datasets. Figure 15 presents a breakdown of the various attributes of the medical image segmentat... | Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic... |
We group the semantic image segmentation literature into six different categories based on the nature of their contributions: architectural improvements, optimization function based improvements, data synthesis based improvements, weakly supervised models, sequenced models, and multi-task models. Figure 1 indicates th... | A |
Problems such as graph classification and graph regression are characterized by samples of graphs that, generally, have a variable number of vertices.
In order to apply MP and pooling operations when training a GNN on mini-batches, one solution is to perform zero-padding and obtain all graphs with $N_{\text{max}}$... | To train the GNN on mini-batches of graphs with a variable number of nodes, we consider the disjoint union of the graphs in each mini-batch and train the GNN on the combined Laplacians and graph signals.
See the supplementary material for an illustration. | However, this solution is particularly inefficient in terms of memory cost, especially when there are many graphs with less than $N_{\text{max}}$ vertices.
A more efficient solution is to build the disjoint union of the graphs in each mini-batch and trai... | However, this solution is particularly inefficient in terms of memory cost, especially when there are many graphs with less than $N_{\text{max}}$ vertices.
A more efficient solution is to build the disjoint union of the graphs in each mini-batch and trai... | Problems such as graph classification and graph regression are characterized by samples of graphs that, generally, have a variable number of vertices.
In order to apply MP and pooling operations when training a GNN on mini-batches, one solution is to perform zero-padding and obtain all graphs with $N_{\text{max}}$... | B |
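A minimal NumPy/SciPy sketch of the disjoint-union batching described above: a block-diagonal adjacency (no edges between graphs) with stacked node signals, avoiding zero-padding to N_max:

```python
import numpy as np
from scipy.sparse import block_diag

def batch_graphs(adjs, signals):
    """Build the disjoint union of a mini-batch of graphs so MP layers can
    process all graphs at once.

    adjs: list of (N_i, N_i) adjacency matrices; signals: list of (N_i, F)."""
    A_batch = block_diag([a for a in adjs])  # block-diagonal adjacency
    X_batch = np.vstack(signals)             # (sum_i N_i, F) stacked node signals
    # graph id per node, needed by pooling/readout layers
    batch_idx = np.concatenate([np.full(a.shape[0], i) for i, a in enumerate(adjs)])
    return A_batch, X_batch, batch_idx
```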
The input data is normalized to $[-1,1]$.
For generating a wide variety of data, the prioritization of the current path $w_{\text{path}}\sim 1+\lvert\mathcal{N}(0,5)\rvert$... | In all our experiments, stochastic gradient descent with Nesterov momentum as optimizer and cross-entropy loss are used.
The initial learning rate is set to $0.1$, momentum to $0.9$, and weight decay to $0.0005$. The batch size is set to $128$ and $512$, respectively, for gen... | A new random forest is trained every $100$ epochs to average the influence of the stochastic process, and the generated data samples are mixed.
In the following, training on generated data will be denoted as NRFI (gen) and training on generated and original data as NRFI (gen+ori). The fraction of NRFI data is se... | fraction of NRFI data $w_{\text{gen}}$ is varied, which weights the loss of the generated data. Accordingly, the weight for the original data is set to $w_{\text{ori}}=1-w_{\text{gen}}$... | Figure 6:
Analyzing the influence of training with original data, NRFI data, and combinations of both for different numbers of samples per class. Using only NRFI data ($w_{\text{gen}}=100\%$) achieves better results than using only... | B |
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... | for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al....
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... |
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient... | D |
In MobileNet (Howard et al., 2017a) depthwise separable convolutions are used to split a standard convolution in another way: (i) a depthwise convolution and (ii) a $1\times 1$ convolution.
The depthwise convolution applies a $K\times K$ filter to each channel separately without taking t... | Similar ideas are used in SqueezeNet (Iandola et al., 2016) which employs $1\times 1$ convolutions to reduce the number of input channels of subsequent parallel $1\times 1$ and $3\times 3$ convolutions.
In addition, SqueezeNet uses the global average pooling output of per-class channels directly... | In MobileNet (Howard et al., 2017a) depthwise separable convolutions are used to split a standard convolution in another way: (i) a depthwise convolution and (ii) a $1\times 1$ convolution.
The depthwise convolution applies a $K\times K$ filter to each channel separately without taking t... | In particular, the residual path performs a $1\times 1$ convolution to increase the number of channels, followed by a cheap depthwise $3\times 3$ convolution, followed by another $1\times 1$ convolution to reduce the number of channels again.
They show that their inverted structure is more memor... | A typical residual block with bottleneck structure in ResNet (He et al., 2016) contains a $1\times 1$ bottleneck convolution to reduce the number of channels, followed by a $3\times 3$ convolution, followed by another $1\times 1$ convolution to restore the original number of channels again.
Cont... | D |
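A minimal PyTorch sketch of the depthwise separable factorization described above; channel counts are illustrative:

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, k=3):
    """MobileNet-style factorization of a standard convolution:
    (i) a KxK depthwise conv (groups=in_ch applies one filter per channel,
    no cross-channel mixing), then (ii) a 1x1 pointwise conv that mixes
    channels and sets the output width."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, kernel_size=1),                   # pointwise 1x1
    )

block = depthwise_separable(32, 64)
```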
If $X$ is a hyperbolic geodesic metric space, then for any $k\geq 1$ and $I=(u,v]\in\mathrm{barc}^{\mathrm{VR}}_{k}(X;\mathbb{F})$... | In Section 8, we reprove Rips and Gromov’s result about the contractibility of the Vietoris-Rips complex of hyperbolic geodesic metric spaces, by using our method consisting of isometric embeddings into injective metric spaces. As a result, we will be able to bound the length of intervals in Vietoris-Rips persistence b... |
As proved in [68] via the notion of the core of a metric graph, or as a consequence of [50, Proposition 2.2], the unit circle $\mathbb{S}^{1}$ and the join $X$ of $\mathbb{S}^{1}$... | A hyperconvex metric space is one where any collection of balls with non-empty pairwise intersections forces the non-empty intersection of all balls. These were studied by Aronszajn and Panitchpakdi [8], who showed that every hyperconvex space is an absolute 1-Lipschitz
retract. Isbell [52] proved that every metric spac... |
Observe that metric trees are both $0$-hyperbolic and hyperconvex. A recent paper by Joharinad and Jost [53] analyzes the persistent homology of metric spaces satisfying the hyperconvexity condition (which is equivalent to injectivity) as well as that of spaces satisfying a relaxed version of hyperconvexity. | D |
Adaptive PCP vs. PCP
Although it is not uncommon to find tools that use PCP views together with DR-based scatterplots (e.g., iPCA [69]) with various schemes for re-ordering and prioritizing the axes (e.g., [70, 71]), the arrangement and presentation of these PCPs are usually static in order to reflect attributes of ... | Apart from the adaptive filtering and re-ordering of the axes, we maintained a rather standard visual presentation of the PCP plot, to make sure it is as easy and natural as possible for users to inspect it. The colors reflect the labels of the data with the same colors as in the overview (Subsection 4.2), when availab... |
Adaptive Parallel Coordinates Plot. Our first proposal to support the task of interpreting patterns in a t-SNE projection is an Adaptive PCP [59], as shown in Figure 1(k). It highlights the dimensions of the points selected with the lasso tool, using a maximum of 8 axes at any time, to avoid clutter. The shown axes (... | Adaptive PCP vs. PCP
Although it is not uncommon to find tools that use PCP views together with DR-based scatterplots (e.g., iPCA [69]) with various schemes for re-ordering and prioritizing the axes (e.g., [70, 71]), the arrangement and presentation of these PCPs are usually static in order to reflect attributes of ... | To briefly present the benefits of using our technique, we employ the Single Proton Emission Computed Tomography (SPECTF) data set [58] with 44 dimensions. In Figure 12, we can observe that the standard PCP is cluttered, especially for the case without any selection. Thus, it is hard to see why the normal class is actu... | D |
Neighborhood based differential vector: In this subcategory, each solution is affected only by solutions in its local neighborhood. Table 26 compiles all algorithms that are classified in this subcategory. A notable example in this list is BFOA [148], in which all solutions in the neighborhood impact on the computation... | The second and third most influential algorithms are GA, a very classic algorithm, and DE, a well-known algorithm whose natural inspiration resides only in the evolution of a population. Both have been used by around 5% of all reviewed nature-inspired algorithms, and they are the most representative approach in the Evo... |
This category is composed of algorithms that explore the domain search by generating new solutions, not by moving existing ones. This group is a significant ratio (almost 31%) of all proposals, and includes many classical algorithms like GA [98]. A very widely exploited advantage of these methods is the possibility to... | Differential Vector Movement, in which new solutions are produced by a shift or a mutation performed onto a previous solution. The newly generated solution could compete against previous ones, or against other solutions in the population to achieve a space and remain therein in subsequent search iterations. This soluti... |
Bearing the above criteria in mind, Figure 5 shows the classification reached after our literature analysis. The plot indicates, for the 518 reviewed algorithms, the number and ratio of proposals classified in each category and subcategory. It can be observed that in most nature- and bio-inspired algorithms, new solut... | B |
where $\varphi(\cdot)$ is a certain activation function, $\hat{A}=\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}$... | To apply graph convolution to unsupervised learning, GAE is proposed [20].
GAE first transforms each node into a latent representation (i.e., embedding) via GCN, and then aims to reconstruct some part of the input. GAEs proposed in [20, 29, 22] intend to reconstruct the adjacency via the decoder, while GAEs developed in [21...
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update ... | (1) Via extending the generative graph models to general-type data, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec... | Network embedding is a fundamental task for graph-type data such as recommendation systems, social networks, etc.
The goal is to map nodes of a given graph into latent features (namely embedding) such that the learned embedding can be utilized on node classification, node clustering, and link prediction. | A |
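The renormalized adjacency $\hat{A}$ above is straightforward to compute; a minimal NumPy sketch, assuming the usual self-loop convention $\widetilde{A}=A+I$ with $\widetilde{D}$ its degree matrix:

```python
import numpy as np

def normalized_adjacency(A: np.ndarray) -> np.ndarray:
    """GCN-style renormalization: A_hat = D~^{-1/2} A~ D~^{-1/2},
    where A~ = A + I adds self-loops and D~ is its degree matrix."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A_hat = normalized_adjacency(A)  # input to each GCN layer: phi(A_hat X W)
```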
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the... |
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the... |
IPID technique. When spoofing is not filtered, the counter on the server will be incremented - which is the test action. At the probing phase, the counter’s value will be equal to or larger than the expected value after the increment phase. The repeated measurements ensure that we do not accidentally interpret noise (i.e., pac... | Methodology. We send a DNS request to the tested network from a spoofed IP address belonging to the tested network. If the network does not enforce ingress filtering, the request will arrive at the DNS resolver on that network. A query from a spoofed source IP address will cause the response to be sent to the IP addres...
The challenge here is to accurately probe the increment rate of the IPID value (caused by the packets from other sources not controlled by us), in order to be able to extrapolate the value that will have been assigned to our second probe from a real source IP. This allows us to infer whether the spoofed packets incremente... | D |
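A minimal sketch of the extrapolation logic described above; all names and the slack factor are illustrative assumptions, and real measurements are repeated to separate signal from noise and to handle counter wrap-around:

```python
def spoofed_packets_arrived(ipid_start, ipid_end, background_rate, dt,
                            n_spoofed, slack=0.5):
    """Infer whether spoofed probes reached a server with a globally
    incremental IPID: extrapolate the counter from its measured background
    increment rate, then check whether the observed value also reflects
    the n_spoofed test packets."""
    expected_background = ipid_start + background_rate * dt
    extra = (ipid_end - expected_background) % 2**16  # 16-bit IPID counter
    return extra >= n_spoofed * (1 - slack)
```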
All neural networks in this section were trained using stochastic gradient descent with momentum [24] on the loss function $\mathcal{L}$. The learning rate was set to $10^{-3}$ and the momentum factor to $0.9$. Networks were trained fo... | Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data a... |
The skill network approach incorporates all available data into a single training set, disregarding the sequential structure between batches of the dataset. For each batch $T$, a network was trained using batches $1$ through $T-1$ as the training set and evaluated on batch $T$. | First, the effect of sensor drift on classification accuracy is demonstrated using classifiers trained on a single batch. For each batch $1$ through $10$, a feedforward model was trained on that batch. Training of a new model was repeated 30 times on each batch. The accuracy of all classifiers was evaluated on ev... |
In order to improve performance, Vergara et al. [7] employed an ensemble technique on the SVM classifiers (Fig. 2B). The same technique was reimplemented and tested on the modified dataset in this paper. The ensemble meant to generalize to batch $T$ was constructed by training a collection of single-batch cla... | B |
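A minimal sketch of the sequential-batch protocol described above; train_model and evaluate are placeholders for the actual model code:

```python
def drift_evaluation(batches, train_model, evaluate):
    """For each target batch T, fit on batches 1..T-1 and test on the
    unseen batch T, mirroring how sensor drift degrades accuracy over time."""
    scores = {}
    for T in range(1, len(batches)):
        model = train_model(batches[:T])         # all data up to batch T-1
        scores[T] = evaluate(model, batches[T])  # accuracy on unseen batch T
    return scores
```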
Now we can define the tables $A^{(1)}$, $A^{(2)}$ and $A^{(3)}$ that our algorithm uses.
Recall that for... |
$A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. |
$A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | $A^{(2)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}^{(2)}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | $A^{(1)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}^{(1)}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | B |
Let $S$ be a (completely) self-similar semigroup and let $T$ be a finite or free semigroup. Then $S\star T$ is (completely) self-similar. If furthermore $S$ is a (complete) automaton semigroup, then so is $S\star T$.
| While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there ... | from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups¹ (Note that the c...) | By Corollaries 10 and 11, we have to look into idempotent-free automaton semigroups without length functions in order to find a pair of self-similar (or automaton) semigroups not satisfying the hypothesis of Theorem 6 (or 8), which would be required in order to either relax the hypothesis even further (possibly with a ... | The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing ... | C
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the p... |
Without additional regularization, existing VQA models such as the baseline model used in this work, UpDn Anderson et al. (2018), tend to rely on the linguistic priors $P(a|\mathcal{Q})$ to answer questions. Such models fail on VQA-CP, because the priors in ... | While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented...
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea... |
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende... | C |
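A schematic PyTorch sketch of the "zero out the ground-truth answers" regularizer mentioned above; the loss weighting and exact placement in the training objective are assumptions, not the paper's specification:

```python
import torch
import torch.nn.functional as F

def zero_target_regularizer(answer_logits):
    # Replace the ground-truth answer vector with all zeros, so the model
    # is penalized for any confident prediction, right or wrong (sketch).
    zeros = torch.zeros_like(answer_logits)
    return F.binary_cross_entropy_with_logits(answer_logits, zeros)

# Illustrative use (lambda_reg is a hypothetical weight):
# total_loss = task_loss + lambda_reg * zero_target_regularizer(logits)
```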
The complete set of documents was divided into 97 languages and an unknown language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.
|
For the URL model, the words in the URL path were extracted and the tf-idf of each term was recorded to create the features (Baykan et al., 2009). As privacy policy URLs tend to be shorter and have fewer path segments than typical URLs, length and the number of path segments were added as features. Since the classes w... | We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page. We trained three random forest models and fine-tuned a transformer based pretrained language model, namely RoBERTa (Liu et al., 2019). The three random forest mod... |
Content Extraction. Manual inspection of the English language web pages showed that they included content other than the main text: often they had a header, a footer, a navigation menu, and banners. We refer to this extra content in a web page as boilerplate. Boilerplate draws away from the focus of the main content i... |
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)... | C |
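A minimal sketch of the URL featurization described in this row (tf-idf over URL-path terms plus length and segment-count features); the input URLs and tokenization rule are illustrative:

```python
import re
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer

def url_features(urls):
    # tf-idf over terms appearing in the URL path (cf. Baykan et al., 2009),
    # plus URL length and path-segment count, which help because privacy
    # policy URLs tend to be short with few segments.
    paths = [re.sub(r"^https?://[^/]+", "", u) for u in urls]
    docs = [" ".join(t for t in re.split(r"[/\-_.]+", p) if t) for p in paths]
    tfidf = TfidfVectorizer().fit_transform(docs)
    extra = np.array([[len(u), u.count("/")] for u in urls], dtype=float)
    return hstack([tfidf, csr_matrix(extra)])

features = url_features(["https://example.com/privacy-policy",
                         "https://example.com/blog/2020/06/post"])
```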
In our VA system, the user can explore how models perform on each class of the data set, and the performance metrics are distilled into a combined, user-driven value. Manifold [66] generates pairs of models and compares them over all classes of a data set, including feature selection. We adopt a similar approach, but in...
Figure 2: The exploration process of ML algorithms. View (a.1) summarizes the performance of all available algorithms, and (a.2) the per-class performance based on precision, recall, and f1-score for each algorithm. (b) presents a selection of parameters for KNN in order to boost the per-class performance shown in (c.... | Figure 5(a) presents ensemble (S3), with all models still included. Figure 5(a+b) show the same projection but with different color-encodings for two selected performance metrics: f2-score and MCC. They allow us to decide which models are vital in order to stabilize the performance of the ense...
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model leads us to stable design goals and analytical tasks that were realized by StackGenVis. With the c... | For instance, the more recent tool iFuseML [48] operates with prediction errors in order to present ensemble models with more accurate predictions to the users. The comparison of models is very different in our approach: we use preliminary results from performance metrics in order to select the appropriate models that ... | D |
We thus have 3 cases, depending on the value of the tuple
$(p(v,[010]),\,p(v,[323]),\,p(v,[313]),\,p(v,[003]))$ | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | $\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}$. | $p(v,[013])=p(v,[313])=p(v,[113])=1$.
Similarly, when $f=[112]$, | B
We use Transformer [Vaswani et al., 2017] as the base model in the dialogue generation experiment.
In Persona, we use pre-trained GloVe embeddings [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019]. |
To answer RQ2, we find the fine-tuning epochs for each task in Persona at which its BLEU and C Score reach their best, respectively, to find the impact of data quantity and the task profile (persona description) on fine-tuning. (Table 1) We cluster the tasks with similar best fine-tuning epoch numbers and calculate the aver... |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | We use Transformer [Vaswani et al., 2017] as the base model in the dialogue generation experiment.
In Persona, we use pre-trained GloVe embeddings [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019]. | C
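A small example of the BLEU computation named above, using NLTK's sentence-level BLEU; the smoothing choice and whitespace tokenization are illustrative, not the paper's exact setup:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "i enjoy reading science fiction books .".split()
candidate = "i like reading science fiction .".split()
# Smoothing keeps short sentences from collapsing to a zero score.
score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```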
$\mathcal{S}_k^{\mathrm{r}} \cap \mathcal{S}_j^{\mathrm{r}} = \emptyset,\; k \neq j.$
From the aforementioned two properties of the CCA, we know that the optimal beamforming and combining vectors $\boldsymbol{f}_k(t)$ and $\boldsymbol{w}_k(t)$ | $\mathcal{F}$ and $\mathcal{W}$ are the sets of all analog beamforming vectors and combining vectors satisfying the hardware constraints, respectively.
In fact, solving the above problem (13) requires a new codebook design and codeword selection/processing strategy. Noting the interdependent... | After the discussion on the characteristics of the CCA, in this subsection we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the ... | The t-UAV needs to select an appropriate codeword $\boldsymbol{v}(i,j,\mathcal{S})$ from our proposed codebook $\mathcal{V}_k$ to solve the subarray partition and AWV selecti... | B
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument... | The requirement that $\bar{M}|\bar{N}$ is extra big enough ensures that we have enough edges to perform the edge swapping.
This completes the proof for case 2 when the assumptions (a1) and (a2) hold. | We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument... | To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | This will be bootstrapped to the multi-color case in later sections. Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on
the left must be connected, via the unique edge relation, to every node on the ri... | D
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... |
at the mean-field limit with $\epsilon \rightarrow 0^{+}$ and $m \rightarrow \infty$. Such a correspondence allows us to use the PDE solution $\rho_t$ in (3.... | The proof of Proposition 3.1 is based on the propagation of chaos (Sznitman, 1991; Mei et al., 2018, 2019).
In contrast to Mei et al. (2018, 2019), the PDE in (3.4) cannot be cast as a gradient flow, since there does not exist a corresponding energy functional. Thus, their analysis is not directly applicable to our se... | The key to our analysis is a mean-field perspective, which allows us to associate the evolution of a finite-dimensional parameter with its limiting counterpart over an infinite-dimensional Wasserstein space (Villani, 2003, 2008; Ambrosio et al., 2008; Ambrosio and Gigli, 2013). Specifically, by exploiting the permutati... | D
Considering that the layer stacks of the 6-layer Transformer are not that deep and vanilla RNNs can play a similar role as LSTMs, is it possible to train the model with a depth-wise RNN rather than the depth-wise LSTM? We first study using different approaches (Transformer, the depth-wise RNN and the depth-wise LSTM) f... |
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transform... | Considering that the layer stacks of the 6-layer Transformer are not that deep and vanilla RNNs can play a similar role as LSTMs, is it possible to train the model with a depth-wise RNN rather than the depth-wise LSTM? We first study using different approaches (Transformer, the depth-wise RNN and the depth-wise LSTM) f... | Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de... | Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t... | A |
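A minimal PyTorch sketch of the depth-wise LSTM idea this row describes: an LSTM cell runs across the layer (depth) dimension, consuming each layer's sub-layer output in place of residual connections. The sub-layer factory and the flattened input shape are simplifying assumptions, not the paper's exact wiring:

```python
import torch
import torch.nn as nn

class DepthWiseLSTMStack(nn.Module):
    """Sketch: an LSTM cell whose state flows across layers, not time."""
    def __init__(self, d_model, num_layers, make_sublayer):
        super().__init__()
        # make_sublayer() stands in for an attention + FFN sub-layer block.
        self.layers = nn.ModuleList(make_sublayer() for _ in range(num_layers))
        self.cell = nn.LSTMCell(d_model, d_model)

    def forward(self, x):           # x: (batch * seq_len, d_model)
        h = torch.zeros_like(x)     # hidden/cell states carried across depth
        c = torch.zeros_like(x)
        for layer in self.layers:
            h, c = self.cell(layer(x), (h, c))
            x = h                   # LSTM output replaces the residual path
        return x

stack = DepthWiseLSTMStack(512, 6, lambda: nn.Sequential(
    nn.Linear(512, 512), nn.ReLU()))
out = stack(torch.randn(32, 512))
```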
$\varphi\in\mathsf{FO}[\sigma]$, if $A\models\varphi$, then there
exists a finite structure $A_{\mathrm{fin}}$ such that | $\psi_{\supseteq P_n}\triangleq\exists x_0,\dots,x_{n-1}.\,\bigwedge_{i\neq j}\neg(x_i=x_j)\wedge\bigwedge_{0\leq i<n-1}E(x_i,x_{i+1})$ | $X\triangleq\big\{\vec{x}\in\prod_{i\in I}X_i\mid\forall i\leq j\in I,\;x_j=f_{i,j}(x_i)\big\}$ | $\exists x_1,\dots,x_s.\,\big(\bigwedge_{1\leq i\leq s}\alpha^{(r)}(x_i)\wedge\bigwedge_{1\leq i<j\leq s}d^{>2r}(x_i,x_j)\big),$ | $\forall x\in X_1,\,y\in X_2,\;f(x,y)\models\varphi\Leftrightarrow\beta\big((x\models\psi_i^1)_{1\leq i\leq n};(y\models\psi_i^2)_{1\leq i\leq n}\big)=1$. | C
To overcome the above limitations, previous methods exploit more guidance features such as semantic information and distorted lines [9, 10], or introduce a pixel-wise reconstruction loss [11, 12, 13]. However, the extra features and supervision impose an increased memory/computation cost. In this work, we would like... | 2. The local-global associate ordinal distortion estimation network considers different scales of distortion features, jointly reasoning about the local and global distortion context. Also, the devised distortion-aware perception layer boosts the feature extraction of different degrees of distortion.
| In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl... | (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o... | After predicting the distortion labels of a distorted image, it is straightforward to use a distance metric loss such as the $\mathcal{L}_1$ loss or $\mathcal{L}_2$ loss to learn the network paramete... | B
We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset (https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/).
We set aside 20% of the samples as the test set and divide the rema... | We use a pre-trained ViT (https://huggingface.co/google/vit-base-patch16-224-in21k) [4] model and fine-tune it on the CIFAR-10/CIFAR-100 datasets.
The experiments are implemented based on the Transformers (https://github.com/huggingface/transformers) framework. We fine-tune the model for 20 epochs. | We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD.
The experiments are implemented based on the DeepCTR (https://github.com/shenweichen/DeepCTR-Torch) framework. | We compare SNGM with four baselines: MSGD, LARS [34], EXTRAP-SGD [19] and CLARS [12]. For LARS, EXTRAP-SGD and CLARS, we adopt the open
source code (https://github.com/NUS-HPC-AI-Lab/LARS-ImageNet-PyTorch, http://proceedings.mlr.press/v119/lin20b.html, https://github.com/slowbull/largebatch)
If we avoid these tricks, these methods may suffer from severe performance degradation. | For LARS and its variants, the proposal of the layer-wise update strategy is primarily based on empirical observations. Its reasonableness and necessity remain doubtful from an optimization perspective. | B
$\mathrm{support}(\mathcal{D}) \subseteq 2^{\mathcal{C}} \times \mathbb{R}^{\mathcal{F}}$ and, in the black-box setting, $|\mathcal{D}|$ may be uncountably infinite.
| The most general way to represent the scenario distribution $\mathcal{D}$ is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios $A$ according to $\mathcal{D}$. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the ... | Stochastic optimization, first introduced in the work of Beale [4] and Dantzig [8], provides a way to model uncertainty in the realization of the input data. In this paper, we give approximation algorithms for a family of problems in stochastic optimization, and more precisely in the 2-stage recourse model [27].
Our... | The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, we convert any $\rho$-approximation algorithm for the robust outlier problem into a $(\rho+2)$-approximation algorithm for the corresponding two-stage sto... | Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ... | D
However, a variety of random factors may co-exist in practical environments.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly, and the communication links may be noisy. There are many excellent results on the d... | such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), etc. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function... | Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d.
In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments...
III. The co-existence of random graphs, subgradient measurement noises, and additive and multiplicative communication noises is considered. Compared with the case with only a single random factor, the coupling terms of different random factors inevitably affect the mean square difference between optimizers’ states and an... | and show how various random factors affect the convergence rate of the algorithm in Theorem III.4.
In [6], the convergence rates of the distributed stochastic gradient descent algorithm with precise communications were analyzed under the conditions that the communication graphs are i.i.d. and the mean graph is connecte... | B
Compared to generalization, the bucketization technique [33, 18] maintains excellent information utility because it preserves all the original QI values. However, most existing approaches cannot prevent identity disclosure, and the existence of individuals in the published table is likely to be disclosed [27]. Furthermore, t... | Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ... | Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata, while the approaches of the $k$-anonymity family sanitize the origi...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics... | In recent years, the massive digital information of individuals has been collected by numerous organizations. The data holders, also known as curators, use the data for data mining tasks, meanwhile they also exchange or publish microdata for further comprehensive research. However, the publication of microdata poses cr... | A
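A tiny worked example of randomized response, the local-differential-privacy primitive named above: each user reports the true bit with probability $p$ and a fair coin flip otherwise, and the aggregator debiases the noisy mean. The value of $p$ here is illustrative:

```python
import random

def randomized_response(true_bit, p=0.75):
    # Report the true bit with probability p, otherwise a fair coin flip,
    # giving each respondent plausible deniability.
    return true_bit if random.random() < p else random.randint(0, 1)

def estimate_true_frequency(reports, p=0.75):
    # E[report] = p * f + (1 - p) / 2, so invert to debias the aggregate.
    mean = sum(reports) / len(reports)
    return (mean - (1 - p) / 2) / p
```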
Table 3: PointRend’s performance on the testing set (track B). “EnrichFeat” means enhancing the feature representation of the coarse mask head and point head by increasing the number of fully-connected layers or their hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvement; we guess... | Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same settings as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62.... | PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. The mask scoring head Huang et al. (2019) adopted on the third stage gains an... | B
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| We denote by $\varepsilon_i:\{-1,1\}^n\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_i(\delta_1,\dots,\delta_n)=\delta_i$... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^n$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | C