| context | A | B | C | D | label |
|---|---|---|---|---|---|
$\Big].\;+\frac{(n-m)(n-m-2)(D+n+m)(D+n+m+2)}{8(D+2m)(D+2m+2)}\,x^{4}+\cdots\Big].$ | $x^{3}(x^{2}-1)^{2}\frac{d^{3}}{dx^{3}}\cdots\big[\cdots^{2}-m^{2}\big]x^{2}+D^{2}+D(m-1)-2m+m^{2}\Big\}\frac{d}{dx}R_{n}^{m}(x).$ ... | $R_{n}^{m}(x)=(-1)^{(n-m)/2}x^{m}P_{(n-m)/2}^{(m+1-D/2,\,0)}(1-2x^{2})=\binom{n+1-D/2}{(n-m)/2}x^{m}G_{-a}(2+m-D/2,\,2+m-D/2,\,x^{2})$ ... | $R_{n}^{m}(x)=(-1)^{(n-m)/2}\binom{\frac{D+m+n}{2}\cdots}{\cdots}\cdots\left(\begin{array}{c}\cdots\\ m+D/2\end{array}\,\middle|\;x^{2}\right),$ ... | $x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}\cdots+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).$ ... | B |
The lower-unitriangular matrices $u_{1}$ and $u_{2}$ are returned as words in the Leedham-Green–O'Brien standard generators [11] for $\textnormal{SL}(d,q)$ define... | The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in... | Therefore, we decided to base the procedures we present on a set of generators very close to the LGO standard generators. Note that the choice of the generating set has no impact on the results, as it is always possible to determine an MSLP which computes the LGO standard generators given an arbitrary generating set a... | There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well-chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and application... | Note that a small variation of these standard generators for $\textnormal{SL}(d,q)$ is used in Magma [14] as well as in algorithms to verify presentations of classical groups, see [12], where only the generator $v$ is slightly different in the two scenarios when $d$ ... | C |
To show the existence and uniqueness of solutions for (21), we proceed by parts. The existence of a solution for the first equation follows from Lemma LABEL:l:lrmsystem. Solving the second equation is equivalent to (22), and such a system is well-posed due to the coercivity of $(\cdot,T\cdot)_{\partial\mathcal{T}_{H}}$... | Except for (ii), all steps above can be performed efficiently, as the matrices involved are sparse and either local or independent of $h$. Solving (25), on the other hand, involves computing the $h$-dependent, global operator $P$, leading to a dense matrix in (25). From now on, we concentrat... | The key to approximating (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That al... | It is essential for the performance of the method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it would be reasonable to solve them locally using patches of elements. We note that the ide... | Above, and in what follows, $c$ denotes an arbitrary constant that does not depend on $H$, $\mathscr{H}$, $h$, or $\mathcal{A}$, depending only on the shape regularity of the elements of $\mathcal{T}_{H}$... | A |
Alg-A computes at most $n$ candidate triangles (proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), and so does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$'s vertices and (2) searching for the next candidate from a given one is much easier – the code-length ratio for this is 1:7 between Alg-A and Alg-CM. | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its authors, as it contains complicated subroutines for handling many subcases. | Alg-A computes at most $n$ candidate triangles (proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), and so does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | B |
Early in an event, the related tweet volume is scanty and there is no clear propagation pattern yet. For the credibility model we therefore leverage the signals derived from tweet contents. Related work often uses aggregated content [18, 20, 32], since individual tweets are often too short and contain slender contex... | at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha... | For the evaluation, we developed two kinds of classification models: a traditional classifier with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 4.2... | Given a tweet, our task is to classify whether it is associated with news or with a rumor. Most of the previous work [6, 11] on the tweet level only aims to measure trustworthiness based on human judgment (note that even if a tweet is trusted, it could anyway relate to a rumor). Our task is, to a point, a reverse engin... | Most relevant for our work is the work presented in [20], where a time series model is used to capture the time-based variation of social-content features. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys... | C |
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$ ... | where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen... | In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the $K$-class SVM: | B |
Early in an event, the related tweet volume is scanty and there is no clear propagation pattern yet. For the credibility model we therefore leverage the signals derived from tweet contents. Related work often uses aggregated content (liu2015real; ma2015detect; zhao2015enquiring), since individual tweets are of... | For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even... | For this task, we developed two kinds of classification models: a traditional classifier with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 3.2). Fo... | Given a tweet, our task is to classify whether it is associated with news or with a rumor. Most of the previous work (castillo2011information; gupta2014tweetcred) on the tweet level only aims to measure trustworthiness based on human judgment (note that even if a tweet is trusted, it could anyway relate to a rumor)... | the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, that could capture more hidden meaningful signal than only enquiries to debunk rumor... | C |
$\mathsf{f}^{*}=\arg\min_{f}\sum_{\forall a}\mathcal{L}\Big(\sum_{k}P(\mathcal{C}_{k}|a,t)\sum_{l=1}^{m}P(\mathcal{T}_{l}|a,t,\mathcal{C}_{k})\,\hat{y}_{a},\;y_{a}\Big)$ ... | Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ... | Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming an independent loss function that does not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is learning a linear... | D |
$R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\},$ | the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ... | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | one uses $p(\theta_{t}|\mathcal{H}_{1:t})$ to compute the probability of an arm being optimal, i.e., $\pi(A|x_{t+1},\mathcal{H}_{1:t})=\mathbb{P}(A=a^{*}_{t+1}|x_{t+1},\theta_{t},...$ | Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many. TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018]. | D |
Table 2 gives an overview of the number of different measurements that are available for each patient (for patient 9, no data is available). The study duration varies among the patients, ranging from 18 days for patient 8 to 33 days for patient 14. | The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i... | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients. For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17), at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | D |
Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript $^{\dagger}$ represents models with a VGG16 backbone... | Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba... | Our proposed encoder-decoder model clearly demonstrated competitive performance towards visual saliency prediction on two datasets. The ASPP module incorporated multi-scale information and global context based on semantic feature representations, which significantly improved the results both qualitatively and quantita... | Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer... | To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met... | D |
Finally, we have to show that in this pd-marking scheme, the maximum number of $\texttt{active}$ positions is bounded by $2k+1$. This is obviously true at step $p_{1}$. Now let $s$ with $1\leq s\leq|\alpha|-1$... | In the first phase of the marking scheme, i.e., the phase where we only set extending positions to $\texttt{active}$, the following different situations can arise whenever we set some position $j$ to $\texttt{active}$ (see Figure 7 for an illustration)... | This completes the definition of the marking scheme. Figure 7 contains an example of how step $p_{s+1}$ is obtained from step $p_{s}$. In this example, we first set extending po... | We first prove $\mathsf{pw}(G_{\alpha})\leq 2\,\mathsf{loc}(\alpha)$. Intuitively speaking, we will translate the stages of a marking sequence $\sigma$ for $\alpha$... | $j$ joins two blocks of size $1$: the number of $\texttt{active}$ positions increases by $1$. This is due to the fact that by setting $j$ to $\texttt{active}$, we do not create any internal $\texttt{active}$ position... | A |
In [136] the authors used the Jaccard distance as the optimization objective function, integrating a residual learning strategy and introducing a batch normalization layer to train a u-net. It is shown in the paper that this configuration performed better than other, simpler u-nets in terms of Dice. | Tan et al. [135] parameterize all short-axis slices and phases of the LV segmentation task in terms of the radial distances between the LV center-point and the endocardial and epicardial contours in polar space. Then, they train a CNN regression on STA11 to infer these parameters and test the generalizability of the met... | Isensee et al. [141] used an ensemble of a 2D and a 3D u-net for segmentation of the LV/RV cavity and the LV myocardium at each time instance of the cardiac cycle. Information was extracted from the segmented time-series in the form of features that reflect diagnostic clinical procedures for the purposes of the classificati... | The model was trained alternately on LV segmentation and volume estimation, placing fourth on the test set of DS16. Emad et al. [138] localize the LV using a CNN and a pyramid-of-scales analysis to take into account different sizes of the heart, using the YUDB. | Luo et al. [133] adopted an LV atlas mapping method to achieve accurate localization using MRI data from DS16. Then, a three-layer CNN was trained for predicting the LV volume, achieving results comparable with the winners of the challenge in terms of the root mean square of end-diastole and end-systole volumes. | C |
This demonstrates that SimPLe excels in a low-data regime, but its advantage disappears with a bigger amount of data. Such behavior, with fast growth at the beginning of training but lower asymptotic performance, is commonly observed when comparing model-based and model-free methods (Wang et al., 2019). As observed ... | Finally, we verified whether a model obtained with SimPLe using 100K is a useful initialization for model-free PPO training. Based on the results depicted in Figure 5 (b), we can answer this conjecture positively. Lower asymptotic performance is probably due to worse exploration. A policy pre-trained with SimPLe was... | The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good pol... | We focused our work on learning games with 100K interaction steps with the environment. In this section we present additional results for settings with 20K, 50K, 200K, 500K and 1M interactions; see Figure 5 (a). Our results are poor with 20K interactions. For 50K th... | The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ... | A |
One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz). Truong et al. [9] used Short... | For the CNN modules with one and two layers, $x_{i}$ is converted to an image using learnable parameters instead of some static procedure. The one-layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels). | The architectures of all $b_{d}$ remained the same, except for the number of output nodes of the last linear layer, which was set to five to correspond to the number of classes of $D$. An example of the respective outputs of some of the $m$... | Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification. Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke. | One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz). Truong et al. [9] used Short... | C |
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal... |
It is important to emphasize that the locomotion mode transitions are only meaningful when both rolling and walking modes are capable of handling a step negotiation. In the step negotiation simulations, it has been observed that the rolling locomotion cannot traverse steps with height more than three time ... |
The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we ... |
During the step negotiation simulations, it was noticed that the rolling locomotion mode encountered constraints when attempting to cross steps with a height greater than thrice the track height (h being the track height as shown in Fig. 3). This limitation originates from the traction forces generated by the tracks. ... |
Figure 12: The Cricket robot tackles a step of height 3h by initiating in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The transition process mirrors that of the 2h step negotiation shown in Fig. 11. Unlike tackling a 2h step, the robot achieves considerable i... | A |
For paid exchanges at the beginning of the phase, Tog incurs a cost that is less than $m^{2}$. Before serving the last request $\sigma_{\ell}$ of the phase, the access cost of Tog is less ... | Similar arguments apply for an ignoring phase, with the exception that the threshold is $\beta\cdot m^{2}$ and there are no paid exchanges performed by Tog. So, we can observe the following. | The worst-case ratio between the costs of Tog and Mtf2 is maximized when the last phase is an ignoring phase. In this case, we have $k$ trusting phases and $k$ ignoring phases. The total cost of Mtf2 is at least $km^{3}+k(\beta m^{3}/2-m^{2})=km^{3}(1+\beta/2-1/m)$... | For a trusting phase, the cost of Tog is in the range $(m^{3},\,m^{3}(1+1/m+1/m^{2}))$... | In an ignoring phase, the cost of Tog for the phase is in the range $(\beta m^{3},\,\beta m^{3}(1+1/m^{2}))$... | A |
Finally, explainability/interpretability is another important requirement for EDD. As with any other critical application in healthcare, finance, or national security, this is a domain that would greatly benefit from models that not only make correct predictions but also facilitate understanding of how those p... | Although interpretability and explanations have a long tradition in areas of AI like expert systems and argumentation, they have gained renewed interest in modern applications due to the complexity and obscure nature of popular machine learning methods based on deep learning. | Nonetheless, this manual process is very expensive and error-prone, since the KB of a real expert system includes thousands of rules. This, added to the rise of big data and cheaper GPU-powered computing hardware, is causing a major shift in the development of these intelligent systems, in which machine learning is incr... | In this context, this work introduces a machine learning framework, based on a novel white-box text classifier, for developing intelligent systems to deal with early risk detection (ERD) problems. In order to evaluate and analyze our classifier's performance, we will focus on a relevant ERD task: early depression detec... | On the other hand, in the machine learning community, the importance of having publicly available datasets to foster research on a particular topic, in this case predicting depression based on language use, is well known. That was the reason why the main goal in [Losada & Crestani, 2016] was to provide, to the best ... | A |
$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\,\frac{1}{K}\sum_{k\in[K]}\mathcal{C}(\mathbf{e}_{t+\frac{1}{2},k})$ ... | DEF-A achieves its best performance when $\lambda=0.3$. In comparison, GMC+ outperforms DEF-A across different $\lambda$ values and shows a preference for a larger $\lambda$ (e.g., 0.5). In the following experiments, we set $\lambda$ as 0.3 for DEF-A and 0.5 for GMC+. $\lambda=$... | Since RBGS introduces a larger compressed error compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge when using RBGS as the sparsification compressor. To address this convergence issue, | Note that the convergence guarantee of DEF-A and its momentum variant for non-convex problems is lacking in (Xu and Huang, 2022). We provide the convergence analysis for GMC+, which can be seen as a global momentum variant of DEF-A. We eliminate the assumption of ring-allreduce compatibility from (Xu and Huang, 2022) a... | Due to the larger compressed error introduced by RBGS compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge. Xu and Huang (2022) propose DEF-A to solve the convergence problem by using detached error fee... | D |
Moreover, activation functions that produce continuous-valued activation maps (such as ReLU) are less biologically plausible, because biological neurons are rarely in their maximum saturation regime [22] and use spikes to communicate instead of continuous values [23]. | Previous work by Blier et al. [31] demonstrated the ability of DNNs to losslessly compress the input data and the weights, but without considering the number of non-zero activations. In this work we relax the lossless requirement and also consider neural networks purely as function approximators instead of probabilist ... | Previous literature has also demonstrated the increased biological plausibility of sparseness in artificial neural networks [24]. Spike-like sparsity on activation maps has been thoroughly researched in the more biologically plausible rate-based network models [25], but it has not been thoroughly explored as a design o... | In neural networks, sparseness can be applied on the connections between neurons or in the activation maps [14]. Although sparseness in the activation maps is usually enforced in the loss function by adding an $L_{1,2}$ regularization or Kullback-Leibler... | Moreover, activation functions that produce continuous-valued activation maps (such as ReLU) are less biologically plausible, because biological neurons are rarely in their maximum saturation regime [22] and use spikes to communicate instead of continuous values [23]. | B |
Game theory provides an efficient tool for cooperation through resource allocation and sharing [20][21]. A computation offloading game has been designed in order to balance the UAV's tradeoff between execution time and energy consumption [25]. A sub-modular game is adopted in the scheduling of beaconing periods fo... | In the literature, most works search for a PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, there are limitations to this algorithm. In BLLA, each UAV can calculate and predict its utility for any $s_{i}\in S_{i}$... | The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change its strategy in one iteration based on the current game state, and then another UAV ch... | Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit... | Compared with other algorithms, the novel algorithm SPBLLA has more advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely seen algorithm, LLA, is an ideal method for NE approachin... | D |
imposed at all boundary points in the plasma domain (in combination with the explicitly applied boundary conditions $\overline{\mathbf{v}}|_{\Gamma}=\mathbf{0}$), | to the peak values of $\psi_{main}$, $\psi_{lev}$, and $\psi_{comp}$ ... | $V_{form}=16$ kV, $I_{main}=70$ A, $V_{lev}=16$ ... | were set to the experimentally measured values corresponding to experimentally recorded $V_{lev}=16$ kV and $V_{comp}=18$ ... | typical shot with $V_{form}=16$ kV, where $V_{form}$ is the voltage to which the formation capaci... | B |
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$ ... | Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly | When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$ ... | $f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\ v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$ ... | Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality) by $h_{A}$ ... | A |
Figure 6 shows the loss metrics of the three algorithms in the CARTPOLE environment; this implies that the Dropout-DQN methods introduce more accurate gradient estimation of policies across iterations of different learning trials than DQN. The rate of convergence of one of the Dropout-DQN methods has done more iterations t... | In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene... | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to assess the effect of Dropout on variance and the quality of the learned policies. For the overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to assess the effect of Dropout, because in such an environment the optim... | In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e... | The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation between the variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect of variance before applying Dropout (DQN) and aft... | C |
Encoder-decoder networks with long and short skip connections are the winning architectures according to the state-of-the-art methods. Skip connections in deep networks have improved both segmentation and classification performance by facilitating the training of deeper network architectures and reducing the risks for ... | Medical images, both 2D and volumetric, have in general larger file sizes than natural images, which inhibits the ability to load them entirely into memory for processing. As such, they need to be processed either as patches or sub-volumes, making it difficult for the segmentation models to capture spatial relati... | The majority of the methods discussed in Section 5 have attempted to handle the class imbalance issue in the input images, i.e., small foreground versus large background, by providing weights/penalty terms in the loss function. Other approaches consist of first identifying the object of interest, cropping around this o... | We group the semantic image segmentation literature into six different categories based on the nature of their contributions: architectural improvements, optimization-function-based improvements, data-synthesis-based improvements, weakly supervised models, sequenced models, and multi-task models. Figure 1 indicates th... | For image segmentation, sequenced models can be used to segment temporal data such as videos. These models have also been applied to 3D medical datasets; however, the advantage of processing volumetric data using 3D convolutions versus processing the volume slice by slice using 2D sequenced models remains unclear. Ideally, seeing ... | D |
This means that the graph has a very large diameter (maximum shortest path), so that information propagates slowly through MP layers. Therefore, even after MP, nodes in very different parts of the graph will end up having similar (if not identical) features, which leads feature-based pooling methods to assign them to the... | these methods compute a coarsened version of the graph through differentiable functions, which are parametrized by weights that are optimized for the task at hand. Differently from topological pooling, these methods account for the node features, which change as the GNN is trained. | As a result, the graph collapses, becoming densely connected and losing its original structure. On the other hand, topological pooling methods can preserve the graph structure by operating on the whole adjacency matrix at once to compute the coarsened graphs, and they are not affected by uninformative node features. | Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d), the edges of the Laplacians at coarsening levels 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP. | The reason can once again be attributed to the low information content of the individual node features and to the sparsity of the graph signal (most node features are 0), which makes it difficult for the feature-based pooling methods to infer global properties of the graph by looking at local sub-structures. | B |
Experiments demonstrate that the accuracy of the imitating neural network matches the original accuracy of the random forest, or is even slightly better due to better generalization, while the network is significantly smaller.
To summarize, our contributions are as follows: | Neural random forest imitation enables an implicit transformation of random forests into neural networks. Usually, data samples are propagated through the individual decision trees and the split decisions are evaluated during inference.
We propose a method for generating input-target pairs by reversing this process and... | In this work, we presented a novel method for transforming random forests into neural networks.
Instead of a direct mapping, we introduced a process for generating data from random forests by analyzing the decision boundaries and guided routing of data samples to selected leaf nodes. | Our proposed approach, called Neural Random Forest Imitation (NRFI), implicitly transforms random forests into neural networks.
The main concept includes (1) generating training data from decision trees and random forests, (2) adding strategies for reducing conflicts and increasing the variety of the generated examples... | We propose a novel method for implicitly transforming random forests into neural networks by generating data from a random forest and training a random-forest-imitating neural network. Labeled data samples are created by evaluating the decision boundaries and guided routing to selected leaf nodes.
| D |
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient... | step with $\alpha\rightarrow\infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K=H$ episodes and hence equivalently induces... | Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt... | In a more practical setting, the agent sequentially explores the state space, and meanwhile exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... | C |
Starting from a pre-trained teacher DNN, they first train an autoencoder, which they call the paraphraser, to extract understandable factors from a selected intermediate layer of the teacher DNN. The student DNN is extended by a regressor, which they call the translator, whose purpose is to predict the paraphraser factors from the... | For $\tau>1$, the labels tend to become more uniform, which has been reported to facilitate training. Furthermore, they propose to utilize the ground truth labels by minimizing a weighted average of the traditional cross-entropy loss based on the ground truth labels $t$ and the knowledge distill... | Subsequently, the smaller student model is trained on data where the ground truth labels have been replaced by the soft labels obtained from the output of the teacher model, e.g., from the softmax output of a DNN. It has been shown that this substantially increases the accuracy of the student model compared to directly... | The student DNN is then trained to simultaneously minimize the cross-entropy loss on the ground truth labels and the difference between paraphraser and translator output. They employ the paraphraser and the translator after the last convolutional layer in their DNNs. | Starting from a pre-trained teacher DNN, they first train an autoencoder, which they call the paraphraser, to extract understandable factors from a selected intermediate layer of the teacher DNN. The student DNN is extended by a regressor, which they call the translator, whose purpose is to predict the paraphraser factors from the... | C |
$\simeq\gamma_{x_{i},p}\cdot\gamma_{p,x_{i+1}}$ ... | Note that whereas the proof of Lemma 1 in [54] takes place at the level of $L^{\infty}(X)$, the proof of Proposition 9.1 given above takes place at the level of simplicial complexes and simplicial maps. | The following corollary was already established by Gromov (who attributes it to Rips) in [47, Lemma 1.7.A]. The proof given by Gromov operates at the simplicial level. By invoking Proposition 8.1 we obtain an alternative proof which, instead of operating at the simplicial level, exploits the isometric embedding of $X$ ... | See Section 5 for the proof of Theorem 1. As we already mentioned earlier, our proof of Theorem 1 does not depend on Crawley-Boevey's theorem, since we circumvented verifying the pointwise finite-dimensionality of $\mathrm{PH}_{k}(\mathrm{VR}_{*}(X);\mathbb{F})$ ... | In [80, Theorem 8.10], Z. Virk provided a proof of the corollary below which takes place at the simplicial level. The proof we give below exploits the hyperconvexity properties of $L^{\infty}(X)$ and also our isomorphism theorem, Theorem... | D |
One way to obtain an indication of a projection's quality is to compute a single scalar value, equivalent to a final score. Examples are Normalized Stress [7], Trustworthiness and Continuity [24], and Distance Consistency (DSC) [25]. More recently, ClustMe [26] was proposed as a perception-based measure that ranks scat... | We present a Neighborhood Preservation plot (Figure 1(g)) that shows an overview of the preservation of neighborhoods of different sizes ($k$) in both the entire projection and the current selection, based on the Jaccard distance between sets:
| As an example, the set difference from Martins et al. [33] uses the Jaccard set-distance between the two sets of neighbors of a point in low- and high-dimensional space in order to compute a measure of Neighborhood Preservation. We have chosen to adopt it in our work, in contrast to others, because of its intuitive int... | The difference line plot (d), on the other hand, builds on the standard plot by highlighting the differences between the selection and the global average, shown as positive and negative values around the 0 value of the y-axis.
It provides a clearer overall picture of the difference in preservation among all the shown s... | we present t-viSNE, a tool designed to support the interactive exploration of t-SNE projections (an extension to our previous poster abstract [17]). In contrast to other, more general approaches, t-viSNE was designed with the specific problems related to the investigation of t-SNE projections in mind, bringing to light... | B |
When to establish a new division of a category into subcategories: a coarse split criterion for the taxonomy can imply categories of little utility for the subsequent analysis, since in that case the same category would group very different algorithms. On the other hand, a fine-grained taxonomy can produce very comple... | The number of subcategories into which to divide a category: the criterion followed in this regard must produce meaningful subcategories. In order to ensure a reduced number of subcategories, we consider that not all algorithms inside one category must be a member of one of its subcategories. In that way, we avoid in... | This category is further divided into subcategories as a function of the above decision, i.e. which solutions are considered to create the movement vector. It should be noted that some algorithms can be classified into more than one subcategory. For instance, a particle's update in the PSO solver is affected by the glo... |
Taking into account all the reviewed papers, we group the proposals therein in a hierarchy of categories. In the hierarchy, not all proposals of a category must fit in one of its subcategories. In our classification, categories lying at the same level are disjoint sets, which means that each proposed algorithm can be ... | When to establish a new division of a category into subcategories: a coarse split criterion for the taxonomy can imply categories of little utility for the subsequent analysis, since in that case, the same category would group very different algorithms. On the other hand, a fine-grained taxonomy can produce very comple... | A |
(1) By extending the generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec... | Classical clustering models work poorly on large-scale datasets. Instead, DEC and SpectralNet work better on large-scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph-type datasets, they fail on the general datasets, which is probably caused by the fact that the graph... | (3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while the other deep clustering models usually fail when the training set is not large enough. Besides, it is insensitive to different initializations of parameters and needs no pretraining. | As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data types, which is not possible with $k$-means. Therefore,... | To study the impact of different parts of the loss in Eq. (12), the performance with different $\lambda$ is reported in Figure 4. From it, we find that the second term (corresponding to problem (7)) plays an important role, especially on UMIST. If $\lambda$ is set to a large value, we may get the trivi... | B |
Since the Open Resolver and the Spoofer Projects are the only two infrastructures providing vantage points for measuring spoofing, their importance is immense, as they facilitated many research works analysing the spoofability of networks based on the datasets collected by these infrastructures. Nevertheless, the studi... | Network Traces. To overcome the dependency on vantage points for running the tests, researchers explored alternatives for inferring filtering of spoofed packets. A recent work used loops in traceroute to infer the ability to send packets from spoofed IP addresses (Lone et al., 2017). | (Lichtblau et al., 2017) developed a methodology to passively detect spoofed packets in traces recorded at a European IXP connecting 700 networks. The limitation of this approach is that it requires the cooperation of the IXP to perform the analysis over the traffic and applies only to networks connected to the IXP. Allow... | Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 20... | Vantage Points. Measurement of networks which do not perform egress filtering of packets with spoofed IP addresses was first presented by the Spoofer Project in 2005 (Beverly and Bauer, 2005). The idea behind the Spoofer Project is to craft packets with spoofed IP addresses and check receipt thereof on the vantage poin... | A |
It is common to try to avoid such changes in artificial agents, machines, and industrial processes. When something changes, the entire system is taken offline and modified to fit the new situation. This process is costly and disruptive; adaptation similar to that in nature might make such systems more reliable and long... | It is common to try to avoid such changes in artificial agents, machines, and industrial processes. When something changes, the entire system is taken offline and modified to fit the new situation. This process is costly and disruptive; adaptation similar to that in nature might make such systems more reliable and long... | While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape... | Experiments in this paper used the gas sensor drift array dataset [7]. The data consists of 10 sequential collection periods, called batches. Every batch contains between 161 and 3,600 samples, and each sample is represented by a 128-dimensional feature vector; 8 features each from 16 metal ox... |
Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a b... | D |
Our algorithm is a dynamic program, where we define a subproblem for each separator index $i$ and each set of endpoints $B\in\mathcal{B}_{i}$. The value of $A[i,B]$ is defined as f... | $A[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M\}$. ... | $A[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M\}$. ... | $A^{(1)}[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M\}$ | $A^{(2)}[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(2)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M\}$. | A |
While the question of which free groups and semigroups can be generated using automata is settled, there is a related natural question, which is still open: is the free product of two automaton/self-similar (semi)groups again an automaton/self-similar (semi)group? The free product of two groups or semigroups X=⟨P∣ℛ⟩… | However, there do not seem to be constructions for presenting arbitrary free products of self-similar groups in a self-similar way. For semigroups, on the other hand, such results do exist. In fact, the free product of two automaton semigroups S and T is always at least
very close to being an auto... |
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. While t... | While the question of which free groups and semigroups can be generated using automata is settled, there is a related natural question, which is still open: is the free product of two automaton/self-similar (semi)groups again an automaton/self-similar (semi)group? The free product of two groups or semigroups X=⟨P∣ℛ⟩…
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview) but many of them present the product as a subgroup of an automaton/self-similar group and, thus, lose the self-similarity property. An exception here is a line of research based on the Bel... | D
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea... | Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization Grand and Belinkov (2019); Ramakrishnan et al. (2018) or to re-scale the loss based on the difficulty o... |
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende... |
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea... |
Without additional regularization, existing VQA models such as the baseline model used in this work: UpDn Anderson et al. (2018), tend to rely on the linguistic priors: P(a|𝒬) to answer questions. Such models fail on VQA-CP, because the priors in ... | A
Table 2 shows the results for the data practice classification task comparing the performance between RoBERTa, PrivBERT and Polisis (Harkous et al., 2018), a CNN based classification model. We report reproduced results for Polisis since the original paper takes into account both the presence and absence of a label whil... |
The 1,600 labelled documents were randomly divided into 960 documents for training, 240 documents for validation and 400 documents for testing. Using 5-fold cross-validation, we tuned the hyperparameters for the models separately with the validation set and then used the held-out test set to report the test results. D... |
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application... | Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague word... |
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da... | B |
Following our design goals and derived analytical tasks, we implemented StackGenVis, an interactive VA system that allows users to build powerful stacking ensembles from scratch. Our system consists of six main interactive visualization panels (see StackGenVis: Alignment of Data, Algorithms, and Models for Stacking En... | The model exploration phase is perhaps the most important step on the way to build a good ensemble. It focuses on comparing and exploring different models both individually and in groups. Due to the page limits, we now assume that we selected the most performant models, removed the remaining from the stack, and reached... | (ii) in the next algorithm exploration phase, we compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models;
(iii) during the data wrangling phase, we manipulate the instances and features with two different views for each of them; (iv) model explo... | Predictions’ Space.
The goal of the predictions’ space visualization (StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(f)) is to show an overview of the performance of all models of the current stack for different instances. | and (v) we track the history of the previously stored stacking ensembles in StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(b) and compare their performances against the active stacking ensemble—the one not yet stored in the history—in StackGenVis: Alignme... | B |
We thus have 3 cases, depending on the value of the tuple
(p(v,[010]), p(v,[323]), p(v,[313]), p(v,[003]))… | p(v,[013]) = p(v,[313]) = p(v,[113]) = 1.
Similarly, when f=[112], | Then, by using the adjacency of (v,[013]) with each of
(v,[010]), (v,[323]), and (v,[112]), we can confirm that | $\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}$. | By using the pairwise adjacency of (v,[112]), (v,[003]), and
(v,[113]), we can confirm that in the 3 cases, these | D
For both BLEU and C Score, Jac Score is around 1 in each cluster, which means the persona descriptions are not similar. The dialogue quantity also seems similar among different clusters. So we can conclude that data quantity and task profile do not have a major impact on the fine-tuning process.
| To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML :
Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor... | Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2).
When the data quantity i... |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct the setting that tasks are similar to each other. For a fair comparison, each task on this setting also has 120 and 1200 utterances o... | A |
Figure 6: The subarray patterns on the cylinder and the corresponding expanded cylinder. (a) The t-UAV subarray partition pattern. (b) The r-UAV subarray partition pattern with conflict. (c) The r-UAV subarray partition pattern without conflict. (d) The t-UAV subarray partition pattern with beamwidth selection. | Multiuser-resultant Receiver Subarray Partition: As shown in Fig. 3, the r-UAV needs to activate multiple subarrays to serve multiple t-UAVs at the same time. Assuming that an element can not be contained in different subarrays, then the problem of activated CCA subarray partition rises at the r-UAV side for the fast m... | Without loss of generality, let us focus on the TE-aware codeword
selection for the k-th t-UAV at the r-UAV side. The beam gain is selected as the optimization objective, and the problem of beamwidth control is translated to choose the appropriate subarray size, which corresponds to the appropriate layer in ... |
In the considered UAV mmWave network, the r-UAV needs to activate multiple subarrays and select multiple combining vectors to serve multiple t-UAVs at the same time. Hence, the beam gain of the combining vector maximization problem for r-UAV with our proposed codebook can be rewritten as | The r-UAV needs to select multiple appropriate AWVs 𝒗(ms,k, ns,k, ik, jk, 𝒮k), k∈𝒦… | C
There are other logics, incomparable
in expressiveness with $\mathsf{FO}^{2}_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The | There are other logics, incomparable
in expressiveness with $\mathsf{FO}^{2}_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The | The paper [4] shows decidability for a logic with incomparable expressiveness: the quantification allows a more powerful
quantitative comparison, but must be guarded – restricting the counts only of sets of elements that are adjacent to a given element. | In addition, to make the main line of argument clearer, we consider only the finite graph case in the body of the paper,
which already implies decidability of the finite satisfiability of $\mathsf{FO}^{2}_{\textup{Pres}}$… | Related one-variable fragments in which we have only a
unary relational vocabulary and the main quantification is $\exists^{S}x\,\phi(x)$ are known to be decidable (see, e.g. [2]), and their decidability ... | B
We first introduce the assumptions for our analysis. In §4.1, we establish the global optimality and convergence of the PDE solution ρt in (3.4). In §4.2, we further invoke Proposition 3.1 to establish the global optimality and convergence of ... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | Assumption 4.1 can be ensured by normalizing all state-action pairs. Such an assumption is commonly used in the mean-field analysis of neural networks (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Araújo et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). We remark that our analysis straightforwardly generalize... | Although Assumption 6.1 is strong, we are not aware of any weaker regularity condition in the literature, even in the linear setting (Melo et al., 2008; Zou et al., 2019; Chen et al., 2019b) and the NTK regime (Cai et al., 2019). Let the initial distribution ν0… | Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che... | B
Compared to the baseline Zhang et al. (2020), Table 7 shows that: 1) our approach can lead to +3.02 and +3.38 BLEU improvements on average in the En→xx and xx→En directions respectively in the evaluation over 4 typologically different languages, and 2) using dept... | It is a common problem that increasing the depth does not always lead to better performance, whether with residual connections Li et al. (2022b) or other previous studies on deep Transformers Bapna et al. (2018); Wang et al. (2019); Li et al. (2022a), and the use of wider models is the usual method of choice for furthe... | Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de... | In our deep Transformer experiments, Table 6 shows that our depth-wise LSTM Transformer with fewer layers, parameters and computations can lead to competitive/better performance and faster decoding speed than vanilla Transformers with more layers but a similar BLEU score, and the depth-wise LSTM Transformer is in fact ...
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transform... | C |
$\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$
and $\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$… | $\uptau_{\subseteq_{i}}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$ | $\langle\tau_{\subseteq_{i}}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\rangle$ | $\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$
and $\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$… | topology $\langle\uptau_{\subseteq_{i}}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\rangle$ | A
To overcome the above limitations, previous methods exploit more guided features such as the semantic information and distorted lines [9, 10], or introduce the pixel-wise reconstruction loss [11, 12, 13]. However, the extra features and supervisions impose increased memory/computation cost. In this work, we would like... | (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed o... | After predicting the distortion labels of a distorted image, it is direct to use the distance metric loss such as ℒ1 loss or ℒ2 loss to learn the network paramete... | 2. The local-global associate ordinal distortion estimation network considers different scales of distortion features, jointly reasoning the local distortion context and global distortion context. Also, the devised distortion-aware perception layer boosts the feature extraction of different degrees of distortion.
| In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl... | D |
Table 3 shows the training time per epoch of SNGM with different batch sizes. When B=128, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with the size of 16, which can not fully utilize the computation power. Hence, compared to other results, SNGM r... | argued that SGD with a large batch size needs to increase the number of iterations. Further, authors in [32]
observed that gradients at different layers of deep neural networks vary widely in the norm and proposed the layer-wise adaptive rate scaling (LARS) method. A similar method that updates the model parameter in a... | Table 3 shows the training time per epoch of SNGM with different batch sizes. When B=128, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with the size of 16, which can not fully utilize the computation power. Hence, compared to other results, SNGM r...
A direct corollary is that the batch size is constrained by the smoothness constant L, i.e., B≤𝒪(1/L). Hence, we cannot increase the batch size casually in these SGD based methods. Otherwise, it may slow down the convergence rate, and ... | Please note that EXTRAP-SGD has two learning rates for tuning and needs to compute two mini-batch gradients in each iteration. EXTRAP-SGD requires more time than other methods to tune hyperparameters and train models.
Similarly, CLARS needs to compute extra mini-batch gradients to estimate the layer-wise learning rate ... | D |
Our main goal is to develop algorithms for the black-box setting. As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms ... | Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ... | An outbreak is an instance from 𝒟, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions.
To continue this example, there may be further constraints on FI… |
We remark that if we make an additional assumption that the stage-II cost is at most some polynomial value Δ, we can use standard SAA techniques without discarding scenarios; see Theorem 2.6 for full details. However, this assumption is stronger than is usually used in the literature for two-stage stocha... |
Unfortunately, standard SAA approaches [26, 7] do not directly apply to radius minimization problems. On a high level, the obstacle is that radius-minimization requires estimating the cost of each approximate solution; counter-intuitively, this may be harder than optimizing the cost (which is what is done in previous ... | D |
In addition to uncertainties in information exchange, different assumptions on the cost functions have been discussed.
In most of the existing works on distributed convex optimization, it is assumed that the subgradients are bounded if the local cost | However, a variety of random factors may co-exist in practical environments.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d... | Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d.
In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments... | Besides, the network graphs may change randomly with spatial and temporal dependency (i.e. Both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent.) rather than i.i.d. graph sequences as in [12]-[15],
and additive and... |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... | C |
Typically, the attributes in microdata can be divided into three categories: (1) Explicit-Identifier (EI, also known as Personally-Identifiable Information), such as name and social security number, which can uniquely or mostly identify the record owner; (2) Quasi-Identifier (QI), such as age, gender and zip code, whi... |
Although the generalization for k-anonymity provides enough protection for identities, it is vulnerable to the attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by mat... | Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ... | However, despite protecting against both identity disclosure and attribute disclosure, the information loss of generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, causing that the equivalence groups only preserv... | Generalization [8, 26] is one of the most widely used privacy-preserving techniques. It transforms the values on QI attributes into general forms, and the tuples with equally generalized values constitute an equivalence group. In this way, records in the same equivalence group are indistinguishable. k-Anonym... | D
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in the original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
Due to limited mask representation of HTC, we move on to SOLOv2, which utilizes a much larger mask to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (20... | HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in the original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an... | C
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$… |
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... |
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s... |
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| D |
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same, and both are much lower than those of MASTER, OPT-WLSVI, LSVI-UCB, and Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th... | In this section, we perform empirical experiments on synthetic datasets to illustrate the effectiveness of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart. We compare the cumulative rewards of the proposed algorithms with five baseline algorithms: Epsilon-Greedy (Watkins, 1989), Random-Exploration, LSVI-UCB (Jin et al., 2020... | We develop the LSVI-UCB-Restart algorithm and analyze the dynamic regret bound for both cases that local variations are known or unknown, assuming the total variations are known. We define local variations (Eq. (2)) as the change in the environment between two consecutive epochs instead of the total changes over the en... | We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202... | D |
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst... | Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst... | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... |
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures 1 and 2) which is statistically significant (r(9) = −0.81, p < .005). Trust is built on transparency and truthfulness, and t... |
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,... | C |
Drawing inspiration from the CBOW schema, we propose Decentralized Attention Network (DAN) to distribute the relational information of an entity exclusively over its neighbors.
DAN retains complete relational information and empowers the induction of embeddings for new entities. For example, if W3C is a new entity, its... | Moreover, DAN introduces a distinctive attention mechanism that employs the neighbors of the central entity to evaluate the neighbors themselves. This collective voting mechanism helps mitigate bias and contributes to improved performance, even on traditional tasks. It also distinguishes DAN from other existing inducti... | Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg... | Drawing inspiration from the CBOW schema, we propose Decentralized Attention Network (DAN) to distribute the relational information of an entity exclusively over its neighbors.
DAN retains complete relational information and empowers the induction of embeddings for new entities. For example, if W3C is a new entity, its... |
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attem... | A |
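Each preview row above pairs a truncated `context` excerpt with four candidate continuations (`A`–`D`) and a `label` naming the correct one. Below is a minimal sketch of working with rows of this shape via the Hugging Face `datasets` library; the repository id `user/continuation-choice` is a hypothetical placeholder (the real id is not shown in this preview), and the longest-candidate baseline is only there to illustrate the access pattern, not to serve as a serious model.

```python
# Minimal sketch, assuming the preview schema: context, A, B, C, D, label.
# NOTE: "user/continuation-choice" is a hypothetical repository id;
# substitute the actual dataset path on the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("user/continuation-choice", split="train")

correct = 0
for row in ds:
    candidates = {key: row[key] for key in ("A", "B", "C", "D")}
    # Trivial baseline: guess whichever candidate continuation is longest.
    guess = max(candidates, key=lambda key: len(candidates[key]))
    correct += int(guess == row["label"])

print(f"Longest-candidate baseline accuracy: {correct / len(ds):.3f}")
```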