Dataset Viewer
Auto-converted to Parquet
context: string, lengths 250–7.19k
A: string, lengths 250–4.62k
B: string, lengths 250–4.85k
C: string, lengths 250–4.12k
D: string, lengths 250–8.2k
label: string, 4 classes
$(a)_{0}\equiv 1;\quad (a)_{n}=a(a+1)(a+2)\cdots(a+n-1)=\Gamma(a+n)/\Gamma(a);\quad n\geq 0.$
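As a quick sanity check of the Pochhammer identity above, here is a minimal Python sketch (assuming SciPy is available) verifying $(a)_{n}=\Gamma(a+n)/\Gamma(a)$ numerically; the sample values a = 2.5, n = 6 are arbitrary.

```python
from math import prod, isclose
from scipy.special import gamma

def pochhammer(a: float, n: int) -> float:
    """Rising factorial (a)_n = a(a+1)...(a+n-1), with (a)_0 = 1."""
    return prod(a + k for k in range(n))

# Check against the Gamma-function form (a)_n = Gamma(a+n)/Gamma(a).
a, n = 2.5, 6
assert isclose(pochhammer(a, n), gamma(a + n) / gamma(a), rel_tol=1e-9)
```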
$x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)=\left[nx^{2}(n+D)-m(D-2+m)\right]R_{n}^{m}(x)+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).$
$x^{3}(x^{2}-1)^{2}\frac{d^{3}}{dx^{3}}R_{n}^{m}(x)=\{-n(3+D)(n+D)x^{4}+[(n+m)D^{2}+(n^{2}+m^{2}-n+3m)D-10m+5m^{2}-n^{2}]x^{2}-m(D+1)(D-2+m)\}R_{n}^{m}(x)+x\{[D^{2}+(3+n)D+n^{2}+2]x^{4}+[-2D^{2}-(2+n+m)D+6+2m-n^{2}-m^{2}]x^{2}+D^{2}+D(m-1)-2m+m^{2}\}\frac{d}{dx}R_{n}^{m}(x).$
$R_{n}^{m}(x)=(-1)^{(n-m)/2}\binom{\frac{D+m+n}{2}-1}{\frac{n-m}{2}}x^{m}\,F\!\left(\begin{array}{c}-(n-m)/2,\;(D+n+m)/2\\ m+D/2\end{array}\;\Big|\;x^{2}\right),$
$\frac{{R_{n}^{m}}''(x)}{{R_{n}^{m}}'(x)}=\frac{1}{x^{2}-1}\left[\left(n(n+D)-\frac{m(D-2+m)}{x^{2}}\right)\frac{R_{n}^{m}(x)}{{R_{n}^{m}}'(x)}+\frac{D-1-(D+1)x^{2}}{x}\right].$
C
$x=\begin{pmatrix}x_{0}&0\\0&I_{d-4}\end{pmatrix}$ for $d$ even or $x=I_{d}$ for $d$ odd.
Note that a small variation of these standard generators for SL(d,q) is used in Magma [14] as well as in algorithms to verify presentations of classical groups, see [12], where only the generator v is slightly different in the two scenarios when d...
Finally, we construct a second MSLP, described in Section 3.5, that writes a diagonal matrix h ∈ SL(d,q) as a word in the standard generators of SL(d,q) (when evaluated with these generators as input)...
The lower-unitriangular matrices u_1 and u_2 are returned as words in the Leedham-Green–O’Brien standard generators [11] for SL(d,q) define...
Our aim is to determine the length and memory quota for an MSLP for the Bruhat decomposition of an arbitrary matrix g ∈ SL(d,q) via the above method, with the matrices u_1, u_2...
A
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85...
The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis...
Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case, our method is similar to that of [MR3591945], with some differences. First, we consider that T...
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends only weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov...
C
We think Alg-A is better in almost every aspect. This is because it is essentially simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity O(n) than the others:
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on P’s vertices and (2) searching for the next candidate from a given one is much easier: the ratio of code length for this step is 1:7 between Alg-A and Alg-CM.
Alg-A computes at most n candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most 5n triangles (proved in [8]), and so does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly 4.66n candidate triangles.)
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases.
C
In the lower part of the pipeline, we extract features from tweets and combine them with the CreditScore to construct the feature vector in a time series structure called the Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. non-rumor news classification.
Early in an event, the related tweet volume is scanty and there is no clear propagation pattern yet. For the credibility model, we therefore leverage the signals derived from tweet contents. Related work often uses aggregated content [18, 20, 32], since individual tweets are often too short and contain slender contex...
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We trade this off by debunking at the single-tweet le...
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha...
CrowdWisdom: Similar to [18], the core idea is to leverage the public’s common sense for rumor detection: if more people deny or doubt the truth of an event, this event is more likely to be a rumor. For this purpose, [18] use an extensive list of bipolar sentiments with a set of combinational rules. In...
A
In a follow-up work, Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate, for tails for which $\ell'(u)$ is of the form $\exp(-u^{\nu})$...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a...
The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile continuing to optimize long after we have zero training ...
Perhaps most similar to our study is the line of work on understanding AdaBoost in terms of its implicit bias toward large $L_{1}$-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on th...
A
The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi...
The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi...
As observed in (madetecting; ma2015detect), rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in (ma2015detect). W...
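To make the DSTS idea concrete, here is a minimal sketch of one plausible reading of such a time-series feature representation: per-tweet feature vectors are bucketed into equal time intervals, and the interval means are concatenated with their first-order differences to capture temporal variability. The function name, interval count, and feature set are illustrative assumptions, not the papers' exact construction.

```python
import numpy as np

def dsts_features(timestamps, features, n_intervals=10):
    """Toy DSTS-style vector: bucket per-tweet feature vectors into equal
    time intervals, average each bucket, and concatenate the interval means
    with their first-order differences.
    timestamps: (n,) seconds since event start; features: (n, d)."""
    timestamps = np.asarray(timestamps, dtype=float)
    features = np.asarray(features, dtype=float)
    edges = np.linspace(0.0, timestamps.max() + 1e-9, n_intervals + 1)
    buckets = np.digitize(timestamps, edges[1:-1])  # interval index per tweet
    means = np.zeros((n_intervals, features.shape[1]))
    for i in range(n_intervals):
        sel = buckets == i
        if sel.any():
            means[i] = features[sel].mean(axis=0)
    deltas = np.diff(means, axis=0)  # temporal variation between intervals
    return np.concatenate([means.ravel(), deltas.ravel()])

# Example: 5 tweets with 3 features each over a 2-hour event window.
vec = dsts_features([60, 300, 1200, 3600, 7000], np.random.rand(5, 3))
```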
The effective cascaded model that engages both low- and high-level features for rumor classification was proposed in our other work (DBLP:journals/corr/abs-1709-04402). The model uses the time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to...
In the lower part of the pipeline, we extract features from tweets and combine them with the CreditScore to construct the feature vector in a time series structure called the Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. non-rumor news classification.
D
Results. The baseline and the best results of our 1st-stage event-type classification are shown in Table 3 (top). The accuracy for the basic majority vote is high for imbalanced classes, yet lower on weighted F1. Our learned model achie...
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather data from the studied times with regard to the previously mentioned event times. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming independent loss functions that do not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is to learn a linear...
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall...
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during, and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ...
A
$R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\},$
the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ...
Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many. TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018].
one uses $p(\theta_{t}|\mathcal{H}_{1:t})$ to compute the probability of an arm being optimal, i.e., $\pi(A|x_{t+1},\mathcal{H}_{1:t})=\mathbb{P}(A=a^{*}_{t+1}|x_{t+1},\theta_{t},$...
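To illustrate the probability-matching idea and the regret definition $R_{T}$ above, here is a minimal Beta-Bernoulli Thompson sampling sketch; the arm means and horizon are arbitrary, and this toy setting omits the contextual and neural-network variants discussed in the excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])   # illustrative Bernoulli arms
T, K = 5000, len(true_means)
alpha, beta = np.ones(K), np.ones(K)     # Beta(1, 1) prior per arm
regret = 0.0

for t in range(T):
    # Probability matching: play the arm whose posterior sample is largest.
    theta = rng.beta(alpha, beta)
    a = int(np.argmax(theta))
    reward = rng.binomial(1, true_means[a])
    alpha[a] += reward
    beta[a] += 1 - reward
    # Expected regret increment: E{Y_{t,a*} - Y_{t,A_t}}.
    regret += true_means.max() - true_means[a]

print(f"cumulative regret after {T} rounds: {regret:.1f}")
```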
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
B
The data collection study was conducted from the end of February to the beginning of April 2017 by Emperra and includes 10 patients who were given specially prepared smartphones. Measurements of carbohydrate consumption, blood glucose levels, and insulin intake were made with Emperra’s Esysta system. Measurements of physical ac...
Table 1 shows basic patient information. Half of the patients are female and ages range from 17 to 66, with a mean age of 41.8 years. Body weight, according to BMI, is normal for half of the patients, four are overweight and one is obese. The mean BMI value is 26.9. Only one of the patients suffers from diabetes type ...
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients. For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17), at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t...
Table 2 gives an overview of the number of different measurements that are available for each patient (for patient 9, no data is available). The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14.
Insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes occur earlier in the day. Further, patient 12 takes approximately 3 times the average insulin dose of the others in the morning.
A
Image-to-image learning problems require the preservation of spatial features throughout the whole processing stream. As a consequence, our network does not include any fully-connected layers and reduces the number of downsampling operations inherent to classification models. We adapted the popular VGG16 architecture S...
Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which...
To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single 3×3 convolutional operation that result...
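A minimal PyTorch sketch of the ablation described above: an ASPP-style module with five parallel convolutional branches versus a baseline that swaps in a single 3×3 convolution. The dilation rates and channel widths are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Five parallel branches over the same input; the dilation rates here
    are illustrative assumptions, not necessarily those of the paper."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1)]
            + [nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d)
               for d in (4, 8, 12, 16)]
        )
        self.project = nn.Conv2d(5 * out_ch, out_ch, 1)  # fuse branch outputs

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Ablation baseline: one plain 3x3 convolution with the same output width.
def make_context_module(in_ch: int, out_ch: int, use_aspp: bool) -> nn.Module:
    return ASPP(in_ch, out_ch) if use_aspp else nn.Conv2d(in_ch, out_ch, 3, padding=1)
```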
Image-to-image learning problems require the preservation of spatial features throughout the whole processing stream. As a consequence, our network does not include any fully-connected layers and reduces the number of downsampling operations inherent to classification models. We adapted the popular VGG16 architecture S...
To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architect...
A
$\mathtt{Einze}\overline{\mathtt{l}}\mathtt{e}\overline{\mathtt{l}}\mathtt{e}\overline{\mathtt{m}}\mathtt{ent}$ (the word Einzelelement with the marked occurrences overlined)
For the sake of convenience, let ℓ = 2k for some k ≥ 1. Let σ be any block-extending marking sequence for α. If σ marks y first, then we have 2k marked blocks, and if some x_i ...
In the following, we investigate another aspect of greedy strategies. Any symbol that is marked next in a marking sequence can have isolated occurrences (i.e., occurrences that are not adjacent to any marked block) and block-extending occurrences (i.e., occurrences with at least one adjacent marked symbol). Each isol...
The important property of the word α_e is that for every edge {x,y} of H (except e), it contains two distinct size-2 factors that are xy- or yx-...
For this example marking sequence, it is worth noting that marking the many occurrences of e joins several individual marked blocks into one marked block. This also intuitively explains the correspondence between the locality number and the maximum number of occurrences per symbol (in condensed words): if th...
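To make the marking-sequence notions concrete, here is a brute-force Python sketch under the standard definitions suggested by the excerpt: marking a symbol marks all its occurrences, the cost of a marking sequence is the maximum number of maximal marked blocks reached at any stage, and the locality number is the minimum cost over all marking sequences.

```python
from itertools import permutations

def max_marked_blocks(word: str, order: list[str]) -> int:
    """Mark all occurrences of each symbol in the given order and return the
    maximum number of maximal marked blocks seen at any stage."""
    marked, worst = set(), 0
    for sym in order:
        marked.add(sym)
        blocks, inside = 0, False
        for c in word:
            if c in marked and not inside:
                blocks, inside = blocks + 1, True
            elif c not in marked:
                inside = False
        worst = max(worst, blocks)
    return worst

def locality_number(word: str) -> int:
    symbols = sorted(set(word))
    return min(max_marked_blocks(word, list(p)) for p in permutations(symbols))

# Example: marking x first in "xyxyx" gives 3 blocks; marking y first gives 2.
assert locality_number("xyxyx") == 2
```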
D
Lee et al. [250] conclude that international cooperation is required for constructing a high-quality multimodal big dataset for stroke imaging. Another solution to better exploit big medical data in cardiology is to apply unsupervised learning methods, which do not require annotations.
According to the literature, RNNs are widely used on structured cardiology data because they are capable of finding optimal temporal features better than other deep/machine learning methods. On the other hand, applications in this area are relatively few, mainly because there is a small number of public data...
Regarding the problem of lack of interpretability, as indicated by Hinton [260], it is generally infeasible to interpret the nonlinear features of deep networks because their meaning depends on complex interactions with uninterpreted features from other layers.
Besides solving the data and interpretability problems, researchers in cardiology could utilize the already established deep learning architectures that have not been widely applied in cardiology such as capsule networks. Capsule networks[265] are deep neural networks that require less training data than CNNs and its l...
One reason that traditional machine learning has worked sufficiently well in this area in previous years is the use of handcrafted and carefully designed features by experts, such as statistical measures from the ECG beats and the RR interval [74]. Deep learning can improve results when the annotations are no...
B
We noticed two major issues with the above model. First, the weight of the KL divergence loss term is game-dependent, which is not practical if one wants to deal with a broad portfolio of Atari games. Second, this weight is usually a very small number in the range of $[10^{-3},10^{-5}]$...
Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a...
Figure 2: Architecture of the proposed stochastic model with discrete latent. The input to the model is four stacked frames (as well as the action selected by the agent) while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-...
A stochastic model can be used to deal with the limited horizon of past observed frames, as well as sprite occlusion and flickering, which results in higher-quality predictions. Inspired by Babaeizadeh et al. (2017a), we tried a variational autoencoder (Kingma & Welling, 2014) to model the stochasticity of the environment. ...
As visualized in Figure 2, the proposed stochastic model with discrete latent variables discretizes the latent values into bits (zeros and ones) while training an auxiliary LSTM-based recurrent network (Hochreiter & Schmidhuber, 1997) to predict these bits autoregressively. At inference time, the latent bits will be gen...
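One common recipe for training through such a bit-discretization step is a straight-through estimator. The sketch below is an assumption, not necessarily the paper's exact mechanism: it thresholds on the forward pass while letting gradients flow through a sigmoid, and omits the auxiliary LSTM predictor.

```python
import torch

def binarize_st(logits: torch.Tensor) -> torch.Tensor:
    """Discretize latents into {0,1} bits with a straight-through estimator:
    hard threshold on the forward pass, sigmoid gradient on the backward pass.
    (One common recipe; the paper's exact discretization may differ.)"""
    soft = torch.sigmoid(logits)
    hard = (soft > 0.5).float()
    return hard + soft - soft.detach()  # forward: hard bits, backward: d(soft)

logits = torch.randn(2, 8, requires_grad=True)
bits = binarize_st(logits)    # zeros and ones
bits.sum().backward()         # gradients flow through the sigmoid
```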
D
As shown in Table I, the one-layer CNN DenseNet201 achieved the best accuracy of 85.3%, with a training time of 70 seconds/epoch on average. Overall, the one-layer CNN S2I achieved the best accuracies for eleven out of fifteen ‘base models’.
However, more work needs to be done before fully replacing non-trainable S2Is, not only in the scope of achieving higher accuracy results but also in increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level...
For the purposes of this paper and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para...
For the spectrogram module, which is used for visualizing the change of the frequency of a non-stationary signal over time [18], we used a Tukey window with a shape parameter of 0.25, a segment length of 8 samples, an overlap between segments of 4 samples, and a fast Fourier transform of 64 sampl...
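These parameters map directly onto scipy.signal.spectrogram; a minimal sketch follows, where the sampling rate fs and the toy input signal are assumptions (the excerpt does not state them).

```python
import numpy as np
from scipy.signal import spectrogram

# 1-second toy signal; fs is an assumption (the excerpt does not state it).
fs = 178
x = np.sin(2 * np.pi * 7 * np.arange(fs) / fs)

# Parameters taken from the text: Tukey window (shape 0.25), segment length
# of 8 samples, overlap of 4 samples, 64-point FFT.
f, t, Sxx = spectrogram(x, fs=fs, window=("tukey", 0.25),
                        nperseg=8, noverlap=4, nfft=64)
print(Sxx.shape)  # (nfft // 2 + 1, n_segments)
```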
The spectrogram S2I results are contrary to the expectation that the interpretable time-frequency representation would help in finding good features for classification. We hypothesize that the spectrogram S2I was hindered by its lack of trainable parameters.
D
The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To ensure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial, along with their first and second derivatives. The trajectory design took into account six constra...
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ...
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas...
The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful desi...
The whole-body climbing gait involves utilizing the entire body movement of the robot, swaying forwards and backwards to enlarge the stability margins before initiating gradual leg movement to overcome a step. This technique optimizes stability during the climbing process. To complement this, the rear-body climbing ga...
D
Mtf2 is called Move-To-Front-Even (Mtfe), and if all bits are 1 at the beginning, Mtf2 is called Move-To-Front-Odd (Mtfo). Both Mtfe and Mtfo have a competitive ratio of 5/2 [11]. In [11] it is shown that, for any request sequence, at least one of Timestamp, Mtfo, and Mtfe has a competitive...
For a given request sequence, the best option among the three algorithms can be indicated with two bits of advice, giving a 5/3-competitive algorithm. However, if the advice is untrusted, the competitive ratio can be as bad as 5/2.
Mtf2 is called Move-To-Front-Even (Mtfe), and if all bits are 1 at the beginning, Mtf2 is called Move-To-Front-Odd (Mtfo). Both Mtfe and Mtfo have a competitive ratio of 5/2 [11]. In [11] it is shown that, for any request sequence, at least one of Timestamp, Mtfo, and Mtfe has a competitive...
If the advice indicates Timestamp as the best algorithm, the algorithm trusts it, and the competitive ratio will be at most 2 [1]. If the advice indicates Mtfe or Mtfo as the best algorithm, the Tog algorithm uses the phasing scheme described above by alternating between the indicated algorithm and Move-To-Front, and b...
If the advice indicates Timestamp as the best algorithm among Mtfe, Mtfo, and Timestamp, the algorithm uses Timestamp to serve the entire sequence, and since the advice is right, the competitive ratio will be at most 5/3 [11]. If the advice indicates Mtfe or Mtfo as the best algorithm, the Tog algorithm uses the phasin...
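A minimal Python sketch of one plausible reading of the Mtf2 family: each item carries a bit that is toggled on every request, and the item moves to the front on every second request to it, so the initial bit value (all 0s vs. all 1s) distinguishes Mtfe from Mtfo. Costs use the standard 1-based access model; the exact details may differ from [11].

```python
def mtf2_cost(requests, items, initial_bit):
    """List update with Mtf2: toggle the requested item's bit and move the
    item to the front on every second request to it. initial_bit=0 gives
    Mtfe (moves on even-numbered requests to an item); initial_bit=1 gives
    Mtfo (moves on odd-numbered requests). Cost = access position (1-based)."""
    lst = list(items)
    bit = {x: initial_bit for x in lst}
    cost = 0
    for r in requests:
        pos = lst.index(r)
        cost += pos + 1
        bit[r] ^= 1
        if bit[r] == 0:            # every second request: move to front
            lst.insert(0, lst.pop(pos))
    return cost

reqs = ["c", "c", "a", "c", "b", "c"]
print(mtf2_cost(reqs, "abc", 0), mtf2_cost(reqs, "abc", 1))  # Mtfe vs Mtfo
```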
A
On the other hand, graphs of accumulated confidence values over time (chunk-by-chunk or writing-by-writing) shown in Figures 6, 7 and 8 are intended to show how lexical evidence (learned from the training data and given by gv) is accumulated over time, for each class, and how it is used to decid...
However, EDD poses really challenging aspects to the “standard” machine learning field. As with any other ERD task, we can identify at least three of these key aspects: incremental classification of sequential data, support for early classification, and explainability (having the ability to explain its ration...)
At this point, it should be clear that any attempt to address ERD problems, in a realistic fashion, should take into account 3 key requirements: incremental classification, support for early classification, and explainability. Unfortunately, to the best of our knowledge, there is no text classifier able to support thes...
In this context, this work introduces a machine learning framework, based on a novel white-box text classifier, for developing intelligent systems to deal with early risk detection (ERD) problems. In order to evaluate and analyze our classifier’s performance, we will focus on a relevant ERD task: early depression detec...
In this article, we proposed SS3, a novel text classifier that can be used as a framework to build systems for early risk detection (ERD). The SS3’s design aims at dealing, in an integrated manner, with three key challenging aspects of ERD: incremental classification of sequential data, support for early classification...
D
Since $\mathcal{C}(\mathbf{e}_{t+\frac{1}{2},k})$ is sparse, $\mathbf{w}_{t+1}-\mathbf{w}_{t}$ ...
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mo...
The error feedback technique stores the compression error in an error residual on each worker and incorporates the residual into the next update. Error-feedback-based sparse communication methods have been widely adopted by recent communication compression methods and have achieved better performance than quantizatio...
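A minimal NumPy sketch of the generic error-feedback scheme described above, with top-k sparsification as the compressor; GMC itself additionally maintains a global momentum term, which is omitted here. The names and constants are illustrative.

```python
import numpy as np

def top_k_sparsify(g: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude coordinates of g, zero out the rest."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def worker_step(grad, residual, lr=0.1, k=2):
    """One worker step with error feedback: the part of the update dropped
    by the compressor is kept in a residual and re-injected next round."""
    update = lr * grad + residual
    compressed = top_k_sparsify(update, k)   # what gets communicated
    residual = update - compressed           # error kept for the next round
    return compressed, residual

residual = np.zeros(6)
msg, residual = worker_step(np.array([3., -1., .2, .05, -2., .4]), residual)
```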
Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-red...
There are some other ways to combine momentum and error feedback. For example, we can put the momentum term on the server. However, these ways lead to worse performance than the way adopted in this paper. More discussions can be found in Appendix A.
D
φ could be seen as an alternative formalization of Occam’s razor [38] to Solomonoff’s theory of inductive inference [39], but with a deterministic interpretation instead of a probabilistic one. The cost of the description of the data could be seen as proportional to the number of weights and the number o...
φ could be seen as an alternative formalization of Occam’s razor [38] to Solomonoff’s theory of inductive inference [39], but with a deterministic interpretation instead of a probabilistic one. The cost of the description of the data could be seen as proportional to the number of weights and the number o...
SANs combined with the φ metric compress the description of the data the way a minimum description length framework would, by encoding them into $\bm{w}^{(i)}$ and $\bm{\alpha}^{(i)}$...
The φ metric is also related to rate-distortion theory [40], in which the maximum distortion is defined according to human perception, which however inevitably introduces a bias. There is also a relation to the field of compressed sensing [41], in which the sparsity of the data is exploited, allowing...
It is interesting to note that in some cases SAN reconstructions, such as for the Extrema-Pool indices, performed even better than the original data. This suggests the overwhelming presence of redundant information residing in the raw pixels of the original data and further indicates that SANs extract the most rep...
C
The SPBLLA process frees UAVs from message exchange. Therefore, no energy or time is wasted between two iterations, which significantly improves learning efficiency. All UAVs alter strategies with a certain probability ω, which is determined by τ and m...
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in each iteration based on the current game state, and then another UAV ch...
(Regular Perturbed Markov Process) Denote P as the transition matrix of a Markov process with a finite state space S. This Markov process is called a regular perturbed Markov process with noise ε if the following conditions are met.
(Stochastically Stable Strategy) Denote $P_{\epsilon}$ as the transition probability of a regular perturbed Markov process on a state space S, and $\mu_{\epsilon}(s)$...
The SPBLLA process frees UAVs from message exchange. Therefore, no energy or time is wasted between two iterations, which significantly improves learning efficiency. All UAVs alter strategies with a certain probability ω, which is determined by τ and m...
B
$\langle\overline{X}\rangle^{e}=\frac{1}{3}\overline{\widehat{M}}*\overline{X}$
with $\widehat{r}=(r_{1}^{e},\,r_{2}^{e},\ldots,r_{N_{e}}^{e})^{T}=\langle\overline{r}\rangle^{e}$...
$\Rightarrow\sum_{i=1}^{N_{n}}\left(\frac{f_{P_{i}}(t)\,s_{i}}{3r_{i}}\right)+f_{I}(t)\left(h_{I}\,\ln(r_{out}/r_{in})\right)=0$
$=\left(7.6\times 10^{-33}\,Z^{3}\left(\overline{T}_{e}[\mathrm{eV}]-\overline{T}_{i}\ldots\right)\right)\ [\mathrm{W/m^{3}}]$ ...
$f_{form}(z,t)=\left(-e^{-\frac{t}{\tau_{LR}}}\ldots\int\frac{g_{form}(z)}{r}\,dr\,dz\right)$ ...
A
Let r be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12. Its abstract lattice $\mathcal{L}_{r}$ is represented to the right.
For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to A → B or B → A.
If no confusion is possible, the subscript R will be omitted, i.e., we will use ≤, ∧, ∨ instead of $\leq_{R}$, $\land_{R}$, $\lor_{R}$.
First, remark that both A → B and B → A are possible. Indeed, if we set g = ⟨b,a⟩ or g = ⟨a,1⟩, then $r\models_{g}$ A → ...
The tuples $t_{1}$, $t_{4}$ represent a counter-example to BC → A for $g_{1}$...
A
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein...
In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein...
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout, because in such an environment the optim...
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene...
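As an illustration of the kind of extension described, here is a minimal PyTorch sketch of a Q-network with Dropout layers; the layer sizes and dropout rate are assumptions, not the paper's exact architecture.

```python
import torch.nn as nn

# A minimal Q-network with Dropout, in the spirit of Dropout-DQN; layer
# sizes and the dropout rate p are illustrative assumptions.
def make_q_network(obs_dim: int, n_actions: int, p: float = 0.1) -> nn.Module:
    return nn.Sequential(
        nn.Linear(obs_dim, 128), nn.ReLU(), nn.Dropout(p),
        nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p),
        nn.Linear(128, n_actions),   # one Q-value per action
    )

q_net = make_q_network(obs_dim=4, n_actions=2)  # e.g., CartPole dimensions
q_net.train()  # Dropout active during training; call .eval() for greedy acting
```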
D
 Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving almost similar performance to full supervision for segmentation of cardiac images. Afshari et al. (2019) used a fully convol...
 Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving almost similar performance to full supervision for segmentation of cardiac images. Afshari et al. (2019) used a fully convol...
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, ...
Multi-task learning (Caruana, 1997) refers to a machine learning approach where multiple tasks are learned simultaneously, and the learning efficiency and the model performance on each of the tasks are improved because of the existing commonalities across the tasks. For visual recognition tasks, it has been shown that...
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic...
C
When compared to other methods for graph pooling, NDP performs significantly better than other techniques that pre-compute the topology of the coarsened graphs, while achieving performance comparable to state-of-the-art feature-based pooling methods.
Graph Neural Networks (GNNs) are machine learning models that learn abstract representations of graph-structured data to solve a large variety of inference tasks [1, 2, 3, 4, 5]. Differently from neural networks that process vectors, images, or sequences, the graphs processed by GNNs have an arbitrary topology.
We notice that the coarsened graphs are pre-computed before training the GNN. Therefore, the computational time of graph coarsening is much lower compared to training the GNN for several epochs, since each MP operation in the GNN has a cost $\mathcal{O}(N^{2})$...
The latter learn both the topology and the features of the coarsened graphs end-to-end via gradient descent, at the cost of a larger model complexity and higher training time. The efficiency of NDP brings a significant advantage when GNNs are deployed in real-world scenarios subject to computational constraints, like ...
Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d) the edges of the Laplacians at coarsening level 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP.
C
The analysis shows that random data samples and uniform sampling have a bias to generate data samples that are classified with high confidence. NRFI dynamic automatically balances the number of decision trees and achieves an evenly distributed data distribution, i.e., generates the most diverse data samples.
In the next step, the imitation learning performance of the sampling modes is evaluated. The results are shown in Table 3. Random data generation reaches a mean accuracy of 63.80% while NRFI uniform and NRFI dynamic achieve 87.46% and 88.14%,...
Table 3: Imitation learning performance (in accuracy [%]) of different data sampling modes on Soybean. NRFI achieves better results than random data generation. When optimizing the selection of the decision trees, the performance is improved due to more diverse sampling.
Probability distribution of the predicted confidences for different data generation settings on Soybean with 5 (top) and 50 samples per class (bottom). Generating data with different numbers of decision trees is visualized in the left column. Additionally, a comparison between random sampling (red), NRFI unifo...
NRFI uniform samples the number of decision trees for each data point uniformly, while NRFI dynamic optimizes it via the automatic confidence distribution (see Section 4.1.4). The confidence distributions for both sampling modes are visualized in the second column of Figure 5. Additionally, sampling random data po...
A
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
step with α → ∞ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy π* within K = H episodes and hence equivalently induces...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt...
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ...
D
Afterwards, the models perform best if the weights are scaled to 2 bits and the activation bit width is further increased to 4 bits. This supports the observation of the previous sections, showing that model accuracy is sensitive to activation quantization rather than weight quantization.
As can be seen, the various models in each regime (pruning structure or DNN architecture) show similar behavior for throughput. The worst performing regimes are group and kernel pruning as well as the combination of fixed grouping and channel pruning.
Section 5.1 explored the impact of several network quantization approaches and structured pruning on the prediction quality. In this section, we use the well-performing LQ-Net approach for quantization and PSP (for channel pruning) to measure the inference throughput of the quantized and pruned models separately on an ...
Liu et al. (2019b) have replicated several experiments of pruning approaches (see Section 3.2) and they observed that the typical workflow of training, pruning, and fine-tuning is often not necessary and only the discovered sparsity structure is important. In particular, they show for several pruning approaches that ra...
In this experiment, we select pruning structures that are in line with commonly used DNN libraries for convolutions: we use channel pruning to learn the number of input and output feature maps, kernel pruning to learn the size of the convolution kernel, and group pruning to learn heterogeneous group sizes for grouped c...
A
$2|\mathrm{FillRad}(M;\mathbb{F})-\mathrm{FillRad}(\mathbb{S}^{n})|\leq\mathrm{cost}(R_{\varepsilon})<\varepsilon.$
$\mathrm{FillRad}(M;\mathbb{F})\geq\frac{1}{\pi\sqrt{c}}\cdot\mathrm{FillRad}(\mathbb{S}^{n})$ by [64, Proofs of Theorem 1.1 and Proposition 1.6]. Therefore, $2|\mathrm{FillRad}(M;\mathbb{F})-\mathrm{Fil}$...
By Azumaya’s theorem [10], persistence barcodes, whenever they exist, are unique: any two persistence barcodes associated to a given $V_{*}$ must agree (up to reordering). The most important existence result for persistence barcodes is Crawley-Boevey’s theorem ...
The proof strategy for Propositions 9.8 and 9.9 is to invoke Wilhelm’s result [82, Main Theorem 2] and Lemma 9.5 above. However, if $\mathrm{FillRad}(M)$ were small, one would not be able to apply Wilhelm’s theorem. To avoid that, we will invoke a result due to Liu [64].
$\mathrm{FillRad}(\mathbb{S}^{n};\mathbb{F})=\mathrm{FillRad}(\mathbb{S}^{n})$...
C
One initial observation is that the overall Completion Time for both groups was remarkably similar. With the exception of Tasks 1 and 5, where t-viSNE users performed faster than GEP, in general the results have not shown any statistically significant difference. To answer RQ1, we detected no statistically significant...
On the other hand, t-viSNE obtained consistently higher scores for Tool Supportiveness, with a higher average in all the proposed tasks. The bulk of the distributions of the supportiveness scores from the two groups overlap little, mostly near outliers (the “N/A” option was chosen three times, all in the GEP group). Wh...
A quick visual inspection of the two tables already hints at t-viSNE having higher scores than GEP in all components, with all cells being green-colored (as opposed to GEP’s table, which contains many red-colored cells). Indeed, the smallest score for t-viSNE was 4.75, while GEP got many scores under 4 (or even unde...
Overall Accuracy   We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q...
Figure 9: Results of the comparative study: the top charts show completion time and tool supportiveness (as judged by participants) for all the tasks of the study, and the bottom row includes the histograms of the participants’ responses in all questions/tasks. The completion times between the two groups were very sim...
A
A literature review and critical analysis of metaheuristics recently developed - 2024 [38]: This review focuses on algorithms with titles containing words such as ‘new’, ‘hybrid’, or ‘improved’, in response to the growing trend of nature-based approaches. After analyzing over 100 algorithms, it was found that a signif...
Theoretical studies: by means of the fitness landscape, for a better understanding of how a search algorithm can perform on a family of problem instances, and of multidisciplinary theories, to study the role of diversity and the balance of local and global search required to undertake a certain problem efficiently...
Metaheuristic optimization algorithms: an overview - 2024 [39]: This paper focuses on studying the main components and concepts of optimization. More specifically, the overview provides the advantages (agnostic to the problem being solved, gradient independence, global search capability, the capability of dealing with...
Metaheuristics in a nutshell - 2023 [36]: The purpose of this overview is to define the main terms related to the concept of metaheuristic. The text does not provide an extensive taxonomy, but it clearly distinguishes between two classes of metaheuristics: trajectory and population algorithms. It describes the most we...
50 years of metaheuristics - 2024 [40]: This overview traces the last 50 years of the field, starting from the roots of the area to the latest proposals to hybridize metaheuristics with machine learning. The revision encompasses constructive (GRASP and ACO), local search (iterated local search, Tabu search, variable n...
B
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update ...
However, the existing methods are limited to graph type data while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods. In this paper, we propo...
In recent years, GCNs have been studied extensively to extend neural networks to graph-type data. How to design a graph convolution operator is a key issue and has attracted a mass of attention. Most of them can be classified into 2 categories: spectral methods [24] and spatial methods [25].
(1) By extending generative graph models to general-type data, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec...
As well as the well-known k-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information so that they are available for non-Euclidean type data, which is not provided by k-means. Therefore,...
A
False negatives in our measurements mean that a network that does not perform filtering of spoofed packets is not marked as such. We next list the causes of false negatives for each of our three techniques. Essentially, the false negatives cannot be resolved, and therefore our measurement results of networks that enforc...
Each IP packet contains an IP Identifier (IPID) field, which allows the recipient to identify fragments of the same original IP packet. The IPID field is 16 bits in IPv4, and for each packet the Operating System (OS) at the sender assigns a new IPID value. There are different IPID assignment algorithms which can be ca...
IPID technique. Load balancing can introduce a challenge in identifying whether a given network enforces ingress filtering. As a result of load balancing our packets will be split between multiple instances of the server, hence resulting in low IPID counter values. There are different approaches for distributing the l...
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the...
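A simplified sketch of the volume-inference step: given IPID values sampled from a server with a single globally incremental 16-bit counter, consecutive differences (mod 2^16) estimate the packets sent between probes. This toy version ignores the load-balancing caveat discussed above, and the numbers are invented for illustration.

```python
import numpy as np

def traffic_from_ipid(samples, times):
    """Estimate a server's packets-per-second from IPID probes, assuming a
    single globally incremental 16-bit IPID counter: consecutive differences
    (mod 2**16) approximate the packets sent between our probes."""
    samples = np.asarray(samples, dtype=np.int64)
    diffs = np.diff(samples) % (1 << 16)   # handle counter wrap-around
    return diffs.sum() / (times[-1] - times[0])

# Probes taken 1 s apart; jumps of ~500 per probe suggest ~500 pkt/s.
print(traffic_from_ipid([100, 612, 1100, 1634], [0.0, 1.0, 2.0, 3.0]))
```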
There is a strong correlation between the AS size and the enforcement of spoofing filtering; see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th...
B
The context+skill approach used in this paper builds on the substantial body of work on training models in nonstationary environments (Section II-1). It also draws inspiration from models of context-aware odor processing in biological systems (Section II-2).
This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ...
Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a b...
Machine learning applications frequently deal with data-generating processes that change over time. Applications in such nonstationary environments include power use forecasting, recommendation systems, and environmental sensors [9]. Semisupervised learning, which has received a lot of attention in the sensor communit...
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design...
C
$A^{(1)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A^{(1)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A^{(2)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(2)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
C
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then $S\star T$ is an automaton semigroup. For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups:
During the research and writing of this paper, the second author was affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment...
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem...
B
$P(a\,|\,\mathcal{Q},\mathcal{I})=f_{VQA}(v,\mathcal{Q}).$
Table A4 shows VQA accuracy for each answer type on VQACPv2’s test set. HINT/SCR and our regularizer show large gains in ‘Yes/No’ questions. We hypothesize that these methods help the model forget linguistic priors, which improves test accuracy on such questions. In the train set of VQACPv2, the answer ‘no’ is more frequent than t...
Without additional regularization, existing VQA models such as the baseline model used in this work: UpDn Anderson et al. (2018), tend to rely on the linguistic priors $P(a\,|\,\mathcal{Q})$ to answer questions. Such models fail on VQA-CP, because the priors in ...
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea...
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende...
B
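To make the failure mode described above concrete: a model that leans on linguistic priors behaves like the following deliberately image-blind baseline (a hypothetical sketch; the data format and prefix heuristic are illustrative assumptions, not a model from the cited papers).

```python
from collections import Counter, defaultdict

def prior_baseline(train_pairs):
    """Majority-answer-per-question-prefix baseline: a pure linguistic
    prior P(a | Q) that never looks at the image. `train_pairs` is an
    iterable of (question, answer) strings (hypothetical format)."""
    by_prefix = defaultdict(Counter)
    for question, answer in train_pairs:
        # Crude proxy for the question type, e.g. "what color is".
        prefix = " ".join(question.lower().split()[:3])
        by_prefix[prefix][answer] += 1
    return {p: c.most_common(1)[0][0] for p, c in by_prefix.items()}
```

Such a baseline can look strong on an i.i.d. split but collapses on VQA-CP, where the train and test answer distributions are deliberately different.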
A privacy policy is a legal document that an organisation uses to disclose how it collects, analyzes, shares, and protects users’ personal information. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users, and laws such as General Data Protection Regul...
Natural language processing (NLP) provides an opportunity to automate the extraction of salient details from privacy policies, thereby reducing human effort and enabling the creation of tools for internet users to understand and control their online privacy. Existing research has achieved some success using expert ann...
Other corpora similar to the OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague word...
Prior collections of privacy policy corpora have led to progress in privacy research. Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categorie...
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy-related questions based on public information about an application...
A
Wang et al. [62] experimented with alternative visualization designs for selecting parameters, and they found that a parallel coordinates plot is a solid representation for this context as it is concise and also not rejected by the users. A drawback is its complexity compared to multiple simpler scatterplots. Fig...
Figure 2: The exploration process of ML algorithms. View (a.1) summarizes the performance of all available algorithms, and (a.2) the per-class performance based on precision, recall, and f1-score for each algorithm. (b) presents a selection of parameters for KNN in order to boost the per-class performance shown in (c....
We normalize the importance from 0 to 1 and use a two-hue color encoding from dark red to dark green to highlight the least to the most important features for our current stored stack, see Figure 4(b). The panel in Figure 4(c) uses a table heatmap view where data features are mapped to the y-axis (13 attributes, only 7...
Predictions’ Space. The goal of the predictions’ space visualization (view (f) of the StackGenVis overview figure) is to show an overview of the performance of all models of the current stack for different instances.
Such iterative exploration proceeds for every algorithm until we are satisfied, see Figure 2(e) where six algorithms are selected for our initial stack S1. Figure 2(f) shows a radar chart providing an overview of the entire space of available algorithms (yellow contour) against the current s...
D
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]),\,p(v,[323]),\,p(v,[313]),\,p(v,[003]))$.
$\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}.$
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
$p(v,[013])=p(v,[313])=p(v,[113])=1$. Similarly, when $f=[112]$,
B
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initializations at different MAML training epochs and evaluate them directly on the met...
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy: RQ1. Since the parameter initialization lear...
The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation. Although in the early training stage the performance improves, benefiting from the pre-trained general language model, if the languag...
A
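For readers tracing the MAML discussion above, here is a minimal first-order MAML (FOMAML) sketch on synthetic linear-regression tasks; the task family, loss, and hyperparameters are illustrative assumptions, not the experimental setup of these rows.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_mse(w, X, y):
    # Gradient of the mean-squared error 0.5*||Xw - y||^2 / n w.r.t. w.
    return X.T @ (X @ w - y) / len(y)

def fomaml(w, sample_task, meta_steps=1000, inner_lr=0.1, meta_lr=0.01):
    """First-order MAML: adapt a copy of the initialization on the
    support set, then update the initialization with the gradient
    taken at the adapted parameters on the query set."""
    for _ in range(meta_steps):
        Xs, ys, Xq, yq = sample_task()
        w_task = w - inner_lr * grad_mse(w, Xs, ys)   # inner-loop adaptation
        w = w - meta_lr * grad_mse(w_task, Xq, yq)    # meta update
    return w

def sample_task():
    # Hypothetical task family: y = a*x with a random slope per task.
    a = rng.normal()
    X = rng.normal(size=(10, 1))
    y = a * X[:, 0]
    return X[:5], y[:5], X[5:], y[5:]

w0 = fomaml(np.zeros(1), sample_task)  # the meta-learned initialization
```

Probing `w0` at different meta-training epochs, with and without inner-loop adaptation, is exactly the kind of measurement RQ1 above asks for.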
$\bigl[\ \ldots,\ e^{j\frac{2\pi}{\lambda_{\mathrm{c}}}\left(\frac{(M-1)d_{\mathrm{cyl}}}{2}\cos\alpha\sin\beta\right)}\bigr]^{T},$
The CCA-codebook-based SPAS algorithm was proposed in the previous section to solve the joint CCA subarray partition and AWV selection problem. In this section, the TE-aware beam tracking problem is addressed based on the CCA-codebook-based SPAS algorithm. Tracking the AOAs and AODs is essential for beam tracking, which...
$\mathcal{F}$ and $\mathcal{W}$ are the sets of all analog beamforming vectors and combining vectors satisfying the hardware constraints, respectively. In fact, solving the above problem (13) requires a new codebook design and codeword selection/processing strategy. Noting the interdependent...
A CCA-enabled UAV mmWave network is considered in this paper. Here, we first establish the DRE-covered CCA model in Section II-A. Then the system setup of the considered UAV mmWave network is described in Section II-B. Finally, the beam tracking problem for the CCA-enabled UAV mmWave network is modeled in Section II-C.
The rest of this paper is organized as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Sectio...
D
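For orientation, the codeword entry reconstructed above is the conformal-array analogue of the standard uniform-linear-array steering vector; under the usual narrowband model with $M$ elements, spacing $d$, and carrier wavelength $\lambda_{\mathrm{c}}$, the ULA counterpart reads

\[
\mathbf{a}(\theta)=\Bigl[\,1,\ e^{j\frac{2\pi}{\lambda_{\mathrm{c}}}d\sin\theta},\ \ldots,\ e^{j\frac{2\pi}{\lambda_{\mathrm{c}}}(M-1)d\sin\theta}\,\Bigr]^{T},
\]

and the DRE-covered CCA codeword replaces the single angle $\theta$ with the azimuth–elevation pair $(\alpha,\beta)$ and the cylindrical element positions.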
The sentences $\textsf{PRES}_{\phi}^{\infty}$ and $\textsf{PRES}_{\phi}$ are as required by Theorem 3.7.
a Type-Behavior Partitioned Graph Vector associated to a graph representation $G_{\mathcal{A}}$ for a model $\mathcal{A}$ of $\phi$. The sentence $\textsf{PRES}_{\phi}$ ...
We can then consider the vector of subgraphs $G_{\mathcal{A},\pi}$ and $G_{\mathcal{A},\pi,\pi^{\prime}}$ ...
Note that we assume that the number of behavior functions of column $j$ in $A$ is the same as the number of behavior functions of column $j^{\prime}$ in $B$ for every $j\in[m]$ and ever...
Note that in a Type-Behavior Partitioned Graph Vector, information about 2-types is coded in both the edge relation and in the partition, since the partition is defined via behavior functions. Thus there are additional dependencies on sizes for a Type-Behavior Partitioned Graph Vector of a model of $\phi$...
D
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe...
The key to our analysis is a mean-field perspective, which allows us to associate the evolution of a finite-dimensional parameter with its limiting counterpart over an infinite-dimensional Wasserstein space (Villani, 2003, 2008; Ambrosio et al., 2008; Ambrosio and Gigli, 2013). Specifically, by exploiting the permutati...
at the mean-field limit with $\epsilon\rightarrow 0^{+}$ and $m\rightarrow\infty$. Such a correspondence allows us to use the PDE solution $\rho_{t}$ in (3....
The proof of Proposition 3.1 is based on the propagation of chaos (Sznitman, 1991; Mei et al., 2018, 2019). In contrast to Mei et al. (2018, 2019), the PDE in (3.4) cannot be cast as a gradient flow, since there does not exist a corresponding energy functional. Thus, their analysis is not directly applicable to our se...
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
A
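Schematically, the parameter-distribution dynamics referred to in these rows take the form of a continuity equation (the exact drift for TD is defined in the paper, so read this as a sketch):

\[
\partial_{t}\rho_{t}\;=\;-\,\nabla_{\theta}\cdot\bigl(\rho_{t}\,g(\theta;\rho_{t})\bigr),
\]

where $\rho_{t}$ is the distribution of a single neuron's parameters at time $t$. When $g$ is the gradient field of an energy functional this is a Wasserstein gradient flow; as the rows above note, for TD no such functional exists, which is why the gradient-flow analysis of Mei et al. (2018, 2019) does not apply directly.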
The encoder layer with the depth-wise LSTM unit, as shown in Figure 2, first performs the self-attention computation, then the depth-wise LSTM unit takes the self-attention results and the output and the cell state of the previous layer to compute the output and the cell state of the current layer.
Another way to take care of the outputs of these two sub-layers in the decoder layer is to replace their residual connections with two depth-wise LSTM sub-layers, as shown in Figure 3 (b). This leads to better performance (as shown in Table 4), but at the cost of more parameters and decoder depth in terms of sub-laye...
Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t...
We also study the merging operations (concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers) to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4.
Different from encoder layers, decoder layers involve two multi-head attention sub-layers: a masked self-attention sub-layer to attend the decoding history and a cross-attention sub-layer to attend information from the source side. Given that the depth-wise LSTM unit only takes one input, we introduce a merging layer ...
D
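A minimal PyTorch-style sketch of the decoder-layer variant described in these rows; the module names, the linear merge of the two attention outputs, and the flattening trick are assumptions for illustration (masks, dropout, and layer norm omitted), not the authors' code.

```python
import torch
import torch.nn as nn

class DepthwiseLSTMDecoderLayer(nn.Module):
    """Decoder layer whose residual connections are replaced by a
    depth-wise LSTM: the LSTM cell steps once per *layer*, carrying
    (h, c) upward from the layer below."""
    def __init__(self, d_model, nhead):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.merge = nn.Linear(2 * d_model, d_model)  # merging layer
        self.lstm = nn.LSTMCell(d_model, d_model)     # depth-wise LSTM unit

    def forward(self, x, memory, h_prev, c_prev):
        # x: (B, L, D) decoder states; h_prev, c_prev: states from the layer below.
        s, _ = self.self_attn(x, x, x, need_weights=False)
        r, _ = self.cross_attn(s, memory, memory, need_weights=False)
        merged = self.merge(torch.cat([s, r], dim=-1))
        B, L, D = merged.shape
        h, c = self.lstm(merged.reshape(B * L, D),
                         (h_prev.reshape(B * L, D), c_prev.reshape(B * L, D)))
        return h.reshape(B, L, D), c.reshape(B, L, D)
```

Treating every (batch, position) pair as an independent LSTM sequence over depth is what makes the unit "depth-wise": the recurrence runs across layers, not across time steps.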
For that, consider φ∈𝖤𝖥𝖮⁢[σ]𝜑𝖤𝖥𝖮delimited-[]σ\varphi\in\mathsf{EFO}[\upsigma]italic_φ ∈ sansserif_EFO [ roman_σ ] such that B′⊧φmodelssuperscript𝐵′𝜑B^{\prime}\models\varphiitalic_B start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ⊧ italic_φ, as φ∈𝖤𝖥𝖮⁢[σ]𝜑𝖤𝖥𝖮delimited-[]σ\varphi\in\mathsf{EFO}[\upsigma]italic...
φ∈𝖥𝖮⁢[σ]𝜑𝖥𝖮delimited-[]σ\varphi\in\mathsf{FO}[\upsigma]italic_φ ∈ sansserif_FO [ roman_σ ], if A⊧φmodels𝐴𝜑A\models\varphiitalic_A ⊧ italic_φ, then there exists a finite structure Afinsubscript𝐴finA_{\mathrm{fin}}italic_A start_POSTSUBSCRIPT roman_fin end_POSTSUBSCRIPT such that
there exists a finite structure A⊆iB′subscript𝑖𝐴superscript𝐵′A\subseteq_{i}B^{\prime}italic_A ⊆ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_B start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT such that A⊧φmodels𝐴𝜑A\models\varphiitalic_A ⊧ italic_φ. Because F𝐹Fitalic_F is downwards-closed,
\endkillcontentsIndeed, consider a structure A∈Fin⁡(σ)¯𝐴¯FinσA\in\overline{\operatorname{Fin}(\upsigma)}italic_A ∈ over¯ start_ARG roman_Fin ( roman_σ ) end_ARG and a sentence φ∈𝖥𝖮⁢[σ]𝜑𝖥𝖮delimited-[]σ\varphi\in\mathsf{FO}[\upsigma]italic_φ ∈ sansserif_FO [ roman_σ ] such that A⊧φmodels𝐴𝜑A\models\varphiitalic_A ...
finite structure A𝐴Aitalic_A, there exists a diagram sentence ψA𝖥superscriptsubscript𝜓𝐴𝖥\psi_{A}^{\mathsf{F}}italic_ψ start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT start_POSTSUPERSCRIPT sansserif_F end_POSTSUPERSCRIPT in 𝖥𝖥\mathsf{F}sansserif_F such that A≤B𝐴𝐵A\leq Bitalic_A ≤ italic_B if and only if B⊧ψA𝖥mo...
B
Qualitative Comparison: To qualitatively show the performance of different learning representations, we visualize the 3D distortion distribution maps (3D DDM) derived from the ground truth and these two schemes in Fig. 8, in which each pixel value of the distortion distribution map represents the distortion level. Sinc...
We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen...
Figure 12: Qualitative evaluations of the rectified distorted images on people (left) and challenging (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified re...
Figure 11: Qualitative evaluations of the rectified distorted images on indoor (left) and outdoor (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified resul...
Figure 13: Qualitative evaluations of the rectified distorted images on real-world scenes. For each evaluation, we show the distorted image and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left ...
C
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being $B/8$. If $B/8\geq 128$, we will use the gradient accumulation [28] with the batch size being 128. ...
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use warm-up and polynomial learning rate strategy.
The momentum coefficient is set as 0.9 and the weight decay is set as 0.001. The initial learning rate is selected from $\{0.001, 0.01, 0.1\}$ according to the performance on the validation set. We do not adopt any learning rate decay or warm-up strategies. The model is tra...
We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset (https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/). We set aside 20% of the samples as the test set and divide the rema...
D
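At its core, the method evaluated in these experiments applies momentum to a normalized stochastic gradient; a simplified sketch of such an update (the paper's exact step-size rule, weight decay, and distributed details are omitted):

```python
import numpy as np

def sngm(w, stoch_grad, lr=0.1, beta=0.9, steps=100):
    """Sketch of stochastic normalized gradient descent with momentum:
    each minibatch gradient is scaled to unit norm before it enters
    the momentum buffer (simplified relative to the paper)."""
    u = np.zeros_like(w)
    for _ in range(steps):
        g = stoch_grad(w)                                # minibatch gradient
        u = beta * u + g / (np.linalg.norm(g) + 1e-12)   # normalized momentum
        w = w - lr * u
    return w
```

Normalizing the gradient bounds the per-iteration displacement regardless of the gradient scale, which is the property that makes such updates attractive for the large-batch regimes compared against LARS above.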
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions. To continue this example, there may be further constraints on $F_{I}$...
There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality $|\mathcal{S}|$. Our polynomial-scenarios algorithms are carefully designed to make $|\mathcal{S}|$...
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty, and it is of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ...
The most general way to represent the scenario distribution $\mathcal{D}$ is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios $A$ according to $\mathcal{D}$. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the ...
Our main goal is to develop algorithms for the black-box setting. As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms ...
D
Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d. In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments...
such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), etc. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function...
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably appears in the recursive inequality of the conditi...
In addition to uncertainties in information exchange, different assumptions on the cost functions have been discussed. In the most of existing works on the distributed convex optimization, it is assumed that the subgradients are bounded if the local cost
D
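Concretely, the algorithm family analyzed in these rows combines a consensus step over the random digraph with a noisy local subgradient step; a generic recursion (notation assumed for illustration, not the paper's exact algorithm) is

\[
x_{i}(t+1)\;=\;\sum_{j=1}^{N}w_{ij}(t)\,x_{j}(t)\;-\;\alpha(t)\bigl(d_{i}(t)+\xi_{i}(t)\bigr),\qquad d_{i}(t)\in\partial f_{i}\bigl(x_{i}(t)\bigr),
\]

where $w_{ij}(t)$ are the weights of the random graph at time $t$, $\xi_{i}(t)$ is the (sub)gradient measurement noise, and $\alpha(t)$ is the step size; the linear growth condition mentioned above replaces the usual bounded-subgradient assumption on $d_{i}(t)$.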
In this work, we propose a novel technique called Mutual Cover (MuCo) to prevent an adversary from matching the combination of QI values while overcoming the above issues. The key idea of MuCo is to make similar tuples cover for each other by randomizing their QI values according to random output tables.
Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ...
For instance, suppose that we add another QI attribute, gender, as shown in Figure 4. The mutual cover strategy first divides the records into groups in which the records in the same group cover for each other by perturbing their QI values. Then, the mutual cover strategy calculates a random output table on each QI a...
Given a microdata table T𝑇Titalic_T, the mutual cover strategy partitions T𝑇Titalic_T into groups, calculates a random output table on each QI attribute for the records in every group, and generates random values according to probabilities in the random output tables.
Given a set of tuples, MuCo partitions the tuples into groups, calculates a random output table on each QI attribute inside each group, and generates random values to replace the original QI values according to the random output tables. The formalization is as follows.
A
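A toy sketch of the three MuCo steps as just described; grouping by sorting on the QI attributes and using within-group value frequencies as the random output table are simplifying assumptions, not the paper's exact partitioning or probability calculation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def mutual_cover(df, qi_attrs, group_size=4):
    """Toy MuCo: partition similar tuples into groups, build a random
    output table per QI attribute within each group, and replace each
    QI value with a draw from that table."""
    out = df.sort_values(qi_attrs).reset_index(drop=True)  # similar tuples adjacent
    for start in range(0, len(out), group_size):
        grp = out.iloc[start:start + group_size]
        for attr in qi_attrs:
            vals, counts = np.unique(grp[attr], return_counts=True)
            probs = counts / counts.sum()        # the random output table
            out.loc[grp.index, attr] = rng.choice(vals, size=len(grp), p=probs)
    return out
```

Because all replacements inside a group are drawn from the same table, similar tuples cover for each other while each group's marginal value distribution is preserved in expectation.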
Table 3: PointRend’s performance on the testing set (track B). “EnrichFeat” means enhancing the feature representation of the coarse mask head and point head by increasing the number of fully-connected layers or their hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat bring little improvement; we guess...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. The mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
B
(with the convention $0\log 0:=0$). The base of the $\log$ does not really matter here; for concreteness we take the $\log$ to base 2. Note that if $f$ has $L_{2}$ norm 1 then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$...
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma s...
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
A
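For reference, in the notation of these rows the Fourier–Entropy/Influence conjecture of Friedgut and Kalai asserts that there is a universal constant $C>0$ such that

\[
\sum_{A\subseteq[n]}|\hat{f}(A)|^{2}\log\frac{1}{|\hat{f}(A)|^{2}}\;\le\;C\sum_{A\subseteq[n]}|A|\,|\hat{f}(A)|^{2}
\]

for every Boolean $f\colon\{-1,1\}^{n}\to\{-1,1\}$: the left-hand side is the entropy of the spectral distribution $\{|\hat{f}(A)|^{2}\}$ and the right-hand side is $C$ times the total influence $I(f)$. The example discussed above shows that the analogous statement fails for modulus-one complex-valued functions.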
For any algorithm, the dynamic regret is at least $\Omega(B^{1/3}d^{5/6}HT^{2/3})$...
We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ...
Motivated by empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhov...
The proof idea is similar to that of Theorem 1. The only difference is that within each piecewise-stationary segment, we use the hard instance constructed by Zhou et al. (2021); Hu et al. (2022) for inhomogeneous linear MDPs. Optimizing the length of each piecewise-stationary segment $N$ and the variation magni...
The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and...
C
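In symbols, the benchmark described in these rows is (schematically, for $K$ episodes, with notation assumed for illustration)

\[
\text{D-Regret}(K)\;=\;\sum_{k=1}^{K}\Bigl(V_{k}^{*}(s_{k,1})-V_{k}^{\pi_{k}}(s_{k,1})\Bigr),
\]

where $V_{k}^{*}$ is the optimal value function of episode $k$ (so the comparator may change every episode), $\pi_{k}$ is the learner's policy, and the total nonstationarity is budgeted by the variation bound $B$ appearing in the lower bound above.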