Dataset columns: context (string, 250–4.14k chars) · A (string, 250–3.94k) · B (string, 250–5.14k) · C (string, 250–4.12k) · D (string, 250–4.03k) · label (4 classes).
$\cdots+\frac{(n-m)(n-m-2)(D+n+m)(D+n+m+2)}{8(D+2m)(D+2m+2)}\,x^{4}+\cdots\Big].$
$x^{3}(x^{2}-1)^{2}\frac{d^{3}}{dx^{3}}R_{n}^{m}(x)+\cdots+\big\{\big[\cdots-m^{2}\big]x^{2}+D^{2}+D(m-1)-2m+m^{2}\big\}\frac{d}{dx}R_{n}^{m}(x).$
$R_{n}^{m}(x)=(-1)^{(n-m)/2}x^{m}P_{(n-m)/2}^{(m+1-D/2,\,0)}(1-2x^{2})=\binom{n+1-D/2}{(n-m)/2}x^{m}G_{a}\!\left(2+m-D/2,\,2+m-D/2,\,x^{2}\right).$
$R_{n}^{m}(x)=(-1)^{(n-m)/2}\binom{(D+m+n)/2\;\cdots}{\cdots}\,\cdots\left(\begin{array}{c}\cdots\\ m+D/2\end{array}\,\middle|\,x^{2}\right),$
$x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).$
B
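For the classical case $D=2$ these radial polynomials reduce to the standard Zernike radial polynomials, and the Jacobi-polynomial form can be checked numerically. A minimal sketch (the name `zernike_radial` is chosen here, and only the $D=2$ specialization is exercised):

```python
import numpy as np
from scipy.special import eval_jacobi

def zernike_radial(n, m, x):
    """Classical (D = 2) Zernike radial polynomial via its Jacobi form,
    R_n^m(x) = (-1)^((n-m)/2) x^m P_{(n-m)/2}^{(m,0)}(1 - 2x^2),
    assuming n >= m >= 0 with n - m even."""
    k = (n - m) // 2
    return (-1) ** k * x ** m * eval_jacobi(k, m, 0, 1 - 2 * x ** 2)

x = np.linspace(0.0, 1.0, 11)
# Known closed forms: R_4^0(x) = 6x^4 - 6x^2 + 1 and R_2^2(x) = x^2
assert np.allclose(zernike_radial(4, 0, x), 6 * x**4 - 6 * x**2 + 1)
assert np.allclose(zernike_radial(2, 2, x), x**2)
```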
The lower-unitriangular matrices $u_1$ and $u_2$ are returned as words in the Leedham-Green–O’Brien standard generators [11] for $\mathrm{SL}(d,q)$ define…
The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in...
Therefore, we decided to base the procedures we present on a set of generators very close to the LGO standard generators. Note that the choice of the generating set has no impact on the results, as it is always possible to determine an MSLP which computes the LGO standard generators given an arbitrary generating set a…
There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and application...
Note that a small variation of these standard generators for $\mathrm{SL}(d,q)$ is used in Magma [14] as well as in algorithms to verify presentations of classical groups, see [12], where only the generator $v$ is slightly different in the two scenarios when $d$…
C
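As a toy illustration of generating sets for classical groups (not the LGO generating set discussed in the text), one can verify by brute-force closure that the two classical generators of $\mathrm{SL}(2,p)$ really generate the whole group:

```python
def sl2_order(p):
    """Size of the subgroup of SL(2,p) generated by the two classical
    generators s = [[0,1],[-1,0]] and t = [[1,1],[0,1]], computed by BFS
    closure (a toy check, not the LGO generators)."""
    def mul(a, b):
        return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % p
                           for j in range(2)) for i in range(2))
    s, t = ((0, 1), (p - 1, 0)), ((1, 1), (0, 1))
    seen = {((1, 0), (0, 1))}          # start from the identity
    frontier = list(seen)
    while frontier:
        nxt = []
        for g in frontier:
            for h in (s, t):
                gh = mul(g, h)
                if gh not in seen:
                    seen.add(gh)
                    nxt.append(gh)
        frontier = nxt
    return len(seen)

# |SL(2,p)| = p(p^2 - 1)
assert sl2_order(3) == 24 and sl2_order(5) == 120
```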
To show the existence and uniqueness of solutions for (21), we proceed by parts. The existence of a solution for the first equation follows from Lemma LABEL:l:lrmsystem. Solving the second equation is equivalent to (22), and such a system is well-posed due to the coercivity of $(\cdot,T\cdot)_{\partial\mathcal{T}}$…
Except for (ii), all steps above can be performed efficiently, as the matrices involved are sparse and either local or independent of $h$. Solving (25), on the other hand, involves computing the $h$-dependent, global operator $P$, leading to a dense matrix in (25). From now on, we concentrat…
The key to approximating (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That al…
It is essential for the method to perform well that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it is reasonable to solve them locally using patches of elements. We note that the ide…
Above, and in what follows, $c$ denotes an arbitrary constant that does not depend on $H$, $\mathscr{H}$, $h$, or $\mathcal{A}$, depending only on the shape regularity of the elements of $\mathcal{T}_{H}$…
A
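The exponential decay invoked above can be illustrated with a one-dimensional surrogate (a toy reaction-diffusion solve, not the paper's operator $P$): for $-u''+\kappa^2 u = w$ with a locally supported source $w$, the solution decays like $e^{-\kappa\,\mathrm{dist}(x,\operatorname{supp}w)}$, which is what justifies solving on local patches.

```python
import numpy as np

# Finite-difference matrix for -u'' + kappa^2 u on (0,1), Dirichlet BCs.
n, h, kappa = 400, 1.0 / 400, 50.0
main = np.full(n, 2.0 / h**2 + kappa**2)
off = np.full(n - 1, -1.0 / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

w = np.zeros(n)
w[n // 2] = 1.0 / h                      # localized source at the midpoint
u = np.linalg.solve(A, w)

# Far from the support, the solution is many orders of magnitude smaller.
assert abs(u[n - 10]) < 1e-6 * abs(u[n // 2])
```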
Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), as does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.)
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$'s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code length for this step is 1:7 between Alg-A and Alg-CM.
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its authors, as it contains complicated subroutines for handling many subcases.
B
Early in an event, the related tweet volume is scanty and there is no clear propagation pattern yet. For the credibility model we therefore leverage the signals derived from tweet contents. Related work often uses aggregated content [18, 20, 32], since individual tweets are often too short and contain slender contex…
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha…
For the evaluation, we developed two kinds of classification models: a traditional classifier with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 4.2…
Given a tweet, our task is to classify whether it is associated with news or a rumor. Most of the previous work [6, 11] on the tweet level only aims to measure trustworthiness based on human judgment (note that even if a tweet is trusted, it could still relate to a rumor). Our task is, to a point, a reverse engin…
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys…
C
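The kind of surface-level features extracted from a single tweet can be sketched as follows (a hypothetical handful of features, not the papers' actual list of 27):

```python
import re

def surface_features(tweet: str) -> dict:
    """A few illustrative surface-level features of a single tweet
    (hypothetical examples of the feature family described in the text)."""
    return {
        "length":       len(tweet),
        "num_words":    len(tweet.split()),
        "num_hashtags": len(re.findall(r"#\w+", tweet)),
        "num_mentions": len(re.findall(r"@\w+", tweet)),
        "num_urls":     len(re.findall(r"https?://\S+", tweet)),
        "num_question": tweet.count("?"),
        "has_exclaim":  "!" in tweet,
        "frac_upper":   sum(c.isupper() for c in tweet) / max(len(tweet), 1),
    }

f = surface_features("Is this REALLY true?? @user http://t.co/x #rumor")
assert f["num_hashtags"] == 1 and f["num_mentions"] == 1
assert f["num_urls"] == 1 and f["num_question"] == 2
```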
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\!\left(\sqrt{\frac{\log\log t}{\log t}}\right)$
where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen…
In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the $K$-class SVM:
B
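The phenomenon behind the bound above, namely that the norm of gradient-descent iterates on separable data diverges while their direction converges to the max-margin (SVM) direction, can be reproduced on a toy problem. A sketch with data and step size chosen here for symmetry, so the limiting direction is $(1,1)/\sqrt{2}$ by construction:

```python
import numpy as np
from scipy.special import expit

# Separable toy data; by symmetry the max-margin direction is (1,1)/sqrt(2).
X = np.array([[2., 1.], [1., 2.], [-2., -1.], [-1., -2.]])
y = np.array([1., 1., -1., -1.])

w = np.zeros(2)
dirs, norms = [], []
for t in range(1, 20001):
    coef = expit(-y * (X @ w))            # sigma(-y_i <x_i, w>)
    grad = -(X.T @ (y * coef)) / len(y)   # gradient of mean logistic loss
    w -= 0.5 * grad
    if t in (2000, 20000):
        norms.append(np.linalg.norm(w))
        dirs.append(w / np.linalg.norm(w))

# ||w(t)|| keeps growing (logarithmically), but the direction converges.
assert norms[1] > norms[0]
assert np.allclose(dirs[1], [1 / np.sqrt(2), 1 / np.sqrt(2)], atol=1e-3)
```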
Early in an event, the related tweet volume is scanty and there is no clear propagation pattern yet. For the credibility model we therefore leverage the signals derived from tweet contents. Related work often uses aggregated content (liu2015real, ; ma2015detect, ; zhao2015enquiring, ), since individual tweets are of…
For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a strong contrast between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even…
For this task, we developed two kinds of classification models: a traditional classifier with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 3.2). Fo…
Given a tweet, our task is to classify whether it is associated with news or a rumor. Most of the previous work (castillo2011information, ; gupta2014tweetcred, ) on the tweet level only aims to measure trustworthiness based on human judgment (note that even if a tweet is trusted, it could still relate to a rumor)…
the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, that could capture more hidden meaningful signals than only enquiries to debunk rumor…
C
$\mathsf{f}^{*}=\arg\min_{f}\sum_{\forall a}\mathcal{L}\!\left(\sum_{k}P(\mathcal{C}_{k}\mid a,t)\sum_{l=1}^{m}P(\mathcal{T}_{l}\mid a,t,\mathcal{C}_{k})\,\hat{y}_{a},\;y_{a}\right)$
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event, which is driven by a great variety of factors. We address the two major factors that are assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we…
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we use the studied time periods with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res…
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type…
Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming independent loss functions that do not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is learning a linear…
D
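The standard RankSVM reduction turns ranking into classification over within-query difference vectors. A minimal sketch of that pairwise transform (function name and data are illustrative):

```python
import numpy as np

def pairwise_transform(X, y, qid):
    """RankSVM-style pairwise transform: within each query, every pair of
    items with different relevance becomes a difference vector labelled by
    which item should rank higher."""
    Xp, yp = [], []
    for q in np.unique(qid):
        idx = np.where(qid == q)[0]
        for i in idx:
            for j in idx:
                if y[i] > y[j]:
                    Xp.append(X[i] - X[j]); yp.append(1.0)
                    Xp.append(X[j] - X[i]); yp.append(-1.0)
    return np.array(Xp), np.array(yp)

# Two queries, two documents each; feature 0 correlates with relevance.
X = np.array([[2., 0.], [1., 0.], [3., 1.], [0., 1.]])
y = np.array([1, 0, 1, 0])
qid = np.array([0, 0, 1, 1])
Xp, yp = pairwise_transform(X, y, qid)
assert Xp.shape == (4, 2)   # one ordered pair per query, both orientations
```

A linear classifier (e.g. an SVM) trained on `(Xp, yp)` then yields the RankSVM weight vector.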
$R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\},$
the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ...
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
one uses $p(\theta_{t}\mid\mathcal{H}_{1:t})$ to compute the probability of an arm being optimal, i.e., $\pi(A\mid x_{t+1},\mathcal{H}_{1:t})=\mathbb{P}(A=a^{*}_{t+1}\mid x_{t+1},\theta_{t},$…
Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many. TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018].
D
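A minimal Bernoulli Thompson sampling loop with Beta posteriors illustrates the probability-matching idea (a textbook sketch, not the specific models of the surveyed papers):

```python
import random

def thompson_bernoulli(true_p, rounds, seed=0):
    """Bernoulli Thompson sampling with Beta(1,1) priors: sample one value
    per arm from its posterior and play the arm whose sample is largest."""
    rng = random.Random(seed)
    wins = [1] * len(true_p)    # Beta alpha parameters
    loss = [1] * len(true_p)    # Beta beta parameters
    pulls = [0] * len(true_p)
    for _ in range(rounds):
        samples = [rng.betavariate(wins[a], loss[a]) for a in range(len(true_p))]
        a = samples.index(max(samples))   # arm currently sampled as optimal
        pulls[a] += 1
        if rng.random() < true_p[a]:
            wins[a] += 1
        else:
            loss[a] += 1
    return pulls

pulls = thompson_bernoulli([0.2, 0.8], rounds=2000)
assert pulls[1] > pulls[0]   # the better arm dominates over time
```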
Table 2 gives an overview of the number of different measurements that are available for each patient (for patient 9, no data is available). The study duration varies among the patients, ranging from 18 days for patient 8 to 33 days for patient 14.
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
Patient 17 has more rapid-insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i…
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients. For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t…
Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
D
Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone…
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba...
Our proposed encoder-decoder model clearly demonstrated competitive performance for visual saliency prediction on two datasets. The ASPP module incorporated multi-scale information and global context based on semantic feature representations, which significantly improved the results both qualitatively and quantita…
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer...
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met...
D
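One of the benchmark's metrics, KLD, compares the predicted saliency distribution with the ground-truth fixation density. A common formulation can be sketched as follows (the benchmark's exact implementation details may differ):

```python
import numpy as np

def kld(gt, pred, eps=1e-7):
    """KL divergence between a ground-truth fixation density and a
    predicted saliency map, both normalized to sum to 1."""
    gt = gt / gt.sum()
    pred = pred / pred.sum()
    return float(np.sum(gt * np.log(eps + gt / (pred + eps))))

uniform = np.ones((4, 4))
peaked = np.zeros((4, 4)); peaked[0, 0] = 1.0
assert abs(kld(uniform, uniform.copy())) < 1e-4   # identical maps -> ~0
assert kld(peaked, uniform) > kld(uniform, uniform.copy())  # mismatch is worse
```

Lower KLD is better, which is why the text reports it alongside the AUC-type scores where higher is better.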
Finally, we have to show that in this pd-marking scheme, the maximum number of active positions is bounded by $2k+1$. This is obviously true at step $p_{1}$. Now let $s$ with $1\le s\le|\alpha|-1$…
In the first phase of the marking scheme, i.e., the phase where we only set extending positions to active, the following different situations can arise whenever we set some position $j$ to active (see Figure 7 for an illustration)…
This completes the definition of the marking scheme. Figure 7 contains an example of how step $p_{s+1}$ is obtained from step $p_{s}$. In this example, we first set extending po…
We first prove $\operatorname{pw}(G_{\alpha})\le 2\operatorname{loc}(\alpha)$. Intuitively speaking, we will translate the stages of a marking sequence $\sigma$ for $\alpha$…
$j$ joins two blocks of size $1$: the number of active positions increases by $1$. This is due to the fact that by setting $j$ to active, we do not create any internal active position…
A
In [136] the authors used the Jaccard distance as the optimization objective function, integrating a residual learning strategy and introducing a batch normalization layer to train a u-net. It is shown in the paper that this configuration performed better than other, simpler u-nets in terms of Dice.
Tan et al.[135] parameterize all short axis slices and phases of the LV segmentation task in terms of the radial distances between the LV center-point and the endocardial and epicardial contours in polar space. Then, they train a CNN regression on STA11 to infer these parameters and test the generalizability of the met...
Isensee et al. [141] used an ensemble of a 2D and a 3D u-net for segmentation of the LV/RV cavity and the LV myocardium on each time instance of the cardiac cycle. Information was extracted from the segmented time-series in the form of features that reflect diagnostic clinical procedures for the purposes of the classificati…
The model was trained alternately on LV segmentation and volume estimation, placing fourth in the test set of DS16. Emad et al.[138] localize the LV using a CNN and a pyramid of scales analysis to take into account different sizes of the heart with the YUDB.
Luo et al. [133] adopted an LV atlas mapping method to achieve accurate localization using MRI data from DS16. Then, a three-layer CNN was trained for predicting the LV volume, achieving comparable results with the winners of the challenge in terms of root mean square of end-diastole and end-systole volumes.
C
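Dice and Jaccard, the overlap measures used throughout these segmentation papers, are related by Dice = 2J/(1+J); a quick sketch on boolean masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (IoU) between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

a = np.array([[1, 1, 0], [1, 0, 0]], bool)
b = np.array([[1, 0, 0], [1, 1, 0]], bool)
j, d = jaccard(a, b), dice(a, b)
assert np.isclose(d, 2 * j / (1 + j))   # Dice = 2J / (1 + J)
```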
This demonstrates that SimPLe excels in the low-data regime, but its advantage disappears with a bigger amount of data. Such behavior, with fast growth at the beginning of training but lower asymptotic performance, is commonly observed when comparing model-based and model-free methods (Wang et al., 2019). As observed…
Finally, we verified if a model obtained with SimPLe using 100K is a useful initialization for model-free PPO training. Based on the results depicted in Figure 5 (b), we can answer this question positively. Lower asymptotic performance is probably due to worse exploration. A policy pre-trained with SimPLe was…
The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good pol...
We focused our work on learning games with 100K interaction steps with the environment. In this section we present additional results for settings with 20K, 50K, 200K, 500K and 1M interactions; see Figure 5 (a). Our results are poor with 20K interactions. For 50K th…
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ...
A
One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz). Truong et al. [9] used Short...
For the CNN modules with one and two layers, $x_{i}$ is converted to an image using learnable parameters instead of some static procedure. The one-layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels).
Architectures of all $b_{d}$ remained the same, except for the number of output nodes of the last linear layer, which was set to five to correspond to the number of classes of $D$. An example of the respective outputs of some of the $m$…
Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification. Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke.
C
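Band-power features over the EEG bands listed above can be sketched with a plain FFT periodogram (the cited works use STFT windows, which add a time axis; this is a minimal single-window version on synthetic data):

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 20), "gamma": (20, 64)}

def band_powers(signal, fs):
    """Power per EEG frequency band from a single-window periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

fs = 128
t = np.arange(fs * 4) / fs            # 4 s of synthetic signal
x = np.sin(2 * np.pi * 10 * t)        # pure 10 Hz oscillation
p = band_powers(x, fs)
assert max(p, key=p.get) == "alpha"   # 10 Hz falls in the alpha band
```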
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal...
It is important to emphasize that the locomotion mode transitions are only meaningful when both rolling and walking modes are capable of handling a step negotiation. In the step negotiation simulations, it has been observed that the rolling locomotion cannot traverse steps with a height of more than three time…
The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we ...
During the step negotiation simulations, it was noticed that the rolling locomotion mode encountered constraints when attempting to cross steps with a height greater than thrice the track height (h being the track height as shown in Fig. 3). This limitation originates from the traction forces generated by the tracks. ...
Figure 12: The Cricket robot tackles a step of height 3h by initiating in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The transition process mirrors that of the 2h step negotiation shown in Fig. 11. Unlike tackling a 2h step, the robot achieves considerable i...
A
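The transition criterion described above, geometric feasibility first and then an energy comparison, can be distilled into a toy decision rule (an entirely hypothetical signature; the paper's actual thresholds come from its energy evaluations):

```python
def choose_mode(step_height, track_height, e_roll, e_walk):
    """Hypothetical transition rule distilled from the text: rolling can
    only negotiate steps up to ~3x the track height h; above that, or when
    walking is estimated cheaper, switch to walking. The energy estimates
    e_roll/e_walk are assumed inputs, not the paper's actual model."""
    if step_height > 3 * track_height:
        return "walking"             # rolling cannot traverse at all
    return "rolling" if e_roll <= e_walk else "walking"

assert choose_mode(step_height=4.0, track_height=1.0, e_roll=1.0, e_walk=9.0) == "walking"
assert choose_mode(step_height=2.0, track_height=1.0, e_roll=1.0, e_walk=9.0) == "rolling"
```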
For paid exchanges at the beginning of the phase, Tog incurs a cost that is less than $m^{2}$. Before serving the last request $\sigma_{\ell}$ of the phase, the access cost of Tog is less…
Similar arguments apply for an ignoring phase, with the exception that the threshold is $\beta\cdot m^{2}$ and there are no paid exchanges performed by Tog. So, we can observe the following.
The worst-case ratio between the costs of Tog and Mtf2 is maximized when the last phase is an ignoring phase. In this case, we have $k$ trusting phases and $k$ ignoring phases. The total cost of Mtf2 is at least $km^{3}+k(\beta m^{3}/2-m^{2})=km^{3}(1+\beta/2-1/m)$…
For a trusting phase, the cost of Tog is in the range $(m^{3},\,m^{3}(1+1/m+1/m^{2}))$…
In an ignoring phase, the cost of Tog for the phase is in the range $(\beta m^{3},\,\beta m^{3}(1+1/m^{2}))$…
A
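The total-cost expression for Mtf2 quoted above is a one-line algebraic identity, which can be sanity-checked numerically:

```python
# Numeric check of k*m^3 + k*(beta*m^3/2 - m^2) = k*m^3*(1 + beta/2 - 1/m)
for k in (1, 3):
    for m in (5, 50):
        for beta in (1.0, 2.5):
            lhs = k * m**3 + k * (beta * m**3 / 2 - m**2)
            rhs = k * m**3 * (1 + beta / 2 - 1 / m)
            assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```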
Finally, explainability/interpretability is another important requirement for EDD. As with any other critical application in healthcare, finance, or national security, this is a domain that would greatly benefit from models that not only make correct predictions but also facilitate understanding how those p…
Although interpretability and explanations have a long tradition in areas of AI like expert systems and argumentation, they have gained renewed interest in modern applications due to the complexity and obscure nature of popular machine learning methods based on deep learning.
Nonetheless, this manual process is very expensive and error-prone, since the KB of a real expert system includes thousands of rules. This, added to the rise of big data and cheaper GPU-powered computing hardware, is causing a major shift in the development of these intelligent systems, in which machine learning is incr…
In this context, this work introduces a machine learning framework, based on a novel white-box text classifier, for developing intelligent systems to deal with early risk detection (ERD) problems. In order to evaluate and analyze our classifier’s performance, we will focus on a relevant ERD task: early depression detec...
On the other hand, in the machine learning community the importance of having publicly available datasets to foster research on a particular topic – in this case, predicting depression based on language use – is well known. That was the reason why the main goal in [Losada & Crestani, 2016] was to provide, to the best…
A
$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\,\frac{1}{K}\sum_{k\in[K]}\mathcal{C}\big(\mathbf{e}_{t+\frac{1}{2},k}\big)$
DEF-A achieves its best performance when $\lambda=0.3$. In comparison, GMC+ outperforms DEF-A across different $\lambda$ values and shows a preference for a larger $\lambda$ (e.g., 0.5). In the following experiments, we set $\lambda$ as 0.3 for DEF-A and 0.5 for GMC+. $\lambda=$…
Since RBGS introduces a larger compressed error compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge when using RBGS as the sparsification compressor. To address this convergence issue,
Note that the convergence guarantee of DEF-A and its momentum variant for non-convex problems is lacking in (Xu and Huang, 2022). We provide the convergence analysis for GMC+, which can be seen as a global momentum variant of DEF-A. We eliminate the assumption of ring-allreduce compatibility from (Xu and Huang, 2022) a...
Due to the larger compressed error introduced by RBGS compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge. Xu and Huang (2022) propose DEF-A to solve the convergence problem by using detached error fee…
D
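The update rule quoted above, $\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{K}\sum_{k}\mathcal{C}(\mathbf{e}_{t+\frac12,k})$, is the generic error-feedback template. A minimal sketch with a top-$s$ compressor (illustrative only, not DEF-A or GMC+ themselves):

```python
import numpy as np

def top_s(v, s):
    """Keep the s largest-magnitude entries of v, zero elsewhere."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def ef_step(w, grads, errors, eta, s):
    """One error-feedback round: each worker compresses its gradient plus
    carried error, keeps the residual, and the server averages the
    compressed messages: w <- w - eta * (1/K) * sum_k C(e_{t+1/2,k})."""
    K = len(grads)
    sent = []
    for k in range(K):
        e_half = errors[k] + grads[k]       # e_{t+1/2,k}
        c = top_s(e_half, s)                # C(e_{t+1/2,k})
        errors[k] = e_half - c              # residual carried to round t+1
        sent.append(c)
    return w - eta * sum(sent) / K, errors

w = np.zeros(4)
grads = [np.array([1.0, 0.1, 0.0, 0.0]), np.array([0.0, 0.0, 2.0, 0.1])]
errors = [np.zeros(4), np.zeros(4)]
w, errors = ef_step(w, grads, errors, eta=1.0, s=1)
assert np.allclose(w, [-0.5, 0, -1.0, 0])       # only the top entries moved
assert np.allclose(errors[0], [0, 0.1, 0, 0])   # the rest is remembered
```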
Moreover, activation functions that produce continuous-valued activation maps (such as ReLU) are less biologically plausible, because biological neurons are rarely in their maximum saturation regime [22] and use spikes to communicate instead of continuous values [23].
Previous work by Blier et al. [31] demonstrated the ability of DNNs to losslessly compress the input data and the weights, but without considering the number of non-zero activations. In this work we relax the lossless requirement and also consider neural networks purely as function approximators instead of probabilist ...
Previous literature has also demonstrated the increased biological plausibility of sparseness in artificial neural networks [24]. Spike-like sparsity on activation maps has been thoroughly researched on the more biologically plausible rate-based network models [25], but it has not been thoroughly explored as a design o...
In neural networks, sparseness can be applied to the connections between neurons or in the activation maps [14]. Although sparseness in the activation maps is usually enforced in the loss function by adding an $L_{1,2}$ regularization or Kullback-Leibler…
B
Game theory provides an efficient tool for cooperation through resource allocation and sharing [20, 21]. A computation offloading game has been designed in order to balance the UAV's tradeoff between execution time and energy consumption [25]. A sub-modular game is adopted in the scheduling of beaconing periods fo…
In the literature, most works search for a PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, there are limitations to this algorithm. In BLLA, each UAV can calculate and predict its utility for any $s_{i}\in S_{i}$…
The learning rate of the existing algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) was proposed by [14]. However, in this algorithm only one UAV is allowed to change its strategy in each iteration based on the current game state, and then another UAV ch ...
Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit...
Compared with other algorithms, the novel SPBLLA algorithm has advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely seen algorithm, LLA, is an ideal method for NE approachin ...
D
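As a toy illustration of the log-linear learning dynamic discussed above, the sketch below lets one agent at a time compare its current strategy with a uniformly sampled trial strategy and switch with a Boltzmann probability controlled by a temperature; the game, utilities, and temperature are invented for illustration and do not come from the cited papers:

```python
import math
import random

def bll_step(state, i, strategies, utility, tau=0.1, rng=random):
    """One binary log-linear learning step: agent i samples a trial strategy
    and switches to it with probability exp(u_try/tau)/(exp(u_try/tau)+exp(u_cur/tau))."""
    trial = rng.choice(strategies)
    u_cur = utility(state, i)
    u_try = utility({**state, i: trial}, i)
    p_switch = math.exp(u_try / tau) / (math.exp(u_try / tau) + math.exp(u_cur / tau))
    if rng.random() < p_switch:
        state = {**state, i: trial}
    return state

# Toy 2-agent coordination game: utility 1 when both agents pick the same strategy.
rng = random.Random(0)
util = lambda s, i: 1.0 if s[0] == s[1] else 0.0
state = {0: "a", 1: "b"}
for _ in range(200):
    state = bll_step(state, rng.choice([0, 1]), ["a", "b"], util, tau=0.05, rng=rng)
```

With a low temperature the dynamic concentrates on the potential maximizer, which is why it is used to reach a PSNE in potential games.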
imposed at all boundary points in the plasma domain (in combination with the explicitly applied boundary conditions $\overline{\mathbf{v}}|_{\Gamma}=\mathbf{0}$),
to the peak values of $\psi_{main}$, $\psi_{lev}$, and $\psi_{comp}$ ...
$V_{form}=16$ kV, $I_{main}=70$ A, $V_{lev}=16$ ...
were set to the experimentally measured values corresponding to experimentally recorded $V_{lev}=16$ kV and $V_{comp}=18$ kV ...
typical shot with $V_{form}=16$ kV, where $V_{form}$ is the voltage to which the formation capaci ...
B
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_A, x_A) = 1_A$ ...
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_A, x_A) = 1_A$ ...
\[
f_{A}(u,v)=f_{B}(u,v)=\begin{cases}
1 & \text{if } u=v\neq\texttt{null}\\
a & \text{if } u\neq\texttt{null},\ v\neq\texttt{null}\text{ and } u\neq v\\
b & \text{if } u=v=\texttt{null}\\
0 & \text{otherwise.}
\end{cases}
\]
Intuitively, if an abstract value $x_A$ of $\mathcal{L}_A$ is interpreted as $1$ (i.e., equality) by $h_A$ ...
A
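The comparability function defined above translates directly into code; representing null as Python's `None` and the lattice values $a$ and $b$ as strings are implementation assumptions made only for this sketch:

```python
NULL = None  # stand-in for the null value

def comparability(u, v, a="a", b="b"):
    """Comparability function from the example above: a and b are the lattice
    values for 'both non-null but different' and 'both null', respectively."""
    if u == v and u is not NULL:
        return 1          # equal non-null values
    if u is not NULL and v is not NULL and u != v:
        return a          # distinct non-null values
    if u is NULL and v is NULL:
        return b          # two missing values
    return 0              # one value is null, the other is not
```

Note that reflexivity is deliberately relaxed on null: `comparability(NULL, NULL)` returns `b`, not `1`.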
Figure 6 shows the loss metrics of the three algorithms in the CartPole environment, which implies that the Dropout-DQN methods introduce more accurate gradient estimation of policies through iterations of different learning trials than DQN. The rate of convergence of one of the Dropout-DQN methods has done more iterations t ...
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene...
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to measure the effect of Dropout on variance and on the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to measure the effect of Dropout, because in such an environment the optim ...
In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e...
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation between the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect of variance before applying Dropout (DQN) and aft ...
C
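The Dropout mechanism underlying the Dropout-DQN variants can be sketched in numpy as standard inverted dropout (the layer shape and drop rate below are illustrative; the actual experiments apply dropout inside a DQN's hidden layers):

```python
import numpy as np

def dropout(x, p, rng, train=True):
    """Inverted dropout: zero each unit with probability p and rescale the rest,
    so the expected activation is unchanged and evaluation needs no correction."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
h = np.maximum(0.0, rng.normal(size=(4, 16)))      # hidden ReLU activations
h_train = dropout(h, p=0.5, rng=rng)               # stochastic during training
h_eval = dropout(h, p=0.5, rng=rng, train=False)   # identity at evaluation time
```

Averaging over the random masks is what reduces the variance of the learned value estimates.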
Encoder-decoder networks with long and short skip connections are the winning architectures according to the state-of-the-art methods. Skip connections in deep networks have improved both segmentation and classification performance by facilitating the training of deeper network architectures and reducing the risks for ...
Medical images, both 2D and volumetric, have in general larger file sizes than natural images, which inhibits the ability to load them entirely into memory for processing. As such, they need to be processed either as patches or sub-volumes, making it difficult for the segmentation models to capture spatial relati ...
The majority of the methods discussed in Section 5 have attempted to handle the class imbalance issue in the input images, i.e., small foreground versus large background, by providing weights/penalty terms in the loss function. Other approaches consist of first identifying the object of interest, cropping around this o ...
We group the semantic image segmentation literature into six different categories based on the nature of their contributions: architectural improvements, optimization function based improvements, data synthesis based improvements, weakly supervised models, sequenced models, and multi-task models. Figure 1 indicates th...
For image segmentation, sequenced models can be used to segment temporal data such as videos. These models have also been applied to 3D medical datasets; however, the advantage of processing volumetric data using 3D convolutions versus processing the volume slice by slice using 2D sequenced models is unclear. Ideally, seeing ...
D
This means that the graph has a very large diameter (maximum shortest path), where information propagates slowly through MP layers. Therefore, even after MP, nodes in very different parts of the graph will end up having similar (if not identical) features, which leads feature-based pooling methods to assign them to the...
these methods compute a coarsened version of the graph through differentiable functions, which are parametrized by weights that are optimized for the task at hand. Differently from topological pooling, these methods account for the node features, which change as the GNN is trained.
As a result the graph collapses, becoming densely connected and losing its original structure. On the other hand, topological pooling methods can preserve the graph structure by operating on the whole adjacency matrix at once to compute the coarsened graphs and are not affected by uninformative node features.
Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d) the edges of the Laplacians at coarsening level 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP.
The reason can be once again attributed to the low information content of the individual node features and to the sparsity of the graph signal (most node features are 0), which makes it difficult for the feature-based pooling methods to infer global properties of the graph by looking at local sub-structures.
B
Experiments demonstrate that the accuracy of the imitating neural network matches the original accuracy, or is even slightly better than the random forest due to better generalization, while being significantly smaller. To summarize, our contributions are as follows:
Neural random forest imitation enables an implicit transformation of random forests into neural networks. Usually, data samples are propagated through the individual decision trees and the split decisions are evaluated during inference. We propose a method for generating input-target pairs by reversing this process and...
In this work, we presented a novel method for transforming random forests into neural networks. Instead of a direct mapping, we introduced a process for generating data from random forests by analyzing the decision boundaries and guided routing of data samples to selected leaf nodes.
Our proposed approach, called Neural Random Forest Imitation (NRFI), implicitly transforms random forests into neural networks. The main concept includes (1) generating training data from decision trees and random forests, (2) adding strategies for reducing conflicts and increasing the variety of the generated examples...
We propose a novel method for implicitly transforming random forests into neural networks by generating data from a random forest and training a random-forest-imitating neural network. Labeled data samples are created by evaluating the decision boundaries and guided routing to selected leaf nodes.
D
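As a hypothetical, simplified illustration of generating input-target pairs from a tree, the sketch below samples random inputs and labels them with a toy decision tree's predictions; the cited method additionally uses guided routing to selected leaves and conflict-reduction strategies, which are omitted here:

```python
import random

# A toy axis-aligned decision tree: (feature, threshold, left, right), leaves are labels.
TREE = (0, 0.5, (1, 0.3, "A", "B"), "C")

def route(tree, x):
    """Propagate a sample through the tree, as in ordinary inference."""
    while not isinstance(tree, str):
        feat, thr, left, right = tree
        tree = left if x[feat] <= thr else right
    return tree

def generate_pairs(tree, n, rng):
    """Create labeled training data for an imitating network by sampling
    inputs and recording the tree's decision for each of them."""
    return [(x, route(tree, x))
            for x in ([rng.random(), rng.random()] for _ in range(n))]
```

The resulting `(input, label)` pairs can then be used as an ordinary supervised training set for the imitating network.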
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
step with $\alpha\rightarrow\infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K=H$ episodes and hence equivalently induces
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt...
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
C
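The KL-regularized (mirror-descent) policy update referred to above, whose $\alpha\rightarrow\infty$ limit recovers a policy-iteration step, can be sketched for a single state as follows (the action values and step sizes are made up for illustration):

```python
import numpy as np

def kl_regularized_update(pi, q, alpha):
    """One mirror-descent policy update: the new policy is proportional to
    pi * exp(alpha * Q); alpha -> infinity recovers the greedy policy-iteration step."""
    logits = np.log(pi) + alpha * q
    logits -= logits.max()          # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum()

pi = np.array([0.25, 0.25, 0.25, 0.25])       # current policy at one state
q = np.array([1.0, 0.5, 0.2, 0.1])            # made-up action values
small_step = kl_regularized_update(pi, q, alpha=0.1)    # conservative update
large_step = kl_regularized_update(pi, q, alpha=100.0)  # nearly greedy on argmax Q
```

Intermediate values of $\alpha$ interpolate between staying close to the current policy (small KL divergence) and acting greedily on $Q$.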
Starting from a pre-trained teacher DNN, they first train an autoencoder which they call paraphraser to extract understandable factors from a selected intermediate layer of the teacher DNN. The student DNN is extended by a regressor which they call translator whose purpose is to predict the paraphraser factors from the...
For $\tau>1$, the labels tend to become more uniform, which has been reported to facilitate training. Furthermore, they propose to utilize the ground truth labels by minimizing a weighted average of the traditional cross-entropy loss based on the ground truth labels $t$ and the knowledge distill ...
Subsequently, the smaller student model is trained on data where the ground truth labels have been replaced by the soft labels obtained from the output of the teacher model, e.g., from the softmax output of a DNN. It has been shown that this substantially increases the accuracy of the student model compared to directly...
The student DNN is then trained to simultaneously minimize the cross-entropy loss on the ground truth labels and the difference between paraphraser and translator output. They employ the paraphraser and the translator after the last convolutional layer in their DNNs.
Starting from a pre-trained teacher DNN, they first train an autoencoder which they call paraphraser to extract understandable factors from a selected intermediate layer of the teacher DNN. The student DNN is extended by a regressor which they call translator whose purpose is to predict the paraphraser factors from the...
C
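A minimal sketch of the temperature-softened distillation loss described above (the temperature, weight, and logits are illustrative assumptions; common variants additionally rescale the soft term by $\tau^{2}$):

```python
import numpy as np

def softmax(z, tau=1.0):
    """Temperature-scaled softmax; tau > 1 flattens the distribution."""
    z = np.asarray(z, dtype=float) / tau
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, t_onehot, tau=4.0, weight=0.9):
    """Weighted average of cross-entropy on the ground truth labels t and
    cross-entropy against the teacher's temperature-softened output."""
    ce_true = -np.sum(t_onehot * np.log(softmax(student_logits)))
    ce_soft = -np.sum(softmax(teacher_logits, tau) * np.log(softmax(student_logits, tau)))
    return (1 - weight) * ce_true + weight * ce_soft
```

Raising `tau` makes the teacher's soft labels more uniform, which is the effect the text attributes to $\tau>1$.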
$\simeq\gamma_{x_i,p}\cdot\gamma_{p,x_{i+1}}$ ...
Note that whereas the proof of Lemma 1 in [54] takes place at the level of $L^{\infty}(X)$, the proof of Proposition 9.1 given above takes place at the level of simplicial complexes and simplicial maps.
The following corollary was already established by Gromov (who attributes it to Rips) in [47, Lemma 1.7.A]. The proof given by Gromov operates at the simplicial level. By invoking Proposition 8.1 we obtain an alternative proof which, instead of operating at the simplicial level, exploits the isometric embedding of $X$ ...
See Section 5 for the proof of Theorem 1. As we already mentioned earlier, our proof of Theorem 1 does not depend on Crawley-Boevey's theorem since we circumvented verifying the pointwise finite-dimensionality of $\mathrm{PH}_k(\mathrm{VR}_{*}(X);\mathbb{F})$ ...
In [80, Theorem 8.10], Z. Virk provided a proof of the corollary below which takes place at the simplicial level. The proof we give below exploits the hyperconvexity properties of $L^{\infty}(X)$ and also our isomorphism theorem, Theorem ...
D
One way to obtain an indication of a projection’s quality is to compute a single scalar value, equivalent to a final score. Examples are Normalized Stress [7], Trustworthiness and Continuity [24], and Distance Consistency (DSC) [25]. More recently, ClustMe [26] was proposed as a perception-based measure that ranks scat...
We present a Neighborhood Preservation plot (Figure 1(g)) that shows an overview of the preservation of neighborhoods of different sizes ($k$) in both the entire projection and the current selection, based on the Jaccard distance between sets:
As an example, the set difference from Martins et al. [33] uses the Jaccard set-distance between the two sets of neighbors of a point in low- and high-dimensional space in order to compute a measure of Neighborhood Preservation. We have chosen to adopt it in our work, in contrast to others, because of its intuitive int...
The difference line plot (d), on the other hand, builds on the standard plot by highlighting the differences between the selection and the global average, shown as positive and negative values around the 0 value of the y-axis. It provides a clearer overall picture of the difference in preservation among all the shown s...
we present t-viSNE, a tool designed to support the interactive exploration of t-SNE projections (an extension to our previous poster abstract [17]). In contrast to other, more general approaches, t-viSNE was designed with the specific problems related to the investigation of t-SNE projections in mind, bringing to light...
B
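The Jaccard-distance-based Neighborhood Preservation measure described above can be sketched as follows, assuming the $k$-nearest-neighbor sets in the high-dimensional space and in the projection have already been computed:

```python
def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| between two neighbor sets."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

def neighborhood_preservation(high_neighbors, low_neighbors):
    """Average Jaccard distance between each point's k nearest neighbors in
    high-dimensional space and in the projection; 0 means perfectly preserved."""
    dists = [jaccard_distance(h, l) for h, l in zip(high_neighbors, low_neighbors)]
    return sum(dists) / len(dists)
```

Evaluating this for several values of $k$, over the whole projection and over a selection, yields the curves shown in the Neighborhood Preservation plot.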
When to establish a new division of a category into subcategories: a coarse split criterion for the taxonomy can imply categories of little utility for the subsequent analysis, since in that case, the same category would group very different algorithms. On the other hand, a fine-grained taxonomy can produce very comple...
How many subcategories to divide a category into: the criterion followed in this regard must produce meaningful subcategories. In order to ensure a reduced number of subcategories, we consider that not all algorithms inside one category must be a member of one of its subcategories. In that way, we avoid in ...
This category is further divided into subcategories as a function of the above decision, i.e. which solutions are considered to create the movement vector. It should be noted that some algorithms can be classified into more than one subcategory. For instance, a particle’s update in the PSO solver is affected by the glo...
Taking into account all the reviewed papers, we group the proposals therein in a hierarchy of categories. In the hierarchy, not all proposals of a category must fit in one of its subcategories. In our classification, categories lying at the same level are disjoint sets, which means that each proposed algorithm can be ...
When to establish a new division of a category into subcategories: a coarse split criterion for the taxonomy can imply categories of little utility for the subsequent analysis, since in that case, the same category would group very different algorithms. On the other hand, a fine-grained taxonomy can produce very comple...
A
(1) By extending generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec ...
Classical clustering models work poorly on large-scale datasets. In contrast, DEC and SpectralNet work better on large-scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph-type datasets, they fail on the general datasets, which is probably caused by the fact that the graph ...
(3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while the other deep clustering models usually fail when the training set is not large enough. Besides, it is insensitive to different initializations of parameters and needs no pretraining.
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information so that they are available for non-Euclidean data, which is not provided by $k$-means. Therefore, ...
To study the impact of different parts of the loss in Eq. (12), the performance with different $\lambda$ is reported in Figure 4. From it, we find that the second term (corresponding to problem (7)) plays an important role, especially on UMIST. If $\lambda$ is set as a large value, we may get the trivi ...
B
Since the Open Resolver and the Spoofer Projects are the only two infrastructures providing vantage points for measuring spoofing, their importance is immense: they facilitated many research works analysing the spoofability of networks based on the datasets collected by these infrastructures. Nevertheless, the studi ...
Network Traces. To overcome the dependency on vantage points for running the tests, researchers explored alternatives for inferring filtering of spoofed packets. A recent work used loops in traceroute to infer the ability to send packets from spoofed IP addresses (Lone et al., 2017).
(Lichtblau et al., 2017) developed a methodology to passively detect spoofed packets in traces recorded at a European IXP connecting 700 networks. The limitation of this approach is that it requires cooperation of the IXP to perform the analysis over the traffic and applies only to networks connected to the IXP. Allow...
Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 20...
Vantage Points. Measurement of networks which do not perform egress filtering of packets with spoofed IP addresses was first presented by the Spoofer Project in 2005 (Beverly and Bauer, 2005). The idea behind the Spoofer Project is to craft packets with spoofed IP addresses and check receipt thereof on the vantage poin...
A
It is common to try to avoid such changes in artificial agents, machines, and industrial processes. When something changes, the entire system is taken offline and modified to fit the new situation. This process is costly and disruptive; adaptation similar to that in nature might make such systems more reliable and long...
It is common to try to avoid such changes in artificial agents, machines, and industrial processes. When something changes, the entire system is taken offline and modified to fit the new situation. This process is costly and disruptive; adaptation similar to that in nature might make such systems more reliable and long...
While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape...
Experiments in this paper used the gas sensor drift array dataset [7]. The data consists of 10 sequential collection periods, called batches. Every batch contains between 161 and 3,600 samples, and each sample is represented by a 128-dimensional feature vector; 8 features each from 16 metal ox ...
Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a b...
D
Our algorithm is a dynamic program, where we define a subproblem for each separator index $i$ and each set of endpoints $B\in\mathcal{B}_i$. The value of $A[i,B]$ is defined as f ...
$A[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_i$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A^{(1)}[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_i^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M$
$A^{(2)}[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_i^{(2)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M$.
A
While the question which free groups and semigroups can be generated using automata is settled, there is a related natural question which is still open: is the free product of two automaton/self-similar (semi)groups again an automaton/self-similar (semi)group? The free product of two groups or semigroups $X=\langle P\mid\mathcal{R}\rangle$ ...
However, there do not seem to be constructions for presenting arbitrary free products of self-similar groups in a self-similar way. For semigroups, on the other hand, such results do exist. In fact, the free product of two automaton semigroups $S$ and $T$ is always at least very close to being an auto ...
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. While t...
While the question which free groups and semigroups can be generated using automata is settled, there is a related natural question which is still open: is the free product of two automaton/self-similar (semi)groups again an automaton/self-similar (semi)group? The free product of two groups or semigroups $X=\langle P\mid\mathcal{R}\rangle$ ...
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview), but many of them present the product as a subgroup of an automaton/self-similar group and thus lose the self-similarity property. An exception here is a line of research based on the Bel ...
D
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea...
Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization Grand and Belinkov (2019); Ramakrishnan et al. (2018) or to re-scale the loss based on the difficulty o...
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende...
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea...
Without additional regularization, existing VQA models, such as the baseline model used in this work, UpDn Anderson et al. (2018), tend to rely on the linguistic priors $P(a|\mathcal{Q})$ to answer questions. Such models fail on VQA-CP, because the priors in ...
A
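As a toy illustration of the linguistic prior $P(a|\mathcal{Q})$ discussed above, the sketch below estimates per-question-type answer frequencies from invented training pairs; a model answering from this table alone would fail under VQA-CP's shifted test distribution:

```python
from collections import Counter, defaultdict

def answer_priors(train_pairs):
    """Estimate the linguistic prior P(a | question type) from (type, answer) pairs."""
    counts = defaultdict(Counter)
    for qtype, answer in train_pairs:
        counts[qtype][answer] += 1
    return {qt: {a: n / sum(c.values()) for a, n in c.items()}
            for qt, c in counts.items()}

# Invented training data for illustration only.
train = [("what color", "white"), ("what color", "white"), ("what color", "red"),
         ("how many", "2"), ("how many", "2"), ("how many", "3")]
priors = answer_priors(train)
```

If the test set reverses these answer distributions, as VQA-CP does by construction, predicting the most frequent training answer per question type performs poorly.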
Table 2 shows the results for the data practice classification task comparing the performance between RoBERTa, PrivBERT and Polisis (Harkous et al., 2018), a CNN based classification model. We report reproduced results for Polisis since the original paper takes into account both the presence and absence of a label whil...
The 1,600 labelled documents were randomly divided into 960 documents for training, 240 documents for validation and 400 documents for testing. Using 5-fold cross-validation, we tuned the hyperparameters for the models separately with the validation set and then used the held-out test set to report the test results. D...
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application...
Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague word...
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da...
B
Following our design goals and derived analytical tasks, we implemented StackGenVis, an interactive VA system that allows users to build powerful stacking ensembles from scratch. Our system consists of six main interactive visualization panels (see StackGenVis: Alignment of Data, Algorithms, and Models for Stacking En...
The model exploration phase is perhaps the most important step on the way to build a good ensemble. It focuses on comparing and exploring different models both individually and in groups. Due to the page limits, we now assume that we selected the most performant models, removed the remaining from the stack, and reached...
(ii) in the next algorithm exploration phase, we compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models; (iii) during the data wrangling phase, we manipulate the instances and features with two different views for each of them; (iv) model explo...
Predictions’ Space. The goal of the predictions’ space visualization (StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(f)) is to show an overview of the performance of all models of the current stack for different instances.
and (v) we track the history of the previously stored stacking ensembles in StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(b) and compare their performances against the active stacking ensemble—the one not yet stored in the history—in StackGenVis: Alignme...
B
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]),\,p(v,[323]),\,p(v,[313]),\,p(v,[003]))$ ...
$p(v,[013])=p(v,[313])=p(v,[113])=1$. Similarly, when $f=[112]$,
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
$\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}$.
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
For both BLEU and C Score, Jac Score is around 1 in each cluster, which means the persona descriptions are not similar. The dialogue quantity also seems similar among different clusters. So we can conclude that data quantity and task profile do not have a major impact on the fine-tuning process.
To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML : Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor...
Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2). When the data quantity i...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct the setting that tasks are similar to each other. For a fair comparison, each task on this setting also has 120 and 1200 utterances o...
Figure 6: The subarray patterns on the cylinder and the corresponding expanded cylinder. (a) The t-UAV subarray partition pattern. (b) The r-UAV subarray partition pattern with conflict. (c) The r-UAV subarray partition pattern without conflict. (d) The t-UAV subarray partition pattern with beamwidth selection.
Multiuser-resultant Receiver Subarray Partition: As shown in Fig. 3, the r-UAV needs to activate multiple subarrays to serve multiple t-UAVs at the same time. Assuming that an element can not be contained in different subarrays, then the problem of activated CCA subarray partition rises at the r-UAV side for the fast m...
Without loss of generality, let us focus on the TE-aware codeword selection for the k𝑘kitalic_k-th t-UAV at the r-UAV side. The beam gain is selected as the optimization objective, and the problem of beamwidth control is translated to choose the appropriate subarray size, which corresponds to the appropriate layer in ...
In the considered UAV mmWave network, the r-UAV needs to activate multiple subarrays and select multiple combining vectors to serve multiple t-UAVs at the same time. Hence, the problem of maximizing the beam gain of the combining vectors for the r-UAV with our proposed codebook can be rewritten as
The r-UAV needs to select multiple appropriate AWVs $\boldsymbol{v}(m_{s,k},n_{s,k},i_{k},j_{k},\mathcal{S}_{k})$, $k\in\mathcal{K}$
There are other logics, incomparable in expressiveness with $\mathsf{FO}^{2}_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The
The paper [4] shows decidability for a logic with incomparable expressiveness: the quantification allows a more powerful quantitative comparison, but must be guarded – restricting the counts only of sets of elements that are adjacent to a given element.
In addition, to make the main line of argument clearer, we consider only the finite graph case in the body of the paper, which already implies decidability of the finite satisfiability of $\mathsf{FO}^{2}_{\textup{Pres}}$.
Related one-variable fragments in which we have only a unary relational vocabulary and the main quantification is $\exists^{S}x\,\phi(x)$ are known to be decidable (see, e.g., [2]), and their decidability
We first introduce the assumptions for our analysis. In §4.1, we establish the global optimality and convergence of the PDE solution $\rho_{t}$ in (3.4). In §4.2, we further invoke Proposition 3.1 to establish the global optimality and convergence of
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Assumption 4.1 can be ensured by normalizing all state-action pairs. Such an assumption is commonly used in the mean-field analysis of neural networks (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Araújo et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). We remark that our analysis straightforwardly generalize...
Although Assumption 6.1 is strong, we are not aware of any weaker regularity condition in the literature, even in the linear setting (Melo et al., 2008; Zou et al., 2019; Chen et al., 2019b) and the NTK regime (Cai et al., 2019). Let the initial distribution $\nu_{0}$
Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che...
Compared to the baseline Zhang et al. (2020), Table 7 shows that: 1) our approach can lead to +3.02 and +3.38 BLEU improvements on average in the En→xx and xx→En directions respectively in the evaluation over 4 typologically different languages, and 2) using depth-wise LSTM
It is a common problem that increasing the depth does not always lead to better performance, whether with residual connections Li et al. (2022b) or other previous studies on deep Transformers Bapna et al. (2018); Wang et al. (2019); Li et al. (2022a), and the use of wider models is the usual method of choice for furthe...
Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de...
In our deep Transformer experiments, Table 6 shows that our depth-wise LSTM Transformer with fewer layers, parameters and computations can lead to competitive/better performance and faster decoding speed than vanilla Transformers with more layers but a similar BLEU score, and the depth-wise LSTM Transformer is in fact ...
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transform...
$\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$ and $\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$
$\uptau_{\subseteq_{i}}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$
$\langle\uptau_{\subseteq_{i}}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\rangle$
$\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$ and $\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$
topology $\langle\uptau_{\subseteq_{i}}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\rangle$
To overcome the above limitations, previous methods exploit more guided features such as the semantic information and distorted lines [9, 10], or introduce the pixel-wise reconstruction loss [11, 12, 13]. However, the extra features and supervisions impose increased memory/computation cost. In this work, we would like...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed o...
After predicting the distortion labels of a distorted image, it is straightforward to use a distance-metric loss such as the $\mathcal{L}_{1}$ loss or $\mathcal{L}_{2}$ loss to learn the network parameters.
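As a minimal illustration of such a distance-metric loss, the following sketch compares predicted and ground-truth ordinal distortion labels with $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ penalties; the label values are invented, and this is not the paper's actual training code.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between predicted and ground-truth distortion labels."""
    return np.mean(np.abs(pred - target))

def l2_loss(pred, target):
    """Mean squared error between predicted and ground-truth distortion labels."""
    return np.mean((pred - target) ** 2)

# Hypothetical ordinal distortion labels for one image (increasing with radius).
pred   = np.array([1.00, 1.05, 1.12, 1.20])
target = np.array([1.00, 1.04, 1.10, 1.18])
print(l1_loss(pred, target))
print(l2_loss(pred, target))
```

Either penalty can serve as the regression objective; $\mathcal{L}_{2}$ weights large label errors more heavily.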
2. The local-global associate ordinal distortion estimation network considers different scales of distortion features, jointly reasoning the local distortion context and global distortion context. Also, the devised distortion-aware perception layer boosts the feature extraction of different degrees of distortion.
In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl...
Table 3 shows the training time per epoch of SNGM with different batch sizes. When $B=128$, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with the size of 16, which cannot fully utilize the computation power. Hence, compared to other results, SNGM
argued that SGD with a large batch size needs to increase the number of iterations. Further, authors in [32] observed that gradients at different layers of deep neural networks vary widely in the norm and proposed the layer-wise adaptive rate scaling (LARS) method. A similar method that updates the model parameter in a...
A direct corollary is that the batch size is constrained by the smoothness constant $L$, i.e., $B\leq\mathcal{O}(1/L)$. Hence, we cannot increase the batch size casually in these SGD-based methods. Otherwise, it may slow down the convergence rate, and
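The batch-size constraint above follows the usual smoothness argument; a hedged sketch (assuming an $L$-smooth objective $f$, an unbiased stochastic gradient $g_t$, and the linear learning-rate scaling rule $\eta\propto B$, none of which is spelled out in this excerpt):

```latex
% Descent lemma for one SGD step w_{t+1} = w_t - \eta g_t with E[g_t] = \nabla f(w_t):
\mathbb{E}\, f(w_{t+1})
  \;\le\; f(w_t) \;-\; \eta\,\|\nabla f(w_t)\|^2
  \;+\; \frac{L\eta^{2}}{2}\,\mathbb{E}\,\|g_t\|^{2}.
% The quadratic term dominates unless \eta \le O(1/L); under \eta \propto B
% this caps the batch size at B \le O(1/L).
```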
Please note that EXTRAP-SGD has two learning rates for tuning and needs to compute two mini-batch gradients in each iteration. EXTRAP-SGD requires more time than other methods to tune hyperparameters and train models. Similarly, CLARS needs to compute extra mini-batch gradients to estimate the layer-wise learning rate ...
Our main goal is to develop algorithms for the black-box setting. As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms ...
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ...
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions. To continue this example, there may be further constraints on $F_{I}$
We remark that if we make an additional assumption that the stage-II cost is at most some polynomial value $\Delta$, we can use standard SAA techniques without discarding scenarios; see Theorem 2.6 for full details. However, this assumption is stronger than is usually used in the literature for two-stage stochastic problems.
Unfortunately, standard SAA approaches [26, 7] do not directly apply to radius minimization problems. On a high level, the obstacle is that radius-minimization requires estimating the cost of each approximate solution; counter-intuitively, this may be harder than optimizing the cost (which is what is done in previous ...
In addition to uncertainties in information exchange, different assumptions on the cost functions have been discussed. In most of the existing works on distributed convex optimization, it is assumed that the subgradients are bounded if the local cost
However, a variety of random factors may co-exist in practical environment. In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d...
Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d. In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments...
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than i.i.d. graph sequences as in [12]-[15], and additive and
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
Typically, the attributes in microdata can be divided into three categories: (1) Explicit-Identifier (EI, also known as Personally-Identifiable Information), such as name and social security number, which can uniquely or mostly identify the record owner; (2) Quasi-Identifier (QI), such as age, gender and zip code, whi...
Although the generalization for $k$-anonymity provides enough protection for identities, it is vulnerable to attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by matching
Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar ...
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, causing that the equivalence groups only preserv...
Generalization [8, 26] is one of the most widely used privacy-preserving techniques. It transforms the values on QI attributes into general forms, and the tuples with equally generalized values constitute an equivalence group. In this way, records in the same equivalence group are indistinguishable. $k$-Anonymity
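The grouping step described above can be sketched in a few lines. The records, the 10-year age bands, and the 3-digit zip-prefix generalization below are all invented for illustration, not taken from the paper's figures:

```python
from collections import defaultdict

# Toy records: (name, age, zipcode, disease). Name is the explicit identifier,
# (age, zipcode) are quasi-identifiers, disease is the sensitive attribute.
records = [
    ("Alice", 23, "47677", "flu"),
    ("Bob",   27, "47602", "flu"),
    ("Carol", 35, "47905", "pneumonia"),
    ("Dave",  36, "47909", "pneumonia"),
]

def generalize(age, zipcode):
    """Map QI values to coarser forms: a 10-year age band and a 3-digit zip prefix."""
    lo = age // 10 * 10
    return (f"[{lo}-{lo + 9}]", zipcode[:3] + "**")

# Drop the explicit identifier, then bucket records by generalized QI values:
# each bucket is an equivalence group whose members are indistinguishable.
groups = defaultdict(list)
for name, age, zipcode, disease in records:
    groups[generalize(age, zipcode)].append(disease)

for qi, diseases in groups.items():
    print(qi, diseases)
```

The second group also illustrates the attribute-disclosure weakness: both of its sensitive values are "pneumonia", so membership in the group alone reveals the disease despite 2-anonymity.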
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
Due to limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger mask to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (20...
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some information
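As a concrete check of the setup above, a short script can enumerate the Walsh–Fourier coefficients $\hat{f}(A)=\mathbb{E}_x[f(x)\prod_{i\in A}x_i]$ of a toy $\pm 1$-valued function and verify Parseval: the squared coefficients sum to $\|f\|_2^2=1$. The function chosen is arbitrary, purely for illustration.

```python
from itertools import product

n = 3
f = lambda x: x[0] * x[1] if x[2] == 1 else x[0]  # arbitrary ±1-valued function

def fourier_coefficient(f, A, n):
    """hat{f}(A) = E_x[ f(x) * prod_{i in A} x_i ] over the uniform cube {-1,1}^n."""
    total = 0
    for x in product((-1, 1), repeat=n):
        chi = 1
        for i in A:
            chi *= x[i]
        total += f(x) * chi
    return total / 2 ** n

subsets = [tuple(i for i in range(n) if mask >> i & 1) for mask in range(2 ** n)]
weights = [fourier_coefficient(f, A, n) ** 2 for A in subsets]
print(sum(weights))  # Parseval: equals E[f^2] = 1 for a ±1-valued f
```

Since the function is $\pm 1$-valued, $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$ is exactly a probability distribution over subsets, which is what the entropy in the note is taken over.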
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. They are much less compared with MASTER, OPT-WLSVI, LSVI-UCB, Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th...
In this section, we perform empirical experiments on synthetic datasets to illustrate the effectiveness of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart. We compare the cumulative rewards of the proposed algorithms with five baseline algorithms: Epsilon-Greedy (Watkins, 1989), Random-Exploration, LSVI-UCB (Jin et al., 2020...
We develop the LSVI-UCB-Restart algorithm and analyze the dynamic regret bound for both cases that local variations are known or unknown, assuming the total variations are known. We define local variations (Eq. (2)) as the change in the environment between two consecutive epochs instead of the total changes over the en...
We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202...
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures 1 and 2) which is statistically significant ($r(9)=-0.81$, $p<.005$). Trust is built on transparency and truthfulness, and
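A statistic of this form can be reproduced mechanically with a plain Pearson correlation. The eleven (prevalence, trust) pairs below are invented stand-ins, not the survey data; eleven media sources give the $\mathrm{df}=n-2=9$ seen in $r(9)$.

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative only: fake-news prevalence vs. trust for 11 hypothetical sources.
prevalence = [9, 8, 8, 7, 6, 5, 4, 3, 3, 2, 1]
trust      = [2, 3, 2, 4, 4, 5, 6, 6, 7, 8, 9]
print(round(pearson_r(prevalence, trust), 2))  # strongly negative
```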
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,...
Drawing inspiration from the CBOW schema, we propose Decentralized Attention Network (DAN) to distribute the relational information of an entity exclusively over its neighbors. DAN retains complete relational information and empowers the induction of embeddings for new entities. For example, if W3C is a new entity, its...
Moreover, DAN introduces a distinctive attention mechanism that employs the neighbors of the central entity to evaluate the neighbors themselves. This collective voting mechanism helps mitigate bias and contributes to improved performance, even on traditional tasks. It also distinguishes DAN from other existing inducti...
Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg...
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attem...
In this section, we conduct experiments to compare the proposed VDM with several state-of-the-art model-based self-supervised exploration approaches. We first describe the experimental setup and implementation detail. Then, we compare the proposed method with baselines in several challenging image-based RL tasks. The ...
We compare the model complexity of all the methods in Table I. VDM, RFM, and Disagreement use a fixed CNN for feature extraction. Thus, the trainable parameters of feature extractor are 0. ICM estimates the inverse dynamics for feature extraction with 2.21M parameters. ICM and RFM use the same architecture for dynamics...
To validate the effectiveness of our method, we compare the proposed method with the following self-supervised exploration baselines. Specifically, we conduct experiments to compare the following methods: (i) VDM. The proposed self-supervised exploration method. (ii) ICM [10]. ICM first builds an inverse dynamics mode...
We observe that our method performs the best in most of the games, in both the sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are ...
Conducting exploration without the extrinsic rewards is called the self-supervised exploration. From the perspective of human cognition, the learning style of children can inspire us to solve such problems. The children often employ goal-less exploration to learn skills that will be useful in the future. Developmental ...
The number of coefficients $|A_{m,n,1}|=\binom{m+n}{n}\in\mathcal{O}(m^{n})$
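The stated count can be checked directly, assuming (as the subscript $1$ suggests) that $A_{m,n,1}$ collects the multi-indices of total ($l_1$) degree at most $m$ in $n$ variables:

```python
from math import comb

def num_coefficients(m, n):
    """|A_{m,n,1}| = C(m+n, n): multi-indices of total degree <= m in n variables."""
    return comb(m + n, n)

print(num_coefficients(3, 2))                          # C(5, 2) = 10
# For fixed dimension n, the count grows polynomially, like m^n:
print([num_coefficients(m, 3) for m in (1, 2, 4, 8)])  # [4, 10, 35, 165]
```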
Thus, combining sub-exponential node numbers with exponential approximation rates, interpolation with respect to $l_{2}$-degree polynomials might yield a way of lifting the curse of dimensionality and answering Question 1.
Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to scale sub-exponential with space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are abl...
convergence rates for the Runge function, as a prominent example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality.
In any case, any answer to Question 2 that is to be of practical relevance must provide a recipe to construct interpolation nodes $P_{A}$ that allow efficient approximation while resisting the curse of dimensionality in terms of Question 1.
The Wasserstein distance, as a particular case of IPM, is popular in many machine learning applications. However, a significant challenge in utilizing the Wasserstein distance for two-sample tests is that the empirical Wasserstein distance converges at a slow rate due to the complexity of the associated function space....
Recently, [32, 33, 34] naturally extend this idea by projecting data points into a $k$-dimensional linear subspace with $k>1$ such that the 2-Wasserstein distance after projection is maximized. Our proposed projected Wasserstein distance is similar to this framework, but we use 1-Wasserstein
Typical examples include principal component analysis [27], linear discriminant analysis [28], etc. It is intuitive to understand the differences between two collections of high-dimensional samples by projecting those samples into low-dimensional spaces in some proper directions [29, 30, 31, 6, 32, 33, 34].
While the Wasserstein distance has wide applications in machine learning, the finite-sample convergence rate of the Wasserstein distance between empirical distributions is slow in high-dimensional settings. We propose the projected Wasserstein distance to address this issue.
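A hedged sketch of the idea: project both high-dimensional samples onto a single direction and compare the resulting 1-D empirical distributions with the 1-Wasserstein distance (for equal sample sizes, the mean absolute difference of sorted values). The fixed direction below is purely illustrative; the actual method optimizes over projections.

```python
import random

def w1_1d(xs, ys):
    """1-Wasserstein distance between two equal-size 1-D empirical distributions."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def project(samples, direction):
    """Project each sample onto the (normalized) direction vector."""
    norm = sum(d * d for d in direction) ** 0.5
    return [sum(s_i * d_i for s_i, d_i in zip(s, direction)) / norm for s in samples]

random.seed(0)
dim = 5
# Two toy high-dimensional samples differing only in the first coordinate's mean.
X = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(200)]
Y = [[random.gauss(1, 1)] + [random.gauss(0, 1) for _ in range(dim - 1)] for _ in range(200)]

direction = [1] + [0] * (dim - 1)  # hypothetical projection direction
print(w1_1d(project(X, direction), project(Y, direction)))  # near the mean shift of 1
```

Projecting first sidesteps the slow high-dimensional convergence of the empirical Wasserstein distance, since the 1-D distance converges at a parametric rate.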
Our two-sample testing algorithm also gives us interpretable characterizations for understanding differences between two high-dimensional distributions, by studying the worst-case projection mappings and projected samples in low dimensions. See Fig. 2(a) for the optimized linear mapping so that the Wasserstein distance
VAE-type DGMs use amortized variational inference to learn an approximate posterior $q_{\phi}(H\mid x)$ by maximizing an evidence lower bound (ELBO) to the log-marginal likelihood of the data under the model.
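For reference, the ELBO takes the standard form (a sketch following the notation above, assuming a prior $p(H)$ and decoder $p_{\theta}(x\mid H)$, which this excerpt does not define explicitly):

```latex
\log p_{\theta}(x)\;\ge\;
\mathbb{E}_{q_{\phi}(H\mid x)}\!\left[\log p_{\theta}(x\mid H)\right]
\;-\;\mathrm{KL}\!\left(q_{\phi}(H\mid x)\,\middle\|\,p(H)\right)
\;=:\;\mathrm{ELBO}(\theta,\phi;x).
```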
Deep generative models (DGMs) such as variational autoencoders (VAEs) [dayan1995helmholtz, vae, rezende2014stochastic] and generative adversarial networks (GANs) [gan] have enjoyed great success at modeling high dimensional data such as natural images. As the name suggests, DGMs leverage deep learning to model a data g...
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above mentioned VAEs (footnote: in this exposition we use unsupervised-trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs)
Amortization of the inference is achieved by parameterising the variational posterior with another deep neural network (called the encoder or the inference network) that outputs the variational posterior parameters as a function of $X$. Thus, after jointly training the encoder and decoder, a VAE model can perform
D
This paper presents the NOT-gate implementation of structural computers, together with the Reverse-Logic pair and double-pair-based logic operation techniques for digital signals, which can solve the heating and aging problems of existing semiconductor computers.
DFS (depth-first search) verifies that the output is achievable for the actual pin connection state. As described above, the output is determined by the 3-pin input, so we enter 1 with the A2 and A1 connections and the B2 and B1 connections (the reverse is treated as 0), and the corresponding output will be recognized...
The structure-based computer described in this paper is based on Boolean algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the logical values True and False as 1 and 0, and mathematically describes digital electrical si...
Furthermore, we propose the Simulation Metric, based on depth-first search (DFS), which enables easy implementation and testing of complex structural computer circuits. We confirmed the feasibility of this study in an experiment based on an XOR gate produced by combining NAND, AND and OR gates.
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations since series and parallel connections were required. However, one can ask whether the four-pin designs are the...
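The truth-table verification described above can be sketched directly: build XOR from NAND, AND and OR gates and exhaustively check all four input pairs. The particular gate decomposition used here is one standard choice, not necessarily the paper's exact pin circuit.

```python
# Elementary gates on bits 0/1.
def NAND(a, b): return 1 - (a & b)
def AND(a, b):  return a & b
def OR(a, b):   return a | b

def XOR(a, b):
    # Standard identity: a XOR b = (a OR b) AND (a NAND b)
    return AND(OR(a, b), NAND(a, b))

# Exhaustive check of the truth table, as in the XOR-gate experiment.
table = {(a, b): XOR(a, b) for a in (0, 1) for b in (0, 1)}
print(table)  # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```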
C
Hence any function $x^{n}$ with $\gcd(n,q-1)\neq 1$, under the action of $\mathbf{K}$, settles down to the function $x^{q-1}$...
In this section, we provide examples of estimating the possible orbit lengths of permutation polynomials in the form of Dickson polynomials $D_{n}(x,\alpha)$ [10] of degree $n$ through the linear representati...
The paper is organized as follows. Section 2 focuses on linear representation for maps over finite fields $\mathbb{F}$, develops conditions for invertibility, computes the compositional inverse of such maps and estimates the cycle structure of permutation polynomials. In Section 3, this linear representat...
The work [19] also provides a computational framework to compute the cycle structure of the permutation polynomial $f$ by constructing a matrix $A(f)$, of dimension $q\times q$, through the coefficients of the (algebraic) powers of $f^{k}$...
In this section, we aim to compute the possible cycle lengths of the PP through the linear representation defined in (10). As discussed in Section 1.3, given a polynomial $f(x)$, we associate a dynamical system through a difference equation of the form
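As a toy illustration of reading off cycle lengths of a permutation polynomial (the parameters $q=11$, $n=3$ are our own example, not the paper's construction): the monomial $x^n$ permutes $\mathrm{GF}(q)$ iff $\gcd(n, q-1)=1$, so we can enumerate the induced permutation and extract its cycles directly.

```python
from math import gcd

q, n = 11, 3
assert gcd(n, q - 1) == 1           # x^3 is a permutation polynomial over GF(11)
perm = {x: pow(x, n, q) for x in range(q)}

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a dict x -> f(x)."""
    seen, lengths = set(), []
    for start in perm:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        lengths.append(length)
    return sorted(lengths)

print(cycle_lengths(perm))  # [1, 1, 1, 4, 4]: fixed points 0, 1, 10 and two 4-cycles
```

For large $q$ this brute force is infeasible, which is exactly why the linear-representation machinery in the text is useful.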
D
In this study, we evaluated the performance of the different meta-learners across a variety of settings, including high-dimensional and highly correlated settings. Most of these settings were not easy problems, as evidenced by the absolute accuracy values obtained by the meta-learners. Additionally, we considered two rea...
The nonnegative elastic net is particularly suitable if it is important to the research that, out of a set of correlated features, more than one should be selected. If this is not of particular importance, the nonnegative lasso and nonnegative adaptive lasso can provide even sparser models.
The results of applying MVS with the seven different meta-learners to the colitis data can be observed in Table 2. In terms of raw test accuracy the nonnegative lasso is the best performing meta-learner, followed by the nonnegative elastic net and the nonnegative adaptive lasso. In terms of AUC and H, the best performi...
In this article we investigate how the choice of meta-learner affects the view selection and classification performance of MVS. We compare the following meta-learners: (1) the interpolating predictor of Breiman (1996), (2) nonnegative ridge regression (Hoerl & Kennard, 1970; Le Cessie & Van Hou...
For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that, given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu et al., 2012). An exam...
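A minimal sketch of a nonnegative meta-learner in the spirit of the methods compared here: stack base-learner predictions column-wise and fit nonnegative least squares by projected gradient descent, so each learner receives a weight $\geq 0$. The synthetic data, the two toy base learners, and the solver are our own illustration, not the article's exact estimators.

```python
import numpy as np

def nonneg_lstsq(P, y, steps=5000):
    """Nonnegative least squares via projected gradient descent."""
    lr = 1.0 / np.linalg.norm(P.T @ P, 2)   # step size from the spectral norm
    w = np.zeros(P.shape[1])
    for _ in range(steps):
        w -= lr * (P.T @ (P @ w - y))
        w = np.maximum(w, 0.0)              # project onto the nonnegative orthant
    return w

rng = np.random.default_rng(0)
y = rng.normal(size=200)
good = y + 0.1 * rng.normal(size=200)       # a useful base learner
bad = -y + 0.1 * rng.normal(size=200)       # an anti-correlated one
w = nonneg_lstsq(np.column_stack([good, bad]), y)
print(w)  # the anti-correlated learner is driven to (near) zero weight
```

The nonnegativity constraint is what produces the sign-sensible, sparse combinations discussed above: a learner that is anti-correlated with the target cannot be "exploited" with a negative weight, it is simply dropped.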
A
In this paper, we introduce DepAD, a versatile framework for dependency-based anomaly detection. DepAD offers a general approach to construct effective, scalable, and flexible anomaly detection algorithms by leveraging off-the-shelf feature selection techniques and supervised prediction models for various data types a...
We systematically and empirically study the performance of representative off-the-shelf techniques and their combinations in the DepAD framework. We identify two well-performing dependency-based methods. The two DepAD algorithms consistently outperform nine benchmark algorithms on 32 datasets.
We compare two high-performing instantiations of DepAD, FBED-CART-PS and FBED-CART-Sum, against nine state-of-the-art anomaly detection methods across 32 commonly used datasets. The results demonstrate that DepAD algorithms consistently outperform existing methods in most cases. Moreover, the DepAD framework’s high int...
Effectiveness: The two DepAD algorithms, FBED-CART-PS, and FBED-CART-Sum, demonstrate superior performance over nine state-of-the-art anomaly detection methods in the majority of cases. The two DepAD methods do not outperform wkNN. However, they show advantages over wkNN in higher dimensional datasets in terms of both...
In this subsection, we answer the question: compared with state-of-the-art anomaly detection methods, how do the instantiated DepAD algorithms perform? We choose the two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, to compare with the nine state-of-the-art anomaly detection methods shown in Ta...
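The dependency-based scoring idea can be sketched generically: predict each variable from the remaining ones and combine the per-variable absolute residuals by summation into one anomaly score per object. Plain linear predictors stand in for the FBED feature selection and CART models used by the actual DepAD instantiations; the "Sum" combination mirrors FBED-CART-Sum in spirit only.

```python
import numpy as np

def depad_sum_scores(X):
    """Sum of absolute residuals from predicting each column from the others."""
    n, d = X.shape
    residuals = np.zeros_like(X)
    for j in range(d):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(n)])       # design with intercept
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        residuals[:, j] = np.abs(X[:, j] - A @ coef)
    return residuals.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=300)
X = np.column_stack([x, 2 * x + 0.1 * rng.normal(size=300)])
X[0] = [3.0, -6.0]                  # violates the dependency x2 ~ 2 * x1
scores = depad_sum_scores(X)
print(scores.argmax())  # 0: the dependency-violating object scores highest
```

Note that the planted anomaly is unremarkable in each marginal; it is flagged only because it breaks the dependency between the two variables, which is the premise of dependency-based detection.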
B
Comparison with Filippi et al. [2010]. Our setting is different from the standard generalized linear bandit of Filippi et al. [2010]. In our setting, the reward due to an action (assortment) can depend on up to $K$ variables ($\theta_{*}\cdot x_{t,i},\ i\in\mathcal{Q}_{t}$, ...
Algorithm 1 follows the template of optimism in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. Technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
In this section we compare the empirical performance of our proposed algorithm CB-MNL with the previous state of the art in the MNL contextual bandit literature: UCB-MNL [Oh & Iyengar, 2021] and TS-MNL [Oh & Iyengar, 2019] on artificial data. We focus on performance comparison for varying values of the parameter $\kappa$...
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a m...
D
Table 2: Action localization results on validation set of ActivityNet-v1.3, measured by mAPs (%) at different tIoU thresholds and the average mAP. Our VSGN achieves the state-of-the-art average mAP and the highest mAP for short actions. Note that our VSGN, which uses pre-extracted features without further finetuning, s...
We provide an ablation study for the key components VSS and xGPN in VSGN to verify their effectiveness on the two datasets in Tables 3 and 4, respectively. The baselines are implemented by replacing each xGN module in xGPN with a layer of Conv1d(3,2) and ReLU, and not using cutt...
Table 6: xGN levels in xGPN (ActivityNet-v1.3). We show the mAPs (%) at different tIoU thresholds, average mAPs as well as mAPs for short actions (less than 30 seconds) when using xGN at different xGPN encoder levels. The levels in the columns with ✓ use xGN and the ones in the blank columns use a Conv1d(3,2)...
To further improve the boundaries generated from $M_{loc}$, we design $M_{adj}$, inspired by FGD in [24]. For each updated anchor seg...
Cross-scale graph network. The xGN module contains a temporal branch to aggregate features in a temporal neighborhood, and a graph branch to aggregate features from intra-scale and cross-scale locations. Then it pools the aggregated features into a smaller temporal scale. Its architecture is illustrated in Fig. 4. The ...
A
Latha and Jeeva [LJ19] tried out various ensembles for this same data set, with or without (as in our case) feature selection. They found that applying majority vote with the NB, BN, RF, and MLP algorithms was the best combination, achieving ≈82% accuracy without feature selection. However, they do not state ho...
From Figure 2(d.1), we observe that KNN and MLP contain more diverse models (darker green color for instances at the bottom), because they better predict hard-to-classify instances when compared to LR, RF, and GradB (which work better for the easy-to-predict instances). Since we have already found powerful and diverse ...
Figure 5: The exploration of clusters of interest that contain performant ML models. View (a) presents the user's selection that drives the analyses performed in the remaining subfigures. (b.1) provides an overview of the performance, showing that C3 has under...
Exploration and Selection of Algorithms and Models. Similar to the workflow described in Section 4, we start by setting the most appropriate validation metrics for the imbalanced data set (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization, view (a)). The projection in Figure 5(a)...
At this point, the importance of C1 and C3 is clear, so we decide to gradually scan for the in-depth connections of the models belonging to the remaining clusters and the data instances (Step 3 in Fi...
B
Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transi...
Furthermore, unlike previous algorithms in [14, 15], the convergence rate of the DSMC algorithm does not rapidly decrease in scenarios where the state space contains sparsely connected regions. Due to the decentralized nature of the consensus protocol, the Markov chain synthesis relies on local information, similar to ...
Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen...
In this section, we apply the DSMC algorithm to the probabilistic swarm guidance problem and provide numerical simulations showing that the convergence rate of the DSMC algorithm is considerably faster than those of the previous Markov chain synthesis algorithms in [7] and [14].
A
Despite the exponential size of the search space, there exist efficient polynomial-time algorithms to solve the LAP [11]. A downside of the LAP is that the geometric relation between points is not explicitly taken into account, so that the found matchings lack spatial smoothness. To compensate for this, a correspondenc...
The functional mapping is represented as a low-dimensional matrix for suitably chosen basis functions. The classic choice are the eigenfunctions of the LBO, which are invariant under isometries and predestined for this setting. Moreover, for general non-rigid settings learning these basis functions has also been propos...
Apart from methods tackling a QAP formulation (see previous paragraph), there exist directions utilising other structural properties of isometries. The Laplace-Beltrami operator (LBO) [54], a generalisation of the Laplace operator on manifolds, as well as its eigenfunctions are invariant under isometries.
Functional Maps [51] formulate the correspondence problem as a linear mapping $\mathcal{C}_{ij}:L^{2}(\mathcal{X}_{i})\to L^{2}(\mathcal{X}_{j})$...
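A small numerical sketch of the functional-map idea: given corresponding functions expressed in $k$-dimensional bases on the two shapes, the map $C$ is recovered by least squares from $CA \approx B$, where the columns of $A$ and $B$ are coefficient vectors. The random toy bases stand in for actual Laplace-Beltrami eigenfunctions; sizes and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_fn = 5, 40
C_true = rng.normal(size=(k, k))          # ground-truth functional map
A = rng.normal(size=(k, n_fn))            # coefficients of n_fn functions on shape X_i
B = C_true @ A + 1e-3 * rng.normal(size=(k, n_fn))   # corresponding coefficients on X_j

# Solve min_C ||C A - B||_F by transposing to the standard lstsq form A^T C^T = B^T.
C_est = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
print(np.linalg.norm(C_est - C_true))     # small recovery error
```

This is why the representation is attractive: the unknown is a small $k \times k$ matrix fitted linearly, rather than a combinatorial point-to-point matching.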
C
If there exists a polynomial algorithm that tests if a graph $G$ is a path graph and returns a clique path tree of $G$ when the answer is "yes", then there exists an algorithm with the same complexity to test if a graph is a directed path graph.
On the side of directed path graphs, at the state of the art, our algorithm is the only one that does not use the results in [4], which give a linear-time algorithm able to establish whether a path graph is also a directed path graph (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ...
In this section we introduce some results and notations from [1], which give a new characterization of path graphs, summarized in Theorem 6. Indirectly, some of these results allow us to efficiently recognize directed path graphs too (see Section 5 and Theorem 9).
The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prov...
interval graphs ⊂ rooted path graphs ⊂ directed path graphs ⊂ path graphs ⊂ chordal graphs.
B
In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the origi...
Dolphins: this network consists of frequent associations between 62 dolphins in a community living off Doubtful Sound. In the Dolphins network, a node denotes a dolphin, and an edge stands for companionship [dolphins0, dolphins1, dolphins2]. The network splits naturally into two large groups, females and males [dolphins1; ...
The development of the Internet not only changes people's lifestyles but also produces and records a large amount of network-structured data. Networks are therefore ever-present in daily life, such as friendship networks and social networks, and they are also essential in science, such as biological networks (2002F...
The ego-networks dataset contains more than 1000 ego-networks from Facebook, Twitter, and GooglePlus. In an ego-network, all the nodes are friends of one central user and the friendship groups or circles (depending on the platform) set by this user can be used as ground truth communities. The SNAP ego-networks are ope...
A
In each iteration, variational transport approximates the update in (1.1) by first solving the dual maximization problem associated with the variational form of the objective and then using the obtained solution to specify a direction to push each particle. The variational transport algorithm can be viewed as a forward...
To showcase these advantages, we consider an instantiation of variational transport where the objective functional $F$ satisfies the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) with respect to the Wasserstein distance and the variational problem associated with $F$ is solved via kernel methods. I...
Our Contribution. Our contribution is twofold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation. In each iteration, variati...
Compared with existing methods, variational transport features a unified algorithmic framework that enjoys the following advantages. First, by considering functionals with a variational form, the algorithm can be applied to a broad class of objective functionals.
D
2) MetaVIM shows good generalization for different scenarios and configurations. MetaVIM performs the second best in Hangzhou with the mixedl configuration, Jinan with the real configuration and Shenzhen with the mixedl configuration, and performs best in other scenarios. Overall, MetaVIM has the best mean performance...
1) In general, RL methods perform better than conventional methods, which indicates the advantage of RL. The reason is that conventional methods often rely on prior knowledge, which may fail in some cases. A typical case is MaxPressure. It shows good performances on several cases including Hangzhou with the r...
Aside from MaxPressure, analysed above, GeneraLight achieves the best result in Hangzhou with the mixedl configuration, while performing poorly in other scenarios. The reason is that GeneraLight trains several models on diverse generated traffic flows, and selects the model at testing time by matching the flow. Hence, it limits the genera...
The method is evaluated in two modes: (1) Common Testing Mode: the model trained on one scenario with one traffic flow configuration is tested on the same scenario with the same configuration. It is used to validate the ability of the RL algorithm to find the optimal policy.
B
$\mathbf{g}:(\mathbf{x},\mathbf{y})\mapsto\big(\mathbf{f}(\mathbf{x}),\ J(\mathbf{x})\,\mathbf{y},\ R\,\mathbf{y}-\mathbf{e}\big)$
$\mathbf{y}_{*}\in\mathbb{R}^{4}$ such that $(\mathbf{x}_{*},\mathbf{y}_{*})$ ...
If $\hat{\mathbf{x}}$ is an ultrasingular zero of $\mathbf{f}$ where $r=\mathrm{rank}\left(J(\hat{\mathbf{x}})\right)$...
where $R$ is a random $(m-r)\times m$ matrix and $\mathbf{e}\neq\mathbf{0}$. If $(\mathbf{x}_{*},\mathbf{y}_{*})$ ...
Theorem 5.1 on the mapping $(\mathbf{x},\mathbf{y})\mapsto\mathbf{f}(\mathbf{x})-\mathbf{y}$ at $(\mathbf{x}_{*},\mathbf{y}_{*})$...
C
Online bin packing has a long history of study. The simplest algorithm is NextFit, which places an item into its single open bin when possible; otherwise, it closes the bin (does not use it anymore) and opens a new bin for the item. FirstFit is another simple heuristic that places an item into the first bin of suffici...
Online bin packing was recently studied under an extension of the advice complexity model, in which the advice may be untrusted (?). Here, the algorithm’s performance is evaluated only at the extreme cases in which the advice is either error-free or adversarially generated, namely with respect to its consistency and i...
In this setting, the objective is to minimize the expected loss, defined as the difference between the number of bins opened by the algorithm, and the total size of all items normalized by the bin capacity. Ideally, one aims for a loss that is as small as $o(n)$, where $n$ is the nu...
To obtain the best theoretical performance, we can choose $A$ as the algorithm with the best known competitive ratio, that is, the Advanced Harmonic algorithm (?). However, as discussed in Section 2, such algorithms belong to a class that is tailored to worst-case competitive analysis, and do not tend to perform well...
These algorithms are variants of the classic Harmonic algorithm (?), which places items of approximately equal sizes, according to a harmonic sequence, in the same bin. The currently best algorithm is the Advanced Harmonic (AH) algorithm, which has a competitive ratio of 1.57829 (?), whereas the best-known lower bound ...
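The two simple heuristics described earlier in this section can be sketched directly (unit bin capacity assumed; the example item list is our own illustration). NextFit keeps a single open bin and closes it the moment an item does not fit, while FirstFit scans all open bins and so can reuse leftover space:

```python
def next_fit(items, cap=1.0):
    """NextFit: one open bin; close it whenever the next item does not fit."""
    bins, space = 0, 0.0
    for s in items:
        if s > space:
            bins, space = bins + 1, cap   # close the current bin, open a new one
        space -= s
    return bins

def first_fit(items, cap=1.0):
    """FirstFit: place each item into the first bin with sufficient free space."""
    bins = []                             # free space remaining in each open bin
    for s in items:
        for i, free in enumerate(bins):
            if s <= free:
                bins[i] -= s
                break
        else:
            bins.append(cap - s)          # no bin fits: open a new one
    return len(bins)

# FirstFit reuses the leftover 0.5 in the first bin; NextFit has closed it.
items = [0.5, 0.7, 0.5]
print(next_fit(items), first_fit(items))  # 3 2
```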
D
In literature, there exist a huge variety of 3D shape reconstruction models. The most popular ones are dense, pixel-wise depth maps, or normal maps (Eigen et al., 2014; Bansal et al., 2016; Bednarik et al., 2018; Tsoli et al., 2019; Zeng et al., 2019), point clouds (Fan et al., 2017; Qi et al., 2017b; Yang et al., 2018...
Patch-based approaches (Yang et al., 2018b; Groueix et al., 2018; Bednarik et al., 2020; Deng et al., 2020b) are much more flexible and enable modeling virtually any surface, including those with a non-disk topology. This is achieved using parametric mappings to transform 2D patches into a set of 3D shapes. The first d...
Recently proposed object representations address this pitfall of point clouds by modeling object surfaces with polygonal meshes (Wang et al., 2018; Groueix et al., 2018; Yang et al., 2018b; Spurek et al., 2020a, b). They define a mesh as a set of vertices that are joined with edges in triangles. These triangles create...
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperClou...
A
For the non-strongly convex-concave case, distributed SPPs with local and global variables were studied in [41], where the authors proposed a subgradient-based algorithm for non-smooth problems with an $O(1/\sqrt{N})$ convergence guarantee ($N$ is the n...
Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem and solve it by the DMP algorithm. Similarly to Section 3, we consider an SPP in the proximal setup and introduce Lagrangian multipliers for the common variables. However, in t...
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ...
Paper [61] introduced an extra-gradient algorithm for distributed multi-block SPPs with affine constraints. Their method covers the Euclidean case and the algorithm has an $O(1/N)$ convergence rate. Our paper proposes an algorithm based on adding Lagrangian multipliers to consensus constr...
D
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class.
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
where $\hat{L}=\hat{D}^{t}\hat{D}$ is the lower right $(|V|-1)\times(|V|-1)$ submatrix of the ...
The remainder of this section is dedicated to express the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describe an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric...
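The strictly fundamental class mentioned above has a simple constructive description: pick a spanning tree, and every non-tree edge closes exactly one cycle with the tree path between its endpoints, yielding $|E| - |V| + 1$ basis cycles. A hedged sketch on a toy graph (the graph and BFS tree choice are our own illustration):

```python
from collections import deque

def fundamental_cycles(nodes, edges, root=0):
    """Strictly fundamental cycle basis from a BFS spanning tree."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = {root: None}
    q = deque([root])
    while q:                               # BFS spanning tree via parent pointers
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    tree = {frozenset((v, p)) for v, p in parent.items() if p is not None}

    def path_to_root(u):
        out = [u]
        while parent[u] is not None:
            u = parent[u]
            out.append(u)
        return out

    cycles = []
    for u, v in edges:
        if frozenset((u, v)) not in tree:  # each non-tree edge closes one cycle
            pu, pv = path_to_root(u), path_to_root(v)
            common = set(pu) & set(pv)
            meet = next(x for x in pu if x in common)   # lowest common ancestor
            cycles.append(pu[:pu.index(meet)] + [meet] + pv[:pv.index(meet)][::-1])
    return cycles

# Square 0-1-2-3 with the diagonal (0, 2): 5 - 4 + 1 = 2 basis cycles.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cycles = fundamental_cycles(nodes, edges)
print(cycles)
```

The minimum cycle basis problem then asks for the basis, not necessarily fundamental, whose cycles have the smallest total length.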
D
For any simplicial complex $K$ and integers $b\geq 1$ and $m>\mu(K)$, there exists an integer $t=t(b,K,m)$ with the following property: If $\mathcal{F}$ is an $m$-...
a positive fraction of the $m$-tuples to have a nonempty intersection, where for $\dim K>1$, $m$ is some hypergraph Ramsey number depending on $b$ and $K$. So in order to prove Corollary 1.3 it suffices to show that if a positive fraction of the ...
We first prove, in Section 3, that complexes with a forbidden simplicial homological minor also have a forbidden grid-like homological minor. The proof uses the stair convexity of Bukh et al. [8] to build, in a systematic way, chain maps from simplicial complexes to cubical complexes. We then adapt, in Section 4, the m...
The proof of Theorem 2.1 is quite involved and builds on the method of constrained chain maps developed in [18, 35] to study intersection patterns via homological minors [37]. This technique, which we briefly outline here, was specifically designed for complete intersection patterns. A major part of this paper, all of...
In this paper we are concerned with generalizations of Helly's theorem that allow for more flexible intersection patterns and relax the convexity assumption. A famous example is the celebrated $(p,q)$-theorem [3], which asserts that for a finite family of convex sets in $\mathbb{R}^{d}$...
C
Teal color encodes the current action’s score, and brown the best result reached so far. The choice of colors was made deliberately because they complement each other, and the former denotes the current action since it is brighter than the latter. If the list of features is long, the user can scroll this view.
Fig. 3(b) is a table heatmap view with five automatic feature selection techniques, their Average contribution, and an Action button to exclude any number of features. As we originally train our ML algorithm with all features, the yellow color (one of the standard colors used for highlighting [77]) in the last colu...
High scores were reached in terms of accuracy, precision, and recall. All in all, with FeatureEnVi we improve the total combined score by using 6 well-engineered features instead of the original 11. In contrast, Rojo et al. [33] reported a slight decrease in performance when selecting 6 features for this task as a ...
A use case presented in a visual diagnosis tool revealed that feature generation involving the combination of two features is capable of a slight increase in performance [30]. The authors tested the same mathematical operations as in our system (i.e., addition, subtraction, multiplication, and division), but the generati...
Using our approach, we managed to achieve the same accuracy as before, 89%, compared to 83% reported by Mansouri et al. [94] for the additional external data set. For precision and recall, we always use macro-average, which is identical to Mansouri et al. [94]. On the one hand, the precision was 4% lower in both test a...
B
We set the mean functions as $\mu^{(j)}=0$, $j=0,1,2$ [21]. However, if we are given some prior information on the shape and structure of $g_{j}$ ...
We use two geometries to evaluate the performance of the proposed approach, an octagon geometry with edges in multiple orientations with respect to the two axes, and a curved geometry (infinity shape) with different curvatures, shown in Figure 4. We have implemented the simulations in Matlab, using Yalmip/Gurobi to so...
For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using a Latin hypercube design of experiments. The BO progress is shown in Figure 5, right panel, for the optimization with constraints on the jerk and on the tracking error. Af...
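The initialization step above — drawing 20 Latin-hypercube samples over the parameter ranges before training the GPs — can be sketched with SciPy; the number of MPC parameters (3) and their bounds are placeholders, not the paper's values.

```python
# Latin hypercube design of experiments for BO initialization (sketch).
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)          # e.g. 3 MPC parameters
unit = sampler.random(n=20)                        # 20 points in [0, 1)^3
lower, upper = [0.1, 1.0, 0.01], [10.0, 100.0, 1.0]  # hypothetical bounds
samples = qmc.scale(unit, lower, upper)            # map to parameter ranges
```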
This paper demonstrated a hierarchical contour control implementation for increasing productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low-level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low-level cascade controller gains, to achieve precise contour tracking with micrometer accuracy. The MPC planner is based on a combi...
A
$\frac{\lambda}{2}\,\|\hat{y}-\gamma\|_{2}^{2},$
Figure 1: Current bias mitigation systems are tested on simple datasets that are easy to analyze, but do not offer the challenges present in realistic cases. Addressing this, we propose the Biased MNISTv1 dataset, which is easy to analyze yet reflective of real-world challenges since it contains multiple sources of bias...
To test scalability on a natural dataset, we conduct four experiments per explicit method on GQA-OOD with the explicit bias variables: a) head/tail (2 groups), b) answer class (1833 groups), c) global group (115 groups), and d) local group (133328 groups). Unlike Biased MNISTv1, we do not test with combinations of thes...
We use datasets that enable probing existing methods with critical questions regarding their robustness. We test on datasets with varying scales and types of biases, allowing us to perform highly controlled studies that analyze scalability to a large number of hidden groups.
Our study demonstrates that systems are highly sensitive to the tuning distribution, that explicit methods cannot handle multiple bias sources, and that more rigorous analysis is critical for future progress on bias mitigation algorithms. Based on our results, we argue that the community should focus on implicit meth...
C
The majority of gaze estimation systems use a single RGB camera to capture eye images, while some studies use different camera settings, e.g., using multiple cameras to capture multi-view images [121, 147, 164], using infrared (IR) cameras to handle low illumination conditions [123, 149], and using RGBD cameras to provi...
They build a multi-branch network to extract the features of each view and concatenate them to estimate 2D gaze position on the screen. Wu et al. collect gaze data using near-eye IR cameras [123]. They use a CNN to detect the locations of glints, pupil centers and corneas from IR images. Then, they build an eye model u...
The majority of gaze estimation systems use a single RGB camera to capture eye images, while some studies use different camera settings, e.g., using multiple cameras to capture multi-view images [121, 147, 164], using infrared (IR) cameras to handle low illumination conditions [123, 149], and using RGBD cameras to provi...
Tonsen et al. embed multiple millimeter-sized RGB cameras into a normal glasses frame [147]. They use multi-layer perceptrons to process the eye images captured by different cameras, and concatenate the extracted feature to estimate gaze. Lian et al. mount three cameras at the bottom of a screen [121].
The head-mounted device usually employs near-eye cameras to capture eye images. Tonsen et al. embed millimetre-sized RGB cameras into a normal glasses frame [147]. To compensate for the low resolution of the captured images, they use multiple cameras to capture multi-view images and use a neural network to regress gaze...
C
Experiments are carried out on the Real-world Masked Face Recognition Dataset (RMFRD) and the Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (...
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieved a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner. It consists of applying a TL technique to fine-tune the ...
has been successfully employed for image classification tasks krizhevsky2017imagenet . This deep model is pre-trained on a few million images from the ImageNet database through eight learned layers: five convolutional layers and three fully-connected layers. The last fully-connected layer allows it to classify one tho...
Despite the recent breakthroughs of deep learning architectures in pattern recognition tasks, they need to estimate millions of parameters in the fully connected layers, which requires powerful hardware with high processing capacity and memory. To address this problem, we present in this paper an efficient quantization b...
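A generic uniform quantization sketch illustrates the trade-off implied above (this is our own illustration, not the paper's exact scheme): fewer bits shrink the parameter representation but raise the reconstruction error.

```python
import numpy as np

def uniform_quantize(w, bits):
    """Symmetric uniform quantization with step max|w| / (2**bits - 1)
    (an illustrative scheme, not the paper's)."""
    scale = np.max(np.abs(w))
    if scale == 0:
        return w.copy()
    levels = 2 ** bits - 1
    return np.round(w / scale * levels) / levels * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)           # stand-in for a layer's weights
err8 = np.linalg.norm(w - uniform_quantize(w, 8))
err2 = np.linalg.norm(w - uniform_quantize(w, 2))
# coarser quantization (2 bits) incurs a larger error than 8 bits
```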
simonyan2014very is trained on the ImageNet dataset, which has over 14 million images and 1000 classes. Its name, VGG-16, comes from the fact that it has 16 layers. It contains different layers including convolutional layers, Max Pooling layers, Activation layers, and Fully Connected (fc) layers. There are 13 convolution...
C
Assuming $F \in [\bm{\Gamma}]$, we want to show $F, C, C' \in \llbracket\bm{\Delta}\rrbracket$. By induction on the first premise, $F, C \in \llbracket\bm{\Gamma}'\rrbracket$...
$\mathrm{proc}\;a\,(\mathbf{case}\;a^{\mathrm{W}}\,K) \rightarrow \operatorname{!cell} a\,K$, so we invoke part 2 on the $\text{SAX}^{\omega}$ derivation ...
We prove parts 2 and 3 simultaneously by lexicographic induction, first on the $\text{SAX}^{\omega}$ derivation $D$ and then on the part number, yielding induction hypotheses $IH_{2}(\text{derivation})$...
By induction on the configuration typing derivation $D$, the empty and join cases are discharged by Lemma 7. The object typing cases are covered by Lemma 6, noting that $\llparenthesis\Gamma\rrparenthesis$ persists across the semantic sequent due to memory cell persistence and monotoni...
Now, let $F \in \llparenthesis A\rrparenthesis \triangleq F \in \llparenthesis A\rrparenthesis_{n}$ for some $n$; intuitively, all of the (syntactic) types we have considered so far are defined by a lexicograp...
C
However, in this case, an unfaithful user can evade traitor tracing by producing two different fingerprints, i.e., a $\mathbf{b}'_{k}$ is u...
Thirdly, there are also studies that deal with both privacy-protected access control and traitor tracing. Xia et al. [26] introduced the watermarking technique to privacy-protected content-based ciphertext image retrieval in the cloud, which can prevent the user from illegally distributing the retrieved images. However...
Moreover, FairCMS-I does not perform any processing on the encrypted media content stored in the cloud; it only performs homomorphic and re-encryption operations on the encrypted LUT and fingerprint, which are much smaller in size. This results in outstanding cloud-side efficiency. In contrast, the two schem...
Second, we compare the cloud-side efficiency of FairCMS-I and FairCMS-II, and the results are presented in Fig. 13. As shown therein, the cloud-side efficiency of FairCMS-I is significantly higher than that of FairCMS-II, thus validating our analysis in Section VII. The main reason for the cloud-side efficiency gain of...
The owner-side efficiency and scalability performance of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Compared to FairCMS-I, it is easy to see that in FairCMS-II the cloud’s overhead is increased considerably due to the ado...
B
Graph Neural Networks (GNNs) Kipf and Welling (2017); Hamilton et al. (2017); Veličković et al. (2018) have recently emerged as an effective class of models for capturing high-order relationships between nodes in a graph and have achieved state-of-the-art results on a variety of tasks such as computer vision...
It first proposes to connect all the feature fields, and thus the multi-field features can be treated as a fully-connected graph. Then it utilizes GGNN Li et al. (2015) to model high-order feature interactions on the feature graph. KD-DAGFM Tian et al. (2023) uses knowledge distillation and proposes a lightweight stude...
The high-order relations between nodes can be modeled explicitly by stacking layers. Gated Graph Neural Networks (GGNN) Li et al. (2015) use a GRU Cho et al. (2014) to update the node representations based on the aggregated neighborhood feature information.
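A GRU-style node update of the kind GGNN performs can be sketched in numpy; the toy graph and weight matrices below are random placeholders (a real GGNN learns these parameters and includes bias terms).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # node embedding size
A = np.array([[0, 1, 1],                # toy adjacency: 3 nodes
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
H = rng.standard_normal((3, d))         # current node embeddings

# hypothetical GRU parameters (update / reset / candidate)
Wz, Uz = rng.standard_normal((d, d)), rng.standard_normal((d, d))
Wr, Ur = rng.standard_normal((d, d)), rng.standard_normal((d, d))
Wh, Uh = rng.standard_normal((d, d)), rng.standard_normal((d, d))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

M = A @ H                               # aggregate neighbor features
z = sigmoid(M @ Wz + H @ Uz)            # update gate
r = sigmoid(M @ Wr + H @ Ur)            # reset gate
h_tilde = np.tanh(M @ Wh + (r * H) @ Uh)
H_new = (1 - z) * H + z * h_tilde       # GRU-style node update
```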
In addition to not being able to effectively capture higher-order feature interactions, FM is also suboptimal because it considers the interactions between every pair of features, even if some of these interactions may not be beneficial for prediction Zhang et al. (2016); Su et al. (2020). These unhelpful feature inter...
At their core, GNNs learn node embeddings by iteratively aggregating features from the neighboring nodes, layer by layer. This allows them to explicitly encode high-order relationships between nodes in the embeddings. GNNs have shown great potential for modeling high-order feature interactions for click-through rate pr...
D
In the classical analysis of Newton’s method, when the Hessian of $f$ is assumed to be Lipschitz continuous and the function is strongly convex, one arrives at a convergence rate for the algorithm that depends on the Euclidean structure of $\mathbb{R}^{n}$...
step sizes $\gamma_{t} = 2/(t+2)$ to obtain a $\mathcal{O}(1/t)$ convergence rate for generalized self-concordant functions in terms of primal and
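The step size $\gamma_{t} = 2/(t+2)$ is the classical open-loop Frank-Wolfe schedule; a toy sketch on the probability simplex (the quadratic objective is our own, chosen only to make the $\mathcal{O}(1/t)$ behaviour visible):

```python
import numpy as np

# Frank-Wolfe with gamma_t = 2/(t+2): minimize ||x - b||^2 over the simplex.
b = np.array([0.1, 0.6, 0.3])           # optimum (lies inside the simplex)
x = np.array([1.0, 0.0, 0.0])           # start at a vertex
for t in range(1000):
    grad = 2 * (x - b)
    s = np.zeros_like(x)
    s[np.argmin(grad)] = 1.0            # linear minimization oracle (vertex)
    gamma = 2.0 / (t + 2)
    x = (1 - gamma) * x + gamma * s     # convex combination step
```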
In the classical analysis of Newton’s method, when the Hessian of f𝑓fitalic_f is assumed to be Lipschitz continuous and the function is strongly convex, one arrives at a convergence rate for the algorithm that depends on the Euclidean structure of ℝnsuperscriptℝ𝑛\mathbb{R}^{n}blackboard_R start_POSTSUPERSCRIPT italic...
Self-concordant functions have received strong interest in recent years due to the attractive properties that they allow one to prove in many statistical estimation settings [Marteau-Ferey et al., 2019, Ostrovskii & Bach, 2021]. The original definition of self-concordance has been expanded and generalized since its incept...
Logistic regression. One of the motivating examples for the development of a theory of generalized self-concordant functions is the logistic loss, as it does not match the definition of a standard self-concordant function but shares many of its characteristics.
C
Otherwise, we will find an augmenting path satisfying one of the two desired properties. This property is formalized in Observation 4.2, and the process for finding these odd cycles is formalized in Definition 4.3 and Lemma 4.4.
Informal description: Extend-Active-Paths can be seen as performing a Depth First Search (DFS) along active paths. When an active path does not get extended in a pass then, just like in DFS, Backtrack-Stuck-Structures backtracks on this active path (in our case by one matched and one unmatched arc) and continues the DF...
Informally speaking, the key observations are that in the former case, by Lemma 4.8, (a suffix of) the active path must form an odd cycle. A very convenient property of odd cycles is that as soon as they are discovered by the algorithm, their arcs can never belong to two distinct structures of the free vertices.
Our main challenge is that on the path $\alpha-\beta$, there can be many events by active paths of many distinct free vertices, where some active paths are blocked by other active paths and others form odd cycles. Our main technical contribution is to sort out this mess and show that certain positiv...
The primary goal of Extend-Active-Paths is to extend active paths of a maximal (not necessarily maximum) number of distinct free nodes with respect to a given ordering of arcs. Algorithm 7 does not achieve the same guarantee. As a consequence of this behavior of Algorithm 7, Backtrack-Stuck-Structures potentially reduce...
C
The existence of compression errors may result in inferior convergence performance compared to uncompressed or centralized algorithms. For example, the methods considered by [41, 42, 43, 44, 45, 46] can only guarantee to reach a neighborhood of the desired solutions when the compression errors exist. QDGD [47] achieves...
This is reasonable, as the compression operator induces additional errors compared to the exact method, and these additional errors can slow down convergence. Meanwhile, as the value of $b$ or $k$ increases, both CPP and B-CPP speed up since the compression errors decrease.
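The dependence on $k$ can be seen with a top-$k$ sparsifier, a common compression operator (that CPP's operator class includes exactly this one is our assumption for illustration):

```python
import numpy as np

def top_k(x, k):
    """Keep the k largest-magnitude entries and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

x = np.array([0.1, -2.0, 0.5, 3.0, -0.2])
err_small_k = np.linalg.norm(x - top_k(x, 1))
err_large_k = np.linalg.norm(x - top_k(x, 4))
# larger k → smaller compression error, consistent with the text
```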
We propose CPP – a novel decentralized optimization method with communication compression. The method works under a general class of compression operators and is shown to achieve linear convergence for strongly convex and smooth objective functions over general directed graphs. To the best of our knowledge, CPP is the...
The existence of compression errors may result in inferior convergence performance compared to uncompressed or centralized algorithms. For example, the methods considered by [41, 42, 43, 44, 45, 46] can only guarantee to reach a neighborhood of the desired solutions when the compression errors exist. QDGD [47] achieves...
To reduce the error from compression, some works [48, 49, 50] increase the compression accuracy as the iteration count grows to guarantee convergence. However, they still need high communication costs to obtain highly accurate solutions. Techniques to remedy this increased communication cost include gradient difference compres...
D
Certainly, we want to reduce the number of communications (or calls to the regularizer gradient) as much as possible. This is especially important when problem (1) is fairly personalized ($\lambda \ll L$) and information from other nodes is not significant. To solve this problem ...
Note that the lower bound does not depend on which local oracles we use. This seems natural, because from a communication point of view it does not matter how the local subproblems are solved. The same effect can be seen for decentralized (non-personalized) minimization problems: [36] gives lower bounds on communicatio...
Furthermore, many personalized federated learning problems utilize a saddle point formulation, in particular Personalized Search Generative Adversarial Networks (PSGANs) [22]. As mentioned in the examples above, saddle point problems often arise as an auxiliary tool for minimization problems. It turns out ...
$\left\{\sum_{m=1}^{M} f_{m}(x_{m}, y_{m}) + \dots + \tfrac{\lambda}{2}\|\sqrt{W}Y\|^{2}\right\}$
It is clear that the method from [29] cannot be used for saddle point problems. Sliding for saddles has its own specifics, for exactly the same reasons that the Extra Step Method is used for smooth saddles instead of the usual Descent-Ascent [42] (at least because Descent-Ascent diverges for the most common bilinear probl...
D
There is a rich polytope of possible equilibria to choose from; however, an MS must pick one at each time step. There are three competing properties which are important in this regard: exploitation, robustness, and exploration. For exploitation, maximum welfare equilibria appear to be useful. However, to prevent JPSRO...
In this work we propose using correlated equilibrium (CE) (Aumann, 1974) and coarse correlated equilibrium (CCE) as a suitable target equilibrium space for n-player, general-sum games.³ We mean games (also called environments) in a very general sense: extensive form games, multi-agent MDPs and POMDPs (stochastic games)...
We have shown that JPSRO converges to an NF(C)CE over joint policies in extensive form and stochastic games. Furthermore, there is empirical evidence that some MSs also result in high value equilibria over a variety of games. We argue that (C)CEs are an important concept in evaluating policies in n-player, general-sum ...
In Section 2 we provide background on a) correlated equilibrium (CE), an important generalization of NE, b) coarse correlated equilibrium (CCE) (Moulin & Vial, 1978), a similar solution concept, and c) PSRO, a powerful multi-agent training algorithm. In Section 3 we propose novel solution concepts called Maximum Gini ...
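The CCE condition behind these solution concepts can be checked directly for a small game; a numpy sketch on the textbook game of Chicken (the payoffs are the classic example, not from the paper, and JPSRO itself is not reproduced here):

```python
import numpy as np

# Payoffs (row player, column player); actions: 0 = Dare, 1 = Chicken.
U1 = np.array([[0.0, 7.0], [2.0, 6.0]])
U2 = U1.T                                     # symmetric game
sigma = np.array([[0.0, 0.5], [0.5, 0.0]])    # half (D,C), half (C,D)

def is_cce(U1, U2, sigma, tol=1e-9):
    """No player gains by committing to a fixed action before sampling."""
    v1 = np.sum(sigma * U1)                   # row player's expected payoff
    v2 = np.sum(sigma * U2)
    col_marginal = sigma.sum(axis=0)          # opponent marginals
    row_marginal = sigma.sum(axis=1)
    dev1 = max(U1[a] @ col_marginal for a in range(2))
    dev2 = max(row_marginal @ U2[:, a] for a in range(2))
    return dev1 <= v1 + tol and dev2 <= v2 + tol

print(is_cce(U1, U2, sigma))  # True
```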
PSRO has proved to be a formidable learning algorithm in two-player, constant-sum games, and JPSRO, with (C)CE MSs, is showing promising results on n-player, general-sum games. The secret to the success of these methods seems to lie in (C)CEs’ ability to compress the search space of opponent policies to an expressive an...
B
Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on the intuition that average-case privacy can be viewed from a Bay...
Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on the intuition that average-case privacy can be viewed from a Bay...
Differential privacy essentially provides the optimal asymptotic generalization guarantees given adaptive queries (Hardt and Ullman, 2014; Steinke and Ullman, 2015). However, its optimality is for worst-case adaptive queries, and the guarantees it offers only beat the naive intervention of splitting a dataset so ...
An alternative route for avoiding the dependence on worst-case queries and datasets was achieved using expectation-based stability notions such as mutual information and KL stability (Russo and Zou, 2016; Bassily et al., 2021; Steinke and Zakynthinou, 2020). Using these methods, Feldman and Steinke (2018) presented a ...
One cluster of works that steps away from this worst-case perspective focuses on giving privacy guarantees that are tailored to the dataset at hand (Nissim et al., 2007; Ghosh and Roth, 2011; Ebadi et al., 2015; Wang, 2019). In  Feldman and Zrnic (2021) in particular, the authors elegantly manage to track the individua...
C
In fact, we prove a slightly stronger statement. If a graph $G$ can be reduced to a graph $G'$ by iteratively removing $z$-antlers, each of width at most $k$, and the sum of the widths of this sequence of antlers is $t$...
As the first step of our proposed research program into parameter reduction (and thereby, search space reduction) by a preprocessing phase, we present a graph decomposition for Feedback Vertex Set which can identify vertices $S$ that belong to an optimal solution; and which therefore facilitate a reduction fr...
As described in Section 1, our algorithm aims to identify vertices in antlers using color coding. To allow a relatively small family of colorings to identify an entire antler structure $(C,F)$ with $|C| \leq k$, we need to bound $|F|$ in terms of...
Our algorithmic results are based on a combination of graph reduction and color coding [6] (more precisely, its derandomization via the notion of universal sets). We use reduction steps inspired by the kernelization algorithms [12, 46] for Feedback Vertex Set to bound the size of $\mathsf{antler}$...
The remainder of the paper is organized as follows. After presenting preliminaries on graphs and sets in Section 2, we prove the mentioned hardness results in Section 3. We present structural properties of antlers and how they combine in Section 4. In Section 5 we show how color coding can be used to find a large feedb...
C
Another group of methods attempts to achieve smooth boundary transition by enforcing gradient-domain smoothness [31, 63, 74, 144]. The earliest work along this research direction is Poisson image blending [121], which proposed to enforce the gradient-domain consistency with respect to the source i...
We report the results of Poisson image blending [121], GP-GAN [172], Zhang et al. [198], and MLF [194]. We also report the ground-truth composite image obtained using ground-truth alpha matte for comparison. From Fig. 9, it can be seen that the obtained composite images using predicted alpha mattes are very close to t...
Another group of methods attempts to achieve smooth boundary transition by enforcing gradient-domain smoothness [31, 63, 74, 144]. The earliest work along this research direction is Poisson image blending [121], which proposed to enforce the gradient-domain consistency with respect to the source i...
To avoid the color bleeding and halo effect brought by Poisson image blending, Tao et al. [150] developed a two-step algorithm: first processing the gradient values on the boundary and then employing a weighted integration scheme to reconstruct the image from its gradient field.
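The paste-gradients-then-reintegrate idea underlying these methods can be illustrated in one dimension (the toy signals are our own; real Poisson blending solves a 2-D Poisson equation with boundary conditions):

```python
import numpy as np

target = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
source = np.array([10.0, 10.5, 11.0, 11.5, 12.0, 12.5])

grad = np.diff(target)                 # gradients of the target signal
grad[1:4] = np.diff(source)[1:4]       # paste the source gradients inside
# reintegrate from the target's boundary value -> seamless 1-D composite
blended = np.concatenate(([target[0]], target[0] + np.cumsum(grad)))
```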
Among them, the works [172, 198, 194] not only enable smooth transition over the boundary but also reduce the illumination discrepancy between foreground and background; the latter is the goal of image harmonization in Section IV. In this section, we only introduce the way they enable smooth transition ov...
C
Regrettably, currently available open datasets, such as PeMS [8], METR [9] and NYC Cabs [10], are limited to either traffic speeds or taxi-related data. Consequently, they cannot fully support studies of a realistic and comprehensive smart city system. Moreover, individual datasets cannot be easily merge...
Data-driven analytical techniques have become increasingly prevalent in both the research community and industry for addressing various tasks in urban computing [1]. In recent years, several machine learning techniques, including deep learning [2, 3], transfer learning [4, 5], and reinforcement learning [6, 7], have b...
In the present study, we have introduced CityNet, a multi-modal dataset specifically designed for urban computing in smart cities, which incorporates spatio-temporally aligned urban data from multiple cities and diverse tasks. To the best of our knowledge, CityNet is the first dataset of its kind, which provides a comp...
In brief, the creation and implementation of a comprehensive urban dataset encounter two major challenges. Firstly, urban data are usually fragmented across different entities, such as governmental bodies and private enterprises, resulting in disparities in data acquisition and processing protocols. These differences ...
Regrettably, currently available open datasets, such as PeMS [8], METR [9] and NYC Cabs [10], are limited to either traffic speeds or taxi-related data. Consequently, they cannot fully support studies of a realistic and comprehensive smart city system. Moreover, individual datasets cannot be easily merge...
C
The choice of data sets in this comparative study was very broad and no specific properties were taken into account a priori. After comparing the results of the different models, it did become apparent that certain assumptions or properties can have a major influence on the performance of the models. The main examples ...
A further aspect that was not considered in this study is the conditional behaviour of the models. When constructing a model that optimizes the coverage probability (1), only the marginal coverage is controlled, i.e. the specific properties of an instance are not taken into account. In certain cases it might be releva...
In this study several types of prediction interval estimators for regression problems were reviewed and compared. Two main properties were taken into account: the coverage degree and the average width of the prediction intervals. It was found that without post-hoc calibration the methods derived from a probabilistic mo...
The choice of data sets in this comparative study was very broad and no specific properties were taken into account a priori. After comparing the results of the different models, it did become apparent that certain assumptions or properties can have a major influence on the performance of the models. The main examples ...
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat...
A
The emotion of each clip has been labelled using the following 4-class taxonomy: HAHV (high arousal, high valence); LAHV (low arousal, high valence); HALV (high arousal, low valence); and LALV (low arousal, low valence). This taxonomy is derived from Russell’s valence-arousal model of emotion \parencite{russell}, where v...
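The 4-class taxonomy maps directly to the quadrants of Russell's valence-arousal plane; a minimal sketch (the zero thresholds are our assumption, not stated by the dataset):

```python
def va_quadrant(valence, arousal):
    """Map a (valence, arousal) pair to the 4-class quadrant label."""
    if arousal >= 0:
        return "HAHV" if valence >= 0 else "HALV"
    return "LAHV" if valence >= 0 else "LALV"

print(va_quadrant(0.8, 0.9))   # HAHV
print(va_quadrant(-0.5, 0.2))  # HALV
```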
POP909 comprises piano covers of 909 pop songs compiled by \textcite{pop909}.\footnote{https://github.com/music-x-lab/POP909-Dataset} It is the only dataset among the five that provides melody/non-melody labels for each note. Specifically, each note is labelled with one of the following three classes: vocal melody (piano notes ...
We use this dataset for the emotion classification task. As Tab. 1 shows, the average length of the pieces in the EMOPIA dataset is the shortest among the five, since they are actually clips manually selected by dedicated annotators \parencite{emopia} to ensure that each performance expresses a single emotion.
Tab. 2 also shows that “our model (performance)+CP” outperforms “our model (score)+CP” greatly for the two sequence-level tasks, style classification and emotion classification. This matches our intuition, as the two tasks are highly related to the performance styles and expressions of the piano pieces.
The results show that MusicBERT achieves a testing accuracy of 37.25% for style classification and 77.78% for emotion classification. Specifically, in the style classification task, MusicBERT exhibits clear signs of overfitting and falls short in performance when compared to our model (81.75%). This outcome can be attr...
B
In this paper, we turn our attention to the special case when the graph is complete (denoted $K_{n}$) and its backbone is a (nonempty) tree or a forest (which we will denote by $T$ and $F$, respectively). Note that it has a natural in...
Since all vertices in $c$ have different colors, it is true that $|Y| \leq l$. Moreover, the optimality of $c$ implies that both $R$ and $B$ are non-empty. From the fact that $c$ is a coloring of $K_{n}$...
We will color $F$ by assigning colors to $Y_{1}$, $B_{1}$ and $R_{1}$ first, and then to $Y_{2}$...
This description draws a comparison e.g. to the $L(k,1)$-labeling problem (see e.g. [10] for a survey), where the colors of any two adjacent vertices have to differ by at least $k$ and the colors of any two vertices within distance 2 have to be distinct.
First, we note that $Z(S_{2})$, by property $(A)$ of the Zeckendorf representation, does not have two consecutive ones. Thus, the only combinations available when we sum the rightmost blocks of type A (i.e. the ones which do...
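Property (A) — that a Zeckendorf representation never contains two consecutive ones, i.e. never uses two consecutive Fibonacci numbers — follows from the greedy construction; a quick sketch:

```python
def zeckendorf(n):
    """Greedy Zeckendorf representation of n as a sum of non-consecutive
    Fibonacci numbers, largest first."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    rep = []
    for f in reversed(fibs):
        if f <= n:          # greedy: take the largest Fibonacci <= n
            rep.append(f)
            n -= f
    return rep

print(zeckendorf(100))  # [89, 8, 3]
```

After taking the largest Fibonacci number $F_i \leq n$, the remainder is less than $F_{i-1}$, so the next chosen term can never be the adjacent $F_{i-1}$ — which is exactly why no two consecutive ones appear.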
C