context
stringlengths
250
7.19k
A
stringlengths
250
4.62k
B
stringlengths
250
8.2k
C
stringlengths
250
3.89k
D
stringlengths
250
4.12k
label
stringclasses
4 values
Rnm⁢(x)=∑s=0(n−m)/2(−1)s⁢(n−m2s)⁢(D2+n−s−1n−m2)⁢xn−2⁢s.superscriptsubscript𝑅𝑛𝑚𝑥superscriptsubscript𝑠0𝑛𝑚2superscript1𝑠binomial𝑛𝑚2𝑠binomial𝐷2𝑛𝑠1𝑛𝑚2superscript𝑥𝑛2𝑠\displaystyle R_{n}^{m}(x)=\sum_{s=0}^{(n-m)/2}(-1)^{s}\binom{\frac{n-m}{2}}{s% }\binom{\frac{D}{2}+n-s-1}{\frac{n-m}{2}}x^{n-2s}.italic_R st...
to the weight such that a Gauss-Legendre integration for moments xD+m−1superscript𝑥𝐷𝑚1x^{D+m-1}italic_x start_POSTSUPERSCRIPT italic_D + italic_m - 1 end_POSTSUPERSCRIPT is engaged and the wiggly remainder of Rnmsuperscriptsubscript𝑅𝑛𝑚R_{n}^{m}italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPE...
The Newton’s Method of third order convergence is implemented for Zernike Polynomials Rnmsuperscriptsubscript𝑅𝑛𝑚R_{n}^{m}italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT by computation of the ratios
Following the original notation, we will not put the upper (azimuth) index m𝑚mitalic_m in Rnmsuperscriptsubscript𝑅𝑛𝑚R_{n}^{m}italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT—which is not a power—into parentheses.
to not exist because Rnmsuperscriptsubscript𝑅𝑛𝑚R_{n}^{m}italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT changes sign over the integration interval. (i) (14) suggests to split Rnmsuperscriptsubscript𝑅𝑛𝑚R_{n}^{m}italic_R start_POSTSUBSCRIPT italic_n end_POS...
C
0&I_{d-4}\end{array}\right)\text{for $d$ even or }x=I_{d}\text{ for $d$ odd}.italic_x = ( start_ARRAY start_ROW start_CELL italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_CELL start_CELL 0 end_CELL end_ROW start_ROW start_CELL 0 end_CELL start_CELL italic_I start_POSTSUBSCRIPT italic_d - 4 end_POSTSUBSCRIPT end_CE...
Note that a small variation of these standard generators for SL⁢(d,q)SL𝑑𝑞\textnormal{SL}(d,q)SL ( italic_d , italic_q ) are used in Magma [14] as well as in algorithms to verify presentations of classical groups, see [12], where only the generator v𝑣vitalic_v is slightly different in the two scenarios when d𝑑ditali...
The lower-unitriangular matrices u1subscript𝑢1u_{1}italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and u2subscript𝑢2u_{2}italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT are returned as words in the Leedham-Green–O’Brien standard generators [11] for SL⁢(d,q)SL𝑑𝑞\textnormal{SL}(d,q)SL ( italic_d , italic_q ) define...
Finally, we construct a second MSLP, described in Section 3.5, that writes a diagonal matrix h∈SL⁢(d,q)ℎSL𝑑𝑞h\in\textnormal{SL}(d,q)italic_h ∈ SL ( italic_d , italic_q ) as a word in the standard generators of SL⁢(d,q)SL𝑑𝑞\textnormal{SL}(d,q)SL ( italic_d , italic_q ) (when evaluated with these generators as input)...
Our aim is to determine the length and memory quota for an MSLP for the Bruhat decomposition of an arbitrary matrix g∈SL⁢(d,q)𝑔SL𝑑𝑞g\in\textnormal{SL}(d,q)italic_g ∈ SL ( italic_d , italic_q ) via the above method, with the matrices u1subscript𝑢1u_{1}italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, u2subscript𝑢2u...
A
It is hard to approximate such problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergent proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85...
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions with support weakly dependent of the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov...
The remainder of the this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its a discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis...
Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low-contrast, leading to sharper estimates. We remark that in this case, our method is similar to that of [MR3591945], with some differences. First we consider that T...
A
We think Alg-A is better in almost every aspect. This is because it is essentially simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity O⁢(n)𝑂𝑛O(n)italic_O ( italic_n ) than the others:
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases.
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on P𝑃Pitalic_P’s vertices and (2) searching the next candidate from a given one is much easier – the code length for this is 1:7 in Alg-A and in Alg-CM.
Alg-A computes at most n𝑛nitalic_n candidate triangles (proof is trivial and omitted) whereas Alg-CM computes at most 5⁢n5𝑛5n5 italic_n triangles (proved in [8]) and so as Alg-K. (by experiment, Alg-CM and Alg-K have to compute roughly 4.66⁢n4.66𝑛4.66n4.66 italic_n candidate triangles.)
D
𝖫⁢(x(i),y(i))=1⁢{y(i)=yr⁢u⁢m⁢o⁢r}⁢l⁢o⁢g⁢(y~r⁢u⁢m⁢o⁢r(i))+1⁢{y(i)=yn⁢e⁢w⁢s}⁢l⁢o⁢g⁢(y~n⁢e⁢w⁢s(i))𝖫superscript𝑥𝑖superscript𝑦𝑖1superscript𝑦𝑖subscript𝑦𝑟𝑢𝑚𝑜𝑟𝑙𝑜𝑔superscriptsubscript~𝑦𝑟𝑢𝑚𝑜𝑟𝑖1superscript𝑦𝑖subscript𝑦𝑛𝑒𝑤𝑠𝑙𝑜𝑔superscriptsubscript~𝑦𝑛𝑒𝑤𝑠𝑖\mathsf{L}(x^{(i)},y^{(i)})=1\{y^{(i)}=y...
In the lower part of the pipeline, we extract features from tweets and combine them with the creditscore to construct the feature vector in a time series structure called Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification.
Based on the credibility model we develop a novel and effective cascaded model for rumor classification. The model uses time-series structure of features to capture their temporal dynamics. Our model clearly outperforms strong baselines, especially for the targeted early stage of the diffusion. It already
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure, when building our approach for early rumor detection with our extended dataset, and we provide a deep analys...
As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on t...
D
\prime}\left(u\right)=0roman_lim start_POSTSUBSCRIPT italic_u → ∞ end_POSTSUBSCRIPT roman_ℓ ( italic_u ) = roman_lim start_POSTSUBSCRIPT italic_u → ∞ end_POSTSUBSCRIPT roman_ℓ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_u ) = 0), a β𝛽\betaitalic_β-smooth function, i.e. its derivative is β𝛽\betaitalic_β-Lipsh...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very different on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a...
Assumption 1 includes many common loss functions, including the logistic, exp-loss222The exp-loss does not have a global β𝛽\betaitalic_β smoothness parameter. However, if we initialize with η<1/ℒ⁢(𝐰⁢(0))𝜂1ℒ𝐰0\eta<1/\mathcal{L}(\mathbf{w}(0))italic_η < 1 / caligraphic_L ( bold_w ( 0 ) ) then it is straightforward to...
loss function (Assumption 1) with an exponential tail (Assumption 3), any stepsize η<2⁢β−1⁢σmax−2⁢(𝐗 )𝜂2superscript𝛽1superscriptsubscript𝜎2𝐗 \eta<2\beta^{-1}\sigma_{\max}^{-2}\left(\text{$\mathbf{X}$ }\right)italic_η < 2 italic_β start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT italic_σ start_POSTSUBSCRIPT roman_max ...
C
Twitter Features refer to basic Twitter features, such as hashtags, mentions, retweets. In addition, we derive three more URL-based features. The first is the WOT–trustworthy-based– score which is crawled from the APIs of WOT.com555https://www.mywot.com/en/api. The second is domain categories which we have collected fr...
To construct the training dataset, we collected rumor stories from the rumor tracking websites snopes.com and urbanlegends.about.com. In more detail, we crawled 4300 stories from these websites. From the story descriptions we manually constructed queries to retrieve the relevant tweets for the 270 rumors with highest i...
As we can see in Figure 9 the best result on average over 48 hours is the BestSet. Second one is All features. Except those two, the best group feature is Text features. One reason is the text feature set has the largest group of feature with totally 16 features. But if look into each feature in text feature group, we ...
User Features. Apart from the features already exploited in related work (e.g., VerifiedUser, NumOfFriends, NumOfTweets, ReputationScore), we add two new features captured from Twitter interface: (1) how many photos have been posted by a user (UserNumPhoto), and (2) whether the user lives in a large city. We use the li...
The performance of user features is similar with the Twitter features, they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours of the user feature group is UserTweetsPerDays and it is the best feature overall in the first 4 hours, but its rank decreases with ...
C
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at aspect-level, i.e., time and event type. Thus, we...
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, S⁢V⁢Ma⁢l⁢l𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT, will all features, gives a rather stable performance for both NDCG and Recall...
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and types (Breaking and Anticipate) set of entity-bearing queries. This allows us to evaluate the feature performance i.e., salience and timeliness, with time and type ...
D
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
SMC weights are updated based on the likelihood of the observed rewards: wt,a(m)∝pa⁢(yt|xt,θt,a(m))proportional-tosuperscriptsubscript𝑤𝑡𝑎𝑚subscript𝑝𝑎conditionalsubscript𝑦𝑡subscript𝑥𝑡superscriptsubscript𝜃𝑡𝑎𝑚w_{t,a}^{(m)}\propto p_{a}(y_{t}|x_{t},\theta_{t,a}^{(m)})italic_w start_POSTSUBSCRIPT italic_t , it...
The techniques used in these success stories are grounded on statistical advances on sequential decision processes and multi-armed bandits. The MAB crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making.
the fundamental operation in the proposed SMC-based MAB Algorithm 1 is to sequentially update the random measure pM⁢(θt,a|ℋ1:t)subscript𝑝𝑀conditionalsubscript𝜃𝑡𝑎subscriptℋ:1𝑡p_{M}(\theta_{t,a}|\mathcal{H}_{1:t})italic_p start_POSTSUBSCRIPT italic_M end_POSTSUBSCRIPT ( italic_θ start_POSTSUBSCRIPT italic_t , itali...
we propagate forward the sequential random measure pM⁢(θt,a|ℋ1:t)subscript𝑝𝑀conditionalsubscript𝜃𝑡𝑎subscriptℋ:1𝑡p_{M}(\theta_{t,a}|\mathcal{H}_{1:t})italic_p start_POSTSUBSCRIPT italic_M end_POSTSUBSCRIPT ( italic_θ start_POSTSUBSCRIPT italic_t , italic_a end_POSTSUBSCRIPT | caligraphic_H start_POSTSUBSCRIPT 1 : ...
B
In order to have a broad overview of different patients’ patterns over the one month period, we first show the figures illustrating measurements aggregated by days-in-week. For consistency, we only consider the data recorded from 01/03/17 to 31/03/17 where the observations are most stable.
Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10 minute intervals with at least 10 steps tracked by the google fit app.
The insulin intakes tend to be more in the evening, when basal insulin is used by most of the patients. The only difference happens to patient 10 and 12 whose intakes are earlier at day. Further, patient 12 takse approx. 3 times the average insulin dose of others in the morning.
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements match the number carbohydrate entries – it i...
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measuremtents after the meals is within this range, while patient 12 has only t...
B
The spatial allocation of attention when viewing natural images is commonly represented in the form of topographic saliency maps that depict which parts of a scene attract fixations reliably. Identifying the underlying properties of these regions would allow us to predict human fixation patterns and gain a deeper under...
Early approaches towards computational models of visual attention were defined in terms of different theoretical frameworks, including Bayesian Zhang et al. (2008) and graph-based formulations Harel et al. (2006). The former was based on the notion of self-information derived from a probability distribution over linear...
With the advent of deep neural network solutions for visual tasks such as image classification Krizhevsky et al. (2012), saliency modeling has also undergone a paradigm shift from manual feature engineering towards automatic representation learning. In this work, we leveraged the capability of convolutional neural net...
Figure 1: A visualization of four natural images with the corresponding empirical fixation maps, our model predictions, and estimated maps based on the work by Itti et al. (1998). The network proposed in this study was not trained on the stimuli shown here and thus exhibits its generalization ability to unseen instanc...
With the large-scale acquisition of eye tracking measurements under natural viewing conditions, data-driven machine learning techniques became more practicable. Judd et al. (2009) introduced a model based on support vector machines to estimate fixation densities from a set of low-, mid-, and high-level visual features...
B
loc⁡(Zi)=|Zi|+14=2i−2locsubscript𝑍𝑖subscript𝑍𝑖14superscript2𝑖2\operatorname{\textsf{loc}}(Z_{i})=\frac{|Z_{i}|+1}{4}=2^{i-2}loc ( italic_Z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) = divide start_ARG | italic_Z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT | + 1 end_ARG start_ARG 4 end_ARG = 2 start_POSTS...
It is easy to see that 1111-locality implies some sort of palindromic structure of a word. For example, palindromes like the English words radar, refer and rotator are obviously 1111-local, while the palindrome 𝚊𝚋𝚊𝚋𝚊𝚋𝚊𝚊𝚋𝚊𝚋𝚊𝚋𝚊\mathtt{a}\mathtt{b}\mathtt{a}\mathtt{b}\mathtt{a}\mathtt{b}\mathtt{a}typewriter_...
Observation 2.1 justifies that in the following, we are only concerned with condensed words (and therefore words with at most 2⁢loc⁡(α)2loc𝛼2\operatorname{\textsf{loc}}(\alpha)2 loc ( italic_α ) occurrences per symbol and total length of at most |X|⁢2⁢loc⁡(α)𝑋2loc𝛼|X|2\operatorname{\textsf{loc}}(\alpha)| italic_X |...
Notice that both Zimin words and 1111-local words have an obvious palindromic structure. However, in the Zimin words, the letters occur multiple times, but not in large blocks, while in 1111-local words there are at most 2222 blocks of each letter. With respect to palindromes, we can show the following general result ...
Regarding the locality of Zisubscript𝑍𝑖Z_{i}italic_Z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT, note that marking x2subscript𝑥2x_{2}italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT leads to 2i−2superscript2𝑖22^{i-2}2 start_POSTSUPERSCRIPT italic_i - 2 end_POSTSUPERSCRIPT marked blocks; further, marking x1subsc...
C
Hengling et al.[244] argue that substantial investments will be required to create high quality annotated databases which are essential for the success of supervised deep learning methods. In[238] the authors argue that the continued success of this field depends on sustained technological advancements in information t...
Lee et al.[250] conclude that international cooperation is required for constructing a high quality multimodal big dataset for stroke imaging. Another solution to better exploit big medical data in cardiology is to apply unsupervised learning methods, which do not require annotations.
It is evident from the literature that most deep learning methods (mostly CNNs and SDAEs) in this area consist of three parts: filtering for denoising, R-peak detection for beat segmentation and the neural network for feature extraction. Another popular set of methods is the conversion of ECGs to images, to utilize the...
Loh et al.[251] argue that deep learning and mobile technologies would expedite the proliferation of healthcare services to those in impoverished regions which in turn leads to further decline of disease rates. Mayer et al.[237] state that big data promises to change cardiology through an increase in the data gathered ...
Deep learning requires large training datasets to achieve high quality results[3]. This is especially difficult with medical data, considering that the labeling procedure of medical data is costly because it requires manual labor from medical experts.
A
Reinforcement learning is formalized in Markov decision processes (MDP). An MDP is defined as a tuple (𝒮,𝒜,P,r,γ)𝒮𝒜𝑃𝑟𝛾(\mathcal{S},{\mathcal{A}},P,r,\gamma)( caligraphic_S , caligraphic_A , italic_P , italic_r , italic_γ ), where 𝒮𝒮\mathcal{S}caligraphic_S is a state space, 𝒜𝒜{\mathcal{A}}caligraphic_A is a...
In Atari 2600 games our goal is to find a policy which maximizes the value function from the beginning of the game. Crucially, apart from an Atari 2600 emulator environment e⁢n⁢v𝑒𝑛𝑣envitalic_e italic_n italic_v we will use a neural network simulated environment e⁢n⁢v′𝑒𝑛superscript𝑣′env^{\prime}italic_e italic_n i...
We will now describe the details of SimPLe, outlined in Algorithm 1. In step 6 we use the proximal policy optimization (PPO) algorithm (Schulman et al., 2017) with γ=0.95𝛾0.95\gamma=0.95italic_γ = 0.95. The algorithm generates rollouts in the simulated environment e⁢n⁢v′𝑒𝑛superscript𝑣′env^{\prime}italic_e italic_n ...
In this work we refer to MDPs as environments and assume that environments do not provide direct access to the state (i.e., the RAM of Atari 2600 emulator). Instead we use visual observations, typically 210×160210160210\times 160210 × 160 RGB images. A single image does not determine the state. In order to reduce envir...
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ...
C
A high level overview of these combined methods is shown in Fig. 1. Although we choose the EEG epileptic seizure recognition dataset from University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could be generalized in any kind of signal classification problem.
The two layer module consists of two 1D convolutional layers (kernel sizes of 3333 with 8888 and 16161616 channels) with the first layer followed by a ReLU activation function and a 1D max pooling operation (kernel size of 2222). The feature maps of the last convolutional layer for both modules are then concatenated al...
For the CNN modules with one and two layers, xisubscript𝑥𝑖x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is converted to an image using learnable parameters instead of some static procedure. The one layer module consists of one 1D convolutional layer (kernel sizes of 3333 with 8888 channels).
Architectures of all bdsubscript𝑏𝑑b_{d}italic_b start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT remained the same, except for the number of the output nodes of the last linear layer which was set to five to correspond to the number of classes of D𝐷Ditalic_D. An example of the respective outputs of some of the m𝑚mita...
Here we also refer to CNN as a neural network consisting of alternating convolutional layers each one followed by a Rectified Linear Unit (ReLU) and a max pooling layer and a fully connected layer at the end while the term ‘layer’ denotes the number of convolutional layers.
D
As depicted in Fig. 10, for the step negotiation operation with a height of hℎhitalic_h, both ER⁢w<EC⁢wsubscript𝐸𝑅𝑤subscript𝐸𝐶𝑤E_{Rw}<E_{Cw}italic_E start_POSTSUBSCRIPT italic_R italic_w end_POSTSUBSCRIPT < italic_E start_POSTSUBSCRIPT italic_C italic_w end_POSTSUBSCRIPT and ER⁢r<EC⁢rsubscript𝐸𝑅𝑟subscript𝐸𝐶...
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result...
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas...
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ...
To assess the efficacy of the suggested autonomous locomotion mode transition strategy, simulation experiments featuring step heights of h, 2h, and 3h were conducted. These simulations involved continuous tracking of energy consumption for both total body negotiation (ER⁢wsubscript𝐸𝑅𝑤E_{Rw}italic_E start_POSTSUBSCR...
A
It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, as all information, are prone to transmission errors. In addition, the known advice models often allow information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution....
All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat...
As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation. Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online alg...
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ...
The above observations were recently made in the context of online algorithms with machine-learned predictions. Lykouris and Vassilvitskii [24] and Purohit et al. [29] show how to use predictors to design and analyze algorithms with two properties: (i) if the predictor is good, then the online algorithm should perform ...
D
This scenario, known as “early risk detection” have gained increasing interest in recent years with potential applications in rumor detection [Ma et al., 2015, 2016, Kwon et al., 2017], sexual predator detection and aggressive text identification [Escalante et al., 2017], depression detection [Losada et al., 2017, Losa...
Finally, [Loyola et al., 2018] considers the decision of “when to classify” as a problem to be learned on its own and trains two SVMs, one to make category predictions and the other to decide when to stop reading the stream. Nonetheless, the use of these two SVMs, again, hides the reasons behind both, the classificatio...
It is true that more elaborated methods that simultaneously learn the classification model and the policy to stop reading could have been used, such as in [Dulac-Arnold et al., 2011, Yu et al., 2017]. However, for the moment it is clear that this very simple approach is effective enough to outperform the remainder meth...
Although the use of MDP is very appealing from a theoretical point of view, and we will consider it for future work, the model they proposed would not be suitable for risk tasks. The use of SVMs along with Φ⁢(s)Φ𝑠\Phi(s)roman_Φ ( italic_s ) implies that the model is a black box, not only hiding the reasons for classif...
As far as we know, the approach presented in [Dulac-Arnold et al., 2011] is the first to address a (sequential) text classification task as a Markov decision process (MDP) with virtually three possible actions: read (the next sentence), classify333In practice, this action is a collection of actions, one for each catego...
D
\frac{1}{2},k})bold_w start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT = bold_w start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT - italic_η divide start_ARG 1 end_ARG start_ARG italic_K end_ARG ∑ start_POSTSUBSCRIPT italic_k ∈ [ italic_K ] end_POSTSUBSCRIPT caligraphic_C ( bold_e start_POSTSUBSCRIPT italic_t + divide s...
DEF-A achieves its best performance when λ=0.3𝜆0.3\lambda=0.3italic_λ = 0.3. In comparison, GMC+ outperforms DEF-A across different λ𝜆\lambdaitalic_λ values and shows a preference for a larger λ𝜆\lambdaitalic_λ (e.g., 0.5). In the following experiments, we set λ𝜆\lambdaitalic_λ as 0.3 for DEF-A and 0.5 for GMC+. λ=...
Since RBGS introduces a larger compressed error compared with top-s𝑠sitalic_s when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge when using RBGS as the sparsification compressor. To address this convergence issue,
Due to the larger compressed error introduced by RBGS compared with top-s𝑠sitalic_s when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge. Xu and Huang (2022) propose DEF-A to solve the convergence problem by using detached error fee...
Note that the convergence guarantee of DEF-A and its momentum variant for non-convex problems is lacking in (Xu and Huang, 2022). We provide the convergence analysis for GMC+, which can be seen as a global momentum variant of DEF-A. We eliminate the assumption of ring-allreduce compatibility from (Xu and Huang, 2022) a...
C
An advantage of SANs compared to Sparse Autoencoders [37] is that the constrain of activation proximity can be applied individually in each example instead of requiring the computation of forward-pass of all examples. Additionally, SANs create exact zeros instead near-zeros, which reduces co-adaptation between instance...
Olshausen et al. [43] presented an objective function that considers subjective measures of sparseness of the activation maps, however in this work we use the direct measure of compression ratio. Previous work by [44] have used a weighted combination of the number of neurons, percentage root-mean-squared difference and...
Regarding the φ𝜑\varphiitalic_φ metric and considering Eq. 17 our target is to estimate an as accurate as possible representation of 𝒙𝒙\bm{x}bold_italic_x through 𝜶(i)superscript𝜶𝑖\bm{\alpha}^{(i)}bold_italic_α start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT and 𝒘(i)superscript𝒘𝑖\bm{w}^{(i)}bold_italic...
Previous work by Blier et al. [31] demonstrated the ability of DNNs to losslessly compress the input data and the weights, but without considering the number of non-zero activations. In this work we relax the lossless requirement and also consider neural networks purely as function approximators instead of probabilist ...
φ𝜑\varphiitalic_φ could be seen as an alternative formalization of Occam’s razor [38] to Solomonov’s theory of inductive inference [39] but with a deterministic interpretation instead of a probabilistic one. The cost of the description of the data could be seen as proportional to the number of weights and the number o...
D
In this section, how the key parameters in the UAV ad-hoc network affect the performance of PBLLA and SPBLLA will be studied. In the simulation, besides the quantity of UAVs and channel, other parameters are fixed as constant values. We set up the post-disaster area as D=4000⁢k⁢m2𝐷4000𝑘superscript𝑚2D=4000km^{2}itali...
As the dynamic degree index τ𝜏\tauitalic_τ decreases from 0.030.030.030.03 to 0.010.010.010.01, the goal function’s values are increasing, which illustrates that lower values of τ𝜏\tauitalic_τ approach to maximizer of the global utility function. When τ=0.03𝜏0.03\tau=0.03italic_τ = 0.03, the value of U𝑈Uitalic_U do...
Let denote τ𝜏\tauitalic_τ as the dynamic degree of the scenarios. The harsher environment the networks suffers, the higher τ𝜏\tauitalic_τ it is. In the highly dynamic scenarios, we suppose that τ≥0.01𝜏0.01\tau\geq 0.01italic_τ ≥ 0.01. With proper τ𝜏\tauitalic_τ, PBLLA asymptotically converges and leads the UAV ad-...
The essence of PBLLA is selecting an alternative UAV randomly in one iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of two strategies and τ𝜏\tauitalic_τ. UAV prefers to select the power and altitude which provide higher utility. Neve...
In this part, we investigate the influence of environment dynamic on the network states. With different scenarios’ dynamic degree τ∈(0,∞)𝜏0\tau\in(0,\infty)italic_τ ∈ ( 0 , ∞ ), PBLLA and SPBLLA will converge to the maximizer of goal function with different altering strategy probability. Fig. 6 presents the influence...
D
∇^¯⁢U¯¯^∇¯𝑈\displaystyle\overline{\widehat{\nabla}}\,\,\overline{U}over¯ start_ARG over^ start_ARG ∇ end_ARG end_ARG over¯ start_ARG italic_U end_ARG =(D⁢r^¯∗U¯)⁢𝐫^+(D⁢z^¯∗U¯)⁢𝐳^absent¯^𝐷𝑟¯𝑈^𝐫¯^𝐷𝑧¯𝑈^𝐳\displaystyle=\left(\overline{\widehat{Dr}}*\overline{U}\right)\hat{\mathbf{r}%
}}+\left(\overline{\overline{Dz}}*\overline{U}\right)\hat{\mathbf{z}}= ( over¯ start_ARG over¯ start_ARG italic_D italic_r end_ARG end_ARG ∗ over¯ start_ARG italic_U end_ARG ) over^ start_ARG bold_r end_ARG + ( over¯ start_ARG over¯ start_ARG italic_D italic_z end_ARG end_ARG ∗ over¯ start_ARG italic_U end_ARG ) over^ ...
}+\left(\overline{\widehat{Dz}}*\overline{U}\right)\hat{\mathbf{z}}= ( over¯ start_ARG over^ start_ARG italic_D italic_r end_ARG end_ARG ∗ over¯ start_ARG italic_U end_ARG ) over^ start_ARG bold_r end_ARG + ( over¯ start_ARG over^ start_ARG italic_D italic_z end_ARG end_ARG ∗ over¯ start_ARG italic_U end_ARG ) over^ st...
\widehat{Dz}}*\overline{U}\right)\,/\,\widehat{r}\right)\right)over¯ start_ARG over¯ start_ARG roman_Δ start_POSTSUPERSCRIPT start_FLOATSUPERSCRIPT ∗ end_FLOATSUPERSCRIPT end_POSTSUPERSCRIPT end_ARG end_ARG over¯ start_ARG italic_U end_ARG = over¯ start_ARG italic_r end_ARG start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT (...
\widehat{Dz}}*\overline{U}\right)\right)\right)\right)\,/\,\overline{r}over¯ start_ARG over¯ start_ARG roman_Δ end_ARG end_ARG over¯ start_ARG italic_U end_ARG = over^ start_ARG over¯ start_ARG ∇ end_ARG end_ARG ⋅ over¯ start_ARG over^ start_ARG ∇ end_ARG end_ARG over¯ start_ARG italic_U end_ARG = ( ( over^ start_ARG o...
B
Let r𝑟ritalic_r be the relation on 𝒞Rsubscript𝒞𝑅\mathcal{C}_{R}caligraphic_C start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT given to the left of Figure 12. Its abstract lattice ℒrsubscriptℒ𝑟\mathcal{L}_{r}caligraphic_L start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT is represented to the right.
First, remark that both A⁢→⁡B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B and B⁢→⁡A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A are possible. Indeed, if we set g=⟨b,a⟩𝑔𝑏𝑎g=\langle b,a\rangleitalic_g = ⟨ italic_b , italic_a ⟩ or g=⟨a,1⟩𝑔𝑎1g=\langle a,1\rangleitalic_g = ⟨ italic_a , 1 ⟩, then r⊧gA⁢→⁡...
If no confusion is possible, the subscript R𝑅Ritalic_R will be omitted, i.e., we will use ≤,∧,∨\leq,\operatorname{\land},\operatorname{\lor}≤ , ∧ , ∨ instead of ≤R,∧R,∨Rsubscript𝑅subscript𝑅subscript𝑅\leq_{R},\operatorname{\land}_{R},\operatorname{\lor}_{R}≤ start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT , ∧ start_P...
The tuples t1subscript𝑡1t_{1}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, t4subscript𝑡4t_{4}italic_t start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT represent a counter-example to B⁢C⁢→⁡A𝐵𝐶→𝐴BC\operatorname{\rightarrow}Aitalic_B italic_C → italic_A for g1subscript𝑔1g_{1}italic_g start_POSTSUBSCRIPT 1 end_POSTSUBSCRI...
For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to A⁢→⁡B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B or B⁢→⁡A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A.
D
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optim...
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trails and averaged them. We have evaluated Dropout-DQN algorithm on CARTPOLE problem from the Class...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments, this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning namely the supervised learning and the unsupervised learning. Rein...
The sources of DQN variance are Approximation Gradient Error(AGE)[23] and Target Approximation Error(TAE)[24]. In Approximation Gradient Error, the error in gradient direction estimation of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes b...
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optim...
A
V-Net (Milletari et al., 2016) and FCN (Long et al., 2015). Sinha and Dolz (2019) proposed a multi-level attention based architecture for abdominal organ segmentation from MRI images.  Qin et al. (2018) proposed a dilated convolution base block to preserve more detailed attention in 3D medical image segmentation. Simil...
Several modified versions (e.g. deeper/shallower, adding extra attention blocks) of encoder-decoder networks have been applied to semantic segmentation (Amirul Islam et al., 2017; Fu et al., 2019b; Lin et al., 2017a; Peng et al., 2017; Pohlen et al., 2017; Wojna et al., 2017; Zhang et al., 2018d). Recently in 2018, De...
The standard CE loss function and its weighted versions, as discussed in Section 4, have been applied to numerous medical image segmentation problems (Isensee et al., 2019; Li et al., 2019b; Lian et al., 2018; Ni et al., 2019; Nie et al., 2018; Oktay et al., 2018; Schlemper et al., 2019). However, Milletari et al. (20...
Khosravan et al. (2019) proposed an adversarial training framework for pancreas segmentation from CT scans. Son et al. (2017) applied GANs for retinal image segmentation. Xue et al. (2018) used a fully convolutional network as a segmenter in the generative adversarial framework to segment brain tumors from MRI images....
V-Net (Milletari et al., 2016) and FCN (Long et al., 2015). Sinha and Dolz (2019) proposed a multi-level attention based architecture for abdominal organ segmentation from MRI images.  Qin et al. (2018) proposed a dilated convolution base block to preserve more detailed attention in 3D medical image segmentation. Simil...
C
The red line indicates the number of edges that remain in 𝐀¯¯𝐀\bar{{\mathbf{A}}}over¯ start_ARG bold_A end_ARG after sparsification. It is possible to see that for small increments of ϵitalic-ϵ\epsilonitalic_ϵ the spectral distance increases linearly, while the number of edges in the graph drops exponentially.
The GNN is then trained to fit its node representations to these pre-determined structures. Pre-computing graph coarsening not only makes the training much faster by avoiding to perform graph reduction at every forward pass, but it also provides a strong inductive bias that prevents degenerate solutions, such as entire...
The reason can be once again attributed to the low information content of the individual node features and in the sparsity of the graph signal (most node features are 0), which makes it difficult for the feature-based pooling methods to infer global properties of the graph by looking at local sub-structures.
We notice that the coarsened graphs are pre-computed before training the GNN. Therefore, the computational time of graph coarsening is much lower compared to training the GNN for several epochs, since each MP operation in the GNN has a cost 𝒪⁢(N2)𝒪superscript𝑁2\mathcal{O}(N^{2})caligraphic_O ( italic_N start_POSTSUP...
The proposed spectral algorithm is not designed to handle very dense graphs; an intuitive explanation is that 𝐯maxssubscriptsuperscript𝐯𝑠max{\mathbf{v}}^{s}_{\text{max}}bold_v start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT start_POSTSUBSCRIPT max end_POSTSUBSCRIPT can be interpreted as the graph signal with the...
A
Fernández-Delgado et al. (2014) conduct extensive experiments comparing 179 classifiers on 121 UCI datasets (Dua & Graff, 2017). The authors show that random forests perform best, followed by support vector machines with a radial basis function kernel. Therefore, random forests are often considered as a reference for n...
Random forests are trained with 500500500500 decision trees, which are commonly used in practice (Fernández-Delgado et al., 2014; Olson et al., 2018). The decision trees are constructed up to a maximum depth of ten. For splitting, the Gini Impurity is used and N𝑁\sqrt{N}square-root start_ARG italic_N end_ARG features ...
The generalization performance has been widely studied. Zhang et al. (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data. Bornschein et al. (2020) analyze the performance across different dataset sizes. Olson et al. (2018) evaluate the performance of modern...
Mapping random forests into neural networks is already used in many applications such as network initialization (Humbird et al., 2019), camera localization (Massiceti et al., 2017), object detection (Reinders et al., 2018, 2019), or semantic segmentation (Richmond et al., 2016). State-of-the-art methods (Massiceti et a...
Neural networks have become very popular in many areas, such as computer vision (Krizhevsky et al., 2012; Reinders et al., 2022; Ren et al., 2015; Simonyan & Zisserman, 2015; Zhao et al., 2017; Qiao et al., 2021; Rudolph et al., 2022; Sun et al., 2021), speech recognition (Graves et al., 2013; Park et al., 2019; Sun et...
B
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt...
for any function f:𝒮→ℝ:𝑓→𝒮ℝf:{\mathcal{S}}\rightarrow\mathbb{R}italic_f : caligraphic_S → blackboard_R. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al....
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
C
Compared to ResNets, DenseNets achieve similar performance, allow for even deeper architectures, and they are more parameter and computation efficient. However, the DenseNet architecture is highly non-uniform which complicates the hardware mapping and ultimately slows down training.
By using depthwise-separable convolutions, the number of trainable parameters as well as the number of multiply-accumulate operations (MACs) can be substantially reduced. It is empirically shown that this has little to no negative impact on prediction quality.
The challenge is to reduce the number of bits as much as possible while at the same time keeping the prediction accuracy close to that of a well-tuned full-precision DNN. Subsequently, we provide a literature overview of approaches that train reduced-precision DNNs, and, in a broader view, we also consider methods that...
In this regard, resource-efficient neural networks for embedded systems are concerned with the trade-off between prediction quality and resource efficiency (i.e., representational efficiency and computational efficiency). This is highlighted in Figure 1. Note that this requires observing overall constraints such as pre...
Section 5.1 explored the impact of several network quantization approaches and structured pruning on the prediction quality. In this section. we use the well-performing LQ-Net approach for quantization and PSP (for channel pruning) to measure the inference throughput of the quantized and pruned models separately on an ...
A
(iλ,λ′)∗⁢(ω0)=ω1+ω2subscriptsubscript𝑖𝜆superscript𝜆′subscript𝜔0subscript𝜔1subscript𝜔2(i_{\lambda,\lambda^{\prime}})_{*}(\omega_{0})=\omega_{1}+\omega_{2}( italic_i start_POSTSUBSCRIPT italic_λ , italic_λ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT end_POSTSUBSCRIPT ) start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ( ita...
ω1⁢ is the degree-1 homology class induced bysubscript𝜔1 is the degree-1 homology class induced by\displaystyle\omega_{1}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT is the degree-1 homology class induced by
ω0⁢ is the degree-1 homology class induced bysubscript𝜔0 is the degree-1 homology class induced by\displaystyle\omega_{0}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the degree-1 homology class induced by
ω2⁢ is the degree-1 homology class induced bysubscript𝜔2 is the degree-1 homology class induced by\displaystyle\omega_{2}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT is the degree-1 homology class induced by
and seeks the infimal r>0𝑟0r>0italic_r > 0 such that the map induced by ιrsubscript𝜄𝑟\iota_{r}italic_ι start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at n𝑛nitalic_n-th homology level annihilates the fundamental class [M]delimited-[]𝑀[M][ italic_M ] of M𝑀Mitalic_M. This infimal value defines FillRad⁢(M)FillRad𝑀\m...
B
As described in Subsection 6.1, we complemented the data from the tasks themselves by using the ICE-T methodology and questionnaire to gather and compare structured user feedback from both groups. The scores obtained from all participants, for all ICE-T components, can be seen in Table II. Larger is better, with green ...
The observed conclusions are confirmed when we compare the component-wise CIs for both groups—since none of them overlap—and the results of all component-wise Mann-Whitney U tests, with all U’s well below the critical value of 47, showing that t-viSNE had significantly larger scores in all four ICE-T components. These ...
A quick visual inspection of the two tables already hints at t-viSNE having superior scores than GEP in all components, with all cells being green-colored (as opposed to GEP’s table, which contains many red-colored cells). Indeed, the smallest score for t-viSNE was 4.75, while GEP got many scores under 4 (or even unde...
In this paper, we introduced t-viSNE, an interactive tool for the visual investigation of t-SNE projections. By partly opening the black box of the t-SNE algorithm, we managed to give power to users allowing them to test the quality of the projections and understand the rationale behind the choices of the algorithm whe...
On the other hand, t-viSNE obtained consistently higher scores for Tool Supportiveness, with a higher average in all the proposed tasks. The bulk of the distributions of the supportiveness scores from the two groups overlap little, mostly near outliers (the “N/A” option was chosen three times, all in the GEP group). Wh...
B
More in detail, in these algorithms we have a population with individuals that have the ability to breed and produce new offspring. Therefore, from the parents, we get children, which introduces some variety with respect to their parents. These characteristics allow them to adapt to the environment which, translated t...
Figure 2 depicts the classification we have reached, indicating, for the 518 reviewed algorithms, the number and ratio of proposals classified in each category and subcategory. It can be observed that the largest group of all is Swarm Intelligence category (more than a half of the proposed, 53%), inspired in the Swarm...
We have reviewed 518 nature- and bio-inspired algorithms and grouped them into two taxonomies. The first taxonomy has considered the source of inspiration, while the second has discriminated algorithms based on their behavior in generating new candidate solutions. We have provided clear descriptions, examples, and an e...
Table 2 compiles all reviewed algorithms that fall within this category. As could have been a priori expected, well-known classical Evolutionary Computation algorithms can be observed in this list, such as Genetic Algorithm (GA), Evolution Strategies (ES), and Differential Evolution (DE). However, other algorithms base...
The second and third most influential algorithms are GA, a very classic algorithm, and DE, a well-known algorithm whose natural inspiration resides only in the evolution of a population. Both have been used by around 5% of all reviewed nature-inspired algorithms, and they are the most representative approach in the Evo...
C
where φ⁢(⋅)𝜑⋅\varphi(\cdot)italic_φ ( ⋅ ) is certain activation function, A^=D~−12⁢A~⁢D~−12^𝐴superscript~𝐷12~𝐴superscript~𝐷12\hat{A}=\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}over^ start_ARG italic_A end_ARG = over~ start_ARG italic_D end_ARG start_POSTSUPERSCRIPT - divide start_ARG 1 e...
Figure 1: Framework of AdaGAE. k0subscript𝑘0k_{0}italic_k start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update ...
(1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspires us to devise a novel architecture for dec...
Network embedding is a fundamental task for graph type data such as recommendation systems, social networks, etc. The goal is to map nodes of a given graph into latent features (namely embedding) such that the learned embedding can be utilized on node classification, node clustering, and link prediction.
To apply graph convolution on unsupervised learning, GAE is proposed [20]. GAE firstly transforms each node into latent representation (i.e., embedding) via GCN, and then aims to reconstruct some part of the input. GAEs proposed in [20, 29, 22] intend to reconstruct the adjacency via decoder while GAEs developed in [21...
D
Closely related to volunteers is the vantage points measurements with faulty or misconfigured servers. (Mauch, 2013) noticed that some DNS resolvers do not change the source IP addresses of the DNS requests that they forward to upstream resolvers and return the DNS responses using the IP addresses of the upstream reso...
∙∙\bullet∙ Limited representativeness. Volunteer or crowd-sourcing studies, such as the Spoofer Project (Lone et al., 2018), are inherently limited due to bias introduced by the participants. These measurements are performed using a limited number of vantage points, which are set up in specific networks, and hence are...
Vantage Points. Measurement of networks which do not perform egress filtering of packets with spoofed IP addresses was first presented by the Spoofer Project in 2005 (Beverly and Bauer, 2005). The idea behind the Spoofer Project is to craft packets with spoofed IP addresses and check receipt thereof on the vantage poin...
Since the Open Resolver and the Spoofer Projects are the only two infrastructures providing vantage points for measuring spoofing - their importance is immense as they facilitated many research works analysing the spoofability of networks based on the datasets collected by these infrastructures. Nevertheless, the studi...
How widespread is the ability to spoof? There are significant research and operational efforts to understand the extent and the scope of (ingress and egress)-filtering enforcement and to characterise the networks which do not filter spoofed packets; we discuss these in Related Work, Section 2. Although the existing stu...
C
For each batch T𝑇Titalic_T from 3 through 10, the batches 1,2,…,T−112…𝑇11,2,\ldots,T-11 , 2 , … , italic_T - 1 were used to train skill NN and context+skill NN models for 30 random initializations of the starting weights. The accuracy was measured classifying examples from batch T𝑇Titalic_T (Fig. 3A, Table 1, Skill...
An alternative approach is to emulate adaptation in natural sensor systems. The system expects and automatically adapts to sensor drift, and is thus able to maintain its accuracy for a long time. In this manner, the lifetime of sensor systems can be extended without recalibration.
Second, skill NN and context+skill NN models were compared. The context-based network extracts features from preceding batches in sequence in order to model how the sensors drift over time. When added to the feedforward NN representation, such contextual information resulted in improved ability to compensate for senso...
While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape...
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal...
D
For the second change, we need to take another look at how we place the separators tisubscript𝑡𝑖t_{i}italic_t start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT. We previously placed these separators in every second nonempty drum σi:=[i⁢δ,(i+1)⁢δ]×Balld−1⁢(δ/2)assignsubscript𝜎𝑖𝑖𝛿𝑖1𝛿superscriptBall𝑑1𝛿2\sigma_{i}:=...
We generalize the case of integer x𝑥xitalic_x-coordinates to the case where the drum [x,x+1]×Balld−1⁢(δ/2)𝑥𝑥1superscriptBall𝑑1𝛿2[x,x+1]\times\mathrm{Ball}^{d-1}(\delta/2)[ italic_x , italic_x + 1 ] × roman_Ball start_POSTSUPERSCRIPT italic_d - 1 end_POSTSUPERSCRIPT ( italic_δ / 2 ) contains O⁢(1)𝑂1O(1)italic_O ( ...
It would be interesting to see whether a direct proof can be given for this fundamental result. We note that the proof of Theorem 2.1 can easily be adapted to point sets of which the x𝑥xitalic_x-coordinates of the points need not be integer, as long as the difference between x𝑥xitalic_x-coordinates of any two consecu...
Finally, we will show that the requirements for Lemma 5.7 hold, where we take 𝒜𝒜\mathcal{A}caligraphic_A to be the algorithm described above. The only nontrivial requirement is that T𝒜⁢(Pλ)⩽T𝒜⁢(P)subscript𝑇𝒜subscript𝑃𝜆subscript𝑇𝒜𝑃T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)italic_T start_POSTSUBS...
However, in order for our algorithm to meet the requirements of Lemma 5.7, we would like to avoid having a point enter a drum after the x𝑥xitalic_x-coordinates are multiplied by some factor λ>1𝜆1\lambda>1italic_λ > 1. Furthermore, since the proof of Lemma 4.3 requires every drum to be at least δ𝛿\deltaitalic_δ wide,...
D
While we define the congruence over Q∗superscript𝑄Q^{*}italic_Q start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT, we are only interested in the generated semigroup and let Σ(𝒜)=Q+/=𝒜\Sigma(\mathcal{A})=Q^{+}/{=_{\mathcal{A}}}roman_Σ ( caligraphic_A ) = italic_Q start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT / = start_POSTS...
from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups111Note that the c...
A semigroup arising in this way is called self-similar. Furthermore, if the generating automaton is finite, it is an automaton semigroup. If the generating automaton is additionally complete, we speak of a completely self-similar semigroup or of a complete automaton semigroup.
Let S𝑆Sitalic_S be a (completely) self-similar semigroup and let T𝑇Titalic_T be a finite or free semigroup. Then S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is (completely) self-similar. If furthermore S𝑆Sitalic_S is a (complete) automaton semigroup, then so is S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T.
Let S𝑆Sitalic_S be a (completely) self-similar semigroup. Then S⋆t+⋆𝑆superscript𝑡S\star t^{+}italic_S ⋆ italic_t start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT is (completely) self-similar. Furthermore, if S𝑆Sitalic_S is a (complete) automaton semigroup, then so is S⋆t+⋆𝑆superscript𝑡S\star t^{+}italic_S ⋆ italic_t ...
B
Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization Grand and Belinkov (2019); Ramakrishnan et al. (2018) or to re-scale the loss based on the difficulty o...
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende...
To reduce the reliance on linguistic priors, visual sensitivity enhancement methods attempt to train the model to be more sensitive to relevant visual regions when answering questions. Following Wu and Mooney (2019), we define the sensitivity of an answer a𝑎aitalic_a with respect to a visual region visubscript𝑣𝑖v_{i...
Both Human Importance Aware Network Tuning (HINT) Selvaraju et al. (2019) and Self Critical Reasoning (SCR) Wu and Mooney (2019), train the network to be more sensitive towards salient image regions by improving the alignment between visual cues and gradient-based sensitivity scores. HINT proposes a ranking loss betwe...
HINT uses a ranking loss, which penalizes the model if the pair-wise rankings of the sensitivities of visual regions towards ground truth answers ag⁢tsubscript𝑎𝑔𝑡a_{gt}italic_a start_POSTSUBSCRIPT italic_g italic_t end_POSTSUBSCRIPT are different from the ranks computed from the human-based attention maps.
C
For the URL model, the words in the URL path were extracted and the tf-idf of each term was recorded to create the features (Baykan et al., 2009). As privacy policy URLs tend to be shorter and have fewer path segments than typical URLs, length and the number of path segments were added as features. Since the classes w...
In order to address the requirement of a language model for the privacy domain, we created PrivBERT. BERT is a contextualized word representation model that is pretrained using bidirectional transformers (Devlin et al., 2019). It was pretrained on the masked language modelling and the next sentence prediction tasks an...
To train the RoBERTa model on the privacy policy classification task, we used the sequence classification head of the pretrained language model from HuggingFace (Wolf et al., 2019). We used the pretrained RoBERTa tokenizer to tokenize text extracted from the documents. Since Roberta accepts a maximum of 512 tokens as i...
We use the byte pair encoding tokenization technique utilized in RoBERTa and retain its cased vocabulary. We did not create a new vocabulary since the two vocabularies are not significantly different and any out-of-vocabulary words can be represented and tuned for the privacy domain using the byte pair encoding vocabu...
Table 2 shows the results for the data practice classification task comparing the performance between RoBERTa, PrivBERT and Polisis (Harkous et al., 2018), a CNN based classification model. We report reproduced results for Polisis since the original paper takes into account both the presence and absence of a label whil...
B
T4: Compare the results of two stages and receive feedback to guide interaction. To assist the knowledge generation, a comparison between the currently active stack against previously stored versions is important. In general, this includes monitoring the historical process of the stacking ensemble, facilitating intera...
The use of visualization for ensemble learning could possibly introduce further biases to the already blurry situation based on the different ML models involved. Thus, the thorough selection of both interaction techniques and visual representations that highlight and potentially overcome any cognitive biases is a major...
T5: Inspect the same view with alternative techniques and visualizations. To eventually avoid the appearance of cognitive biases, alternative interaction methods and visual representations of the same data from another perspective should be offered to the user (G5).
G5: Reveal and reduce cognitive biases. Visualizations should be carefully chosen in order to reduce cognitive biases. Cognitive bias is, in simple terms, a human judgment that drifts away from the actual information that should be conveyed by a visualization, i.e., it “involves a deviation from reality that is predic...
Interpretability and explainability is another challenge (mentioned by E3) in complicated ensemble methods, which is not necessarily always a problem depending on the data and the tasks. However, the utilization of user-selected weights for multiple validation metrics is one way towards interpreting and trusting the re...
B
We thus have 3333 cases, depending on the value of the tuple (p⁢(v,[010]),p⁢(v,[323]),p⁢(v,[313]),p⁢(v,[003]))𝑝𝑣delimited-[]010𝑝𝑣delimited-[]323𝑝𝑣delimited-[]313𝑝𝑣delimited-[]003(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))( italic_p ( italic_v , [ 010 ] ) , italic_p ( italic_v , [ 323 ] ) , italic_p ( italic_v...
{0¯,1¯,2¯,3¯,[013],[010],[323],[313],[112],[003],[113]}.¯0¯1¯2¯3delimited-[]013delimited-[]010delimited-[]323delimited-[]313delimited-[]112delimited-[]003delimited-[]113\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],% [112],[003],[113]\}.{ over¯ start_ARG 0 end_ARG , over¯ start_ARG 1 end...
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of (v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that
p⁢(v,[013])=p⁢(v,[313])=p⁢(v,[113])=1𝑝𝑣delimited-[]013𝑝𝑣delimited-[]313𝑝𝑣delimited-[]1131p(v,[013])=p(v,[313])=p(v,[113])=1italic_p ( italic_v , [ 013 ] ) = italic_p ( italic_v , [ 313 ] ) = italic_p ( italic_v , [ 113 ] ) = 1. Similarly, when f=[112]𝑓delimited-[]112f=[112]italic_f = [ 112 ],
B
We use Transformer [Vaswani et al., 2017] as the base model in dialogue generation experiment. In Persona, we use pre-trained Glove embedding [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019].
We use Transformer [Vaswani et al., 2017] as the base model in dialogue generation experiment. In Persona, we use pre-trained Glove embedding [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019].
In text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
To answer RQ2, we find the fine-tuning epochs for each task in Persona where its BLEU and C Score reaches the best respectively to find the impact of data quantity and the task profile (persona description) on fine-tuning. (Table 1) We cluster the tasks with similar best fine-tuning epoch number and calculate the aver...
B
A conceptual frame structure is designed which contains two types of time slots. One is the exchanging slot (e-slot) and the other is the tracking slot (t-slot). Let us first focus on the e-slot. It is assumed that UAVs exchange MSI every T𝑇Titalic_T t-slots, i.e., in an e-slot, to save resource for payload transmissi...
Moreover, the data block of MSI is set as BMSI=nMSI×T×BMSIsubscript𝐵MSIsubscript𝑛MSI𝑇subscript𝐵MSIB_{\text{MSI}}=n_{\text{MSI}}\times T\times B_{\text{MSI}}italic_B start_POSTSUBSCRIPT MSI end_POSTSUBSCRIPT = italic_n start_POSTSUBSCRIPT MSI end_POSTSUBSCRIPT × italic_T × italic_B start_POSTSUBSCRIPT MSI end_POSTS...
The GP-based MSI prediction is proposed to solve the problem in [31]. Specifically, the r-UAV/t-UAV’s historical MSI is first exchanged with the t-UAV/r-UAV over a lower-frequency band and then the t-UAV will predict the future MSI of the r-UAV based on the historical MSI by using the GP-based MSI prediction model.
The tracking error of beam angles has a negative influence on the beam gain obtained by CCA. The proposed tracking error bounding algorithm uses the position/attitiude prediction error of the GP-based MSI prediction to obtain the beam angle tracking error, wherein the geometry relationship between UAVs and the Monte-Ca...
A conceptual frame structure is designed which contains two types of time slots. One is the exchanging slot (e-slot) and the other is the tracking slot (t-slot). Let us first focus on the e-slot. It is assumed that UAVs exchange MSI every T𝑇Titalic_T t-slots, i.e., in an e-slot, to save resource for payload transmissi...
C
There are other logics, incomparable in expressiveness with 𝖥𝖮Pres2subscriptsuperscript𝖥𝖮2Pres\mathsf{FO}^{2}_{\textup{Pres}}sansserif_FO start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT Pres end_POSTSUBSCRIPT, where periodicity of the spectrum has been proven [17]. The
The paper [4] shows decidability for a logic with incomparable expressiveness: the quantification allows a more powerful quantitative comparison, but must be guarded – restricting the counts only of sets of elements that are adjacent to a given element.
Related one-variable fragments in which we have only a unary relational vocabulary and the main quantification is ∃Sx⁢ϕ⁢(x)superscript𝑆𝑥italic-ϕ𝑥\exists^{S}x~{}\phi(x)∃ start_POSTSUPERSCRIPT italic_S end_POSTSUPERSCRIPT italic_x italic_ϕ ( italic_x ) are known to be decidable (see, e.g. [2]), and their decidability ...
In addition, to make the main line of argument clearer, we consider only the finite graph case in the body of the paper, which already implies decidability of the finite satisfiability of 𝖥𝖮Pres2subscriptsuperscript𝖥𝖮2Pres\mathsf{FO}^{2}_{\textup{Pres}}sansserif_FO start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_...
There are other logics, incomparable in expressiveness with 𝖥𝖮Pres2subscriptsuperscript𝖥𝖮2Pres\mathsf{FO}^{2}_{\textup{Pres}}sansserif_FO start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT Pres end_POSTSUBSCRIPT, where periodicity of the spectrum has been proven [17]. The
A
The key to our analysis is a mean-field perspective, which allows us to associate the evolution of a finite-dimensional parameter with its limiting counterpart over an infinite-dimensional Wasserstein space (Villani, 2003, 2008; Ambrosio et al., 2008; Ambrosio and Gigli, 2013). Specifically, by exploiting the permutati...
Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che...
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Related Work. When the value function approximator is linear, the convergence of TD is extensively studied in both continuous-time (Jaakkola et al., 1994; Tsitsiklis and Van Roy, 1997; Borkar and Meyn, 2000; Kushner and Yin, 2003; Borkar, 2009) and discrete-time (Bhandari et al., 2018; Lakshminarayanan and
D
Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the ne...
We also study the merging operations, concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers, to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4.
Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t...
Different from encoder layers, decoder layers involve two multi-head attention sub-layers: a masked self-attention sub-layer to attend the decoding history and a cross-attention sub-layer to attend information from the source side. Given that the depth-wise LSTM unit only takes one input, we introduce a merging layer ...
The encoder layer with the depth-wise LSTM unit, as shown in Figure 2, first performs the self-attention computation, then the depth-wise LSTM unit takes the self-attention results and the output and the cell state of the previous layer to compute the output and the cell state of the current layer.
D
Assume that ℒX′subscriptsuperscriptℒ′𝑋\mathcal{L}^{\prime}_{X}caligraphic_L start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_X end_POSTSUBSCRIPT defines a diagram base of X𝑋Xitalic_X. Consider a definable open set U∩Y∈ℒY𝑈𝑌subscriptℒ𝑌U\cap Y\in\mathcal{L}_{Y}italic_U ∩ italic_Y ∈ caligraphic_L...
particular, U∩Y∈ℒY𝑈𝑌subscriptℒ𝑌U\cap Y\in\mathcal{L}_{Y}italic_U ∩ italic_Y ∈ caligraphic_L start_POSTSUBSCRIPT italic_Y end_POSTSUBSCRIPT is a definable open set of Y𝑌Yitalic_Y. By restriction, U∩X∈ℒX𝑈𝑋subscriptℒ𝑋U\cap X\in\mathcal{L}_{X}italic_U ∩ italic_X ∈ caligraphic_L start_POSTSUBSCRIPT italic_X end_POSTS...
where U∈ℒ𝑈ℒU\in\mathcal{L}italic_U ∈ caligraphic_L. Remark that U∩X∈ℒX𝑈𝑋subscriptℒ𝑋U\cap X\in\mathcal{L}_{X}italic_U ∩ italic_X ∈ caligraphic_L start_POSTSUBSCRIPT italic_X end_POSTSUBSCRIPT is a definable open set in X𝑋Xitalic_X, hence there exists a family
ℒℒ\mathcal{L}caligraphic_L of ℘⁢(Z)Weierstrass-p𝑍\wp(Z)℘ ( italic_Z ), we write ℒX≜{U∩X∣U∈ℒ}≜subscriptℒ𝑋conditional-set𝑈𝑋𝑈ℒ\mathcal{L}_{X}\triangleq\{U\cap X\mid U\in\mathcal{L}\}caligraphic_L start_POSTSUBSCRIPT italic_X end_POSTSUBSCRIPT ≜ { italic_U ∩ italic_X ∣ italic_U ∈ caligraphic_L } for the lattice induce...
Assume that ℒX′subscriptsuperscriptℒ′𝑋\mathcal{L}^{\prime}_{X}caligraphic_L start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_X end_POSTSUBSCRIPT defines a diagram base of X𝑋Xitalic_X. Consider a definable open set U∩Y∈ℒY𝑈𝑌subscriptℒ𝑌U\cap Y\in\mathcal{L}_{Y}italic_U ∩ italic_Y ∈ caligraphic_L...
B
The first limitation is that the principal point needs to be at the center of the image. Observing that the principal point is slightly disturbed around the center of the image, we mainly consider the estimation of distortion coefficients using the proposed ordinal distortion in our work. Nevertheless, our method can b...
In this section, we first state the details of the synthetic distorted image dataset and the training process of our learning model. Subsequently, we analyze the learning representation for distortion estimation. To demonstrate the effectiveness of each module in our framework, we conduct an ablation study to show the ...
The second limitation is that the distortion needs to be radially symmetric. This problem may be addressed by the grid optimization technique in computer graphics, and we can teach the network to learn an asymmetric grid to warp each pixel of the distorted image. Based on the above limitations and the presented solutio...
To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images based on the PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and the proposed MDLD (mean distortion level deviation). All the comparison methods are used to conduct the distortion recti...
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify...
B
The momentum coefficient is set as 0.9 and the weight decay is set as 0.001. The initial learning rate is selected from {0.001,0.01,0.1}0.0010.010.1\{0.001,0.01,0.1\}{ 0.001 , 0.01 , 0.1 } according to the performance on the validation set. We do not adopt any learning rate decay or warm-up strategies. The model is tra...
Hence, with the same number of gradient computations, SNGM can adopt a larger batch size than MSGD to converge to the ϵitalic-ϵ\epsilonitalic_ϵ-stationary point. Empirical results on deep learning further verify that SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods...
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
showed that existing SGD methods with a large batch size will lead to a drop in the generalization accuracy of deep learning models. Figure 1 shows a comparison of training loss and test accuracy between MSGD with a small batch size and MSGD with a large batch size. We can find that large-batch training indeed
A
5555-approximation for homogeneous 2S-MuSup-Poly, with |𝒮|≤2m𝒮superscript2𝑚|\mathcal{S}|\leq 2^{m}| caligraphic_S | ≤ 2 start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT and runtime poly(n,m,Λ)poly𝑛𝑚Λ\operatorname*{poly}(n,m,\Lambda)roman_poly ( italic_n , italic_m , roman_Λ ).
We follow up with 3333-approximations for the homogeneous robust outlier MatSup and MuSup problems, which are slight variations on algorithms of [6] (specifically, our approach in Section 4.1 is a variation on their solve-or-cut methods). In Section 5, we describe a 9-approximation algorithm for an inhomogeneous MatSu...
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any ρ𝜌\rhoitalic_ρ-approximation algorithm for the robust outlier problem into a (ρ+2)𝜌2(\rho+2)( italic_ρ + 2 )-approximation algorithm for the corresponding two-stage sto...
If we have a ρ𝜌\rhoitalic_ρ-approximation algorithm for AlgRW for given 𝒞,ℱ,ℳ,R𝒞ℱℳ𝑅\mathcal{C},\mathcal{F},\mathcal{M},Rcaligraphic_C , caligraphic_F , caligraphic_M , italic_R, then we can get an efficiently-generalizable (ρ+2)𝜌2(\rho+2)( italic_ρ + 2 )-approximation algorithm for the corresponding problem 𝒫𝒫\m...
We now describe a generic method of transforming a given 𝒫𝒫\mathcal{P}caligraphic_P-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it obtain our 11-a...
B
In real networked systems, the information exchange among nodes is often affected by communication noises, and the structure of the network often changes randomly due to packet dropouts, link/node failures and recreations, which are studied in [8]-[10].
However, a variety of random factors may co-exist in practical environment. In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d...
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e. Both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent.) rather than i.i.d. graph sequences as in [12]-[15], and additive and...
such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), et al. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function...
A
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i...
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
Note that, the application scenarios of differential privacy and the models of k𝑘kitalic_k-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of k𝑘kitalic_k-anonymity family sanitize the origi...
D
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Due to limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger mask to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (20...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an...
A
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
We denote by εi:{−1,1}n→{−1,1}:subscript𝜀𝑖→superscript11𝑛11\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT : { - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → { - 1 , 1 } the projection onto the i𝑖iitalic_i-s coordinate: εi⁢(δ1,…,δn)=δisubscript𝜀𝑖subsc...
B
The proof idea is similar to that of Theorem 1. The only difference is that within each piecewise-stationary segment, we use the hard instance constructed by Zhou et al. (2021); Hu et al. (2022) for inhomogenous linear MDPs. Optimizing the length of each piecewise-stationary segment N𝑁Nitalic_N and the variation magni...
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inh...
The rest of the paper is organized as follows. Section 2 presents our problem definition. Section 3 establishes the minimax regret lower bound for nonstationary linear MDPs. Section 4 and Section 5 present our algorithms LSVI-UCB-Restart, Ada-LSVI-UCB-Restart and their dynamic regret bounds. Section 6 shows our experi...
In this section, we derive minimax regret lower bounds for nonstationary linear MDPs in both inhomogeneous and homogeneous settings, which quantify the fundamental difficulty when measured by the dynamic regret in nonstationary linear MDPs. More specifically, we consider inhomogeneous setting in this paper, where the t...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202...
A
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst...
B
In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct compr...
In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct compr...
The performance of decentRL at the input layer notably lags behind that of other layers and AliNet. As discussed in previous sections, decentRL does not use the embedding of the central entity as input when generating its output embedding. However, this input embedding can still accumulate knowledge by participating i...
The existing methods for KG embedding and word embedding exhibit even more similarities. As shown in Figure 1, the KG comprises three triplets conveying similar information to the example sentence. Triplet-based KG embedding models like TransE [11] transform the embedding of each subject entity and its relation into a ...
This work is funded by National Natural Science Foundation of China (NSFCU23B2055/NSFCU19B2027/NSFC91846204), Zhejiang Provincial Natural Science Foundation of China (No.LGG22F030011), and Fundamental Research Funds for the Central Universities (226-2023-00138).
D
We observe that our method performs the best in most of the games, in both the sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are ...
We observe that our method performs the best in most of the games, in both the sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are ...
We implement a CVAE-based exploration algorithm by modifying the prior of VDM to a standard Gaussian444The code is released at https://github.com/Baichenjia/CAVE_NoisyMinist (for Noisy-Mnist) and https://github.com/Baichenjia/CVAE_exploration (for other tasks) for reproducibility and further improvement.. For Noisy-Mn...
Nevertheless, the introduce of latent variable often introduce instability to neural networks. For example, the popular deep learning models like VAEs and GANs are shown to be unstable since the introduce of stochasticity in latent space [51, 52]. We find VDM performs generally well and shows small performance varianc...
(i) For the network architecture, the important hyper-parameters include the dimensions of latent space Z𝑍Zitalic_Z, the dimensions of state features d𝑑ditalic_d, and the use of skip-connection between the prior and generative networks. We add an ablation study in Tab. IV to perform a grid search. The result shows t...
C
+1)}\|_{C^{0}(\Omega)}}{2^{n}(n+1)!}\,,\,\,\,P_{A}=\mathrm{Cheb}_{n}^{1\mathrm% {st}}\,.| italic_f ( italic_x ) - italic_Q start_POSTSUBSCRIPT italic_f , italic_A end_POSTSUBSCRIPT ( italic_x ) | ≤ divide start_ARG | italic_f start_POSTSUPERSCRIPT ( italic_n + 1 ) end_POSTSUPERSCRIPT ( italic_ξ start_POSTSUBSCRIPT ital...
This result states that any sufficiently smooth function f𝑓fitalic_f can be approximated by piecewise polynomial functions, which allows to approximate f𝑓fitalic_f by Hermite or spline interpolation. Generalizations of this result rely on this fact and are formulated in a similar manner [23, 24, 26].
Recently, Lloyd N. Trefethen [83] proposed a way of delivering a potential solution to the problem: For continuous functions f:Ω⟶ℝ:𝑓⟶Ωℝf:\Omega\longrightarrow\mathbb{R}italic_f : roman_Ω ⟶ blackboard_R that are analytic in the unbounded Trefethen domain (a genralization of a Bernstein ellipse) Nm,ρ⊊Ω=[−1,1]msubscript�...
Our result in Eq. (7.8) provides a similar bound on the approximation error in m𝑚mitalic_mD whenever the k𝑘kitalic_k-th derivatives of f𝑓fitalic_f are known or bounded. However, usually these bounds are unknown. By validating the proposed Trefethen approximation rates in the next section, we even though provide a po...
Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to scale sub-exponential with space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are abl...
C
As a result, the sample complexity for estimating the Wasserstein distance W⁢(μ,ν)𝑊𝜇𝜈W(\mu,\nu)italic_W ( italic_μ , italic_ν ) up to ϵitalic-ϵ\epsilonitalic_ϵ sub-optimality gap is of order 𝒪~⁢(ϵd∨2)~𝒪superscriptitalic-ϵ𝑑2\tilde{\mathcal{O}}(\epsilon^{d\lor 2})over~ start_ARG caligraphic_O end_ARG ( italic_ϵ st...
Motivated by Example 1, we propose the projected Wasserstein distance in Definition 2 to improve the sample complexity. This distance can be viewed as a special IPM with the function space defined in (1), a collection of 1111-Lipschitz functions in composition with an orthogonal k𝑘kitalic_k-dimensional linear mapping.
The 1111-Wasserstein distance can be viewed as a special IPM with ℱ=Lip1ℱsubscriptLip1\mathcal{F}=\text{Lip}_{1}caligraphic_F = Lip start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, where the Rademacher complexity of ℱℱ\mathcal{F}caligraphic_F is given by [42, Example 4]:
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized. The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by ...
The orthogonal constraint on the projection mapping A𝐴Aitalic_A is for normalization, such that any two different projection mappings have distinct projection directions. The projected Wasserstein distance can also be viewed as a special case of integral probability metric with the function space
A
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and corre...
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and corre...
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise...
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
Specifically, we apply a DGM to learn the nuisance variables Z𝑍Zitalic_Z, conditioned on the output image of the first part, and use Z𝑍Zitalic_Z in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the details information captured in Z𝑍Z...
C
=(αA∨αB,βA∧βB)absentsubscript𝛼𝐴subscript𝛼𝐵subscript𝛽𝐴subscript𝛽𝐵\displaystyle=(\alpha_{A}\vee\alpha_{B},\beta_{A}\land\beta_{B})= ( italic_α start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ∨ italic_α start_POSTSUBSCRIPT italic_B end_POSTSUBSCRIPT , italic_β start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ∧ italic...
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations as a series and parallel connection were required. However, one can think about whether the four pin designs are the...
As shown in the above method, logical aggregates can be constructed with structural wiring if digital signals are computed in pairs of inverted signals. Especially for the NOT gate, you can twist the α𝛼\alphaitalic_α line and the β𝛽\betaitalic_β line once, making it much simpler to operate than a semiconductor-based ...
If a pair of lines of the same color is connected, 1, if broken, the sequence pair of states of the red line (α𝛼\alphaitalic_α) and blue line (β𝛽\betaitalic_β) determines the transmitted digital signal. Thus, signal cables require one transistor for switching action at the end. When introducing the concept of an inve...
The structure-based computer mentioned in this paper are based on Boolean Algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic 1 and 0, and mathematically describes digital electrical si...
B
where x∈𝔽n𝑥superscript𝔽𝑛x\in\mathbb{F}^{n}italic_x ∈ blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT is the state and A∈𝔽n×n𝐴superscript𝔽𝑛𝑛A\in\mathbb{F}^{n\times n}italic_A ∈ blackboard_F start_POSTSUPERSCRIPT italic_n × italic_n end_POSTSUPERSCRIPT is the state transition map represented as ...
Initially, the Koopman operator framework was used extensively for dynamics over reals (or complex) state space, and the function space is infinite-dimensional, which leads to resorting to finite-dimensional numerical approximations of the Koopman operator [28, 29] for practical computations. In our setting of dynamica...
Irrespective of whether the dynamics (2) being linear or not, the Koopman operator 𝐊𝐊\mathbf{K}bold_K is a linear operator over the function space ℱ⁢(𝔽n)ℱsuperscript𝔽𝑛\mathcal{F}(\mathbb{F}^{n})caligraphic_F ( blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ). This linearity of the Koopman operator...
When the dynamics is non-linear, the computation of the cycle set is a computationally hard problem. Apart from brute force computations, the work [26] gives an algorithmic procedure to estimate the cycle set of a non-linear dynamical system over finite fields by using the Koopman operator and constructing a reduced Ko...
The first statement of Theorem 3 does not imply an equivalence between the cycle structure of the permutation polynomial and the cycle set of the linear dynamics (19), and the former is a subset of the latter. This is because the linear dynamics evolve over a larger set 𝔽Nsuperscript𝔽𝑁\mathbb{F}^{N}blackboard_F star...
C
Figure 1: Boxplots of test accuracy for the different meta-learners, with 300 views and 25 features per view. The results are shown for all combinations of the correlation between features within the same view (ρwsubscript𝜌𝑤\rho_{w}italic_ρ start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT), the correlation between fea...
The false discovery rate in view selection for each of the meta-learners can be observed in Figure 4. Note that the FDR is particularly sensitive to variability since its denominator is the number of selected views, which itself is a variable quantity. In particular, when the number of selected views is small, the add...
The true positive rate in view selection for each of the meta-learners can be observed in Figure 2. Ignoring the interpolating predictor for now, nonnegative ridge regression has the highest TPR, which is unsurprising seeing as it performs feature selection only through its nonnegativity constraints. Nonnegative ridge...
In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of vi...
The false positive rate in view selection for each of the meta-learners can be observed in Figure 3. Again ignoring the interpolating predictor for now, the ranking of the different meta-learners is similar to their ranking by TPR. Nonnegative ridge regression has the highest FPR, followed by the elastic net, lasso, ad...
B
Figure 5: Performance of anomaly score generation techniques. For each technique, 25 results averaged from 32 datasets are shown in black dots. The violin plot’s outline indicates the Gaussian kernel density estimated from these results, with a red dot for the mean and lines indicating standard deviation. Techniques a...
With respect to AP, PS and Sum achieve the best results, followed by RZPS, GS and Max, as shown in Figure 5(b). Each technique is significantly better than the techniques to its right except for PS when compared with Sum. The results of Max are significantly worse than the results of the other techniques. The standard ...
Regarding ROC AUC, ten methods achieved the best results of 0.85. These methods resulted from combinations of Phase 1 techniques, including FBED, HITON-PC, and IEPC; Phase 2 techniques, specifically CART and mCART; and Phase 3 techniques, which involved RZPS, PS, and Sum, as depicted in Figure 6(a). Our analysis also ...
The experimental results (ROC AUC and AP) of the five relevant variable selection techniques are shown in Figure 3. For each technique, its 25 results (each is the average results over the 32 datasets) are presented with a violin plot overlaid by a dot plot. For the dot plot, each black dot corresponds to a result. Fo...
The results of the five scoring methods are presented in Figure 5. In terms of ROC AUC, RZPS achieves the highest average performance, followed by PS, Sum, GS, and Max. As illustrated in Figure 5(a), each technique shows significant improvement compared to the techniques to its right, as indicated by the p𝑝pitalic_p-...
A
Comparison with Abeille et al. [2021]  Abeille et al. [2021] recently proposed the idea of convex relaxation of the confidence set for the more straightforward logistic bandit setting. Our work can be viewed as an extension of their construction to the MNL setting.
Comparison with Amani & Thrampoulidis [2021] While the authors in Amani & Thrampoulidis [2021] also extend the algorithms of Faury et al. [2020] to a multinomial problem, their setting is materially different from ours. They model various click-types for the same advertisement (action) via the multinomial distribution...
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
Comparison with Abeille et al. [2021]  Abeille et al. [2021] recently proposed the idea of convex relaxation of the confidence set for the more straightforward logistic bandit setting. Our work can be viewed as an extension of their construction to the MNL setting.
Comparison with Filippi et al. [2010] Our setting is different from the standard generalized linear bandit of Filippi et al. [2010]. In our setting, the reward due to an action (assortment) can be dependent on up to K𝐾Kitalic_K variables (θ∗⋅xt,i,i∈𝒬t⋅subscript𝜃subscript𝑥𝑡𝑖𝑖subscript𝒬𝑡\theta_{*}\cdot x_{t,i},\...
A
Cross-scale graph pyramid network (xGPN). From Table 3 and 4, we can see that xGPN obviously improves the performance of short actions as well as the overall performance. On the one hand, xGPN utilizes long-range correlations in multi-level features and benefits actions of various lengths. On the other hand, xGPN enabl...
We compare the performance of our proposed VSGN to recent representative methods in the literature on the two datasets in Table 1 and Table 2, respectively. On both datasets, VSGN achieves state-of-the-art performance, reaching mAP 52.4% at tIoU 0.5 on THUMOS and average mAP 35.07% on ActivityNet. It significantly outp...
Clip O and Clip U. In Table 5, we compare the performance when generating predictions only from Clip O, only from Clip U, and from both with the same well-trained VSGN model. We can see that the two clips still result in different performance even after their features are aggregated throughout the network. Clip O is be...
Multi-scale input. The magnification process may inevitably impair the information in the clip, thus the original video clip, which contains the original intact information, is also necessary. To take advantage of the complementary properties of both scales, we design a video stitching technique to piece them together...
Cross-scale correlations. The original clip and the magnified clip, albeit different, are highly correlated since they contain the same video content. If we can utilize their correlations and draw connections between their features, then the impaired information in the magnified clip can be rectified by the original cl...
B
“It is great to find that various combinations of models lead to different ensembles that are better for each class of the independent variable, which is visible from the two views [in VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(g) and (h)]”, said E2. Although this extra...
E1 and E2 were worried about the scalability of the tool. Indeed, the excessive computational time required for producing new hyperparameters along with ensemble learning methods can be problematic. Despite that, one possible improvement for VisEvol is to utilize parallel processing on powerful cloud servers. Moreover,...
Another open issue is the avoidance of hyperparameter tuning per se, as noted by E3. The goal of the tool is not to explore or bring insights about the individual sets of hyperparameters of the models or algorithms, but instead we focus on the search for new powerful models and implicitly store their hyperparameters. T...
Afterwards, in Section 3, we describe the analytical requirements and design goals for attaching VA to evolutionary optimization and combining VA with ensemble learning. Section 4 presents the functionalities of the tool and, at the same time, describes the first use case with the goal of selecting a composition of mod...
In this paper, we presented VisEvol, a VA tool with the aim to support hyperparameter search through evolutionary optimization. With the utilization of multiple coordinated views, we allow users to generate new hyperparameter sets and store the already robust hyperparameters in a majority-voting ensemble. Exploring th...
A
There are comprehensive survey papers that review the research on consensus protocols [19, 20, 21, 22]. In many scenarios, the network topology of the consensus protocol is a switching topology due to failures, formation reconfiguration, or state-dependence. There is a large number of papers that propose consensus prot...
There are comprehensive survey papers that review the research on consensus protocols [19, 20, 21, 22]. In many scenarios, the network topology of the consensus protocol is a switching topology due to failures, formation reconfiguration, or state-dependence. There is a large number of papers that propose consensus prot...
A complex communication architecture is not required since communication only with neighboring bins is sufficient for an agent to determine its transition probabilities. If agents have only access to the number of agents of their own and neighboring bins, then they also need to know the total number of agents in the sw...
Another algorithm is proposed in [28] that assumes the underlying switching network topology is ultimately connected. This assumption means that the union of graphs over an infinite interval is strongly connected. In [29], previous works are extended to solve the consensus problem on networks under limited and unreliab...
we introduce a consensus protocol with state-dependent weights to reach a consensus on time-varying weighted graphs. Unlike other proposed consensus protocols in the literature, the consensus protocol we introduce does not require any connectivity assumption on the dynamic network topology. We provide theoretical analy...
C
Other learning methods rely on a given template for each class [25] or local neighbourhood encoding to learn a compact representation [39]. The recently conducted SHREC correspondence contest on isometric and non-isometric 3D shapes [20] revealed that there is still room for improvement in both fields.
A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisati...
Although multi-matchings obtained by synchronisation procedures are cycle-consistent, the matchings are often spatially non-smooth and noisy, as we illustrate in Sec. 5. From a theoretical point of view, the most appropriate approach for addressing multi-shape matching is based on a unified formulation, where cycle con...
The multi-matching problem is relatively well-studied for generic settings, e.g. for matching multiple graphs [79, 78, 65, 6, 69, 77], or matching keypoints in image collections [76, 72, 42]. A desirable property of multi-matchings is cycle consistency (which we will formally define in Sec. 3.1).
There are various works that particularly target the matching of multiple shapes. In [30, 32], semidefinite programming relaxations are proposed for the multi-shape matching problem. However, due to the employed lifting strategy, which drastically increases the number of variables, these methods are not scalable to lar...
C
On the side of path graphs, we believe that, compared to algorithms in [3, 22], our algorithm is simpler for several reasons: the overall treatment is shorter, the algorithm does not require complex data structures, its correctness is a consequence of the characterization in [1], and there are a few implementation deta...
Directed path graphs are characterized by Gavril [9], in the same article he also gives the first recognition algorithms that has O⁢(n4)𝑂superscript𝑛4O(n^{4})italic_O ( italic_n start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT ) time complexity. In the above cited article, Monma and Wei [18] give the second characterizati...
On the side of directed path graphs, at the state of art, our algorithm is the only one that does not use the results in [4], in which it is given a linear time algorithm able to establish whether a path graph is a directed path graph too (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ...
On the side of directed path graphs, prior to this paper, it was necessary to implement two algorithms to recognize them: a recognition algorithm for path graphs as in [3, 22], and the algorithm in [4] that in linear time is able to determining whether a path graph is also a directed path graph. Our algorithm directly...
We presented the first recognition algorithm for both path graphs and directed path graphs. Both graph classes are characterized very similarly in [18], and we extended the simpler characterization of path graphs in [1] to include directed path graphs as well; this result can be of interest itself. Thus, now these two ...
C
\end{bmatrix}.italic_P start_POSTSUBSCRIPT ( italic_k italic_l ) end_POSTSUBSCRIPT = [ start_ARG start_ROW start_CELL 0.5 + italic_q end_CELL start_CELL 0.5 end_CELL start_CELL 0.3 end_CELL start_CELL 0.3 end_CELL end_ROW start_ROW start_CELL 0.5 end_CELL start_CELL 0.5 + italic_q end_CELL start_CELL 0.3 end_CELL start...
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both two methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha...
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as ρ𝜌\rhoitalic_ρ increases they all perform poorer. Under the DCMM model, the mixed Humming ...
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. the proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both two approaches perform better than OCCAM and GeoNMF under the MMSB setting....
C
In contrast, the feasible set of distributional optimization is the Wasserstein space on a subset 𝒳𝒳\mathcal{X}caligraphic_X of ℝdsuperscriptℝ𝑑\mathbb{R}^{d}blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT, which is an infinite-dimensional manifold. As a result, unlike
See, e.g., Udriste (1994); Ferreira and Oliveira (2002); Absil et al. (2009); Ring and Wirth (2012); Bonnabel (2013); Zhang and Sra (2016); Zhang et al. (2016); Liu et al. (2017); Agarwal et al. (2018); Zhang et al. (2018); Tripuraneni et al. (2018); Boumal et al. (2018); Bécigneul and Ganea (2018); Zhang and Sra (2018...
variational inference (Gershman and Blei, 2012; Kingma and Welling, 2019), policy optimization (Sutton et al., 2000; Schulman et al., 2015; Haarnoja et al., 2018), and GAN (Goodfellow et al., 2014; Arjovsky et al., 2017), and has achieved tremendous empirical successes. However,
See, e.g., Welling and Teh (2011); Chen et al. (2014); Ma et al. (2015); Chen et al. (2015); Dubey et al. (2016); Vollmer et al. (2016); Chen et al. (2016); Dalalyan (2017); Chen et al. (2017); Raginsky et al. (2017); Brosse et al. (2018); Xu et al. (2018); Cheng and Bartlett (2018); Chatterji et al. (2018); Wibisono (...
See, e.g., Cheng et al. (2017); Cheng and Bartlett (2018); Xu et al. (2018); Durmus et al. (2019) and the references therein for the analysis of the Langevin MCMC algorithm. Besides, it is shown that (discrete-time) Langevin MCMC can be viewed as (a discretization of) the Wasserstein gradient flow of KL⁢[p⁢(z),p⁢(z|x))...
C
Real. The traffic flows of Hangzhou (China), Jinan (China) and New York (USA) are from the public datasets444https://traffic-signal-control.github.io/, which are processed from multiple sources. The traffic flow of Shenzhen (China) is made by ourselves generated based on the traffic trajectories collected from 80 red-...
The method is evaluated in two modes: (1) Common Testing Mode: the model trained on one scenario with one traffic flow configuration is tested on the same scenario with the same configuration. It is used to validate the ability of the RL algorithm to find the optimal policy.
We run the experiments under three traffic flow configurations: real traffic flow, mixed low traffic flow and mixed high traffic flow. The real traffic flow is real-world hourly statistical data with slight variance in vehicle arrival rates, as shown in Tab. I. Since the real-world strategies tend to break down during ...
Mixedl. The mixedl is a mixed low traffic flow with a total flow of 2550 in one hour, to simulate a light peak. The arrival rate changes every 10 minutes, which is used to simulate the uneven traffic flow distribution in the real world, the details of the vehicle arrival rate and cumulative traffic flow are shown in F...
Mixedh. The mixedh is a mixed high traffic flow with a total flow of 4770 in one hour, in order to simulate a heavy peak. The difference from the mixedl setting is that the arrival rate of vehicles during 1200-1800s increased from 0.33 vehicles/s to 4.0 vehicles/s. The data statistics are listed in Tab. II.
C
(𝐱j+1,tj+1)=(𝐱j,tj)−𝐟𝐱⁢t⁢(𝐱j,tj)rank-4†⁢𝐟⁢(𝐱j,tj),j= 0,1,…formulae-sequencesubscript𝐱𝑗1subscript𝑡𝑗1subscript𝐱𝑗subscript𝑡𝑗subscript𝐟𝐱𝑡superscriptsubscriptsubscript𝐱𝑗subscript𝑡𝑗rank-4†𝐟subscript𝐱𝑗subscript𝑡𝑗𝑗 01…(\mathbf{x}_{j+1},\,t_{j+1})~{}~{}=~{}~{}(\mathbf{x}_{j},\,t_{j})-\mathbf{f}_{%
-\mathbf{x}_{j-1}+J_{\mbox{\scriptsize rank-$r$}}(\mathbf{x}_{j-1})^{\dagger}% \,\mathbf{f}(\mathbf{x}_{j-1})\big{)}\Big{)}\Big{\|}_{2}∥ italic_J start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT † end_POSTSUPERSCRIPT ( bold_f ( bold_x s...
\scriptsize rank-$r$}}(\mathbf{x}_{j})\,(\mathbf{x}-\mathbf{x}_{j}).≈ bold_f ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) + italic_J start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) ( bold_x - bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIP...
{j-1})\,(\mathbf{x}_{j}-\mathbf{x}_{j-1})\big{)}\big{\|}_{2}∥ italic_J start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT † end_POSTSUPERSCRIPT ( bold_f ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) - bold_f ( bold_x start_POS...
\mathbf{x}t}(\mathbf{x}_{j},t_{j})_{\mbox{\scriptsize rank-{4}}}^{\dagger}~{}% \mathbf{f}(\mathbf{x}_{j},t_{j}),~{}~{}~{}~{}j\,=\,0,1,\ldots( bold_x start_POSTSUBSCRIPT italic_j + 1 end_POSTSUBSCRIPT , italic_t start_POSTSUBSCRIPT italic_j + 1 end_POSTSUBSCRIPT ) = ( bold_x start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIP...
D
Figure 5 depicts the plots obtained by this method, for both the Weibull and the GI benchmarks. We observe that Hybrid(λ𝜆\lambdaitalic_λ) exhibits similar performance tradeoffs as the plots for a single sequence (Figure 3), but the differences are less pronounced due to averaging.
publicly available benchmarks, such as the BPPLIB benchmarks (?), but also on distributions studied specifically in the context of offline bin packing, such as the Weibull distribution (?). The results show that our algorithms outperform the known efficient algorithms without any predictions. We also evaluate a heurist...
We give the first theoretical and experimental study of online bin packing with machine-learned predictions. Previous work on this problem has assumed ideal and error-free predictions that must be provided by a very powerful oracle, without any learnability considerations, as we discuss in more detail in Section 1.2. I...
In this work, we focus on the online variant of bin packing, in which the set of items is not known in advance but is rather revealed in the form of a sequence. Upon the arrival of a new item, the online algorithm must either place it into one of the currently open bins, as long as this action does not violate the bin...
In this section, we address the situation in which the input is not drawn according to a fixed distribution but instead is generated from distributions that change with time, e.g., when dealing with evolving data streams. This is a complex setting that has not been studied in any previous work on online bin packing, w...
D
We present in Fig. 2 that our method can smoothly stitch patches on an example airplane object. Since the surface of the object is smooth, our map ϕ∈𝒞⁢𝒜⁢(S)italic-ϕ𝒞𝒜𝑆\phi\in\mathcal{CA}(S)italic_ϕ ∈ caligraphic_C caligraphic_A ( italic_S ) is also continuous with respect to the pisubscript𝑝𝑖p_{i}italic_p start_...
To mitigate the issue of the discrete atlas, we define Continuous Atlas, a novel paradigm for meshing any object with an atlas that is leveraged in our method. In the first step, we construct a mapping that models a local structure of the object S𝑆Sitalic_S. By Continuous Atlas (𝒞⁢𝒜𝒞𝒜\mathcal{CA}{}caligraphic_C c...
Since we directly operate on points lying on surfaces of 3D objects, we use an existing solution based on hypernetworks HyperCloud (Spurek et al., 2020a) or HyperFlow (Spurek et al., 2020b)222We can also use conditioning framework introduced in (Yang et al., 2019; Chen et al., 2020a) instead of the classical encoder-de...
Therefore, we use a hypernetwork that produces parameters of a small neural network that performs 1, and the conditioning of that neural network with a point p𝑝pitalic_p to realize 2. The transformation ϕitalic-ϕ\phiitalic_ϕ is a fully connected network and is formulated as:
We present Locally Conditioned Atlas (LoCondA), a framework for generating and reconstructing meshes of objects using an atlas of localized charts that leverage the introduced notion of the continuous atlas. It consists of two parts. Firstly, we map the target object into a known prior distribution (training) or sample...
D
{\bf p}\in{\mathcal{P}}\end{subarray}}\frac{1}{m}\sum_{i=1}^{m}f_{i}(x,p_{i},% \widehat{y}_{av}^{N},\widehat{q}_{i}^{N})}\leq\varepsilon.roman_max start_POSTSUBSCRIPT start_ARG start_ROW start_CELL italic_y ∈ over¯ start_ARG caligraphic_Y end_ARG , end_CELL end_ROW start_ROW start_CELL bold_q ∈ caligraphic_Q end_CELL e...
To describe this class of first-order methods, we use a similar definition of Black-Box procedure as in [51]. We assume that one local iteration costs t𝑡titalic_t time units, and the communication round costs τ𝜏\tauitalic_τ time units. Additionally, information can be transmitted only along the undirected edge of the...
If Bρ≠∅subscript𝐵𝜌B_{\rho}\neq\varnothingitalic_B start_POSTSUBSCRIPT italic_ρ end_POSTSUBSCRIPT ≠ ∅, in the global output of any procedure that satisfies Assumption 4.1, after T𝑇Titalic_T units of time, only the first k=⌊T−2⁢tt+ρ⁢τ⌋+2𝑘𝑇2𝑡𝑡𝜌𝜏2k=\left\lfloor\frac{T-2t}{t+\rho\tau}\right\rfloor+2italic_k = ⌊ div...
The main idea is to use reformulation (54) and apply mirror prox algorithm [45] for its solution. This requires careful analysis in two aspects. First, the Lagrange multipliers 𝐳,𝐬𝐳𝐬{\bf z},{\bf s}bold_z , bold_s are not constrained, while the convergence rate result for the classical Mirror-Prox algorithm [45] is ...
This fact leads to the main idea of the proof. At the initial moment of time T=0𝑇0T=0italic_T = 0, we have all zero coordinates in the global output, since the starting points x0,y0subscript𝑥0subscript𝑦0x_{0},y_{0}italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT , italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ar...
A
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class.
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric...
The remainder of this section is dedicated to express the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describe an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
where L^=D^t⁢D^^𝐿superscript^𝐷𝑡^𝐷\hat{L}=\hat{D}^{t}\hat{D}over^ start_ARG italic_L end_ARG = over^ start_ARG italic_D end_ARG start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT over^ start_ARG italic_D end_ARG is the lower right |V|−1×|V|−1𝑉1𝑉1|V|-1\times|V|-1| italic_V | - 1 × | italic_V | - 1 submatrix of the ...
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
A
For any simplicial complex K𝐾Kitalic_K and integers b≥1𝑏1b\geq 1italic_b ≥ 1 and m>μ⁢(K)𝑚𝜇𝐾m>\mu(K)italic_m > italic_μ ( italic_K ), there exists an integer t=t⁢(b,K,m)𝑡𝑡𝑏𝐾𝑚t=t(b,K,m)italic_t = italic_t ( italic_b , italic_K , italic_m ) with the following property: If ℱℱ\mathcal{F}caligraphic_F is an m𝑚mita...
The proof of Theorem 2.1 is quite involved and builds on the method of constrained chain maps developed in [18, 35] to study intersection patterns via homological minors [37]. This technique, which we briefly outline here, was specifically designed for complete intersection patterns. A major part of this paper, all of...
a positive fraction of the m𝑚mitalic_m-tuples to have a nonempty intersection, where for dimK>1dimension𝐾1\dim K>1roman_dim italic_K > 1, m𝑚mitalic_m is some hypergraph Ramsey number depending on b𝑏bitalic_b and K𝐾Kitalic_K. So in order to prove Corollary 1.3 it suffices to show that if a positive fraction of the ...
In this paper we are concerned with generalizations of Helly’s theorem that allow for more flexible intersection patterns and relax the convexity assumption. A famous example is the celebrated (p,q)𝑝𝑞(p,q)( italic_p , italic_q )-theorem [3], which asserts that for a finite family of convex sets in ℝdsuperscriptℝ𝑑\ma...
We first prove, in Section 3, that complexes with a forbidden simplicial homological minor also have a forbidden grid-like homological minor. The proof uses the stair convexity of Bukh et al. [8] to build, in a systematic way, chain maps from simplicial complexes to cubical complexes. We then adapt, in Section 4, the m...
A
After the initial removal of features, as described in Section 4.2, we take a look into the radial tree that presents statistical information about the impact of the currently included features. The core aim of this view is to examine the impact on various subspaces, since removing a feature might appear the right choi...
Figure 3: Exploration of features with FeatureEnVi. The default slicing thresholds for the data space separate the instances into four quadrants that represent intervals of 25% predicted probability (see (a.1–a.4)). View (b) presents a table heatmap with five different feature selection techniques and their average val...
Similar to the workflow described above, we start by choosing the appropriate thresholds for slicing the data space. As we want to concentrate more on the instances that are close to being predicted correctly, we move the left gray line from 25% to 35% (see Fig. 5(a.1 and a.2)). This makes the Bad slice much shorter. S...
This hierarchical visualization exploits the connections of these features (see Fig. 3(d.1–d.4)) with the four subspaces we defined in Section 4.1, which is the inner layer. The top part highlighted in a rectangular red box is the whole data space with all the slices (text in bold), and it is currently active (black st...
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train the XGBoost [29] on the original pool of features, and then divide the data space into four groups automati...
C
The goal is to tune the parameters of the MPC-based planning unit without introducing any modification in the structure of the underlying control system. We leverage the repeatability of the system, which is higher than the integrated encoder error of 3⁢μ⁢m3𝜇𝑚3\mu m3 italic_μ italic_m,
The physical system is a 2-axis gantry stage for (x,y)𝑥𝑦(x,y)( italic_x , italic_y ) positioning with industrial grade actuators and sensors [14]. The plant can be modeled as a mass-spring-damper system with two masses linked with a damper and a spring for capturing imperfection and friction in the transmitting movem...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi...
MPC accounts for the real behavior of the machine and the axis drive dynamics can be excited to compensate for the contour error to a big extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following variou...
To bring the model close to the real system, we unify the terms required for the contour control formulation with the velocity and acceleration for each axis from the identified, discretized state-space model from (4). Also, we include the path progress sksubscript𝑠𝑘s_{k}italic_s start_POSTSUBSCRIPT italic_k end_POST...
A
Interestingly, MMD was low for digit position. We hypothesize this is because CNNs are unable to use position information for inference [42]. To confirm this, we add CoordConv layers [42] before and after the maxpooling layer in CNN to enable usage of position information. This resulted in methods exploiting digit posi...
It is unknown how well the methods scale up to multiple sources of biases and large number of groups, even when they are explicitly annotated. To study this, we train the explicit methods with multiple explicit variables for Biased MNISTv1 and individual variables that lead to hundreds and thousands of groups for GQA ...
Third, in Table. A4, we compare the explicit methods with each other, when considering each of the seven variables as explicit bias variables (be⁢x⁢p⁢l.subscript𝑏𝑒𝑥𝑝𝑙b_{expl.}italic_b start_POSTSUBSCRIPT italic_e italic_x italic_p italic_l . end_POSTSUBSCRIPT) in separate experiments. For the explicit methods, var...
Results. In Fig. 3(a), we present the MMD boxplots for all bias variables, comparing cases when the label of the variable is either explicitly specified (explicit bias), or kept hidden (implicit bias) from the methods. Barring digit position, we observe that the MMD values are higher when the variables are not explicit...
Results. We find that implicit methods either improve or are comparable with StdM, but most explicit methods fail when asked to generalize to multiple bias variables and a large number of groups, even when the bias variables are explicitly provided. As shown in Fig. 4, all explicit methods are below StdM on Biased MNI...
A
These works usually use a three-stream network to extract features from face images, left and right eye images, respectively as shown in  Fig. 4 (c) [42, 53, 73, 74, 75]. Besides, Deng et al.  [76] decompose gaze directions into the head rotation and eyeball rotation.
They use GRU [77] to build the network. Cai et al.  [78] use a transformer encoder [22] to aggregate face and eye features. They feed face and two eye features into the transformer encoder and concatenate the outputs of the encoder for gaze estimation.
Sun et al. propose a cross-encoder for unsupervised learning [118]. They acquire paired eye images for training where the paired images have the same gaze or appearance. They use an encoder extract appearance and gaze feature from eye images. They exchange the two features of selected paired images and aim to reconstru...
A number of CNN architectures that have been proposed for typical computer vision tasks also show great success in gaze estimation task, e.g., LeNet [17], AlexNet [50], VGG [49], ResNet18 [43] and ResNet50 [66]. Besides, some well-designed modules also help to improve the estimation accuracy [53, 56, 93, 94]. Chen et a...
It demonstrates an improved performance than the approaches that only use eye images. Cheng et al.  [34] explore the transformer for gaze estimation. They use CNN to extract feature maps from face images and input the feature map into transformer encoder for gaze estimation.
A
This deep quantization technique presents many advantages. It ensures a lightweight representation that makes the real-world masked face recognition process a feasible task. Moreover, the masked regions vary from one face to another, which leads to informative images of different sizes. The proposed deep quantization a...
The next step is to apply a cropping filter in order to extract only the non-masked region. To do so, we firstly normalize all face images into 240 ×\times× 240 pixels. Next, we partition a face into blocks. The principle of this technique is to divide the image into 100 fixed-size square blocks (24 ×\times× 24 pixels ...
Experimental results are carried out on Real-world Masked Face Recognition Dataset (RMFRD) and Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (...
To tackle these problems, we distinguish two different tasks namely: face mask recognition and masked face recognition. The first one checks whether the person is wearing a mask or no. This can be applied in public places where the mask is compulsory. Masked face recognition, on the other hand, aims to recognize a face...
The images of the used dataset are already cropped around the face, so we don’t need a face detection stage to localize the face from each image. However, we need to correct the rotation of the face so that we can remove the masked region efficiently. To do so, we detect 68 facial landmarks using Dlib-ml open-source l...
D
For space, we omit the process terms. Of importance is the instance of the call rule for the recursive call to eat: the check i−1<i𝑖1𝑖i-1<iitalic_i - 1 < italic_i verifies that the process terminates and the loop [(i−1)/i]⁢[z/x]⁢Ddelimited-[]𝑖1𝑖delimited-[]𝑧𝑥𝐷[(i-1)/i][z/x]D[ ( italic_i - 1 ) / italic_i ] [ ita...
Validity conditions of infinite proofs have been developed to keep cut elimination productive, which correspond to criteria like the guardedness check [BDS16, BT17, DP19, DP20d]. Although we use infinite typing derivations, we explicitly avoid syntactic termination checking for its non-compositionality. Nevertheless, w...
Note that in subsequent examples, in lieu of writing unwieldy typing derivations, we will demarcate when arithmetic constraints are assumed and asserted, as well as how a recursive call is checked. Now, to illustrate compositionality of typechecking, i.e., termination checking without full source code availability, we ...
Sized types are compositional: since termination checking is reduced to an instance of typechecking, we avoid the brittleness of syntactic termination checking. However, we find that ad hoc features for implementing size arithmetic in the prior work can be subsumed by more general arithmetic refinements [DP20b, XP99], ...
Our system is closely related to the sequential functional language of Lepigre and Raffalli [LR19], which utilizes circular typing derivations for a sized type system with mixed inductive-coinductive types, also avoiding continuity checking. In particular, their well-foundedness criterion on circular proofs seems to c...
B
Liang et al. [23] defined and constructed a deterministic finite automata-based functional PRE scheme for public cloud data sharing without privacy leakage. After that, Li et al. [24] constructed a fine-grained and accountable access control system in the cloud, which traces suspicious access behaviors while ignores re...
Afterwards, Bianchi et al. [10] proposed a LUT-based AFP scheme without involving a Trusted Third Party (TTP) based on homomorphic encryption, which also implements AFP within the user-side framework. Despite the fact that Problems 2 and 3 are solved in these works, Problem 1 is not mentioned.
Rial et al. [13] proposed a provably secure anonymous AFP scheme based on the ideal-world/real-world paradigm. Poh et al. [25] designed an innovative user-side AFP scheme based on the symmetric Chameleon encryption technique, which achieves significant gains in owner-side computing and communication efficiency.
Firstly, this work inherits from the privacy-protected cloud media sharing solutions based on ABE or PRE. Wu et al. [21] came up with an ABE scheme with multi-message ciphertext policy, which was implemented for scalable media sharing and access control based on the user’s attributes. Polyakov et al. [22] proposed two ...
Thirdly, there are also studies that deal with both privacy-protected access control and traitor tracing. Xia et al. [26] introduced the watermarking technique to privacy-protected content-based ciphertext image retrieval in the cloud, which can prevent the user from illegally distributing the retrieved images. However...
B
Due to the strength in modeling relations on graph-structured data, GNN has been widely applied to various applications like neural machine translation Beck et al. (2018), semantic segmentation Qi et al. (2017), image classification Marino et al. (2017), situation recognition Li et al. (2017), recommendation Wu et al. ...
It first proposes to connect all the feature fields, and thus the multi-field features can be treated as a fully-connected graph. Then it utilizes GGNN Li et al. (2015) to model high-order feature interactions on the feature graph. KD-DAGFM Tian et al. (2023) uses knowledge distillation and proposes a lightweight stude...
At their core, GNNs learn node embeddings by iteratively aggregating features from the neighboring nodes, layer by layer. This allows them to explicitly encode high-order relationships between nodes in the embeddings. GNNs have shown great potential for modeling high-order feature interactions for click-through rate pr...
In addition to not being able to effectively capture higher-order feature interactions, FM is also suboptimal because it considers the interactions between every pair of features, even if some of these interactions may not be beneficial for prediction Zhang et al. (2016); Su et al. (2020). These unhelpful feature inter...
The high-order relations between nodes can be modeled explicitly by stacking layers. Gated Graph Neural Networks (GGNN) Li et al. (2015) uses GRU Cho et al. (2014) to update the node representations based on the aggregated neighborhood feature information.
A
)\right\rfloor\right\},italic_t start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = roman_max { 1 , ⌊ roman_log start_POSTSUBSCRIPT 1 / 2 end_POSTSUBSCRIPT ( divide start_ARG ( over~ start_ARG italic_L end_ARG start_POSTSUPERSCRIPT italic_q end_POSTSUPERSCRIPT / ( italic_κ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_μ sta...
We note that the LBTFW-GSC algorithm from Dvurechensky et al. [2022] is in essence the Frank-Wolfe algorithm with a modified version of the backtracking line search of Pedregosa et al. [2020]. In the next section, we provide improved convergence guarantees for various cases of interest for this algorithm, which we refe...
Table 3: Complexity comparison for B-FW (Algorithm 3) when minimizing over a (κ,q)𝜅𝑞(\kappa,q)( italic_κ , italic_q )-uniformly convex set: Number of iterations needed to reach an ε𝜀\varepsilonitalic_ε-optimal solution in h⁢(𝐱)ℎ𝐱h(\mathbf{x})italic_h ( bold_x ) for Problem 1.1 in several cases of interest. We use...
In Table 3 we provide an oracle complexity breakdown for the Frank-Wolfe algorithm with Backtrack (B-FW), also referred to as LBTFW-GSC in Dvurechensky et al. [2022], when minimizing over a (κ,q)𝜅𝑞(\kappa,q)( italic_κ , italic_q )-uniformly convex set.
the second-order step size and the LLOO algorithm from Dvurechensky et al. [2022] (denoted by GSC-FW and LLOO in the figures) and the Frank-Wolfe and the Away-step Frank-Wolfe algorithm with the backtracking stepsize of Pedregosa et al. [2020], denoted by B-FW and B-AFW respectively.
C
Let limit=def1/ε4superscriptdeflimit1superscript𝜀4\textsc{limit}\stackrel{{\scriptstyle\text{\tiny\rm def}}}{{=}}1/\varepsilon^{4}limit start_RELOP SUPERSCRIPTOP start_ARG = end_ARG start_ARG def end_ARG end_RELOP 1 / italic_ε start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT. If a structure 𝒮αsubscript𝒮𝛼\mathcal{S}_{\al...
Extend-Active-Paths scans over unmatched edges; note that matched edges are in the algorithm’s memory, so there is no new information to be gained by seeing a matched edge on the stream. Let {u,v}𝑢𝑣\{u,v\}{ italic_u , italic_v } be the current unmatched edge on the stream. Then, the algorithm considers separately g=...
Figure 3: In this example, α𝛼\alphaitalic_α is a free node, black (full) single-segments are unmatched and black (full) double-segments are matched edges. Assume that the algorithm first explores the red (dashed) path from α𝛼\alphaitalic_α to a4subscript𝑎4a_{4}italic_a start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT. This ...
[backgroundcolor=gray!15, linecolor=red!40!black] Informal description: This operation finds an augmenting path containing a given unmatched arc g𝑔gitalic_g (without seeing any new edge on the stream) and removes the vertices contained in the structures affected by this augmentation.
In a new pass, for each edge e={u,v}𝑒𝑢𝑣e=\{u,v\}italic_e = { italic_u , italic_v } in the stream, the algorithm checks whether the structure containing u𝑢uitalic_u and the structure containing v𝑣vitalic_v, if such structures exist, can augment over e𝑒eitalic_e. If it is possible, via Augment-and-Clean the algori...
C
To see why CPP outperforms Push-Pull/𝒜⁢ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B, note that the vectors sent in CPP have been compressed, and hence the transmitted bits at each iteration are greatly reduced compared to Push-Pull/𝒜⁢ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B.
This is reasonable as the compression operator induces additional errors compared to the exact method, and these additional errors could slow down the convergence. Meanwhile, as the values of b𝑏bitalic_b or k𝑘kitalic_k increases, both CPP and B-CPP speed up since the compression errors decrease.
In this section, we compare the numerical performance of CPP and B-CPP with the Push-Pull/𝒜⁢ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B method [24, 25]. In the experiments, we equip CPP and B-CPP with different compression operators and consider different graph topologies.
In this paper, we consider decentralized optimization over general directed networks and propose a novel Compressed Push-Pull method (CPP) that combines Push-Pull/𝒜⁢ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B with a general class of unbiased compression operators. CPP enjoys large flexibility in both the com...
Although the additional compression errors slow down the convergence, our design for CPP guarantees that the impact on the convergence rate is relatively small. Therefore, CPP is much more communication-efficient than the exact Push-Pull/𝒜⁢ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B method.
D
We present a new SPP formulation of the PFL problem (1) as the decentralized min-max mixing model. This extends the classical PFL problem to a broader class of problems beyond the classical minimization problem. It furthermore covers various communication topologies and hence goes beyond the centralized setting.
We propose a lower bounds both on the communication and the number of local oracle calls for a general algorithms class (that satisfy Assumption 3). The bounds naturally depend on the communication matrix W𝑊Witalic_W (as in the minimization problem), but our results apply to SPP (see ”Lower” rows in Table 1 for variou...
We present a new SPP formulation of the PFL problem (1) as the decentralized min-max mixing model. This extends the classical PFL problem to a broader class of problems beyond the classical minimization problem. It furthermore covers various communication topologies and hence goes beyond the centralized setting.
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low...
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms and derives the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile...
A
Therefore the CE BR attempts to exploit each policy conditional “slice”. In practice, we only calculate a BR for positive support policies (similar to Rectified Nash (Balduzzi et al., 2019). Computing the argmaxargmax\operatorname*{argmax}roman_argmax of the BRs can be achieved through RL or exactly traversing the game...
There is a rich polytope of possible equilibria to choose from, however, an MS must pick one at each time step. There are three competing properties which are important in this regard, exploitation, robustness, and exploration. For exploitation, maximum welfare equilibria appear to be useful. However, to prevent JPSRO...
Measuring convergence to NE (NE Gap, Lanctot et al. (2017)) is suitable in two-player, constant-sum games. However, it is not rich enough in cooperative settings. We propose to measure convergence to (C)CE ((C)CE Gap in Section E.4) in the full extensive form game. A gap, ΔΔ\Deltaroman_Δ, of zero implies convergence t...
We have shown that JPSRO converges to an NF(C)CE over joint policies in extensive form and stochastic games. Furthermore, there is empirical evidence that some MSs also result in high value equilibria over a variety of games. We argue that (C)CEs are an important concept in evaluating policies in n-player, general-sum ...
We propose that (C)CEs are good candidates as meta-solvers (MSs). They are more tractable than NEs and can enable coordination to maximize payoff between cooperative agents. In particular we propose three flavours of equilibrium MSs. Firstly, greedy (such as MW(C)CE), which select highest payoff equilibria, and attempt...
D
This theorem provides significant improvement over similar results that were achieved using differential privacy (see, e.g., Theorem 13 in Jung et al. (2020), which is defined for Δ=1Δ1\Delta=1roman_Δ = 1), by managing to replace the Δ2superscriptΔ2\Delta^{2}roman_Δ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT term wit...
In order to complete the triangle inequality, we have to define the stability of the mechanism. Bayes stability captures the concept that the results returned by a mechanism and the queries selected by the adaptive adversary are such that the queries behave similarly on the true data distribution and on the posterior d...
This generalization guarantee, which nearly avoids dependence on the range of the queries, begs the question of whether it is possible to extend these results to handle unbounded queries. Clearly such a result would not be true without some bound on the tail distribution for a single query, so we focus in the next theo...
We measure the harm that past adaptivity causes to a future query by considering the query as evaluated on a posterior data distribution and comparing this with its value on a prior. The prior is the true data distribution, and the posterior is induced by observing the responses to past queries and updating the prior. ...
These results extend to the case where the variance (or variance proxy) of each query qisubscript𝑞𝑖q_{i}italic_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is bounded by a unique value σi2superscriptsubscript𝜎𝑖2\sigma_{i}^{2}italic_σ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_PO...
B
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
The goal of this paper is to open up a new research direction aimed at understanding the power of preprocessing in speeding up algorithms that solve NP-hard problems exactly [26, 31]. In a nutshell, this new direction can be summarized as: how can an algorithm identify part of an optimal solution in an efficient prepro...
As the first step of our proposed research program into parameter reduction (and thereby, search space reduction) by a preprocessing phase, we present a graph decomposition for Feedback Vertex Set which can identify vertices S𝑆Sitalic_S that belong to an optimal solution; and which therefore facilitate a reduction fr...
This line of investigation opens up a host of opportunities for future research. For combinatorial problems such as Vertex Cover, Odd Cycle Transversal, and Directed Feedback Vertex Set, which kinds of substructures in inputs allow parts of an optimal solution to be identified by an efficient preprocessing phase? Is i...
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
C
TABLE I: The issues to be solved in image composition task and the corresponding deep learning methods to solve these issue. Note that some methods only focus on one issue while some methods attempt to solve multiple issues simultaneously. “Boundary” means refining the boundary between foreground and background. “Appea...
To solve this issue, image blending [172, 198] aims to address the unnatural boundary between foreground and background, so that the foreground could be seamlessly blended with the background. For the second issue, since the foreground and background may be captured in different conditions (e.g., weather, season, time ...
The appearance inconsistency is including but not limited to: 1) unnatural boundary between foreground and background; 2) incompatible illumination statistics between foreground and background; 3) missing or implausible shadow and reflection of foreground; 4) resolution, sharpness, and noise discrepancy between foregr...
The geometric inconsistency is including but not limited to: 1) the foreground object is too large or too small; 2) the foreground object does not have reasonable supporting force (e.g., hanging in the air); 3) unreasonable occlusion; 4) inconsistent perspectives between foreground and background. In summary, the loca...
During image composition, the foreground is usually extracted using image segmentation [108] or matting [180] methods. However, the segmentation or matting results may be noisy and the foregrounds are not precisely delineated. When the foreground with jagged boundaries is pasted on the background, there will be abrupt...
B
Efficient taxi allocation is crucial for the passenger transportation services in smart cities. To address this challenge, we leverage the data available in CityNet and present benchmarks for the taxi dispatching task. In this task, operators are responsible for dispatching available taxis to waiting passengers in rea...
Table VIII presents the taxi dispatching results for Chengdu, where the completion rate denotes the ratio of completed requests within all requests, and accumulated revenue represents the total revenue earned by all taxis throughout the day. Based on the experimental results, we draw the following conclusions:
Efficient taxi allocation is crucial for the passenger transportation services in smart cities. To address this challenge, we leverage the data available in CityNet and present benchmarks for the taxi dispatching task. In this task, operators are responsible for dispatching available taxis to waiting passengers in rea...
Problem Statement. To address the taxi dispatching task, we learn a real-time dispatching policy based on historical passenger requests. At every timestamp τ𝜏\tauitalic_τ, we use this policy to dispatch available taxis to current passengers, with the aim of maximizing the total revenue of all taxis in the long run. To...
LPA algorithm is a reinforcement learning-based approach [6]. We first adopt SARSA [6] to learn the expected long-term revenue of each grid in each period. Based on these expected revenues, we dispatch taxis to passengers using the same optimization formulation as Eqn. (13), with the exception that we replace A⁢(i,j)�...
C
Dropout layers were initially introduced as a stochastic regularization technique by Hinton et al. srivastava2014dropout . During each forward pass at training time, a random subset of the network weights is set to zero with probability p𝑝pitalic_p. In essence this means that a Bernoulli prior with parameter p𝑝pital...
A second step in the development of dropout networks was the extension of the stochastic behaviour of the dropout layers to test time. By also randomly setting weights to zero during test time, an ensemble of different models can be obtained without having to retrain the model itself. Furthermore, because the Bernoulli...
In Fig. 1, both the coverage degree, average width and R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient are shown. For each model, the data sets are sorted according to increasing R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient (averaged over th...
For each of the selected models, Fig. 4 shows the best five models in terms of average width, excluding those that do not (approximately) satisfy the coverage constraint (2). This figure shows that there is quite some variation in the models. There is not a clear best choice. Because on most data sets the models produc...
Dropout layers were initially introduced as a stochastic regularization technique by Hinton et al. srivastava2014dropout . During each forward pass at training time, a random subset of the network weights is set to zero with probability p𝑝pitalic_p. In essence this means that a Bernoulli prior with parameter p𝑝pital...
A
The latter is in particular popular in the field of natural language processing (NLP), where pre-trained models (PTMs) using Transformers \parencitevaswani2017attention have achieved state-of-the-art results on almost all NLP tasks, including generative and discriminative ones \parencitehan2021pretrained.
For PTMs, an unsupervised or self-supervised, pre-training task is needed to set the objective function for learning. We employ the masked language modelling (MLM) pre-training strategy of BERT, randomly masking 15% tokens of an input sequence and the Transformer will reconstruct these masked tokens from the context of...
To our best knowledge, the work of \textcitetsai20ismir represents the first attempt to use PTMs for symbolic-domain music classification. They showed that either a RoBERTa-based Transformer encoder PTM \parenciteroberta or a GPT2-based Transformer encoder PTM \parencitegpt2 outperform non-pre-trained baselines for a ...
The latter is in particular popular in the field of natural language processing (NLP), where pre-trained models (PTMs) using Transformers \parencitevaswani2017attention have achieved state-of-the-art results on almost all NLP tasks, including generative and discriminative ones \parencitehan2021pretrained.
In particular, inspired by the growing trend of treating MIDI music as a “language” in deep generative models for symbolic music \parencitehuang2018music,payne2019musenet,huang2020pop,musemorphose,musecoco, we employ a Transformer-based network pre-trained by a self-supervised training strategy called “masked language ...
D
And of course we have to use a different color for each vertex, so B⁢B⁢Cλ⁢(Kn,T)≥n𝐵𝐵subscript𝐶𝜆subscript𝐾𝑛𝑇𝑛BBC_{\lambda}(K_{n},T)\geq nitalic_B italic_B italic_C start_POSTSUBSCRIPT italic_λ end_POSTSUBSCRIPT ( italic_K start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT , italic_T ) ≥ italic_n – thus B⁢B⁢Cλ⁢(Kn,T)...
In this section we will proceed as follows: we first introduce the so-called red-blue-yellow (k,l)𝑘𝑙(k,l)( italic_k , italic_l )-decomposition of a forest F𝐹Fitalic_F on n𝑛nitalic_n vertices, which finds a set Y𝑌Yitalic_Y of size at most l𝑙litalic_l such that we can split V⁢(F)∖Y𝑉𝐹𝑌V(F)\setminus Yitalic_V ( it...
The linear running time follows directly from the fact that we compute c𝑐citalic_c only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen...
To achieve the same result for forest backbones we only need to add some edges that would make the backbone connected and spanning. However, we can always make a forest connected by adding edges between some leaves and isolated vertices and we will not increase the maximum degree of the forest, as long as Δ⁢(F)≥2normal...
In this paper, we turn our attention to the special case when the graph is complete (denoted Knsubscript𝐾𝑛K_{n}italic_K start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT) and its backbone is a (nonempty) tree or a forest (which we will denote by T𝑇Titalic_T and F𝐹Fitalic_F, respectively). Note that it has a natural in...
C