context stringlengths 250 5.57k | A stringlengths 250 4.17k | B stringlengths 250 3.69k | C stringlengths 250 8.2k | D stringlengths 250 4.12k | label stringclasses 4
values |
|---|---|---|---|---|---|
This already suffices to implement the standard Newton iteration, i.e., to
approximate (1) by Δx=−f(x)/f′(x)Δ𝑥𝑓𝑥superscript𝑓′𝑥\Delta x=-f(x)/f^{\prime}(x)roman_Δ italic_x = - italic_f ( italic_x ) / italic_f start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_x ). | Rnm′′/Rnm′superscriptsuperscriptsubscript𝑅𝑛𝑚′′superscriptsuperscriptsubscript𝑅𝑛𝑚′{R_{n}^{m}}^{\prime\prime}/{R_{n}^{m}}^{\prime}italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT start_POSTSUPERSCRIPT ′ ′ end_POSTSUPERSCRIPT / italic_R start_POSTSUBSCRIPT it... | Division of (29) through Rnm′(x)superscriptsuperscriptsubscript𝑅𝑛𝑚′𝑥{R_{n}^{m}}^{\prime}(x)italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_x ) yields
| Rnm′(x)superscriptsuperscriptsubscript𝑅𝑛𝑚′𝑥\displaystyle{R_{n}^{m}}^{\prime}(x)italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_x )
≅\displaystyle\cong≅ | Rnm′′(x)superscriptsuperscriptsubscript𝑅𝑛𝑚′′𝑥\displaystyle{R_{n}^{m}}^{\prime\prime}(x)italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT start_POSTSUPERSCRIPT ′ ′ end_POSTSUPERSCRIPT ( italic_x )
≅\displaystyle\cong≅ | B |
The case where d𝑑ditalic_d is even is very similar, but requires a few changes that would complicate the pseudocode.
So, for the clarity of our exposition, we analyse the case d𝑑ditalic_d odd here and then explain the differences for the case d𝑑ditalic_d even in the next subsection. |
The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left... | So twice the quantity (17) is contributed to the maximum length of an MSLP for Algorithm 3.
In addition, we must also include the cost of the initial computation of T1subscript𝑇1T_{1}italic_T start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT given by Lemma 4.2, namely 5f−15𝑓15f-15 italic_f - 1 instructions, and then two addit... |
For the purposes of determining the cost of Taylor’s algorithm in terms of matrix operations, namely determining the length of an MSLP for the algorithm, we assume that the field elements −gicgrc−1subscript𝑔𝑖𝑐superscriptsubscript𝑔𝑟𝑐1-g_{ic}g_{rc}^{-1}- italic_g start_POSTSUBSCRIPT italic_i italic_c end_POSTSU... | To aid the exposition and analysis, Algorithm 3 refers to several subroutines, namely Algorithms 4–7. In an implementation the code for the Algorithms 4–7 would be inserted into Algorithm 3 in the lines where they are called. We present them as subroutines here to improve the readability of Algorithm 3. However, we ass... | D |
where Ω⊂ℝdΩsuperscriptℝ𝑑\Omega\subset\mathbb{R}^{d}roman_Ω ⊂ blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT with d=2𝑑2d=2italic_d = 2 or 3333 for simplicity, and is an open bounded domain with polyhedral boundary ∂ΩΩ\partial\Omega∂ roman_Ω, the symmetric tensor 𝒜∈[L∞(Ω)]symd×d𝒜superscriptsubscrip... | In [MR2718268] is shown that the number of eigenvalues that are very large is related to the number of connected sub-regions on τ¯∪τ¯′¯𝜏superscript¯𝜏′\bar{\tau}\cup{\bar{\tau}}^{\prime}over¯ start_ARG italic_τ end_ARG ∪ over¯ start_ARG italic_τ end_ARG start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT with large coefficien... | It is hard to approximate such problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergent proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85... |
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ... | B |
Similarly, from a P𝑃Pitalic_P-stable triangle A′B′C′superscript𝐴′superscript𝐵′superscript𝐶′A^{\prime}B^{\prime}C^{\prime}italic_A start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT italic_B start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT italic_C start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT, we can also construct △ABC△𝐴�... | Alg-A computes at most n𝑛nitalic_n candidate triangles (proof is trivial and omitted) whereas Alg-CM computes at most 5n5𝑛5n5 italic_n triangles (proved in [8]) and so as Alg-K.
(by experiment, Alg-CM and Alg-K have to compute roughly 4.66n4.66𝑛4.66n4.66 italic_n candidate triangles.) | Our algorithm given in section 4 (denoted by Alg-One) is different from Alg-DS.
First, step 1 of Alg-One sets the initial value of (r,s,t)𝑟𝑠𝑡(r,s,t)( italic_r , italic_s , italic_t ) differently from the initial value (1,2,3)123(1,2,3)( 1 , 2 , 3 ) used by Alg-DS. |
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]),
Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases. | C |
Widely spreading rumors can be harmful to the government, markets and society and reduce the usefulness of social media channel such as Twitter by affecting the reliability of their content.
Therefore, effective method for detecting rumors on Twitter are crucial and rumors should be detected as early as possible before... | Widely spreading rumors can be harmful to the government, markets and society and reduce the usefulness of social media channel such as Twitter by affecting the reliability of their content.
Therefore, effective method for detecting rumors on Twitter are crucial and rumors should be detected as early as possible before... | The city police had to warn the population to refrain from spreading related news on Twitter as it was getting out of control: “Rumors are wildfires that are difficult to put out and traditional news sources or official channels, such as police departments, subsequently struggle to communicate verified information to t... |
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet le... | at an early stage. Our fully automatic, cascading rumor detection method follows
the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha... | B |
\left(\sqrt{\frac{\log\log t}{\log t}}\right)∥ divide start_ARG bold_w ( italic_t ) end_ARG start_ARG ∥ bold_w ( italic_t ) ∥ end_ARG - divide start_ARG over^ start_ARG bold_w end_ARG end_ARG start_ARG ∥ over^ start_ARG bold_w end_ARG ∥ end_ARG ∥ = italic_O ( square-root start_ARG divide start_ARG roman_log roman_log i... | In some non-degenerate cases, we can further characterize the asymptotic behavior of 𝝆(t)𝝆𝑡\boldsymbol{\rho}\left(t\right)bold_italic_ρ ( italic_t ). To do so, we need to refer to the KKT conditions (eq. 6)
of the SVM problem (eq. 4) and the associated |
where the residual 𝝆k(t)subscript𝝆𝑘𝑡\boldsymbol{\rho}_{k}(t)bold_italic_ρ start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ( italic_t ) is bounded and 𝐰^ksubscript^𝐰𝑘\hat{\mathbf{w}}_{k}over^ start_ARG bold_w end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT is the solution of the K-class SVM: | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | where 𝝆(t)𝝆𝑡\boldsymbol{\rho}\left(t\right)bold_italic_ρ ( italic_t ) has a bounded norm for almost all datasets, while in zero measure case 𝝆(t)𝝆𝑡\boldsymbol{\rho}\left(t\right)bold_italic_ρ ( italic_t ) contains additional O(loglog(t))𝑂𝑡O(\log\log(t))italic_O ( roman_log roman_log ( italic_t ) ) componen... | A |
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet le... |
In this work, we present a deep analysis on the feature variants over 48 hours for the rumor detection task. The results show that the low-level hidden representation of tweets feature is at least the second best features over time. We also derive explanations on the low performance of supposed-to-be-strong high-level... |
We investigate how the performance of different types of low and high-level features changes over time (during the spreading of rumors); improving the understanding of feature impact and model design for rumor detection at different points in time. | the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, that could capture more hidden meaningful signal than only enquiries to debunk rumor... | The performance of user features is similar with the Twitter features, they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours of the user feature group is UserTweetsPerDays and it is the best feature overall in the first 4 hours, but its rank decreases with ... | A |
Results. The baseline and the best results of our 1stsuperscript1𝑠𝑡1^{st}1 start_POSTSUPERSCRIPT italic_s italic_t end_POSTSUPERSCRIPT stage event-type classification is shown in Table 3-top. The accuracy for basic majority vote is high for imbalanced classes, yet it is lower at weighted F1. Our learned model achie... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, SVMall𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT, will all features, gives a rather stable performance for both NDCG and Recall... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and types (Breaking and Anticipate) set of entity-bearing queries. This allows us to evaluate the feature performance i.e., salience and timeliness, with time and type ... | Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming the independent loss function, that does not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is learning a linear... | We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | D |
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains,
from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | The techniques used in these success stories are grounded on statistical advances on sequential decision processes and multi-armed bandits.
The MAB crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making. | we propagate forward the sequential random measure pM(θt,a|ℋ1:t)subscript𝑝𝑀conditionalsubscript𝜃𝑡𝑎subscriptℋ:1𝑡p_{M}(\theta_{t,a}|\mathcal{H}_{1:t})italic_p start_POSTSUBSCRIPT italic_M end_POSTSUBSCRIPT ( italic_θ start_POSTSUBSCRIPT italic_t , italic_a end_POSTSUBSCRIPT | caligraphic_H start_POSTSUBSCRIPT 1 : ... | SMC weights are updated based on the likelihood of the observed rewards:
wt,a(m)∝pa(yt|xt,θt,a(m))proportional-tosuperscriptsubscript𝑤𝑡𝑎𝑚subscript𝑝𝑎conditionalsubscript𝑦𝑡subscript𝑥𝑡superscriptsubscript𝜃𝑡𝑎𝑚w_{t,a}^{(m)}\propto p_{a}(y_{t}|x_{t},\theta_{t,a}^{(m)})italic_w start_POSTSUBSCRIPT italic_t , it... | the fundamental operation in the proposed SMC-based MAB Algorithm 1
is to sequentially update the random measure pM(θt,a|ℋ1:t)subscript𝑝𝑀conditionalsubscript𝜃𝑡𝑎subscriptℋ:1𝑡p_{M}(\theta_{t,a}|\mathcal{H}_{1:t})italic_p start_POSTSUBSCRIPT italic_M end_POSTSUBSCRIPT ( italic_θ start_POSTSUBSCRIPT italic_t , itali... | A |
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal... | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients.
For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measuremtents after the meals is within this range, while patient 12 has only t... | For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal... | Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients.
The median number of carbohydrate log entries vary between 2 per day for patient 10 and 5 per day for patient 14. | Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
In terms of physical activity, we measure the 10 minute intervals with at least 10 steps tracked by the google fit app. | A |
Various measures are used in the literature and by benchmarks to evaluate the performance of fixation models. In practice, results are typically reported for all of them to include different notions about saliency and allow a fair comparison of model predictions Kümmerer et al. (2018); Riche et al. (2013). A set of nin... | Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer... |
In this work, we adopted KLD as an objective function and produced fixation density maps as output from our proposed network. This training setup is particularly sensitive to false negative predictions and thus the appropriate choice for applications aimed at salient target detection Bylinskii et al. (2018). Defining ... | A prerequisite for the successful application of deep learning techniques is a wealth of annotated data. Fortunately, the growing interest in developing and evaluating fixation models has lead to the release of large-scale eye tracking datasets such as MIT1003 Judd et al. (2009), CAT2000 Borji and Itti (2015), DUT-OMRO... | To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met... | B |
We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21,... | Pathwidth and cutwidth are classical graph parameters that play an important role for graph algorithms, independent from our application for computing the locality number. Therefore, it is the main purpose of this section to translate the reduction from MinCutwidth to MinPathwidth that takes MinLoc as an intermediate s... | In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into grap... | One of the main results of this section is a reduction from the problem of computing the locality number of a word α𝛼\alphaitalic_α to the probem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr... |
We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21,... | A |
Another three models were trained using the signals as 1D.
The first model was a FNN with dropout, the second a three layer 1D CNN and the third a 2D CNN same as the first but trained with a stacked version of the signal (also trained with data augmentation). | Gotlibovych et al.[117] trained an one layer CNN network followed by a LSTM using 180h of PPG wearable data to detect AF.
Use of the LSTM layer allows the network to learn variable-length correlations in contrast with the fixed length of the convolutional layer. | An one hidden layer network was used for the initial testing of all voxels to obtain a small number of candidates, followed by a more accurate classification with a deep network.
The learned image features are further combined with Haar wavelet features to increase the detection accuracy. | Experiments by the authors showed that the three layer 1D CNN created better and more stable results.
In[101] the authors trained a network with an one convolutional layer with dropout followed by two RNNs to identify stress using short-term ECG data. | Another three models were trained using the signals as 1D.
The first model was a FNN with dropout, the second a three layer 1D CNN and the third a 2D CNN same as the first but trained with a stacked version of the signal (also trained with data augmentation). | C |
Figure 3: Comparison with Rainbow and PPO. Each bar illustrates the number of interactions with environment required by Rainbow (left) or PPO (right) to achieve the same score as our method (SimPLe). The red line indicates the 100100100100K interactions threshold which is used by the our method. | We evaluate our method on 26262626 games selected on the basis of being solvable with existing state-of-the-art model-free deep RL algorithms222Specifically, for the final evaluation we selected games which achieved non-random results using our method or the Rainbow algorithm using 100100100100K interactions., which in... | In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in low data regime of 100100100100k samples, on more than half of the games, our method achieves a score... |
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ... |
Figure 3: Comparison with Rainbow and PPO. Each bar illustrates the number of interactions with environment required by Rainbow (left) or PPO (right) to achieve the same score as our method (SimPLe). The red line indicates the 100100100100K interactions threshold which is used by the our method. | C |
Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification.
Giri et al. [12] extracted statistical and information measures from frequency domain to train an 1D CNN with two layers to identify ischemic stroke. | The names of the classes are depicted at the right along with the predictions for this example signal.
The image between m𝑚mitalic_m and bdsubscript𝑏𝑑b_{d}italic_b start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT depicts the output of the one layer CNN Signal2Image module, while the ‘signal as image’ and spectrogram h... | Figure 1: High level overview of a feed-forward pass of the combined methods.
xisubscript𝑥𝑖x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is the input, m𝑚mitalic_m is the Signal2Image module, bdsubscript𝑏𝑑b_{d}italic_b start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT is the 1D or 2D architecture ‘base ... | The spectrogram S2I results are in contrary with the expectation that the interpretable time-frequency representation would help in finding good features for classification.
We hypothesize that the spectrogram S2I was hindered by its lack of non-trainable parameters. | For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems.
An important property of a S2I is whether it consists of trainable para... | D |
In the realm of mobile robotics research, the motion control of terrestrial robots across varied terrains is a complex endeavor. To enhance locomotion efficacy and elevate mobility, hybrid robots have been actively developed in the past decade [1]. These robots astutely choose the most suitable locomotion mode from a s... | There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ... | This paper presents a novel methodology for achieving autonomous locomotion mode transitions in quadruped wheel/track-legged hybrid robots, taking into account both internal states of the robot and external environmental conditions. Our emphasis is on the “articulated wheel/track robot” [15], where the wheels or tracks... |
In the literature review, Gorilla [2] is able to switch between bipedal and quadrupedal walking locomotion modes autonomously using criteria developed based on motion efficiency and stability margin. WorkPartner [8] demonstrated its capability to seamlessly transition between two locomotion modes: rolling and rolking.... | This section describes the primary locomotion modes, rolling and walking locomotion of our hybrid track-legged robot named Cricket shown in Fig. 2. It also introduces two proposed gaits designed specifically for step negotiation in quadrupedal wheel/track-legged robots.
| A |
In other words, the algorithm designer can hedge against untrusted advice, by a small sacrifice in the trusted performance. Thus we can interpret r𝑟ritalic_r as the “risk” for trusting the advice: the smaller the r𝑟ritalic_r, the bigger the risk.
Likewise, for the list update problem, our (r,f(r))𝑟𝑓𝑟(r,f(r))( ita... | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... | As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation.
Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online alg... |
All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat... | We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem.
Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size. | C |
Note that this algorithm can be massively parallelized since it naturally follows the Big Data programming model MapReduce [Dean & Ghemawat, 2008], giving the framework the capability of effectively processing very large volumes of data.
In Algorithm 2 is shown the training process described earlier. Note that the line... | It is worth mentioning that with this simple mechanism it would be fairly straightforward to justify when needed, the reasons of the classification by using the values of confidence vectors in the hierarchy, as will be illustrated with a visual example at the end of Section 5.
Additionally, the classification is also i... | This brief subsection describes the training process, which is trivial. Only a dictionary of term-frequency pairs is needed for each category.
Then, during training, dictionaries are updated as new documents are processed —i.e. unseen terms are added and frequencies of already seen terms are updated. | Note that with this simple training method there is no need neither to store all documents nor to re-train from scratch every time a new training document is added, making the training incremental101010Even new categories could be dynamically added.. Additionally, there is no need to compute the document-term matrix be... | Otherwise, it can be omitted since, during classification, gv𝑔𝑣gvitalic_g italic_v can be dynamically computed based on the frequencies stored in the dictionaries.
It is worth mentioning that this algorithm could be easily parallelized by following the MapReduce model as well —for instance, all training documents co... | D |
}(\frac{1}{\sqrt{KT}})divide start_ARG 1 end_ARG start_ARG italic_T end_ARG ∑ start_POSTSUBSCRIPT italic_t ∈ [ italic_T ] end_POSTSUBSCRIPT blackboard_E ∥ ∇ italic_F ( bold_w start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) ∥ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ≤ caligraphic_O ( divide start_ARG 1 end_ARG start... |
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mo... | DEF-A achieves its best performance when λ=0.3𝜆0.3\lambda=0.3italic_λ = 0.3. In comparison, GMC+ outperforms DEF-A across different λ𝜆\lambdaitalic_λ values and shows a preference for a larger λ𝜆\lambdaitalic_λ (e.g., 0.5).
In the following experiments, we set λ𝜆\lambdaitalic_λ as 0.3 for DEF-A and 0.5 for GMC+. λ=... | Due to the larger compressed error introduced by RBGS compared with top-s𝑠sitalic_s when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge. Xu and Huang (2022) propose DEF-A to solve the convergence problem by using detached error fee... | Note that the convergence guarantee of DEF-A and its momentum variant for non-convex problems is lacking in (Xu and Huang, 2022). We provide the convergence analysis for GMC+, which can be seen as a global momentum variant of DEF-A. We eliminate the assumption of ring-allreduce compatibility from (Xu and Huang, 2022) a... | D |
The sparser an activation function is the more it compresses, sometimes at the expense of reconstruction error.
However, by visual inspection of Fig. 5 one could confirm that the learned kernels of the SAN with sparser activation maps (Extrema-Pool indices and Extrema) correspond to the reoccurring patterns in the data... | Figure 3: Inverse compression ratio (CR−1𝐶superscript𝑅1CR^{-1}italic_C italic_R start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT) vs. normalized reconstruction loss (ℒ~~ℒ\tilde{\mathcal{L}}over~ start_ARG caligraphic_L end_ARG) for the 15151515 datasets of Physionet for various kernel sizes.
The five inner plots with t... | The sparser an activation function is the more it compresses, sometimes at the expense of reconstruction error.
However, by visual inspection of Fig. 5 one could confirm that the learned kernels of the SAN with sparser activation maps (Extrema-Pool indices and Extrema) correspond to the reoccurring patterns in the data... | These results suggest that reconstruction error by itself is not a sufficient metric for decomposing data in interpretable components.
Trying to solely achieve lower reconstruction error (such as the case for the Identity activation function) produces noisy learned kernels, while using the combined measure of reconstru... | Comparing the differences of φ¯¯𝜑\bar{\varphi}over¯ start_ARG italic_φ end_ARG between the Identity, the ReLU and the rest sparse activation functions in Fig. 4LABEL:sub@subfig:flithos_m we notice that the latter produce a minimum region in which we observe interpretable kernels.
| C |
\end{split}start_ROW start_CELL roman_Δ = end_CELL start_CELL italic_A italic_E divide start_ARG roman_Δ italic_P end_ARG start_ARG ∑ italic_C start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ⋅ italic_P start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ⋅ ( italic_P start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT + roman_Δ italic_P ) end_A... | The process of SPBLLA let UAVs free from message exchange. Therefore, there is no waste of energy or time consumption between two iterations, which significantly improves learning efficiency. All UAVs are altering strategies with a certain probability of ω𝜔\omegaitalic_ω, which is determined by τ𝜏\tauitalic_τ and m𝑚... |
Figure 7: Effect of dynamic degree index τ𝜏\tauitalic_τ on SPBLLA (2×1052superscript1052\times 10^{5}2 × 10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT iterations). The result is the same as PBLLA, which illustrates that algorithm does not affect convergence states. |
In this part, we investigate the influence of environment dynamic on the network states. With different scenarios’ dynamic degree τ∈(0,∞)𝜏0\tau\in(0,\infty)italic_τ ∈ ( 0 , ∞ ), PBLLA and SPBLLA will converge to the maximizer of goal function with different altering strategy probability. Fig. 6 presents the influence... | Figure 8: Effect of dynamic degree index τ𝜏\tauitalic_τ on SPBLLA (2×1052superscript1052\times 10^{5}2 × 10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT iterations). The result is the same as PBLLA, which illustrates that algorithm does not affect convergence states.
| B |
\nabla\cdot\mathbf{v}\end{array}\right)under¯ start_ARG bold_italic_π end_ARG = - italic_μ ( start_ARRAY start_ROW start_CELL 2 divide start_ARG ∂ italic_v start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT end_ARG start_ARG ∂ italic_r end_ARG - divide start_ARG 2 end_ARG start_ARG 3 end_ARG ∇ ⋅ bold_v end_CELL start_CELL ... | Π¯rsubscript¯Π𝑟\displaystyle\overline{\Pi}_{r}over¯ start_ARG roman_Π end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT
=[−2Dr¯^∗(μ^r^(Dr^¯∗v¯r))−Dz¯^∗(μ^r^(Dr^¯∗v¯z+Dz^¯∗v¯r))]/r¯absentdelimited-[]absent2^¯𝐷𝑟^𝜇^𝑟¯^𝐷𝑟subscript¯𝑣𝑟absent^¯𝐷𝑧^𝜇^𝑟¯^𝐷𝑟subscript¯𝑣𝑧¯^𝐷𝑧subscript¯𝑣𝑟¯𝑟\displ... | Q¯πsubscript¯𝑄𝜋\displaystyle\overline{Q}_{\pi}over¯ start_ARG italic_Q end_ARG start_POSTSUBSCRIPT italic_π end_POSTSUBSCRIPT
=W¯^∗[μ^{2(Dr^¯∗v¯r)2+2(Dz^¯∗v¯z)2+(r^(∇^¯ω¯))2\displaystyle=\widehat{\overline{W}}*\biggl{[}\widehat{\mathbf{\mu}}\,\,\biggl% | =[−2Dz¯^∗(μ^r^(Dz^¯∗v¯z))−Dr¯^∗(μ^r^(Dr^¯∗v¯z+Dz^¯∗v¯r))]/r¯absentdelimited-[]absent2^¯𝐷𝑧^𝜇^𝑟¯^𝐷𝑧subscript¯𝑣𝑧absent^¯𝐷𝑟^𝜇^𝑟¯^𝐷𝑟subscript¯𝑣𝑧¯^𝐷𝑧subscript¯𝑣𝑟¯𝑟\displaystyle=\biggl{[}\underset{}{-2\widehat{\overline{Dz}}*\left(\widehat{%
\mathbf{\mu}}\,\,\widehat{r}\,\,\left(\overline{\wideh... | ∇^¯U¯¯^∇¯𝑈\displaystyle\overline{\widehat{\nabla}}\,\,\overline{U}over¯ start_ARG over^ start_ARG ∇ end_ARG end_ARG over¯ start_ARG italic_U end_ARG
=(Dr^¯∗U¯)𝐫^+(Dz^¯∗U¯)𝐳^absent¯^𝐷𝑟¯𝑈^𝐫¯^𝐷𝑧¯𝑈^𝐳\displaystyle=\left(\overline{\widehat{Dr}}*\overline{U}\right)\hat{\mathbf{r}% | A |
Let r𝑟ritalic_r be the relation on 𝒞Rsubscript𝒞𝑅\mathcal{C}_{R}caligraphic_C start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT given to the left of Figure 12.
Its abstract lattice ℒrsubscriptℒ𝑟\mathcal{L}_{r}caligraphic_L start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT is represented to the right. | For convenience we give in Table 7 the list of all possible realities
along with the abstract tuples which will be interpreted as counter-examples to A→B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B or B→A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A. | The tuples t1subscript𝑡1t_{1}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, t4subscript𝑡4t_{4}italic_t start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT represent a counter-example to BC→A𝐵𝐶→𝐴BC\operatorname{\rightarrow}Aitalic_B italic_C → italic_A for g1subscript𝑔1g_{1}italic_g start_POSTSUBSCRIPT 1 end_POSTSUBSCRI... | If no confusion is possible, the subscript R𝑅Ritalic_R will be omitted, i.e., we will use
≤,∧,∨\leq,\operatorname{\land},\operatorname{\lor}≤ , ∧ , ∨ instead of ≤R,∧R,∨Rsubscript𝑅subscript𝑅subscript𝑅\leq_{R},\operatorname{\land}_{R},\operatorname{\lor}_{R}≤ start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT , ∧ start_P... | First, remark that both A→B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B and B→A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A are possible.
Indeed, if we set g=⟨b,a⟩𝑔𝑏𝑎g=\langle b,a\rangleitalic_g = ⟨ italic_b , italic_a ⟩ or g=⟨a,1⟩𝑔𝑎1g=\langle a,1\rangleitalic_g = ⟨ italic_a , 1 ⟩, then r⊧gA→... | A |
The sources of DQN variance are Approximation Gradient Error(AGE)[23] and Target Approximation Error(TAE)[24]. In Approximation Gradient Error, the error in gradient direction estimation of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes b... |
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trails and averaged them. We have evaluated Dropout-DQN algorithm on CARTPOLE problem from the Class... |
The results in Figure 3 show that using DQN with different Dropout methods result in better-preforming policies and less variability as the reduced standard deviation between the variants indicate to. In table 1, Wilcoxon Sign-Ranked test was used to analyze the effect of Variance before applying Dropout (DQN) and aft... | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optim... |
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments, this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning namely the supervised learning and the unsupervised learning. Rein... | C |
Weakly supervised segmentation using image-level labels versus a few images with segmentation annotations. Most new weakly supervised localization methods apply attention maps or region proposals in a multiple instance learning formulations. While attention maps can be noisy, leading to erroneously highlighted regions... |
While most deep segmentation models for medical image analysis rely on only clinical images for their predictions, there is often multi-modal patient data in the form of other imaging modalities as well as patient metadata that can provide valuable information, which most deep segmentation models do not use. Therefore... | Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr... |
We provide comprehensive coverage of research contributions in the field of semantic segmentation of natural and medical images. In terms of medical imaging modalities, we cover the literature pertaining to both 2D (RGB and grayscale) as well as volumetric medical images. |
Because of the large number of imaging modalities, the significant signal noise present in imaging modalities such as PET and ultrasound, and the limited amount of medical imaging data mainly because of high acquisition cost compounded by legal, ethical, and privacy issues, it is difficult to develop universal solutio... | A |
The best case is the bipartite graph, where the MAXCUT is known and it cuts all the graph edges.
The partition 𝐳𝐳{\mathbf{z}}bold_z found by our spectral algorithm on bipartite graphs is optimal, i.e., γ(𝐳)=MAXCUT/|ℰ|=1𝛾𝐳MAXCUTℰ1\gamma({\mathbf{z}})=\texttt{\small{MAXCUT}}{}/|\mathcal{E}|=1italic_γ ( bold_z ) = M... | From Fig. 9(b) we notice that the graphs 𝐀(1)superscript𝐀1{\mathbf{A}}^{(1)}bold_A start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT and 𝐀(2)superscript𝐀2{\mathbf{A}}^{(2)}bold_A start_POSTSUPERSCRIPT ( 2 ) end_POSTSUPERSCRIPT in GRACLUS have additional nodes that are disconnected.
As discussed in Sect. V, these are ... | In graphs that are close to be bipartite or, in general, that have a very sparse and regular connectivity, a large percentage of edges can be cut if the nodes are partitioned correctly.
Indeed, for these graphs the MAXCUT is usually large and is closer to the upper-bound in (11). | We recall that in those cases the MAXCUT is unknown and the gaps between the lower bound (0.5) and the upper bound (λmaxs/2subscriptsuperscript𝜆𝑠max2\lambda^{s}_{\text{max}}/2italic_λ start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT start_POSTSUBSCRIPT max end_POSTSUBSCRIPT / 2) can be arbitrarily large.
| We recall that in those cases the MAXCUT is unknown and the gaps between the lower bound (0.5) and the upper bound (λmaxs/2subscriptsuperscript𝜆𝑠max2\lambda^{s}_{\text{max}}/2italic_λ start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT start_POSTSUBSCRIPT max end_POSTSUBSCRIPT / 2) can be arbitrarily large.
| B |
First, we analyze the performance of state-of-the-art methods for mapping random forests into neural networks and neural random forest imitation. The results are shown in Figure 4 for different numbers of training examples per class.
For each method, the average number of parameters of the generated networks across all... | The proposed method for generating labeled data from random forests by analyzing the decision boundaries enables training neural networks that imitate the random forests.
For instance, in the case of 5555 training examples per class, a two-hidden-layer network with 16161616 neurons in both layers already achieves the s... | Here, we additionally include decision trees, support vector machines, random forests, and neural networks in the comparison. The evaluation is performed on all nine datasets, and results for different numbers of training examples are shown (increasing from left to right). The overall performance of each method is summ... | NRFI with and without the original data is shown for different network architectures. The smallest architecture has 2222 neurons in both hidden layers and the largest 128128128128. For NRFI (gen-ori), we can see that a network with 16161616 neurons in both hidden layers (NN-16-16) is already sufficient to learn the dec... | NRFI introduces imitation instead of direct mapping. In the following, a network architecture with 32323232 neurons in both hidden layers is selected.
The previous analysis has shown that this architecture is capable of imitating the random forests (see Figure 4 for details) across all datasets and different numbers of... | C |
Theoretically, we establish the sample efficiency of OPPO in an episodic setting of Markov decision processes (MDPs) with full-information feedback, where the transition dynamics are linear in features (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020). In particular, we allow the trans... |
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed as OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... | Moreover, we prove that, even when the reward functions are adversarially chosen across the episodes, OPPO attains the same regret in terms of competing with the globally optimal policy in hindsight (Cesa-Bianchi and Lugosi, 2006; Bubeck and Cesa-Bianchi, 2012). In comparison, existing algorithms based on value iterati... |
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... | C |
For instance, the prior p(𝐖)𝑝𝐖p(\mathbf{W})italic_p ( bold_W ) allows us to incorporate information about properties, such as sparsity, that we expect to be present in the DNN.
In Section 3.1.3, we review weight quantization approaches based on the Bayesian paradigm, and in Section 3.2.3, we review pruning approach... | We presented an overview of the vast literature of the highly active research area concerned with resource efficiency of DNN inference.
We have identified three major directions of research, namely (i) network quantization, (ii) network pruning, and (iii) approaches that target efficiency at the structural level. | In this section, we provide a comprehensive overview of methods that enhance the efficiency of DNNs regarding memory footprint, computation time, and energy requirements.
We have identified three different major approaches that aim to reduce the computational complexity of DNNs, i.e., (i) weight and activation quantiza... | Sparse attention mechanisms and approximations have been proposed to address this issue and improve the efficiency of transformers for longer sequences.
We refer to the work of Tay et al. (2022) which provides an overview of various transformer-based architectures that focus on efficiency, reduced memory-footprint and ... | This paper is dedicated to giving an extensive overview of the current directions of research of these approaches, all of which are concerned with reducing the model size and/or improving inference efficiency while at the same time maintaining accuracy levels close to state-of-the-art models.
We have identified three m... | B |
Take any embedding of 𝕊1superscript𝕊1\mathbb{S}^{1}blackboard_S start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT into ℝ4superscriptℝ4\mathbb{R}^{4}blackboard_R start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT and let ϵ>0italic-ϵ0\epsilon>0italic_ϵ > 0 be small. Consider the boundary Cϵsubscript𝐶italic-ϵC_{\epsilon}italic_C st... | Given a closed connected n𝑛nitalic_n-dimensional metric manifold M𝑀Mitalic_M and a field 𝔽𝔽\mathbb{F}blackboard_F, we define the strong filling radius sFillRad(M;𝔽)sFillRad𝑀𝔽\mathrm{sFillRad}(M;\mathbb{F})roman_sFillRad ( italic_M ; blackboard_F ) as half the length of the largest interval in the n𝑛nitalic_n-t... |
In this section, we recall the notions of spread and filling radius, as well as their relationship. In particular, we prove a number of statements about the filling radius of a closed connected manifold. Moreover, we consider a generalization of the filling radius and also define a strong notion of filling radius whic... | The reader familiar with concepts from applied algebraic topology will have noticed that the definition of strong filling radius of an n𝑛nitalic_n-dimensional metric manifold coincides with (one half of) the maximal persistence of its associated Vietoris-Rips persistence module. In fact, for each nonnegative integer k... | By invoking the relationship between the Vietoris-Rips persistent homology and the strong filling radius, one can verify that the strong filling radii of two n𝑛nitalic_n-dimensional metric manifolds M𝑀Mitalic_M and N𝑁Nitalic_N are close if these two manifolds are similar in the Gromov-Hausdorff distance sense.
| D |
A tick indicates that the tool has the corresponding features/capabilities, while a tick in parentheses means the tool offers implicit support (i.e., it could be done manually, in an ad hoc manner, but is not explicitly supported).
The table does not include works that do not contain a concrete visualization tool as th... |
VisCoDeR [22] supports the comparison between multiple projections generated by different DR techniques and parameter settings, similarly to our initial parameter exploration, using a scatterplot view with an on-top heatmap visualization for evaluating the quality of these projections. In contrast to t-viSNE, it does ... | After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ... | After the analysis, we decided on GEP mainly because it has a good overlap of functionalities with t-viSNE, is well-known, available online, and works correctly with user-provided data. VisCoDeR [22], for example, also provides an overlap of features, but the focus of the tool and the tasks it supports—the comparison o... | we present t-viSNE, a tool designed to support the interactive exploration of t-SNE projections (an extension to our previous poster abstract [17]). In contrast to other, more general approaches, t-viSNE was designed with the specific problems related to the investigation of t-SNE projections in mind, bringing to light... | A |
The second taxonomy classifies the reviewed algorithms based exclusively on their behavior, i.e., how they generate new candidate solutions for the function to be optimized. Our aim is to group together algorithms with similar behavior, without considering its inspirational metaphor.
|
Another criterion to group SI based algorithms is the specific behavior of the animal that captured the attention of researchers and inspired the algorithm. This second criterion is also reflected in Tables 3-6, classifying each algorithm as belonging to one of the following behavioral patterns: | We believe that this dual criterion can be very useful for researchers. The first one helps classify the different proposals by their origin of inspiration, whereas the second one provides valuable information about their algorithmic similarities and differences. This double classification allows researchers to identif... |
Considering the classifications obtained in our study, we have critically examined the reviewed literature classification in the different taxonomies proposed in this work. The goal is to analyze if there is a relationship between the algorithms classified in the same category in one taxonomy and their classification ... | Comparing the two taxonomies to each other and the algorithms falling into each of their categories, it can be observed that there is not a strong relationship between them. Interestingly, this unveils that features characterizing one algorithm are loosely associated with its inspirational model. For instance, algorith... | B |
However, the existing methods are limited to graph type data while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods.
In this paper, we propo... | Classical clustering models work poorly on large scale datasets. Instead, DEC and SpectralNet work better on the large scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph type datasets, they fail on the general datasets, which is probably caused by the fact that the graph... | However, the existing methods are limited to graph type data while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods.
In this paper, we propo... |
Figure 1: Framework of AdaGAE. k0subscript𝑘0k_{0}italic_k start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update ... | (1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspires us to devise a novel architecture for dec... | D |
Path Maximum Transmission Unit Discovery (PMTUD) determines the MTU size on the network path between two IP hosts. The process starts by setting the Don’t Fragment (DF) bit in IP headers. Any router along the path whose MTU is smaller than the packet will drop the packet, and send back an ICMP Fragmentation Needed / P... | Methodology. The core idea of the Path MTU Discovery (PMTUD) based tool is to send the ICMP Packet too Big (PTB) message from a spoofed source IP address, belonging to the tested network, and in the 8 bytes payload of the ICMP to insert the real IP address belonging to the prober. If the network does not enforce ingres... | Methodology. We send a DNS request to the tested network from a spoofed IP address belonging to the tested network. If the network does not enforce ingress filtering, the request will arrive at the DNS resolver on that network. A query from a spoofed source IP address will cause the response to be sent to the IP addres... |
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the... |
Path Maximum Transmission Unit Discovery (PMTUD) determines the MTU size on the network path between two IP hosts. The process starts by setting the Don’t Fragment (DF) bit in IP headers. Any router along the path whose MTU is smaller than the packet will drop the packet, and send back an ICMP Fragmentation Needed / P... | A |
For each batch T𝑇Titalic_T from 3 through 10, the batches 1,2,…,T−112…𝑇11,2,\ldots,T-11 , 2 , … , italic_T - 1 were used to train skill NN and context+skill NN models for 30 random initializations of the starting weights. The accuracy was measured classifying examples from batch T𝑇Titalic_T (Fig. 3A, Table 1, Skill... |
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal... |
Second, skill NN and context+skill NN models were compared. The context-based network extracts features from preceding batches in sequence in order to model how the sensors drift over time. When added to the feedforward NN representation, such contextual information resulted in improved ability to compensate for senso... | An alternative approach is to emulate adaptation in natural sensor systems. The system expects and automatically adapts to sensor drift, and is thus able to maintain its accuracy for a long time. In this manner, the lifetime of sensor systems can be extended without recalibration.
| While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape... | A |
Let ti+∈𝒯+subscriptsuperscript𝑡𝑖superscript𝒯t^{+}_{i}\in\mathcal{T}^{+}italic_t start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ∈ caligraphic_T start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT, and let q1subscript𝑞1q_{1}italic_q start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT be a poi... | Case (ii): p∈P[st+(i−1)−5k2+1,st(i)−5k2]𝑝𝑃𝑠superscript𝑡𝑖15superscript𝑘21normal-st𝑖5superscript𝑘2p\in P[st^{+}\mkern-2.0mu(i-1)-5k^{2}+1,\mathrm{st}(i)-5k^{2}]italic_p ∈ italic_P [ italic_s italic_t start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT ( italic_i - 1 ) - 5 italic_k start_POSTSUPERSCRIPT 2 end_POSTSU... | A(3)[i,q1,q2]:={The length of the shortest path from q1 to q2 that visits all points in P[1,st(i)], such that the neighbour of q2 is a point in P[1,st(i)−5k2].assignsuperscript𝐴3𝑖subscript𝑞1subscript𝑞2casesThe length of the shortest path from q1 to q2 that visits all points in P[1,st(i)], such that the neig... | st}(i)]$, such that the neighbour of $q_{2}$ is a point in $P[1,\mathrm{st}(i)%
-5k^{2}]$.}\end{cases}italic_A start_POSTSUPERSCRIPT ( 3 ) end_POSTSUPERSCRIPT [ italic_i , italic_q start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_q start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ] := { start_ROW start_CELL The length of the sh... | Not shown is the property that the neighbour of q2subscript𝑞2q_{2}italic_q start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT is a point in P[1,st(i)−5k2]𝑃1st𝑖5superscript𝑘2P[1,\mathrm{st}(i)-5k^{2}]italic_P [ 1 , roman_st ( italic_i ) - 5 italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ].
| B |
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview) but many of them present the product as a subgroup of an automaton/self-similar group and, thus, loose the self-similarity property. An exception here is a line of research based on the Bel... |
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview) but many of them present the product as a subgroup of an automaton/self-similar group and, thus, loose the self-similarity property. An exception here is a line of research based on the Bel... | While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there ... | from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups111Note that the c... | However, there do not seem to be constructions for presenting arbitrary free products of self-similar groups in a self-similar way. For semigroups, on the other hand, such results do exist. In fact, the free product of two automaton semigroups S𝑆Sitalic_S and T𝑇Titalic_T is always at least
very close to being an auto... | D |
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... | It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in ... |
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the p... |
The usage of visual cues and sensitivities in existing methods is superfluous because the results indicate that performance improves through degradation of training accuracy. We hypothesize that simple regularization that does not rely on cues or sensitivities can also achieve large performance gains for VQA-CP. To te... | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... | B |
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application... |
In order to address the requirement of a language model for the privacy domain, we created PrivBERT. BERT is a contextualized word representation model that is pretrained using bidirectional transformers (Devlin et al., 2019). It was pretrained on the masked language modelling and the next sentence prediction tasks an... | Modern robust language models, such as transformer-based architectures, benefit from increasingly large training sets. These models can be used on downstream tasks (Devlin et al., 2019) to improve performance. Results have shown that in-domain fine tuning of such pre-trained language models have produced a significant ... |
Table 3 shows the results for the answer sentence selection task comparing the performance between BERT and PrivBERT. Results from BERT are as reported by Ravichander et al. (2019). PrivBERT achieves state of the art results improving on the results of BERT by about 6%. PrivBERT therefore has been shown to achieve sta... | Table 2 shows the results for the data practice classification task comparing the performance between RoBERTa, PrivBERT and Polisis (Harkous et al., 2018), a CNN based classification model. We report reproduced results for Polisis since the original paper takes into account both the presence and absence of a label whil... | C |
Following our design goals and derived analytical tasks, we implemented StackGenVis, an interactive VA system that allows users to build powerful stacking ensembles from scratch. Our system consists of six main interactive visualization panels (see StackGenVis: Alignment of Data, Algorithms, and Models for Stacking En... | and (v) we track the history of the previously stored stacking ensembles in StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(b) and compare their performances against the active stacking ensemble—the one not yet stored in the history—in StackGenVis: Alignme... | (ii) in the next algorithm exploration phase, we compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models;
(iii) during the data wrangling phase, we manipulate the instances and features with two different views for each of them; (iv) model explo... | Predictions’ Space.
The goal of the predictions’ space visualization (StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(f)) is to show an overview of the performance of all models of the current stack for different instances. | The model exploration phase is perhaps the most important step on the way to build a good ensemble. It focuses on comparing and exploring different models both individually and in groups. Due to the page limits, we now assume that we selected the most performant models, removed the remaining from the stack, and reached... | B |
We thus have 3333 cases, depending on the value of the tuple
(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))𝑝𝑣delimited-[]010𝑝𝑣delimited-[]323𝑝𝑣delimited-[]313𝑝𝑣delimited-[]003(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))( italic_p ( italic_v , [ 010 ] ) , italic_p ( italic_v , [ 323 ] ) , italic_p ( italic_v... | {0¯,1¯,2¯,3¯,[013],[010],[323],[313],[112],[003],[113]}.¯0¯1¯2¯3delimited-[]013delimited-[]010delimited-[]323delimited-[]313delimited-[]112delimited-[]003delimited-[]113\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],%
[112],[003],[113]\}.{ over¯ start_ARG 0 end_ARG , over¯ start_ARG 1 end... | Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of
(v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that | p(v,[013])=p(v,[313])=p(v,[113])=1𝑝𝑣delimited-[]013𝑝𝑣delimited-[]313𝑝𝑣delimited-[]1131p(v,[013])=p(v,[313])=p(v,[113])=1italic_p ( italic_v , [ 013 ] ) = italic_p ( italic_v , [ 313 ] ) = italic_p ( italic_v , [ 113 ] ) = 1.
Similarly, when f=[112]𝑓delimited-[]112f=[112]italic_f = [ 112 ], | By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and
(v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these | D |
For both BLEU and C Score, Jac Score is around 1 in each cluster, which means the persona descriptions are not similar. The dialogue quantity also seems similar among different clusters. So we can conclude that data quantity and task profile does not have a major impact on the fine-tuning process.
| Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2).
When the data quantity i... |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct the setting that tasks are similar to each other. For a fair comparison, each task on this setting also has 120 and 1200 utterances o... | To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML :
Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor... | D |
In addition, the AOAs and AODs should be tracked in the highly dynamic UAV mmWave network.
To this end, in Section IV we will further propose a novel predictive AOA/AOD tracking scheme in conjunction with tracking error treatment to address the high mobility challenge, then we integrate these operations into the codebo... |
The specialized codebook design of the DRE-covered CCA for multi-UAV mobile mmWave communications. Under the guidance of the proposed framework, a novel hierarchical codebook is designed to encompass both the subarray patterns and beam patterns. The newly proposed CA codebook can fully exploit the potentials of the DR... |
The rest of this paper is as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Sectio... | After the discussion on the characteristics of CCA, in this subsection, we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the ... |
In this section, we characterize the CCA from several relevent aspects in III-A and design a specialized hierarchical codebook for the DRE-covered CCA in III-B, wherein the subarray activation/partitioning patterns (in terms of subarray location and size) are carefully integrated with the angular domain beam patterns ... | D |
We start in this section by giving proofs only for the 1111-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument... | To conclude this section, we stress that although the 1111-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | We start in this section by giving proofs only for the 1111-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument... | This will be bootstrapped to the multi-color case in later sections. Note that the 1111-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on
the left must be connected, via the unique edge relation, to every node on the ri... | The requirement that M¯|N¯conditional¯𝑀¯𝑁\bar{M}|\bar{N}over¯ start_ARG italic_M end_ARG | over¯ start_ARG italic_N end_ARG is extra big enough ensures that we have enough edges to perform the edge swapping.
This completes the proof for case 2 when the assumptions (a1) and (a2) hold. | C |
Let the initial distribution ρ0subscript𝜌0\rho_{0}italic_ρ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT be the standard Gaussian distribution N(0,ID)𝑁0subscript𝐼𝐷N(0,I_{D})italic_N ( 0 , italic_I start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT ). Under certain regularity conditions, ρ^⌊t/ϵ⌋(m)superscriptsubscript^𝜌𝑡it... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | The proof of Proposition 3.1 is based on the propagation of chaos (Sznitman, 1991; Mei et al., 2018, 2019).
In contrast to Mei et al. (2018, 2019), the PDE in (3.4) can not be cast as a gradient flow, since there does not exist a corresponding energy functional. Thus, their analysis is not directly applicable to our se... | Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che... | The key to our analysis is a mean-field perspective, which allows us to associate the evolution of a finite-dimensional parameter with its limiting counterpart over an infinite-dimensional Wasserstein space (Villani, 2003, 2008; Ambrosio et al., 2008; Ambrosio and Gigli, 2013). Specifically, by exploiting the permutati... | B |
Table 5 shows that: 1) Sharing parameters for the computation (Equation 6) of the depth-wise LSTM hidden state significantly hampers performance, which is consistent with our conjecture. 2) Sharing parameters for the computation of gates (Equations 2, 3, 4) leads to slightly higher BLEU with fewer parameters introduce... | It is a common problem that increasing the depth does not always lead to better performance, whether with residual connections Li et al. (2022b) or other previous studies on deep Transformers Bapna et al. (2018); Wang et al. (2019); Li et al. (2022a), and the use of wider models is the usual method of choice for furthe... |
We implemented our approach based on the Neutron implementation of the Transformer Xu and Liu (2019). To show the effects of depth-wise LSTMs on the 6-layer Transformer, we first conducted experiments on the WMT 14 English to German and English to French news translation tasks to compare with the Transformer baseline ... |
Our approach with the Transformer base setting brings about more improvements on the English-German task than that on the English-French task. We conjecture that maybe because the performance on the English-French task using a large dataset (∼similar-to\sim∼36363636M sentence pairs) may rely more on the capacity of th... |
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ... | D |
A∈⟦φ⟧XA\in\llbracket\varphi\rrbracket_{X}italic_A ∈ ⟦ italic_φ ⟧ start_POSTSUBSCRIPT italic_X end_POSTSUBSCRIPT and B𝐵Bitalic_B is a σσ\upsigmaroman_σ-structure in X𝑋Xitalic_X such
that A≤B𝐴𝐵A\leq Bitalic_A ≤ italic_B, then B∈⟦φ⟧XB\in\llbracket\varphi\rrbracket_{X}italic_B ∈ ⟦ italic_φ ⟧ start_POSTSUBSCRIPT italic_... | (a1,…,an)∈𝐑Asubscript𝑎1…subscript𝑎𝑛superscript𝐑𝐴(a_{1},\dots,a_{n})\in\mathbf{R}^{\!A}( italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , italic_a start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ) ∈ bold_R start_POSTSUPERSCRIPT italic_A end_POSTSUPERSCRIPT, (f(a1),…,f(an))∈𝐑B𝑓subscript𝑎1…𝑓subscript𝑎𝑛s... | and such that (a1,…,an)∈𝐑f(A)⇔A⊧ρR(a1,…,an)iffsubscript𝑎1…subscript𝑎𝑛superscript𝐑𝑓𝐴models𝐴subscript𝜌𝑅subscript𝑎1…subscript𝑎𝑛(a_{1},\dots,a_{n})\in\mathbf{R}^{\!f(A)}\iff A\models\rho_{R}(a_{1},\dots,a_{%
n})( italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , italic_a start_POSTSUBSCRIPT italic_n end... | by |fi(A)|≜|A|≜subscript𝑓𝑖𝐴𝐴|f_{i}(A)|\triangleq|A|| italic_f start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_A ) | ≜ | italic_A | and (a1,…,an)∈𝐑fi(A)subscript𝑎1…subscript𝑎𝑛superscript𝐑subscript𝑓𝑖𝐴(a_{1},\dots,a_{n})\in\mathbf{R}^{\!f_{i}(A)}( italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … ... | (a1,…,an)∈|A|nsubscript𝑎1…subscript𝑎𝑛superscript𝐴𝑛(a_{1},\dots,a_{n})\in|A|^{n}( italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , italic_a start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ) ∈ | italic_A | start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT, (f(a1),…,f(an))∈𝐑B𝑓subscript𝑎1…𝑓subscript𝑎𝑛sup... | A |
IMAGES captured by wide-angle camera usually suffer from a strong distortion, which influences the important scene perception tasks such as the object detection and recognition [1, 2, 3], semantic segmentation [4, 5], and image denoising [6, 7]. The distortion rectification tries to recover the real geometric attribut... | In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl... | Accurately estimating the distortion parameters derived from a specific camera, is a crucial step in distortion rectification. However, two main limitations that make the distortion parameters learning challenging. (i) The distortion parameters are not observable and hard to learn from a single distorted image, such as... |
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify... | Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and causes the insufficient distortion perception. To bridge the gap between image feature and calibration objective... | B |
The momentum coefficient is set as 0.9 and the weight decay is set as 0.001. The initial learning rate is selected from {0.001,0.01,0.1}0.0010.010.1\{0.001,0.01,0.1\}{ 0.001 , 0.01 , 0.1 } according to the performance on the validation set. We do not adopt any learning rate decay or warm-up strategies.
The model is tra... | Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD.
In large-batch training, SNGM achieves better training loss and test accuracy than the fou... | Hence, with the same number of gradient computations, SNGM can adopt a larger batch size than MSGD to converge to the ϵitalic-ϵ\epsilonitalic_ϵ-stationary point.
Empirical results on deep learning further verify that SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods... | Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b... | showed that existing SGD methods with a large batch size will lead to a drop in the generalization accuracy of deep learning models. Figure 1
shows a comparison of training loss and test accuracy between MSGD with a small batch size and MSGD with a large batch size. We can find that large-batch training indeed | B |
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any ρ𝜌\rhoitalic_ρ-approximation algorithm for the robust outlier problem into a (ρ+2)𝜌2(\rho+2)( italic_ρ + 2 )-approximation algorithm for the corresponding two-stage sto... | The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any ρ𝜌\rhoitalic_ρ-approximation algorithm for the robust outlier problem into a (ρ+2)𝜌2(\rho+2)( italic_ρ + 2 )-approximation algorithm for the corresponding two-stage sto... | We now describe a generic method of transforming a given 𝒫𝒫\mathcal{P}caligraphic_P-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it obtain our 11-a... |
In this section we tackle the simplest problem setting, designing an efficiently-generalizable 3333-approximation algorithm for homogeneous 2S-Sup-Poly. To begin, we are given a list of scenarios Q𝑄Qitalic_Q together with their probabilities pAsubscript𝑝𝐴p_{A}italic_p start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT,... |
We follow up with 3333-approximations for the homogeneous robust outlier MatSup and MuSup problems, which are slight variations on algorithms of [6] (specifically, our approach in Section 4.1 is a variation on their solve-or-cut methods). In Section 5, we describe a 9-approximation algorithm for an inhomogeneous MatSu... | D |
The ways to deal with the convex cost functions with bounded or Lipschitz continuous (sub)gradients employ the boundness or Lipschitz continuity of the (sub)gradients, respectively ([4], [7], [13]-[17]).
In [13], the gradients of local cost functions satisfy Lipschitz continuity, in which, the key step of analyzing the... | That is, the mean square error at the next time can be controlled by that at the
previous time and the consensus error. However, this can not be obtained for the case with the linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded and the inequality (28) in [15] does n... | As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error, which leads the nonegative supermartingale converg... | I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition.
The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi... | (Lemma 3.1).
To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (... | A |
This experiment measures the information loss of MuCo. Note that, the mechanism of MuCo is much more different from that of generalization. Thus, for the sake of fairness, we compare the information loss of MuCo and Mondrian when they provide the same level of protections. Then, the experiment measures the effectivene... |
We observe that the results of MuCo are much better than that of Mondrian and Anatomy. The primary reason is that MuCo retains the most distributions of the original QI values and the results of queries are specific records rather than groups. Consequently, the accuracy of query answering of MuCo is much better and mo... |
Results from Figure 10 show that the increase of l𝑙litalic_l lowers the information loss but raises the relative error rate. It is mainly because the number of tuples in each group increases with the growth of l𝑙litalic_l. On the one hand, in random output tables, the probabilities that tuples have to cover on the Q... |
Observing from Figure 7(a), the information loss of MuCo increases with the decrease of parameter δ𝛿\deltaitalic_δ. According to Corollary 3.2, each QI value in the released table corresponds to more records with the reduction of δ𝛿\deltaitalic_δ, causing that more records have to be involved for covering on the QI ... |
This experiment measures the information loss of MuCo. Note that, the mechanism of MuCo is much more different from that of generalization. Thus, for the sake of fairness, we compare the information loss of MuCo and Mondrian when they provide the same level of protections. Then, the experiment measures the effectivene... | C |
In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
| Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.... | HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an... |
Due to limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger mask to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (20... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | B |
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| We denote by εi:{−1,1}n→{−1,1}:subscript𝜀𝑖→superscript11𝑛11\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT : { - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → { - 1 , 1 } the projection onto the i𝑖iitalic_i-s coordinate: εi(δ1,…,δn)=δisubscript𝜀𝑖subsc... |
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... |
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma s... | D |
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inh... |
After showing the action-value function estimate is the optimistic upper bound of the optimal action-value function, we can derive the dynamic regret bound within one epoch via recursive regret decomposition. The dynamic regret within one epoch for Algorithm 1 with the knowledge of B𝜽,ℰsubscript𝐵𝜽ℰB_{\bm{\theta},\m... |
In practice, the transition function ℙℙ\mathbb{P}blackboard_P is unknown, and the state space might be so large that it is impossible for the learner to fully explore all states. If we parametrize the action-value function in a linear form as ⟨ϕ(⋅,⋅),𝒘⟩bold-italic-ϕ⋅⋅𝒘\langle\bm{\phi}(\cdot,\cdot),\bm{w}\rangle⟨ bo... |
Our proposed algorithm LSVI-UCB-Restart has two key ingredients: least-squares value iteration with upper confidence bound to properly handle the exploration-exploitation trade-off (Jin et al., 2020), and restart strategy to adapt to the unknown nonstationarity. Our algorithm is summarized in Algorithm 1. From a high-... |
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202... | C |
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... |
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,... | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a... | A |
We conduct experiments to investigate the performance gain concerning entity degrees. Typically, an entity with a higher degree indicates that it has more neighboring entities. Consequently, the computation of attention scores to aggregate these neighbors becomes crucial.
|
We conduct experiments to explore the impact of the numbers of unseen entities on the performance in open-world entity alignment. We present the results on the ZH-EN datasets in Figure 6. Clearly, the performance gain achieved by leveraging our method significantly increases when there are more unseen entities. For ex... | Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg... |
The results on the ZH-EN dataset are depicted in Figure 7. For entities with only a few neighbors, the advantage of leveraging DAN is not significant. However, as the degree increases, incorporating DAN yields more performance gain. This upward trend halts until the degree exceeds 20. Overall, DAN exhibits significant... | Table 4 presents the results of conventional entity alignment. decentRL achieves state-of-the-art performance, surpassing all others in Hits@1 and MRR. AliNet [39], a hybrid method combining GCN and GAT, performs better than the methods solely based on GAT or GCN on many metrics. Nonetheless, across most metrics and da... | C |
The ensemble-based baseline contains three individual encoder-decoder networks. As shown in Fig. 4, three images are generated from each model with the same input. We do not average the outputs of the three models. In (a), we use the image of digit ‘0’ as the input and generate a prediction from each network in the en... |
The related exploration methods aim to remove the stochasticity of the dynamics rather than modeling it. For example, Inverse Dynamics [10], Random Features [11], and EMI [30] learn a feature space to remove the task-irrelevant information in feature space such as white-noise. Curiosity-Bottleneck [31] and Dynamic Bot... | We analyze the possible reasons in the following. (i) The probabilistic-ensemble model proposed in [48] is used in continuous control tasks, where the state is low-dimensional and unstructured. However, Noisy-Mnist has high-dimensional image-based observations. The probabilistic ensemble may not suitable for this setti... | As an example, we model the transition dynamics in MDP of ‘Noisy-Mnist’ in Fig. 2. We first use an ensemble-based model that contains three individual encoder-decoder networks as a baseline. According to a resent research in model-based RL [48], the ensemble model with probabilistic neural networks achieves the state-o... |
We implement a CVAE-based exploration algorithm by modifying the prior of VDM to a standard Gaussian444The code is released at https://github.com/Baichenjia/CAVE_NoisyMinist (for Noisy-Mnist) and https://github.com/Baichenjia/CVAE_exploration (for other tasks) for reproducibility and further improvement.. For Noisy-Mn... | B |
If we would add nodes to make the grid symmetric or tensorial, then
the number of nodes of the resulting (sparse) tensorial grid would scale exponentially 𝒪(nm)𝒪superscript𝑛𝑚\mathcal{O}(n^{m})caligraphic_O ( italic_n start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ) with space dimension m∈ℕ𝑚ℕm\in\mathbb{N}ital... |
We realize the algorithm of Carl de Boor and Amon Ros [28, 29] in terms of Corollary 6.5 in case of the torus M=𝕋R,r2𝑀subscriptsuperscript𝕋2𝑅𝑟M=\mathbb{T}^{2}_{R,r}italic_M = blackboard_T start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_R , italic_r end_POSTSUBSCRIPT. That is, we consider | We complement the established notion of unisolvent nodes by the dual notion of unisolvence. That is: For given arbitrary nodes P𝑃Pitalic_P, determine the polynomial space ΠΠ\Piroman_Π such that
P𝑃Pitalic_P is unisolvent with respect to ΠΠ\Piroman_Π. In doing so, we revisit earlier results by Carl de Boor and Amon Ros... | Here, we answer Questions 1–2.
To do so, we generalize the notion of unisolvent nodes PAsubscript𝑃𝐴P_{A}italic_P start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT, A⊆ℕm𝐴superscriptℕ𝑚A\subseteq\mathbb{N}^{m}italic_A ⊆ blackboard_N start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT to non-tensorial grids. This allows us... | for a given polynomial space ΠΠ\Piroman_Π and a set of nodes P⊆ℝm𝑃superscriptℝ𝑚P\subseteq\mathbb{R}^{m}italic_P ⊆ blackboard_R start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT that is not unisolvent with respect to ΠΠ\Piroman_Π,
find a maximum subset P0⊆Psubscript𝑃0𝑃P_{0}\subseteq Pitalic_P start_POSTSUBSCRIPT 0 ... | B |
Several data-efficient two-sample tests [20, 21, 22] are constructed based on Maximum Mean Discrepancy (MMD), which quantifies the distance between two distributions by introducing test functions in a Reproducing Kernel Hilbert Space (RKHS).
However, it is pointed out in [23] that when the bandwidth is chosen based on ... | The orthogonal constraint on the projection mapping A𝐴Aitalic_A is for normalization, such that any two different projection mappings have distinct projection directions.
The projected Wasserstein distance can also be viewed as a special case of integral probability metric with the function space | On the one hand, it should be rich enough to claim μ=ν𝜇𝜈\mu=\nuitalic_μ = italic_ν if the metric vanishes.
On the other hand, to control the type-I error, the function space should also be relatively small so that the empirical estimate of IPM decays quickly into zero. | In other words, we only scale the first two diagonal entries in the covariance matrix of ν𝜈\nuitalic_ν to make the hypothesis testing problem difficult to perform.
We compare the performance of the PW test with the MMD test discussed in [20], where the kernel function is chosen to be the standard Gaussian kernel with ... | It is shown in [39] that its empirical estimate decays into zero with rate O(n−1/2)𝑂superscript𝑛12O(n^{-1/2})italic_O ( italic_n start_POSTSUPERSCRIPT - 1 / 2 end_POSTSUPERSCRIPT ) under mild conditions, and a two-sample test can be constructed based on this nice statistical behavior.
However, it is costly to comput... | B |
Figure 1: Image reconstruction using β𝛽\betaitalic_β-TCVAE (Figure 1b) and DS-VAE (Figure 1d). DS-VAE is able to take the blurry output of the underlying β𝛽\betaitalic_β-TCVAE model and learn to render a much better approximation to the target (Figure 1a). Figure 1c shows the effect of perturbing Z𝑍Zitalic_Z. DS-VA... | While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i... | Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and corre... | The framework is general and can utilize any DGM. Furthermore, even though it involves two stages, the end result is a single model which does not rely on any auxiliary models, additional hyper-parameters, or hand-crafted loss functions, as opposed to previous works addressing the problem (see Section LABEL:sec:related... |
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise... | C |
Exploration based on previous experiments and graph theory found errors in structural computers with electricity as a medium. The cause of these errors is the basic nature of electric charges: ‘flowing from high potential to low’. In short, the direction of current, which is the flow of electricity, is determined only... |
Exploration based on previous experiments and graph theory found errors in structural computers with electricity as a medium. The cause of these errors is the basic nature of electric charges: ‘flowing from high potential to low’. In short, the direction of current, which is the flow of electricity, is determined only... | The structure-based computer mentioned in this paper are based on Boolean Algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic 1 and 0, and mathematically describes digital electrical si... | Optical logic aggregates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the number shown in Fig. 5 can be applied to the ... | Unlike these biographies, however, light is so structural-dependent that there is geometrical optics, which is a study of the placement of mediums and their trajectory by their shape, which is straight forward by Fermat’s principle of minimum time. Thus, to address errors in electricity, structural computers will be us... | D |
Any permutation polynomial f(x)𝑓𝑥f(x)italic_f ( italic_x ) decomposes the finite field 𝔽qsubscript𝔽𝑞\mathbb{F}_{q}blackboard_F start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT into sets containing mutually exclusive orbits, with the cardinality of each set being equal to the cycle length of the elements in that se... |
Given an n𝑛nitalic_n-dimensional vector space 𝔽nsuperscript𝔽𝑛\mathbb{F}^{n}blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT over finite field 𝔽𝔽\mathbb{F}blackboard_F, maps F:𝔽n→𝔽n:𝐹→superscript𝔽𝑛superscript𝔽𝑛F:\mathbb{F}^{n}\to\mathbb{F}^{n}italic_F : blackboard_F start_POSTSUPERSCRIPT ita... | Univariate polynomials f(x):𝔽→𝔽:𝑓𝑥→𝔽𝔽f(x):\mathbb{F}\to\mathbb{F}italic_f ( italic_x ) : blackboard_F → blackboard_F that induces a bijection over the field 𝔽𝔽\mathbb{F}blackboard_F are called permutation polynomials (in short, PP) and have been studied extensively in the literature. For instance, given a gene... | There has been extensive study about a family of polynomial maps defined through a parameter a∈𝔽𝑎𝔽a\in\mathbb{F}italic_a ∈ blackboard_F over finite fields. Some well-studied families of polynomials include the Dickson polynomials and reverse Dickson polynomials, to name a few. Conditions for such families of maps to... | The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over 𝔽𝔽\mathbb{F}b... | C |
Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expressi... |
In this study we only considered different meta-learners within the MVS framework. Of course, many other algorithms for training classifiers exist. Some of those classifiers may be expected to perform better in terms of classification performance than the classifiers presented here, but not many have the embedded view... | In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of vi... | Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expressi... | Stacked penalized logistic regression (StaPLR) (Van Loon \BOthers., \APACyear2020) is a method specifically developed to tackle the joint classification and view selection problem. Compared with a variant of the lasso for selecting groups of features (the so-called group lasso (M. Yuan \BBA Lin, \APACyear2007)), StaPLR... | A |
In the experiments, if a method is unable to produce a result within four hours, we stop the experiments. The stopped methods and data sets include 1) FastABOD and SOD on datasets Backdoor and Census; 2) ALSO on datasets Backdoor, CalTech16, Census, Secom, MNIST, CalTech28, Fashion and Ads; 3) COMBN on datasets Backdoo... | Figure 7: Comparison of two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, with benchmark methods in terms of ROC AUC. The X axis stands for the ROC AUC of a comparison method, and the Y axis represents the ROC AUC of FBED-CART-PS (circle) or FBED-CART-Sum (plus). A dot (or plus) represents a comparison of FBED-CART... | Figure 6: Results (ROC AUC and AP) of the 125 DepAD algorithms. Each sub-figure uses different colors for variable selection techniques and different shapes for prediction models, as shown at the top of the sub-figure. Results are grouped by the techniques used in the anomaly score generation phase.
|
The comparison results are shown in Figures 7 (ROC AUC) and 8 (AP), where each sub-figure corresponds to the comparison results between the two DepAD algorithms with one benchmark method. In a sub-figure, the more circles or pluses sitting above the diagonal line, the more datasets on which the DepAD algorithm outperf... |
Figure 8: Comparison of two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, with benchmark methods in terms of AP. The X axis stands for the AP of a comparison method, and the Y axis represents the AP of FBED-CART-PS (circle) or FBED-CART-Sum (plus). A dot (or plus) represents a comparison of FBED-CART-PS (or FBED-C... | C |
For an intuitive understanding of the choice model, consider an example of an online furniture retailer that offers N𝑁Nitalic_N distinct products where the ithsuperscript𝑖𝑡ℎi^{th}italic_i start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT product has an attribute vector xisubscript𝑥𝑖x_{i}italic_x start_P... |
Motivated by these issues, we consider the dynamic assortment optimization problem. In every round, the retailer offers a subset (assortment) of products to a consumer and observes the consumer response. Consumers purchase (at most one product from each assortment) products that maximize their utility, and the retaile... | where pessimism is the additive inverse of the optimism (difference between the payoffs under true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that θ∗∈Ct(δ)subscript𝜃subscript𝐶𝑡𝛿\theta_{*}\in C_{t}(\delta)italic_θ start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ∈ italic_C star... |
In this section we compare the empirical performance of our proposed algorithm CB-MNL with the previous state of the art in the MNL contextual bandit literature: UCB-MNL[Oh & Iyengar, 2021] and TS-MNL[Oh & Iyengar, 2019] on artificial data. We focus on performance comparison for varying values of parameter κ𝜅\kappait... |
The rest of this section is organized as follows: We first describe the related literature and qualitative significance of the parameter κ𝜅\kappaitalic_κ. Then, we highlight our contributions and end the section by contrasting them with recent notable research works. | D |
2) We propose a novel temporal action localization framework VSGN, which features two key components: video self-stitching (VSS); cross-scale graph pyramid network (xGPN). For effective feature aggregation, we design a cross-scale graph network for each level in xGPN with a hybrid module of a temporal branch and a gra... | Table 2: Action localization results on validation set of ActivityNet-v1.3, measured by mAPs (%) at different tIoU thresholds and the average mAP. Our VSGN achieves the state-of-the-art average mAP and the highest mAP for short actions. Note that our VSGN, which uses pre-extracted features without further finetuning, s... | 3) VSGN shows obvious improvement on short actions over other concurrent methods, and also achieves new state-of-the-art overall performance. On THUMOS-14, VSGN reaches 52.4% mAP@0.5, compared to previous best score 40.4% under the same features. On ActivityNet-v1.3, VSGN reaches an average mAP of 35.07%, compared to t... |
Besides evaluating all actions in general, we also provide average mAPs of short actions for VSGN as well as other methods that have detection results available. Here, we refer to action instances that are shorter than 30 seconds as short actions. On ActivityNet, there are 54.4% short actions, whereas on THUMOS, there... | We compare the performance of our proposed VSGN to recent representative methods in the literature on the two datasets in Table 1 and Table 2, respectively. On both datasets, VSGN achieves state-of-the-art performance, reaching mAP 52.4% at tIoU 0.5 on THUMOS and average mAP 35.07% on ActivityNet. It significantly outp... | B |
Support for (1) selecting proper validation metrics for balanced and imbalanced data sets and (2) directing the experts’ attention to different classes for the given problem constitute two of the critical open challenges in ML.
For instance, accuracy is preferred to the g-mean metric for a balanced data set [BDA13]. | Another open issue is the avoidance of hyperparameter tuning per se, as noted by E3. The goal of the tool is not to explore or bring insights about the individual sets of hyperparameters of the models or algorithms, but instead we focus on the search for new powerful models and implicitly store their hyperparameters.
T... | However, \raisebox{0.15pt}{\resizebox{!}{0.8ex}{\textbf{\textsf{C3}}}}⃝ achieves better results for the precision metric.
In the grid-based view (d.1), LR, RF, and GradB algorithms appear more powerful than other algorithms that are more diverse due to the good predictions of hard-to-classify instances. | In another example, a medical expert might focus more on eliminating false-negative predictions than false-positives (e.g., a patient being actually ill but predicted as healthy) with a bad impact on the latter. However, this trade-off is necessary when considering a person’s life.
| In the Sankey diagram (see Figure 3(a)), the user tracks the progress of the evolutionary process and is able to limit the number of models that will be generated through crossover and mutation for each algorithm (Step 4 in Figure 1). The default here is defined as user-selected random search value / 2222 for each algo... | C |
Therefore, the total probability of the transient states becomes zero in a finite time.
In [7], it is shown that the condition ρ(M1)<1𝜌subscript𝑀11\rho(M_{1})<1italic_ρ ( italic_M start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) < 1 is satisfied using the properties of M-matrices, which are shown in Theorem 2.5.3 (parts 2.... | In this section, we apply the DSMC algorithm to the probabilistic swarm guidance problem and provide numerical simulations that show the convergence rate of the DSMC algorithm is considerably faster as compared to the previous Markov chain synthesis algorithms in [7] and [14].
|
In this section, we introduce a shortest-path algorithm that is proposed as a modification to the Metropolis-Hastings algorithm in [7, Section V-E] and integrated with the Markov chain synthesis methods described in [14] and [15]. This algorithm can also be integrated with the DSMC algorithm to further increase the co... | Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen... | and a complex communication architecture is not required for the estimation of the distribution.
By presenting numerical evidence within the context of the probabilistic swarm guidance problem, we demonstrate that the convergence rate of the swarm distribution to the desired steady-state distribution is substantially f... | A |
A disadvantage of synchronisation-based multi-shape matching is that it is a two-stage procedure, where pairwise matchings are obtained in the first proceedure, and synchronization is assured in the second. With that, the matching results are often suboptimal – even if one reverts to an alternating procedure using a so... | There are various works that particularly target the matching of multiple shapes. In [30, 32], semidefinite programming relaxations are proposed for the multi-shape matching problem. However, due to the employed lifting strategy, which drastically increases the number of variables, these methods are not scalable to lar... | A disadvantage of synchronisation-based multi-shape matching is that it is a two-stage procedure, where pairwise matchings are obtained in the first proceedure, and synchronization is assured in the second. With that, the matching results are often suboptimal – even if one reverts to an alternating procedure using a so... | A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisati... | Although multi-matchings obtained by synchronisation procedures are cycle-consistent, the matchings are often spatially non-smooth and noisy, as we illustrate in Sec. 5.
From a theoretical point of view, the most appropriate approach for addressing multi-shape matching is based on a unified formulation, where cycle con... | D |
The main goal of our paper is: given a graph G𝐺Gitalic_G, find a (directed) clique path tree of G𝐺Gitalic_G or say that G𝐺Gitalic_G is not a (directed) path graph. To reach our purpose, we follow the same way in [18], by decomposing recursively G𝐺Gitalic_G by clique separators. |
A chordal graph G𝐺Gitalic_G is a directed path graph if and only if G𝐺Gitalic_G is an atom or for a clique separator C𝐶Citalic_C each graph γ∈ΓC𝛾subscriptnormal-Γ𝐶\gamma\in\Gamma_{C}italic_γ ∈ roman_Γ start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT is a directed path graph, 𝑈𝑝𝑝𝑒𝑟C=(u1,u2,…,ur)subscript𝑈𝑝𝑝�... |
A chordal graph G𝐺Gitalic_G is a directed path graph if and only if G𝐺Gitalic_G is an atom or for a clique separator C𝐶Citalic_C each graph γ∈ΓC𝛾subscriptnormal-Γ𝐶\gamma\in\Gamma_{C}italic_γ ∈ roman_Γ start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT is a path graph and the γisubscript𝛾𝑖\gamma_{i}italic_γ start_PO... | A chordal graph G𝐺Gitalic_G is a path graph if and only if G𝐺Gitalic_G is an atom or for a clique separator C𝐶Citalic_C each graph γ∈ΓC𝛾subscriptnormal-Γ𝐶\gamma\in\Gamma_{C}italic_γ ∈ roman_Γ start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT is a path graph and there exists f:ΓC→[s]normal-:𝑓normal-→subscriptnormal-Γ... |
A clique is a clique separator if its removal disconnects the graph in at least two connected components. A graph with no clique separator is called atom. For example, every cycle has no clique separator, and the butterfly/hourglass graph has two cliques and it is an atom. In [18] it is proved that an atom is a path g... | D |
In experiments 1(a) and 1(b), we study how the fraction of pure nodes affects the behaviors of these mixed membership community detection methods under MMSB and DCMM, respectively. We fix (x,ρ)=(0.4,0.1)𝑥𝜌0.40.1(x,\rho)=(0.4,0.1)( italic_x , italic_ρ ) = ( 0.4 , 0.1 ) and let n0subscript𝑛0n_{0}italic_n start_POSTSUB... |
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as ρ𝜌\rhoitalic_ρ increases they all perform poorer. Under the DCMM model, the mixed Humming ... |
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. the proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting. |
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both two methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha... |
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both two approaches perform better than OCCAM and GeoNMF under the MMSB setting.... | C |
See, e.g., Cheng et al. (2017); Cheng and Bartlett (2018); Xu et al. (2018); Durmus et al. (2019) and the references therein for the analysis of the Langevin MCMC algorithm.
Besides, it is shown that (discrete-time) Langevin MCMC can be viewed as (a discretization of) the Wasserstein gradient flow of KL[p(z),p(z|x))... | When ℳℳ\mathcal{M}caligraphic_M is specified by the level set of KL divergence, for any fixed θ𝜃\thetaitalic_θ, using Lagrangian duality, we can transform the inner problem in (3.7) into a KL divergence regularized distributional optimization problem as in (3.1) with g𝑔gitalic_g is replaced by ℓ(⋅;θ)ℓ⋅𝜃\ell(\cdot;\... | In other words, posterior sampling with Langevin MCMC can be posed as a distributional optimization method.
Furthermore, in addition to the KL divergence, F(p)𝐹𝑝F(p)italic_F ( italic_p ) in (3.1) also incorporates other f𝑓fitalic_f-divergences (Csiszár, 1967). | To circumvent such intractability, variational inference turns to minimize the KL divergence between a variational posterior p𝑝pitalic_p and the true posterior p(z|x)𝑝conditional𝑧𝑥p(z{\,|\,}x)italic_p ( italic_z | italic_x ) in
(3.8) (Wainwright and Jordan, 2008; Blei et al., 2017), yielding the following distribu... |
The goal of GAN (Goodfellow et al., 2014) is to learn a generative model p𝑝pitalic_p that is close to a target distribution q𝑞qitalic_q, where p𝑝pitalic_p is defined by transforming a low dimensional noise via a neural network. Since the objective in (3.1) includes f𝑓fitalic_f-divergences as special cases, our dis... | B |
Intrinsic motivation refers to reward functions that allow agents to learn useful behaviors across various tasks. Previous approaches to intrinsic motivation often focus on curiosity [17], imagination [18] or synergy [19], these approaches rely on hand-crafted rewards specific to the environment, or limit to the biman... | We conduct extensive experiments on CityFlow [20] in public datasets Hangzhou (China), Jinan (China), New York (USA), and our derived dataset Shenzhen (China) road networks under various traffic patterns, and empirically demonstrate that our proposed method can achieve state-of-the-art performances over the above scena... | Figure 6: The illustration of the road networks. The first row shows the road networks of Jinan (China), Hangzhou (China) and New York (USA), containing 12, 16 and 48 traffic signals respectively, and the second row shows the road network of Shenzhen containing 33 traffic signals.
|
Real. The traffic flows of Hangzhou (China), Jinan (China) and New York (USA) are from the public datasets444https://traffic-signal-control.github.io/, which are processed from multiple sources. The traffic flow of Shenzhen (China) is made by ourselves generated based on the traffic trajectories collected from 80 red-... |
The evaluation scenarios come from four real road network maps of different scales, including Hangzhou (China), Jinan (China), New York (USA) and Shenzhen (China), illustrated in Fig. 6. The road networks and data of Hangzhou, Jinan and New York are from the public datasets222https://traffic-signal-control.github.io/.... | A |
Jrank-r(𝐱k)†𝐟(𝐱k)subscript𝐽rank-rsuperscriptsubscript𝐱𝑘†𝐟subscript𝐱𝑘J_{\mbox{\scriptsize rank-$r$}}(\mathbf{x}_{k})^{\dagger}\,\mathbf{f}(\mathbf{%
x}_{k})italic_J start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT † end_POSTS... | dimension where Jrank-r(𝐱k)†subscript𝐽rank-rsuperscriptsubscript𝐱𝑘†J_{\mbox{\scriptsize rank-$r$}}(\mathbf{x}_{k})^{\dagger}italic_J start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT † end_POSTSUPERSCRIPT is the Moore-Penrose invers... | Jrank-r(𝐱k)†𝐟(𝐱k)subscript𝐽rank-rsuperscriptsubscript𝐱𝑘†𝐟subscript𝐱𝑘J_{\mbox{\scriptsize rank-$r$}}(\mathbf{x}_{k})^{\dagger}\,\mathbf{f}(\mathbf{%
x}_{k})italic_J start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT † end_POSTS... | Computing the iteration shift Jrank-r(𝐱k)†𝐟(𝐱k)subscript𝐽rank-rsuperscriptsubscript𝐱𝑘†𝐟subscript𝐱𝑘J_{\mbox{\scriptsize rank-$r$}}(\mathbf{x}_{k})^{\dagger}\,\mathbf{f}(\mathbf{%
x}_{k})italic_J start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) sta... | to ℝnsuperscriptℝ𝑛\mathbbm{R}^{n}blackboard_R start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT or ℂnsuperscriptℂ𝑛\mathbbm{C}^{n}blackboard_C start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT where Jrank-r(𝐱k)†subscript𝐽rank-rsuperscriptsubscript𝐱𝑘†J_{\mbox{\scriptsize rank-$r$}}(\mathbf{x}_{k})^{\dagger}itali... | C |
We set the bin capacity to k=100𝑘100k=100italic_k = 100, and we also scale down each item to the closest integer in [1,k]1𝑘[1,k][ 1 , italic_k ].
This choice is relevant for applications such as Virtual Machine placement (Section 5.1), as explained in Section 5.1. We generate two classes of input sequences. | The Weibull distribution is specified by two parameters: the shape parameter sh𝑠ℎshitalic_s italic_h and the scale parameter sc𝑠𝑐scitalic_s italic_c (with sh,sc>0𝑠ℎ𝑠𝑐0sh,sc>0italic_s italic_h , italic_s italic_c > 0). The shape parameter defines the spread of item sizes: lower values indicate greater skew tow... | The distribution of the input sequence changes every 50000 items. Namely, the input sequence is the concatenation of n/50000𝑛50000n/50000italic_n / 50000 subsequences. For Weibull benchmarks, each subsequence is a Weibull distribution, whose shape parameter is chosen uniformly at random from [1.0,4.0]1.04.0[1.0,4.0][ ... | For Weibull benchmarks, the input sequence consists of items generated independently and uniformly at random, and the shape parameter is set to sh=3.0𝑠ℎ3.0sh=3.0italic_s italic_h = 3.0. For BPPLIB benchmarks, we first select a file of
the benchmark uniformly at random, then generate input items from the chosen file, ... | sh=3𝑠ℎ3sh=3italic_s italic_h = 3, or a file from the GI Benchmark), we generate 20 random sequences of length 106superscript10610^{6}10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT.
For each sequence, we compute FirstFit, BestFit, and the L2𝐿2L2italic_L 2 lower bound. The average costs of these algorithms, over the ... | C |
where Wϕsubscript𝑊italic-ϕW_{\phi}italic_W start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT are weights of ϕitalic-ϕ\phiitalic_ϕ produced by the hypernetwork directly from the point cloud embedding and [⋅,⋅]⋅⋅[\cdot,\cdot][ ⋅ , ⋅ ] is a concatenation operator. | Table 2: Shape auto-encoding on the ShapeNet dataset. The best results are highlighted in bold. CD is multiplied by 104superscript10410^{4}10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT, and EMD is multiplied by 102superscript10210^{2}10 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. (HC) denotes the HyperCloud autoencod... | Table 1: Generation results. MMD-CD scores are multiplied by
103superscript10310^{3}10 start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT; MMD-EMD and JSD scores are multiplied by 102superscript10210^{2}10 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. (HC) denotes the HyperCloud autoencoder in LoCondA, and (HF) - the HyperFlow... |
In this experiment, we set N=105𝑁superscript105N=10^{5}italic_N = 10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT. Using more rays had a negligible effect on the output value of WT𝑊𝑇WTitalic_W italic_T but significantly slowed the computation. We compared AtlasNet with LoCondA applied to HyperCloud (HC) and HyperFl... |
The results are presented in Table 1. LoCondA-HF obtains comparable results to the reference methods dedicated for the point cloud generation. It can be observed that values of evaluated measures for HyperFlow(P) and LoCondA-HF (uses HyperFlow(P) as a base model in the first part of the training) are on the same level... | B |
By using the standard restarts or regularization arguments, all the results of this paper have convex-concave or strongly convex-concave analogues. Unfortunately, optimalilty w.r.t. ε𝜀\varepsilonitalic_ε take places only for the convex-concave case not for the strongly convex-concave one.222The analysis developed in ... |
By using the standard restarts or regularization arguments, all the results of this paper have convex-concave or strongly convex-concave analogues. Unfortunately, optimalilty w.r.t. ε𝜀\varepsilonitalic_ε take places only for the convex-concave case not for the strongly convex-concave one.222The analysis developed in ... | Paper organization. This paper is organized as follows. Section 2 presents a saddle point problem of interest along with its decentralized reformulation. In Section 3, we provide the main algorithm of the paper to solve such kind of problems. In Section 4, we present the lower complexity bounds for saddle point problem... | Our paper technique can be generalized to non-smooth problems by using another variant of sliding procedure [34, 15, 23]. By using batching technique, the results can be generalized to stochastic saddle-point problems [15, 23]. Instead of the smooth convex-concave saddle-point problem we can consider general sum-type s... |
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ... | C |
The set of cycles of a graph has a vector space structure over ℤ2subscriptℤ2\mathbb{Z}_{2}blackboard_Z start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, in the case of undirected graphs, and over ℚℚ\mathbb{Q}blackboard_Q, in the case of directed graphs [5]. A basis of such a vector space is denoted cycle basis and its dimensio... |
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class. |
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6]. |
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric... | In the case that we can find some non-star spanning tree T𝑇Titalic_T of
G𝐺Gitalic_G such that ∩(T)<∩(Ts)𝑇subscript𝑇𝑠\cap(T)<\cap(T_{s})∩ ( italic_T ) < ∩ ( italic_T start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ) then, we can “simplify” the instance by removing the interbranch cycle-edges with respect to T𝑇Tital... | A |
Fix a simplicial complex K𝐾Kitalic_K, a value δ∈(0,1]𝛿01\delta\in(0,1]italic_δ ∈ ( 0 , 1 ], and integers b≥1𝑏1b\geq 1italic_b ≥ 1 and m>μ(K)𝑚𝜇𝐾m>\mu(K)italic_m > italic_μ ( italic_K ). If ℱℱ\mathcal{F}caligraphic_F is a sufficiently large (K,b)𝐾𝑏(K,b)( italic_K , italic_b )-free cover such that πm(ℱ)≥δ(|ℱ|m)... |
Note that the constant number of points given by the (p,q)𝑝𝑞(p,q)( italic_p , italic_q )-theorem in this case depends not only on p𝑝pitalic_p, q𝑞qitalic_q, and d𝑑ditalic_d, but also on b𝑏bitalic_b. For the setting of (1,b)1𝑏(1,b)( 1 , italic_b )-covers in surfaces555By a surface we mean a compact 2-dimensional ... | One immediate application of Theorem 1.2 is the reduction of fractional Helly numbers. For instance, it easily improves a theorem444[35, Theorem 2.3] was not phrased in terms of (K,b)𝐾𝑏(K,b)( italic_K , italic_b )-free covers but readily generalizes to that setting, see Section 1.4.1. of Patáková [35, Theorem 2.3] in... |
It is known that the Helly number of a (K,b)𝐾𝑏(K,b)( italic_K , italic_b )-free cover is bounded from above in terms of K𝐾Kitalic_K and b𝑏bitalic_b [18] 222The bound on Helly number of (K,b)-free cover directly follows from a combination of Proposition 30 and Lemma 26 in [18]., as is the Radon number [35, Proposit... |
Through a series of papers [18, 35, 22], the Helly numbers, Radon numbers, and fractional Helly numbers for (⌈d/2⌉,b)𝑑2𝑏(\lceil d/2\rceil,b)( ⌈ italic_d / 2 ⌉ , italic_b )-covers in ℝdsuperscriptℝ𝑑\mathbb{R}^{d}blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT were bounded in terms of d𝑑ditalic_d and... | B |
Feature transformation usually denotes less sophisticated modifications over the features [14]. Some of the standard transformations also supported by our approach are: (1) rounding, (2) binning, (3) scaling, (4) logarithmic transformations, (5) exponential transformations, and (6) power functions. In this scenario, ML... |
Various visualization techniques have been proposed for the task of feature selection, including correlation matrices [42, 43], radial visualizations [44, 45, 46], scatterplots [47], scatterplot matrices [48], feature ranking [49, 50, 51, 52, 53, 54, 55, 56], feature clustering [57], and dimensionality reduction (DR) ... | Next, as XGBoost [29] is a nonlinear ML algorithm, we also train a linear classifier (a logistic regression [83] model with the default Scikit-learn’s hyperparameters [84]) to compute the coefficients matrix and then use Recursive Feature Elimination (RFE) [40] to rank the features from the best to the worst in terms o... | There is a rather large body of existing work on automatic feature selection techniques [16, 19, 17]. However, one limitation is that features can be redundant if there is a strong correlation among them, and the correlation coefficient is unable to characterize nonlinear relationships. Thus, this is a problem where th... |
Feature selection is about choosing a subset of features from the pool of features available by that time. Feature selection methods can be generally divided into four high-level categories: (1) filter methods, (2) wrapper methods, (3) embedded methods, and (4) hybrid methods [16, 17, 18]. Our feature selection strate... | D |
We set the mean functions as μ(j)=0superscript𝜇(j)0\mu^{{\scalebox{0.65}{(j)}}}=0italic_μ start_POSTSUPERSCRIPT (j) end_POSTSUPERSCRIPT = 0, j=0,1,2𝑗012j=0,1,2italic_j = 0 , 1 , 2 [21]. However, if we are given some prior information on the shape and structure of gjsubscript𝑔𝑗g_{j}italic_g start_POSTSUBSCRIPT itali... | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi... |
We use two geometries to evaluate the performance of the proposed approach, an octagon geometry with edges in multiple orientations with respect to the two axes, and a curved geometry (infinity shape) with different curvatures, shown in Figure 4. We have implemented the simulations in Matlab, using Yalmip/Gurobi to so... | For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using Latin hypercube design of experiments. The BO progress is shown in Figure 5, right pannel, for the optimization with constraints on the jerk and on the tracking error. Af... | This paper demonstrated a hierarchical contour control implementation for the increase of productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe... | B |
Explicit bias mitigation techniques directly access the bias variables: bexpl.subscript𝑏𝑒𝑥𝑝𝑙b_{expl.}italic_b start_POSTSUBSCRIPT italic_e italic_x italic_p italic_l . end_POSTSUBSCRIPT during training to develop invariance to them. Based on the way these variables are utilized during training, we choose five d... | Deep learning systems are trained to minimize their loss on a training dataset. However, datasets often contain spurious correlations and hidden biases which result in systems that have low loss on the training data distribution, but then fail to work appropriately on minority groups because they exploit and even ampli... |
Re-sampling/Re-weighting: These approaches balance out the spurious correlations. The classical approach is to re-balance the class distribution by adjusting the sampling probability/ loss weight for majority/minority samples [14, 26, 41, 72, 20]. This includes synthesizing minority instances too [14, 26]. Moving beyo... | Results.
For CelebA, methods generally show large variance on the minority patterns (blond haired male celebrities), and lower variance on the majority patterns (mean over rest of the groups), whereas for Biased MNISTv1, we find that methods only work for certain set of hyperparameters and show degraded results on both... | In this set of experiments, we compare the resistance to explicit and implicit biases. We primarily focus on the Biased MNISTv1 dataset, reserving each individual variable as the explicit bias in separate runs of the explicit methods, while treating the remaining variables as implicit biases. To ease analysis, we compu... | B |
The first two types of methods estimate gaze based on geometric features such as contours, reflection and eye corners. The geometric features can be accurately extracted with the assistance of dedicated devices, e.g., infrared cameras.
More concretely, the 2D eye feature regression method learns a mapping function from... | The 3D eye model recovery-based methods usually require personal calibration to recover person-specific parameters such as iris radius and kappa angle.
While these methods often achieve high accuracy, they require dedicated devices such as infrared cameras. | The eye model is fitted with geometric features, such as the infrared corneal reflections [28, 29], pupil center [30] and iris contours [31]. However, they usually require a personal calibration process for each subject, since the eye model contains subject-specific parameters such as cornea radius, kappa angles.
| The first two types of methods estimate gaze based on geometric features such as contours, reflection and eye corners. The geometric features can be accurately extracted with the assistance of dedicated devices, e.g., infrared cameras.
More concretely, the 2D eye feature regression method learns a mapping function from... |
It is non-trivial to learn an accurate and universal gaze estimation model. Conventional 3D eye model recovery methods usually build a unified gaze model including subject-specific parameters such as eyeball radius [28]. They perform a personal calibration to estimate these subject-specific parameters. In the field of... | B |
Experimental results are carried out on Real-world Masked Face Recognition Dataset (RMFRD) and Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (... |
Despite the recent breakthroughs of deep learning architectures in pattern recognition tasks, they need to estimate millions of parameters in the fully connected layers that require powerful hardware with high processing capacity and memory. To address this problem, we present in this paper an efficient quantization b... | simonyan2014very is trained on the ImageNet dataset which has over 14 million images and 1000 classes. Its name VGG-16 comes from the fact that it has 16 layers. It contains different layers including convolutional layers, Max Pooling layers, Activation layers, and Fully Connected (fc) layers. There are 13 convolution... |
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieved a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner. It consists of applying a TL technique to fine-tune the ... | has been successfully employed for image classification tasks krizhevsky2017imagenet . This deep model is pre-trained on a few millions of images from the ImageNet database through eight learned layers, five convolutional layers and three fully-connected layers. The last fully-connected layer allows to classify one tho... | A |
Certain type systems for π𝜋\piitalic_π-calculi [Kob06, Pad14, GKL14] guarantee the eventual success of communication only if or regardless of whether processes diverge [DP22]. Considering a configuration C𝐶Citalic_C such that Γ⊢C::(Γ,a:X[n])\Gamma\vdash C::(\Gamma,a:X[n])roman_Γ ⊢ italic_C : : ( roman_Γ , italic_a :... | On the other hand, there are type systems that themselves guarantee termination—some assign numeric levels to each channel name and restrict communication such that a measure induced by said levels decreases consistently [DS06, DHS10, CH16]. While message passing is a different setting than ours, we are interested in t... |
One solution that avoids syntactic checks is to track the flow of (co)data size at the type level with sized types, as pioneered by Hughes et al. [HPS96] and further developed by others [BFG+04, Bla04, Abe08, AP16]. Inductive and coinductive types are indexed by the height and observable depth of their data and codata... | Sized types are a type-oriented formulation of size-change termination [LJBA01] for rewrite systems [TG03, BR09]. Sized (co)inductive types [BFG+04, Bla04, Abe08, AP16] gave way to sized mixed inductive-coinductive types [Abe12, AP16]. In parallel, linear size arithmetic for sized inductive types [CK01, Xi01, BR06] was... | Sized types are compositional: since termination checking is reduced to an instance of typechecking, we avoid the brittleness of syntactic termination checking. However, we find that ad hoc features for implementing size arithmetic in the prior work can be subsumed by more general arithmetic refinements [DP20b, XP99], ... | A |
where r1subscript𝑟1r_{1}italic_r start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and r2subscript𝑟2r_{2}italic_r start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT are random numbers in Zqsubscript𝑍𝑞Z_{q}italic_Z start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT, and i∈{1,2}𝑖12i\ {\in}\ \{1,2\}italic_i ∈ { 1 , 2 }. In addition, we emph... |
In this section, we bring forward two cloud media sharing schemes, namely FairCMS-I and FairCMS-II. FairCMS-I essentially delegates the re-encryption management of LUTs to the cloud, thus significantly reducing the overhead of the owner side. Nevertheless, FairCMS-I cannot achieve IND-CPA security for the media conten... | However, the IND-CPA security of the encrypted media content collection {𝐜}𝐜\{\mathbf{c}\}{ bold_c } stored in the cloud cannot be achieved [30]. In applications with high media privacy requirements, a more secure cloud media sharing scheme is desired. In this concern, we propose the second cloud media sharing scheme... |
This paper solves the three problems faced by cloud media sharing and proposes two schemes FairCMS-I and FairCMS-II. FairCMS-I gives a method to transfer the management of LUTs to the cloud, enabling the calculation of each user’s D-LUT in the ciphertext domain and its subsequent distribution. However, utilizing the s... | The owner-side efficiency and scalability performance of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Comparing to FairCMS-I, it is easy to see that in FairCMS-II the cloud’s overhead is increased considerably due to the ado... | A |
On Avazu dataset, the model performance peaks with m1=23,m2=10,m3=2formulae-sequencesubscript𝑚123formulae-sequencesubscript𝑚210subscript𝑚32m_{1}=23,m_{2}=10,m_{3}=2italic_m start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 23 , italic_m start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 10 , italic_m start_POSTSUBSCRIPT 3 end_POSTSU... |
Figure 4: Heat maps of estimated edge weights of correctly predicted instance (a) and wrongly predicted instance (b) on MovieLens-1M dataset, where positive edge weights indicate beneficial feature interactions. The axises represent feature fields (Gender, Age, Occupation, Zipcode, ReleaseTime, WatchTime, Genre). | Since the features along with selected beneficial feature interactions are treated as a graph, it can provide human readable interpretations on the prediction. Here we visualize heat maps of estimated edge weights of two cherry-pick instances on MovieLens-1M dataset in Fig. 4. We show the measured edge weights of each ... | The selected feature interactions of order-3 and order-4 are mostly not overlapped in the correctly predicted instance (a). In instance (a), our model selects relevant feature fields (Gender, Age, ReleaseTime, WatchTime) for Genre in order-3, while selects the other two feature fields (Occupation, Gender) in order-4.
H... | This proves that our model can indeed select meaningful feature combination and model feature interactions of increasing orders with multiple layers in most cases, rather than select the redundant feature combinations of same feature fields.
We can also find some meaningful feature combinations in common cases. For exa... | A |
𝐲+γ(𝐱−𝐲)+γ(1−γ)⋅κ‖𝐱−𝐲‖q𝐳∈𝒳.𝐲𝛾𝐱𝐲⋅𝛾1𝛾𝜅superscriptnorm𝐱𝐲𝑞𝐳𝒳\mathbf{y}+\gamma(\mathbf{x}-\mathbf{y})+\gamma(1-\gamma)\cdot\kappa\left\|%
\mathbf{x}-\mathbf{y}\right\|^{q}\mathbf{z}\in\mathcal{X}.bold_y + italic_γ ( bold_x - bold_y ) + italic_γ ( 1 - italic_γ ) ⋅ italic_κ ∥ bold_x - bold_y ∥ start_POS... |
In order to prove convergence rate results for the case where the feasible region is (κ,p)𝜅𝑝(\kappa,p)( italic_κ , italic_p )-uniformly convex, we first review the definition of the (κ,p)𝜅𝑝(\kappa,p)( italic_κ , italic_p )-uniform convexity of a set (see Definition 2.12), as well as a useful lemma that allows us t... | The previous definition allows us to obtain a scaling inequality very similar to the one shown in Proposition 2.10, which is key to proving the following convergence rates, and can be implicitly found in Kerdreux et al. [2021] and Garber & Hazan [2016].
| We can make use of the proof of convergence in primal gap to prove linear convergence in Frank-Wolfe gap. In order to do so, we recall a quantity formally defined in Kerdreux et al. [2019] but already implicitly used earlier in Lacoste-Julien & Jaggi [2015] as:
|
The FOO and LMO oracles are standard in the FW literature. The ZOO oracle is often implicitly assumed to be included with the FOO oracle; we make this explicit here for clarity. Finally, the DO oracle is motivated by the properties of generalized self-concordant functions. It is reasonable to assume the availability o... | B |
Table 1: A summary of the running times in several different models, compared to the previous state-of-the-art, for computing a (1+ε)1𝜀(1+\varepsilon)( 1 + italic_ε )-approximate maximum matching. In the distributed setting, “running time” refers to the round complexity, while in the streaming setting it refers to th... |
Given a graph on n𝑛nitalic_n vertices, there is a deterministic (1+ε)1𝜀(1+\varepsilon)( 1 + italic_ε )-approximation algorithm for maximum matching that runs in poly(1/ε)poly1𝜀\operatorname{poly}(1/\varepsilon)roman_poly ( 1 / italic_ε ) passes in the semi-streaming model. | Our DFS search approach guarantees that we find a polyεpoly𝜀\operatorname{poly}\varepsilonroman_poly italic_ε fraction of all possible augmentations, giving rise to an algorithm that in poly1/εpoly1𝜀\operatorname{poly}1/\varepsilonroman_poly 1 / italic_ε passes finds a (1+ε)1𝜀(1+\varepsilon)( 1 + italic_ε )-approx... | Let G𝐺Gitalic_G be a graph on n𝑛nitalic_n vertices and let ε∈(0,1/2)𝜀012\varepsilon\in(0,1/2)italic_ε ∈ ( 0 , 1 / 2 ) be a parameter. Let Amatchingsubscript𝐴matchingA_{\rm{matching}}italic_A start_POSTSUBSCRIPT roman_matching end_POSTSUBSCRIPT be an algorithm that finds an O(1)𝑂1O(1)italic_O ( 1 )-approximate max... | In the special case of bipartite graphs, the deterministic algorithms by Ahn and Guha [AG11], Eggert et al. [EKMS12], as well as Assadi et al. [AJJ+22] obtain a runtime of poly(1/ε)poly1𝜀\operatorname{poly}(1/\varepsilon)roman_poly ( 1 / italic_ε ) passes.
The first algorithm can also be adapted to the case of genera... | A |
We propose CPP – a novel decentralized optimization method with communication compression. The method works under a general class of compression operators and is shown to achieve linear convergence for strongly convex and smooth objective functions over general directed graphs. To the best of our knowledge, CPP is the... | In the second part of this paper, we propose a broadcast-like CPP algorithm (B-CPP) that allows for asynchronous updates of the agents: at every iteration of the algorithm, only a subset of the agents wake up to perform prescribed updates. Thus, B-CPP is more flexible, and due to its broadcast nature, it can further sa... | In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP... | In this section, we compare the numerical performance of CPP and B-CPP with the Push-Pull/𝒜ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B method [24, 25].
In the experiments, we equip CPP and B-CPP with different compression operators and consider different graph topologies. |
We consider an asynchronous broadcast version of CPP (B-CPP). B-CPP further reduces the communicated data per iteration and is also provably linearly convergent over directed graphs for minimizing strongly convex and smooth objective functions. Numerical experiments demonstrate the advantages of B-CPP in saving commun... | D |
We develop multiple novel algorithms to solve decentralized personalized federated saddle-point problems. These methods (Algorithm 1 and Algorithm 2) are based on recent sliding technique [27, 28, 29] adapted to SPPs in a decentralized PFL. In addition, we present Algorithm 3 which used the randomized local method fro... | We divided our experiments into two parts: 1) toy experiments on strongly convex – strongly concave bilinear saddle point problems to verify the theoretical results and 2) adversarial training of neural networks to compare deterministic (Algorithm 1) and stochastic (Algorithm 3) approaches.
|
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low... |
We adapt the proposed algorithm for training neural networks. We compare our algorithms: type of sliding (Algorithm 1) and type of local method (Algorithm 3). To the best of our knowledge, this is the first work that compares these approaches in the scope of neural networks, as previous studies were limited to simpler... | To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms and derives the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile... | C |
In Section 2 we provide background on a) correlated equilibrium (CE), an important generalization of NE, b) coarse correlated equilibrium (CCE) (Moulin & Vial, 1978), a similar solution concept, and c) PSRO, a powerful multi-agent training algorithm. In Section 3 we propose novel solution concepts called Maximum Gini ... |
This highlights the main drawback of MW(C)CE which does not select for unique solutions (for example, in constant-sum games all solutions have maximum welfare). One selection criterion for NEs is maximum entropy Nash equilibrium (MENE) (Balduzzi et al., 2018), however outside of the two-player constant-sum setting, th... | The set of (C)CEs forms a convex polytope, and therefore any strictly convex function could uniquely select amongst this set. The literature only provides one such example: MECE (Ortiz et al., 2007) which has a number of appealing properties, but was found to be slow to solve large games. There is a gap in the literatu... | An important area of related work is α𝛼\alphaitalic_α-Rank (Omidshafiei et al., 2019) which also aims to provide a tractable alternative solution in normal form games. It gives similar solutions to NE in the two-player, constant-sum setting, however it is not directly related to NE or (C)CE. α𝛼\alphaitalic_α-Rank has... | There are two important solution concepts in the space of CEs. The first is Maximum Welfare Correlated Equilibrium (MWCE) which is defined as the CE that maximises the sum of all player’s payoffs. An MWCE can be obtained by solving a linear program, however the MWCE may not be unique and therefore does not fully solve ... | C |
\right]}}=\underset{X\sim D}{\text{Cov}}\left({q}\left(X\right),{K}\left(X,v%
\right)\right).italic_q ( italic_D start_POSTSUPERSCRIPT italic_v end_POSTSUPERSCRIPT ) - italic_q ( italic_D ) = start_UNDERACCENT italic_X ∼ italic_D end_UNDERACCENT start_ARG blackboard_E end_ARG [ italic_K ( italic_X , italic_v ) italic_q... |
We note that the first part of this definition can be viewed as a refined version of zCDP (Definition B.18), where the bound on the Rényi divergence (Definition B.5) is a function of the sample sets and the query. As for the second part, since the bound depends on the queries, which themselves are random variables, it... | Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific q𝑞qitalic_q and K(⋅,v)𝐾⋅𝑣{K}\left(\cdot,v\right)italic_K ( ⋅ , italic_v ) as discussed in Section 6. The second part of this Lemma implies that bounding the appropriate divergence is necessary and sufficient... | \frac{{D}\left(v\,|\,x\right)}{{D}\left(v\right)}italic_K ( italic_x , italic_v ) ≔ divide start_ARG italic_D ( italic_x | italic_v ) end_ARG start_ARG italic_D ( italic_x ) end_ARG = divide start_ARG italic_D ( italic_v | italic_x ) end_ARG start_ARG italic_D ( italic_v ) end_ARG
is the Bayes factor of x𝑥xitalic_x gi... |
The second part is a direct result of the known variational representation of total variation distance and χ2superscript𝜒2\chi^{2}italic_χ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT divergence, which are both f𝑓fitalic_f-divergences (see Equations 7.88 and 7.91 in Polyanskiy and Wu (2022) for more details). | D |
All z𝑧zitalic_z-antlers (C^,F^)normal-^𝐶normal-^𝐹(\hat{C},\hat{F})( over^ start_ARG italic_C end_ARG , over^ start_ARG italic_F end_ARG ) that are z𝑧zitalic_z-properly colored by χ𝜒\chiitalic_χ prior to executing the algorithm are also z𝑧zitalic_z-properly colored by χ𝜒\chiitalic_χ after termination of the algor... |
We show first that any z𝑧zitalic_z-properly colored antler prior to executing the algorithm remains z𝑧zitalic_z-properly colored after termination. Afterwards we argue that in Item 5, the pair (χV−1(𝖢˙),χV−1(𝖥˙))subscriptsuperscript𝜒1𝑉˙𝖢subscriptsuperscript𝜒1𝑉˙𝖥(\chi^{-1}_{V}(\mathsf{\dot{C}}),\chi^{-1}_{V... |
To show the algorithm preserves properness of the coloring, we show that every individual recoloring preserves properness, that is, if an arbitrary z𝑧zitalic_z-antler is z𝑧zitalic_z-properly colored prior to the recoloring, it is also z𝑧zitalic_z-properly colored after the recoloring. | We now show that a z𝑧zitalic_z-antler can be obtained from a suitable coloring χ𝜒\chiitalic_χ of the graph. The algorithm we give updates the coloring χ𝜒\chiitalic_χ and recolors any vertex or edge that is not part of a z𝑧zitalic_z-properly colored antler to color 𝖱˙˙𝖱\mathsf{\dot{R}}over˙ start_ARG sansserif_R e... | All z𝑧zitalic_z-antlers (C^,F^)normal-^𝐶normal-^𝐹(\hat{C},\hat{F})( over^ start_ARG italic_C end_ARG , over^ start_ARG italic_F end_ARG ) that are z𝑧zitalic_z-properly colored by χ𝜒\chiitalic_χ prior to executing the algorithm are also z𝑧zitalic_z-properly colored by χ𝜒\chiitalic_χ after termination of the algor... | B |
TABLE I: The issues to be solved in image composition task and the corresponding deep learning methods to solve these issue. Note that some methods only focus on one issue while some methods attempt to solve multiple issues simultaneously. “Boundary” means refining the boundary between foreground and background. “Appea... |
The appearance inconsistency is including but not limited to: 1) unnatural boundary between foreground and background; 2) incompatible illumination statistics between foreground and background; 3) missing or implausible shadow and reflection of foreground; 4) resolution, sharpness, and noise discrepancy between foregr... |
The geometric inconsistency is including but not limited to: 1) the foreground object is too large or too small; 2) the foreground object does not have reasonable supporting force (e.g., hanging in the air); 3) unreasonable occlusion; 4) inconsistent perspectives between foreground and background. In summary, the loca... |
During image composition, the foreground is usually extracted using image segmentation [108] or matting [180] methods. However, the segmentation or matting results may be noisy and the foregrounds are not precisely delineated. When the foreground with jagged boundaries is pasted on the background, there will be abrupt... | To solve this issue, image blending [172, 198] aims to address the unnatural boundary between foreground and background, so that the foreground could be seamlessly blended with the background. For the second issue, since the foreground and background may be captured in different conditions (e.g., weather, season, time ... | A |
CityNet’s comprehensive and correlated data make it a valuable resource for machine learning tasks in urban computing. These tasks include spatio-temporal predictions and its multi-task variant, spatio-temporal transfer learning, and reinforcement learning. In this paper, we present extensive benchmarking results for t... |
Data-driven analytical techniques have become increasingly prevalent in both the research community and industry for addressing various tasks in urban computing [1]. In recent years, several machine learning techniques, including deep learning [2, 3], transfer learning [4, 5], and reinforcement learning [6, 7], have b... | To the best of our knowledge, CityNet is the first multi-modal urban dataset that aggregates and aligns sub-datasets from various tasks and cities. Using CityNet, we have provided a wide range of benchmarking results to inspire further research in areas such as spatio-temporal predictions, transfer learning, reinforcem... | In the present study, we have introduced CityNet, a multi-modal dataset specifically designed for urban computing in smart cities, which incorporates spatio-temporally aligned urban data from multiple cities and diverse tasks. To the best of our knowledge, CityNet is the first dataset of its kind, which provides a comp... | CityNet’s comprehensive and correlated data make it a valuable resource for machine learning tasks in urban computing. These tasks include spatio-temporal predictions and its multi-task variant, spatio-temporal transfer learning, and reinforcement learning. In this paper, we present extensive benchmarking results for t... | B |
We provide an in-depth experimental comparison of the four main classes of methods based on their performance across a wide range of data sets. We interpret the observed differences and discuss practical difficulties such as hyperparameter tuning and model selection based on prior knowledge. |
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat... |
The general structure of the paper is as follows. In Section 2 some general aspects of the estimation of prediction intervals for regression are discussed. Subsequently, in Section 3, the different classes of methods are reviewed. The setup of an experimental assessment for a selection of methods is presented in Secti... | In Fig. 1, both the coverage degree, average width and R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient are shown. For each model, the data sets are sorted according to increasing R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient (averaged over th... |
In the preceding four sections, we introduced different classes of interval estimators, each having its own characteristics. In this section, we summarize the main properties for clarity and convenience. We identify four properties that are important for practical purposes. The first one is the main notion of this pap... | B |
While each time step corresponds to a single token in REMI, each time step would correspond to a super token that assembles four tokens in total in CP. Without such a token grouping, the sequence length (in terms of the number of time steps) of REMI is longer than that of CP (in this example, 16 versus 4). Please note ... | Figure 1: An example of a piece of score encoded using the proposed simplified version of the (a) REMI and (b) CP representations, using seven types of tokens, Bar, Sub-bar, Pitch, Velocity, Duration, Tempo and Pad (not shown here), for piano-only MIDI performance.
The text inside parentheses indicates the value each t... |
For MIDI scores, our final token vocabulary for REMI contains 16 unique Sub-bar tokens, 86 Pitch tokens, 64 Duration tokens, one Bar token, one Pad token and one Mask token, in total 169 tokens. For CP, we do not use a Pad token but represent a zero-padded super token by Bar(Pad), Sub-bar(Pad), Pitch(Pad) and Duration... | Fig. 1(a) shows that, except for Bar, the other tokens in a REMI sequence always occur consecutively in groups, in the order of Sub-bar, Pitch, Duration. We can further differentiate Bar(new) and Bar(cont), representing respectively the beginning of a new bar and a continuation of the current bar and always have one of... | The REMI representation \parencitehuang2020pop for MIDI performances uses Bar and Sub-bar tokens to represent the advancement in time. The former marks the beginning of a new bar, while the latter points to a discrete position within a bar. Specifically, as we divide a bar into 16 equidistant sample points, the Sub-bar... | D |
Otherwise, F𝐹Fitalic_F has a leaf v∈A𝑣𝐴v\in Aitalic_v ∈ italic_A with a neighbor u∈B𝑢𝐵u\in Bitalic_u ∈ italic_B. We can assign c(v)=a2𝑐𝑣subscript𝑎2c(v)=a_{2}italic_c ( italic_v ) = italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, c(u)=b2𝑐𝑢subscript𝑏2c(u)=b_{2}italic_c ( italic_u ) = italic_b start_POSTSU... | To obtain the total running time we first note that each of the initial steps – obtaining (R,B,Y)𝑅𝐵𝑌(R,B,Y)( italic_R , italic_B , italic_Y ) from Corollary 2.11 (e.g. using Algorithm 1), contraction of F𝐹Fitalic_F into F′superscript𝐹normal-′F^{\prime}italic_F start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT, and findi... |
Now, observe that if the block to the left is also of type A, then a respective block from Z(S)𝑍𝑆Z(S)italic_Z ( italic_S ) is (0,1,0)010(0,1,0)( 0 , 1 , 0 ) – and when we add the backward carry (0,0,1)001(0,0,1)( 0 , 0 , 1 ) to it, we obtain the forward carry to the rightmost block. And regardless of the value of t... | The linear running time follows directly from the fact that we compute c𝑐citalic_c only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen... | Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As it was stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the... | C |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.