| context (string, 250–4.37k chars) | A (string, 250–8.2k chars) | B (string, 250–4.23k chars) | C (string, 250–4.99k chars) | D (string, 250–3.54k chars) | label (string, 4 classes) |
|---|---|---|---|---|---|
to the weight such that a Gauss–Legendre integration for moments $x^{D+m-1}$ is engaged and the wiggly remainder of $R_n^m$ ... | $R_n^m(x)=\sum_{s=0}^{(n-m)/2}(-1)^s\binom{\frac{n-m}{2}}{s}\binom{\frac{D}{2}+n-s-1}{\frac{n-m}{2}}x^{n-2s}$ ... | that adds the results of $1+(n-m)/2$ Gaussian integrations for moments $x^{D-1+n-2s}$. The disadvantage | Gaussian integration rules for integrals $\int_{0}^{1}x^{D-1}R_n^m(x)f(x)\,dx$ ... | to the weight such that a Gauss–Legendre integration for moments $x^{D+m-1}$ is engaged and the wiggly remainder of $R_n^m$ ... | B |
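The binomial-sum formula for $R_n^m$ quoted in column A can be evaluated directly. A minimal sketch (function names are ours; the generalized binomial handles the half-integer upper argument that arises for odd $D$):

```python
import math

def gen_binom(a, k):
    # Generalized binomial coefficient C(a, k): real upper argument a, integer k >= 0.
    num = 1.0
    for i in range(k):
        num *= a - i
    return num / math.factorial(k)

def zernike_radial(n, m, D, x):
    # R_n^m(x) = sum_{s=0}^{(n-m)/2} (-1)^s C((n-m)/2, s)
    #            * C(D/2 + n - s - 1, (n-m)/2) * x^(n-2s)
    assert n >= m >= 0 and (n - m) % 2 == 0
    k = (n - m) // 2
    return sum((-1) ** s * gen_binom(k, s) * gen_binom(D / 2 + n - s - 1, k)
               * x ** (n - 2 * s) for s in range(k + 1))
```

For $D=2$ this reproduces the classical Zernike radial polynomials, e.g. $R_2^0(x)=2x^2-1$.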
In other words, our algorithm initialises $w:=g$, $u_1:=1$ and $u_2:=1$ and multiplies $w$, $u_1$ ... | For the purposes of determining the cost of Taylor’s algorithm in terms of matrix operations, namely determining the length of an MSLP for the algorithm, we assume that the field elements $-g_{ic}g_{rc}^{-1}$ ... | does not yield an upper bound for the memory requirement in a theoretical analysis. Moreover, the result of SlotUsagePattern improves the memory usage but it is not necessarily optimized overall and, hence, the number of slots can still be greater than the number of slots of a carefully computed MSLP. It should also be... | As for the simpler examples considered in the previous section, here to keep the presentation clear we do not write down explicit MSLP instructions, but instead determine the cost of Algorithm 3 while keeping track of the number of elements that an MSLP for this algorithm would need to keep in memory at any given time... | The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input. | C |
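The MSLP cost model used in this row (length = number of instructions, memory = number of slots) can be illustrated with a toy evaluator; integers under multiplication stand in for the matrix group elements, and all names are ours:

```python
def eval_mslp(program, slots):
    """Evaluate a memory-restricted straight-line program (MSLP).

    `slots` holds the initial memory. Each instruction (d, i, j) stores
    slots[i] * slots[j] into slots[d]. The MSLP's length is len(program)
    and its memory requirement is len(slots).
    """
    for d, i, j in program:
        slots[d] = slots[i] * slots[j]
    return slots

# g^8 by repeated squaring: length 3, memory requirement 2 slots
mem = eval_mslp([(1, 0, 0), (1, 1, 1), (1, 1, 1)], [5, 1])
```

Evaluating $g^8$ naively needs 7 multiplications; the squaring MSLP above has length 3 with 2 slots, which is the kind of trade-off the cost analysis tracks.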
The key to approximate (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That al... | It is essential for the performance of the method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it would be reasonable to solve them locally using patches of elements. We note that the ide... | Solving (22) efficiently is crucial for the good performance of the method, since it is the only large-dimensional system of (21), in the sense that its size grows with order of $h^{-d}$. | mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov... | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ... | A |
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and the floating-point issues of both programs. Our implementations of Alg-K and Alg-CM have a logical difference in handling degenerate cases. | Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$’s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code lengths for this is 1:7 between Alg-A and Alg-CM. | Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases. | A |
It has to be noted here that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable only when the user gains more knowledge about the event. | Training data for single tweet classification. Here we follow our assumption that an event might include sub-events for which relevant tweets are rumorous. To deal with this complexity, we train our single-tweet learning model only with manually selected breaking and subless³ (the terminology subless indicates an eve... | We use the same dataset described in Section 5.1. In total – after cutting off 180 events for pre-training the single tweet model – our dataset contains 360 events and 180 of them are labeled as rumors. Those rumors and news fall comparatively evenly into 8 different categories, namely Politics, Science, Attacks, Disaster, A... | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesired at critical moments. We trade this off by debunking at the single tweet le... | story descriptions we manually constructed queries to retrieve the relevant tweets for 270 rumors with high impact. Our approach to query construction mainly follows [11]. For the news event instances (non-rumor examples), we make use of the manually constructed corpus from Mcminn et al. [21], which covers 500 real-wor... | B |
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$ ... | In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log t)$ componen... | where the residual $\boldsymbol{\rho}_k(t)$ is bounded and $\hat{\mathbf{w}}_k$ is the solution of the K-class SVM: | A |
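The implicit-bias result this row describes (gradient descent on separable data converges in direction to the max-margin SVM solution, up to a slowly shrinking residual) can be sketched numerically; the toy dataset, step size, and iteration count below are our choices, not the paper's:

```python
import numpy as np

# Toy separable dataset; its hard-margin SVM solution is w_hat = (0.5, 1)
# (active constraints 2*w1 >= 1 and w2 >= 1), so the limit direction of
# gradient descent should be (1, 2)/sqrt(5).
X = np.array([[2.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0])

w = np.zeros(2)
lr = 0.5
for t in range(50_000):
    margins = y * (X @ w)
    # gradient of the logistic loss sum_i log(1 + exp(-margin_i))
    grad = -(y / (1.0 + np.exp(margins))) @ X
    w -= lr * grad

direction = w / np.linalg.norm(w)
w_hat = np.array([1.0, 2.0]) / np.sqrt(5.0)
```

The norm of `w` keeps growing (roughly like $\log t$), while `direction` drifts toward `w_hat` at the slow logarithmic rate quoted in the context cell.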
The performance of this feature group is not so convincing. The feature $P_a$ from the SpikeM model is the best one of them. The problem of these two models, which we have already pointed out in Section 3.2.3, is that both models need substantial data to f... | As shown in Table 11, CreditScore is the best feature in general. Figure 10 shows the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, significantly for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit afte... | The text feature set contains 16 features in total. The feature ranking is shown in Table 7. The best one is NumOfChar, which is the average number of different characters in tweets. PolarityScores is the best feature when we tested the single tweets model, but its performance in the time series model is not ideal. It is true ... | As we can see in Figure 9, the best result on average over 48 hours is the BestSet. The second one is All features. Except for those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we ... | The performance of the user features is similar to the Twitter features; they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours of the user feature group is UserTweetsPerDays, and it is the best feature overall in the first 4 hours, but its rank decreases with ... | A |
Evaluating methodology. For RQ1, given an event entity e, at time t, we need to classify it into either the Breaking or the Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of b... | RQ2. Figure 4 shows the performance of the aspect ranking models for our event entities at specific times and types. The rightmost three models in each metric are the models proposed in this work. The overall results show that the performances of these models are even better than the baselines (for at least one of the ... | Results. The baseline and the best results of our $1^{st}$-stage event-type classification are shown in Table 3 (top). The accuracy for the basic majority vote is high for imbalanced classes, yet it is lower at weighted F1. Our learned model achie... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | B |
where $\Theta_{t_0:t_1,a}=[\theta_{t_0,a},\ldots]$ ... | —i.e., the dependence on past samples decays exponentially, and is negligible after a certain lag— one can establish uniform-in-time convergence of SMC methods for functions that depend only on recent states; see [Kantas et al., 2015] and references therein. | the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ... | The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015]. | More broadly, one can establish uniform-in-time convergence for path functionals that depend only on recent states, as the Monte Carlo error of $p_M(\theta_{t-\tau:t}|\mathcal{H}_{1:t})$ ... | A |
Overall, the distributions of all three kinds of values throughout the day roughly correspond to each other. In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the day. | The insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only difference is for patients 10 and 12, whose intakes are earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning. | Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i... | Overall, the distributions of all three kinds of values throughout the day roughly correspond to each other. In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the day. | Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | B |
This representation constitutes the input to an Atrous Spatial Pyramid Pooling (ASPP) module Chen et al. (2018). It utilizes several convolutional layers with different dilation factors in parallel to capture multi-scale image information. Additionally, we incorporated scene content via global average pooling over the... | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that result... | In this work, we laid out three convolutional layers with kernel sizes of $3\times 3$ and dilation rates of 4, 8, and 12 in parallel, together with a $1\times 1$ convolutional layer that could not learn new spatial dependencies but nonlinearly combined existing feature maps. Image-level context was rep... | Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which... | To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architect... | B |
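The ASPP idea described in this row can be sketched in a toy 1-D setting: parallel dilated convolutions at rates 4, 8 and 12, a 1×1 branch, and an image-level average-pooling branch, stacked as channels. The kernel and helper names are illustrative, not the paper's learned weights:

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    # 'same'-padded 1-D cross-correlation with dilation factor `rate`
    k = len(kernel)
    span = (k - 1) * rate
    xp = np.pad(x, (span // 2, span - span // 2))
    return np.array([sum(kernel[j] * xp[i + j * rate] for j in range(k))
                     for i in range(len(x))])

def aspp(x, kernel, rates):
    # parallel dilated branches + a 1x1 branch (identity weight here)
    # + image-level context via global average pooling
    branches = [dilated_conv1d(x, kernel, r) for r in rates]
    branches.append(x.copy())
    branches.append(np.full_like(x, x.mean()))
    return np.stack(branches)

x = np.arange(16, dtype=float)
out = aspp(x, kernel=[1.0, 1.0, 1.0], rates=[4, 8, 12])
```

Each branch sees the same input at a different effective receptive field, which is what lets the module capture multi-scale context without downsampling.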
There is a polynomial-time $O(\sqrt{\log(\textsf{opt})}\,\log(n))$-approximation algorithm and a polynomial-time $O(\sqrt{\log(\textsf{opt})}\,\textsf{opt})$... | In this section, we discuss some examples that illustrate the concepts of marking sequences and the locality number, and we also discuss some word combinatorial properties related to the locality number. Note that for illustration purposes, the example words considered in this section are not necessarily condensed. | The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local... | As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection.... | In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). In the next Section 3, we discuss the concept of the locality number with some examples and some word combinatorial considerations. The purpose of this section is to develop a better under... | D |
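The locality number and marking sequences discussed in this row can be computed by brute force for small words; a sketch under the standard definition (minimum over letter orders of the maximum number of marked blocks), with our own function names:

```python
from itertools import permutations

def marked_blocks(word, marked):
    # number of maximal contiguous blocks of marked positions
    blocks, inside = 0, False
    for ch in word:
        if ch in marked:
            if not inside:
                blocks += 1
            inside = True
        else:
            inside = False
    return blocks

def locality_number(word):
    # minimum over all marking sequences (orders of the letters)
    # of the maximum number of marked blocks at any stage
    letters = sorted(set(word))
    best = len(word)
    for order in permutations(letters):
        marked, worst = set(), 0
        for ch in order:
            marked.add(ch)
            worst = max(worst, marked_blocks(word, marked))
        best = min(best, worst)
    return best
```

This is exponential in the alphabet size, which is consistent with the row's interest in approximation algorithms via cutwidth and pathwidth.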
Besides solving the data and interpretability problems, researchers in cardiology could utilize already established deep learning architectures that have not been widely applied in cardiology, such as capsule networks. Capsule networks[265] are deep neural networks that require less training data than CNNs and its l... | They have been used by a number of publications in cardiology in medical history prediction[70], ECG beat classification[86] and CVD prediction using fundus[192]. Another simpler tool for interpretability is saliency maps[264], which use the gradient of the output with respect to the input, which intuitively shows the re... | Amongst their experiments they found that rotational and scaling data augmentations did not help increase accuracy, attributing it to interpolation altering pixel intensities, which is problematic due to the sensitivity of CNNs to pixel distribution patterns. | However, an important constraint they currently have which limits them from achieving wider use is the high computational cost compared to CNNs, due to the ‘routing by agreement’ algorithm. Amongst their recent uses in medicine are brain tumor classification[266] and breast cancer classification[267]. | Lessman et al.’s[195] method for coronary calcium scoring utilizes three independently trained CNNs to estimate a bounding box around the heart, in which connected components above a Hounsfield unit threshold are considered candidates for CACs. Classification of extracted voxels was performed by feeding two-dimensional p... | C |
An important step in this direction was made by Leibfried et al. (2016), which extends the work of Oh et al. (2015) by including reward prediction, but does not use the model to learn policies that play the games. Most of these approaches, including ours, encode knowledge of the game in an implicit way. Unlike this, there... | Using models of environments, or informally giving the agent the ability to predict its future, has a fundamental appeal for reinforcement learning. The spectrum of possible applications is vast, including learning policies from the model (Watter et al., 2015; Finn et al., 2016; Finn & Levine, 2017; Ebert et al., 2017; Haf... | have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control. Our video models of Atari en... | Notable exceptions are the works of Oh et al. (2017), Sodhani et al. (2019), Ha & Schmidhuber (2018), Holland et al. (2018), Leibfried et al. (2018) and Azizzadenesheli et al. (2018). Oh et al. (2017) use a model of rewards to augment model-free learning with good results on a number of Atari games. However, this metho... | Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using... | C |
Here we also refer to CNN as a neural network consisting of alternating convolutional layers, each one followed by a Rectified Linear Unit (ReLU) and a max pooling layer, and a fully connected layer at the end, while the term ‘layer’ denotes the number of convolutional layers. | This is achieved with the use of multilayer networks, which consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ... | A high level overview of these combined methods is shown in Fig. 1. Although we choose the EEG epileptic seizure recognition dataset from University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could be generalized to any kind of signal classification problem. | For the purposes of this paper we use a variation of the database¹ (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals. | For the spectrogram module, which is used for visualizing the change of the frequency of a non-stationary signal over time [18], we used a Tukey window with a shape parameter of 0.25, a segment length of 8 samples, an overlap between segments of 4 samples and a fast Fourier transform of 64 sampl... | C |
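The spectrogram settings quoted in column D (Tukey window with shape parameter 0.25, segments of 8 samples, overlap of 4, 64-point FFT) can be sketched with a hand-rolled STFT. The Tukey window follows the usual tapered-cosine definition; all function names are ours:

```python
import numpy as np

def tukey(M, alpha=0.25):
    # Tukey (tapered cosine) window of length M with shape parameter alpha
    n = np.arange(M)
    w = np.ones(M)
    edge = int(np.floor(alpha * (M - 1) / 2))
    ramp = n[: edge + 1]
    taper = 0.5 * (1 + np.cos(np.pi * (2 * ramp / (alpha * (M - 1)) - 1)))
    w[: edge + 1] = taper
    w[M - edge - 1 :] = taper[::-1]
    return w

def spectrogram(x, nperseg=8, noverlap=4, nfft=64, alpha=0.25):
    # segment, window, zero-padded real FFT; shape (freq_bins, n_segments)
    step = nperseg - noverlap
    win = tukey(nperseg, alpha)
    n_seg = 1 + (len(x) - nperseg) // step
    frames = np.stack([x[i * step : i * step + nperseg] * win
                       for i in range(n_seg)])
    return np.abs(np.fft.rfft(frames, n=nfft, axis=1)).T

S = spectrogram(np.sin(2 * np.pi * 0.2 * np.arange(178)))
```

For a 178-sample EEG segment this yields 43 frames of 33 frequency bins (the one-sided spectrum of a 64-point FFT).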
The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To ensure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial along with their first and second derivatives. The trajectory design took into account six constra... | Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ... |
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas... |
The whole-body climbing gait involves utilizing the entire body movement of the robot, swaying forwards and backwards to enlarge the stability margins before initiating gradual leg movement to overcome a step. This technique optimizes stability during the climbing process. To complement this, the rear-body climbing ga... | The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful desi... | C |
Suppose that you have an investment account with a significant amount in it, and that your financial institution advises you periodically on investments. One day, your banker informs you that company X will soon receive a big boost, and advises to use the entire account to buy stocks. If you were to completely trust th... |
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with larger number of ... |
In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size $k$, let $\eta$ denote the number of erroneous bits (which may be not known to the algorithm). In this setting, the objective would... | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... |
In this work we focus on the online computation with advice. Our motivation stems from observing that, unlike the real world, the advice under the known models is often closer to “fiat” than “recommendation”. Our objective is to propose a model which allows the possibility of incorrect advice, with the objective of ob... | D |
With the aim of avoiding cases of misclassification like in (d), we decided to implement the second classifier, SS3Δ, whose policy also takes into account the changes in both slopes.
As can be seen from Algorithm 3 and as mentioned before, SS3Δ additionally classifies a subject as positive if the positive slope chan... | the accumulated negative confidence value starts being greater than the positive one, but as more chunks are read (specifically starting after reading the 3rd chunk), the positive value starts and keeps growing until it exceeds the other one. In this case, this subject is classified as depressed after reading the 6th c... | This problem can be detected in this subject by seeing the blue dotted peak at around the 60th writing, indicating that “the positive slope changed around five times faster than the negative” there, and therefore misclassifying the subject as positive. However, note that this positive change was in fact really small (l... | Figure 7 shows subject 1914 again, this time including information about the changes in the slopes. Note that this subject was previously misclassified as not depressed because the accumulated positive value never exceeded the negative one, but by adding this new extra policy, this time it is correctly classi... | the subject is misclassified as positive since the accumulated positive value exceeded the negative one. When we manually analyzed cases like these we often found out that the classifier was correctly accumulating positive evidence since the users were, in fact, apparently depressed. | C |
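A hedged sketch of the two-part decision policy this row describes for SS3Δ: flag a subject when the accumulated positive confidence exceeds the negative one, or when the positive slope changes much faster than the negative one. The threshold `gamma` and all names are our assumptions, not the paper's constants:

```python
def ss3_delta(pos, neg, gamma=4.0):
    """`pos`/`neg` are per-chunk accumulated confidence values.

    Returns True (positive/depressed) if at any chunk the positive
    accumulated value exceeds the negative one, or if the positive
    increment (slope) is more than `gamma` times the negative increment.
    """
    for t in range(1, len(pos)):
        if pos[t] > neg[t]:
            return True
        dpos, dneg = pos[t] - pos[t - 1], neg[t] - neg[t - 1]
        if dneg > 0 and dpos > gamma * dneg:
            return True
    return False
```

The slope clause is what recovers subjects like 1914, whose positive value grows quickly without ever overtaking the negative one.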
Stochastic gradient descent (SGD) and its variants (Robbins and Monro, 1951; Bottou, 2010; Johnson and Zhang, 2013; Zhao et al., 2018, 2020, 2021) have been the dominating optimization methods for solving (1).
In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameter... | Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model trai... | With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training.
These methods can be implemented on distributed frameworks like parameter server and al... | Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework.
In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-red... | GMC can be easily implemented on the all-reduce distributed framework in which each worker sends the sparsified vector $\mathcal{C}(\mathbf{e}_{t+\frac{1}{2},k})$... | B |
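The sparsified vector $\mathcal{C}(\mathbf{e})$ mentioned in column D is typically a top-k compressor applied with error feedback: the worker sends the compressed vector and keeps the residual for the next round. A sketch under that assumption (names ours):

```python
import numpy as np

def topk_sparsify(v, k):
    # keep the k largest-magnitude entries of v, zero out the rest
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# one error-feedback round: transmit C(e), retain the residual e - C(e)
e = np.array([3.0, -1.0, 0.5, 2.0])
sent = topk_sparsify(e, k=2)
residual = e - sent
```

Because the residual is carried over instead of discarded, no gradient information is permanently lost, which is what makes such compressors compatible with momentum-based distributed SGD.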
The three separate clusters which are depicted in Fig. 3 and the aggregated density plot in Fig. 4 between the Identity activation function, the ReLU and the rest show the effect of a sparser activation function on the representation. | Figure 1: Visualization of the activation maps of five activation functions (Identity, ReLU, top-k absolutes, Extrema-Pool indices and Extrema) for 1D and 2D input in the top and bottom row respectively. The 1D input to the activation functions is denoted with the continuous transparent green line using an example from... | Imposing a $med$ on the extrema detection algorithm makes $\boldsymbol{\alpha}$ sparser than the previous cases and solves the problem of double extrema activations that Extrema-Pool indices have (as shown in Fig. 1). The sparsity parameter in this case ... | The sparser an activation function is, the more it compresses, sometimes at the expense of reconstruction error. However, by visual inspection of Fig. 5 one could confirm that the learned kernels of the SAN with sparser activation maps (Extrema-Pool indices and Extrema) correspond to the reoccurring patterns in the data... | The three separate clusters which are depicted in Fig. 3 and the aggregated density plot in Fig. 4 between the Identity activation function, the ReLU and the rest show the effect of a sparser activation function on the representation. | C |
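A sketch of the 'Extrema' activation with a minimum extrema distance $med$, as described in column B: keep well-separated local extrema of a 1-D signal and zero everything else. The greedy magnitude-based tie-breaking is our choice, not necessarily the paper's exact rule:

```python
import numpy as np

def extrema_activation(x, med):
    # candidate extrema: sign change of the discrete derivative
    n = len(x)
    cand = [i for i in range(1, n - 1)
            if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0]
    # greedily accept candidates by decreasing magnitude,
    # enforcing a minimum distance `med` between kept extrema
    kept = []
    for i in sorted(cand, key=lambda i: -abs(x[i])):
        if all(abs(i - j) >= med for j in kept):
            kept.append(i)
    out = np.zeros_like(x)
    out[kept] = x[kept]
    return out
```

The distance constraint is what removes the double activations that plain Extrema-Pool indices can produce around a single peak.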
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better. |
Fig. 12 shows how the number of UAVs affects the computation complexity of SPBLLA. Since the total number of UAVs varies, the goal functions are different. The goal functions’ values in the optimum states increase with the growth of the number of UAVs. Since goal functions are the summation of utility functions, ... |
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better. |
We construct a UAV ad-hoc network in a post-disaster scenario with $M$ identical UAVs being randomly deployed, in which $M$ is a huge number compared with a normal multi-UAV system. All the UAVs have the same volume of battery $E$ and communication capability. The topological structure of Mult... | Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit... | C |
$\overline{\Pi}_{r}=\Bigl[-2\,\overline{\widehat{Dr}}*\bigl(\widehat{\mu}\,\widehat{r}\,(\overline{\widehat{Dr}}*\overline{v}_{r})\bigr)-\overline{\widehat{Dz}}*\bigl(\widehat{\mu}\,\widehat{r}\,(\overline{\widehat{Dr}}*\overline{v}_{z}+\overline{\widehat{Dz}}*\overline{v}_{r})\bigr)\Bigr]\,/\,\overline{r}$ | A |
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$... | Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it.
Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly | When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$... | $f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\,v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$ | Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality)
by $h_{A}$... | A |
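The case-defined comparability function above translates directly into code. A minimal sketch, assuming placeholder grades `a` and `b` (to be chosen by domain experts, as the text recommends); this is an illustrative reimplementation, not part of the original framework:

```python
# Sketch of the comparability function f_A = f_B from the cases above.
# The grades a and b are placeholder values for "distinct non-null" and
# "both null"; None stands in for the null value.
NULL = None

def comparability(u, v, a=0.5, b=0.5):
    if u is not NULL and u == v:
        return 1          # equal, non-null values
    if u is not NULL and v is not NULL and u != v:
        return a          # distinct non-null values
    if u is NULL and v is NULL:
        return b          # both values missing
    return 0              # exactly one value missing

assert comparability(3, 3) == 1
assert comparability(1, NULL) == 0
```

Relaxing reflexivity on null corresponds to choosing `b < 1`, so two missing values are only "possibly" equal.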
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus, we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Class... |
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation between the variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect of variance before applying Dropout (DQN) and aft... | For the experiments, a fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers, one between the input layer and the first hidden layer and one between the two hidden layers. To minimize the
DQN loss, the ADAM optimizer was used [25]. |
Standard Dropout is the original Dropout method, introduced in 2012. It provides a simple technique for avoiding over-fitting in fully connected neural networks [12]. During each training phase, each neuron is excluded from the network with probability p. Once trained, in the testing phase the full network is u... |
A fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers, one between the input layer and the first hidden layer and one between the two hidden layers. The ADAM optimizer was used for the minimization [25]. | B |
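The Dropout mechanism described above can be sketched in a few lines of NumPy. Note this shows the commonly used inverted variant, which rescales kept activations during training so the full network can be used unchanged at test time; it is a generic sketch, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p, train=True):
    """Inverted dropout: drop each unit with probability p during training
    and rescale the survivors by 1/(1-p); at test time, return x unchanged."""
    if not train:
        return x
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

x = np.ones(10_000)
y = dropout(x, p=0.5)
# In expectation the activations are unchanged:
assert abs(y.mean() - 1.0) < 0.05
```

With `p = 0.5`, roughly half the units are zeroed per pass while the expected activation magnitude is preserved.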
In medical image segmentation works, researchers have converged toward using classical cross-entropy loss functions along with a second distance- or overlap-based function. Incorporating domain/prior knowledge (such as coding the location of different organs explicitly in a deep model) is more sensible in the medical d... |
Going beyond pixel intensity-based scene understanding by incorporating prior knowledge has been an active area of research for the past several decades (Nosrati and Hamarneh, 2016; Xie et al., 2020). Encoding prior knowledge in medical image analysis models is generally more possible as compared to natural im... | Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr... |
Exploring reinforcement learning approaches similar to Song et al. (2018) and Wang et al. (2018c) for semantic (medical) image segmentation to mimic the way humans delineate objects of interest. Deep CNNs are successful in extracting features of different classes of objects, but they lose the local spatial information... |
For image segmentation, sequenced models can be used to segment temporal data such as videos. These models have also been applied to 3D medical datasets; however, it remains unclear whether processing volumetric data using 3D convolutions is advantageous over processing the volume slice by slice using 2D sequenced models. Ideally, seeing ... | A |
From Fig. 9(b) we notice that the graphs $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ in GRACLUS have additional nodes that are disconnected.
As discussed in Sect. V, these are ... | Fig. 9(c) shows that NMF produces graphs that are very dense, as a consequence of the multiplication with the dense soft-assignment matrix to construct the coarsened graph.
Finally, Fig. 9(d) shows that NDP produces coarsened graphs that are sparse and preserve well the topology of the original graph. | Fig. 12 shows the result of the NDP coarsening procedure on the 6 types of graphs.
The first column shows the subset of nodes of the original graph that are selected ($\mathcal{V}^{+}$, in red) and discarded ($\mathcal{V}^{-}$... | Fig. 12 shows the result of the NDP coarsening procedure on the 6 types of graphs.
The first column shows the subset of nodes of the original graph that are selected ($\mathcal{V}^{+}$, in red) and discarded ($\mathcal{V}^{-}$... | From Fig. 9(b) we notice that the graphs $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ in GRACLUS have additional nodes that are disconnected.
As discussed in Sect. V, these are ... | A |
For real-world applications, the dependency on large amounts of labeled data represents a significant limitation (Breiman et al., 1984; Hekler et al., 2019; Barz & Denzler, 2020; Qi & Luo, 2020; Phoo & Hariharan, 2021; Wang et al., 2021). Frequently, there is little or even no labeled data for a particular task and hun... | Transfer learning and regularization methods are usually applied to reduce overfitting.
However, for training with little data, the networks still have a considerable number of parameters that have to be fine-tuned – even if just the last layers are trained. | Random forests and neural networks share some similar characteristics, such as the ability to learn arbitrary decision boundaries; however, both methods have different advantages.
Random forests are based on decision trees. Various tree models have been presented – the most well-known are C4.5 (Quinlan, 1993) and CART ... | Additionally, the experiment shows that the training is very robust to overfitting even when the number of parameters in the network increases.
When combining the generated data and original data, the accuracy on Car and Covertype improves with an increasing number of training examples. | First, we analyze the performance of state-of-the-art methods for mapping random forests into neural networks and neural random forest imitation. The results are shown in Figure 4 for different numbers of training examples per class.
For each method, the average number of parameters of the generated networks across all... | A |
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt... |
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... | for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al....
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... | C |
In this section, we review approaches that aim to reduce the model size by employing efficient matrix representations.
There exist several methods using low-rank decompositions which represent a large matrix (or a large tensor) using only a fraction of the parameters. | Several works have investigated special matrix structures that require fewer parameters and allow for faster matrix multiplications—the main workload in fully connected layers.
Furthermore, there exist several manually designed architectures that introduced lightweight building blocks or modified existing building bloc... | In this section, we review approaches that aim to reduce the model size by employing efficient matrix representations.
There exist several methods using low-rank decompositions which represent a large matrix (or a large tensor) using only a fraction of the parameters. | In most cases, the implicitly represented matrix is never computed explicitly such that also a computational speed-up is achieved.
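The low-rank idea can be illustrated with a short sketch: factor a weight matrix into two thin factors so that only r(m+n) parameters are stored instead of mn. This is a generic SVD-based illustration under assumed shapes, not the method of any specific cited work:

```python
import numpy as np

# Approximate a weight matrix W (m x n) with rank-r factors
# U (m x r) and V (r x n): storage drops from m*n to r*(m+n).
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))  # rank-4 matrix

U_, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 4
U = U_[:, :r] * s[:r]      # fold the singular values into U
V = Vt[:r, :]

assert np.allclose(U @ V, W)            # exact here, since rank(W) == 4
print(U.size + V.size, "vs", W.size)    # 512 vs 4096 parameters
```

The product `U @ V` never needs to be formed explicitly: `x @ U @ V` applies the layer with two thin matrix multiplications, which is where the computational speed-up mentioned above comes from.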
Furthermore, there exist approaches using special matrices that are specified by only few parameters and whose structure allows for extremely efficient matrix multiplications. | In Cheng et al. (2015), the weight matrices of fully connected layers are restricted to circulant matrices 𝐖∈ℝn×n𝐖superscriptℝ𝑛𝑛\mathbf{W}\in\mathbb{R}^{n\times n}bold_W ∈ blackboard_R start_POSTSUPERSCRIPT italic_n × italic_n end_POSTSUPERSCRIPT, which are fully specified by only n𝑛nitalic_n parameters.
While thi... | C |
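The circulant restriction can be illustrated as follows: a circulant matrix is fully determined by its first column, and the matrix-vector product reduces to a circular convolution computable with the FFT in O(n log n). This is a generic sketch of the idea behind the circulant approach, not the authors' code:

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by x in O(n log n)
    via the FFT, without ever forming the n x n matrix."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

n = 8
c = np.random.default_rng(1).standard_normal(n)
x = np.random.default_rng(2).standard_normal(n)

# Reference: the explicit circulant matrix (column j is c rolled down by j).
C = np.column_stack([np.roll(c, j) for j in range(n)])
assert np.allclose(circulant_matvec(c, x), C @ x)
```

Here `c` plays the role of the n stored parameters; both memory and multiplication cost drop from O(n²).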
$(i_{\lambda,\lambda^{\prime}})_{*}(\omega_{0})=\omega_{1}+\omega_{2}$ |
$\omega_{2}$ is the degree-1 homology class induced by | and seeks the infimal $r>0$ such that the map induced by $\iota_{r}$ at $n$-th homology level annihilates the fundamental class $[M]$ of $M$. This infimal value defines $\mathrm{FillRad}(M)$... | $\omega_{1}$ is the degree-1 homology class induced by
|
$\omega_{0}$ is the degree-1 homology class induced by | D |
The remaining costs are one aspect of estimating the projection quality. This means that projected points with high remaining costs can be moved by an additional optimization step. Akin to this idea, t-viSNE might show a preview of the data points in the next optimization step. In consequence, users could determine whe... |
Clustervision [51] is a visualization tool used to test multiple batches of a varying number of clusters and allows the users to pick the best partitioning according to their task. Then, the dimensions are ordered according to a cluster separation importance ranking. As a result, the interpretation and assessment of t... | The goals of the comparative study presented in this paper were to provide initial evidence of the acceptance of t-viSNE by analysts, the consistency of their results when exploring a t-SNE projection using our tool, and the improvement over another state-of-the-art tool.
The tasks of the study were designed to test ho... | we present t-viSNE, a tool designed to support the interactive exploration of t-SNE projections (an extension to our previous poster abstract [17]). In contrast to other, more general approaches, t-viSNE was designed with the specific problems related to the investigation of t-SNE projections in mind, bringing to light... | In this paper, we introduced t-viSNE, an interactive tool for the visual investigation of t-SNE projections. By partly opening the black box of the t-SNE algorithm, we managed to give power to users allowing them to test the quality of the projections and understand the rationale behind the choices of the algorithm whe... | D |
Nature inspired optimization algorithms or simply variations of metaheuristics? - 2021 [15]: This overview focuses on the study of the frequency of new proposals that are no more than variations of old ones. The authors critique a large set of algorithms based on three criteria: (1) whether there is a physical analogy... |
Initialization of metaheuristics: comprehensive review, critical analysis, and research directions - 2023 [35]: This review addresses a gap in the literature by developing a taxonomy of initialization methods for metaheuristics. This classification is based on the initialization of metaheuristics according to random t... |
50 years of metaheuristics - 2024 [40]: This overview traces the last 50 years of the field, starting from the roots of the area to the latest proposals to hybridize metaheuristics with machine learning. The revision encompasses constructive (GRASP and ACO), local search (iterated local search, Tabu search, variable n... |
In the last update of this report, which is herein released 4 years after its original version, we note that there has been an evolution within the nature and bio-inspired optimization field. There is an excessive use of the biological approach as opposed to the real problem-solving approach to tackle real and complex... |
An exhaustive review of the metaheuristic algorithms for search and optimization: taxonomy, applications, and open challenges - 2023 [34]: This taxonomy provides a large classification of metaheuristics based on the number of control parameters of the algorithm. In this work, the authors question the novelty of new pr... | D |
$Z=\varphi_{m}(\widehat{A}\,\varphi_{m-1}(\cdots\varphi_{1}(\widehat{A}XW_{1})\cdots)W_{m})$. | To study the impact of different parts of the loss in Eq. (12), the performance with different $\lambda$ is reported in Figure 4.
From it, we find that the second term (corresponding to problem (7)) plays an important role especially on UMIST. If $\lambda$ is set as a large value, we may get the trivi... |
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update ... |
Figure 2: Visualization of the learning process of AdaGAE on USPS. Figures 2(b)-2(i) show the embedding learned by AdaGAE at the $i$-th epoch, while the raw features and the final results are shown in Figures 2(a) and 2(j), respectively. An epoch corresponds to an update of the graph. | To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, $T$, is set as 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo... | C |
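A stacked graph-autoencoder encoder of the form Z = φ_m(Â ⋯ φ_1(Â X W_1) ⋯ W_m) can be sketched generically as repeated propagate-then-transform layers. The names, the ReLU choice, and the identity placeholder for the normalized adjacency Â are our assumptions; this is not the authors' AdaGAE implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def gcn_encoder(A_hat, X, weights):
    """Sketch of Z = phi_m(A_hat ... phi_1(A_hat X W_1) ... W_m):
    each layer propagates features over the graph (A_hat @ Z),
    transforms them (@ W), and applies a nonlinearity."""
    Z = X
    for W in weights:
        Z = relu(A_hat @ Z @ W)
    return Z

rng = np.random.default_rng(0)
n, d = 5, 8
A_hat = np.eye(n)                      # placeholder for the normalized adjacency
X = rng.standard_normal((n, d))
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((4, 2))]
Z = gcn_encoder(A_hat, X, Ws)
assert Z.shape == (n, 2)
```

In the adaptive setting described above, `A_hat` itself would be re-estimated from the learned embedding after each epoch.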
We also want to understand the types of networks that we could test via domain-wide scans. To derive the business types we use PeeringDB. We classify the ASes according to the following business types: content, enterprise, Network Service Provider (NSP), Cable/DSL/ISP, non-profit, educational/research, route serve... |
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, which we call the Spoofing Mapper (SMap). We apply SMap to scan ingress filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that ... |
Domain-scan and IPv4-scan both show that the number of spoofable ASes grows with the overall number of ASes in the Internet, see Figure 1. Furthermore, there is a correlation between the fraction of scanned domains and ASes. Essentially, the more domains are scanned, the more ASes are covered, and more spoofable ASes a... | There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th... | Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping and using requests/responses to Name servers and we apply the suitable test depending on the server that we identify on the tested networ... | C |
While context did introduce more parameters to the model (7,575 parameters without context versus 14,315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer ... | The estimation of context by learned temporal patterns should be most effective when the environment results in recurring or cyclical patterns, such as in cyclical variations of temperature and humidity and regular patterns of human behavior generating interferents. In such cases, the recurrent pathway can identify use... | This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ... |
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design... |
One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regio... | A |
For the second change, we need to take another look at how we place the separators $t_{i}$.
We previously placed these separators in every second nonempty drum $\sigma_{i}:=[i\delta,(i+1)\delta]\times\mathrm{Ball}^{d-1}(\delta/2)$... | We generalize the case of integer $x$-coordinates to the case where the drum $[x,x+1]\times\mathrm{Ball}^{d-1}(\delta/2)$ contains $O(1)$ ... | However, in order for our algorithm to meet the requirements of Lemma 5.7, we would like to avoid having a point enter a drum after the $x$-coordinates are multiplied by some factor $\lambda>1$.
Furthermore, since the proof of Lemma 4.3 requires every drum to be at least $\delta$ wide,... | It would be interesting to see whether a direct proof can be given for this fundamental result.
We note that the proof of Theorem 2.1 can easily be adapted to point sets of which the $x$-coordinates of the points need not be integer, as long as the difference between $x$-coordinates of any two consecu... | Finally, we will show that the requirements for Lemma 5.7 hold, where we take $\mathcal{A}$ to be the algorithm described above.
The only nontrivial requirement is that $T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)$... | B |
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview) but many of them present the product as a subgroup of an automaton/self-similar group and, thus, lose the self-similarity property. An exception here is a line of research based on the Bel... | from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups. Note that the c...
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview) but many of them present the product as a subgroup of an automaton/self-similar group and, thus, lose the self-similarity property. An exception here is a line of research based on the Bel... | While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there ... | However, there do not seem to be constructions for presenting arbitrary free products of self-similar groups in a self-similar way. For semigroups, on the other hand, such results do exist. In fact, the free product of two automaton semigroups $S$ and $T$ is always at least
very close to being an auto... | D |
Here, we showed that existing visual grounding based bias mitigation methods for VQA are not working as intended. We found that the accuracy improvements stem from a regularization effect rather than proper visual grounding. We proposed a simple regularization scheme which, despite not requiring additional annotations,... | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... | This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any spon... | It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in ... | Since Wu and Mooney (2019) reported that human-based textual explanations Huk Park et al. (2018) gave better results than human-based attention maps for SCR, we train all of the SCR variants on the subset containing textual explanation-based cues. SCR is trained in two phases. For the first phase, which strengthens the... | B |
For each topic, we identified a corresponding entry from the OPP-115 annotation scheme (Wilson et al., 2016), which was created by legal experts to label the contents of privacy policies. While Wilson et al. (2016) followed a bottom-up approach and identified different categories from analysis of data practices in priv... |
Topic Modelling. Topic modelling is an unsupervised machine learning method that extracts the most probable distribution of words into topics through an iterative process (Wallach, 2006). We used topic modelling to explore the distribution of themes of text in our corpus. Topic modelling using a large corpus such as P... |
We found that two LDA topics contained vocabulary corresponding with the OPP-115 category First Party Collection/Use, one dealing with purpose and information type collected and the other dealing with collection method. Two LDA topics corresponded with the OPP-115 category Third Party Sharing and Collection, one detai... |
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ... |
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da... | B |
T5: Inspect the same view with alternative techniques and visualizations. To help avoid the appearance of cognitive biases, alternative interaction methods and visual representations of the same data from another perspective should be offered to the user (G5).
| As in the data space, each point of the projection is an instance of the data set. However, instead of its original features, the instances are characterized as high-dimensional vectors where each dimension represents the prediction of one model. Thus, since there are currently 174 models in \raisebox{-.0pt} {\tiny\bfS... | Figure 2(a.2) displays overlapping barcharts for depicting the per-class performances for each algorithm, i.e., two colors for the two classes in our example. The more saturated bar in the center of each class bar represents the altered performance when the parameters of algorithms are modified. Note that the view only... |
Figure 2: The exploration process of ML algorithms. View (a.1) summarizes the performance of all available algorithms, and (a.2) the per-class performance based on precision, recall, and f1-score for each algorithm. (b) presents a selection of parameters for KNN in order to boost the per-class performance shown in (c.... | Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis. (a) presents the selection of appropriate validation metrics for the specification of the data set. (b) aggregates the information after the exploration of different models and shows the active ones which will be used for th... | C |
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the $3$ cases, these | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the $3$ cases, these | cannot be adjacent to $\overline{2}$ nor $\overline{3}$,
and so $f^{\prime}$ is $[013]$ or $[010]$. | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | $(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))$,
$(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))$... | C |
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as "tasks" varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla ... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation.
Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the languag... | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... |
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as "tasks" varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla ...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | B
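The MAML training loop discussed in this row (inner-loop task adaptation, then a meta update of the shared initialization) can be sketched with a toy first-order variant; the 1-D regression task family, step sizes, and loop counts below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Toy first-order MAML sketch on 1-D linear-regression "tasks" (y = a*x).
# All hyperparameters here are illustrative, not the paper's values.

rng = np.random.default_rng(0)

def task_loss_grad(w, a):
    # One minibatch of the task "fit y = a*x" with squared loss.
    x = rng.normal(size=8)
    residual = w * x - a * x
    loss = np.mean(residual ** 2)
    grad = np.mean(2.0 * residual * x)
    return loss, grad

w = 0.0                                    # meta-initialization
inner_lr, outer_lr, n_tasks = 0.1, 0.05, 4
for step in range(200):
    meta_grad = 0.0
    for a in rng.uniform(-2.0, 2.0, size=n_tasks):  # sample a task batch
        w_task = w
        for _ in range(3):                 # inner-loop adaptation
            _, g = task_loss_grad(w_task, a)
            w_task -= inner_lr * g
        _, g_post = task_loss_grad(w_task, a)
        meta_grad += g_post                # first-order meta-gradient
    w -= outer_lr * meta_grad / n_tasks    # outer (meta) update
```

The outer loop here uses the post-adaptation gradient directly (FOMAML-style), sidestepping second-order terms for clarity.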
Activated Subarray with Limited DREs: As shown in Fig. 1, given a certain azimuth angle, there are limited DREs that can be activated. Due to the directivity, the DREs of the CCA subarray at different positions are anisotropic, and this phenomenon is different from the UPA. If an inappropriate subarray is activated, t... | After the discussion on the characteristics of CCA, in this subsection, we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the ... | The r-UAV needs to select multiple appropriate AWVs $\boldsymbol{v}(m_{s,k},n_{s,k},i_{k},j_{k},\mathcal{S}_{k})$, $k\in\mathcal{K}$... | Multiuser-resultant Receiver Subarray Partition: As shown in Fig. 3, the r-UAV needs to activate multiple subarrays to serve multiple t-UAVs at the same time. Assuming that an element cannot be contained in different subarrays, the problem of activated CCA subarray partition arises at the r-UAV side for the fast m...
In the considered UAV mmWave network, the r-UAV needs to activate multiple subarrays and select multiple combining vectors to serve multiple t-UAVs at the same time. Hence, the problem of maximizing the beam gain of the combining vectors for the r-UAV with our proposed codebook can be rewritten as
Thus,
$\bar{a}|\bar{b}$-regular digraphs with size $\bar{M}$ can be characterized as $\bar{a}|\bar{b}$-biregula... | This will be bootstrapped to the multi-color case in later sections. Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on
the left must be connected, via the unique edge relation, to every node on the ri... | We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument... | To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges.
The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from | C |
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | Although Assumption 6.1 is strong, we are not aware of any weaker regularity condition in the literature, even in the linear setting (Melo et al., 2008; Zou et al., 2019; Chen et al., 2019b) and the NTK regime (Cai et al., 2019). Let the initial distribution $\nu_{0}$... | Assumption 4.1 can be ensured by normalizing all state-action pairs. Such an assumption is commonly used in the mean-field analysis of neural networks (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Araújo et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). We remark that our analysis straightforwardly generalize... | Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | C
Regarding parameter efficiency for NMT, Wu et al. (2019a) present lightweight and dynamic convolutions. Ma et al. (2021) approximate softmax attention with two nested linear attention functions. These methods are orthogonal to our work and it should be possible to combine them with our approach. | We suggest that selectively aggregating different layer representations of the Transformer may improve the performance, and propose to use depth-wise LSTMs to connect stacked (sub-) layers of Transformers. We show how Transformer layer normalization and feed-forward sub-layers can be absorbed by depth-wise LSTMs, while... | Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the ne... |
We use depth-wise LSTM rather than a depth-wise multi-head attention network Dou et al. (2018) with which we can build the NMT model solely based on the attention mechanism for two reasons: 1) we have to compute the stacking of Transformer layers sequentially as in sequential token-by-token decoding, and compared to t... |
In this paper, we replace residual connections of the Transformer with depth-wise LSTMs, to selectively manage the representation aggregation of layers benefiting performance while ensuring convergence of the Transformer. Specifically, we show how to integrate the computation of multi-head attention networks and feed-... | D |
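The depth-wise LSTM idea described above (treating layer depth as the LSTM's time axis, so the LSTM cell replaces the residual connection between stacked sub-layers) can be sketched as follows; the dimensions, the single set of shared cell weights, and the stand-in sub-layer function are illustrative assumptions, not the actual Transformer architecture:

```python
import numpy as np

# Sketch: depth-wise aggregation across stacked layers with one LSTM cell.
# The layer output at depth l plays the role of the LSTM input at "time" l.
# Sizes, shared weights, and the toy sublayer are illustrative assumptions.

d = 8
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One set of LSTM-cell weights shared across depth (an assumption).
Wx = rng.normal(scale=0.1, size=(4 * d, d))
Wh = rng.normal(scale=0.1, size=(4 * d, d))
b = np.zeros(4 * d)

def lstm_cell(x, h, c):
    z = Wx @ x + Wh @ h + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def sublayer(x, l):
    # Stand-in for a self-attention / feed-forward sub-layer at depth l.
    return np.tanh(x + 0.1 * l)

x = rng.normal(size=d)            # token representation entering the stack
h, c = np.zeros(d), np.zeros(d)
for l in range(6):                # 6 stacked layers
    out = sublayer(x, l)
    h, c = lstm_cell(out, h, c)   # depth-wise LSTM instead of x + out
    x = h                         # hidden state feeds the next layer
```

The gating lets the model decide per depth how much of each layer's output to keep, which is the selective aggregation the text argues for.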
$\mathcal{K}^{\circ}(X_{i})=\uptau_{i}\cap\llbracket\mathsf{FO}[\upsigma_{i}]\rrbracket_{X_{i}}$ | lpps is indeed a pre-spectral space. Conversely, $\langle X,\uptau,\mathcal{K}^{\circ}(X)\rangle$
is well-defined whenever $(X,\uptau)$ is a pre... | definition, this map is surjective. Notice that this map is actually
a logical map from $\langle Y,\uptau_{Y},\mathcal{K}^{\circ}(Y)\rangle$... | $\{U\mid U\in\langle\uptau_{Y}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{Y}\rangle\}$ | pre-spectral space. Recall that $\langle Y,\uptau_{Y},\mathcal{K}^{\circ}(Y)\rangle$ is a lpps. We are going to exhibit
a su... | D
In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl... | (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
Figure 1: Method Comparisons. (a) Previous learning methods, (b) Our proposed approach. We aim to transfer the traditional calibration objective into a learning-friendly representation. Previous methods roughly feed the whole distorted image into their learning models and directly estimate the implicit and heterogeneo... | Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and causes insufficient distortion perception. To bridge the gap between image feature and calibration objective...
In this part, we compare our approach with the state-of-the-art methods in both quantitative and qualitative evaluations, in which the compared methods can be classified into traditional methods [23][24] and learning methods [8][11][12]. Note that our approach only requires a patch of the input distorted image to esti... | B |
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use a warm-up and polynomial learning-rate strategy. | First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being $B/8$. If $B/8\geq 128$, we will use the gradient accumulation [28]
with the batch size being 128. ... | We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD.
The experiments are implemented based on the DeepCTR framework (https://github.com/shenweichen/DeepCTR-Torch). | We use a pre-trained ViT [4] model (https://huggingface.co/google/vit-base-patch16-224-in21k) and fine-tune it on the CIFAR-10/CIFAR-100 datasets.
The experiments are implemented based on the Transformers framework (https://github.com/huggingface/transformers). We fine-tune the model with 20 epochs. |
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use a warm-up and polynomial learning-rate strategy. | C
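The warm-up plus polynomial-decay learning-rate schedule mentioned above can be sketched as follows; the base rate, warm-up length, and decay power are illustrative guesses, not the values used in the paper's experiments:

```python
# Sketch of a warm-up + polynomial-decay learning-rate schedule of the kind
# commonly used for large-batch ImageNet training. base_lr, warmup_epochs,
# and power are illustrative assumptions, not the paper's settings.

def lr_at(epoch, total_epochs=90, warmup_epochs=5, base_lr=0.1, power=2.0):
    if epoch < warmup_epochs:
        # linear warm-up from ~0 up to base_lr
        return base_lr * (epoch + 1) / warmup_epochs
    # polynomial decay over the remaining epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return base_lr * (1.0 - progress) ** power

schedule = [lr_at(e) for e in range(90)]
```

The warm-up avoids the instability of a large initial rate at large batch sizes, while the polynomial tail anneals the rate toward zero.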
When the algorithm terminates with $C_{s}=\emptyset$, Lemma 5.2 ensures the solution $z^{\text{final}}$ is integral. By Lemma 5.5, any client $j$ with $d(j,S)>$... | $F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}\mid j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$ | Brian Brubach was supported in part by NSF awards CCF-1422569 and CCF-1749864, and by research awards from Adobe. Nathaniel Grammel and Leonidas Tsepenekas were supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. Aravind Srinivasan was supported in part by NSF awa... | For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here,
$\mathcal{F}$ and $\mathcal{C}$... |
do $F_{A}\leftarrow\{i^{A}_{j}\mid j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$ | B
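The assignment step $F_{A}\leftarrow\{i^{A}_{j}\mid j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$ is a set comprehension over clients; a direct Python transcription under hypothetical toy data (all names and values below are illustrative, not the paper's instance):

```python
# Toy transcription of the set-builder step above. H_A: clients under
# consideration; i_A[j]: client j's candidate facility i_j^A; pi_I[j]:
# client j's group index; G: facility groups; F_I: already-opened facilities.
# All data here is a made-up illustration.

H_A = {1, 2, 3}
i_A = {1: "f1", 2: "f2", 3: "f3"}
pi_I = {1: "g1", 2: "g2", 3: "g3"}
G = {"g1": {"f9"}, "g2": {"f5"}, "g3": {"f7"}}
F_I = {"f5"}

# F_A <- { i_j^A | j in H_A and F_I ∩ G_{pi^I j} = ∅ }
F_A = {i_A[j] for j in H_A if not (F_I & G[pi_I[j]])}
# client 2's group g2 intersects F_I, so only f1 and f3 survive
```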
In real networked systems, the information exchange among nodes is often affected by communication noises, and the structure of the network often changes randomly due to packet dropouts, link/node failures and recreations, which are studied in [8]-[10].
| such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), et al. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function... |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... | Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than i.i.d. graph sequences as in [12]-[15],
and additive and... | However, a variety of random factors may co-exist in practical environment.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d... | D |
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics... | Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics... | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i... | Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of the $k$-anonymity family sanitize the origi... | D
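The randomized-response mechanism referenced above can be sketched in its standard binary form, together with the usual unbiased frequency estimator; the privacy parameter and the toy population below are illustrative:

```python
import random, math

# Sketch of binary randomized response, the basic local-DP primitive:
# each user reports their true bit only with probability e^eps/(e^eps+1).
# epsilon and the toy population are illustrative choices.

def randomized_response(truth: bool, epsilon: float) -> bool:
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return truth if random.random() < p_truth else not truth

def estimate_true_rate(reports, epsilon):
    # Standard unbiased estimate of the underlying "yes" rate.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(0)
truths = [i < 300 for i in range(1000)]          # true rate: 30%
reports = [randomized_response(t, epsilon=1.0) for t in truths]
est = estimate_true_rate(reports, epsilon=1.0)   # close to 0.3 in expectation
```

Each individual report is plausibly deniable, yet the curator can still debias the aggregate, which is exactly the statistics-collection use case the text describes.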
Table 3: PointRend’s performance on testing set (trackB). “EnrichFeat” means enhancing the feature representation of the coarse mask head and point head by increasing the number of fully-connected layers or their hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvements, we guess... | HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in the original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an... | PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.... | B
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s... | We denote by $\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_{i}(\delta_{1},\dots,\delta_{n})=\delta_{i}$...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| A |
Corollary 1 shows that if local variations are known, we can achieve near-optimal dependency on the total variation $B_{\bm{\theta}},B_{\bm{\mu}}$ and time horizo... | Reinforcement learning (RL) is a core control problem in which an agent sequentially interacts with an unknown environment to maximize its cumulative reward (Sutton & Barto, 2018). RL finds enormous applications in real-time bidding in advertisement auctions (Cai et al., 2017), autonomous driving (Shalev-Shwartz et al.... | The definition of total variation $B$ is related to the misspecification error defined by Jin et al. (2020). One can apply the Cauchy-Schwarz inequality to show that our total variation bound implies that misspecification in Eq. (4) of Jin et al. is also bounded (but not vice versa). However, the regret analys... | The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and... | Motivated by empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhov... | B
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst... |
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,... |
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures 1 and 2) which is statistically significant ($r(9)=-0.81$, $p<.005$). Trust is built on transparency and truthfulness, and t... | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst... | B
However, GAT also has some limitations. When encountering a new entity (e.g., W3C), its embedding $\mathbf{e}_{\text{W3C}}$ is randomly initialized, and the computed attention scores by GAT are meaningless. Additionally, $\mathbf{e}_{\text{W3C}}$... | Alternatively, we can implement the decentralized approach using a second-order attention mechanism. As depicted in 2b, each layer in DAN consists of two steps, similar to a multi-layer GAT. The computation involves the previous two layers and can be formulated using the following equation:
| Figure 2: Insight into multi-layer DAN. a. In the single-layer DAN, we first use an additional aggregation layer to obtain the neighbor context (1-2); we then use the neighbor context as query to score neighbors (3); we finally aggregate the neighbors with the attention scores to obtain the final output embedding (4-5)... | However, GAT also has some limitations. When encountering a new entity (e.g., W3C), its embedding $\mathbf{e}_{\text{W3C}}$ is randomly initialized, and the computed attention scores by GAT are meaningless. Additionally, $\mathbf{e}_{\text{W3C}}$... | If $\mathbf{e}_{\text{W3C}}$ is unobservable during the training phase, it becomes less useful and potentially detrimental when computing attention scores during the testing phase. To address this issue, we can introduce a decentralized attention network.... | D
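The two-step neighbor-context attention described in the figure caption (aggregate the neighbors into a context, then use that context as the query to score them, so no center-entity embedding is needed) can be sketched as follows; the mean aggregator, dimensions, and dot-product scoring are illustrative assumptions rather than DAN's exact formulation:

```python
import numpy as np

# Sketch of neighbor-context attention: the query is built from the
# neighbors themselves, so an untrained center embedding is never used.
# Aggregator, sizes, and scoring are illustrative assumptions.

rng = np.random.default_rng(0)
d = 4
neighbors = rng.normal(size=(5, d))   # embeddings of 5 neighbor entities

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

context = neighbors.mean(axis=0)      # step 1: neighbor context (no center query)
scores = softmax(neighbors @ context) # step 2: score neighbors by the context
output = scores @ neighbors           # step 3: attention-weighted aggregation
```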
$\nabla_{\eta}J^{\mathrm{PPO}}=\nabla_{\eta}\mathbb{E}_{t}[\delta_{t}]^{2},$
Previous work typically utilizes intrinsic motivation for exploration in complex decision-making problems with sparse rewards. Count-based exploration [20, 21] builds a density model and encourages the agent to visit the states with less pseudo visitation count. Episodic curiosity [22] compares the current observation... | Figure 6: The evaluation curve in Atari games. The first 6 games are hard exploration tasks. The different methods are trained with different intrinsic rewards, and extrinsic rewards are used to measure the performance. Our method performs best in most games, both in learning speed and quality of the final policy. The ... |
We first evaluate our method on standard Atari games. Since different methods utilize different intrinsic rewards, the intrinsic rewards cannot be used to measure the performance of the trained purely exploratory agents. Instead, we follow [11, 13], and use the extrinsic rewards given by the environment to ... | In this work, we consider self-supervised exploration without extrinsic reward. In such a case, the above trade-off narrows down to a pure exploration problem, aiming at efficiently accumulating information from the environment. Previous self-supervised exploration typically utilizes ‘curiosity’ based on prediction-err... | A
To date, the classic Gauss quadrature formula is the best approach to approximating integrals $I_{\mathrm{Gauss}}(f)\approx\int_{\Omega}f(x)\,\mathrm{d}x$... | However, we only use the $P_{A}$, $A=A_{m,n,p}$, $p=1,2$, unisolvent nodes to determine the interpolants, whereas Tr... | We complement the established notion of unisolvent nodes by the dual notion of unisolvence. That is: for given arbitrary nodes $P$, determine the polynomial space $\Pi$ such that
$P$ is unisolvent with respect to $\Pi$. In doing so, we revisit earlier results by Carl de Boor and Amon Ros... | Leslie Greengard, Christian L. Mueller, Alex Barnett, Manas Rachh, Heide Meissner, Uwe Hernandez Acosta, and Nico Hoffmann are deeply acknowledged for their inspiring hints and helpful discussions.
Further, we are grateful to Michael Bussmann and thank the whole CASUS institute (Görlitz, Germany) for hosting stimulatin... | convergence rates for the Runge function, as a prominent example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality.... | C
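Classic Gauss-Legendre quadrature, applied to the Runge function that this row uses as a benchmark, can be sketched with numpy; the node count is an illustrative choice:

```python
import numpy as np

# Sketch: Gauss-Legendre quadrature on [-1, 1] via numpy, approximating the
# integral of the Runge function 1/(1 + 25 x^2). n = 40 is an illustrative
# node count; the exact value has the closed form (2/5) * arctan(5).

def gauss_legendre(f, n):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return float(np.dot(weights, f(nodes)))

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
approx = gauss_legendre(runge, 40)
exact = (2.0 / 5.0) * np.arctan(5.0)
```

Because the Runge function is analytic on [-1, 1], the quadrature error decays geometrically in the node count, so a modest n already matches the exact value to many digits.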
$|\mathrm{IPM}(\mu,\nu)-\mathrm{IPM}(\hat{\mu}_{n},\hat{\nu}_{m})|<\epsilon+2[\mathfrak{R}_{n}(\mathcal{F},\mu)+\mathfrak{R}$... | A two-sample test is designed based on this theoretical result, and numerical experiments show that this test outperforms the existing benchmark.
In future work, we will study tighter performance guarantees for the projected Wasserstein distance and develop the optimal choice of $k$ to improve the performance ... | In this section, we first discuss the finite-sample guarantee for general IPMs, then a two-sample test can be designed based on this statistical property. Finally, we design a two-sample test based on the projected Wasserstein distance.
Omitted proofs can be found in Appendix A. | The proof of Proposition 1 essentially follows the one-sample generalization bound mentioned in [41, Theorem 3.1].
However, by following a similar proof procedure as discussed in [20], we can improve this two-sample finite-sample convergence result when extra assumptions hold, but existing works about IPMs haven’t inves... | The finite-sample convergence of general IPMs between two empirical distributions was established.
Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance has a minor dependence on the dimension of target distributions, which alleviates the curse of dimensionality. | C |
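A projection-based two-sample permutation test in the spirit described above can be sketched as follows; the fixed projection direction, sample sizes, and permutation count are illustrative simplifications (the actual method optimizes over projections rather than fixing one):

```python
import numpy as np

# Sketch: project both samples to 1-D, compare with the 1-D Wasserstein
# distance (sorted-sample L1 distance), calibrate by permutation.
# The fixed direction and all sizes are illustrative simplifications.

rng = np.random.default_rng(0)

def w1_projected(x, y, direction):
    px, py = x @ direction, y @ direction
    return np.mean(np.abs(np.sort(px) - np.sort(py)))

d = 5
direction = np.ones(d) / np.sqrt(d)      # fixed unit direction (a toy choice)

x = rng.normal(size=(200, d))            # sample from mu
y = rng.normal(size=(200, d)) + 1.0      # sample from nu (shifted mean)

stat = w1_projected(x, y, direction)
pooled = np.vstack([x, y])
perm_stats = []
for _ in range(200):                     # permutation calibration
    idx = rng.permutation(400)
    perm_stats.append(w1_projected(pooled[idx[:200]], pooled[idx[200:]], direction))
p_value = float(np.mean([s >= stat for s in perm_stats]))
```

Since everything is computed after a 1-D projection, the statistic's sampling behavior does not degrade with the ambient dimension, which is the dimension-robustness the text highlights.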
Learning disentangled factors $h\sim q_{\phi}(H|x)$ that are semantically meaningful representations of the observation $x$ is highly desirable because such interpreta...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervise... | While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i... | Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and corre... | Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$... | C
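The conditional-normalization step described in the last cell (using the nuisance code $Z$ to shift and scale decoder features) can be sketched AdaIN-style; the tiny linear predictors and all dimensions below are illustrative assumptions, not the model's actual layers:

```python
import numpy as np

# Sketch of conditional normalization: the nuisance code z predicts a
# per-channel scale (gamma) and shift (beta) that modulate normalized
# decoder features. The linear predictors and sizes are illustrative.

rng = np.random.default_rng(0)
C, z_dim = 6, 3
W_gamma = rng.normal(scale=0.1, size=(C, z_dim))
W_beta = rng.normal(scale=0.1, size=(C, z_dim))

def conditional_norm(features, z, eps=1e-5):
    # features: (C, N) decoder activations; z: nuisance code
    mu = features.mean(axis=1, keepdims=True)
    sigma = features.std(axis=1, keepdims=True)
    normed = (features - mu) / (sigma + eps)
    gamma = 1.0 + W_gamma @ z      # per-channel scale
    beta = W_beta @ z              # per-channel shift
    return gamma[:, None] * normed + beta[:, None]

feats = rng.normal(size=(C, 10))
z = rng.normal(size=z_dim)
out = conditional_norm(feats, z)
```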
As shown in the above method, logical aggregates can be constructed with structural wiring if digital signals are computed in pairs of inverted signals. Especially for the NOT gate, you can twist the $\alpha$ line and the $\beta$ line once, making it much simpler to operate than a semiconductor-based ... | As shown in the above method, logical aggregates can be constructed with structural wiring if digital signals are computed in pairs of inverted signals. Especially for the NOT gate, you can twist the $\alpha$ line and the $\beta$ line once, making it much simpler to operate than a semiconductor-based ... | The structure-based computer mentioned in this paper is based on Boolean algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic as 1 and 0, and mathematically describes digital electrical si... | If a pair of lines of the same color is connected, 1; if broken, 0. The sequence pair of states of the red line ($\alpha$) and blue line ($\beta$) determines the transmitted digital signal. Thus, signal cables require one transistor for switching action at the end. When introducing the concept of an inve...
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations as series and parallel connections were required. However, one can think about whether the four-pin designs are the... | C
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over $\mathbb{F}$... |
Let the matrix representation of $K_{F}=\mathbf{K}\mid W$ in $\mathcal{B}$ be denoted as $M$. (The notation for matrix representation is explained in (8)). Analogous to the univariate case, the... | The work [19] also provides a computational framework to compute the cycle structure of the permutation polynomial $f$ by constructing a matrix $A(f)$, of dimension $q\times q$, through the coefficients of the (algebraic) powers of $f^{k}$...
The first author would like to thank the Department of Electrical Engineering, Indian Institute of Technology - Bombay, as the work was done in full during his tenure as an Institute Post-Doctoral Fellow. The authors would also like to thank the reviewers for their suggestions in the proofs of Lemma 1, Proposition 1 and... | The second statement of the theorem gives a necessary and sufficient condition for an element of the set $\Sigma_{M}$ to be in $\Sigma_{f}$. If the choice of basis is as in (6), once the s... | C
The NNFS algorithm performed surprisingly well in our simulations given its simple and greedy nature, showing performance very similar to that of the adaptive lasso. However, in both gene expression data sets it was among the two worst performing methods, both in terms of accuracy and view selection stability. If one ... | Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expressi... |
The false discovery rate in view selection for each of the meta-learners can be observed in Figure 4. Note that the FDR is particularly sensitive to variability since its denominator is the number of selected views, which itself is a variable quantity. In particular, when the number of selected views is small, the add... | For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu et al., 2012).
An exam... | In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of vi... | A |
Regarding AP, HITON-PC and FBED exhibit significantly better performance than the other three techniques, as depicted in Figure 3(b). Notably, the results of AP generally display larger variances than those of ROC AUC, which indicates unstable performance measurement with AP.
|
Table 6 presents the reduction rates achieved by each of the five techniques. The reduction rate is computed as 1 minus the ratio of the number of relevant variables selected to the total number of variables in a dataset. The results reveal substantial variations in reduction rates among the different techniques for t... |
As shown in Figure 3(a), the two causal feature selection techniques, HITON-PC and FBED, show better performance than the other three techniques. HITON-PC has the best average results, followed by FBED, IEPC, MI and DC. From the $p$-values shown in the figure, HITON-PC is significantly better than MI and DC, a... | In conclusion, the relevant variable selection phase of the DepAD framework is crucial for identifying optimal predictors for the target variable in anomaly detection. Striking a balance between selecting too many or too few variables is essential for maintaining prediction accuracy. When the ground-truth relevant vari... | Compared to other methods, IEPC exhibits a notably lower reduction rate, which, we believe, contributes to its unstable performance. The experimental results in Figure 3 indicate that when considering only linear prediction models, IEPC performs better with regularization techniques such as LASSO and Ridge, as opposed ... | A
At the start of the interaction, when no contexts have been observed, $\hat{\theta}_{t}$ is well-defined by Eq. (5) when $\lambda_{t}>0$. Therefore, th... | Algorithm 1 follows the template of optimism in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. Technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
| where pessimism is the additive inverse of the optimism (difference between the payoffs under true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that $\theta_{*}\in C_{t}(\delta)$... |
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a m... | In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of... | A |
Table 2: Action localization results on validation set of ActivityNet-v1.3, measured by mAPs (%) at different tIoU thresholds and the average mAP. Our VSGN achieves the state-of-the-art average mAP and the highest mAP for short actions. Note that our VSGN, which uses pre-extracted features without further finetuning, s... | Table 6: xGN levels in xGPN (ActivityNet-v1.3). We show the mAPs (%) at different tIoU thresholds, average mAPs as well as mAPs for short actions (less than 30 seconds) when using xGN at different xGPN encoder levels. The levels in the columns with ✓ use xGN and the ones in the blank columns use a $\textrm{Conv1d}(3,2)$... | Cross-scale graph network. The xGN module contains a temporal branch to aggregate features in a temporal neighborhood, and a graph branch to aggregate features from intra-scale and cross-scale locations. Then it pools the aggregated features into a smaller temporal scale. Its architecture is illustrated in Fig. 4. The ... | We provide an ablation study for the key components VSS and xGPN in VSGN to verify their effectiveness on the two datasets in Tables 3 and 4, respectively. The baselines are implemented by replacing each xGN module in xGPN with a layer of $\textrm{Conv1d}(3,2)$ and ReLU, and not using cutt... | To further improve the boundaries generated from $M_{loc}$, we design $M_{adj}$ inspired by FGD in [24]. For each updated anchor seg... | C
The user interface of VisEvol is structured as follows:
(1) two projection-based views, referred to as Projections 1 and 2, occupy the central UI area (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d and e)); | (ii) in the next exploration phase, compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c–e));
(iii) during the detailed examination phase, zoo... | The user interface of VisEvol is structured as follows:
(1) two projection-based views, referred to as Projections 1 and 2, occupy the central UI area (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d and e)); | (2) active views relevant for both projections are positioned on the top (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(b and c)); and
(3) commonly-shared views that update on the exploration of either Projection 1 or 2 are placed at the bottom (see VisEvol: Visual Ana... | After another hyperparameter space search (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d)) with the help of supporter views (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c, f, and g)), out of the 290 models generated in... | C |
Consensus protocols, in contrast to Markov chains, operate without the limitations of non-negative nodes and edges or the requirement for the sum of nodes to equal one [18]. This broader scope enables consensus protocols to address a significantly wider range of problem spaces.
Therefore, there is a significant interes... | There are comprehensive survey papers that review the research on consensus protocols [19, 20, 21, 22]. In many scenarios, the network topology of the consensus protocol is a switching topology due to failures, formation reconfiguration, or state-dependence. There is a large number of papers that propose consensus prot... | we introduce a consensus protocol with state-dependent weights to reach a consensus on time-varying weighted graphs.
Unlike other proposed consensus protocols in the literature, the consensus protocol we introduce does not require any connectivity assumption on the dynamic network topology. We provide theoretical analy... | Another algorithm is proposed in [28] that assumes the underlying switching network topology is ultimately connected. This assumption means that the union of graphs over an infinite interval is strongly connected. In [29], previous works are extended to solve the consensus problem on networks under limited and unreliab... | Consensus protocols form an important field of research that has a strong connection with Markov chains [18].
Consensus protocols are a set of rules used in distributed systems to achieve agreement among a group of agents on the value of a variable [19, 20, 21, 22]. | A |
Although multi-matchings obtained by synchronisation procedures are cycle-consistent, the matchings are often spatially non-smooth and noisy, as we illustrate in Sec. 5.
From a theoretical point of view, the most appropriate approach for addressing multi-shape matching is based on a unified formulation, where cycle con... | In this work we fill this gap by introducing a generalisation of state-of-the-art isometric two-shape matching approaches towards isometric multi-shape matching. We demonstrate that explicitly exploiting the isometry property leads to a natural and elegant formulation that achieves improved results compared to previous... | It was shown that deep learning is an extremely powerful approach for extracting
shape correspondences [40, 27, 59, 26]. However, the focus of this work is on establishing a fundamental optimisation problem formulation for cycle-consistent isometric multi-shape matching. As such, this work does not focus on learning me... | A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisati... |
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both ... | A |
The first three steps of algorithm RecognizePG are implied by the first part of Theorem 6. By following Theorem 6, we have to check that there are no full antipodal triangles in $\text{Upper}_{C}$ (this is done in Step 4), and we have to find $f:\Gamma_{C}\to[$... | In this section we analyze all steps of algorithm RecognizePG. We want to explain them in detail and compute the computational complexity of the algorithm. Some of these steps are already discussed in [22]; anyway, we describe them in order to have a complete treatment.
|
The recognition algorithm RecognizePG for path graph is mainly built on path graphs’ characterization in [1]. This characterization decomposes the input graph G𝐺Gitalic_G by clique separators as in [18], then at the recursive step one has to find a proper vertex coloring of an antipodality graph satisfying some parti... | On the side of path graphs, we believe that, compared to algorithms in [3, 22], our algorithm is simpler for several reasons: the overall treatment is shorter, the algorithm does not require complex data structures, its correctness is a consequence of the characterization in [1], and there are a few implementation deta... | The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prov... | A |
In experiments 1(c) and 1(d), we study how the connectivity (i.e., $\rho$, the off-diagonal entries of $P$) across communities under different settings affects the performances of these methods. Fix $(x,n_{0})=(0.4,100)$... |
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha... |
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as $\rho$ increases they all perform worse. Under the DCMM model, the mixed Humming |
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting. |
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both approaches perform better than OCCAM and GeoNMF under the MMSB setting.... | B
In each iteration, variational transport approximates the update in (1.1) by first solving the dual maximization problem associated with the variational form of the objective and then using the obtained solution to specify a direction to push each particle.
The variational transport algorithm can be viewed as a forward... | Our Contribution. Our contribution is twofold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation.
In each iteration, variati... | In each iteration, variational transport approximates the update in (1.1) by first solving the dual maximization problem associated with the variational form of the objective and then using the obtained solution to specify a direction to push each particle.
The variational transport algorithm can be viewed as a forward... | To showcase these advantages, we consider an instantiation of variational transport where the objective functional $F$ satisfies the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) with respect to the Wasserstein distance and the variational problem associated with $F$ is solved via kernel methods.
I... |
Compared with existing methods, variational transport features a unified algorithmic framework that enjoys the following advantages. First, by considering functionals with a variational form, the algorithm can be applied to a broad class of objective functionals. | D |
$\displaystyle\qquad\big\|R\left(r_{t+1}\mid a_{j,t},z_{t}^{2}\right)-R\left(r_{t+1}\mid z_{t}^{2}\right)\big\|\Big).$ |
Besides the above two classes, other intrinsic reward methods are mainly task-oriented and for a specific purpose. For example, the method in [19] uses the discrepancy between the marginal policy and the conditional policy as the intrinsic reward for encouraging agents to have a greater social impact on others. The er... | Thus, in expectation, the intrinsic reward is the negative of the MI above. As each agent maximizes the long-term cumulative reward, it therefore minimizes the MI. As a result, agents become independent. This can be an interpretation from the information-theoretical perspective. Note that the prediction results are only use... | To make the policy transferable, traffic signal control is also modeled as a meta-learning problem in [14, 49, 36]. Specifically, the method in [14] performs meta-learning on multiple independent MDPs and ignores the influences of neighbor agents. A data augmentation method is proposed in [49] to generate diverse traf... | Secondly, even for a specific task, the received rewards and observations are uncertain to the agent, as illustrated in Fig. 1, which makes policy learning unstable and non-convergent. Even if the agent performs the same action on the same observation at different timesteps, the agent may receive different rewards a... | B
such that
$\mathpzc{rank}\left(\phi_{\mathbf{z}}(\hat{\mathbf{z}})\right)\equiv k$ ... | $\mathbf{x}\in\phi(\Lambda_{*})$ is in the same branch of zeros as $\mathbf{x}_{*}$ and, if a zero $\tilde{\mathbf{x}}$ is in th... | for computing a zero $\mathbf{x}_{*}$ of $\mathbf{f}$ at which the Jacobian $J(\mathbf{x}_{*})$ is of rank $r$, particularly when $\mathbf{x}_{*}$ ... | $J_{\text{rank-}r}(\mathbf{x})^{\dagger}\,\mathbf{f}(\mathbf{x})=\mathbf{0}$ implies $\mathbf{x}$ is a semiregular zero of $\mathbf{f}$ in the same branch of $\mathbf{x}_{*}$ ... | toward a semiregular zero $\hat{\mathbf{x}}$ of $\mathbf{x}\mapsto\mathbf{f}(\mathbf{x},\mathbf{y}_{*})$ in the same branch of $\mathbf{x}_{*}$ ... | A
We bound the overall time complexity of ProfilePacking for serving a sequence of $n$ items as a function of $n$, $k$, and $m$. The initial phase of the algorithm, which involves computing the profile and its optimal packing, runs in time independent of $n$ and does not i... | We will now use Lemma 2 to prove a more general result that incorporates the prediction error into the analysis. To this end, we will relate the cost of the packing of ProfilePacking to the packing that the algorithm would output if the prediction were error-free, which will allow us to apply the result of Lemma 2. Spe... | As the prediction error grows, ProfilePacking may not be robust; we show, however, that this is an unavoidable price that any optimally-consistent algorithm with frequency predictions must pay. We thus design and analyze a more general class of hybrid algorithms that combine ProfilePacking and any one of the known robu...
In this section, we describe and analyze a more general class of algorithms which offer better robustness in comparison to ProfilePacking, at the expense of slightly worse consistency. To this end, we will combine ProfilePacking with any algorithm $A$ that has efficient worst-case competitive ratio, in the | We conclude that the robustness of ProfilePacking is close-to-optimal and no $(1+\epsilon)$-consistent algorithm can do asymptotically better. It is possible, however, to obtain more general tradeoffs between consistency and robustness, as we discuss in the next section.
| C |
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperClou... | In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar... |
In this experiment, we set $N=10^{5}$. Using more rays had a negligible effect on the output value of $WT$ but significantly slowed the computation. We compared AtlasNet with LoCondA applied to HyperCloud (HC) and HyperFl... | In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Thro...
The results are presented in Table 1. LoCondA-HF obtains comparable results to the reference methods dedicated to point cloud generation. It can be observed that values of evaluated measures for HyperFlow(P) and LoCondA-HF (which uses HyperFlow(P) as a base model in the first part of the training) are on the same level... | D
For the non-strongly convex-concave case, distributed SPP with local and global variables were studied in [41], where the authors proposed a subgradient-based algorithm for non-smooth problems with $O(1/\sqrt{N})$ convergence guarantee ($N$ is the n...
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ... | Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem and solve it by the DMP algorithm. Similarly to Section (3), we consider a SPP in proximal setup and introduce Lagrangian multipliers for the common variables. However, in t... |
For the non-strongly convex-concave case, distributed SPP with local and global variables were studied in [41], where the authors proposed a subgradient-based algorithm for non-smooth problems with $O(1/\sqrt{N})$ convergence guarantee ($N$ is the n... | Paper [61] introduced an Extra-gradient algorithm for distributed multi-block SPP with affine constraints. Their method covers the Euclidean case and the algorithm has an $O(1/N)$ convergence rate.
Our paper proposes an algorithm based on adding Lagrangian multipliers to consensus constr... | D |
The remainder of this section is dedicated to express the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describe an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i... |
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric... |
The study of cycles of graphs has attracted attention for many years. To mention just three well known results consider Veblen's theorem [2] that characterizes graphs whose edges can be written as a disjoint union of cycles, MacLane's planarity criterion [3] which states that planar graphs are the only ones to admit a 2-ba... | In this section we present some experimental results to reinforce
Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g... |
The set of cycles of a graph has a vector space structure over $\mathbb{Z}_{2}$, in the case of undirected graphs, and over $\mathbb{Q}$, in the case of directed graphs [5]. A basis of such a vector space is denoted cycle basis and its dimensio... | B
$(m+1)$-tuples of $\mathcal{F}$ with nonempty intersection. In other words, $\pi_{m+1}(\mathcal{F})$ is at least $\delta^{\prime}\stackrel{\mathrm{def}}{=}\rho/\binom{mt}{m+1}$... | The rest of Section 4.1 is devoted to the proof of Lemma 4.2. The proof first handles the case $k=m$, and then uses it to prove the case $k<m$. Note that for $k>m$ the lemma is trivial, as the chain group contains only a trivial chain and we can ta... | Lemma 4.6 assumes that the $m$-colored family $\mathcal{F}$ has the property that for $0\leq j<\dim K$ and for every colorful subfamily $\mathcal{G}$ of $\mathcal{F}$, the $j$th reduced Betti number $\tilde{\beta}_{j}(\bigcap_{F\in\mathcal{G}}F)$... | If we use Lemma 4.8 in place of Lemma 4.6 in the proof of Theorem 2.1, the hypothesis on the $m$-colored family $\mathcal{F}$ can be weakened. This “improved” Theorem 2.1 can in turn be applied in the proof of Theorem 1.2, yielding the following: |
a positive fraction of the $m$-tuples to have a nonempty intersection, where for $\dim K>1$, $m$ is some hypergraph Ramsey number depending on $b$ and $K$.
So in order to prove Corollary 1.3 it suffices to show that if a positive fraction of the ... | C
The selected features are highlighted in the dark gray color (because it matches the default color, which is gray) of the VIF metric’s region, as demonstrated in Fig. 1(d), and the combinations are generated for the two or three selected features automatically, as can be seen in Fig. 1(b). It is up to the user to selec... | Similar to the workflow described above, we start by choosing the appropriate thresholds for slicing the data space. As we want to concentrate more on the instances that are close to being predicted correctly, we move the left gray line from 25% to 35% (see Fig. 5(a.1 and a.2)). This makes the Bad slice much shorter. S... | Next, we focus on the overall inspection of features for all instances (see Fig. 3(d.1–d.4)).
F4 (the ellipsoid shape) appears the worst in terms of target correlation (the small circular bar), and it has one of the lowest MI values (light blue color). | To the best of our knowledge, little empirical evidence exists for choosing a particular measure over others. In general, target correlation and mutual information (both related to the influence between features and the dependent variable) may be good candidates for identifying important features [71]. After these firs... | By comparing the lengths of the circular bars in Fig. 3(e), we see that the lowest overall target correlation is reported for F4 (on hover shown as 10%).
Also, the MI exhibits low values for both F3 and F4. As we proceed, we observe that F3, F4, and F6 may cause problems regarding collinearity based on the VIF heuristi... | D |
We set the mean functions as $\mu^{(j)}=0$, $j=0,1,2$ [21]. However, if we are given some prior information on the shape and structure of $g_{j}$... | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi... | This paper demonstrated a hierarchical contour control implementation for the increase of productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe...
We use two geometries to evaluate the performance of the proposed approach, an octagon geometry with edges in multiple orientations with respect to the two axes, and a curved geometry (infinity shape) with different curvatures, shown in Figure 4. We have implemented the simulations in Matlab, using Yalmip/Gurobi to so... | For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using Latin hypercube design of experiments. The BO progress is shown in Figure 5, right pannel, for the optimization with constraints on the jerk and on the tracking error. Af... | C |
It is unknown how well the methods scale up to multiple sources of biases and large number of groups, even when they are explicitly annotated. To study this, we train the explicit methods with multiple explicit variables for Biased MNISTv1 and individual variables that lead to hundreds and thousands of groups for GQA ... | To test scalability on a natural dataset, we conduct four experiments per explicit method on GQA-OOD with the explicit bias variables: a) head/tail (2 groups), b) answer class (1833 groups), c) global group (115 groups), and d) local group (133328 groups). Unlike Biased MNISTv1, we do not test with combinations of thes... | We use the GQA visual question answering dataset [33] to highlight the challenges of using bias mitigation methods on real-world tasks. It has multiple sources of biases including imbalances in answer distribution, visual concept co-occurrences, question word correlations, and question type/answer distribution. It is u... | Results for GQA-OOD are similar, with explicit methods failing to scale up to a large number of groups, while implicit methods showing some improvements over StdM. As shown in Table 2, when the number of groups is small, i.e., when using a head/tail binary indicator as the explicit bias, explicit methods remain compara... |
where $|a_{i}|$ is the number of instances for answer $a_{i}$ in the given group, $\mu(a)$ is the mean number of answers in the group and $\beta$... | A
In this paper, we provide a systematic review of appearance-based gaze estimation methods using deep learning algorithms.
As shown in Fig. 1, we discuss these methods from four perspectives: 1) deep feature extraction, 2) deep neural network architecture design, 3) personal calibration, and 4) device and platform. | In this survey, we present a comprehensive overview of deep learning-based gaze estimation methods. Unlike the conventional gaze estimation methods that require dedicated devices, the deep learning-based approaches regress the gaze from the eye appearance captured by web cameras. This makes it easy to implement the al... | In this paper, we provide a systematic review of appearance-based gaze estimation methods using deep learning algorithms.
As shown in Fig. 1, we discuss these methods from four perspectives: 1) deep feature extraction, 2) deep neural network architecture design, 3) personal calibration, and 4) device and platform. | From the deep feature extraction perspective, we describe the strategies for extracting features from eye images, face images and videos.
Under the deep neural network architecture design perspective, we first review methods based on the supervised strategy, containing the supervised, self-supervised, semi-supervised a... | Convolutional neural networks have been widely used in many computer vision tasks [88]. They also demonstrate superior performance in the field of gaze estimation.
In this section, we first review the existing gaze estimation methods from the learning strategy perspective, i.e., the supervised CNNs and the semi-/self-/u... | C |
The images of the used dataset are already cropped around the face, so we don’t need a face detection stage to localize the face from each image. However, we need to correct the rotation of the face so that we can remove the masked region efficiently. To do so, we detect 68 facial landmarks using Dlib-ml open-source l... | he2016deep has been successfully used in various pattern recognition tasks such as face and pedestrian detection mliki2020improved . It contains 50 layers trained on the ImageNet dataset. This network is a combination of Residual network integrations and Deep architecture parsing. Training with ResNet-50 is faster d... | The next step is to apply a cropping filter in order to extract only the non-masked region. To do so, we firstly normalize all face images into 240 × 240 pixels. Next, we partition a face into blocks. The principle of this technique is to divide the image into 100 fixed-size square blocks (24 × 24 pixels ... | Experimental results are carried out on Real-world Masked Face Recognition Dataset (RMFRD) and Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (...
The images of the used dataset are already cropped around the face, so we don’t need a face detection stage to localize the face from each image. However, we need to correct the rotation of the face so that we can remove the masked region efficiently. To do so, we detect 68 facial landmarks using Dlib-ml open-source l... | B |
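The block-partition step quoted above (normalize to 240 × 240, then divide into 100 fixed-size 24 × 24 blocks) can be sketched as follows. This is a minimal illustration, not code from the cited paper; `partition_into_blocks` is a hypothetical helper and a plain 2-D list stands in for the image.

```python
def partition_into_blocks(image, block=24):
    """Split a square image (2-D list of pixel values) into fixed-size
    square blocks in row-major order; a 240x240 image with block=24
    yields the 100 blocks of 24x24 pixels described in the excerpt."""
    n = len(image)
    assert n % block == 0, "image side must be a multiple of the block size"
    blocks = []
    for bi in range(0, n, block):
        for bj in range(0, n, block):
            blocks.append([row[bj:bj + block] for row in image[bi:bi + block]])
    return blocks

# Dummy 240x240 "image" of zeros, standing in for a normalized face image.
img = [[0] * 240 for _ in range(240)]
blocks = partition_into_blocks(img)
print(len(blocks), len(blocks[0]), len(blocks[0][0]))  # → 100 24 24
```

A real pipeline would then score or mask each block (e.g., to discard the masked lower-face region) before feature extraction.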
If $\cdot \vdash C :: \Delta$, then $C$ terminates, i.e., either $C$ is final or, inductively, $C'$ terminates for all reducts $C'$...
Moreover, some prior work, which is based on sequential functional languages, encodes recursion via various fixed point combinators that make both mixed inductive-coinductive programming [Bas18] and substructural typing difficult, the latter requiring the use of the ! modality [Wad12]. Thus, like $F^{\mathrm{cop}}_{\omega}$... | Sized types are a type-oriented formulation of size-change termination [LJBA01] for rewrite systems [TG03, BR09]. Sized (co)inductive types [BFG+04, Bla04, Abe08, AP16] gave way to sized mixed inductive-coinductive types [Abe12, AP16]. In parallel, linear size arithmetic for sized inductive types [CK01, Xi01, BR06] was... | Sized types are compositional: since termination checking is reduced to an instance of typechecking, we avoid the brittleness of syntactic termination checking. However, we find that ad hoc features for implementing size arithmetic in the prior work can be subsumed by more general arithmetic refinements [DP20b, XP99], ...
Our system is closely related to the sequential functional language of Lepigre and Raffalli [LR19], which utilizes circular typing derivations for a sized type system with mixed inductive-coinductive types, also avoiding continuity checking. In particular, their well-foundedness criterion on circular proofs seems to c... | D |
where $\bar{\mathbf{G}} = \mathbf{B}^{m}\mathbf{G}$. It is clear from Eq. (3) that the fingerprint $\mathbf{b}_{k}$ has b... | Judge. The judge is a trusted entity who is only responsible for arbitration in the case of illegal redistribution, as in existing traitor tracing systems [10, 11, 12, 13, 14, 3]. After receiving the owner’s request for arbitration, the judge makes a fair judgment based on the evidence provided by the owner. Although o... | The whole FairCMS-I scheme is summarized as follows.
First, suppose an owner rents the cloud’s resources for media sharing, the owner and the cloud execute Part 1 as shown in Fig. 2. Then, suppose the $k$-th user makes a request indicating that he/she wants to access one of the owner’s media content $\mathbf{m}$... | Upon the detection of a suspicious media content copy $\tilde{\mathbf{m}}^{k}$, the owner resorts to the judge for violation identification. To this end, the proofs that the owner needs to provide the judge includes the o... | Once a copyright dispute occurs between the owner and the user, they delegate a judge that is credible for both parties to make a fair arbitration. Due to the possible noise effect during data transmission, the received suspicious media content copy is assumed to be contaminated by an additive noise $\mathbf{n}$...
The feature embeddings described in Section 3.1 are taken as the initial feature embeddings of GraphFM, i.e., $\mathbf{e}^{(1)}_{i} = \mathbf{e}_{i}$... | In summary, when dealing with feature interactions, FM suffers intrinsic drawbacks. We thus propose a novel model Graph Factorization Machine (GraphFM), which takes advantage of GNN to overcome the problems of FM for feature interaction modeling.
By treating features as nodes and feature interactions as the edges betwe... | At each layer of GraphFM, we select the beneficial feature interactions and treat them as edges in a graph. Then we utilize a neighborhood/interaction aggregation operation to encode the interactions into feature representations.
By design, the highest order of feature interaction increases at each layer and is determi... |
GraphFM(-S): interaction selection is the first component in each layer of GraphFM, which selects only the beneficial feature interactions and treats them as edges. As a consequence, we can model only these beneficial interactions with the next interaction aggregation component. To check the necessity of this component... | Then we aggregate these selected feature interactions to update feature embeddings in the neighborhood aggregation component.
Within each $k$-th layer, we are able to select and model only the beneficial $k$-th order feature interactions and encode these factorized interactions into feature representa... | D
We also show improved convergence rates for several variants in various cases of interest and prove that the AFW [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and BPCG [Tsuji et al., 2022] algorithms coupled with the backtracking line search of Pedregosa et al. [2020] can achieve linear convergence rates over polytopes wh... | Complexity comparison: Number of iterations needed to reach a solution with $h(\mathbf{x})$ below $\varepsilon$ for Problem 1.1 for Frank-Wolfe-type algorithms in the literature. The asterisk on FW-LLOO highlights the fact that the procedure is different from the standard LMO procedur... | the second-order step size and the LLOO algorithm from Dvurechensky et al. [2022] (denoted by GSC-FW and LLOO in the figures) and the Frank-Wolfe and the Away-step Frank-Wolfe algorithm with the backtracking stepsize of Pedregosa et al. [2020],
denoted by B-FW and B-AFW respectively. |
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is...
Research reported in this paper was partially supported through the Research Campus Modal funded by the German Federal Ministry of Education and Research (fund numbers 05M14ZAM, 05M20ZBM) and the Deutsche Forschungsgemeinschaft (DFG) through the DFG Cluster of Excellence MATH+. We would like to thank the anonymous revi... | D
Here, we make the observation that by combining the prefixes of $P$ and $P'$ until the edge $a_{j}$, we obtain an augmenting path.
On a high level, our approach is to sh... | For the rest of the graph, [EKMS12] show that it is enough to store the length of the shortest alternating path that has reached each matched edge. This length is called label.
In the first challenge, we considered the possibility that a vertex $\gamma$ “blocks” the DFS exploration of $\alpha$ and dis... | Therefore, we have an augmenting path from $\gamma$ to $\alpha$, which will be detected in Algorithm 3.
This implies that the augmenting path $\alpha-\beta$ will be removed from the graph in Pass-Bundle $\tau$. | If the alternating path $P_{\gamma}$ starting from $\gamma$ was of length $i' > i$, then it could be that $\gamma$ did not find $\beta$ si... | Nodes $\alpha$, $\beta$, and $\gamma$ are free. The black single-segments are unmatched and black (full) double-segments are matched edges. The path $P'$ corresponding to a DFS branch of $\gamma$ is shown by th... | A
$\widetilde{\bm{d}}^{\,k}_{1:4} \leq \sigma'\, \rho(\widetilde{\bm{A}})^{k}\, \overline{\bm{v}}'_{1:4},$
We consider an asynchronous broadcast version of CPP (B-CPP). B-CPP further reduces the communicated data per iteration and is also provably linearly convergent over directed graphs for minimizing strongly convex and smooth objective functions. Numerical experiments demonstrate the advantages of B-CPP in saving commun... | In this section, we compare the numerical performance of CPP and B-CPP with the Push-Pull/$\mathcal{AB}$ method [24, 25].
In the experiments, we equip CPP and B-CPP with different compression operators and consider different graph topologies. | In this paper, we consider decentralized optimization over general directed networks and propose a novel Compressed Push-Pull method (CPP) that combines Push-Pull/$\mathcal{AB}$ with a general class of unbiased compression operators. CPP enjoys large flexibility in both the com... | In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP... | B
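The "general class of unbiased compression operators" mentioned in the CPP excerpts can be illustrated with the standard rand-k sparsifier, which keeps k of the n coordinates and rescales them by n/k so that the compression is unbiased in expectation. This is a textbook example of such an operator, not necessarily the one used in the paper.

```python
import random
from itertools import combinations

def rand_k_on_support(x, kept, k):
    """Zero out all coordinates of x except those in `kept`, rescaled by n/k."""
    n = len(x)
    out = [0.0] * n
    for i in kept:
        out[i] = x[i] * n / k
    return out

def rand_k(x, k, rng=random):
    """Unbiased rand-k compression: a uniformly random size-k support."""
    return rand_k_on_support(x, rng.sample(range(len(x)), k), k)

# Exact unbiasedness check: averaging over all equally likely supports
# recovers x, i.e. E[C(x)] = x.
x = [1.0, -2.0, 3.0, 0.5]
k = 2
supports = list(combinations(range(len(x)), k))
avg = [sum(rand_k_on_support(x, s, k)[i] for s in supports) / len(supports)
       for i in range(len(x))]
print(avg)  # → [1.0, -2.0, 3.0, 0.5]
```

The n/k rescaling is what makes the operator unbiased; without it, E[C(x)] = (k/n) x and gradient-tracking recursions built on the compressor would drift.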
One can note a branch of recent work devoted to solving non-smooth problems by reformulating them as saddle point problems [8, 9], as well as applying such approaches to image processing
[10, 11]. Recently, significant attention was devoted to saddle problems in machine learning. For example, Generative Adversarial Net... | One can note a branch of recent work devoted to solving non-smooth problems by reformulating them as saddle point problems [8, 9], as well as applying such approaches to image processing
[10, 11]. Recently, significant attention was devoted to saddle problems in machine learning. For example, Generative Adversarial Net... | To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms and derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile...
Furthermore, there are a lot of personalized federated learning problems that utilize a saddle point formulation, in particular Personalized Search Generative Adversarial Networks (PSGANs) [22]. As mentioned in the examples above, saddle point problems often arise as an auxiliary tool for the minimization problem. It turns out ...
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low... | C |
In Section 2 we provide background on a) correlated equilibrium (CE), an important generalization of NE, b) coarse correlated equilibrium (CCE) (Moulin & Vial, 1978), a similar solution concept, and c) PSRO, a powerful multi-agent training algorithm. In Section 3 we propose novel solution concepts called Maximum Gini ... | An important area of related work is $\alpha$-Rank (Omidshafiei et al., 2019) which also aims to provide a tractable alternative solution in normal form games. It gives similar solutions to NE in the two-player, constant-sum setting, however it is not directly related to NE or (C)CE. $\alpha$-Rank has...
This highlights the main drawback of MW(C)CE which does not select for unique solutions (for example, in constant-sum games all solutions have maximum welfare). One selection criterion for NEs is maximum entropy Nash equilibrium (MENE) (Balduzzi et al., 2018), however outside of the two-player constant-sum setting, th... | The set of (C)CEs forms a convex polytope, and therefore any strictly convex function could uniquely select amongst this set. The literature only provides one such example: MECE (Ortiz et al., 2007) which has a number of appealing properties, but was found to be slow to solve large games. There is a gap in the literatu... | There are two important solution concepts in the space of CEs. The first is Maximum Welfare Correlated Equilibrium (MWCE) which is defined as the CE that maximises the sum of all players’ payoffs. An MWCE can be obtained by solving a linear program, however the MWCE may not be unique and therefore does not fully solve ... | A
$\delta'(\epsilon) \coloneqq \underset{\epsilon' \in (0,\epsilon),\ \xi \in (0,\epsilon-\epsilon')}{\cdots}\left(\,\cdots\,\int_{\epsilon'-\xi}^{\infty}\delta_{2}(t)\,dt\right)$ | Since achieving posterior accuracy is relatively straightforward, guaranteeing Bayes stability is the main challenge in leveraging this theorem to achieve distribution accuracy with respect to adaptively chosen queries. The following lemma gives a useful and intuitive characterization of the quantity that the Bayes sta... | In order to complete the triangle inequality, we have to define the stability of the mechanism. Bayes stability captures the concept that the results returned by a mechanism and the queries selected by the adaptive adversary are such that the queries behave similarly on the true data distribution and on the posterior d... | Our Covariance Lemma (3.5) shows that there are two possible ways to avoid adaptivity-driven overfitting: by bounding the Bayes factor term, which induces a bound on $\left|q(D^{v}) - q(D)\right|$... | Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific $q$ and $K(\cdot, v)$ as discussed in Section 6. The second part of this Lemma implies that bounding the appropriate divergence is necessary and sufficient... | A
All $z$-antlers $(\hat{C},\hat{F})$ that are $z$-properly colored by $\chi$ prior to executing the algorithm are also $z$-properly colored by $\chi$ after termination of the algor...
To show the algorithm preserves properness of the coloring, we show that every individual recoloring preserves properness, that is, if an arbitrary $z$-antler is $z$-properly colored prior to the recoloring, it is also $z$-properly colored after the recoloring.
We show first that any $z$-properly colored antler prior to executing the algorithm remains $z$-properly colored after termination. Afterwards we argue that in Item 5, the pair $(\chi^{-1}_{V}(\dot{\mathsf{C}}), \chi^{-1}_{V}(\dot{\mathsf{F}}))$... | All $z$-antlers $(\hat{C},\hat{F})$ that are $z$-properly colored by $\chi$ prior to executing the algorithm are also $z$-properly colored by $\chi$ after termination of the algor... | We now show that a $z$-antler can be obtained from a suitable coloring $\chi$ of the graph. The algorithm we give updates the coloring $\chi$ and recolors any vertex or edge that is not part of a $z$-properly colored antler to color $\dot{\mathsf{R}}$ ... | A
Painterly image harmonization is more challenging because multiple levels of styles (i.e., color, simple texture, complex texture) [115] need to be transferred from background to foreground, while standard image harmonization only needs to transfer low-level style (i.e., illumination).
Painterly image harmonization is ... | The existing painterly image harmonization methods [104, 119, 10, 99, 166, 115, 114] can be roughly categorized into optimization-based methods and feed-forward methods.
Optimization-based methods optimize the input image to minimize the style loss and content loss, which is very time-consuming. |
Image harmonization is closely related to style transfer. Note that both artistic style transfer [37, 56, 118] and photorealistic style transfer [103, 82] belong to style transfer. Image harmonization is closer to photorealistic style transfer, which transfers the style of a reference photo to another input photo. The... | For example, Luan et al. [104] proposed to optimize the input image with two passes, in which the first pass aims at robust coarse harmonization and the second pass targets high-quality refinement.
Feed-forward methods send the input image through the model to output the harmonized result. For example, Peng et al. [... |
The above methods based on gradient domain smoothness can smooth the transition between foreground and background to some extent. However, background colors may seep through the foreground too much and distort the foreground color, which would bring significant loss to the foreground content. | A |
$\max_{a_{ij}} \sum_{i=0}^{m} \cdots \quad \text{s.t.}\ \sum_{j=0}^{n} a_{ij} = 1,\ i = 1,2,3,\ldots,m$
LPA algorithm is a reinforcement learning-based approach [6]. We first adopt SARSA [6] to learn the expected long-term revenue of each grid in each period. Based on these expected revenues, we dispatch taxis to passengers using the same optimization formulation as Eqn. (13), with the exception that we replace $A(i,j)$... | Problem Statement. To address the taxi dispatching task, we learn a real-time dispatching policy based on historical passenger requests. At every timestamp $\tau$, we use this policy to dispatch available taxis to current passengers, with the aim of maximizing the total revenue of all taxis in the long run. To...
Our experimental results demonstrate that LPA outperforms LLD in most cases. This can be attributed to the fact that LPA optimizes the expected long-term revenues at each dispatching round, while LLD only focuses on the immediate reward. As a result, LPA is better suited for maximizing the total revenue of the system ... | LLD algorithm is an optimization-based approach formulated by Eqn. (13), where $a_{ij} = 1$ if taxi $j$ is dispatched to passenger $i$ and 0 otherwise; here, $A(i,j)$ repre... | A
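The LLD rule described in this row (pick the assignment a_ij maximizing total immediate revenue, each passenger served by a distinct taxi) can be sketched with a brute-force assignment solver. `lld_dispatch` and the tiny revenue matrix are illustrative stand-ins, assuming nothing beyond the excerpt; a real system would use a proper linear-assignment (Hungarian) solver instead of enumeration.

```python
from itertools import permutations

def lld_dispatch(A):
    """Maximize the total immediate revenue sum_i A[i][j_i], assigning
    each passenger i a distinct taxi j_i (brute force over assignments;
    only viable for tiny instances)."""
    m, n = len(A), len(A[0])          # m passengers, n taxis
    best_value, best_assign = float("-inf"), None
    for perm in permutations(range(n), m):
        value = sum(A[i][perm[i]] for i in range(m))
        if value > best_value:
            best_value, best_assign = value, perm
    return best_value, best_assign

# Two passengers, three taxis; A[i][j] is the revenue of dispatching
# taxi j to passenger i (made-up numbers for illustration).
A = [[5, 1, 2],
     [2, 4, 3]]
value, assign = lld_dispatch(A)
print(value, assign)  # → 9 (0, 1)
```

Swapping the immediate revenue A(i, j) for a learned long-term value estimate, as the LPA variant does, leaves this assignment step unchanged.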
$(y \mid \mathbf{x}, \theta) \sim \mathcal{N}\big(\hat{y}^{\theta}(\mathbf{x}), \sigma^{2}\big).$ | Although ordinary neural networks have the benefit that even for a large number of features and weights they can be implemented very efficiently, their Bayesian incarnation suffers from a problem. The nonlinearities in the activation functions and the sheer number of parameters, although they are the features that make...
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat... | In Fig. 1, the coverage degree, average width and $R^{2}$-coefficient are shown. For each model, the data sets are sorted according to increasing $R^{2}$-coefficient (averaged over th... | Most of the data sets were obtained from the UCI repository Dua2019 . Specific references are given in Table 2. This table also shows the number of data points and (used) features and the skewness and (Pearson) kurtosis of the response variable. All data sets were standardized (both features and target variables) befor... | A
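Under the Gaussian predictive model quoted in this row, $(y \mid \mathbf{x}, \theta) \sim \mathcal{N}(\hat{y}^{\theta}(\mathbf{x}), \sigma^{2})$, prediction intervals and the per-point negative log-likelihood have closed forms. The helpers below are an illustrative sketch of those formulas, not the paper's code; the function names are hypothetical.

```python
import math

def gaussian_interval(y_hat, sigma, z=1.96):
    """Central ~95% prediction interval under y | x, theta ~ N(y_hat, sigma^2)."""
    return y_hat - z * sigma, y_hat + z * sigma

def gaussian_nll(y, y_hat, sigma):
    """Per-point negative log-likelihood under the same Gaussian model:
    0.5 * log(2*pi*sigma^2) + (y - y_hat)^2 / (2*sigma^2)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - y_hat) ** 2 / (2 * sigma ** 2)

lo, hi = gaussian_interval(0.0, 1.0)
print(lo, hi)  # → -1.96 1.96
print(round(gaussian_nll(0.0, 0.0, 1.0), 4))  # → 0.9189
```

The coverage degree reported in such figures is simply the fraction of test targets falling inside these intervals, and the average width is hi − lo averaged over test points.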
EMOPIA is a dataset of pop piano music collected recently by \textcite{emopia} from YouTube for research on emotion-related tasks (https://annahung31.github.io/EMOPIA/).
It has 1,087 clips (each around 30 seconds) segmented from 387 songs, covering Japanese anime, Korean & Western pop song covers, movie soundtracks and p... | There is little performance difference between REMI and CP in this task.
Fig. 7 further shows that the evaluated models can fairly easily distinguish between high arousal and low arousal pieces (i.e., “HAHV, HALV” versus “LALV, LAHV”), but they have a much harder time along the valence axis (e.g., “HAHV” versus “HALV” ... | The emotion of each clip has been labelled using the following 4-class taxonomy: HAHV (high arousal high valence); LAHV (low arousal high valence); HALV (high arousal low valence); and LALV (low arousal low valence). This taxonomy is derived from Russell’s valence-arousal model of emotion \parencite{russell}, where v...
Tab. 2 shows that the accuracy on our 6-class velocity classification task is not high, reaching 52.11% at best. This may be due to the fact that velocity is rather subjective, meaning that musicians can perform the same music piece fairly differently. Moreover, we note that the data is highly imbalanced, with the lat... | We use this dataset for the emotion classification task. As Tab. 1 shows, the average length of the pieces in the EMOPIA dataset is the shortest among the five, since they are actually clips manually selected by dedicated annotators \parencite{emopia} to ensure that each performance expresses a single emotion.
| C |
Observe that for a tree on $n$ vertices we can compute for every vertex $v$ and its neighbor $u$ functions $f(v,u)$ and $g(v,u)$ denoting the sizes of subsets of $C_{1}(T)$... | In every tree $T$ there exists a central vertex $v \in V(T)$ such that every connected component of $T - v$ has at most $\frac{|V(T)|}{2}$ vertices.
| Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As it was stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the... | The idea is to start from any vertex $w$, and then jump to its neighbor with the largest component size in $T - w$, until we hit a vertex with the desired property.
Note that for any vertex $v$ there can be at most one neighbor $u$ such that its connected component $T_{u}$... | The linear running time follows directly from the fact that we compute $c$ only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen... | B
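The jump procedure described in this row (start from any vertex, repeatedly move to the neighbour whose component in T − v is largest, stop once every component has at most n/2 vertices) can be sketched as follows. This is a minimal sketch with a naive per-step DFS for clarity, not the amortized bookkeeping the excerpt analyzes; `central_vertex` is a hypothetical name.

```python
from collections import defaultdict

def central_vertex(n, edges):
    """Jump procedure: from the current vertex v, move to the neighbour
    whose component in T - v is largest, until every component of T - v
    has at most n/2 vertices (such a central vertex always exists)."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    def component_size(root, banned):
        # Size of the component of T - banned containing root (iterative DFS).
        seen, stack, count = {banned, root}, [root], 0
        while stack:
            x = stack.pop()
            count += 1
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return count

    v = edges[0][0] if edges else 0
    while True:
        if not adj[v]:
            return v
        big, u = max((component_size(u, v), u) for u in adj[v])
        if big <= n / 2:
            return v
        v = u

# Path 0-1-2-3-4: vertex 2 is the unique central vertex.
print(central_vertex(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # → 2
```

Each jump shrinks the largest remaining component, which is why the total number of jumps can be bounded as in the quoted counting argument.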