| context (string, 250–4.88k chars) | A (string, 250–4.17k) | B (string, 250–4.73k) | C (string, 250–3.89k) | D (string, 250–4.12k) | label (4 classes) |
|---|---|---|---|---|---|
| $\frac{R_{n}^{m}(x)}{{R_{n}^{m}}^{\prime}(x)}=\frac{x}{m+2z\frac{F^{\prime}(a,b;c;z)}{F(a,b;c;z)}}$ | $\frac{F(a,b;c;z)}{F(a+1,b+1;c+1;z)}\equiv -bz\cdots\,\frac{\frac{(a+1)(c-b)z}{c(c+1)}}{\frac{(a+1-b)z}{c+1}+1-\cdots}\,\frac{\frac{(a+2)(c+1-b)z}{(c+1)(c+2)}}{\frac{(a+2-b)z}{c+2}+1-\cdots}$ ... | $F^{\prime}(a,b;c;z)=\frac{ab}{c}F(a+1,b+1;c+1;z)$ ... | $z(1-z)F^{\prime\prime}(a,b;c;z)+[c-(a+b+1)z]F^{\prime}(a,b;c;z)=abF(a,b;c;z)$ ... | $(c-a-1)F=(b-a-1)(1-z)F(a^{+})+(c-b)F(a^{+},b^{-});$ ... | B |
| There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well-chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and application... | Note that a small variation of these standard generators for SL(d, q) is used in Magma [14] as well as in algorithms to verify presentations of classical groups, see [12], where only the generator v is slightly different in the two scenarios when d ... | The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in... | One important task in this context is writing elements of classical groups as words in standard generators using SLPs. This is done in Magma [14] using the results of Elliot Costi [6] and in GAP using the results of this paper, see Section 6. Other rewriting algorithms also exist; for example, Cohen et al. [26] present a... | There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well-chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and application... | B |
| As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | The idea of using exponential decay to localize global problems was already considered by the interesting approach developed under the name of Localized Orthogonal Decomposition (LOD) [MR2831590, MR3591945, MR3246801, MR3552482], which is related to ideas of Variational Multiscale Methods [MR1660141, MR2300286]. In the... | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ... | It is essential for a well-performing method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if w has local support, so instead of solving the problems in the whole domain it would be reasonable to solve them locally using patches of elements. We note that the ide... | B |
| We think Alg-A is better in almost every aspect. This is because it is essentially simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity O(n) than the others: | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as "involved" by its own authors, as it contains complicated subroutines for handling many subcases. | Alg-A computes at most n candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most 5n triangles (proved in [8]), as does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly 4.66n candidate triangles.) | Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on P's vertices and (2) searching for the next candidate from a given one is much easier – the code-length ratio for this between Alg-A and Alg-CM is 1:7. | B |
| Single Tweet Classification Results. The experimental results are shown in Table 2. The best performance is achieved by the CNN+LSTM model with a good accuracy of 81.19%. The non-neural network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy and the other two non-neural models are eve... | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade off this by debunking at single-tweet le... | As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit afte... | CrowdWisdom: Similar to [18], the core idea is to leverage the public's common sense for rumor detection: if there are more people denying or doubting the truth of an event, this event is more likely to be a rumor. For this purpose, [18] use an extensive list of bipolar sentiments with a set of combinational rules. In... | For analyzing the employed features, we rank them by importance using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of new... | D |
| The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile continuing to optimize long after we have zero training ... | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | We should not rely on plateauing of the training loss or on the loss (logistic or exp or cross-entropy) evaluated on a validation dataset as measures to decide when to stop. Instead, we should look at the 0–1 error on the validation dataset. We might improve the validation and test errors even when the decrease ... | Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set, for which $\exists\mathbf{x}\in\mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}}<0$ ... | C |
| Text Features are derived from a tweet's text content. We consider 16 text features including lengthOftweet and smile (contains :->, :-), ;->, ;-) ...), sad, exclamation, I-you-heshe (contains first-, second-, and third-person pronouns). In addition, we use the Natural Language Toolkit ... | The performance of user features is similar to the Twitter features; they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours of the user feature group is UserTweetsPerDays and it is the best feature overall in the first 4 hours, but its rank decreases with ... | In this section, we compare the performance of our model with the human rumor-debunking websites snopes.com and urbanlegend.com. Snopes has their own Twitter account (https://twitter.com/snopes). They regularly post tweets via this account about rumors which they collected and verified. We consider the creation time o... | Twitter Features refer to basic Twitter features, such as hashtags, mentions, retweets. In addition, we derive three more URL-based features. The first is the WOT trustworthiness-based score, which is crawled from the APIs of WOT.com (https://www.mywot.com/en/api). The second is domain categories, which we have collected fr... | For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even... | C |
| $\mathrm{score}(\bar{a})=\sum_{m\in M}P(\mathcal{C}_{k}\mid e,t)\,P(\mathcal{T}\ldots)\,\mathsf{f^{*}}_{m}(\bar{a})$ ... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | to add additional features from $\mathcal{M}^{1}$. The feature vector of $\mathcal{M}_{LR}^{2}$ consists of ... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | We propose two sets of features, namely, (1) salience features (taking into account the general importance of candidate aspects) that are mainly mined from Wikipedia and (2) short-term interest features (capturing a trend or timely change) that are mined from the query logs. In addition, we also leverage click-flow relatednes... | D |
| $R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\}$ ... | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many. TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018]. | the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ... | one uses $p(\theta_{t}\mid\mathcal{H}_{1:t})$ to compute the probability of an arm being optimal, i.e., $\pi(A\mid x_{t+1},\mathcal{H}_{1:t})=\mathbb{P}(A=a^{*}_{t+1}\mid x_{t+1},\theta_{t},...$ | B |
| In order to have a broad overview of different patients' patterns over the one-month period, we first show the figures illustrating measurements aggregated by day of the week. For consistency, we only consider the data recorded from 01/03/17 to 31/03/17, where the observations are most stable. | The insulin intakes tend to be higher in the evening, when basal insulin is used by most of the patients. The only difference occurs for patients 10 and 12, whose intakes are earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning. | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i... | A |
| To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met... | We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In case a reimplementation was not possible, we faithfully estimated a lowe... | Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone... | Table 1: Quantitative results of our model for the MIT300 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone)... | Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity and the superscript † represents pre-trai... | D |
| We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21,... | In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into grap... | Pathwidth and cutwidth are classical graph parameters that play an important role for graph algorithms, independent from our application for computing the locality number. Therefore, it is the main purpose of this section to translate the reduction from MinCutwidth to MinPathwidth that takes MinLoc as an intermediate s... | One of the main results of this section is a reduction from the problem of computing the locality number of a word α to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr... | We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21,... | B |
| Wolterink et al. [149] trained a ten-layer CNN with increasing levels of dilation for segmenting the myocardium and blood pool in axial, sagittal and coronal image slices. They also employ deep supervision [165] to alleviate the vanishing gradients problem and improve the training efficiency of their network using a smal... | Experiments performed with and without dilations on this architecture indicated the usefulness of this configuration. In their article, Li et al. [150] start with a 3D FCN for voxel-wise labeling and then introduce dilated convolutional layers into the baseline model to expand its receptive field. | They train an FCN with a concatenation layer that allows high-level perception to guide the work in lower levels, and evaluate their model on the DRIVE and STARE databases, achieving results comparable with other methods that use real labeling. In [173] the authors trained an ensemble of 12 three-layer CNNs on the DRIVE d... | In their article, Tran et al. [142] trained a four-layer FCN model for LV/RV segmentation on SUN09 and STA11. They compared previous state-of-the-art methods along with two initializations of their model: a fine-tuned version of their model using STA11 and a Xavier-initialized model, with the former performing best in almost... | In their article, Hong et al. [201] trained a DBN using image patches for the detection, segmentation and severity classification of the Abdominal Aortic Aneurysm region in CT images. Liu et al. [202] used an FCN with twelve layers for left atrium segmentation in 3D CT volumes and then refined the segmentation results of the ... | A |
| We presented SimPLe, a model-based reinforcement learning approach that operates directly on raw pixel observations and learns effective policies to play games in the Atari Learning Environment. Our experiments demonstrate that SimPLe learns to play many of the games with just 100K interactions with the envir... | Given the stochasticity of the proposed model, SimPLe can be used with truly stochastic environments. To demonstrate this, we ran an experiment where the full pipeline (both the world model and the policy) was trained in the presence of sticky actions, as recommended in (Machado et al., 2018, Section 5). Our world mod... | Figure 2: Architecture of the proposed stochastic model with discrete latent. The input to the model is four stacked frames (as well as the action selected by the agent) while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-... | In this paper our focus was to demonstrate the capability and generality of SimPLe only across a suite of Atari games; however, we believe similar methods can be applied to other environments and tasks, which is one of our main directions for future work. As a long-term challenge, we believe that model-based reinforcem... | Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a... | D |
| This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as electroencephalography (EEG), that are used for ... | One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz). Truong et al. [9] used Short... | For the spectrogram module, which is used for visualizing the change of the frequency of a non-stationary signal over time [18], we used a Tukey window with a shape parameter of 0.25, a segment length of 8 samples, an overlap between segments of 4 samples and a fast Fourier transform of 64 sampl... | Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification. Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke. | This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as electroencephalography (EEG), that are used for ... | A |
| This section describes the primary locomotion modes, rolling and walking, of our hybrid track-legged robot named Cricket, shown in Fig. 2. It also introduces two proposed gaits designed specifically for step negotiation in quadrupedal wheel/track-legged robots. | In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot's rolling and wal... | Figure 2: The Cricket robot (left) and its leg joints layout (right). The Cricket robot [20] is a hybrid locomotion system that utilizes four revolute joints on each leg. The outermost leg segment is equipped with a drivable track that encircles it, enabling the robot to move like traditional skid-steer tank robots. | The Cricket robot, as referenced in [20], forms the basis of this study, being a fully autonomous track-legged quadruped robot. Its design specificity lies in embodying fully autonomous behaviors, and its locomotion system showcases a unique combination of four rotational joints in each leg, which can be seen in Fig. 3... | Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... | C |
| In other words, the algorithm designer can hedge against untrusted advice by a small sacrifice in the trusted performance. Thus we can interpret r as the "risk" of trusting the advice: the smaller the r, the bigger the risk. Likewise, for the list update problem, our (r, f(r))... | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... | All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat... | We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem. Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size. | As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation. Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online alg... | B |
| In that context, our proposal is a potential tool with which systems could be developed in the future for large-scale passive monitoring of social media to help detect early traces of depression by analyzing users' linguistic patterns, for instance, filtering users and presenting possible candidates, along with rich... | The dataset used in this task, which was initially introduced and described in [Losada & Crestani, 2016], is a collection of writings (submissions) posted by users; here users will also be referred to as "subjects". There are two categories of subjects in the dataset, depressed and control (non-depressed). | Although the use of MDP is very appealing from a theoretical point of view, and we will consider it for future work, the model they proposed would not be suitable for risk tasks. The use of SVMs along with Φ(s) implies that the model is a black box, not only hiding the reasons for classif... | On the other hand, in the machine learning community, the importance of having publicly available datasets to foster research on a particular topic, in this case predicting depression based on language use, is well known. That was the reason why the main goal in [Losada & Crestani, 2016] was to provide, to the best ... | The dataset used in this task had the advantage of being publicly available and played an important role in determining how the use of language is related to the EDD problem. However, it exhibits some limitations from a methodological/clinical point of view. Beyond the potential "noise" introduced by the method to ass... | D |
| Note that we impose a constraint on the momentum coefficient β during the theoretical proof. But in practice, even when the constraint is relaxed, e.g., β = 0.9, GMC still converges well. More details about the convergence performance of GMC are provided in Section 5. | However, the top-s compressor requires extra computation overhead to find the largest components and extra communication overhead to communicate the indices of the components. Some works (Vogels et al., 2019; Xie et al., 2020; Xu and Huang, 2022) consider Random Blockwise Gradient Sparsification (RBGS) compr... | Each worker computes stochastic gradients locally and communicates with the server or other workers to obtain the aggregated stochastic gradients for updating the model parameter. Recently, more and more large-scale deep learning models, such as large language models (Devlin et al., 2019; Brown et al., 2020; Touvron et... | To further verify the superiority of global momentum, we also evaluate DEF-A and GMC+ when using the RBGS compressor. In RBGS, we randomly select a block that contains s components using the same random seed among the workers, where $\frac{s}{d}=\frac{1}{1024}$... | with Error Reset (CSER) that combines partial synchronization and error reset techniques. Due to the extra communication and computation overhead of the top-s compressor, some works (Vogels et al., 2019; Xie et al., 2020; Xu and Huang, 2022) also consider a more aggressive sparsification compressor, called R... | A |
| Previous work by Blier et al. [31] demonstrated the ability of DNNs to losslessly compress the input data and the weights, but without considering the number of non-zero activations. In this work we relax the lossless requirement and also consider neural networks purely as function approximators instead of probabilist... | We then defined SANs, which have minimal structure and, with the use of sparse activation functions, learn to compress data without losing important information. Using Physionet datasets and MNIST we demonstrated that SANs are able to create high-quality representations with interpretable kernels. | In Section II we define the φ metric; then in Section III we define the five tested activation functions along with the architecture and training procedure of SANs; in Section IV we experiment with SANs on the Physionet [32], UCI-epilepsy [33], MNIST [34] and FMNIST [35] databases and provide visualization... | SANs combined with the φ metric compress the description of the data in the way a minimum description length framework would, by encoding them into $\bm{w}^{(i)}$ and $\bm{\alpha}^{(i)}$... | During supervised learning the weights of the kernels are frozen and a one-layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs. The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs, and categorical cross-entropy as... | B |
In the large-scale UAV ad-hoc networks, the number of UAVs is another feature that should be investigated. Since the demanded channel capacity should not exceed the channel capacity we provide, we limit the number of UAVs to the tolerance range in which each UAV’s channel selection can be satisfied. In t...
where $A$, $B$ and $C$ are balance indices that balance the three utilities on the basis of the post-disaster scenario. The ultimate goal of enlarging the utility of the network is to maximize the summation of the utility function (9) over all UAVs, and we define the global utility function as the goal f...
Fig. 12 shows how the number of UAVs affects the computational complexity of SPBLLA. Since the total number of UAVs varies, the goal functions differ. The goal functions’ values in the optimum states increase with the number of UAVs. Since goal functions are summations of utility functions, ... | Fig. 12 presents the sketch diagram of a UAV’s utility as its power is altered. The altitudes of UAVs are fixed. When other UAVs’ power profiles are altering, the interference increases and the curve moves down. The high interference will reduce the utility of the UAV. Fig. 12 also shows that utility decreases and increase... | In the large-scale UAV ad-hoc networks, the number of UAVs is another feature that should be investigated. Since the demanding channel’s capacity should not be more than the channel’s size we provide, we limit the number of UAVs in the tolerance range which satisfies that each UAV’s channel selection is contented. In t... | B
$\mathbf{J}\times\mathbf{B}=-\frac{1}{\mu_{0}r^{2}}\left(\Delta^{*}\psi\,\nabla\psi+f\,\nabla f\right)+\frac{\mathbf{B}\cdot\nabla f}{\mu_{0}r}\widehat{\boldsymbol{\phi}}$ | $r^{2}\nabla\cdot\left(-\frac{f}{r^{2}}\mathbf{v}+\omega\mathbf{B}+\frac{\eta}{r^{2}}\nabla f\right)$ | $\dot{f}(\mathbf{r},t)=r^{2}\nabla\cdot\left(-\frac{f}{r^{2}}\mathbf{v}+\omega\mathbf{B}+\frac{\eta}{r^{2}}\nabla f\right)+\dot{f}_{form}(z,\,t)$ | $\dot{\Phi}=\frac{1}{2\pi}\int\nabla\cdot\left(-\frac{f}{r^{2}}\mathbf{v}+\omega\mathbf{B}+\frac{\eta}{r^{2}}\nabla f\right)\cdot d\boldsymbol{\Gamma}$ | $\ldots\nabla f+\mathbf{q}_{i}+\mathbf{q}_{e}+\underline{\boldsymbol{\pi}}\cdot\mathbf{v}\Bigr)+\frac{f^{2}}{\mu_{0}r^{2}}\mathbf{v}-\ldots$ | A
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$... | Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it.
Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly | Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality)
by $h_{A}$... | $f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\a&\text{if }u\neq\texttt{null},v\neq\texttt{null}\text{ and }u\neq v\\b&\text{if }u=v=\texttt{null}\\0&\text{otherwise.}\end{cases}$ | When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$... | A
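The case analysis in this row translates directly into a small function; null is modeled as Python's None, and the abstract values $a$ and $b$ become parameters (the numeric defaults are our illustrative assumption):

```python
def comparability(u, v, a=0.5, b=0.5):
    """Comparability function f_A = f_B with explicit null handling.

    Returns 1 for equal non-null values, a for distinct non-null values,
    b for two nulls, and 0 otherwise (exactly one null).
    """
    if u is not None and v is not None:
        return 1 if u == v else a
    if u is None and v is None:
        return b
    return 0

assert comparability("x", "x") == 1
assert comparability("x", "y", a=0.3) == 0.3
assert comparability(None, None, b=0.7) == 0.7
assert comparability("x", None) == 0
```

Choosing $b \neq 1$ is precisely the relaxation of reflexivity on null discussed above: two missing values are not automatically declared equal.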
$\theta_{i}$ and $\theta_{i}^{-}$ are the parameters of the network and the target network at iteration $i$, respectively. The target netw... | Figure 5 demonstrates that using Dropout methods in DQN reduces the overestimation relative to the optimal policy. Although the Gridworld environment does not suffer from the kind of overestimation that can distort the overall cumulative rewards, reducing overestimation leads to more accurate predictions.
|
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes b... | This phenomenon introduces a positive bias that may lead to asymptotically sub-optimal policies, distorting the cumulative rewards. The majority of analytical and empirical studies suggest that overestimation typically stems from the max operator used in the Q-learning value function. Additionally, the noise from appro...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments. This is a fundamentally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein... | C
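The role of $\theta_i$ and $\theta_i^{-}$ described in this row can be made concrete with the standard periodic target-network update; a minimal numpy sketch (the update period, step size, and parameter vector are illustrative stand-ins, not taken from the text):

```python
import numpy as np

def dqn_target(reward, next_q_target, gamma=0.99, done=False):
    """One-step TD target y = r + gamma * max_a' Q(s', a'; theta^-)."""
    return reward if done else reward + gamma * float(np.max(next_q_target))

# theta: online parameters (updated every step); theta_minus: frozen copy
# that is synchronized only every `sync_every` steps.
theta = np.zeros(4)
theta_minus = theta.copy()
sync_every = 100
for step in range(1, 301):
    theta += 0.01                     # stand-in for a gradient step
    if step % sync_every == 0:
        theta_minus = theta.copy()    # periodic hard update of theta^-
```

Freezing $\theta^{-}$ between synchronizations is what decouples the bootstrap target from the rapidly changing online network and damps the TAE-style variance discussed above.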
In medical image segmentation works, researchers have converged toward using classical cross-entropy loss functions along with a second distance- or overlap-based function. Incorporating domain/prior knowledge (such as coding the location of different organs explicitly in a deep model) is more sensible in the medical d...
Exploring reinforcement learning approaches similar to Song et al. (2018) and Wang et al. (2018c) for semantic (medical) image segmentation to mimic the way humans delineate objects of interest. Deep CNNs are successful in extracting features of different classes of objects, but they lose the local spatial information... | Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr... |
Going beyond pixel intensity-based scene understanding by incorporating prior knowledge, which has been an active area of research for the past several decades (Nosrati and Hamarneh, 2016; Xie et al., 2020). Encoding prior knowledge in medical image analysis models is generally more feasible as compared to natural im...
For image segmentation, sequenced models can be used to segment temporal data such as videos. These models have also been applied to 3D medical datasets; however, it remains unclear whether processing volumetric data with 3D convolutions is preferable to processing the volume slice by slice with 2D sequenced models. Ideally, seeing ...
Fig. 6 depicts in blue the variation of spectral distance between $\mathbf{L}$ and $\bar{\mathbf{L}}$, as we increase the threshold $\epsilon$ used to compute $\bar{\mathbf{A}}$. | Figure 6: In blue, the variation of spectral distance between the Laplacian $\mathbf{L}$ and the Laplacian $\bar{\mathbf{L}}$, associated with the adjacency matrix $\mathbf{A}$ sparsified with threshold $\epsilon$. In red, the number of edges that r... | The red line indicates the number of edges that remain in $\bar{\mathbf{A}}$ after sparsification.
It is possible to see that for small increments of $\epsilon$ the spectral distance increases linearly, while the number of edges in the graph drops exponentially. | Figure 13: In blue, the variation of spectral distance between the Laplacian $\mathbf{L}$ associated with $\mathbf{A}$ and the Laplacian $\bar{\mathbf{L}}$ associated with the adjacency matrix $\bar{\mathbf{A}}$ sparsified with a... | Figure 13: In blue, the variation of spectral distance between the Laplacian $\mathbf{L}$ associated with $\mathbf{A}$ and the Laplacian $\bar{\mathbf{L}}$ associated with the adjacency matrix $\bar{\mathbf{A}}$ sparsified with a... | B
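The quantities plotted in these captions can be reproduced in a few lines: threshold the adjacency matrix at $\epsilon$ and compare Laplacian spectra. A sketch under the assumption that spectral distance means the $\ell_2$ distance between sorted Laplacian eigenvalues, which is one common choice; the papers' exact definition may differ:

```python
import numpy as np

def laplacian(A):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(A.sum(axis=1)) - A

def sparsify(A, eps):
    """A_bar: zero out edges whose weight is below the threshold eps."""
    A_bar = np.where(A >= eps, A, 0.0)
    np.fill_diagonal(A_bar, 0.0)
    return A_bar

def spectral_distance(A, A_bar):
    ev = np.sort(np.linalg.eigvalsh(laplacian(A)))
    ev_bar = np.sort(np.linalg.eigvalsh(laplacian(A_bar)))
    return float(np.linalg.norm(ev - ev_bar))

rng = np.random.default_rng(0)
W = np.triu(rng.random((20, 20)), 1)
A = W + W.T                                   # symmetric weighted graph
A_bar = sparsify(A, eps=0.5)
dist = spectral_distance(A, A_bar)
edges_left = int(np.count_nonzero(A_bar)) // 2
```

Sweeping `eps` over a grid and recording `(dist, edges_left)` reproduces the blue and red curves of the figure for any given graph.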
Sparse connectivity maintains the tree structures and has fewer weights to train. In practice, sparse weights require a special differentiable implementation, which can drastically decrease performance, especially when training on a GPU. Full connectivity optimizes all parameters of the fully connected network.
Massice... | For training, we generate input-target pairs $(x,y)$ as described in the last section.
These training examples are fed into the training process to teach the network to predict the same results as the random forest. To avoid overfitting, the data is generated on-the-fly so that each traini... | The number of parameters of the networks becomes enormous as the number of nodes grows exponentially with the increasing depth of the decision trees.
Additionally, many weights are set to zero so that an inefficient representation is created. Due to both reasons, the mappings do not scale and are only applicable to sim... | In this work, we present an imitation learning approach to generate neural networks from random forests, which results in very efficient models.
We introduce a method for generating training data from a random forest that creates any amount of input-target pairs. With this data, a neural network is trained to imitate t... | These techniques, however, are only applicable to trees of limited depth. As the number of nodes grows exponentially with the increasing depth of the trees, inefficient representations are created, causing extremely high memory consumption.
In this work, we address this issue by proposing an imitation learning-based me... | D |
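The imitation pipeline sketched in this row, draw inputs on-the-fly, label them with the random forest, and fit a network on the resulting pairs, can be illustrated with scikit-learn; the dataset, sizes, generator, and student architecture are our illustrative assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# 1) Teacher: a random forest trained on (stand-in) real data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# 2) Imitation data: inputs drawn on-the-fly, targets queried from the forest.
rng = np.random.default_rng(0)
X_imit = rng.normal(size=(5000, 10))
y_imit = forest.predict(X_imit)

# 3) Student: a small dense network fit to mimic the forest's predictions.
student = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200,
                        random_state=0).fit(X_imit, y_imit)
agreement = float((student.predict(X_imit) == y_imit).mean())
```

Because the teacher can label arbitrarily many fresh samples, the student never sees the same batch twice, which is the on-the-fly generation argument made above for avoiding overfitting.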
Theoretically, we establish the sample efficiency of OPPO in an episodic setting of Markov decision processes (MDPs) with full-information feedback, where the transition dynamics are linear in features (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020). In particular, we allow the trans... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... | Moreover, we prove that, even when the reward functions are adversarially chosen across the episodes, OPPO attains the same regret in terms of competing with the globally optimal policy in hindsight (Cesa-Bianchi and Lugosi, 2006; Bubeck and Cesa-Bianchi, 2012). In comparison, existing algorithms based on value iterati... |
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... | B |
However, we are currently witnessing a transition of machine learning moving into “the wild”, where most prominent examples are autonomous navigation for personal transport and delivery services, and the Internet of Things (IoT).
Evidently, this trend opens several real-world challenges for machine learning engineers. | We furthermore point out that hardware properties and the corresponding computational efficiency form a large fraction of resource efficiency.
This highlights the need to consider particular hardware targets when searching for resource-efficient machine learning models. | Machine learning is a key technology in the 21st century and the main contributing factor for many recent performance boosts in computer vision, natural language processing, speech recognition and signal processing.
Today, the main application domain and comfort zone of machine learning applications is the “virtual wor... | However, we are currently witnessing a transition of machine learning moving into “the wild”, where most prominent examples are autonomous navigation for personal transport and delivery services, and the Internet of Things (IoT).
Evidently, this trend opens several real-world challenges for machine learning engineers. | However, in real-world applications the computing infrastructure during the operation phase is typically limited, which effectively rules out most of the current resource-hungry machine learning approaches.
There are several key challenges—illustrated in Figure 1—which have to be jointly considered to facilitate machin... | D |
In Section 9, we give some applications of our ideas to the filling radius of Riemannian manifolds and also study consequences related to the characterization of spheres by their persistence barcodes and some generalizations and novel stability properties of the filling radius. |
In Section 9, we give some applications of our ideas to the filling radius of Riemannian manifolds and also study consequences related to the characterization of spheres by their persistence barcodes and some generalizations and novel stability properties of the filling radius. | Of central interest in topological data analysis has been the question of providing a complete characterization of the Vietoris-Rips persistence barcodes of spheres of different dimensions. Despite the existence of a complete answer to the question for the case of $\mathbb{S}^{1}$... | We thank Prof. Henry Adams and Dr. Johnathan Bush for very useful feedback about a previous version of this article. We also thank Prof. Mikhail Katz and Prof. Michael Lesnick for explaining to us some aspects of their work. We thank Dr. Qingsong Wang for bringing to our attention the paper [76] which was critical for ...
In this section, we recall the notions of spread and filling radius, as well as their relationship. In particular, we prove a number of statements about the filling radius of a closed connected manifold. Moreover, we consider a generalization of the filling radius and also define a strong notion of filling radius whic... | C |
Anna loads the data into t-viSNE and starts the hyper-parameter exploration with a grid search. After the execution, she sees several projections that accurately separate the two classes. As she does not have any special preference, she selects the top-left projection, because the projections are sorted from best to wo... | Anna uses the Dimension Correlation in order to determine the role of the data set’s dimensions in the outcome of the projection. She interactively draws a polyline with her mouse following the pattern from the benign cases to the malignant ones, as shown in Figure 6(c). By looking at the Dimension Correlation view (se... |
Figure 6: Usage scenario based on the Breast Cancer Wisconsin data set. The Overview (a) and the Shepard Heatmap (b) indicate that the overall accuracy is good. The high density of benign cases (c) seems to indicate that their high-dimensional profile is clearer and less diverse than malignant cases, which are more sp... |
When she looks at the main view again, one thing catches her eye: there is quite a difference in density between the two large clusters of points (as shown by the points’ colors in Figure 6(c)). The cluster to the left (mostly malignant cases) has low density in general, as opposed to the cluster to the right (mostly ... | Anna loads the data into t-viSNE and starts the hyper-parameter exploration with a grid search. After the execution, she sees several projections that accurately separate the two classes. As she does not have any special preference, she selects the top-left projection, because the projections are sorted from best to wo... | C |
Recently, [77] offered a review of meta-heuristics from the 1970s until 2015, i.e., from the development of neural networks to novel algorithms like Cuckoo Search. Specifically, a broad view of new proposals is given, but without proposing any categorization. The most recent survey to date is that in [78], in which nature-ins...
The prior related work reviewed above indicates that the community widely acknowledges (with more emphasis in recent times) the need for properly organizing the plethora of bio- and nature-inspired algorithms in a coherent taxonomy. However, the majority of them are only focused on the natural inspiration of the algor... |
Considering the classifications obtained in our study, we have critically examined the reviewed literature classification in the different taxonomies proposed in this work. The goal is to analyze if there is a relationship between the algorithms classified in the same category in one taxonomy and their classification ... | We have reviewed 518 nature- and bio-inspired algorithms and grouped them into two taxonomies. The first taxonomy has considered the source of inspiration, while the second has discriminated algorithms based on their behavior in generating new candidate solutions. We have provided clear descriptions, examples, and an e... | B |
After the embedding is obtained, the complexity to get clustering assignments is $O(n^{2}c)$ (using spectral clustering) or $O(ndc)$ (using $k$-means).
| Classical clustering models work poorly on large scale datasets. Instead, DEC and SpectralNet work better on the large scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph type datasets, they fail on the general datasets, which is probably caused by the fact that the graph... | However, the existing methods are limited to graph type data while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods.
In this paper, we propo... | As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method.
Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which $k$-means cannot handle. Therefore,... | Three deep clustering methods for general data, DEC [8], DFKM [9], and SpectralNet [7], also serve as important baselines. Besides, four GAE-based methods are used, including GAE [20], MGAE [21], GALA [32], and SDCN [31]. All codes are downloaded from the authors’ homepages.
| D |
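The $O(ndc)$ term quoted in this row is just the cost of the $k$-means assignment step, distances from $n$ points in $d$ dimensions to $c$ centroids; a minimal numpy sketch (the synthetic data generation is our illustrative assumption):

```python
import numpy as np

def kmeans_assign(X, centroids):
    """k-means assignment step: O(n * d * c) distance evaluations for
    n points in d dimensions against c centroids."""
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
n, d, c = 300, 8, 3
centroids = 5.0 * rng.normal(size=(c, d))       # well-separated centers
true_labels = rng.integers(0, c, size=n)
X = centroids[true_labels] + rng.normal(scale=0.1, size=(n, d))
labels = kmeans_assign(X, centroids)
```

The $O(n^{2}c)$ figure for spectral clustering comes instead from operating on the $n \times n$ affinity/Laplacian structure, which is why $k$-means on the embedding is preferred when $d \ll n$.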
IPID technique. Load balancing can introduce a challenge in identifying whether a given network enforces ingress filtering. As a result of load balancing, our packets will be split between multiple instances of the server, resulting in low IPID counter values. There are different approaches for distributing the l...
We define the result of SMap evaluation successful (i.e., true positive) if at least one of the three tests outputs that the tested network does not filter spoofed packets: either the IPID value on the server in the tested network was incremented as expected (IPID test) or we receive a query at our domain (DNS test) o... |
Identifying DNS resolvers. The main challenge here is to locate the DNS resolvers within a domain/network and to trigger a DNS request to our Name servers. We use the Email service in the target networks (retrieved via an MX-type request for the target domain) to find the DNS resolvers. We send an email to the target domain’s...
Inferring spoofing. Given a DNS resolver at IP 1.2.3.7, we send a DNS query to 1.2.3.7 port 53 asking for a record in domain under our control. The query is sent from a spoofed source IP address belonging to the tested network. We monitor for DNS requests arriving at our Name server. If a query for the requested recor... |
DNS technique. Firewalls, blocking incoming packets on port 53, would as a result generate a similar effect as ingress filtering on our servers: we would not receive any DNS requests to our domain. However, such a setting does not indicate that the tested network actually performs ingress filtering. | D |
Machine learning applications frequently deal with data-generating processes that change over time. Applications in such nonstationary environments include power use forecasting, recommendation systems, and environmental sensors [9]. Semisupervised learning, which has received a lot of attention in the sensor communit... |
One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regio... | While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape... | Biology frequently deals with drift [16]. For instance olfactory systems are constantly adapting, predominantly through feedback mechanisms. This section details some such models from computer science and neuroscience [17]. One example is the KIII model, a dynamic network resembling the olfactory bulb and feedforward a... |
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal... | C |
We use the same definition for $A^{(1)}[i,B]$ for all $B\in\mathcal{B}_{i}^{(1)}$... | $A^{(1)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | $A^{(2)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(2)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | A
The free product of two semigroups $R=\langle P\mid\mathcal{R}\rangle$ and $S=\langle Q\mid\mathcal{S}\rangle$
(with $P\cap Q=\emptyset$) is the semigroup with pres... | Note that there is a difference between the free product in the category of semigroups and the free product in the category of monoids or groups.
In particular, in the semigroup free product (which we are exclusively concerned with in this paper) there is no amalgamation over the identity element of two monoids. Thus, ... |
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. While t... | While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there ... | While the question which free groups and semigroups can be generated using automata is settled, there is a related natural question, which is still open: is the free product of two automaton/self-similar (semi)groups again an automaton/self-similar (semi)group? The free product of two groups or semigroups X=⟨P∣ℛ⟩𝑋inne... | A |
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea... |
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea... |
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende... |
Without additional regularization, existing VQA models such as the baseline model used in this work: UpDn Anderson et al. (2018), tend to rely on the linguistic priors $P(a|\mathcal{Q})$ to answer questions. Such models fail on VQA-CP, because the priors in ... | Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization Grand and Belinkov (2019); Ramakrishnan et al. (2018) or to re-scale the loss based on the difficulty o... | D
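The loss re-scaling idea mentioned in this row, down-weighting examples whose answer the question-only branch already predicts confidently, can be sketched independently of any concrete VQA model (the weight $1 - p_q(a)$ is one common variant, not necessarily the exact formulation of the cited works):

```python
import numpy as np

def rescale_loss(vqa_loss, q_only_probs, answers):
    """Down-weight examples the question-only branch finds easy.

    q_only_probs: (n, k) softmax output of the question-only model.
    The factor 1 - p_q(answer) shrinks the loss on examples whose ground
    truth answer is predictable from the question alone.
    """
    p_correct = q_only_probs[np.arange(len(answers)), answers]
    return vqa_loss * (1.0 - p_correct)

losses = np.array([1.0, 1.0])
probs = np.array([[0.9, 0.1],    # answer 0 is obvious from the question
                  [0.5, 0.5]])   # the question carries no prior
answers = np.array([0, 1])
weighted = rescale_loss(losses, probs, answers)
```

The example with a strong prior contributes far less gradient than the neutral one, which is the intended pressure against exploiting $P(a|\mathcal{Q})$.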
We downloaded the URL dump of the May 2019 archive (https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2019-22/cc-index.paths.gz). Common Crawl reports that the archive contains 2.65 billion web pages or 220 TB of uncompressed content which were crawled between the 19th and 27th of May, 2019. We applied a selection cr... | We selected those URLs which had the word “privacy” or the words “data” and “protection” from the Common Crawl URL archive. We were able to extract 3.9 million URLs that fit this selection criterion. Informal experiments suggested that this selection of keywords was optimal for retrieving the most privacy policies with...
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus PrivaSeer Corpus on privacy policies ... |
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ... |
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da... | A |
We answered that the per-class performance is also a very important component, and exploratory visualization can assist in the selection process, as seen in Figure 2(b and c.1).
The expert acknowledged the value of using visualization in that situation, compared to not using it.
E3 also mentioned that supporting feature generation in the feature selection phase might be helpful. Finally, E1 suggested that the circular barcharts could onl... | Interpretability and explainability is another challenge (mentioned by E3) in complicated ensemble methods, which is not necessarily always a problem depending on the data and the tasks. However, the utilization of user-selected weights for multiple validation metrics is one way towards interpreting and trusting the re... | Workflow. E1, E2, and E3 agreed that the workflow of StackGenVis made sense.
They all suggested that data wrangling could happen before the algorithms’ exploration, but also that it is usual to first train a few algorithms and then, based on their predictions, wrangle the data. |
Figure 4: Our feature selection view that provides three different feature selection techniques. The y-axis of the table heatmap depicts the data set’s features, and the x-axis depicts the selected models in the current stored stack. Univariate-, permutation-, and accuracy-based feature selection is available as long ... | A |
We thus have 3 cases, depending on the value of the tuple
$(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))$... | $p(v,[013])=p(v,[313])=p(v,[113])=1$.
Similarly, when $f=[112]$, | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | $\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],$
$[112],[003],[113]\}$. | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these
In Experiment I: Text Classification, we use FewRel [Han et al., 2018] and Amazon [He and McAuley, 2016]. They are datasets for 5-way 5-shot classification, which means 5 classes are randomly sampled from the full dataset for each task, and each class has 5 samples. FewRel is a relation classification dataset with 65/... | In meta-learning, we have multiple tasks $T$ sampled from a distribution $p(\mathcal{T})$ [Ravi and Larochelle, 2017, Andrychowicz et al., 2016, Santoro et al., 2016]. For each task $T_{i}$, we train a base mode...
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personali... |
In Experiment I: Text Classification, we use FewRel [Han et al., 2018] and Amazon [He and McAuley, 2016]. They are datasets for 5-way 5-shot classification, which means 5 classes are randomly sampled from the full dataset for each task, and each class has 5 samples. FewRel is a relation classification dataset with 65/... | Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct the setting that tasks are similar to each other. For a fair comparison, each task on this setting also has 120 and 1200 utterances o... | B |
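The 5-way 5-shot protocol described above can be made concrete with a small episode sampler; this is a generic sketch (the function name and the dict-of-lists dataset layout are our assumptions, not the papers' code):

```python
import random

def sample_episode(dataset, n_way=5, k_shot=5, k_query=5, seed=None):
    """Sample one N-way K-shot episode from a {class_name: [examples]} dict:
    pick n_way classes, then k_shot support and k_query query examples each."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = rng.sample(dataset[cls], k_shot + k_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query
```

Meta-training then loops: sample an episode, adapt the base model on the support set, and evaluate the adapted model on the query set.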
The CCA codebook-based multi-UAV beam tracking scheme with TE awareness. Based on the designed codebook, by exploiting the Gaussian process (GP) tool, both the position and attitude of UAVs can be fast tracked for fast multiuser beam tracking along with dynamic TE estimation. Moreover, the estimated TE is leveraged to... | For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac... |
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da... |
Note that there exist some mobile mmWave beam tracking schemes exploiting the position or motion state information (MSI) based on conventional ULA/UPA recently. For example, the beam tracking is achieved by directly predicting the AOD/AOA through the improved Kalman filtering [26], however, the work of [26] only targe... | Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-base... | C |
There are other logics, incomparable
in expressiveness with $\mathsf{FO}^{2}_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The | In addition, to make the main line of argument clearer, we consider only the finite graph case in the body of the paper,
which already implies decidability of the finite satisfiability of $\mathsf{FO}^{2}_{\textup{Pres}}$... | The paper [4] shows decidability for a logic with incomparable expressiveness: the quantification allows a more powerful
quantitative comparison, but must be guarded – restricting the counts only of sets of elements that are adjacent to a given element. | Related one-variable fragments in which we have only a
unary relational vocabulary and the main quantification is $\exists^{S}x\,\phi(x)$ are known to be decidable (see, e.g. [2]), and their decidability ... | There are other logics, incomparable
in expressiveness with $\mathsf{FO}^{2}_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
| To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... |
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe... | D |
The encoder layer with the depth-wise LSTM unit, as shown in Figure 2, first performs the self-attention computation, then the depth-wise LSTM unit takes the self-attention results and the output and the cell state of the previous layer to compute the output and the cell state of the current layer.
| Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t... |
Another way to take care of the outputs of these two sub-layers in the decoder layer is to replace their residual connections with two depth-wise LSTM sub-layers, as shown in Figure 3 (b). This leads to better performance (as shown in Table 4), but at the cost of more parameters and decoder depth in terms of sub-laye...
Different from encoder layers, decoder layers involve two multi-head attention sub-layers: a masked self-attention sub-layer to attend the decoding history and a cross-attention sub-layer to attend information from the source side. Given that the depth-wise LSTM unit only takes one input, we introduce a merging layer ... | We also study the merging operations, concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers, to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4.
| C |
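A minimal numpy sketch of the depth-wise LSTM unit described above, where the LSTM recurrence runs across layer depth rather than time (the gate layout, weight shapes, and initialization are our assumptions; the published model also includes merging and normalization details omitted here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DepthwiseLSTMUnit:
    """LSTM cell applied across *layer depth*: at layer l, the sub-layer
    output x is the input, and the hidden/cell states (h, c) come from
    layer l-1. Hypothetical re-implementation sketch, not the paper's code."""

    def __init__(self, d_model, rng):
        # one weight matrix per gate (input, forget, candidate, output),
        # each acting on the concatenation [x; h_prev]
        self.W = {g: 0.02 * rng.standard_normal((2 * d_model, d_model))
                  for g in "ifgo"}

    def __call__(self, x, h_prev, c_prev):
        z = np.concatenate([x, h_prev], axis=-1)
        i = sigmoid(z @ self.W["i"])   # input gate
        f = sigmoid(z @ self.W["f"])   # forget gate
        g = np.tanh(z @ self.W["g"])   # candidate state
        o = sigmoid(z @ self.W["o"])   # output gate
        c = f * c_prev + i * g         # cell state passed on to layer l+1
        h = o * np.tanh(c)             # layer output, replacing the residual path
        return h, c
```

In a decoder layer, `x` would be the merged output of the masked self-attention and cross-attention sub-layers, while `(h_prev, c_prev)` come from the layer below.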
Let $(X_{i},\uptheta_{i})_{i\in I}$ be a family of pre-spectral spaces,
where the index set $I$... | $\mathcal{S}\left(\prod_{i\in I}X_{i}\right)\simeq\prod_{i\in I}\mathcal{S}\left(X_{i}\right)$ [18, Theorem 8.4.8].
Therefore, ... | $\mathcal{S}\left(\sum_{i\in I}X_{i}\right)\simeq\sum_{i\in I}\mathcal{S}\left(X_{i}\right)$
thanks to [18, Fact 8.4.3]. |
By Fact 4.7, $(\mathcal{S}\left(X_{i}\right),\mathcal{S}\left(\uptheta_{i}\right))$ | By Fact 4.7, $(\mathcal{S}\left(X_{i}\right),\mathcal{S}\left(\uptheta_{i}\right))$
is a spectral space. Since spectral spac... | C |
We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen... | We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen... |
The comparison results of the real distorted image are shown in Fig. 13. We collect the real distorted images from the videos on YouTube, captured by popular fisheye lenses, such as the SAMSUNG 10mm F3, Rokinon 8mm Cine Lens, Opteka 6.5mm Lens, and GoPro. As illustrated in Fig. 13, our approach generates the best rect... |
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify... |
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, including the highest metrics on PSNR and SSIM, as well as the lowest metric on MDLD. Specifically, compared with the traditional methods [23, 24] based on the hand-crafted features, our approach overcomes the scene l... | B |
Apart from these empirical findings, there have been some theoretical
studies on large-batch training. For example, the convergence analyses of LARS have been reported in [34]. The work in [37] analyzed the inconsistency bias in decentralized momentum SGD and proposed DecentLaM for decentralized large-batch training. | Furthermore, researchers in [19] argued that the extrapolation technique is suitable for large-batch training and proposed EXTRAP-SGD.
However, experimental implementations of these methods still require additional training tricks, such as warm-up, which may make the results inconsistent with the theory. | We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework.
Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings. | Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33]
proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11] | Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD.
In large-batch training, SNGM achieves better training loss and test accuracy than the fou... | A |
When the algorithm terminates with $C_{s}=\emptyset$, Lemma 5.2 ensures the solution $z^{\text{final}}$ is integral. By Lemma 5.5, any client $j$ with $d(j,S)>$... |
do $F_{A}\leftarrow\{i^{A}_{j}~|~j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$ | For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here,
$\mathcal{F}$ and $\mathcal{C}$... | Brian Brubach was supported in part by NSF awards CCF-1422569 and CCF-1749864, and by research awards from Adobe. Nathaniel Grammel and Leonidas Tsepenekas were supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. Aravind Srinivasan was supported in part by NSF awa... | $F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}~|~j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$
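The assignment step $F_A \leftarrow \{i^A_j \mid j \in H_A \text{ and } F_I \cap G_{\pi^I j} = \emptyset\}$ reads directly as a set comprehension; the container choices below (dicts mapping clients to candidate facilities, neighborhoods, and proxy clients) are hypothetical, for illustration only:

```python
def assignable_facilities(H_A, i_A, F_I, G, pi_I):
    """Collect i^A_j for every client j in H_A whose proxy client pi_I[j]
    has a facility neighborhood G[pi_I[j]] disjoint from the already-open
    set F_I. All argument names mirror the pseudocode, not real code."""
    return {i_A[j] for j in H_A if not (F_I & G[pi_I[j]])}
```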
Figure 1: (a) LASSO regression: trajectories of states; (b) LASSO regression: convergence of mean square errors with $c(k)=1/(k+1)^{0.4}$ and $\alpha(k)=3/(k+1)$... | We have studied the distributed stochastic subgradient algorithm for the stochastic optimization by networked nodes to cooperatively minimize a sum of convex cost functions.
We have proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditio... | (Lemma 3.1).
To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (... |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... |
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independency with identical distribution, Markovian... | A |
\[ \delta \geq \frac{\max(p_{j})}{\sum^{m}_{i=1}p_{ij}} \geq \frac{1}{m}, \] |
Results from Figure 10 show that the increase of $l$ lowers the information loss but raises the relative error rate. It is mainly because the number of tuples in each group increases with the growth of $l$. On the one hand, in random output tables, the probabilities that tuples have to cover on the Q... |
Observing from Figure 7(a), the information loss of MuCo increases with the decrease of parameter $\delta$. According to Corollary 3.2, each QI value in the released table corresponds to more records with the reduction of $\delta$, causing more records to be involved for covering on the QI ... |
Property 1 demonstrates the constraint that the range of $\delta$ depends on the number of tuples in the group. Next, the relation between the value of $\delta$ and the number of correlative tuples, given a released QI value, is discussed as follows. | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i...
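Property 1's inequality chain $\delta \geq \max(p_j)/\sum_{i=1}^{m} p_{ij} \geq 1/m$ can be checked numerically on a toy group (a hypothetical helper, not part of MuCo's implementation):

```python
def delta_lower_bound(probs):
    """Toy check of the chain delta >= max(p)/sum(p) >= 1/m for one QI
    group, given the list of its m coverage probabilities. Returns the
    middle quantity, which delta must dominate."""
    m = len(probs)
    ratio = max(probs) / sum(probs)
    assert ratio >= 1.0 / m - 1e-12  # the second inequality always holds
    return ratio
```

The second inequality holds because the maximum of $m$ nonnegative numbers is at least their average, so the ratio is at least $1/m$.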
Table 2: PointRend’s step-by-step performance on our own validation set (split from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “... | As shown in Figure 2, we compare HTC, SOLOv2 and PointRend by visualizing their predictions. It can be seen that PointRend generates much finer and smoother segmentation boundaries than HTC and SOLOv2, and it also handles overlapped instances gracefully (see top-left corner in Figure 2). Meanwhile, PointRend succeeds in disti... | In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
| Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | B |
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
We denote by $\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_{i}(\delta_{1},\dots,\delta_{n})=\delta_{i}$...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s...
$\bm{w}_{h}^{k}=\operatorname*{arg\,min}_{\bm{w}}\sum_{l}\left[r(s_{h}^{l},a_{h}^{l})+\max_{a\in\mathcal{A}}Q_{h+1}^{k-1}(s_{h+1}^{l},a)-\langle\bm{\phi}(s_{h}^{l},a_{h}^{l}),\bm{w}\rangle\right]^{2}+\left\lVert\bm{w}\right\rVert_{2}.$ | From Figure 1, we see LSVI-UCB-Restart with the knowledge of global variation drastically outperforms all other methods designed for stationary environments, in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the $Q$ function with knowledge of the total variatio...
Finally, we use an epoch restart strategy to adapt to the drifting environment, which achieves near-optimal dynamic regret notwithstanding its simplicity. Specifically, we restart the estimation of $\bm{w}$ after $\frac{W}{H}$ episodes, all il... |
In practice, the transition function $\mathbb{P}$ is unknown, and the state space might be so large that it is impossible for the learner to fully explore all states. If we parametrize the action-value function in a linear form as $\langle\bm{\phi}(\cdot,\cdot),\bm{w}\rangle$... |
One might be skeptical since simply applying the least-squares method to solve $\bm{w}$ does not take the distribution drift in $\mathbb{P}$ and $r$ into account and hence may lead to non-trivial estimation error. However, we show that the estimation error can gracefully adapt to the n...
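The regularized least-squares estimate of $\bm{w}$ has a closed form when the penalty is squared ($\|\bm{w}\|_2^2$, as in ridge regression; the displayed objective uses $\|\bm{w}\|_2$, so treating it as squared is a simplifying assumption for this sketch):

```python
import numpy as np

def lsvi_weights(phi, targets, lam=1.0):
    """Closed-form ridge solve of argmin_w sum_l (y_l - <phi_l, w>)^2
    + lam * ||w||^2. Here phi has shape (L, d), one row per sampled
    feature vector phi(s_h^l, a_h^l), and targets holds the regression
    targets r + max_a Q_{h+1}^{k-1}(s_{h+1}^l, a)."""
    A = phi.T @ phi + lam * np.eye(phi.shape[1])
    return np.linalg.solve(A, phi.T @ targets)
```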
A series of 1-5 Likert scale questions (1: strongly disagree, 5: strongly agree) were presented to the respondents (in SeenFake-57) to further gain insights into their views on fake news. Respondents feel that the issue of fake news will remain for a long time ($M=4.33$, $SD=0.831$)... | Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst...
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | A |
However, GAT also has some limitations. When encountering a new entity (e.g., W3C), its embedding $\mathbf{e}_{\text{W3C}}$ is randomly initialized, and the computed attention scores by GAT are meaningless. Additionally, $\mathbf{e}_{\text{W3C}}$... | However, GAT also has some limitations. When encountering a new entity (e.g., W3C), its embedding $\mathbf{e}_{\text{W3C}}$ is randomly initialized, and the computed attention scores by GAT are meaningless. Additionally, $\mathbf{e}_{\text{W3C}}$... | Figure 2: Insight into multi-layer DAN. a. In the single-layer DAN, we first use an additional aggregation layer to obtain the neighbor context (1-2); we then use the neighbor context as query to score neighbors (3); we finally aggregate the neighbors with the attention scores to obtain the final output embedding (4-5)... | If $\mathbf{e}_{\text{W3C}}$ is unobservable during the training phase, it becomes less useful and potentially detrimental when computing attention scores during the testing phase. To address this issue, we can introduce a decentralized attention network.... | Alternatively, we can implement the decentralized approach using a second-order attention mechanism. As depicted in 2b, each layer in DAN consists of two steps, similar to a multi-layer GAT. The computation involves the previous two layers and can be formulated using the following equation:
| C |
9: Taking stochastic gradient ascent $t_{\rm vdm}$ times to maximize $L_{\rm VDM}$ and update parameters $(\varphi,\psi,\theta)$... | Upon fitting VDM, we propose an intrinsic reward by an upper bound of the negative log-likelihood, and conduct self-supervised exploration based on the proposed intrinsic reward. We evaluate the proposed method on several challenging image-based tasks, including 1) Atari games, 2) Atari games with sticky actions, which...
To validate the effectiveness of our method, we compare the proposed method with the following self-supervised exploration baselines. Specifically, we conduct experiments to compare the following methods: (i) VDM. The proposed self-supervised exploration method. (ii) ICM [10]. ICM first builds an inverse dynamics mode... |
In this section, we conduct experiments to compare the proposed VDM with several state-of-the-art model-based self-supervised exploration approaches. We first describe the experimental setup and implementation detail. Then, we compare the proposed method with baselines in several challenging image-based RL tasks. The ... |
In this section, we introduce VDM for exploration. In section III-A, we introduce the theory of VDM based on conditional variational inference. In section III-B, we present the detail of the optimizing process. In section III-C, we analyze the result of VDM used in ‘Noisy-Mnist’ that models the multimodality and stoch... | C |
Especially for Trefethen functions, such as the Runge function $f(x)=1/(1+10\|x\|^{2})$, $(B)$ prevents spline interpolation. | Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to
scale sub-exponentially with space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are abl... | Consequently, any restriction $Q_{|M}\in\Pi_{M}$ of a polynomial $Q\in\Pi_{A}$ to $M$ can be interpolated as in ... | Several improvements have been presented, including Floater–Hormann interpolation [16, 38], that reach better approximation quality than splines.
However, all of them share the above weaknesses (A,B,C), as we demonstrate in the numerical experiments of Section 8. | Finally, we observe that Floater–Hormann interpolation performs better than multivariate cubic splines. It is comparable to $5^{th}$-order splines,
but reaches an accuracy of $10^{-7}$...
3(a)) Illustration of the projection mapping trained on two collections of samples generated from two different target distributions with $m=n=100$.
Here the red and blue points are generated from Gaussian distributions with two different covariance matrices. | The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized.
The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by ... | While the Wasserstein distance has wide applications in machine learning, the finite-sample convergence rate of the Wasserstein distance between empirical distributions is slow in high-dimensional settings.
We propose the projected Wasserstein distance to address this issue. | The finite-sample convergence of general IPMs between two empirical distributions was established.
Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance has a minor dependence on the dimension of target distributions, which alleviates the curse of dimensionality. | The computation of projected Wasserstein distance was recently studied in [43, 32, 34].
We use the Riemannian gradient method discussed in [32, Algorithm 3] to compute the projected Wasserstein distance, where the details of the corresponding algorithm are summarized in Appendix B. | D |
Figure 1: Image reconstruction using β𝛽\betaitalic_β-TCVAE (Figure 1b) and DS-VAE (Figure 1d). DS-VAE is able to take the blurry output of the underlying β𝛽\betaitalic_β-TCVAE model and learn to render a much better approximation to the target (Figure 1a). Figure 1c shows the effect of perturbing Z𝑍Zitalic_Z. DS-VA... | While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i... | Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and corre... |
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (footnote 1: in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or flow-based DGMs, supervise... | The framework is general and can utilize any DGM. Furthermore, even though it involves two stages, the end result is a single model which does not rely on any auxiliary models, additional hyper-parameters, or hand-crafted loss functions, as opposed to previous works addressing the problem (see Section LABEL:sec:related...
This window operator calculates the connection between the pie pin and alpha or beta at A and B and transfers it to the right side (A AND B). For the output, one can measure by firing a laser onto the pie pin of the result side and checking whether it returns to either alpha or beta. The picture shows the c...
The NOT gate performs logical negation through a single 'twisting', as in the 4-pin design. To be exact, the position of the middle ground pin stays fixed, and the twist is a structural transformation that exchanges the positions of the remaining two pins, true and false. | Optical logic aggregates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the numbering shown in Fig. 5 can be applied to the ... | The structure-based computers mentioned in this paper are based on Boolean algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic as 1 and 0 and mathematically describes digital electrical si...
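The twist just described can be viewed as a pin permutation; the toy matrix below (an illustrative analogy of ours, not the paper's notation) fixes the middle ground pin and swaps the two outer pins, so applying it twice restores the original pin order.

```python
import numpy as np

# Pin-permutation view of the "twist": the ground pin (index 1) stays
# fixed while the true/false pins (indices 0 and 2) swap places.
TWIST = np.array([[0, 0, 1],
                  [0, 1, 0],
                  [1, 0, 0]])

def apply_twist(pins):
    """Permute a 3-pin signal vector according to the twist."""
    return TWIST @ np.asarray(pins)
```

Since the twist is its own inverse, `TWIST @ TWIST` is the identity, matching the intuition that two twists cancel.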
The structural computer used an inverted signal pair to implement the reversal of a signal (the NOT operation) as a structural transformation, i.e. a twist, and four pins were used for the AND and OR operations, as series and parallel connections were required. However, one can ask whether the four-pin designs are the...
Hence any function $x^{n}$ with $\gcd(n,q-1)\neq 1$, under the action of $\mathbf{K}$, settles down to the function $x^{q-1}$... |
In this section, we provide examples of estimating the possible orbit lengths of permutation polynomials in the form of Dickson polynomials $D_{n}(x,\alpha)$ [10] of degree $n$ through the linear representati... | The paper is organized as follows. Section 2 focuses on linear representation for maps over finite fields $\mathbb{F}$, develops conditions for invertibility, computes the compositional inverse of such maps and estimates the cycle structure of permutation polynomials. In Section 3, this linear representat... | The work [19] also provides a computational framework to compute the cycle structure of the permutation polynomial $f$ by constructing a matrix $A(f)$, of dimension $q\times q$, through the coefficients of the (algebraic) powers of $f^{k}$...
In this section, we aim to compute the possible cycle lengths of the PP through the linear representation defined in (10). As discussed in Section 1.3, given a polynomial $f(x)$, we associate a dynamical system through a difference equation of the form | D
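For intuition on cycle structure, small fields admit a brute-force check: the sketch below evaluates a Dickson polynomial via its standard recurrence ($D_0 = 2$, $D_1 = x$, $D_n = xD_{n-1} - \alpha D_{n-2}$) and walks the orbits of $x \mapsto f(x)$. This is a naive illustration for tiny $q$, not the linear-representation method developed in the paper.

```python
def dickson(n, alpha, p):
    """Return a callable evaluating the degree-n Dickson polynomial
    D_n(x, alpha) over F_p via the recurrence
    D_0 = 2, D_1 = x, D_n = x*D_{n-1} - alpha*D_{n-2}."""
    def f(x):
        d0, d1 = 2 % p, x % p
        if n == 0:
            return d0
        for _ in range(n - 1):
            d0, d1 = d1, (x * d1 - alpha * d0) % p
        return d1
    return f

def cycle_lengths(f, p):
    """Cycle structure of the map x -> f(x) on F_p
    (assumes f is a permutation of F_p)."""
    seen, lengths = set(), []
    for start in range(p):
        if start in seen:
            continue
        x, length = start, 0
        while x not in seen:
            seen.add(x)
            x = f(x)
            length += 1
        lengths.append(length)
    return sorted(lengths)
```

Recall that $D_n(x,\alpha)$ with $\alpha \neq 0$ permutes $\mathbb{F}_q$ exactly when $\gcd(n, q^2-1)=1$; e.g. over $\mathbb{F}_{11}$, $n=7$ works since $\gcd(7,120)=1$.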
The NNFS algorithm performed surprisingly well in our simulations given its simple and greedy nature, showing performance very similar to that of the adaptive lasso. However, in both gene expression data sets it was among the two worst performing methods, both in terms of accuracy and view selection stability. If one ... | In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of vi... | Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expressi... | For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu et al., 2012).
An exam... |
The false discovery rate in view selection for each of the meta-learners can be observed in Figure 4. Note that the FDR is particularly sensitive to variability since its denominator is the number of selected views, which itself is a variable quantity. In particular, when the number of selected views is small, the add... | B |
According to Figure 7 and Table 8, the two DepAD algorithms are significantly better than all benchmark methods except for wkNN and iForest in terms of ROC AUC. With wkNN, the results are similar. With iForest, the $p$-values are very close to 0.05. In terms of AP, the two DepAD algorithms yield significantl... | As FBED-CART-PS and FBED-CART-Sum show similar results to wkNN, in this section, we explain the performance difference between DepAD algorithms and wkNN. The following analysis is conducted with both FBED-CART-PS and FBED-CART-Sum, and the results are very similar. We only present the analysis based on FBED-CART-PS in ... | Figure 7: Comparison of two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, with benchmark methods in terms of ROC AUC. The X axis stands for the ROC AUC of a comparison method, and the Y axis represents the ROC AUC of FBED-CART-PS (circle) or FBED-CART-Sum (plus). A dot (or plus) represents a comparison of FBED-CART...
In summary, the DepAD methods FBED-CART-RZPS, FBED-CART-PS, and FBED-CART-Sum generally demonstrate good performance in terms of ROC AUC. Among them, FBED-CART-PS and FBED-CART-Sum are considered good choices as they exhibit favorable performance in both ROC AUC and AP. It is noteworthy that FBED-CART-PS is the same a... |
Figure 8: Comparison of two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, with benchmark methods in terms of AP. The X axis stands for the AP of a comparison method, and the Y axis represents the AP of FBED-CART-PS (circle) or FBED-CART-Sum (plus). A dot (or plus) represents a comparison of FBED-CART-PS (or FBED-C... | A |
At the start of the interaction, when no contexts have been observed, $\hat{\theta}_{t}$ is well-defined by Eq (5) when $\lambda_{t}>0$. Therefore, th... |
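A generic regularized least-squares estimate of this kind (a sketch under our own conventions; Eq (5) in the paper may differ) stays well-defined with zero observations precisely because the regularizer $\lambda I$ keeps the system matrix positive definite:

```python
import numpy as np

def ridge_estimate(X, y, lam):
    """theta_hat = argmin ||X theta - y||^2 + lam * ||theta||^2
                 = (X^T X + lam I)^{-1} X^T y.
    For lam > 0 the matrix X^T X + lam I is positive definite, so the
    estimate exists even when X contains zero observed contexts (0 rows)."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y)
```

With an empty design matrix the estimate is simply the zero vector, which matches the intuition that nothing has been learned yet.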
Comparison with Oh & Iyengar [2019]: The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a m... | Algorithm 1 follows the template of optimism in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. Technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
| where pessimism is the additive inverse of the optimism (difference between the payoffs under true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that $\theta_{*}\in C_{t}(\delta)$... | In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL, for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of... | B
Inspired by FPN [22], which computes multi-scale features with different levels, we propose a cross-scale graph pyramid network (xGPN). It progressively aggregates features from cross scales as well as from the same scale at multiple network levels via a hybrid module of a temporal branch and a graph branch. As shown ... | Cross-scale graph network. The xGN module contains a temporal branch to aggregate features in a temporal neighborhood, and a graph branch to aggregate features from intra-scale and cross-scale locations. Then it pools the aggregated features into a smaller temporal scale. Its architecture is illustrated in Fig. 4. The ... | To further improve the boundaries generated from $M_{loc}$, we design $M_{adj}$ inspired by FGD in [24]. For each updated anchor seg... | We provide an ablation study for the key components VSS and xGPN in VSGN to verify their effectiveness on the two datasets in Table 3 and 4, respectively. The baselines are implemented by replacing each xGN module in xGPN with a layer of $\textrm{Conv1d}(3,2)$ and ReLU, and not using cutt...
2) We propose a novel temporal action localization framework VSGN, which features two key components: video self-stitching (VSS); cross-scale graph pyramid network (xGPN). For effective feature aggregation, we design a cross-scale graph network for each level in xGPN with a hybrid module of a temporal branch and a gra... | A |
Hyperparameter optimization (also called hyperparameter tuning) is the process of selecting appropriate values of hyperparameters for machine learning (ML) models, often independently for each data set, to achieve their best possible results.
Although time consuming, this process is required for the vast majority of ML... | One common focus of related work is the hyperparameter search for deep learning models. HyperTuner [LCW∗18] is an interactive VA system that enables hyperparameter search by using a multi-class confusion matrix for summarizing the predictions and setting user-defined ranges for multiple validation metrics to filter out... | Important contributions of this research include the formalization of primary concepts [CDM15], the identification of methods for assessing hyperparameter importance [JWXY16, PBB19, vRH17, HHLB13, HHLB14, vRH18], and resulting libraries and frameworks for specific hyperparameter optimization methods [KGG∗18, THHLB13]. ... | Visualization tools have been implemented for sequential-based, bandit-based, and population-based approaches [PNKC21], and for more straightforward techniques such as grid and random search [LCW∗18]. Evolutionary optimization, however, has not experienced similar consideration by the InfoVis and VA communities, with t... | Numerous techniques exist that try to solve this challenge, such as the well-known grid search, random search [BB12], and Bayesian optimization that belong to the generic type of sequential-based methods [BBBK11, SSW∗16]. Other proposed methods include bandit-based approaches [FKH18, LJD∗17], population-based methods [... | D |
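Of the standard techniques mentioned in this context, random search is the simplest to sketch; the search space and scoring function below are made-up placeholders, not tied to any particular ML model.

```python
import random

def random_search(evaluate, space, n_trials=30, seed=0):
    """Random search over a discrete hyperparameter space: sample settings
    uniformly at random and keep the best-scoring configuration."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Grid search would instead enumerate the Cartesian product of `space`; Bayesian optimization replaces the uniform sampling with a model-guided choice of the next configuration.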
Markov chain synthesis has garnered attention from various disciplines, including physics, systems theory, computer science, and numerous other fields of science and engineering. This attention is particularly notable within the context of Markov chain Monte Carlo (MCMC) algorithms [1, 2, 3]. | and a complex communication architecture is not required for the estimation of the distribution.
By presenting numerical evidence within the context of the probabilistic swarm guidance problem, we demonstrate that the convergence rate of the swarm distribution to the desired steady-state distribution is substantially f... | This algorithm treats the spatial distribution of swarm agents, called the density distribution, as a probability distribution and employs the Metropolis-Hastings (M-H) algorithm to synthesize a Markov chain that guides the density distribution toward a desired state.
The probabilistic guidance algorithm led to the dev... | The fundamental idea underlying MCMC algorithms is to synthesize a Markov chain that converges to a specified steady-state distribution.
Random sampling of a large state space while adhering to a predefined probability distribution is the predominant use of MCMC algorithms. | Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transi... | C |
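A minimal version of that idea: given a desired stationary distribution $\pi$ and a symmetric proposal matrix, the Metropolis acceptance rule yields a Markov matrix whose steady state is $\pi$. This is an illustrative sketch of the generic M-H construction, not the swarm-guidance synthesis discussed in the paper; the column-stochastic convention is our choice.

```python
import numpy as np

def metropolis_hastings_matrix(pi, proposal):
    """Synthesize a Markov matrix with stationary distribution pi from a
    symmetric proposal matrix, using the Metropolis acceptance rule.
    Column-stochastic convention: M[i, j] = P(next = i | current = j)."""
    pi = np.asarray(pi, dtype=float)
    n = len(pi)
    M = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j:
                # accept a proposed move j -> i with prob min(1, pi_i/pi_j)
                M[i, j] = proposal[i, j] * min(1.0, pi[i] / pi[j])
        M[j, j] = 1.0 - M[:, j].sum()   # remaining mass = stay put
    return M
```

Detailed balance $\pi_j M_{ij} = \pi_i M_{ji}$ holds by construction, which is what makes $\pi$ stationary.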
$\mathbf{e}(x_{i})=\frac{\text{dist}_{geo}(x_{j},x_{j}^{*})}{\text{diam}(\mathcal{X}_{j})}\,,$ | In contrast, HiPPI and our method require shape-to-universe representations. To obtain these, we use synchronisation to extract the shape-to-universe representation from the pairwise transformations. By doing so, we obtain the initial $U$ and $Q$. We refer to this method of synchronising the ZoomOut r...
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both ... | We compare our method against several recent state-of-the-art methods, including the pairwise matching approach ZoomOut [47], the two-stage approach ZoomOut+Sync that performs synchronisation to achieve cycle consistency in the results produced by ZoomOut, as well as the multi-matching methods HiPPI [9] and ConsistentZ... | Our method shows state-of-the-art results on this dataset, see Fig. 2 and Tab. 2.
While the PCK curves between ours, ZoomOut+Sync and HiPPI in Fig. 2 are close, the AUC in Tab. 2 shows that our performance is still superior by a small margin. Qualitative results can be found in the supplementary material. | C |
$A_{C}=(\Gamma_{C},\{\gamma\gamma^{\prime}\mid\gamma,\gamma^{\prime}\in\Gamma_{C}\text{ and }\gamma\leftrightarrow\gamma^{\prime}\})$ |
A chordal graph $G$ is a directed path graph if and only if $G$ is an atom or for a clique separator $C$ each graph $\gamma\in\Gamma_{C}$ is a path graph and the $\gamma_{i}$... | If there exists a polynomial algorithm that tests if a graph $G$ is a path graph and returns a clique path tree of $G$ when the answer is “yes”, then there exists an algorithm with the same complexity to test if a graph is a directed path graph. | A chordal graph $G$ is a path graph if and only if $G$ is an atom or for a clique separator $C$ each graph $\gamma\in\Gamma_{C}$ is a path graph and there exists $f:\Gamma_{C}\to[s]$... | The tree $T$ of the previous theorem is called the clique path tree of $G$ if $G$ is a path graph or the directed clique path tree of $G$ if $G$ is a directed path graph. In Figure 1, the left part shows a path graph $G$, and on the right there is a clique path tree...
$P_{(kl)}=\begin{bmatrix}0.5+q&0.5&0.3&0.3\\ 0.5&0.5+q&0.3&0.3\\ \cdots\end{bmatrix}.$
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as $\rho$ increases they all perform worse. Under the DCMM model, the mixed Hamming ...
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both two methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha... |
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both two approaches perform better than OCCAM and GeoNMF under the MMSB setting.... | B |
Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to certain statistical... | See, e.g., Udriste (1994); Ferreira and Oliveira (2002); Absil et al. (2009); Ring and Wirth (2012); Bonnabel (2013); Zhang and Sra (2016); Zhang et al. (2016); Liu et al. (2017); Agarwal et al. (2018); Zhang et al. (2018); Tripuraneni et al. (2018); Boumal et al. (2018); Bécigneul and Ganea (2018); Zhang and Sra (2018... | variational inference (Gershman and Blei, 2012; Kingma and Welling, 2019), policy optimization (Sutton et al., 2000; Schulman et al., 2015; Haarnoja et al., 2018), and GAN (Goodfellow et al., 2014; Arjovsky et al., 2017), and has achieved tremendous empirical successes.
However, | See, e.g., Cheng et al. (2017); Cheng and Bartlett (2018); Xu et al. (2018); Durmus et al. (2019) and the references therein for the analysis of the Langevin MCMC algorithm.
Besides, it is shown that (discrete-time) Langevin MCMC can be viewed as (a discretization of) the Wasserstein gradient flow of KL[p(z),p(z|x))... | See, e.g., Welling and Teh (2011); Chen et al. (2014); Ma et al. (2015); Chen et al. (2015); Dubey et al. (2016); Vollmer et al. (2016); Chen et al. (2016); Dalalyan (2017); Chen et al. (2017); Raginsky et al. (2017); Brosse et al. (2018); Xu et al. (2018); Cheng and Bartlett (2018); Chatterji et al. (2018); Wibisono (... | A |
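The discretization referred to here is, in its simplest form, the unadjusted Langevin algorithm (ULA); the target, step size, and chain length below are illustrative choices of ours.

```python
import numpy as np

def ula_samples(grad_log_p, x0, step=0.05, n_steps=20000, seed=0):
    """Unadjusted Langevin algorithm:
    x_{k+1} = x_k + step * grad log p(x_k) + sqrt(2*step) * xi, xi ~ N(0, I),
    i.e. a time-discretization of the Wasserstein gradient flow of the KL
    divergence to the target density p."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    chain = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        noise = rng.normal(size=x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
        chain[k] = x
    return chain
```

For a standard normal target, `grad_log_p = lambda x: -x`, the chain's long-run mean and variance approach 0 and 1 (up to the O(step) discretization bias that ULA incurs).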
, i.e., each agent makes decisions on its own. This type of method is usually easy to scale, but may have difficulty achieving globally optimal performance due to the lack of collaboration. To address this problem, another way is to jointly model the actions among learning agents with centralized optimization [16, 15]. H... | To make the policy transferable, traffic signal control is also modeled as a meta-learning problem in [14, 49, 36]. Specifically, the method in [14] performs meta-learning on multiple independent MDPs and ignores the influences of neighbor agents. A data augmentation method is proposed in [49] to generate diverse traf...
In this paper, we propose a novel Meta RL method MetaVIM for multi-intersection traffic signal control, which can make the policy learned from a training scenario generalizable to new unseen scenarios. MetaVIM learns the decentralized policy for each intersection which considers neighbor information in a latent way. W... | We can obtain the following findings: 1) Among these 5 models, the performance of Baseline is the worst. The reason is that it is hard to learn the effective decentralized policy independently in the multi-agent traffic signal control task, where one agent’s reward and transition are affected by its neighbors. 2) Compa... | 2) The performances of Individual RL and PressLight drop 38% and 41% when the model is transferred. It shows that the models learned by the regular RL algorithms indeed rely on the training scenario. MetaLight is more robust to various scenarios than Individual RL and PressLight, and it indicates the advantage of the m... | A |
$\mathbf{y}_{j+1}=\mathbf{y}_{j}-\left[\begin{array}{c}2\tau\,\mathbf{y}_{j}^{\mathsf{H}}\\ R_{k-1}\end{array}\right]^{\dagger}\left[\begin{array}{c}\tau\,\mathbf{y}_{j}^{\mathsf{H}}\,\mathbf{y}_{j}-\tau\\ R_{k-1}\,\mathbf{y}_{j}\end{array}\right],\quad j=0,1,\ldots$ | (the same recurrence with $G_{k-1}$ in place of $R_{k-1}$) | $A_{\text{rank-}r}^{\dagger}\,\mathbf{b}=(I-NN^{\mathsf{H}})\left[\mu N^{\mathsf{H}}\ \ldots\right]\mathbf{b}.$ | $\left[\begin{array}{c}2\tau\,U^{\mathsf{H}}\\ R_{0}^{\mathsf{H}}\end{array}\right]\mathbf{y}=\left[\ \ldots\ \right]\mathbf{b}.$
Figure 4 depicts the number of bins opened by the algorithms. The experiments show that the parameter $m$ has little impact on the performance of Hybrid($\lambda$), that is, as long as $m$ is sufficiently large (e.g., when $m\geq 1000$), the performance of Hybrid($\lambda$... | We will now use Lemma 2 to prove a more general result that incorporates the prediction error into the analysis. To this end, we will relate the cost of the packing of ProfilePacking to the packing that the algorithm would output if the prediction were error-free, which will allow us to apply the result of Lemma 2. Spe...
In the experiments that we discussed in Section 6.3, we reported the performance of the algorithm on a typical sequence. More precisely, we considered a single randomly generated sequence, as opposed to averaging the cost of the algorithm over multiple input sequences, because each input sequence is associated with it... | Figure 3 depicts the cost of the algorithms for a typical sequence, as a function of the prediction error. The chosen files are “csBA125_9” (for “GI”), “Schwerin2_BPP32” (for “Shwerin”), “BPP_750_50_0.1_0.8_2” (for “Randomly_Generated”), “Hard28_BPP832” (for “Schoenfield_Hard28”), and “Waescher_TEST0082” (for “Wäscher”... |
A second approach could be along the lines of (?), which describes a general method for combining an optimistic algorithm that trusts the prediction (in our context, ProfilePacking) and a pessimistic algorithm that ignores the prediction (in our context, the online algorithm $A$). The optimistic and pessimisti... |
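For concreteness, one standard prediction-free online algorithm that could play the role of $A$ is First-Fit (our choice for illustration; the paper's $A$ may differ):

```python
def first_fit(items, capacity=1.0):
    """First-Fit online bin packing: place each arriving item into the
    first open bin that still has room, opening a new bin if none fits."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity + 1e-9:  # tolerance for float sums
                b.append(item)
                break
        else:
            bins.append([item])
    return bins
```

First-Fit never reorders the input, so it serves as a natural pessimistic baseline against which prediction-based packings are compared.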
Finally, we empirically show that the proposed framework produces high-fidelity, watertight meshes, which means that it solves the initial problem of disjoint patches occurring in the original AtlasNet (Groueix et al., 2018). To evaluate the continuity of output surfaces, we propose the following metric.
To leverage that knowledge, we express watertightness as the ratio of rays that pass the parity test to the total number of cast rays. First, we sample $N$ points $p\in\hat{S}$ from all triangles of the reconstructed object $\hat{S}$...
In this experiment, we set $N=10^{5}$. Using more rays had a negligible effect on the output value of $WT$ but significantly slowed the computation. We compared AtlasNet with LoCondA applied to HyperCloud (HC) and HyperFl...
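A rough reimplementation of such a parity-based $WT$ score might look as follows; the ray setup, triangle sampling, and tolerances here are our assumptions, not the authors' exact procedure. Each ray starts far outside the mesh and passes through a sampled surface point; for a closed (watertight) surface the total number of crossings along the ray is even, while rays passing through a hole lose one crossing and come out odd.

```python
import numpy as np

def ray_tri_hit(orig, d, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle test: True if orig + t*d (t > eps)
    crosses the triangle tri = (v0, v1, v2)."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:
        return False                    # ray parallel to triangle plane
    inv = 1.0 / det
    s = orig - v0
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return (e2 @ q) * inv > eps         # crossing in front of the origin

def watertightness(tris, n_rays=200, seed=0):
    """WT ~ fraction of rays with an even number of surface crossings.
    Triangles are chosen uniformly (not area-weighted), a simplification."""
    rng = np.random.default_rng(seed)
    center = tris.mean(axis=(0, 1))
    passed = 0
    for _ in range(n_rays):
        tri = tris[rng.integers(len(tris))]
        point = rng.dirichlet(np.ones(3)) @ tri   # random point on triangle
        far = rng.normal(size=3)
        orig = center + 10.0 * far / np.linalg.norm(far)  # far outside
        d = point - orig
        d /= np.linalg.norm(d)
        hits = sum(ray_tri_hit(orig, d, t) for t in tris)
        passed += hits % 2 == 0
    return passed / n_rays
```

On a closed cube mesh this score is (up to numerical corner cases) 1.0, and it drops once a triangle is removed from the surface.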
The above formulation alone causes many of the produced patches to have unnecessarily long edges, which the network then folds so that the patch fits the surface of an object. To mitigate the issue, we add an edge length regularization motivated by (Wang et al., 2018). If we assume that the reconstructed mesh has the form... | Watertightness. Typically, a mesh is referred to as being either watertight or not watertight. Since this is a true-or-false statement, there is no well-established measure of the degree of discontinuity in an object's surface. To fill this gap, we propose a metric based on a simple, approximate check of whether... | D
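Such a regularizer, in its simplest generic reading (the vertex and edge arrays below are hypothetical inputs, not the paper's exact loss), just penalizes the squared lengths of mesh edges:

```python
import numpy as np

def edge_length_loss(verts, edges):
    """Mean squared edge length of a mesh: verts is an (n, 3) array of
    vertex positions, edges an (m, 2) array of vertex-index pairs.
    Penalizing this quantity discourages unnecessarily long edges."""
    v = np.asarray(verts, dtype=float)
    e = np.asarray(edges)
    diff = v[e[:, 0]] - v[e[:, 1]]
    return float(np.mean(np.sum(diff * diff, axis=1)))
```

In training, this term would be added to the reconstruction objective with a small weight, trading a slight loss in flexibility for more uniform triangles.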
Paper [61] introduced an Extra-gradient algorithm for distributed multi-block SPP with affine constraints. Their method covers the Euclidean case and the algorithm has an $O(1/N)$ convergence rate.
Our paper proposes an algorithm based on adding Lagrangian multipliers to consensus constr... | To prove Theorem 3.5 we first show that the iterates of Algorithm 1 naturally correspond to the iterates of a general Mirror-Prox algorithm applied to problem (54). Then we extend the standard analysis of the general Mirror-Prox algorithm to account for unbounded feasible sets.
| The Mirror-Prox algorithm can be performed in a decentralized manner; however, it is not known whether its optimality is preserved.
In this paper, we prove that Mirror-Prox remains optimal even in a decentralized case w.r.t. the dependence on the desired accuracy $\varepsilon$ and condition number $\chi$...
We proposed a decentralized method for saddle point problems based on the non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ... | The main idea is to use reformulation (54) and apply the Mirror-Prox algorithm [45] for its solution. This requires careful analysis in two aspects. First, the Lagrange multipliers $\mathbf{z},\mathbf{s}$ are not constrained, while the convergence rate result for the classical Mirror-Prox algorithm [45] is ... | B
The remainder of this section is dedicated to expressing the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describing an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
The study of cycles of graphs has attracted attention for many years. To mention just three well-known results, consider Veblen's theorem [2], which characterizes graphs whose edges can be written as a disjoint union of cycles, MacLane's planarity criterion [3], which states that planar graphs are the only ones to admit a 2-ba...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric... |
The set of cycles of a graph has a vector space structure over $\mathbb{Z}_{2}$, in the case of undirected graphs, and over $\mathbb{Q}$, in the case of directed graphs [5]. A basis of such a vector space is denoted cycle basis and its dimensio... | In this section we present some experimental results to reinforce
Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g... | A |
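The classical way to exhibit a cycle basis over $\mathbb{Z}_2$ is the fundamental cycle basis: take a spanning tree and close one cycle per non-tree edge, giving $m - n + c$ basis cycles. A small sketch (assuming a simple undirected graph on vertices $0,\dots,n-1$; not the minimum cycle basis):

```python
from collections import deque

def fundamental_cycle_basis(n, edges):
    """Fundamental cycle basis over Z_2: grow a spanning forest with
    union-find; every chord (non-tree edge) closes exactly one cycle
    with the unique tree path between its endpoints."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    adj = {i: [] for i in range(n)}
    chords = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                        # tree edge
            parent[ru] = rv
            adj[u].append(v)
            adj[v].append(u)
        else:                               # chord: will close a cycle
            chords.append((u, v))

    def tree_path(s, t):
        # BFS in the spanning forest to recover the unique s-t path
        prev = {s: None}
        queue = deque([s])
        while queue:
            x = queue.popleft()
            if x == t:
                break
            for y in adj[x]:
                if y not in prev:
                    prev[y] = x
                    queue.append(y)
        path, x = [], t
        while prev[x] is not None:
            path.append((prev[x], x))
            x = prev[x]
        return path

    return [tree_path(u, v) + [(u, v)] for u, v in chords]
```

On $K_4$ this yields $6 - 4 + 1 = 3$ basis cycles, each a triangle through the root of the spanning tree.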
Fix a simplicial complex $K$, a value $\delta\in(0,1]$, and integers $b\geq 1$ and $m>\mu(K)$. If $\mathcal{F}$ is a sufficiently large $(K,b)$-free cover such that $\pi_{m}(\mathcal{F})\geq\delta\binom{|\mathcal{F}|}{m}$... | One immediate application of Theorem 1.2 is the reduction of fractional Helly numbers. For instance, it easily improves a theorem (footnote 4: [35, Theorem 2.3] was not phrased in terms of $(K,b)$-free covers but readily generalizes to that setting, see Section 1.4.1) of Patáková [35, Theorem 2.3] in...
It is known that the Helly number of a $(K,b)$-free cover is bounded from above in terms of $K$ and $b$ [18] (footnote 2: the bound on the Helly number of a $(K,b)$-free cover directly follows from a combination of Proposition 30 and Lemma 26 in [18]), as is the Radon number [35, Proposit...
Note that the constant number of points given by the $(p,q)$-theorem in this case depends not only on $p$, $q$, and $d$, but also on $b$. For the setting of $(1,b)$-covers in surfaces (footnote 5: by a surface we mean a compact 2-dimensional ...
Through a series of papers [18, 35, 22], the Helly numbers, Radon numbers, and fractional Helly numbers for $(\lceil d/2\rceil,b)$-covers in $\mathbb{R}^{d}$ were bounded in terms of $d$ and... | A
Teal color encodes the current action's score, and brown the best result reached so far. These colors were chosen deliberately: they complement each other, and teal, the brighter of the two, denotes the current action.
If the list of features is long, the user can scroll this view. | Using our approach, we managed to achieve the same accuracy as before, 89%, compared to 83% reported by Mansouri et al. [94] for the additional external data set. For precision and recall, we always use macro-average, which is identical to Mansouri et al. [94]. On the one hand, the precision was 4% lower in both test a... | A use case present in a visual diagnosis tool revealed that feature generation involving the combination of two features is capable of a slight increase in performance [30]. The authors tested the same mathematical operations as in our system (i.e., addition, subtraction, multiplication, and division), but the generati... | Fig. 3(b) is a table heatmap view with five automatic feature selection techniques, their Average contribution, and an “Action” button to exclude any number of features. As we originally train our ML algorithm with all features, the yellow color (one of the standard colors used for highlighting [77]) in the last colu... | D
We use two geometries to evaluate the performance of the proposed approach: an octagon geometry with edges in multiple orientations with respect to the two axes, and a curved geometry (infinity shape) with different curvatures, shown in Figure 4. We have implemented the simulations in Matlab, using Yalmip/Gurobi to so... | The goal is to tune the parameters of the MPC-based planning unit without introducing any modification in the structure of the underlying control system.
We leverage the repeatability of the system, which is higher than the integrated encoder error of 3 μm, | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi... | We first optimize the performance of the simulated positioning system by adding a receding horizon MPCC stage where we pre-optimize the position and velocity references provided to the low level controller. This is enabled by the high repeatability of the system which results in run-to-run deviations of 3 μm ... | This paper demonstrated a hierarchical contour control implementation for the increase of productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe... | C |
Explicit bias mitigation techniques directly access the bias variables b_expl during training to develop invariance to them. Based on the way these variables are utilized during training, we choose five d... | Results.
For CelebA, methods generally show large variance on the minority patterns (blond-haired male celebrities), and lower variance on the majority patterns (mean over the rest of the groups), whereas for Biased MNISTv1, we find that methods only work for certain sets of hyperparameters and show degraded results on both... | Deep learning systems are trained to minimize their loss on a training dataset. However, datasets often contain spurious correlations and hidden biases which result in systems that have low loss on the training data distribution, but then fail to work appropriately on minority groups because they exploit and even ampli... | In this set of experiments, we compare the resistance to explicit and implicit biases. We primarily focus on the Biased MNISTv1 dataset, reserving each individual variable as the explicit bias in separate runs of the explicit methods, while treating the remaining variables as implicit biases. To ease analysis, we compu... |
Re-sampling/Re-weighting: These approaches balance out the spurious correlations. The classical approach is to re-balance the class distribution by adjusting the sampling probability or loss weight for majority/minority samples [14, 26, 41, 72, 20]. This also includes synthesizing minority instances [14, 26]. Moving beyo... | D |
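A minimal sketch of the classical re-balancing idea (illustrative only; none of the cited works is reproduced here): weight each sample inversely to the frequency of its group, so minority groups carry as much total loss weight as majority ones. The helper name `inverse_frequency_weights` is an assumption for this sketch.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Per-sample loss weights proportional to 1 / (frequency of the
    sample's group), normalized so the weights average to 1."""
    counts = Counter(groups)
    n, g = len(groups), len(counts)
    return [n / (g * counts[x]) for x in groups]

# 3 majority samples vs. 1 minority sample: the minority sample
# receives 3x the weight of each majority sample.
w = inverse_frequency_weights(["a", "a", "a", "b"])
```

Re-sampling uses the same quantity the other way around: sample index i with probability proportional to its weight instead of scaling its loss.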
Cheng et al. [72] propose a domain generalization method. They improve cross-dataset performance without knowing the target dataset or touching any new samples. They propose a self-adversarial framework to remove the gaze-irrelevant features in face images. Cui et al. define a new adaptation problem [138]: adaptatio... | Meta learning and metric learning show great potential in personalized gaze estimation. They usually require few-shot annotated samples for calibration.
Park et al. propose a meta learning-based calibration approach [47]. They train a highly adaptable gaze estimation network through meta learning. | They perform data augmentation w.r.t. rotation in target domains and require the rotation consistency in gaze estimation.
Wang et al. [143] propose a contrastive learning method for cross-dataset gaze estimation. They propose a contrastive loss function to encourage close feature distance for the samples with close gaze direc... | Figure 1: Deep learning-based gaze estimation relies on simple devices but complex algorithms to estimate human gaze. It usually uses off-the-shelf cameras to capture facial appearance, and employs deep learning algorithms to regress gaze from the appearance. According to this pipeline, we survey current deep learning-... | Recently, deep learning-based methods have gained popularity as they offer several advantages over conventional appearance-based methods. These methods use convolution layers or transformers [22] to automatically extract high-level gaze features from images. Deep learning models are also highly non-linear and can fit t... | A |
The face images were first preprocessed as described in Section 4.1. In contrast to the SMFRD dataset, RMFRD is imbalanced (5,000 masked faces vs. 90,000 non-masked faces). Therefore, we applied over-sampling by cropping some non-masked faces to get an equivalent number of cropped and full faces. Next, using the n... |
The rest of this paper is organized as follows: Section 2 presents the related work. In Section 3 we present the motivation and contributions of the paper. The proposed method is detailed in Section 4. Experimental results are presented in Section 5. The conclusion ends the paper. |
The quantization is then applied to extract the histogram of a number of bins as presented in Section 4.3. Finally, MLP is applied to classify faces as presented in Section 4.4. In this experiment, the 10-fold cross-validation strategy is used to evaluate the recognition performance. The experiments are repeated ten t... | As presented in Fig. 1, the size of the extracted feature map defines the number of the feature vectors that will be used in the BoF layer. Here we denote by V_i the number of feature vectors extracted from the i-th ... | Once the global histogram is computed, we pass to the classification stage to assign each test image to its identity. To do so, we apply the Multilayer perceptron classifier (MLP) where each face is represented by a term vector. The deep BoF network can be trained using back-propagation and gradient descent. Note that the ... | B |
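The 10-fold protocol itself is standard; a generic sketch of the fold bookkeeping follows (the BoF features and MLP classifier from the paper are not reproduced; `kfold_indices` is an illustrative helper, not code from the cited work).

```python
import random

def kfold_indices(n, k=10, seed=0):
    """Shuffle range(n) and split it into k disjoint, roughly equal folds.
    Each fold serves once as the test set; the remainder trains."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = kfold_indices(100, k=10)
for test_fold in folds:
    train = [i for f in folds if f is not test_fold for i in f]
    # fit the classifier on `train`, evaluate on `test_fold` ...
    assert len(train) + len(test_fold) == 100  # folds partition the data
```

Repeating the whole procedure with different seeds, as the excerpt describes, simply means calling `kfold_indices` ten times with `seed=0..9` and averaging the scores.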
⊢^i y ← odds i x :: (y : stream_A[i])
The even-indexed substream retains the head of the input, but its tail is the odd-indexed substream of the input’s tail. The odd-indexed substream, on the other hand, is simply the even-indexed substream of the input’s tail. Operationally, the heads and tails of both substreams are computed on demand similar to a lazy... |
Postponing the details of our typing judgment for the moment, the signature below describes definitions that project the even- and odd-indexed substreams (referred to by y) of some input stream (referred to by x) at half of the original depth. Note that indexing begins at zero. | If the processor issues a “get,” then the head of the input stream is consumed, recursing on its tail. Otherwise, the output stream is constructed recursively, first issuing the element received from the processor. It is clear that the program terminates by lexicographic induction on (i, j)...
For space, we omit the process terms. Of importance is the instance of the call rule for the recursive call to eat: the check i−1 < i verifies that the process terminates, and the loop [(i−1)/i][z/x]D ... | A |
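The even/odd projection described above can be mimicked with lazy Python generators (an illustrative analogy only; `evens` and `odds` stand in for the session-typed processes, and no depth indexing is modeled). Each keeps or drops the head and then delegates to the other on the tail, so elements are computed on demand.

```python
from itertools import count, islice

def evens(s):
    """Even-indexed substream: keep the head, then the odds of the tail."""
    yield next(s)
    yield from odds(s)  # s has already been advanced past its head

def odds(s):
    """Odd-indexed substream: drop the head, then the evens of the tail."""
    next(s)
    yield from evens(s)

print(list(islice(evens(count()), 4)))  # [0, 2, 4, 6]
print(list(islice(odds(count()), 4)))   # [1, 3, 5, 7]
```

The mutual recursion mirrors the definitions in the excerpt: `odds` is just `evens` applied to the tail.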
An intuitive approach to reduce overhead for the owner is to store the media contents in a cloud platform and, with the help of the cloud, share the media contents to the authorized users. It is evolving into an emerging technique called cloud media sharing [3, 4]. In this technique, on the one hand, the owner can make... | Implement privacy-preserving access control. On the one hand, the cloud should be prevented from obtaining the private plaintext of the data it encounters, including the owner’s media content, the users’ fingerprints, and the LUTs. On the other hand, only users authorized by the owner can access the media content.
| Problem 1: Data privacy leakage and access control in the cloud. On the one hand, the cloud service provider could be curious about the data it encounters. On the other hand, it is a challenge to implement access control over the media content without direct control by the owner.
| An intuitive approach to reduce overhead for the owner is to store the media contents in a cloud platform and, with the help of the cloud, share the media contents to the authorized users. It is evolving into an emerging technique called cloud media sharing [3, 4]. In this technique, on the one hand, the owner can make... |
The threats considered in this paper come from three entities: users, the owner, and the cloud. First, users are assumed to be malicious: they could illegally redistribute the owner’s media content in the hope that this behavior will not be detected. Second, the owner is also assumed to be malicious, and may try to o... | B |
Modeling feature interactions is a crucial aspect of predictive analytics and has been widely studied in the literature. FM Rendle (2010) is a popular method that learns pairwise feature interactions through vector inner products. Since its introduction, several variants of FM have been proposed, including Field-aware ... | One of the main limitations of FM is that it is not able to capture higher-order feature interactions, which are interactions between three or more features. While higher-order FM (HOFM) has been proposed Rendle (2010, 2012) as a way to address this issue, it suffers from high complexity due to the combinatorial expans... | Neural Factorization Machines (NFM) He and Chua (2017) design a bi-interaction layer to learn the pairwise feature interaction and apply DNN to learn the higher-order ones.
Wide&Deep Cheng et al. (2016) introduces a hybrid architecture containing both shallow and deep components to jointly learn low-order and high-orde... |
As deep neural networks (DNNs) have proven successful in a variety of fields, researchers have begun using them to learn high-order feature interactions due to their deeper structures and nonlinear activation functions. The general approach is to concatenate the representations of different feature fields and feed the... | However, like the other DNN-based approaches, these models learn high-order feature interactions in an implicit, bit-wise manner and may lack transparency in their feature interaction modeling process and model outputs. As a result, some studies have attempted to learn feature interactions in an explicit fashion throug... | C |
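For concreteness, FM's pairwise-interaction score (bias + linear + factorized second-order terms) can be sketched as follows, using the standard O(kn) reformulation of the pairwise sum. This is a generic illustration of Rendle's formulation, not code from any of the cited systems.

```python
def fm_score(x, w0, w, V):
    """Factorization machine score:
        w0 + sum_i w[i]*x[i] + sum_{i<j} <V[i], V[j]> * x[i]*x[j],
    with the pairwise sum computed via the O(k*n) identity
        0.5 * sum_f [(sum_i V[i][f]*x[i])^2 - sum_i (V[i][f]*x[i])^2]."""
    n, k = len(x), len(V[0])
    score = w0 + sum(wi * xi for wi, xi in zip(w, x))
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(n))
        s2 = sum((V[i][f] * x[i]) ** 2 for i in range(n))
        score += 0.5 * (s * s - s2)
    return score

s = fm_score([1.0, 2.0, 0.5], 0.5, [0.1, 0.2, 0.3],
             [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 2.65
```

Each feature i owns a k-dimensional factor V[i]; the strength of the (i, j) interaction is their inner product, which is what makes the model generalize to feature pairs never seen together.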
Moreover, our variant relying on the open-loop step size γ_t = 2/(t+2) allows us to establish an O(1/t) convergence rate for the Frank-Wolfe gap, is agnostic... | This means that Theorems 2.4 and 2.6 effectively bound the number of ZOO, FOO, DO, and LMO oracle calls needed to achieve a target primal gap or Frank-Wolfe gap accuracy ε as a function of T_ν and ε; note... | Table 1:
Number of iterations needed to achieve an ε-optimal solution for Problem 1.1. We denote line search by LS, zeroth-order oracle by ZOO, second-order oracle by SOO, domain oracle by DO, local linear optimization oracle by LLOO, and the assumption that 𝒳 is polyhed... |
The FOO and LMO oracles are standard in the FW literature. The ZOO oracle is often implicitly assumed to be included with the FOO oracle; we make this explicit here for clarity. Finally, the DO oracle is motivated by the properties of generalized self-concordant functions. It is reasonable to assume the availability o... | We show that a small variation of the original Frank-Wolfe algorithm [Frank & Wolfe, 1956] with an open-loop step size of the form γ_t = 2/(t+2), where t is the iteration count, is all that is needed ... | B |
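A minimal sketch of Frank-Wolfe with the open-loop step size γ_t = 2/(t+2), assuming the probability simplex as the feasible set (whose LMO just returns the vertex with the smallest gradient coordinate) and an illustrative quadratic objective; this is not the paper's setting, only a demonstration of the step-size rule.

```python
def frank_wolfe_simplex(grad, x0, steps=500):
    """Frank-Wolfe with the open-loop step size gamma_t = 2/(t+2).
    Feasible set: probability simplex; its LMO returns the vertex
    e_i whose gradient coordinate is smallest."""
    x = list(x0)
    for t in range(steps):
        g = grad(x)
        i = min(range(len(x)), key=lambda j: g[j])
        gamma = 2.0 / (t + 2.0)
        x = [(1.0 - gamma) * xj for xj in x]  # convex combination with ...
        x[i] += gamma                         # ... the LMO vertex e_i
    return x

# Illustrative objective: f(x) = ||x - c||^2 with c in the simplex,
# so the unique minimizer is x = c.
c = [0.2, 0.5, 0.3]
x = frank_wolfe_simplex(lambda x: [2.0 * (xj - cj) for xj, cj in zip(x, c)],
                        [1.0, 0.0, 0.0])
```

No line search, no gradient norms, no projections: the step size depends only on the iteration count, which is exactly what makes the schedule "agnostic" in the sense of the excerpt.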
Informally speaking, the key observations are that in the former case, by Lemma 4.8, (a suffix of) the active path must form an odd cycle.
A very convenient property of odd cycles is that as soon as they are discovered by the algorithm, their arcs can never belong to two distinct structures of the free vertices. | Then, we argue that eventually, the odd cycle formed by {a_1, …, a_j} can be used to extend a short active path to a_j... | The rough idea of the proof is as follows. First, we observe that having a small number of short augmenting paths is a certificate for a good approximation, as formalized in Lemma 5.9. We use this observation to show in Lemma 5.10 that whenever we do not have a good approximation yet, we must find many augmenting paths... | From this, we can inductively derive that eventually, either all {a_1, …, a_k} form an odd cycle or an augmentation has been found involving some of these arcs.
O... | Otherwise, we will find an augmentation, and thus an augmenting path satisfying one of the two desired properties has been found.
This property is formalized in Observation 4.2 and the process for finding these odd cycles is formalized in Definition 4.3 and Lemma 4.4. | D |
In the second part of this paper, we propose a broadcast-like CPP algorithm (B-CPP) that allows for asynchronous updates of the agents: at every iteration of the algorithm, only a subset of the agents wake up to perform prescribed updates. Thus, B-CPP is more flexible, and due to its broadcast nature, it can further sa... |
We propose CPP – a novel decentralized optimization method with communication compression. The method works under a general class of compression operators and is shown to achieve linear convergence for strongly convex and smooth objective functions over general directed graphs. To the best of our knowledge, CPP is the... | For strongly convex and smooth objective functions, [57] first considered a linearly convergent gradient tracking method based on a specific quantizer.
More recently, the paper [52] introduced LEAD that works with a general class of compression operators and still enjoys linear convergence. Some recent developments can... | In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP... | In this paper, we consider decentralized optimization over general directed networks and propose a novel Compressed Push-Pull method (CPP) that combines Push-Pull/𝒜ℬ with a general class of unbiased compression operators. CPP enjoys large flexibility in both the com... | A |
SPPs cover a wider range of problems than minimization ones and have numerous important practical applications [6].
These include well-known examples from game theory and optimal control [7]. In recent years, saddle point problems have become popular in several other respects. | One can note a branch of recent work devoted to solving non-smooth problems by reformulating them as saddle point problems [8, 9], as well as applying such approaches to image processing
[10, 11]. Recently, significant attention has been devoted to saddle point problems in machine learning. For example, Generative Adversarial Net... | To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms, and derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile... |
We adapt the proposed algorithm for training neural networks. We compare our algorithms: the sliding type (Algorithm 1) and the local-method type (Algorithm 3). To the best of our knowledge, this is the first work that compares these approaches in the scope of neural networks, as previous studies were limited to simpler... |
Furthermore, many personalized federated learning problems utilize a saddle point formulation, in particular Personalized Search Generative Adversarial Networks (PSGANs) [22]. As mentioned in the examples above, saddle point problems often arise as an auxiliary tool for the minimization problem. It turns out ... | A |
A (C)CE MS provides a distribution that is in equilibrium over the set of joint policies found so far, Π^{0:t}. For the algorithm to have converged, it needs to also be in equilibrium over the set of all possible joint policies, Π*... |
In Section 2 we provide background on a) correlated equilibrium (CE), an important generalization of NE, b) coarse correlated equilibrium (CCE) (Moulin & Vial, 1978), a similar solution concept, and c) PSRO, a powerful multi-agent training algorithm. In Section 3 we propose novel solution concepts called Maximum Gini ... | We evaluate a number of (C)CE MSs in JPSRO on pure competition, pure cooperation, and general-sum games (Section H). All games used are available in OpenSpiel (Lanctot et al., 2019). More thorough descriptions of the games used can be found in Section F. We use an exact BR oracle, and exactly evaluate policies in the m... |
PSRO consists of a response oracle that estimates the best response (BR) to a joint distribution of policies. Commonly the response oracle is either a reinforcement learning (RL) agent or a method that computes the exact BR. The component that determines the distribution of policies that the oracle responds to is call... | We have shown that JPSRO converges to an NF(C)CE over joint policies in extensive form and stochastic games. Furthermore, there is empirical evidence that some MSs also result in high value equilibria over a variety of games. We argue that (C)CEs are an important concept in evaluating policies in n-player, general-sum ... | B |
Pr_{S ∼ D^{(n)}; V ∼ M(S, A), Q ∼ A(V)} [ |Q(D^{(i)}... > ε² ]. |
In order to leverage Lemma 3.5, we need a stability notion that implies Bayes stability of query responses in a manner that depends on the actual datasets and the actual queries (not just the worst case). In this section we propose such a notion and prove several key properties of it. Missing proofs from this section ... | In this section, we give a clean, new characterization of the harms of adaptivity. Our goal is to bound the distribution error of a mechanism that responds to queries generated by an adaptive analyst.
This bound will be achieved via a triangle inequality, by bounding both the posterior accuracy and the Bayes stability ... | Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific q and K(·, v) as discussed in Section 6. The second part of this Lemma implies that bounding the appropriate divergence is necessary and sufficient... |
The contribution of this paper is two-fold. In Section 3, we provide a tight measure of the level of overfitting of some query with respect to previous responses. In Sections 4 and 5, we demonstrate a toolkit to utilize this measure, and use it to prove new generalization properties of fundamental noise-addition mecha... | A |
For each u ∈ χ⁻¹(𝖢̇) we perform a number of 𝒪(n+m)-time operations and run the dynamic programming algo... |
Using the previous lemmas, the problem of finding a reducible single-tree FVC reduces to finding a coloring that properly colors a simple reducible FVC. We generate a set of colorings that is guaranteed to contain at least one such coloring. To generate this set we use the concept of a universal set. | Note that the condition |N_G(F)| ≤ |C| + 1 trivially holds for any single-tree FVC. We will show that, given a reducible FVC (C, F), we can efficiently reduce to a s... | Similar to the algorithm from Lemma 5.8, we can use two (n+m, 𝒪(k⁵z²))-universal sets to create a set of c... |
Given a multigraph G and a coloring χ of G that properly colors some simple reducible FVC (C, F), a reducible FVC (C′, F′)... | A |
Discriminative approaches: Liu et al. [94] proposed a discriminative approach named SimOPA to verify whether a composite image is rational in terms of the foreground object placement. Particularly, they feed the concatenation of composite image and foreground mask into a binary classification network to predict a rati... |
Discriminative approaches: Liu et al. [94] proposed a discriminative approach named SimOPA to verify whether a composite image is rational in terms of the foreground object placement. Particularly, they feed the concatenation of composite image and foreground mask into a binary classification network to predict a rati... | Early deep learning based image harmonization methods target at making the harmonized images indistinguishable from real images. For instance, Zhu et al. [209] explored predicting the realism of an image using a CNN classifier. With such realism predictor, they learn the color transformation for the foreground to achie... | Similar to FOPA [111], Zhu et al. [211] proposed to predict the rationality scores of all scales and locations, based on the interaction output between foreground and background using transformer [158]. Zhu et al. [211] also explored using unlabeled images with deliberately designed loss functions for object placement ... | Generative approaches:
Tan et al. [145] proposed to predict the location and scale of the inserted object by taking the background image and object layout as input. Besides, the bounding box prediction task is converted to a classification task by discretizing the locations and scales. | C |
Table I provides details on the properties of the collected data, including data range, size, and availability. It is important to note that due to limitations in data availability, not all types of data are accessible for each city. For ease of reference, we have compiled a list of notations used in this paper in Tabl... | In order to facilitate a clear understanding of the data used in this study, we have classified all taxi-related mobility data (including flow, pickup, and idle driving and traffic speed data) as service data, as they pertain to the operational states of transport service providers. Accordingly, all other data have bee... | In addition to the collection and processing of data, it is essential to identify and quantify the correlations between sub-datasets in CityNet to gain insights into the effective utilization of the multi-modal data. In this section, we leverage data mining tools to explore and visualize the relationships between servi... |
TABLE II: The sub-datasets of taxi mobility, road connectivity, and traffic speed for all cities are described by their respective statistical features. In particular, the temporal granularity for in/outflow data is 30 minutes, while pickup, idle driving, and traffic speed data are recorded at 10-minute intervals. The... | Mobility data: The mobility data in CityNet primarily consists of taxi movements, which provide valuable insights into citizen activities and the state of the transportation network. For instance, region-wise taxi flows can reveal urban crowd movement patterns, while taxi pickup and idle driving data can serve as proxi... | C |
Moreover, simply having an estimate of the uncertainty that satisfies condition (2) is not sufficient. A trivial example would be the situation where a model always outputs the full target space as a prediction region. It clearly satisfies the validity condition (2), but it is hard to extract any meaning from the resu... | In this section the models that predict the lower and upper bounds of prediction intervals are considered, for example the α/2- and (1−α/2)-quantile estimates for a given significance level α. For this class of estimators a reasonable choice of ... | To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average ... | In Fig. 1, both the coverage degree, average width and R²-coefficient are shown. For each model, the data sets are sorted according to increasing R²-coefficient (averaged over th...
where the functions u, l denote the upper and lower bounds of the prediction interval produced by Γ. Depending on the context (this includes both the type of data and the class of models) other, more informative, measures can be considered. However, since this study aims to be as ... | D |
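Both quantities discussed here, empirical validity (coverage) and informativeness (mean interval width), are straightforward to compute; a minimal sketch, with Γ represented simply by precomputed per-sample bounds (the helper name is illustrative):

```python
def coverage_and_width(y, lower, upper):
    """Empirical coverage (fraction of targets inside their interval)
    and mean interval width, for per-sample bounds l <= u."""
    inside = sum(l <= yi <= u for yi, l, u in zip(y, lower, upper))
    mean_width = sum(u - l for l, u in zip(lower, upper)) / len(y)
    return inside / len(y), mean_width

cov, width = coverage_and_width([1.0, 2.0, 3.0], [0.0, 0.0, 4.0], [2.0, 1.0, 5.0])
```

A degenerate predictor that always returns the entire target range drives coverage to 1 while blowing up the mean width, which is exactly the failure mode described in the excerpt.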
EMOPIA is a dataset of pop piano music collected recently by \textcite{emopia} from YouTube for research on emotion-related tasks (https://annahung31.github.io/EMOPIA/).
It has 1,087 clips (each around 30 seconds) segmented from 387 songs, covering Japanese anime, Korean & Western pop song covers, movie soundtracks and p... | There is little performance difference between REMI and CP in this task.
Fig. 7 further shows that the evaluated models can fairly easily distinguish between high arousal and low arousal pieces (i.e., “HAHV, HALV” versus “LALV, LAHV”), but they have a much harder time along the valence axis (e.g., “HAHV” versus “HALV” ... | We use this dataset for the emotion classification task. As Tab. 1 shows, the average length of the pieces in the EMOPIA dataset is the shortest among the five, since they are actually clips manually selected by dedicated annotators \parencite{emopia} to ensure that each performance expresses a single emotion.
|
Tab. 2 shows that the accuracy on our 6-class velocity classification task is not high, reaching 52.11% at best. This may be due to the fact that velocity is rather subjective, meaning that musicians can perform the same music piece fairly differently. Moreover, we note that the data is highly imbalanced, with the lat... | The emotion of each clip has been labelled using the following 4-class taxonomy: HAHV (high arousal high valence); LAHV (low arousal high valence); HALV (high arousal low valence); and LALV (low arousal low valence). This taxonomy is derived from Russell’s valence-arousal model of emotion \parencite{russell}, where v... | D |
Now, observe that if the block to the left is also of type A, then a respective block from Z(S) is (0,1,0) – and when we add the backward carry (0,0,1) to it, we obtain the forward carry to the rightmost block. And regardless of the value of t... |
Now, observe that if the block to the left is also of type A, then a respective block from Z(S) is (0,1,0) – and when we add the backward carry (0,0,1) to it, we obtain the forward carry to the rightmost block. And regardless of the value of t... | Finally, note that the aforementioned forward carry resulting from the backward carry appears in the block which has to be equal to (0,0,1) (as it has to be the second case above), so it turns it into (1,0,1) and it does not generate any future carries.
| In any case, the forward carry to the (i+1)-th block cannot exceed (1,1,0). However, since the (i+1)-th blocks of Z(S) and Z(S_2) are (0,...
Therefore, the only possible backward carry from the block of type A to the block of type B has to be of the form (0,0,1). However, this will be combined with a block (0,1,0) from Z(S) – thus, the sum of the blocks from Z(S)... | B |