Column types: `context` (string, 250–3.86k chars), `A` (string, 250–5.11k), `B` (string, 250–3.39k), `C` (string, 250–8.2k), `D` (string, 250–5.02k), `label` (4 classes).

| context | A | B | C | D | label |
|---|---|---|---|---|---|
| …$\frac{f_{n-1}(x)}{f_{n}(x)}$. $\frac{f_{n}(x)}{f_{n}^{\prime}(x)}=$ ... | …$\frac{f_{n-1}(x)}{f_{n}(x)}$. $\frac{f_{n}(x)}{f_{n}^{\prime}(x)}=$ ... | …$\frac{f_{n-2}(x)}{f_{n-1}(x)}$. $\frac{f_{n-1}(x)}{f_{n}(x)}=$ ... | $g_{2}(x)f_{n}^{\prime}(x)=g_{1}(x)f_{n}(x)+g_{0}(x)f_{n-1}(x);$ ... | $a_{1,n-1}f_{n}(x)=(a_{2,n-1}+a_{3,n-1}x)f_{n-1}(x)-a_{4,n-1}f_{n-2}(x),$ ... | D |
| These include the recognition tree of Leedham-Green and O’Brien [9], which, for example, allows the computation of the composition factors of a finite matrix group in polynomial time [25] and is implemented in the computational algebra package Magma [14], as well as a data structure | There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well-chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and application... | The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in... | One important task in this context is writing elements of classical groups as words in standard generators using SLPs. This is done in Magma [14] using the results of Elliot Costi [6] and in GAP using the results of this paper, see Section 6. Other rewriting algorithms also exist; for example, Cohen et al. [26] present a... | Note that a small variation of these standard generators for $\textnormal{SL}(d,q)$ is used in Magma [14] as well as in algorithms to verify presentations of classical groups, see [12], where only the generator $v$ is slightly different in the two scenarios when $d$... | C |
| As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ... | It is essential for the method’s performance that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it would be reasonable to solve them locally using patches of elements. We note that the ide... | As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | The idea of using exponential decay to localize global problems was already considered by the interesting approach developed under the name of Localized Orthogonal Decomposition (LOD) [MR2831590, MR3591945, MR3246801, MR3552482], which is related to ideas of Variational Multiscale Methods [MR1660141, MR2300286]. In the... | D |
| We remark that the previously best known algorithms for finding the minimum area / perimeter all-flush triangle take nearly linear time [6, 1, 2, 3, 23], that is, $O(n\log n)$ or $O(n\log^{2}n)$... | in the Rotate-and-Kill process, and we are at the beginning of another iteration $(b^{\prime},c^{\prime})$ satisfying (2). | The inclusion / circumscribing problems usually admit the property that the set of locally optimal solutions is pairwise interleaving [6]. Once this property is admitted and $k=3$, we show that an iteration process (also referred to as Rotate-and-Kill) can be applied for searching all the locally optim... | Using a Rotate-and-Kill process (which is shown in Algorithm 5), we find out all the edge pairs and vertex pairs in $\mathsf{U}_{r,s,t}$ that are not G-dead. | Then, during the Rotate-and-Kill process, the pair $(e_{b},e_{c})$ will meet all pairs that are not DEAD, which implies that the algorithm finds the minimum perimeter (all-... | B |
| Due to the importance of information propagation for rumors and their detection, there are also different simulation studies [25, 27] about rumor propagation on Twitter. Those works provide relevant insights, but such simulations cannot fully reflect the complexity of real networks. Furthermore, there are recent work... | As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on t... | Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys... | at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text content, which is the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha... | We tested all models by using 10-fold cross validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; TS-SVM ... | B |
| In a follow-up work Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate, for tails for which $\ell^{\prime}(u)$ is of the form $\exp(-u^{\nu})$... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a... | The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile continuing to optimize long after we have zero training ... | Perhaps most similar to our study is the line of work on understanding AdaBoost in terms of its implicit bias toward large $L_{1}$-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on th... | A |
| For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even... | To construct the training dataset, we collected rumor stories from the rumor tracking websites snopes.com and urbanlegends.about.com. In more detail, we crawled 4300 stories from these websites. From the story descriptions we manually constructed queries to retrieve the relevant tweets for the 270 rumors with highest i... | We use the same dataset described in Section 4.1. In total – after cutting off 180 events for pre-training the single-tweet model – our dataset contains 360 events and 180 of them are labeled as rumors. As a rumor is often a long-circulating story (friggeri2014rumor), this results in a rather long time span. In this w... | Training data for single tweet classification. An event might include sub-events for which relevant tweets are rumorous. To deal with this complexity, we train our single-tweet learning model only with manually selected breaking and subless events from the above dataset. In the end, we used 90 rumors and 90 news assoc... | The time period of a rumor event is sometimes fuzzy and hard to define. One reason is that a rumor may have been triggered long ago and kept existing, but did not attract public attention. However, it can be triggered by other events after an uncertain time and suddenly spread as a bursty event. E.g., a rumor... | B |
| We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ... | Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at aspect-level, i.e., time and event type. Thus, we... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | A |
| $R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\},$ ... | one uses $p(\theta_{t}|\mathcal{H}_{1:t})$ to compute the probability of an arm being optimal, i.e., $\pi(A|x_{t+1},\mathcal{H}_{1:t})=\mathbb{P}(A=a^{*}_{t+1}|x_{t+1},\theta_{t},\ldots)$... | the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ... | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many. TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018]. | D |
| In order to have a broad overview of different patients’ patterns over the one-month period, we first show the figures illustrating measurements aggregated by day of the week. For consistency, we only consider the data recorded from 01/03/17 to 31/03/17, where the observations are most stable. | Patient 17 has more rapid-insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i... | The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | The insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only difference is for patients 10 and 12, whose intakes are earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning. | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | C |
| Table 6: A summary of the quantitative results for the models with ($\oplus$) and without ($\ominus$) an ASPP module. The evaluation was carried out on five eye tracking datasets respectively. Each network was independently trained 10 times resulting in a distribution of values characterized b... | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that result... | Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which... | This representation constitutes the input to an Atrous Spatial Pyramid Pooling (ASPP) module Chen et al. (2018). It utilizes several convolutional layers with different dilation factors in parallel to capture multi-scale image information. Additionally, we incorporated scene content via global average pooling over the... | In this work, we laid out three convolutional layers with kernel sizes of $3\times 3$ and dilation rates of 4, 8, and 12 in parallel, together with a $1\times 1$ convolutional layer that could not learn new spatial dependencies but nonlinearly combined existing feature maps. Image-level context was rep... | A |
| For this example marking sequence, it is worth noting that marking the many occurrences of $e$ joins several individual marked blocks into one marked block. This also intuitively explains the correspondence between the locality number and the maximum number of occurrences per symbol (in condensed words): if th... | In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into grap... | A repetitive structure often leads to high locality. For example, note that tutustuttu from above is nearly a repetition. Regarding the question of how repetitions of a word affect its locality number, we can show the following result (see the Appendix for a proof). | The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local... | Regarding the locality of $Z_{i}$, note that marking $x_{2}$ leads to $2^{i-2}$ marked blocks; further, marking $x_{1}$... | B |
| Then, they segmented the RR intervals to 30 samples each and fed them to a network with two layers followed by a pooling layer and an LSTM layer with 100 units. The method was validated on MITDB and NSRDB, achieving an accuracy that indicates its generalizability. | In their article Luo et al. [79] utilized quality assessment to remove low-quality heartbeats, and two median filters for removing power line noise, high-frequency noise and baseline drift. Then, they used a derivative-based algorithm to detect R-peaks and time windows to segment each heartbeat. | Yang et al. [81] normalized the ECG and then fed it to a Stacked Sparse AE (SSAE) which they fine-tuned. They classify six types of arrhythmia, achieving an accuracy of 99.5%, while also demonstrating the noise resilience of their method with artificially added noise. | At each iteration the expert annotates the most uncertain ECG beats in the test set, which are then used for training, while the output of the network assigns the confidence measures to each test beat. Experiments performed on MITDB, INDB, SVDB indicate the robustness and computational efficiency of the method. | In [90] the authors added noise signals from the NSTDB to the MITDB and then used scale-adaptive thresholding WT to remove most of the noise and a denoising AE to remove the residual noise. Their experiments indicated that, when increasing the number of training data to 1000, the signal-to-noise ratio increases dramatically aft... | D |
| This demonstrates that SimPLe excels in a low-data regime, but its advantage disappears with larger amounts of data. Such behavior, with fast growth at the beginning of training but lower asymptotic performance, is commonly observed when comparing model-based and model-free methods (Wang et al. (2019)). As observed ... | We focused our work on learning games with 100K interaction steps with the environment. In this section we present additional results for settings with 20K, 50K, 200K, 500K and 1M interactions; see Figure 5 (a). Our results are poor with 20K interactions. For 50K th... | Finally, we verified if a model obtained with SimPLe using 100K is a useful initialization for model-free PPO training. Based on the results depicted in Figure 5 (b) we can positively answer this conjecture. Lower asymptotic performance is probably due to worse exploration. A policy pre-trained with SimPLe was... | The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good pol... | The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ... | B |
| A high-level overview of these combined methods is shown in Fig. 1. Although we choose the EEG epileptic seizure recognition dataset from the University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could be generalized to any kind of signal classification problem. | For the CNN modules with one and two layers, $x_{i}$ is converted to an image using learnable parameters instead of some static procedure. The one-layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels). | Here we also refer to a CNN as a neural network consisting of alternating convolutional layers, each one followed by a Rectified Linear Unit (ReLU) and a max pooling layer, with a fully connected layer at the end, while the term ‘layer’ denotes the number of convolutional layers. | Architectures of all $b_{d}$ remained the same, except for the number of output nodes of the last linear layer, which was set to five to correspond to the number of classes of $D$. An example of the respective outputs of some of the $m$... | The two-layer module consists of two 1D convolutional layers (kernel sizes of 3 with 8 and 16 channels) with the first layer followed by a ReLU activation function and a 1D max pooling operation (kernel size of 2). The feature maps of the last convolutional layer for both modules are then concatenated al... | B |
| Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... | The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we ... | The implementation of the energy criterion strategy has proven effective in facilitating autonomous locomotion mode transitions for the Cricket robot when negotiating steps of varying heights. Compared to step negotiation purely in rolling locomotion mode, the proposed strategy demonstrated significant enhancements in... | In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal... | Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... | B |
| In other words, the algorithm designer can hedge against untrusted advice, by a small sacrifice in the trusted performance. Thus we can interpret $r$ as the “risk” for trusting the advice: the smaller the $r$, the bigger the risk. Likewise, for the list update problem, our $(r,f(r))$... | All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat... | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... | We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem. Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size. | As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation. Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online alg... | A |
| `category.DICTIONARY[word] ← category.DICTIONA`... | Our approach to calculating $gv$, as we will see later, tries to overcome some problems arising from the valuation of words based only on information local to a category. This is carried out by, firstly, computing a word local value ($lv$) for every category, and secondly, c... | However, this instance of SS3 would not effectively fulfill our goals since terms would be valued simply and solely by their local raw frequency, which is precisely the problem that $gv$ computation tries to overcome. For instance, under this “local view” of words, highly discriminatory words for ... | That is, when $gv$ is applied to a word only, it outputs a vector in which each component is the global value of that word for each category $c_{i}$. For instance, following the above example, we have: | global value (green) in relation to the local value (orange) for the “depressed” category. The abscissa represents individual words arranged in order of frequency. Note that in the zone where stop words are located (close to 0 on the abscissa) the local value is very high (since they are highly frequent words) but the ... | A |
| We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021). We ado... | We run DMSGD, DGC (w/ mfm), DGC (w/o mfm) and GMC respectively to solve the optimization problem: $\min_{\mathbf{w}\in\mathbb{R}^{d}}F(\mathbf{w})$... | Since the server is typically the busiest node in parameter server architecture, we consider the communication cost on the server in our experiments. For DMSGD, which doesn’t use any communication compression techniques, the communication cost on the server includes receiving vectors from the $K$ workers and se... | In the experiments of (Lin et al., 2018), DGC gets far better performance on both accuracy and communication cost than quantization methods. Hence, we do not compare with quantization methods in this paper. We don’t use the warm-up strategy in the experiments. The momentum coefficient $\beta$ is set as 0.9... | We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021). We ado... | C |
| From the point of view of Sparse Dictionary Learning, SANs kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input the kernels of SANs are ECG beats) and the sparse activation map as the representation. The fact that SANs are wide... | $\varphi$ could be seen as an alternative formalization of Occam’s razor [38] to Solomonoff’s theory of inductive inference [39], but with a deterministic interpretation instead of a probabilistic one. The cost of the description of the data could be seen as proportional to the number of weights and the number o... | An advantage of SANs compared to Sparse Autoencoders [37] is that the constraint of activation proximity can be applied individually to each example instead of requiring the computation of a forward pass over all examples. Additionally, SANs create exact zeros instead of near-zeros, which reduces co-adaptation between instance... | From the point of view of Sparse Dictionary Learning, SANs kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input the kernels of SANs are ECG beats) and the sparse activation map as the representation. The fact that SANs are wide... | In neural networks sparseness can be applied on the connections between neurons, or in the activation maps [14]. Although sparseness in the activation maps is usually enforced in the loss function by adding an $L_{1,2}$ regularization or Kullback-Leibler... | B |
In the large-scale UAV ad-hoc networks, the number of UAVs is another feature that should be investigated. Since the demanding channel’s capacity should not be more than the channel’s size we provide, we limit the number of UAVs in the tolerance range which satisfies that each UAV’s channel selection is contented. In t... |
Fig. 12 shows how the number of UAVs affects the computation complexity of SPBLLA. Since the total number of UAVs varies, the goal functions differ. The goal functions’ values in the optimal states increase with the number of UAVs. Since the goal functions are summations of utility functions, ... | Fig. 12 presents the sketch diagram of a UAV’s utility as its power varies. The altitudes of the UAVs are fixed. When other UAVs’ power profiles change, the interference increases and the curve moves down. The high interference reduces the utility of the UAV. Fig. 12 also shows that utility decreases and increase...
where $A$, $B$ and $C$ are balance indices that weight the three utilities according to the post-disaster scenario. The ultimate goal of enlarging the network utility is to maximize the sum of the utility functions (9) of all UAVs, and we define the global utility function as the goal f... | A
are standard. The boundary conditions and closure for this model (namely,
definitions of the thermal fluxes $\mathbf{q}_{i}$ and $\mathbf{q}_{e}$, | $\underbrace{\overline{Q}_{\pi}}_{-\underline{\boldsymbol{\pi}}:\nabla\mathbf{v}}+\overline{Q}_{\zeta}\Big] = -\overline{\mathbf{v}}\cdot(\overline{\nabla}\,\overline{p}_{i})-\gamma\,\overline{p}\dots$ | $\dot{p}_{i} = -\mathbf{v}\cdot\nabla p_{i}-\gamma p_{i}\,\nabla\cdot\mathbf{v}+(\gamma-1)\left(-\nabla\cdot\mathbf{q}_{i}+Q_{ie}-\underline{\boldsymbol{\pi}}:\nabla\mathbf{v}\right)$ | viscous stress tensor $\underline{\boldsymbol{\pi}}$ and ion-electron
heat exchange rate $Q_{ie}$) will be discussed in section 3.2. | the species heat exchange term $\overline{Q}_{ie}$, the resistive
diffusion coefficient $\overline{\eta}$, and the heat flux density | C
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$... | Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it.
Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly | When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$... | $f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\,v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$ | Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality)
by $h_{A}$... | A
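The piecewise comparability function above can be transcribed directly; here `a` and `b` stand for the abstract truth degrees from the definition, and the `None` sentinel for null is an assumption of this sketch.

```python
NULL = None  # stand-in for the database null value

def comparability(u, v, a="a", b="b"):
    """Piecewise comparability function f_A = f_B from the definition above."""
    if u == v and u is not NULL:
        return 1          # equal, non-null values
    if u is not NULL and v is not NULL and u != v:
        return a          # distinct non-null values
    if u is NULL and v is NULL:
        return b          # both values missing
    return 0              # exactly one value missing
```

Note that `comparability(NULL, NULL)` returns `b` rather than `1`, i.e. reflexivity is deliberately relaxed on null, matching the discussion in the text.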
This phenomenon introduces a positive bias that may lead to asymptotically sub-optimal policies, distorting the cumulative rewards. The majority of analytical and empirical studies suggest that overestimation typically stems from the max operator used in the Q-learning value function. Additionally, the noise from appro... |
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments. This is a fundamentally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein... | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on variance and on the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout because in such an environment the optim...
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and widely differing predictions along the learning trajectory across different episodes b...
Figure 6 shows the loss metrics of the three algorithms in the CartPole environment. This implies that the Dropout-DQN methods provide more accurate gradient estimates of the policies across iterations of different learning trials than DQN. The rate of convergence of one of the Dropout-DQN methods has taken more iterations t... | C
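The overestimation attributed to the max operator can be reproduced with a tiny simulation (illustrative, not from the paper): every action has true value 0, yet the max over zero-mean noisy Q-estimates is biased upward.

```python
import numpy as np

rng = np.random.default_rng(0)
true_q = np.zeros(10)                 # all 10 actions have true value 0
n_trials = 10_000

# Zero-mean noise on each Q-estimate; the max over actions is still biased up.
noisy_q = true_q + rng.normal(0.0, 1.0, size=(n_trials, 10))
bias = noisy_q.max(axis=1).mean() - true_q.max()
print(f"average overestimation from the max operator: {bias:.2f}")
```

Even though each individual estimate is unbiased, selecting the maximum systematically picks positive noise, which is exactly the positive bias discussed above.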
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, ... | Mask R-CNN has also been used for segmentation tasks in medical image analysis such as automatically segmenting and tracking cell migration in phase-contrast microscopy (Tsai et al., 2019), detecting and segmenting nuclei from histological and microscopic images (Johnson, 2018; Vuola et al., 2019; Wang et al., 2019a, b... | V-Net (Milletari et al., 2016) and FCN (Long et al., 2015). Sinha and Dolz (2019) proposed a multi-level attention based architecture for abdominal organ segmentation from MRI images. Qin et al. (2018) proposed a dilated convolution base block to preserve more detailed attention in 3D medical image segmentation. Simil... |
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, ... | Bischke et al. (2019) proposed a cascaded multi-task loss to preserve boundary information from segmentation masks for segmenting building footprints and achieved state-of-the-art performance on an aerial image labeling task. He et al. (2017) extended Faster R-CNN (Ren et al., 2015) by adding a new branch to predict th... | A |
Interestingly, the Dense architecture achieves the best performance on MUTAG, indicating that in this case, the connectivity of the graphs does not carry useful information for the classification task.
The performance of the Flat baseline indicates that in Enzymes and COLLAB pooling operations are not necessary to impro... | Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d), the edges of the Laplacians at coarsening levels 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP.
| Contrary to graph classification, DiffPool and Top-$K$ fail to solve this task and achieve an accuracy comparable to random guessing.
By contrast, the topological pooling methods obtain an accuracy close to a classical CNN, with NDP significantly outperforming the other two techniques. | In Fig. 7, we report the training time for the five different pooling methods.
As expected, GNNs configured with GRACLUS, NMF, and NDP are much faster to train than those based on DiffPool and Top-$K$, with NDP being slightly faster than the other two topological methods.
When compared to other methods for graph pooling, NDP performs significantly better than other techniques that pre-compute the topology of the coarsened graphs, while it achieves a comparable performance with respect to state-of-the-art feature-based pooling methods. | C |
Fernández-Delgado et al. (2014) conduct extensive experiments comparing 179 classifiers on 121 UCI datasets (Dua & Graff, 2017). The authors show that random forests perform best, followed by support vector machines with a radial basis function kernel. Therefore, random forests are often considered as a reference for n... | Random forests are trained with 500 decision trees, which are commonly used in practice (Fernández-Delgado et al., 2014; Olson et al., 2018).
The decision trees are constructed up to a maximum depth of ten. For splitting, the Gini impurity is used and $\sqrt{N}$ features ... | Neural networks have become very popular in many areas, such as computer vision (Krizhevsky et al., 2012; Reinders et al., 2022; Ren et al., 2015; Simonyan & Zisserman, 2015; Zhao et al., 2017; Qiao et al., 2021; Rudolph et al., 2022; Sun et al., 2021), speech recognition (Graves et al., 2013; Park et al., 2019; Sun et... | Mapping random forests into neural networks is already used in many applications such as network initialization (Humbird et al., 2019), camera localization (Massiceti et al., 2017), object detection (Reinders et al., 2018, 2019), or semantic segmentation (Richmond et al., 2016).
State-of-the-art methods (Massiceti et a... | The generalization performance has been widely studied. Zhang et al. (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data. Bornschein et al. (2020) analyze the performance across different dataset sizes.
Olson et al. (2018) evaluate the performance of modern... | D |
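The random-forest configuration quoted above (500 trees, maximum depth ten, Gini impurity, $\sqrt{N}$ features per split) maps directly onto scikit-learn; the dataset here is a toy stand-in, not one of the UCI benchmarks.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=16, random_state=0)

# Mirrors the configuration in the text: 500 trees, depth <= 10,
# Gini impurity, sqrt(N) features considered at each split.
rf = RandomForestClassifier(
    n_estimators=500,
    max_depth=10,
    criterion="gini",
    max_features="sqrt",
    random_state=0,
).fit(X, y)
train_acc = rf.score(X, y)
```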
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ... | To answer this question, we propose the first policy optimization algorithm that incorporates exploration in a principled manner. In detail, we develop an Optimistic variant of the PPO algorithm, namely OPPO. Our algorithm is also closely related to NPG and TRPO. At each update, OPPO solves a Kullback-Leibler (KL)-regu... | The policy improvement step defined in (3.2) corresponds to one iteration of NPG (Kakade, 2002), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017). In particular, PPO solves the same KL-regularized policy optimization subproblem as in (3.2) at each iteration, while TRPO solves an equivalent KL-constrained s... |
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po... | step with $\alpha\rightarrow\infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K=H$ episodes and hence equivalently induces... | A
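The KL-regularized policy optimization subproblem mentioned above (as in (3.2)) has a closed-form multiplicative-weights solution, $\pi' \propto \pi \cdot \exp(\alpha Q)$. A minimal sketch with hypothetical Q-values; as the text notes, $\alpha \to \infty$ recovers one greedy policy-iteration step.

```python
import numpy as np

def kl_regularized_update(pi, q, alpha):
    """Closed-form maximizer of <p, q> - (1/alpha) * KL(p || pi) over the simplex."""
    logits = np.log(pi) + alpha * q
    w = np.exp(logits - logits.max())   # subtract max for numerical stability
    return w / w.sum()

pi = np.full(3, 1.0 / 3.0)              # current (uniform) policy
q = np.array([1.0, 0.0, -1.0])          # hypothetical Q-value estimates
new_pi = kl_regularized_update(pi, q, alpha=1.0)
greedy_pi = kl_regularized_update(pi, q, alpha=100.0)  # large alpha: near-greedy
```

Small $\alpha$ keeps the new policy close to the old one in KL divergence; large $\alpha$ concentrates the mass on the argmax action.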
Compared to ResNets, DenseNets achieve similar performance, allow for even deeper architectures, and are more parameter- and computation-efficient.
However, the DenseNet architecture is highly non-uniform, which complicates the hardware mapping and ultimately slows down training. | Section 5.1 explored the impact of several network quantization approaches and structured pruning on the prediction quality.
In this section, we use the well-performing LQ-Net approach for quantization and PSP (for channel pruning) to measure the inference throughput of the quantized and pruned models separately on an ... | In this regard, resource-efficient neural networks for embedded systems are concerned with the trade-off between prediction quality and resource efficiency (i.e., representational efficiency and computational efficiency). This is highlighted in Figure 1.
Note that this requires observing overall constraints such as pre... | By using depthwise-separable convolutions, the number of trainable parameters as well as the number of multiply-accumulate operations (MACs) can be substantially reduced.
It is empirically shown that this has little to no negative impact on prediction quality. | The challenge is to reduce the number of bits as much as possible while at the same time keeping the prediction accuracy close to that of a well-tuned full-precision DNN.
Subsequently, we provide a literature overview of approaches that train reduced-precision DNNs, and, in a broader view, we also consider methods that... | C |
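As a minimal illustration of the bit-width trade-off described above, here is plain symmetric uniform quantization (not LQ-Net, which learns its quantizers); the weight values are hypothetical.

```python
import numpy as np

def uniform_quantize(w, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1        # e.g. 127 representable magnitudes at 8 bits
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

w = np.array([-0.51, -0.02, 0.13, 0.49])
err8 = np.abs(w - uniform_quantize(w, 8)).max()
err2 = np.abs(w - uniform_quantize(w, 2)).max()
```

Fewer bits shrink the memory footprint but enlarge the round-off error, which is the accuracy-vs-efficiency tension the text describes.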
$(i_{\lambda,\lambda^{\prime}})_{*}(\omega_{0})=\omega_{1}+\omega_{2}$ |
$\omega_{2}$ is the degree-1 homology class induced by |
$\omega_{0}$ is the degree-1 homology class induced by | $\omega_{1}$ is the degree-1 homology class induced by
| and seeks the infimal $r>0$ such that the map induced by $\iota_{r}$ at the $n$-th homology level annihilates the fundamental class $[M]$ of $M$. This infimal value defines $\mathrm{FillRad}(M)$... | B
In our use case, we chose the Pima Indian Diabetes data set [62] to illustrate how t-viSNE can lead to a better overview, quality of the results, dimension understanding, and even performance improvements. The data set includes 768 female patients of Pima Indian heritage, aged between 21 and 81. The main task in this e... |
The main goal of the Shepard Heatmap is to offer a broad, simplified overview of the accuracy of the projection in terms of distance preservation: cells close to the main diagonal of the heatmap indicate that the respective pairs of instances have been represented in the 2-D space with distances that are comparable... |
Adaptive Parallel Coordinates Plot. Our first proposal to support the task of interpreting patterns in a t-SNE projection is an Adaptive PCP [59], as shown in Figure 1(k). It highlights the dimensions of the points selected with the lasso tool, using a maximum of 8 axes at any time to avoid clutter. The shown axes (... | Overall Accuracy
We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q... | After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ... | C |
When should a new nature-inspired algorithm be introduced?: The authors analyze the cases in which it is necessary to create novel algorithms. In their words, “They could be used as global optimizers, while a heuristic algorithm could be added for acting as local search technique for the solutions provided by the natur... |
A critical point of reflection associated with this explosion of proposals has been that novel metaphors do not lead to new solvers, and that comparisons suffer from serious methodological problems. Although there are increasingly more bio-inspired algorithms, many of them rely on so-claimed novel metaphors that do not cr...
In Section 7, we pay attention from a triple critical position as it was pointed out in [2], highlighting the good (a present and future plenty of exciting applications), the bad (novel metaphors not leading to innovative solvers, going deeper into the group of works that criticize the lack of novelty of the new propo... | The rest of this paper is organized as follows. In Section 2, we examine previous surveys, taxonomies, and reviews of nature- and bio-inspired algorithms reported so far in the literature. Section 3 delves into the taxonomy based on the inspiration of the algorithms. In Section 4, we present and populate the taxonomy b... | Due to “useless metaphors”, “lack of novelty” and “poor experimental validation and comparison”, in [16] authors took the decision in this letter to “call upon all editors-in-chief in the field to adapt their editorial policies” to reject the publication of novel metaphor-based metaheuristics. More than 80 important re... | D |
In this paper, matrices and vectors are represented by uppercase and lowercase letters respectively.
A graph is represented as $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{W})$ and $|\cdot|$ is the size of some set. Vectors whose ... | Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node while the latter learns to distinguish whether an edge exists between two nodes directly.
In recent y... | As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method.
Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which $k$-means cannot handle. Therefore,...
In recent years, GCNs have been studied extensively to extend neural networks to graph-type data. How to design a graph convolution operator is a key issue and has attracted a great deal of attention. Most operators can be classified into 2 categories: spectral methods [24] and spatial methods [25]. | However, the existing methods are limited to graph-type data, while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on a graph, it is reasonable to consider how to employ GCNs to improve the performance of graph-based clustering methods.
In this paper, we propo... | C |
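The step from general (graph-free) data to graph-based clustering described above is commonly realized by building a k-NN affinity graph and clustering it spectrally; a sketch on toy data, where the dataset and hyperparameters are illustrative choices, not from the paper.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

# General vector data: build a k-NN affinity graph, then cluster the graph.
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
labels = SpectralClustering(
    n_clusters=2,
    affinity="nearest_neighbors",  # the k-NN graph captures the manifold
    n_neighbors=10,
    random_state=0,
).fit_predict(X)
# Agreement with ground truth, up to a permutation of the two labels.
acc = max(np.mean(labels == y), np.mean(labels != y))
```

On this manifold-shaped data, the graph-based method separates the two moons, while $k$-means on raw coordinates could not, illustrating the point made in the text.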
Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 20... | Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 20... | Requirements on Internet studies. The key requirements for conducting Internet studies upon which conclusions can be drawn include scalable measurement infrastructure, good coverage of the Internet and a representative selection of measurement’s vantage points. We summarise the limitations of the previous studies below... |
• Limited representativeness. Volunteer or crowd-sourcing studies, such as the Spoofer Project (Lone et al., 2018), are inherently limited due to bias introduced by the participants. These measurements are performed using a limited number of vantage points, which are set up in specific networks, and hence are...
Our work provides the first comprehensive view of ingress filtering in the Internet. We showed how to improve the coverage of the Internet in ingress-filtering measurements to include many more ASes that were previously not studied. Our techniques allow us to cover more than 90% of the Internet ASes, in contrast to best ... | B
While context did introduce more parameters to the model (7,575 parameters without context versus 14,315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer ...
One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regio... |
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design... | This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ... | The estimation of context by learned temporal patterns should be most effective when the environment results in recurring or cyclical patterns, such as in cyclical variations of temperature and humidity and regular patterns of human behavior generating interferents. In such cases, the recurrent pathway can identify use... | D |
We use the same definition for $A^{(1)}[i,B]$ for all $B\in\mathcal{B}_{i}^{(1)}$... | $A^{(2)}[i,B]$ := {a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(2)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$}.
| $A[i,B]$ := {a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$}.
| $A[i,B]$ := {a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$}. | $A^{(1)}[i,B]$ := {a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$} | D
We conclude this section by presenting a pair $S,T$ of semigroups without a homomorphism $S\to T$ or $T\to S$ where $S$ and $T$ possess typical properties of automaton semigroups, which makes them good candidates for also belong...
The word problem of a semigroup finitely generated by some set $Q$ is the decision problem whether two input words over $Q$ represent the same semigroup element. The word problem of any automaton semigroup can be solved in polynomial space and, under common complexity-theoretic assumptions, this cann... | The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem... | A semigroup arising in this way is called self-similar. Furthermore, if the generating automaton is finite, it is an automaton semigroup.
If the generating automaton is additionally complete, we speak of a completely self-similar semigroup or of a complete automaton semigroup. | A semigroup $S$ is generated by a set $Q$ if every element $s\in S$ can be written as a product $q_{1}\dots q_{n}$ of factors from $Q$... | A
As shown in Table 1, we present results when this loss is used on: a) a fixed subset covering 1% of the dataset, b) a varying subset covering 1% of the dataset, where a new random subset is sampled every epoch, and c) 100% of the dataset. Confirming our hypothesis, all varian... | It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in ...
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the p... | While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented... | A |
We downloaded the URL dump of the May 2019 archive (https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2019-22/cc-index.paths.gz). Common Crawl reports that the archive contains 2.65 billion web pages or 220 TB of uncompressed content, crawled between the 19th and 27th of May 2019. We applied a selection cr...
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da... | We selected those URLs which had the word “privacy” or the words “data” and “protection” from the Common Crawl URL archive. We were able to extract 3.9 million URLs that fit this selection criterion. Informal experiments suggested that this selection of keywords was optimal for retrieving the most privacy policies with... |
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ... |
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus PrivaSeer Corpus on privacy policies ... | B |
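The selection criterion described earlier (URLs containing the word "privacy", or both "data" and "protection") can be sketched as a simple keyword filter; the example URLs are hypothetical.

```python
def looks_like_privacy_policy_url(url: str) -> bool:
    """Keyword filter mirroring the selection criterion described in the text."""
    u = url.lower()
    return "privacy" in u or ("data" in u and "protection" in u)

urls = [
    "https://example.com/privacy-policy",
    "https://example.org/legal/data-protection",
    "https://example.net/blog/post-42",
]
selected = [u for u in urls if looks_like_privacy_policy_url(u)]
```

Such a filter is cheap enough to run over billions of archived URLs before any page content is fetched.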
Workflow. E1, E2, and E3 agreed that the workflow of StackGenVis made sense.
They all suggested that data wrangling could happen before the algorithms’ exploration, but also that it is usual to first train a few algorithms and then, based on their predictions, wrangle the data. | Interpretability and explainability are another challenge (mentioned by E3) in complicated ensemble methods, which is not necessarily always a problem depending on the data and the tasks. However, the utilization of user-selected weights for multiple validation metrics is one way towards interpreting and trusting the re...
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model leads us to stable design goals and analytical tasks that were realized by StackGenVis. With the c... |
To illustrate how to choose different metrics (and with which weights), we start our exploration by selecting the heart disease data set in StackGenVis (a). Knowing that the data set is balanced, we pick accuracy (weight... | D
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | $(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))$,
$(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))$ | cannot be adjacent to $\overline{2}$ nor $\overline{3}$,
and so $f^{\prime}$ is $[013]$ or $[010]$. | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | C
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation.
Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the languag... | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... | In text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | B |
In such mission-driven UAV networks, high-data-rate inter-UAV communications play a pivotal role. The mmWave band has abundant spectrum resources and is considered a potential avenue to support high-throughput data transmission for UAV networks [9, 10, 7]. If the Line-of-Sight (LoS) propagation is available, mmWave comm... |
When considering UAV communications with a UPA or ULA, a UAV is typically modeled as a point in space, without considering its size and shape. In fact, the size and shape can be exploited to support a more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduce... |
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da... | In such mission-driven UAV networks, high-data-rate inter-UAV communications play a pivotal role. The mmWave band has abundant spectrum resources and is considered a potential avenue to support high-throughput data transmission for UAV networks [9, 10, 7]. If the Line-of-Sight (LoS) propagation is available, mmWave comm... | For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac... | A
The sentences $\textsf{PRES}_{\phi}^{\infty}$ and $\textsf{PRES}_{\phi}$
are as required by Theorem 3.7. | Note that we assume that the number of behavior functions of column $j$ in $A$
is the same as the number of behavior functions of column $j^{\prime}$ in $B$ for every $j\in[m]$ and ever... | a Type-Behavior Partitioned Graph Vector associated to a graph representation $G_{\mathcal{A}}$ for a model $\mathcal{A}$ of $\phi$.
The sentence $\textsf{PRES}_{\phi}$... | We can then consider the vector of subgraphs $G_{\mathcal{A},\pi}$ and $G_{\mathcal{A},\pi,\pi^{\prime}}$... | Note that in a Type-Behavior Partitioned Graph Vector, information about $2$-types is coded in both the edge relation and in the partition, since the partition
is defined via behavior functions. Thus there are additional dependencies on sizes for a Type-Behavior Partitioned Graph Vector of a model of $\phi$... | D
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... | In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
|
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... | B |
Multilingual translation uses a single model to translate between multiple language pairs Firat et al. (2016); Johnson et al. (2017); Aharoni et al. (2019). Model capacity has been found crucial for massively multilingual NMT to support language pairs with varying typological characteristics Zhang et al. (2020); Xu et ... | For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et... | It is a common problem that increasing the depth does not always lead to better performance, whether with residual connections Li et al. (2022b) or other previous studies on deep Transformers Bapna et al. (2018); Wang et al. (2019); Li et al. (2022a), and the use of wider models is the usual method of choice for furthe... |
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ... |
To test the effectiveness of depth-wise LSTMs in the multilingual setting, we conducted experiments on the challenging massively many-to-many translation task on the OPUS-100 corpus Tiedemann (2012); Aharoni et al. (2019); Zhang et al. (2020). We tested the performance of 6-layer models following the experiment settin... | D |
topology $\uptau$ whenever $\forall U\in\uptau,\forall A\in U,\exists V\in\mathcal{B},A\in V\subseteq U$.
A ... | and $\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$ are the same, i.e.,
$\langle\uptau_{\subseteq_{i}}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\rangle=\langle\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\rangle$... | $A\in\llbracket\psi_{A}\rrbracket_{\operatorname{Struct}(\upsigma)}\subseteq\llbracket\varphi\rrbracket_{\operatorname{Struct}(\upsigma)}\subseteq U$. Therefore, $\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$... | $\llbracket\psi_{A}\rrbracket_{\operatorname{Struct}(\upsigma)}\in\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$ | $\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$
and $\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$... | D
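The base-for-a-topology condition stated above ($\forall U\in\uptau,\forall A\in U,\exists V\in\mathcal{B}$ with $A\in V\subseteq U$) can be checked mechanically on finite examples. This is a minimal sketch with illustrative sets, not code from the paper.

```python
# Hedged sketch of the base condition from the text: B is a base for
# the topology tau iff for every open set u and every point a in u
# there is some v in B with a in v and v a subset of u.

def is_base(base, topology):
    return all(
        any(a in v and v <= u for v in base)   # v <= u: subset test
        for u in topology for a in u
    )

X = frozenset({1, 2, 3})
tau = [frozenset(), frozenset({1}), frozenset({2, 3}), X]
B = [frozenset({1}), frozenset({2, 3})]
print(is_base(B, tau))  # True: the two proper opens generate every open set
```

Dropping `frozenset({2, 3})` from `B` makes the check fail, since no basic open around the point 2 fits inside the open set {2, 3}.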
We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen... |
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify... |
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, achieving the highest PSNR and SSIM and the lowest MDLD. Specifically, compared with the traditional methods [23, 24] based on hand-crafted features, our approach overcomes the scene l... | We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen...
The comparison results of the real distorted image are shown in Fig. 13. We collect the real distorted images from the videos on YouTube, captured by popular fisheye lenses, such as the SAMSUNG 10mm F3, Rokinon 8mm Cine Lens, Opteka 6.5mm Lens, and GoPro. As illustrated in Fig. 13, our approach generates the best rect... | D |
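As a reference point for the metrics compared above, PSNR is computed directly from the mean squared error, $\mathrm{PSNR} = 10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$ with peak value 255 for 8-bit images. The tiny flattened "images" below are illustrative, not data from the paper.

```python
import math

# Hedged sketch of the PSNR metric used in the comparison tables.
# Inputs are flattened pixel lists; peak is 255 for 8-bit images.

def psnr(img_a, img_b, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * math.log10(peak ** 2 / mse)

reference = [52, 55, 61, 59, 79, 61, 76, 61]
corrected = [54, 55, 60, 59, 78, 62, 76, 60]
print(round(psnr(reference, corrected), 2))
```

Here the MSE is 1.0, so the PSNR is roughly 48 dB; higher is better, matching the direction of the comparison in Table II.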
Table 3 shows the training time per epoch of SNGM with different batch sizes. When $B=128$, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with a size of 16, which cannot fully utilize the computation power. Hence, compared to other results, SNGM r... | Table 3 shows the training time per epoch of SNGM with different batch sizes. When $B=128$, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with a size of 16, which cannot fully utilize the computation power. Hence, compared to other results, SNGM r... | Please note that EXTRAP-SGD has two learning rates for tuning and needs to compute two mini-batch gradients in each iteration. EXTRAP-SGD requires more time than other methods to tune hyperparameters and train models.
Similarly, CLARS needs to compute extra mini-batch gradients to estimate the layer-wise learning rate ... |
A direct corollary is that the batch size is constrained by the smoothness constant $L$, i.e., $B\leq\mathcal{O}(1/L)$. Hence, we cannot increase the batch size casually in these SGD-based methods. Otherwise, it may slow down the convergence rate, and ... | argued that SGD with a large batch size needs to increase the number of iterations. Further, authors in [32]
observed that gradients at different layers of deep neural networks vary widely in the norm and proposed the layer-wise adaptive rate scaling (LARS) method. A similar method that updates the model parameter in a... | B |
Our main goal is to develop algorithms for the black-box setting. As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms ... |
We remark that if we make an additional assumption that the stage-II cost is at most some polynomial value $\Delta$, we can use standard SAA techniques without discarding scenarios; see Theorem 2.6 for full details. However, this assumption is stronger than is usually used in the literature for two-stage stocha... | Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ... | An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions.
To continue this example, there may be further constraints on $F_{I}$... |
Unfortunately, standard SAA approaches [26, 7] do not directly apply to radius minimization problems. On a high level, the obstacle is that radius-minimization requires estimating the cost of each approximate solution; counter-intuitively, this may be harder than optimizing the cost (which is what is done in previous ... | D |
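The sample-then-optimize step described above (draw scenarios from the black-box oracle, then run a polynomial-scenarios algorithm on the sample) can be sketched as a generic sample-average-approximation routine. The oracle, candidate set, and cost function below are illustrative stand-ins, not the paper's algorithm.

```python
import random

# Hedged SAA sketch: sample scenarios from a black-box oracle, then
# pick the candidate first-stage decision with the smallest empirical
# (sampled) expected cost. All concrete names are illustrative.

def saa_choose(candidates, sample_scenario, cost, n_samples=1000, seed=0):
    rng = random.Random(seed)
    scenarios = [sample_scenario(rng) for _ in range(n_samples)]

    def empirical_cost(x):
        return sum(cost(x, s) for s in scenarios) / n_samples

    return min(candidates, key=empirical_cost)

# Toy example: a scenario is a demand point on a line; the cost is the
# distance from the chosen facility location (the stage-II recourse).
best = saa_choose(
    candidates=[0.0, 0.5, 1.0],
    sample_scenario=lambda rng: rng.uniform(0.4, 0.6),
    cost=lambda x, s: abs(x - s),
)
print(best)  # 0.5, the candidate closest to the sampled demand region
```

Note this sketch estimates expected *cost*; as the row points out, radius-type objectives need extra care because estimating the cost of each approximate solution can itself be hard.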
In real networked systems, the information exchange among nodes is often affected by communication noise, and the structure of the network often changes randomly due to packet dropouts, link/node failures and re-creations; these effects are studied in [8]-[10].
| such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), et al. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function... | However, a variety of random factors may co-exist in practical environment.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d... |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... | Besides, the network graphs may change randomly with spatial and temporal dependency (i.e. Both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent.) rather than i.i.d. graph sequences as in [12]-[15],
and additive and... | B |
Compared with generalization, the bucketization technique [33, 18] maintains excellent information utility because it preserves all the original QI values. However, most existing approaches cannot prevent identity disclosure, and the existence of individuals in the published table is likely to be disclosed [27]. Furthermore, t... |
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics... | Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. In contrast, the approaches of the $k$-anonymity family sanitize the origi... | In recent years, the massive digital information of individuals has been collected by numerous organizations. The data holders, also known as curators, use the data for data mining tasks; meanwhile, they also exchange or publish microdata for further comprehensive research. However, the publication of microdata poses cr... | Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ... | D
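The randomized-response mechanism mentioned above can be sketched in a few lines: each user reports the true bit with probability p and a uniformly random bit otherwise, and the curator debiases the aggregate. The parameter values below are illustrative.

```python
import random

# Hedged sketch of classic randomized response for local differential
# privacy. With probability p the user reports the true bit; otherwise
# a uniformly random bit. E[report] = p*f + (1-p)/2, which the curator
# inverts to get an unbiased estimate of the true frequency f.

def randomize(bit, p, rng):
    return bit if rng.random() < p else rng.randint(0, 1)

def estimate_frequency(reports, p):
    mean = sum(reports) / len(reports)
    return (mean - (1 - p) / 2) / p

rng = random.Random(42)
true_bits = [1] * 300 + [0] * 700          # true frequency f = 0.3
reports = [randomize(b, 0.5, rng) for b in true_bits]
print(round(estimate_frequency(reports, 0.5), 2))  # close to 0.3
```

Each individual report is plausibly deniable, yet the debiased aggregate concentrates around the true frequency as the number of users grows.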
We implement PointRend using MMDetection Chen et al. (2019b) and adopt the modifications and tricks mentioned in Section 3.3. Both X101-64x4d and Res2Net101 Gao et al. (2019) are used as our backbones, pretrained on ImageNet only. SGD with momentum 0.9 and weight decay 1e-4 is adopted. The initial learning rate is set... | Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62.... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | Table 3: PointRend’s performance on the testing set (track B). “EnrichFeat” means enhancing the feature representation of the coarse mask head and point head by increasing the number of fully-connected layers or their hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat bring little improvement; we guess... | As shown in Table 3, all PointRend models achieve promising performance. Even without ensemble, our PointRend baseline, which yields 77.38 mAP, has already achieved 1st place on the test leaderboard. Note that several attempts, like BFP Pang et al. (2019) and EnrichFeat, give no improvements over the PointRend baseline,... | D
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... |
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info... | A
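The Fourier-analytic objects described above (the coefficients $\hat f(A)$, the weights $|\hat f(A)|^2$, and the base-2 entropy with the convention $0\log 0:=0$) can be sketched numerically. The example function is illustrative, not from the note.

```python
import math
from itertools import product

# Hedged sketch: Fourier coefficients f_hat(A) = E_x[f(x) * prod_{i in A} x_i]
# of a function on {-1,1}^n, computed by brute force over all 2^n points.

def fourier_coefficients(f, n):
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for mask in range(2 ** n):
        A = [i for i in range(n) if mask >> i & 1]
        coeffs[tuple(A)] = sum(
            f(x) * math.prod(x[i] for i in A) for x in points
        ) / len(points)
    return coeffs

# f(x) = x_0 (a "dictator") has L2 norm 1, so by Parseval the weights
# {|f_hat(A)|^2} sum to 1 and form a probability distribution.
coeffs = fourier_coefficients(lambda x: x[0], 2)
weights = [c * c for c in coeffs.values()]
entropy = -sum(w * math.log2(w) for w in weights if w > 0)  # 0 log 0 := 0
print(sum(weights))  # 1.0 by Parseval
```

For the dictator function all the Fourier weight sits on the single set $\{0\}$, so the entropy of the weight distribution is 0.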
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inh... |
After showing the action-value function estimate is the optimistic upper bound of the optimal action-value function, we can derive the dynamic regret bound within one epoch via recursive regret decomposition. The dynamic regret within one epoch for Algorithm 1 with the knowledge of $B_{\bm{\theta},\mathcal{E}}$...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202... |
In practice, the transition function $\mathbb{P}$ is unknown, and the state space might be so large that it is impossible for the learner to fully explore all states. If we parametrize the action-value function in a linear form as $\langle\bm{\phi}(\cdot,\cdot),\bm{w}\rangle$...
Our proposed algorithm LSVI-UCB-Restart has two key ingredients: least-squares value iteration with upper confidence bound to properly handle the exploration-exploitation trade-off (Jin et al., 2020), and restart strategy to adapt to the unknown nonstationarity. Our algorithm is summarized in Algorithm 1. From a high-... | D |
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a... |
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,... | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... | D |
Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg... |
The results in Table 10 demonstrate that all variants of decentRL achieve state-of-the-art performance on Hits@1, empirically proving the superiority of using neighbor context as the query vector for aggregating neighbor embeddings. The proposed decentRL outperforms both decentRL w/ infoNCE and decentRL w/ L2, provid...
In Table 8, we present more detailed entity prediction results on open-world FB15K-237, considering the influence of different decoders. Our observations indicate that decentRL consistently outperforms the other methods across most metrics when using TransE and DistMult as decoders. Furthermore, we provide results on ... |
Table 6 and Table 7 present the results for conventional entity prediction. decentRL demonstrates competitive or even superior performance when compared to state-of-the-art methods on the FB15K and WN18 benchmarks, showcasing its efficacy in entity prediction. While on the FB15K-237 and WN18RR datasets, the performanc... | In the entity prediction task, we use four prominent datasets: (1) FB15K which is a dataset that has been widely used for many years and includes Freebase entities and their relations [11, 33, 66, 67]; (2) WN18 which is another extensively used dataset comprising entities and relations from WordNet [11, 30, 31, 32, 34,... | C |
We observe that our method performs the best in most of the games, in both the sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are ... |
We implement a CVAE-based exploration algorithm by modifying the prior of VDM to a standard Gaussian [footnote: The code is released at https://github.com/Baichenjia/CAVE_NoisyMinist (for Noisy-Mnist) and https://github.com/Baichenjia/CVAE_exploration (for other tasks) for reproducibility and further improvement]. For Noisy-Mn...
Nevertheless, the introduction of latent variables often brings instability to neural networks. For example, popular deep learning models like VAEs and GANs are shown to be unstable because of the stochasticity in their latent spaces [51, 52]. We find VDM performs generally well and shows small performance varianc...
(i) For the network architecture, the important hyper-parameters include the dimensions of the latent space $Z$, the dimensions of the state features $d$, and the use of skip-connections between the prior and generative networks. We add an ablation study in Tab. IV to perform a grid search. The result shows t... | B
$|f(x)-Q_{f,A}(x)|\leq\frac{\|f^{(n+1)}\|_{C^{0}(\Omega)}}{2^{n}(n+1)!}\,,\quad P_{A}=\mathrm{Cheb}_{n}^{1\mathrm{st}}\,.$ | Recently, Lloyd N. Trefethen [83] proposed a way of delivering a potential solution to the problem: for continuous functions $f:\Omega\longrightarrow\mathbb{R}$
that are analytic in the unbounded Trefethen domain (a generalization of a Bernstein ellipse) $N_{m,\rho}\subsetneq\Omega=[-1,1]^{m}$... | This result states that any sufficiently smooth function $f$ can be approximated by piecewise polynomial functions, which allows one to approximate $f$ by Hermite or spline interpolation.
Generalizations of this result rely on this fact and are formulated in a similar manner [23, 24, 26]. | Our result in Eq. (7.8) provides a similar bound on the approximation error in $m$D whenever the $k$-th derivatives of $f$ are known or bounded.
However, these bounds are usually unknown. By validating the proposed Trefethen approximation rates in the next section, we nevertheless provide a po... | Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to
scale sub-exponentially with the space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are abl... | C
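The one-dimensional Chebyshev error bound quoted above, $|f(x)-Q_{f,A}(x)| \le \|f^{(n+1)}\|_{C^0(\Omega)}/(2^n(n+1)!)$ at first-kind Chebyshev nodes, can be checked numerically. This sketch uses NumPy's `chebinterpolate` and $f=\sin$ (every derivative bounded by 1); it is an illustration, not the paper's $m$-dimensional construction.

```python
import math
import numpy as np

# Hedged numerical check of the 1-D Chebyshev interpolation bound:
# |f(x) - Q_{f,A}(x)| <= ||f^{(n+1)}||_{C^0(Omega)} / (2^n (n+1)!)
# on Omega = [-1, 1], at Chebyshev points of the first kind.

n = 8                                              # polynomial degree
coef = np.polynomial.chebyshev.chebinterpolate(np.sin, n)

xs = np.linspace(-1.0, 1.0, 2001)
max_err = np.max(np.abs(np.sin(xs) - np.polynomial.chebyshev.chebval(xs, coef)))
bound = 1.0 / (2 ** n * math.factorial(n + 1))     # ||sin^{(9)}||_inf <= 1

print(max_err <= bound)
```

With degree 8 the bound is about 1.1e-8, and the observed maximum error on a fine grid stays below it, consistent with the quoted estimate.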
Since the second-order moment terms in the threshold are unknown, we can replace them with unbiased estimates based on the collected samples when performing the two-sample test.
The term $\mathcal{J}_{n}$ dominates the threshold since it scales with... | Assumption 1(II) does not hold when the distributions $\mu$ and $\nu$ have unbounded supports.
In that case, we restrict the target distribution to a bounded support such that the probability of lying in that support is relatively large. | However, the two-sample tests based on concentration inequalities in Section III give conservative results in practice. We examine the two-sample tests using the projected Wasserstein distance via the permutation approach.
Specifically, we permute the collected data points for $N_{p}=100$... | However, the bound presented in [31] depends on the input dimension $d$ and focuses on the case $k=1$ only.
[32] slightly improves Assumption 1(II) to light-tail conditions, but the constants presented in the sample complexity bound are not characterized explicitly, | The computation of the projected Wasserstein distance was recently studied in [43, 32, 34].
We use the Riemannian gradient method discussed in [32, Algorithm 3] to compute the projected Wasserstein distance; the details of the corresponding algorithm are summarized in Appendix B. | C
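The permutation approach described above can be sketched generically: pool the two samples, recompute the statistic on $N_p$ random relabelings, and reject when the observed statistic is extreme. A mean-difference statistic stands in for the projected Wasserstein distance, and the data are illustrative.

```python
import random

# Hedged sketch of a permutation two-sample test. The paper's statistic
# is the projected Wasserstein distance; abs-mean-difference stands in.

def permutation_pvalue(xs, ys, stat, n_perm=100, seed=0):
    rng = random.Random(seed)
    observed = stat(xs, ys)
    pooled = xs + ys
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabeling
        if stat(pooled[:len(xs)], pooled[len(xs):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)            # add-one p-value estimate

mean_diff = lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b))
xs = [0.1, 0.2, 0.0, 0.3, 0.1, 0.2]
ys = [1.1, 1.3, 0.9, 1.2, 1.0, 1.4]
print(permutation_pvalue(xs, ys, mean_diff) < 0.05)  # well-separated samples
```

The permutation distribution calibrates the threshold from the data itself, which is why it tends to be less conservative than concentration-inequality thresholds.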
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervise... | While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i... | Specifically, we apply a DGM to learn the nuisance variables Z𝑍Zitalic_Z, conditioned on the output image of the first part, and use Z𝑍Zitalic_Z in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the details information captured in Z𝑍Z... |
The model has two parts. First, we apply a DGM to learn only the disentangled part, C, of the latent space. We do that by applying any of the above-mentioned VAEs¹ (footnote 1: in this exposition we use unsupervised-trained VAEs as our base models, but the framework also works with GAN-based or flow-based DGMs, supervise...) | Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables H can be partitioned into independent components C (i.e., the disentangled factors) and corre... | B |
Furthermore, we propose the Simulation Metric, based on depth-first search (DFS), which enables easy implementation and testing of complex structural-computer circuits. We confirmed the feasibility of this approach in an experiment based on an XOR gate produced by combining NAND, AND, and OR gates.
| The structure-based computer described in this paper is based on Boolean algebra, a system commonly applied in digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the logical values True and False as 1 and 0 and mathematically describes digital electrical si... | We examine the inputs through 18 test cases to see whether the circuit is acceptable. Next, we verify with DFS that the output is attainable for the actual pin-connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab... | Optical logic gates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the numbering shown in Fig. 5 can be applied to the ...
It is also expected that this research can be applied to the development of artificial-intelligence technologies such as deep learning in the future. In other words, the idea of structural computers is expected to be applied to semiconductors that generate a lot of heat, such as those used for computer vision tasks [8][9][10]... | D |
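The DFS-based simulation of a composed circuit can be illustrated with a small, self-contained sketch. The decomposition XOR(a, b) = AND(OR(a, b), NAND(a, b)) is one standard way to build XOR from NAND, AND, and OR; the vertex names here are hypothetical, not the exact circuit from the experiment:

```python
# Hedged sketch of DFS-based simulation of a gate network.

def nand(a, b): return 1 - (a & b)
def and_(a, b): return a & b
def or_(a, b): return a | b

# Each vertex maps to (gate function, input vertices); "a", "b" are input pins.
circuit = {
    "or1":   (or_,  ["a", "b"]),
    "nand1": (nand, ["a", "b"]),
    "xor":   (and_, ["or1", "nand1"]),
}

def evaluate(node, inputs, circuit):
    """Depth-first search: evaluate a vertex's inputs recursively, then its gate."""
    if node in inputs:                       # leaf: primary input pin
        return inputs[node]
    gate, children = circuit[node]
    return gate(*(evaluate(c, inputs, circuit) for c in children))

for a in (0, 1):                             # prints the XOR truth table
    for b in (0, 1):
        print(a, b, evaluate("xor", {"a": a, "b": b}, circuit))
```

The same recursive evaluation covers any acyclic gate network, matching the vertex-numbered search results described in the text.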
= 3(x) + 3(x^3 + 2x^2 + 3x + 3) + 4(2x^3 + 3x^2 + 4x + 2) | The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over 𝔽... | We now show that whenever f is a permutation function in 𝔽_q, the inverse function can be represented similarly over the same space S. First, we prove a condition of invertibility of f in terms of the ... | In this section, we focus on additional results on the linear representation of f when f is a monomial function. The following theorem re-establishes the invertibility condition for a monomial while adding results on the linear complexity.
| The paper is organized as follows. Section 2 focuses on linear representation for maps over finite fields 𝔽, develops conditions for invertibility, computes the compositional inverse of such maps, and estimates the cycle structure of permutation polynomials. In Section 3, this linear representat... | C |
The code used to perform nonnegative forward selection is based on stepAIC from MASS 7.3-47 (Venables \BBA Ripley, \APACyear2002). The optimization required for fitting the interpolating predictor is performed using the package lsei 1.2-0 (Y. Wang \BOthers., \APACyear2017). After optimization, coefficients smaller than... |
Table 1: Standardized measures of effect size (partial η²) for the interactions between the choice of meta-learner and the other experimental factors, for each of the four outcome measures of true positive rate, false positive rate, false discov...
Although we can average over the 100 replications within each condition, with 7 different meta-learners and 48 experimental conditions, this would still lead to 336 averages for each of the outcome measures. In our reporting of the results we will therefore focus only on the most important interactions between the met... | The values of partial η² obtained from the mixed ANOVAs for each of the four outcome measures are given in Table 1. Note that we are primarily interested in the extent to which differences between the meta-learners are moderated by the experiment...
Large or moderate effect sizes can be observed across all four outcome measures for the main effect of the meta-learner, as well as for the interactions with sample size and correlation structure. When accuracy or TPR is used as the outcome, the three-way interaction between meta-learner, sample size and correlation s... | C |
The mainstream anomaly detection methods are based on proximity, including distance-based and density-based methods [1, 2, 3]. They assume that normal objects are in a dense neighborhood, while anomalies stay far away from other objects or in a sparse neighborhood. |
Although research [7, 4] has shown the promise of dependency-based anomaly detection, there are still certain research gaps in this area that need attention. Firstly, existing dependency-based methods represent only a fraction of a much larger space of potential combinations of supervised methods and scoring functions for ...
To interpret an anomaly detected by DepAD, we begin by identifying variables with substantial dependency deviations. This is achieved by comparing the observed values of variables with their corresponding expected values. A larger deviation indicates a higher contribution of that variable to the anomaly. Furthermore, ... |
The dependency-based approach works under the assumption that anomalies deviate from the normal dependency among variables, and the extent of anomalousness is evaluated based on this deviation. While the proximity-based approach focuses on relationships among objects, the dependency-based approach emphasizes t... | Another line of research in anomaly detection exploits the dependency among variables, assuming normal objects follow the dependency while anomalies do not. Dependency-based methods [4, 5] evaluate the anomalousness of objects through how much they deviate from the normal dependency possessed by the majority of objects.
| D |
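As an illustration of the dependency-based idea (a toy sketch under assumed data, not the authors' method or code): predict each variable from the remaining ones and score each object by its total standardized deviation from the expected values.

```python
import numpy as np

# Toy dependency-based anomaly scoring: least-squares prediction of each
# variable from the others; deviation from the expected value is the score.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 2 * x1 + rng.normal(scale=0.1, size=n)   # x2 normally depends on x1
X = np.column_stack([x1, x2])
X[0] = [0.0, 5.0]                             # planted anomaly: breaks the dependency

def dependency_scores(X):
    n, d = X.shape
    dev = np.zeros_like(X)
    for j in range(d):
        A = np.column_stack([np.delete(X, j, axis=1), np.ones(n)])  # others + intercept
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef            # observed minus expected value
        dev[:, j] = np.abs(resid) / (resid.std() + 1e-12)
    return dev.sum(axis=1)                    # larger = more anomalous

print(int(np.argmax(dependency_scores(X))))   # 0: the planted anomaly
```

The per-variable deviations also support the interpretation step described above: the variable with the largest deviation contributes most to the anomaly.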
Figure 1: Illustration of the impact of the κ parameter (logistic case; the multinomial logit case closely follows): a representative plot of the derivative of the reward function. The x-axis represents the linear function x^⊤θ... | In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL, for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of... | In this work, we proposed an optimistic algorithm for learning under the MNL contextual bandit framework. Using techniques from Faury et al. [2020], we developed an improved technical analysis to deal with the non-linear nature of the MNL reward function. As a result, the leading term in our regret bound does not suffe... | We note that Ou et al. [2018] also consider a similar problem of developing an online algorithm for the MNL model with linear utility parameters. Though they establish a regret bound that does not depend on the aforementioned parameter κ, they work with an inaccurate version of the MNL model. More speci...
Motivated by these issues, we consider the dynamic assortment optimization problem. In every round, the retailer offers a subset (assortment) of products to a consumer and observes the consumer response. Consumers purchase (at most one product from each assortment) products that maximize their utility, and the retaile... | C |
The training batch size is 32 for both datasets. We train 10 epochs at learning rate 0.00005 for THUMOS and 15 epochs at learning rate 0.0001 for ActivityNet. We directly predict the 20 action categories for THUMOS; we conduct binary classification and then fuse our prediction scores with video-level classification sc... | 3) VSGN shows obvious improvement on short actions over other concurrent methods, and also achieves new state-of-the-art overall performance. On THUMOS-14, VSGN reaches 52.4% mAP@0.5, compared to previous best score 40.4% under the same features. On ActivityNet-v1.3, VSGN reaches an average mAP of 35.07%, compared to t... |
We compare the inference time of different methods on the ActivityNet validation set on a 1080ti GPU in Table 8. Compared to end-to-end frameworks such as PBRNet, the methods using pre-extracted features such as BMN, G-TAD and VSGN can re-use the features extracted for other tasks, and these methods do not introduce c... | Table 2: Action localization results on validation set of ActivityNet-v1.3, measured by mAPs (%) at different tIoU thresholds and the average mAP. Our VSGN achieves the state-of-the-art average mAP and the highest mAP for short actions. Note that our VSGN, which uses pre-extracted features without further finetuning, s... | We compare the performance of our proposed VSGN to recent representative methods in the literature on the two datasets in Table 1 and Table 2, respectively. On both datasets, VSGN achieves state-of-the-art performance, reaching mAP 52.4% at tIoU 0.5 on THUMOS and average mAP 35.07% on ActivityNet. It significantly outp... | D |
Various automatic ML methods [FH19] and practical frameworks [Com, NNI] have been proposed to deal with the challenge of hyperparameter search. However, their output is usually a single model, which is frequently underpowered when compared to an ensemble of ML models [SR18].
Ensemble methods—such as bagging and boostin... |
G2: Migration of powerful and alternative models to the majority-voting ensemble. In continuation of the preceding goal, our VA tool should allow the users to pick the best (and most diverse) models for the ensemble from different areas in the projection (R2). |
In this paper, we presented VisEvol, a VA tool with the aim to support hyperparameter search through evolutionary optimization. With the utilization of multiple coordinated views, we allow users to generate new hyperparameter sets and store the already robust hyperparameters in a majority-voting ensemble. Exploring th... | The authors of a recent survey [SR18] state that users should understand how to tune models and, in extension, choose hyperparameters for selecting the appropriate ML ensemble. Consequently, another open question is: (RQ2) how to find which particular hyperparameter set is suitable for each model in a majority-voting e... | In the Sankey diagram (see Figure 3(a)), the user tracks the progress of the evolutionary process and is able to limit the number of models that will be generated through crossover and mutation for each algorithm (Step 4 in Figure 1). The default here is defined as the user-selected random search value / 2 for each algo... | C |
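A majority-voting ensemble of the kind stored by the tool can be sketched in a few lines (a generic illustration; the prediction arrays below are hypothetical, not VisEvol internals):

```python
import numpy as np

# Generic majority-voting ensemble: each stored model votes a class label,
# and the most frequent label per sample wins.

def majority_vote(predictions):
    """predictions: (n_models, n_samples) array of integer class labels."""
    P = np.asarray(predictions)
    return np.array([np.bincount(P[:, i]).argmax() for i in range(P.shape[1])])

preds = [[0, 1, 1],    # model 1
         [0, 1, 0],    # model 2
         [1, 1, 1]]    # model 3
print(majority_vote(preds).tolist())   # [0, 1, 1]
```

Diversity among the voters matters here: identical models add nothing, which is why picking models from different regions of the projection (goal G2) is emphasized.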
Therefore, the total probability of the transient states becomes zero in a finite time.
In [7], it is shown that the condition ρ(M_1) < 1 is satisfied using the properties of M-matrices, which are shown in Theorem 2.5.3 (parts 2.... | and a complex communication architecture is not required for the estimation of the distribution.
By presenting numerical evidence within the context of the probabilistic swarm guidance problem, we demonstrate that the convergence rate of the swarm distribution to the desired steady-state distribution is substantially f... | In this section, we apply the DSMC algorithm to the probabilistic swarm guidance problem and provide numerical simulations that show the convergence rate of the DSMC algorithm is considerably faster as compared to the previous Markov chain synthesis algorithms in [7] and [14].
| Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen... |
In this section, we introduce a shortest-path algorithm that is proposed as a modification to the Metropolis-Hastings algorithm in [7, Section V-E] and integrated with the Markov chain synthesis methods described in [14] and [15]. This algorithm can also be integrated with the DSMC algorithm to further increase the co... | B |
⟨U^⊤ΦQ, U^⊤ΦQ⟩
| The overall optimisation is performed with respect to U and Q, with the constraints U ∈ ℙ and Q ∈ 𝕆.
As such, our isometric multi-shape matching formulation reads | The optimisation alternates between updating U and Q.
Each update step involves simple matrix multiplications, as well as the Euclidean projection onto the sets ℙ and 𝕆. For permutations, as well as different objective functions, a similar strategy h... | We denote the Euclidean projections as proj_ℙ(·) and proj_𝕆(·).
Each Euclide... | As in the U-update, the result for each block of Q in (15) is independent, and can thus be optimised separately, as shown in (16).
Therefore, we can solve k independent singular value decompositions (SVDs), each for a small matrix of size b×b. | B |
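The per-block Euclidean projection onto the set of orthogonal matrices, computed via a small SVD as described, can be sketched as follows (a generic numpy illustration, assuming square b×b blocks):

```python
import numpy as np

# Euclidean projection of a square matrix M onto the orthogonal matrices
# O = {Q : Q^T Q = I}: take the SVD M = U diag(s) V^T and drop the singular
# values, giving proj_O(M) = U V^T. Blocks are independent, so k small SVDs
# can be solved separately.

def proj_orthogonal(M):
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

rng = np.random.default_rng(1)
b, k = 4, 3                                   # k small b x b blocks (toy sizes)
blocks = [rng.normal(size=(b, b)) for _ in range(k)]
Qs = [proj_orthogonal(B) for B in blocks]     # independent per-block SVDs
print(all(np.allclose(Q.T @ Q, np.eye(b)) for Q in Qs))   # True
```

Each projection costs only an SVD of a b×b matrix, which is why the block-diagonal structure keeps the update cheap.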
The recognition algorithm RecognizePG for path graphs is mainly built on the characterization of path graphs in [1]. This characterization decomposes the input graph G by clique separators as in [18]; then, at the recursive step, one has to find a proper vertex coloring of an antipodality graph satisfying some parti... | On the side of directed path graphs, at the state of the art, our algorithm is the only one that does not use the results in [4], in which a linear-time algorithm is given that establishes whether a path graph is also a directed path graph (see Theorem 5 for further details). Thus, prior to this paper, it was necessary ... | The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prov... | interval graphs ⊂ rooted path graphs ⊂ directed path graphs ⊂ path graphs ⊂ chordal graphs. | Directed path graphs are characterized by Gavril [9]; in the same article he also gives the first recognition algorithm, which has O(n^4) time complexity. In the above-cited article, Monma and Wei [18] give the second characterizati... | B |
The numerical results are given in the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting. | In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from
http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the origi... | Table 2 records the error rates on the four real-world networks. The numerical results suggest that Mixed-SLIM methods enjoy satisfactory performances compared with SCORE, SLIM, OCCAM, Mixed-SCORE, and GeoNMF when detecting the four empirical datasets. Especially, the number error for Mixed-SLIM on the Polblogs network... | In this section, first, we investigate the performances of Mixed-SLIM methods for the problem of mixed membership community detection via synthetic data. Then we apply some real-world networks with true label information to test Mixed-SLIM methods’ performances for community detection, and we apply the SNAP ego-network... | In this paper, we extend the symmetric Laplacian inverse matrix (SLIM) method (SLIM, ) to mixed membership networks and call this proposed method as mixed-SLIM. As mentioned in SLIM , the idea of using the symmetric Laplacian inverse matrix to measure the closeness of nodes comes from the first hitting time in a random... | A |
In addition to gradient-based MCMC, variational transport also shares similarity with Stein variational gradient descent (SVGD) (Liu and Wang, 2016), which is a more recent particle-based algorithm for Bayesian inference.
Variants of SVGD have been subsequently proposed. See, e.g., | Departing from MCMC, where independent stochastic particles are used, it leverages interacting deterministic particles to approximate the probability measure of interest. In the mean-field limit, where the number of particles goes to infinity, it can be viewed as the gradient flow of the KL-divergence with respect to a mod...
The variational transport algorithm can be viewed as a forward... |
Such a modified version of variational transport can also be viewed as a Wasserstein gradient descent method for minimizing the functional F in (4.1). Here the bias incurred in the estimation of the Wasserstein gradient stems from the statistical error of f̃_k^*... | A |
2) MetaVIM shows good generalization for different scenarios and configurations. MetaVIM performs the second best in Hangzhou with the mixedl configuration, Jinan with the real configuration and Shenzhen with the mixedl configuration, and performs best in other scenarios. Overall, MetaVIM has the best mean performance... |
The method is evaluated in two modes: (1) Common Testing Mode: the model trained on one scenario with one traffic flow configuration is tested on the same scenario with the same configuration. It is used to validate the ability of the RL algorithm to find the optimal policy. | Except MaxPressure analysed above, GeneraLight achieves the best in Hangzhou with the mixedl configuration, while performs poorly in other scenarios. The reason is that GeneraLight trains several models on diverse generated traffic flows, and select the model in testing by matching the flow. Hence, it limits the genera... |
1) In general, RL methods perform better than conventional methods, which indicates the advantage of RL. The reason is that the conventional methods often rely on prior knowledge, which may fail in some cases. A typical case is MaxPressure. It shows good performance in several cases, including Hangzhou with the r...
2) MetaVIM shows good generalization for different scenarios and configurations. MetaVIM performs the second best in Hangzhou with the mixedl configuration, Jinan with the real configuration and Shenzhen with the mixedl configuration, and performs best in other scenarios. Overall, MetaVIM has the best mean performance... | B |
(λ̃_0, X̃_0, Â_0) = (λ̃, X̃, A + E) | The residual ‖f(x_j, t_j)‖₂ reduces from 10⁻⁴... | the residual of the equation (8.10) from
2.99 × 10⁻⁷ to 1.17 × 10⁻¹⁵. | does not have a solution as
the residual ‖f(u_j, v_j, w_j)‖₂... | f(x, 1) = 0 with an error in the order of
|t̃ − t_*| = 10⁻⁴... | B |
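Residual drops of the reported magnitude (e.g., from about 10⁻⁷ to about 10⁻¹⁵) are characteristic of Newton refinement of an approximate solution. A toy sketch on a hypothetical 2×2 square system, not the paper's equations:

```python
import numpy as np

# Newton refinement drives the residual ||f(x)||_2 of an approximate
# solution down to machine precision (quadratic convergence near a
# nonsingular root).

def f(x):    # hypothetical square system: circle intersected with a line
    return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

def jac(x):  # its Jacobian
    return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

x = np.array([1.0001, 0.9999])             # crude approximation of the root (1, 1)
for _ in range(5):
    x = x - np.linalg.solve(jac(x), f(x))  # Newton step

print(np.linalg.norm(f(x)))                # residual near machine precision
```

Starting from an approximation with error around 10⁻⁴, a couple of Newton steps already saturate double-precision accuracy, matching the behavior described above.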
For any ϵ ∈ (0, 0.5], H-Aware using algorithm A has competitive ratio min{c_A, 1 + (2 + 5ϵ)ηk + ϵ}... | The following result shows that we can express the competitive ratio of Hybrid(λ) in Theorem 5 so that the capacity k is replaced by the consolidation ratio r. We can thus exploit the fact that typically r is much smaller than k, and improve the
theoretical a... |
Last, we show that our algorithms are applicable in other settings. Specifically, we show an application of our algorithms in the context of Virtual Machine (VM) placement in large data centers (?): here, we obtain a more refined competitive analysis in terms of the consolidation ratio, which reflects the maximum n... | An important application of online bin packing is Virtual Machine (VM) placement in large data centers. Here,
each VM corresponds to an item whose size represents the resource requirement of the VM, and each bin corresponds to a physical machine (i.e., host) of a given capacity k. In the context of this appl...
In this work, we focus on the online variant of bin packing, in which the set of items is not known in advance but is rather revealed in the form of a sequence. Upon the arrival of a new item, the online algorithm must either place it into one of the currently open bins, as long as this action does not violate the bin... | C |
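The online setting just described can be illustrated with the classic First-Fit heuristic (a generic sketch of the setting, not the paper's algorithm): items arrive one by one, each is placed into the first open bin with enough residual capacity, and a new bin of capacity k is opened otherwise.

```python
# First-Fit for online bin packing with bin capacity k.

def first_fit(items, k):
    bins = []                       # residual capacities of open bins
    for size in items:
        for i, free in enumerate(bins):
            if size <= free:
                bins[i] -= size     # place item into first bin that fits
                break
        else:
            bins.append(k - size)   # no fit: open a new bin
    return len(bins)

print(first_fit([4, 3, 2, 5, 1], k=6))   # 3
```

Because the decision for each item is irrevocable, the number of bins opened is compared against the offline optimum, giving the competitive ratios discussed above.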
where W_φ are the weights of φ produced by the hypernetwork directly from the point cloud embedding and [·, ·] is a concatenation operator.
In this experiment, we set N = 10⁵. Using more rays had a negligible effect on the output value of WT but significantly slowed the computation. We compared AtlasNet with LoCondA applied to HyperCloud (HC) and HyperFl... | Table 1: Generation results. MMD-CD scores are multiplied by
10³; MMD-EMD and JSD scores are multiplied by 10². (HC) denotes the HyperCloud autoencoder in LoCondA, and (HF) - the HyperFlow... | Table 2: Shape auto-encoding on the ShapeNet dataset. The best results are highlighted in bold. CD is multiplied by 10⁴, and EMD is multiplied by 10². (HC) denotes the HyperCloud autoencod...
The results are presented in Table 1. LoCondA-HF obtains comparable results to the reference methods dedicated for the point cloud generation. It can be observed that values of evaluated measures for HyperFlow(P) and LoCondA-HF (uses HyperFlow(P) as a base model in the first part of the training) are on the same level... | B |
By using the standard restarts or regularization arguments, all the results of this paper have convex-concave or strongly convex-concave analogues. Unfortunately, optimality w.r.t. ε holds only for the convex-concave case, not for the strongly convex-concave one.² (Footnote 2: The analysis developed in ...
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ... |
By using the standard restarts or regularization arguments, all the results of this paper have convex-concave or strongly convex-concave analogues. Unfortunately, optimality w.r.t. ε holds only for the convex-concave case, not for the strongly convex-concave one.² (Footnote 2: The analysis developed in ... | Our technique can be generalized to non-smooth problems by using another variant of the sliding procedure [34, 15, 23]. By using the batching technique, the results can be generalized to stochastic saddle-point problems [15, 23]. Instead of the smooth convex-concave saddle-point problem we can consider general sum-type s... | Paper organization. This paper is organized as follows. Section 2 presents a saddle point problem of interest along with its decentralized reformulation. In Section 3, we provide the main algorithm of the paper to solve such kinds of problems. In Section 4, we present the lower complexity bounds for saddle point problem... | C |
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class. |
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6]. |
The remainder of this section is dedicated to expressing the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describing an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i... | where L̂ = D̂ᵗD̂ is the lower-right (|V| − 1) × (|V| − 1) submatrix of the ...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric... | D |
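As a concrete aside on the reduced Laplacian L̂ = D̂ᵗD̂ mentioned above: by Kirchhoff's matrix-tree theorem, det(L̂) (the Laplacian with one row and column deleted) equals the number of spanning trees of the graph, which the following sketch verifies on K₄:

```python
import numpy as np

# Matrix-tree theorem: delete one row and the matching column of the graph
# Laplacian L = D - A; the determinant of the remaining submatrix counts
# the spanning trees.

def spanning_tree_count(adj):
    A = np.array(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A            # Laplacian L = D - A
    return round(np.linalg.det(L[1:, 1:]))    # reduced Laplacian determinant

K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(spanning_tree_count(K4))   # 16 = 4^(4-2), by Cayley's formula
```

The same reduced Laplacian is the object denoted L̂ in the fragment above; this aside only illustrates its best-known combinatorial meaning.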
For any simplicial complex K and integers b ≥ 1 and m > μ(K), there exists an integer t = t(b, K, m) with the following property: If ℱ is an m... | We first prove, in Section 3, that complexes with a forbidden simplicial homological minor also have a forbidden grid-like homological minor.
The proof uses the stair convexity of Bukh et al. [8] to build, in a systematic way, chain maps from simplicial complexes to cubical complexes. We then adapt, in Section 4, the m... | In this paper we are concerned with generalizations of Helly’s theorem that allow for more flexible intersection patterns and relax the convexity assumption. A famous example is the celebrated (p,q)𝑝𝑞(p,q)( italic_p , italic_q )-theorem [3], which asserts that for a finite family of convex sets in ℝdsuperscriptℝ𝑑\ma... |
The proof of Theorem 2.1 is quite involved and builds on the method of constrained chain maps developed in [18, 35] to study intersection patterns via homological minors [37]. This technique, which we briefly outline here, was specifically designed for complete intersection patterns. A major part of this paper, all of... | a positive fraction of the m-tuples to have a nonempty intersection, where for dim K > 1, m is some hypergraph Ramsey number depending on b and K.
So in order to prove Corollary 1.3 it suffices to show that if a positive fraction of the ... | C |
All these processes are iterative and could happen in any other order. The final outcome is the generated knowledge acquired from the extracted features. Note that the typical workflow, used in the two use cases and the case study in Section 5, follows the linear arrangement of the views of FeatureEnVi (i.e., fro... | We concentrated on the conjunction of those automatic approaches with the statistical measures offered by FeatureEnVi. Specifically, F4 appeared to be an unimportant feature for the Worst subspace, as shown in Fig. 7(c.1). However, when closely explored in the whole data space, it was more impactful than other features ...
E1 and E2 were surprised by the promising results we managed to achieve with the assistance of our VA system in the red wine quality use case of Section 4. Initially, E1 was slightly overwhelmed by the number of statistical measures mapped in the system’s glyphs. However, after the interv... |
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train the XGBoost [29] on the original pool of features, and then divide the data space into four groups automati... |
All visual encodings designed for the panels of FeatureEnVi are summarized in Table II. On the right-hand side, we can observe the optimal states for the available statistical measures. However, in reality, many of the statistical measures will be contradictory to each other, and human decisions are essential on such ... | D |
We use two geometries to evaluate the performance of the proposed approach, an octagon geometry with edges in multiple orientations with respect to the two axes, and a curved geometry (infinity shape) with different curvatures, shown in Figure 4. We have implemented the simulations in Matlab, using Yalmip/Gurobi to so... | This paper demonstrated a hierarchical contour control implementation for the increase of productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe... | The goal is to tune the parameters of the MPC-based planning unit without introducing any modification in the structure of the underlying control system.
We leverage the repeatability of the system, which is higher than the integrated encoder error of $3\,\mu\mathrm{m}$, | We first optimize the performance of the simulated positioning system by adding a receding horizon MPCC stage where we pre-optimize the position and velocity references provided to the low level controller. This is enabled by the high repeatability of the system which results in run-to-run deviations of $3\,\mu\mathrm{m}$... | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi... | C
We can measure the robustness to such tendencies by intentionally introducing covariate shift, e.g., with a test dataset distribution that differs from training, or a metric that balances performance across groups. For our study, we use the mean per-group accuracy (unbiased accuracy), which weighs all the groups equally. F... | Explicit bias mitigation techniques directly access the bias variables $b_{expl.}$ during training to develop invariance to them. Based on the way these variables are utilized during training, we choose five d... | Results. As shown in Table 1, no method performs universally well across datasets; however, the implicit methods LFF and SD obtain high unbiased accuracies on most datasets. This shows that implicit methods can deal with multiple bias sources without explicit access. Explicit methods work well on CelebA but fail on Bi...
Recently, many methods have been proposed to make neural networks bias resistant. These methods can be grouped into two types: 1) those that assume the bias variables, e.g., the gender label in CelebA, are explicitly annotated and can be accessed during training [55, 69, 37], and 2) those that do not require expli... | Without bias mitigation mechanisms, standard models (StdM) often use spurious bias variables for inference, rather than developing invariance to them, which often results in their inability to perform well on minority patterns [27, 11, 3, 61]. To address this, several bias mitigation mechanisms have been proposed, and ... | D
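The mean per-group (unbiased) accuracy used in this row can be sketched in a few lines of Python; the toy labels, predictions, and group assignments below are illustrative assumptions, not data from the paper:

```python
import numpy as np

def mean_per_group_accuracy(y_true, y_pred, groups):
    """Unbiased accuracy: average the accuracy computed within each
    group so that majority groups cannot dominate the metric."""
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        accs.append(np.mean(y_true[mask] == y_pred[mask]))
    return float(np.mean(accs))

# Toy example: group 0 is the majority, group 1 the minority.
y_true = np.array([1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 1, 0, 1])
groups = np.array([0, 0, 0, 0, 1, 1])
# overall accuracy = 5/6, but per-group mean = (1.0 + 0.5) / 2 = 0.75
print(mean_per_group_accuracy(y_true, y_pred, groups))
```

Because every group contributes equally, a classifier that only fits the majority pattern is penalized, which is the robustness the metric is meant to capture.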
A number of CNN architectures that have been proposed for typical computer vision tasks also show great success in the gaze estimation task, e.g., LeNet [17], AlexNet [50], VGG [49], ResNet18 [43] and ResNet50 [66].
Besides, some well-designed modules also help to improve the estimation accuracy [53, 56, 93, 94]. Chen et a... | They build a multi-branch network to extract the features of each view and concatenate them to estimate 2D gaze position on the screen. Wu et al. collect gaze data using near-eye IR cameras [123]. They use a CNN to detect the location of glints, pupil centers and corneas from IR images. Then, they build an eye model u... | They synthesize dense multi-view eye images by recovering the 3D shape of eye regions, where they use a patch-based multi-view stereo algorithm [98] to reconstruct the 3D shape from eight multi-view images.
Wood et al. propose to synthesize the close-up eye images for a wide range of head poses, gaze directions and ill... |
The head-mounted device usually employs near-eye cameras to capture eye images. Tonsen et al. embed millimetre-sized RGB cameras into a normal glasses frame [147]. In order to compensate for the low-resolution captured images, they use multi-cameras to capture multi-view images and use a neural network to regress gaze... | Some works seek to decompose the gaze into multiple related features and construct multi-task CNNs to estimate these features. Yu et al. introduce a constrained landmark-gaze model for modeling the joint variation of eye landmark locations and gaze directions [119]. As shown in Fig. 9, they build a multi-task CNN to est... | B
Occlusion is a key limitation of real-world 2D face recognition methods. Generally, it arises from wearing hats, eyeglasses, or masks, as well as any other objects that can occlude part of the face while leaving the rest unaffected. Thus, wearing a mask is considered the most difficult facial occlusion challenge since ... | Matching approach: Aims to compare the similarity between images using a matching process. Generally, the face image is sampled into a number of patches of the same size. Feature extraction is then applied to each patch. Finally, a matching process is applied between probe and gallery faces. The advantage of this appro... |
Other methods detect the keypoints from the face image, instead of local patches. For instance, Weng et al. weng2016robust proposed to recognize persons of interest from their partial faces. To accomplish this task, they first detect keypoints and extract their textural and geometrical features. Next, point set m... |
Occlusion removal approach: In order to avoid a bad reconstruction process, these approaches aim to detect regions found to be occluded in the face image and discard them completely from the feature extraction and classification process. Segmentation based approach is one of the best methods that detect firstly the oc... | This deep quantization technique presents many advantages. It ensures a lightweight representation that makes the real-world masked face recognition process a feasible task. Moreover, the masked regions vary from one face to another, which leads to informative images of different sizes. The proposed deep quantization a... | A |
Our system keeps constraints implicit but arithmetic data explicit at the process level in agreement with observations made about constraint and arithmetic term reconstruction in a session-typed calculus [DP20c]. On the other hand, systems like $\textsf{CIC}\widehat{\phantom{}{}_{\ell}}$... | Implementation: we are interested in developing a convenient surface language (perhaps a functional one [PP20]) for SAX and implementing our type system, following Rast [DP20a], an implementation of resource-aware session types that includes arithmetic refinements. Perhaps various validity conditions of infinite proofs... | Sized types are a type-oriented formulation of size-change termination [LJBA01] for rewrite systems [TG03, BR09]. Sized (co)inductive types [BFG+04, Bla04, Abe08, AP16] gave way to sized mixed inductive-coinductive types [Abe12, AP16]. In parallel, linear size arithmetic for sized inductive types [CK01, Xi01, BR06] was...
Our system is closely related to the sequential functional language of Lepigre and Raffalli [LR19], which utilizes circular typing derivations for a sized type system with mixed inductive-coinductive types, also avoiding continuity checking. In particular, their well-foundedness criterion on circular proofs seems to c... | Validity conditions of infinite proofs have been developed to keep cut elimination productive, which correspond to criteria like the guardedness check [BDS16, BT17, DP19, DP20d]. Although we use infinite typing derivations, we explicitly avoid syntactic termination checking for its non-compositionality. Nevertheless, w... | D |
A watermarking technique that can safeguard the user’s rights while maintaining the owner’s copyright is called Asymmetric Fingerprinting (AFP) [9, 10, 11, 12, 13, 14]. AFP mainly relies on cryptographic tools, including public-key cryptosystems and homomorphic encryption, in which the embedding operation is perform... | In the user-side embedding AFP, since the encrypted media content shared with different users is the same, the encryption of the media content is only executed once. In contrast, due to the personalization of D-LUTs, once a new user initiates a request, the owner must interact with this user to securely distribute the ...
As discussed above, AFP seems to solve Problems 2 and 3 perfectly. However, this is no longer the case when media contents are remotely hosted by the cloud since existing AFP schemes were designed without taking the cloud’s involvement into consideration. Thus it remains to be further explored how to develop a novel A... |
In this paper, we set out to solve these problems and challenges. First, to achieve data protection and access control, we adopt the lifted-ElGamal based PRE scheme, as discussed in [16, 17, 18, 19, 20], whose most prominent characteristic is that it satisfies the property of additive homomorphism. Then t... |
By delegating the management of the media content to the cloud, FairCMS-I and FairCMS-II can also be seen as an instantiation of privacy-preserving outsourcing of AFP, thereby solving the problem caused by insufficient local resources of the owner in media sharing. | B |
We find that in the first layer, which models the second-order feature interactions, these feature fields are hard to distinguish when selecting the beneficial interactions. This suggests that almost all the second-order feature interactions are useful, which is also why we sample all of them in the first layer, i.e., m1=... | This proves that our model can indeed select meaningful feature combinations and model feature interactions of increasing orders with multiple layers in most cases, rather than selecting the redundant feature combinations of same feature fields.
We can also find some meaningful feature combinations in common cases. For exa... | The selected feature interactions of order-3 and order-4 are mostly not overlapped in the correctly predicted instance (a). In instance (a), our model selects relevant feature fields (Gender, Age, ReleaseTime, WatchTime) for Genre in order-3, while selects the other two feature fields (Occupation, Gender) in order-4.
H... |
Figure 4: Heat maps of estimated edge weights of correctly predicted instance (a) and wrongly predicted instance (b) on the MovieLens-1M dataset, where positive edge weights indicate beneficial feature interactions. The axes represent feature fields (Gender, Age, Occupation, Zipcode, ReleaseTime, WatchTime, Genre). | Since the features along with selected beneficial feature interactions are treated as a graph, it can provide human-readable interpretations of the prediction. Here we visualize heat maps of estimated edge weights of two cherry-picked instances on the MovieLens-1M dataset in Fig. 4. We show the measured edge weights of each ... | B
where $Q$ is a symmetric positive definite matrix with log-normally distributed eigenvalues and $\varphi_{\mathbb{R}_{+}}(\cdot)$ | In practice, a halving strategy for the step size is preferred for the
implementation of the Monotonic Frank-Wolfe algorithm, as opposed to the step size implementation shown in Algorithm 1. This halving strategy, which is shown in Algorithm 2, helps | The stateless step-size does not suffer from this problem, however, because the halvings have to be performed at multiple iterations when using the stateless step-size strategy,
the per iteration cost of the stateless step-size is about three times that of the simple step-size. | The results are shown in Figure 7. On both of these instances, the simple step progress is slowed down or even seems stalled in comparison to the stateless
version because a lot of halving steps were done in the early iterations for the simple step size, which penalizes progress over the whole run. |
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is... | C
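The step-size halving idea discussed in this row can be sketched as follows; the quadratic objective over the probability simplex is an illustrative assumption, not one of the paper's test instances:

```python
import numpy as np

def frank_wolfe_halving(b, iters=200):
    """Sketch of a monotone Frank-Wolfe step: start from the standard
    2/(t+2) step size, but halve it until the objective does not
    increase. Toy problem (an assumption for illustration):
    min ||x - b||^2 over the probability simplex."""
    n = len(b)
    x = np.ones(n) / n
    f = lambda y: float(np.sum((y - b) ** 2))
    for t in range(iters):
        grad = 2 * (x - b)
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0           # linear minimization oracle
        gamma = 2.0 / (t + 2)
        while gamma > 1e-12 and f(x + gamma * (s - x)) > f(x):
            gamma /= 2                      # halving keeps iterates monotone
        x = x + gamma * (s - x)
    return x

b = np.array([0.7, 0.2, 0.1])  # already in the simplex, so optimum is b
x = frank_wolfe_halving(b)
print(np.round(x, 3))
```

The halving loop only triggers when the default step would increase the objective, so the run stays monotone while retaining the usual $\mathcal{O}(1/t)$ behavior.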
One option is that $\overleftarrow{a_{i+1}}$ is part of the active path.
The other option is that some other free vertex has an active path of length at most $i+1$ to $a_{i+1}$... | Our main challenge is that on the path $\alpha-\beta$, there can be many events by active paths of many distinct free vertices, where some active paths are blocked by other active paths and others form odd cycles.
Our main technical contribution is to sort this mess and show that certain positiv... | We show that before the paths to all $\{a_{1},\ldots,a_{j}\}$ have been found and the corresponding active paths have backtracked without finding an alternating path of leng... | Informally speaking, the key observations are that in the former case, by Lemma 4.8, (a suffix of) the active path must form an odd cycle.
A very convenient property of odd cycles is that as soon as they are discovered by the algorithm, their arcs can never belong to two distinct structures of the free vertices. | From this, we can inductively derive that eventually, either all $\{a_{1},\ldots,a_{k}\}$ form an odd cycle or an augmentation has been found involving some of these arcs.
O... | C |
For directed networks, however, constructing a doubly stochastic mixing matrix usually requires a weight-balancing step, which could be costly when carried out in a distributed manner.
Therefore, the push-sum technique [17] was utilized to overcome this issue. | Specifically, the methods proposed in [12, 21, 22, 23] employ gradient tracking to achieve linear convergence for strongly convex and smooth objective functions, where the work in [21, 23, 22] particularly considered combining gradient tracking with the push-sum technique to accommodate directed graphs.
The methods can... | Specifically, the push-sum based subgradient method in [18] can be implemented over time-varying directed graphs, and linear convergence rates were achieved in [19, 20] for minimizing strongly convex and smooth objective functions by applying the push-sum technique to EXTRA.
| In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CP... | For minimizing strongly convex and smooth objectives, the Push-Pull/$\mathcal{AB}$ method not only enjoys linear convergence over fixed graphs [24, 25], but also works well under time-varying graphs and asynchronous settings [24, 26, 27].
| B |
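The push-sum technique referenced in this row can be illustrated on a toy directed ring; the graph and initial values are assumptions for demonstration. Each node forwards equal shares of a value and a weight along its out-edges (a column-stochastic, but not doubly stochastic, update), and the value-to-weight ratio converges to the network average:

```python
import numpy as np

# Push-sum (ratio consensus) on a directed, strongly connected toy graph.
out_neighbors = {0: [0, 1], 1: [1, 2], 2: [2, 0]}  # ring with self-loops
x = np.array([1.0, 5.0, 9.0])   # initial values, average = 5.0
w = np.ones(3)                   # push-sum weights

for _ in range(100):
    new_x, new_w = np.zeros(3), np.zeros(3)
    for i, outs in out_neighbors.items():
        for j in outs:                    # split mass evenly among out-neighbors
            new_x[j] += x[i] / len(outs)
            new_w[j] += w[i] / len(outs)
    x, w = new_x, new_w

print(np.round(x / w, 6))   # each ratio approaches the network average
```

The total value and total weight are conserved at every step, which is why the ratios recover the exact average even though the mixing matrix is only column stochastic.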
Unlike classical distributed learning methods, the FL approach assumes that data is not stored within a centralized computing cluster but is stored on clients’ devices, such as laptops, phones, and tablets. This formulation of the training problem gives rise to many additional challenges, including the privacy of clien... | Discussions. We compare algorithms based on the balance of the local and global models, i.e., if an algorithm is able to train both the local and global models well, then we say it achieves the FL balance. The results show that the Local SGD technique (Algorithm 3) outperformed Algorithm 1 only with a fairly fre... | Predicting the next word written on a mobile keyboard [3] is a typical example where the performance of a local (personalized) model is significantly ahead of the classical FL approach that trains only the global model.
Improving the local models using this additional knowledge may need a more careful balance, consideri... | Figure 5: Average accuracy during the learning process with different averaging parameters $p$ and $T$. The first row presents the results of Algorithm 1, the second those of Algorithm 3.
Red line – accuracy of the local model on local train data, blue line - accuracy of the local model on local test data, ... | Unlike classical distributed learning methods, the FL approach assumes that data is not stored within a centralized computing cluster but is stored on clients’ devices, such as laptops, phones, and tablets. This formulation of the training problem gives rise to many additional challenges, including the privacy of clien... | B |
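A minimal sketch of the Local SGD pattern discussed in this row (each client runs T local steps, then all client models are averaged); the scalar quadratic losses and hyperparameters are toy assumptions, not the paper's Algorithm 3:

```python
import numpy as np

# Each client k minimizes its own loss (theta - m_k)^2 with T local
# gradient steps, then a communication round averages all models.
client_means = np.array([1.0, 3.0, 5.0])   # heterogeneous local optima
theta = np.zeros(3)                         # one model copy per client
lr, T, rounds = 0.1, 5, 50

for _ in range(rounds):
    for _ in range(T):                      # local steps
        theta -= lr * 2 * (theta - client_means)
    theta[:] = theta.mean()                 # communication: average models

print(round(float(theta[0]), 3))            # ~ mean(client_means) = 3.0
```

With frequent enough averaging the shared model converges to the minimizer of the average loss, while purely local training would drift to each client's own optimum; this is the local/global trade-off the row describes.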
We evaluate a number of (C)CE MSs in JPSRO on pure competition, pure cooperation, and general-sum games (Section H). All games used are available in OpenSpiel (Lanctot et al., 2019). More thorough descriptions of the games used can be found in Section F. We use an exact BR oracle, and exactly evaluate policies in the m... | An important area of related work is $\alpha$-Rank (Omidshafiei et al., 2019) which also aims to provide a tractable alternative solution in normal form games. It gives similar solutions to NE in the two-player, constant-sum setting, however it is not directly related to NE or (C)CE. $\alpha$-Rank has... |
We compare against common MS including uniform, $\alpha$-Rank (Omidshafiei et al., 2019; Muller et al., 2020), Projected Replicator Dynamics (PRD) (Lanctot et al., 2017) which is an NE approximator, and random vertex (coarse) correlated equilibrium (RV(C)CE) which randomly selects a solution on the vertices o... |
Measuring convergence to NE (NE Gap, Lanctot et al. (2017)) is suitable in two-player, constant-sum games. However, it is not rich enough in cooperative settings. We propose to measure convergence to (C)CE ((C)CE Gap in Section E.4) in the full extensive form game. A gap, $\Delta$, of zero implies convergence t... |
There is a rich polytope of possible equilibria to choose from; however, an MS must pick one at each time step. There are three competing properties which are important in this regard: exploitation, robustness, and exploration. For exploitation, maximum welfare equilibria appear to be useful. However, to prevent JPSRO... | B
The dependence of our PC notion on the actual adaptively chosen queries places it in the so-called fully-adaptive setting (Rogers et al., 2016; Whitehouse et al., 2023), which requires a fairly subtle analysis involving a set of tools and concepts that may be of independent interest. In particular, we establish a seri... | Differential privacy (Dwork et al., 2006) is a privacy notion based on a bound on the max divergence between the output distributions induced by any two neighboring input datasets (datasets which differ in one element). One natural way to enforce differential privacy is by directly adding noise to the results of a nume... | recently established a formal framework for understanding and analyzing adaptivity in data analysis, and introduced a general toolkit for provably preventing the harms of choosing queries adaptively—that is, as a function of the results of previous queries. This line of work has established that enforcing that computat... | Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on intuition that average-case privacy can be viewed from a Bay... | The similarity function serves as a measure of the local sensitivity of the issued queries with respect to the replacement of the two datasets, by quantifying the extent to which they differ from each other with respect to the query q𝑞qitalic_q. The case of noise addition mechanisms provides a natural intuitive interp... | A |
In fact, we prove a slightly stronger statement. If a graph $G$ can be reduced to a graph $G^{\prime}$ by iteratively removing $z$-antlers, each of width at most $k$, and the sum of the widths of this sequence of antlers is $t$... |
Our algorithmic results are based on a combination of graph reduction and color coding [6] (more precisely, its derandomization via the notion of universal sets). We use reduction steps inspired by the kernelization algorithms [12, 46] for Feedback Vertex Set to bound the size of $\mathsf{antler}$... | As described in Section 1, our algorithm aims to identify vertices in antlers using color coding. To allow a relatively small family of colorings to identify an entire antler structure $(C,F)$ with $|C|\leq k$, we need to bound $|F|$ in terms of... | The remainder of the paper is organized as follows. After presenting preliminaries on graphs and sets in Section 2, we prove the mentioned hardness results in Section 3. We present structural properties of antlers and how they combine in Section 4. In Section 5 we show how color coding can be used to find a large feedb... |
As the first step of our proposed research program into parameter reduction (and thereby, search space reduction) by a preprocessing phase, we present a graph decomposition for Feedback Vertex Set which can identify vertices $S$ that belong to an optimal solution; and which therefore facilitate a reduction fr... | A
We report the results of Poisson image blending [121], GP-GAN [172], Zhang et al. [198], and MLF [194]. We also report the ground-truth composite image obtained using ground-truth alpha matte for comparison. From Fig. 9, it can be seen that the obtained composite images using predicted alpha mattes are very close to t... |
Figure 10: In the left subfigure, we summarize three ways to construct image harmonization dataset and list the corresponding datasets: RealHM [60], iHarmony4 [9] (HCOCO, HFlickr, HAdobe5k, Hday2night), ccHarmony [113], GMS [140], HVIDIT [45], RdHarmony [9]. We also mark the dataset based on real (resp., rendered) ima... |
Figure 11: In the first (resp., second, third) row, we show two examples from RealHM [60] (resp., HFlickr in iHarmony4 [9], HVIDIT [45]) dataset. From left to right in each example, we show the composite image, the foreground mask, and the ground-truth harmonized image. | Backward adjustment: In contrast with manually adjusting the foreground of composite image to create harmonized image, some other works [156, 22, 18] adopted an inverse approach, i.e., adjusting the foreground of real image to create synthetic composite image. Specifically, they treat a real image as harmonized image, ... | Training deep learning models requires abundant pairs of composite images and ground-truth harmonized images. Existing works have designed different schemes to construct image harmonization dataset. We categorize the existing schemes into three groups: forward adjustment, backward adjustment, and replacement. Note that... | A |
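The "backward adjustment" scheme described in this row (perturb the foreground of a real image to synthesize a composite, keeping the real image as ground truth) can be sketched as follows; the image size, mask, and color perturbation are toy assumptions:

```python
import numpy as np

# Synthesize a composite from a "real" image by shifting the color
# statistics inside the foreground mask; the training pair is then
# (composite, mask) -> real image.
rng = np.random.default_rng(0)
real = rng.uniform(0.2, 0.8, size=(8, 8, 3))   # toy real image in [0, 1]
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                           # foreground region

composite = real.copy()
gain, offset = 1.3, 0.1                         # simple color perturbation
composite[mask] = np.clip(gain * composite[mask] + offset, 0.0, 1.0)

changed = np.any(composite != real, axis=-1)
print(changed.sum(), mask.sum())   # only the foreground pixels were altered
```

The background is untouched by construction, which is exactly what makes the original real image a valid harmonization ground truth for the synthetic composite.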
Multi-task or Not: Out of the 22 tasks examined, multi-task models exhibit the lowest RMSE in 15 (68.2%) tasks and the lowest MAE in 19 (86.4%) tasks. Our findings suggest that a simple multi-task learning approach, utilizing weight sharing, can enhance taxi service predictions by establishing connections among divers... | To address this problem, we utilize LSTM as the base model, which is similar to ST-net in MetaST [5], and adopt a multi-task learning approach. We select Beijing and Shanghai as the source cities for transfer learning tasks in cities with large map sizes, and Xi’an as the source city for the transfer learning tasks in ... | TABLE VII: The results of inter-city transfer learning from source domains (Beijing, Shanghai, and Xi’an) to target domains (Shenzhen, Chongqing, and Chengdu). The lowest RMSE/MAE using limited target data is highlighted in bold. The results under full data and 3-day data represent the lower and upper bounds for the er... |
Graph Models or Not: Among the 16 tasks conducted in Beijing, Shanghai, Shenzhen, Chongqing, GNN models exhibit the lowest RMSE in 8 (50%) tasks and the lowest MAE in all tasks except for Xi’an and Chengdu where CNN models outperform all other models in all tasks. Our analysis, as presented in Table II, reveals that X... |
Multi-task or Not: Out of the 22 tasks examined, multi-task models exhibit the lowest RMSE in 15 (68.2%) tasks and the lowest MAE in 19 (86.4%) tasks. Our findings suggest that a simple multi-task learning approach, utilizing weight sharing, can enhance taxi service predictions by establishing connections among divers... | C |
Prediction intervals are constructed as in the previous section, i.e. a (conditionally) normal distribution is assumed and the intervals are given by Eq. (22). It was observed that this architecture shows improved modelling capabilities and robustness for uncertainty estimation. In fort2019deep the improved performan... | Without the adversarial training, this model is similar to the one introduced by Khosravi et al. khosravi2014constructing . However, instead of training an ensemble of mean-variance estimators, an ensemble of point estimators is trained to predict y𝑦yitalic_y and in a second step a separate estimator σ^^𝜎\hat{\sigma}... | Every ensemble allows for a naive construction of a prediction interval heskes1997practical when the aggregation strategy in Algorithm 2 is given by the arithmetic mean. By treating the predictions of the individual models in the ensemble as elements of a data sample, one can calculate the empirical mean and variance ... | The class of direct interval estimators consists of all methods that are trained to directly output a prediction interval. Instead of modelling a distribution or extracting uncertainty from an ensemble, they are trained using a loss function that is specifically tailored to the construction of prediction intervals. The... | The idea behind deep ensembles lakshminarayanan2017simple is the same as for any ensemble technique: training multiple models to obtain a better and more robust prediction. The loss functions of most (deep) models have multiple local minima and by aggregating multiple models one hopes to take into account all these mi... | A |
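The naive ensemble prediction interval described in this row (treat the member predictions at a test point as a sample, then assume a conditional normal as in Eq. (22)) can be sketched as follows; the ensemble members are simulated with random noise rather than trained networks:

```python
import numpy as np

# Ten stand-in "ensemble predictions" at one test point; in practice
# these would come from independently trained models.
rng = np.random.default_rng(1)
preds = 2.0 + 0.3 * rng.standard_normal(10)

mu = preds.mean()
sigma = preds.std(ddof=1)          # empirical spread across members
z = 1.96                            # ~95% normal quantile
lower, upper = mu - z * sigma, mu + z * sigma
print(f"interval: [{lower:.2f}, {upper:.2f}]")
```

The interval width is driven entirely by member disagreement, which is why this construction tends to underestimate uncertainty when all members share the same biases.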
In the literature of machine learning, a prominent approach to overcome the labelled data scarcity issue is to adopt “transfer learning” and divide the learning problem into two stages \parencite{han2021pretrained}: a pre-training stage that establishes a model capturing general knowledge from one or multiple source task... | The latter is in particular popular in the field of natural language processing (NLP), where pre-trained models (PTMs) using Transformers \parencite{vaswani2017attention} have achieved state-of-the-art results on almost all NLP tasks, including generative and discriminative ones \parencite{han2021pretrained}.
| Tab. 2 lists the testing accuracy achieved by the baseline models and the proposed ones for four downstream tasks.
We see that “our model (score)” outperforms the Bi-LSTM or Bi-LSTM-Attn baselines in all tasks consistently, using either the REMI or CP representation. | In our experiments, we will use the same pre-trained model parameters to initialise the models for different downstream tasks. During fine-tuning, we fine-tune the parameters of all the layers, including the self-attention and token embedding layers.
In particular, inspired by the growing trend of treating MIDI music as a “language” in deep generative models for symbolic music \parencite{huang2018music,payne2019musenet,huang2020pop,musemorphose,musecoco},
we employ a Transformer-based network pre-trained by a self-supervised training strategy called “masked language ... | A |
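The masked-token pre-training strategy mentioned in this row can be sketched on a toy token sequence; the token values, mask rate, and MASK id are illustrative assumptions:

```python
import numpy as np

# Mask a random 15% of positions in a toy "MIDI event" sequence; the
# pre-training loss would be computed only at the masked positions.
rng = np.random.default_rng(42)
tokens = np.arange(10, 30)          # 20 toy tokens
MASK = -1                           # assumed mask token id

n_mask = max(1, int(0.15 * len(tokens)))
positions = rng.choice(len(tokens), size=n_mask, replace=False)
corrupted = tokens.copy()
corrupted[positions] = MASK

targets = tokens[positions]         # what the model must reconstruct
print(n_mask, sorted(positions.tolist()))
```

The model sees `corrupted` and is trained to predict `targets`, so no labels are needed; this is what makes the pre-training stage self-supervised.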
Otherwise, $F$ has a leaf $v\in A$ with a neighbor $u\in B$. We can assign $c(v)=a_{2}$, $c(u)=b_{2}$... | Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As it was stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the... | To obtain the total running time we first note that each of the initial steps – obtaining $(R,B,Y)$ from Corollary 2.11 (e.g. using Algorithm 1), contraction of $F$ into $F^{\prime}$, and findi... | The linear running time follows directly from the fact that we compute $c$ only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen... |
Now, observe that if the block to the left is also of type A, then a respective block from $Z(S)$ is $(0,1,0)$ – and when we add the backward carry $(0,0,1)$ to it, we obtain the forward carry to the rightmost block. And regardless of the value of t... | C