context stringlengths 250 3.86k | A stringlengths 250 5.11k | B stringlengths 250 3.39k | C stringlengths 250 8.2k | D stringlengths 250 5.02k | label stringclasses 4
values |
|---|---|---|---|---|---|
(x)\frac{f_{n-1}(x)}{f_{n}(x)}}.divide start_ARG italic_f start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ( italic_x ) end_ARG start_ARG italic_f start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_x ) end_ARG = divide start_ARG italic_g start_POSTSUBSCRIPT 2 end_POSTSUBSC... | (x)\frac{f_{n-1}(x)}{f_{n}(x)}}.divide start_ARG italic_f start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ( italic_x ) end_ARG start_ARG italic_f start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_x ) end_ARG = divide start_ARG italic_g start_POSTSUBSCRIPT 2 end_POSTSUBSC... | \frac{f_{n-2}(x)}{f_{n-1}(x)}}.divide start_ARG italic_f start_POSTSUBSCRIPT italic_n - 1 end_POSTSUBSCRIPT ( italic_x ) end_ARG start_ARG italic_f start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ( italic_x ) end_ARG = divide start_ARG italic_a start_POSTSUBSCRIPT 1 , italic_n - 1 end_POSTSUBSCRIPT end_ARG start_ARG ita... |
g2(x)fn′(x)=g1(x)fn(x)+g0(x)fn−1(x);subscript𝑔2𝑥superscriptsubscript𝑓𝑛′𝑥subscript𝑔1𝑥subscript𝑓𝑛𝑥subscript𝑔0𝑥subscript𝑓𝑛1𝑥\displaystyle g_{2}(x)f_{n}^{\prime}(x)=g_{1}(x)f_{n}(x)+g_{0}(x)f_{n-1}(x);italic_g start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( italic_x ) italic_f start_POSTSUBSCRIPT italic_... | a1,n−1fn(x)=(a2,n−1+a3,n−1x)fn−1(x)−a4,n−1fn−2(x),subscript𝑎1𝑛1subscript𝑓𝑛𝑥subscript𝑎2𝑛1subscript𝑎3𝑛1𝑥subscript𝑓𝑛1𝑥subscript𝑎4𝑛1subscript𝑓𝑛2𝑥a_{1,n-1}f_{n}(x)=(a_{2,n-1}+a_{3,n-1}x)f_{n-1}(x)-a_{4,n-1}f_{n-2}(x),italic_a start_POSTSUBSCRIPT 1 , italic_n - 1 end_POSTSUBSCRIPT italic_f start_POST... | D |
These include the recognition tree of Leedham-Green and O’Brien [9],
which, for example, allows the computation of the composition factors of a finite matrix group in polynomial time [25] and is implemented in the computational algebra package Magma [14], as well as a data structure |
There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and application... | The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in... | One important task in this context is writing elements of classical groups as words in standard generators using SLPs. This is done in Magma [14] using the results of Elliot Costi [6] and in GAP using the results of this paper see Section 6. Other rewriting algorithms also exist, for example Cohen et al. [26] present a... | Note that a small variation of these standard generators for SL(d,q)SL𝑑𝑞\textnormal{SL}(d,q)SL ( italic_d , italic_q ) are used in Magma [14] as well
as in algorithms to verify presentations of classical groups, see [12], where only the generator v𝑣vitalic_v is slightly different in the two scenarios when d𝑑ditali... | C |
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ... |
It is essential for the performing method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if w𝑤witalic_w has local support, so instead of solving the problems in the whole domain it would be reasonable to solve it locally using patches of elements. We note that the ide... |
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | The idea of using exponential decay to localize global problems was already considered by the interesting approach developed under the name of Localized Orthogonal Decomposition (LOD) [MR2831590, MR3591945, MR3246801, MR3552482] which are
related to ideas of Variational Multiscale Methods [MR1660141, MR2300286]. In the... | D |
We remark that the previously best known algorithms for finding the minimum area / perimeter all-flush triangle
take nearly linear time [6, 1, 2, 3, 23], that is, O(nlogn)𝑂𝑛𝑛O(n\log n)italic_O ( italic_n roman_log italic_n ) or O(nlog2n)𝑂𝑛superscript2𝑛O(n\log^{2}n)italic_O ( italic_n roman_log start_POSTSUP... | in the Rotate-and-Kill process,
and we are at the beginning of another iteration (b′,c′)superscript𝑏′superscript𝑐′(b^{\prime},c^{\prime})( italic_b start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT , italic_c start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ) satisfying (2). | The inclusion / circumscribing problems usually admit the property that the set of locally optimal solutions are pairwise interleaving [6]. Once this property is admitted and k=3𝑘3k=3italic_k = 3, we show that
an iteration process (also referred to as Rotate-and-Kill) can be applied for searching all the locally optim... | Using a Rotate-and-Kill process (which is shown in Algorithm 5),
we find out all the edge pairs and vertex pairs in 𝖴r,s,tsubscript𝖴𝑟𝑠𝑡\mathsf{U}_{r,s,t}sansserif_U start_POSTSUBSCRIPT italic_r , italic_s , italic_t end_POSTSUBSCRIPT that are not G-dead. | Then, during the Rotate-and-Kill process, the pair (eb,ec)subscript𝑒𝑏subscript𝑒𝑐(e_{b},e_{c})( italic_e start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT , italic_e start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT ) will meet all pairs that are not DEAD, which implies that the algorithm finds the minimum perimeter (all-... | B |
Due to the importance of information propagation for rumors and their detection, there are also different simulation studies [25, 27] about rumor propagations on Twitter. Those works provide relevant insights, but such simulations cannot fully reflect the complexity of real networks. Furthermore, there are recent work... |
As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on t... |
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure, when building our approach for early rumor detection with our extended dataset, and we provide a deep analys... | at an early stage. Our fully automatic, cascading rumor detection method follows
the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha... |
We tested all models by using 10-fold cross validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; TS−SVM𝑇𝑆𝑆𝑉𝑀TS-SVMitalic_T italic_S - italic_S italic_V it... | B |
In a follow-up work Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate, for tails for which ℓ′(u)superscriptℓ′𝑢\ell^{\prime}(u)roman_ℓ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_u ) is of the form exp(−uν)superscript𝑢𝜈... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very different on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail
part does not affect the bias. The bias is a... | The convergence of the direction of gradient descent updates to the maximum L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT margin solution, however is very slow compared to the convergence of training loss, which explains why it is worthwhile
continuing to optimize long after we have zero training ... | Perhaps most similar to our study is the line of work on understanding AdaBoost in terms its implicit bias toward large L1subscript𝐿1L_{1}italic_L start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on th... | A |
For analysing the employed features, we rank them by importances using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated to rumors and the sentiment associated to real events in relevant tweets. In specific, the average polarity score of news even... | To construct the training dataset, we collected rumor stories from the rumor tracking websites snopes.com and urbanlegends.about.com. In more detail, we crawled 4300 stories from these websites. From the
story descriptions we manually constructed queries to retrieve the relevant tweets for the 270 rumors with highest i... | We use the same dataset described in Section 4.1. In total –after cutting off 180 events for pre-training single tweet model – our dataset contains 360 events and 180 of them are labeled as rumors. As a rumor is often of a long circurlating story (friggeri2014rumor, ), this results in a rather long time span. In this w... |
Training data for single tweet classification. An event might include sub-events for which relevant tweets are rumorous. To deal with this complexity, we train our single-tweet learning model only with manually selected breaking and subless events from the above dataset. In the end, we used 90 rumors and 90 news assoc... | The time period of a rumor event is sometimes fuzzy and hard to define. One reason is a rumor may have been triggered for a long time and kept existing, but it did not attract public attention. However it can be triggered by other events after a uncertain time and suddenly spreads as a bursty event. E.g., a rumor999htt... | B |
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and types (Breaking and Anticipate) set of entity-bearing queries. This allows us to evaluate the feature performance i.e., salience and timeliness, with time and type ... | Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at aspect-level, i.e., time and event type. Thus, we... | We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, SVMall𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT, will all features, gives a rather stable performance for both NDCG and Recall... | A |
RT=𝔼{∑t=1TYt,at∗−Yt,At},subscript𝑅𝑇𝔼superscriptsubscript𝑡1𝑇subscript𝑌𝑡subscriptsuperscript𝑎𝑡subscript𝑌𝑡subscript𝐴𝑡R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\}\;,italic_R start_POSTSUBSCRIPT italic_T end_POSTSUBSCRIPT = blackboard_E { ∑ start_POSTSUBSCRIPT italic_t = 1 end_POST... | one uses p(θt|ℋ1:t)𝑝conditionalsubscript𝜃𝑡subscriptℋ:1𝑡p(\theta_{t}|\mathcal{H}_{1:t})italic_p ( italic_θ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT | caligraphic_H start_POSTSUBSCRIPT 1 : italic_t end_POSTSUBSCRIPT ) to compute the probability of an arm being optimal,
i.e., π(A|xt+1,ℋ1:t)=ℙ(A=at+1∗|xt+1,θt,... | the combination of Bayesian neural networks with approximate inference has also been investigated.
Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ... | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains,
from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many.
TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018]. | D |
In order to have a broad overview of different patients’ patterns over the one month period, we first show the figures illustrating measurements aggregated by days-in-week.
For consistency, we only consider the data recorded from 01/03/17 to 31/03/17 where the observations are most stable. | Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening.
For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements match the number carbohydrate entries – it i... | Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
In terms of physical activity, we measure the 10 minute intervals with at least 10 steps tracked by the google fit app. | The insulin intakes tend to be more in the evening, when basal insulin is used by most of the patients. The only difference happens to patient 10 and 12 whose intakes are earlier at day.
Further, patient 12 takse approx. 3 times the average insulin dose of others in the morning. | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients.
For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measuremtents after the meals is within this range, while patient 12 has only t... | C |
Table 6: A summary of the quantitative results for the models with ⊕direct-sum\oplus⊕ and without ⊖symmetric-difference\ominus⊖ an ASPP module. The evaluation was carried out on five eye tracking datasets respectively. Each network was independently trained 10 times resulting in a distribution of values characterized b... | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single 3×3333\times 33 × 3 convolutional operation that result... |
Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which... |
This representation constitutes the input to an Atrous Spatial Pyramid Pooling (ASPP) module Chen et al. (2018). It utilizes several convolutional layers with different dilation factors in parallel to capture multi-scale image information. Additionally, we incorporated scene content via global average pooling over the... |
In this work, we laid out three convolutional layers with kernel sizes of 3×3333\times 33 × 3 and dilation rates of 4, 8, and 12 in parallel, together with a 1×1111\times 11 × 1 convolutional layer that could not learn new spatial dependencies but nonlinearly combined existing feature maps. Image-level context was rep... | A |
For this example marking sequence, it is worth noting that marking the many occurrences of e𝑒eitalic_e joins several individual marked blocks into one marked block. This also intuitively explains the correspondence between the locality number and the maximum number of occurrences per symbol (in condensed words): if th... | In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into grap... |
A repetitive structure often leads to high locality. For example, note that tutustuttu from above is nearly a repetition. Regarding the question of how repetitions of a word affect its locality number, we can show the following result (see the Appendix for a proof). | The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local... | Regarding the locality of Zisubscript𝑍𝑖Z_{i}italic_Z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT, note that marking x2subscript𝑥2x_{2}italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT leads to 2i−2superscript2𝑖22^{i-2}2 start_POSTSUPERSCRIPT italic_i - 2 end_POSTSUPERSCRIPT marked blocks; further, marking x1subsc... | B |
Then, they segmented the RR intervals to 30 samples each and fed them to a network with two layers followed by a pooling layer and a LSTM layer with 100 units.
The method was validated on MITDB and NSRDB achieving an accuracy that indicates its generalizability. | In their article Luo et al.[79] utilized quality assessment to remove low quality heartbeats, two median filters for removing power line noise, high-frequency noise and baseline drift.
Then, they used a derivative-based algorithm to detect R-peaks and time windows to segment each heartbeat. | Yang et al.[81] normalized the ECG and then fed it to a Stacked Sparse AE (SSAE) which they fine-tuned.
They classify on six types of arrhythmia achieving accuracy of 99.5% while also demonstrating the noise resilience of their method with artificially added noise. | At each iteration the expert annotates the most uncertain ECG beats in the test set, which are then used for training, while the output of the network assigns the confidence measures to each test beat.
Experiments performed on MITDB, INDB, SVDB indicate the robustness and computational efficiency of the method. | In[90] the authors added noise signals from the NSTDB to the MITDB and then used scale-adaptive thresholding WT to remove most of the noise and a denoised AE to remove the residual noise.
Their experiments indicated that increasing the number of training data to 1000 the signal-to-noise ratio increases dramatically aft... | D |
This demonstrates that SimPLe excels in a low data regime, but its advantage disappears with a bigger amount of data.
Such a behavior, with fast growth at the beginning of training, but lower asymptotic performance is commonly observed when comparing model-based and model-free methods (Wang et al. (2019)). As observed ... | We focused our work on learning games with 100100100100K interaction steps with the environment. In this section we present additional results for settings with 20202020K, 50505050K, 200200200200K, 500500500500K and 1111M interactions; see Figure 5 (a).
Our results are poor with 20202020K interactions. For 50505050K th... | Finally, we verified if a model obtained with SimPLe using 100100100100K is a useful initialization for model-free PPO training.
Based on the results depicted in Figure 5 (b) we can positively answer this conjecture. Lower asymptotic performance is probably due to worse exploration. A policy pre-trained with SimPLe was... | The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good pol... |
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, ... | B |
A high level overview of these combined methods is shown in Fig. 1.
Although we choose the EEG epileptic seizure recognition dataset from University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could be generalized in any kind of signal classification problem. | For the CNN modules with one and two layers, xisubscript𝑥𝑖x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is converted to an image using learnable parameters instead of some static procedure.
The one layer module consists of one 1D convolutional layer (kernel sizes of 3333 with 8888 channels). | Here we also refer to CNN as a neural network consisting of alternating convolutional layers each one followed by a Rectified Linear Unit (ReLU) and a max pooling layer and a fully connected layer at the end while the term ‘layer’ denotes the number of convolutional layers.
| Architectures of all bdsubscript𝑏𝑑b_{d}italic_b start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT remained the same, except for the number of the output nodes of the last linear layer which was set to five to correspond to the number of classes of D𝐷Ditalic_D.
An example of the respective outputs of some of the m𝑚mita... | The two layer module consists of two 1D convolutional layers (kernel sizes of 3333 with 8888 and 16161616 channels) with the first layer followed by a ReLU activation function and a 1D max pooling operation (kernel size of 2222).
The feature maps of the last convolutional layer for both modules are then concatenated al... | B |
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... |
The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we ... |
The implementation of the energy criterion strategy has proven effective in facilitating autonomous locomotion mode transitions for the Cricket robot when negotiating steps of varying heights. Compared to step negotiation purely in rolling locomotion mode, the proposed strategy demonstrated significant enhancements in... | In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal... | Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... | B |
In other words, the algorithm designer can hedge against untrusted advice, by a small sacrifice in the trusted performance. Thus we can interpret r𝑟ritalic_r as the “risk” for trusting the advice: the smaller the r𝑟ritalic_r, the bigger the risk.
Likewise, for the list update problem, our (r,f(r))𝑟𝑓𝑟(r,f(r))( ita... |
All the above results pertain to deterministic online algorithms. In Section 6, we study the power of randomization in online computation with untrusted advice. First, we show that the randomized algorithm of Purohit et al. [29] for the ski rental problem Pareto-dominates any deterministic algorithm, even when the lat... | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... | We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem.
Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size. | As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation.
Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online alg... | A |
category𝑐𝑎𝑡𝑒𝑔𝑜𝑟𝑦categoryitalic_c italic_a italic_t italic_e italic_g italic_o italic_r italic_y.DICTIONARY[word𝑤𝑜𝑟𝑑worditalic_w italic_o italic_r italic_d] ←←\leftarrow← category𝑐𝑎𝑡𝑒𝑔𝑜𝑟𝑦categoryitalic_c italic_a italic_t italic_e italic_g italic_o italic_r italic_y.DICTIONA... |
Our approach to calculating gv𝑔𝑣gvitalic_g italic_v, as we will see later, tries to overcome some problems arising from the valuation of words only based on local information to a category. This is carried out by, firstly, computing a word local value (lv𝑙𝑣lvitalic_l italic_v) for every category, and secondly, c... | However, this instance of SS3 would not effectively fulfill our goals since terms would be valued simply and solely by their local raw frequency, which is precisely the problem that gv𝑔𝑣gvitalic_g italic_v computation tries to overcome.
For instance, under this “local view” of words, highly discriminatory words for ... | That is, when gv𝑔𝑣gvitalic_g italic_v is only applied to a word it outputs a vector in which each component is the global value of that word for each category cisubscript𝑐𝑖c_{i}italic_c start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT.
For instance, following the above example, we have: | global value (green) in relation to the local value (orange) for the “depressed” category. The abscissa represents individual words arranged in order of frequency.
Note that the zone in which stop words are located (close to 0 in the abscissa) the local value is very high (since they are highly frequent words) but the ... | A |
We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021).
We ado... | We run DMSGD, DGC (w/ mfm), DGC (w/o mfm) and GMC respectively to solve the optimization problem: min𝐰∈ℝdF(𝐰)subscript𝐰superscriptℝ𝑑𝐹𝐰\min_{{\bf w}\in{\mathbb{R}}^{d}}F({\bf w})roman_min start_POSTSUBSCRIPT bold_w ∈ blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT end_POSTSUBSCRIPT italic_F ( bol... | Since the server is typically the busiest node in parameter server architecture, we consider the communication cost on the server in our experiments.
For DMSGD which doesn’t use any communication compression techniques, the communication cost on the server includes receiving vectors from the K𝐾Kitalic_K workers and se... | In the experiments of (Lin et al., 2018), DGC gets far better performance on both accuracy and communication cost than quantization methods. Hence, we do not compare with quantization methods in this paper.
We don’t use the warm-up strategy in the experiments. The momentum coefficient β𝛽\betaitalic_β is set as 0.90.90... | We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021).
We ado... | C |
From the point of view of Sparse Dictionary Learning, SANs kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input the kernels of SANs are ECG beats) and the sparse activation map as the representation.
The fact that SANs are wide... | φ𝜑\varphiitalic_φ could be seen as an alternative formalization of Occam’s razor [38] to Solomonov’s theory of inductive inference [39] but with a deterministic interpretation instead of a probabilistic one.
The cost of the description of the data could be seen as proportional to the number of weights and the number o... | An advantage of SANs compared to Sparse Autoencoders [37] is that the constrain of activation proximity can be applied individually in each example instead of requiring the computation of forward-pass of all examples.
Additionally, SANs create exact zeros instead near-zeros, which reduces co-adaptation between instance... | From the point of view of Sparse Dictionary Learning, SANs kernels could be seen as the atoms of a learned dictionary specializing in interpretable pattern matching (e.g. for Electrocardiogram (ECG) input the kernels of SANs are ECG beats) and the sparse activation map as the representation.
The fact that SANs are wide... | In neural networks sparseness can be applied on the connections between neurons, or in the activation maps [14].
Although sparseness in the activation maps is usually enforced in the loss function by adding a L1,2subscript𝐿12L_{1,2}italic_L start_POSTSUBSCRIPT 1 , 2 end_POSTSUBSCRIPT regularization or Kullback-Leibler... | B |
In the large-scale UAV ad-hoc networks, the number of UAVs is another feature that should be investigated. Since the demanding channel’s capacity should not be more than the channel’s size we provide, we limit the number of UAVs in the tolerance range which satisfies that each UAV’s channel selection is contented. In t... |
Fig. 12 shows how the number of UAVs affect the computation complexity of SPBLLA. Since the total number of UAVs is diverse, the goal functions are different. The goal functions’ value in the optimum states increase with the growth in UAVs’ number. Since goal functions are the summation function of utility functions, ... | Fig. 12 presents the sketch diagram of a UAV’s utility with power altering. The altitudes of UAVs are fixed. When other UAVs’ power profiles are altering, the interference increases and the curve moves down. The high interference will reduce the utility of the UAV. Fig. 12 also shows that utility decreases and increase... | In the large-scale UAV ad-hoc networks, the number of UAVs is another feature that should be investigated. Since the demanding channel’s capacity should not be more than the channel’s size we provide, we limit the number of UAVs in the tolerance range which satisfies that each UAV’s channel selection is contented. In t... |
where A𝐴Aitalic_A, B𝐵Bitalic_B and C𝐶Citalic_C are balance indices that balance three utilities on the basis of post-disaster scenario. The ultimate goal for enlarging the utility of the networks is to enlarge the summation of utility function (9) of each UAV, and we define the global utility function as the goal f... | A |
are standard. The boundary conditions and closure for this model (namely,
definitions of thermal fluxes 𝐪isubscript𝐪𝑖\mathbf{q}_{i}bold_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and 𝐪esubscript𝐪𝑒\mathbf{q}_{e}bold_q start_POSTSUBSCRIPT italic_e end_POSTSUBSCRIPT, | \underbrace{\overline{Q}_{\pi}}_{-\underline{\boldsymbol{\pi}}:\nabla\mathbf{v%
}}+\overline{Q}_{\zeta}\right]= - over¯ start_ARG bold_v end_ARG ⋅ ( over¯ start_ARG over¯ start_ARG ∇ end_ARG end_ARG over¯ start_ARG italic_p end_ARG start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) - italic_γ over¯ start_ARG italic_p end... | \gamma-1)\left(-\nabla\cdot\mathbf{q}_{i}+Q_{ie}-\underline{\boldsymbol{\pi}}:%
\nabla\mathbf{v}\right)over˙ start_ARG italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_ARG = - bold_v ⋅ ∇ italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT - italic_γ italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT... | viscous stress tensor 𝝅¯¯𝝅\underline{\boldsymbol{\pi}}under¯ start_ARG bold_italic_π end_ARG and ion-electron
heat exchange rate Qiesubscript𝑄𝑖𝑒Q_{ie}italic_Q start_POSTSUBSCRIPT italic_i italic_e end_POSTSUBSCRIPT) will be discussed in section 3.2. | the species heat exchange term Q¯iesubscript¯𝑄𝑖𝑒\overline{Q}_{ie}over¯ start_ARG italic_Q end_ARG start_POSTSUBSCRIPT italic_i italic_e end_POSTSUBSCRIPT, the resistive
diffusion coefficient η¯¯𝜂\overline{\eta}over¯ start_ARG italic_η end_ARG, and the heat flux density | C |
When using the framework, one can further require reflexivity on the comparability functions, i.e. f(xA,xA)=1A𝑓subscript𝑥𝐴subscript𝑥𝐴subscript1𝐴f(x_{A},x_{A})=1_{A}italic_f ( italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ) = 1 start_POSTSUBSCRIP... | Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it.
Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows to consider absent values as possibly | When using the framework, one can further require reflexivity on the comparability functions, i.e. f(xA,xA)=1A𝑓subscript𝑥𝐴subscript𝑥𝐴subscript1𝐴f(x_{A},x_{A})=1_{A}italic_f ( italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ) = 1 start_POSTSUBSCRIP... | fA(u,v)=fB(u,v)={1if u=v≠nullaif u≠null,v≠null and u≠vbif u=v=null0otherwise.subscript𝑓𝐴𝑢𝑣subscript𝑓𝐵𝑢𝑣cases1if 𝑢𝑣null𝑎formulae-sequenceif 𝑢null𝑣null and 𝑢𝑣𝑏if 𝑢𝑣null0otherwise.f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\
a&\text{if }u\neq\texttt{null},v\neq\texttt{null}... | Intuitively, if an abstract value xAsubscript𝑥𝐴x_{A}italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT of ℒAsubscriptℒ𝐴\mathcal{L}_{A}caligraphic_L start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT is interpreted as 1111 (i.e., equality)
by hAsubscriptℎ𝐴h_{A}italic_h start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT... | A |
This phenomenon introduces a positive bias that may lead to asymptotically sub-optimal policies, distorting the cumulative rewards. The majority of analytical and empirical studies suggest that overestimation typically stems from the max operator used in the Q-learning value function. Additionally, the noise from appro... |
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments, this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning namely the supervised learning and the unsupervised learning. Rein... | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optim... |
The sources of DQN variance are Approximation Gradient Error(AGE)[23] and Target Approximation Error(TAE)[24]. In Approximation Gradient Error, the error in gradient direction estimation of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes b... |
Figure 6 shows the loss metrics of the three algorithms in CARTPOLE environment, this implies that using Dropout-DQN methods introduce more accurate gradient estimation of policies through iterations of different learning trails than DQN. The rate of convergence of one of Dropout-DQN methods has done more iterations t... | C |
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, ... | Mask R-CNN has also been used for segmentation tasks in medical image analysis such as automatically segmenting and tracking cell migration in phase-contrast microscopy (Tsai et al., 2019), detecting and segmenting nuclei from histological and microscopic images (Johnson, 2018; Vuola et al., 2019; Wang et al., 2019a, b... | V-Net (Milletari et al., 2016) and FCN (Long et al., 2015). Sinha and Dolz (2019) proposed a multi-level attention based architecture for abdominal organ segmentation from MRI images. Qin et al. (2018) proposed a dilated convolution base block to preserve more detailed attention in 3D medical image segmentation. Simil... |
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, ... | Bischke et al. (2019) proposed a cascaded multi-task loss to preserve boundary information from segmentation masks for segmenting building footprints and achieved state-of-the-art performance on an aerial image labeling task. He et al. (2017) extended Faster R-CNN (Ren et al., 2015) by adding a new branch to predict th... | A |
Interestingly, the Dense architecture achieves the best performance on MUTAG, indicating that in this case, the connectivity of the graps does not carry useful information for the classification task.
The performance of the Flat baseline indicates that in Enzymes and COLLAB pooling operations are not necessary to impro... | Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d) the edges of the Laplacians at coarsening level 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP.
| Contrarily to graph classification, DiffPool and TopK𝐾Kitalic_K fail to solve this task and achieve an accuracy comparable to random guessing.
On the contrary, the topological pooling methods obtain an accuracy close to a classical CNN, with NDP significantly outperforming the other two techniques. | In Fig. 7, we report the training time for the five different pooling methods.
As expected, GNNs configured with GRACLUS, NMF, and NDP are much faster to train compared to those based on DiffPool and TopK𝐾Kitalic_K, with NDP being slightly faster than the other two topological methods. |
When compared to other methods for graph pooling, NDP performs significantly better than other techniques that pre-compute the topology of the coarsened graphs, while it achieves a comparable performance with respect to state-of-the-art feature-based pooling methods. | C |
Fernández-Delgado et al. (2014) conduct extensive experiments comparing 179 classifiers on 121 UCI datasets (Dua & Graff, 2017). The authors show that random forests perform best, followed by support vector machines with a radial basis function kernel. Therefore, random forests are often considered as a reference for n... | Random forests are trained with 500500500500 decision trees, which are commonly used in practice (Fernández-Delgado et al., 2014; Olson et al., 2018).
The decision trees are constructed up to a maximum depth of ten. For splitting, the Gini Impurity is used and N𝑁\sqrt{N}square-root start_ARG italic_N end_ARG features ... | Neural networks have become very popular in many areas, such as computer vision (Krizhevsky et al., 2012; Reinders et al., 2022; Ren et al., 2015; Simonyan & Zisserman, 2015; Zhao et al., 2017; Qiao et al., 2021; Rudolph et al., 2022; Sun et al., 2021), speech recognition (Graves et al., 2013; Park et al., 2019; Sun et... | Mapping random forests into neural networks is already used in many applications such as network initialization (Humbird et al., 2019), camera localization (Massiceti et al., 2017), object detection (Reinders et al., 2018, 2019), or semantic segmentation (Richmond et al., 2016).
State-of-the-art methods (Massiceti et a... | The generalization performance has been widely studied. Zhang et al. (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data. Bornschein et al. (2020) analyze the performance across different dataset sizes.
Olson et al. (2018) evaluate the performance of modern... | D |
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ... | To answer this question, we propose the first policy optimization algorithm that incorporates exploration in a principled manner. In detail, we develop an Optimistic variant of the PPO algorithm, namely OPPO. Our algorithm is also closely related to NPG and TRPO. At each update, OPPO solves a Kullback-Leibler (KL)-regu... | The policy improvement step defined in (3.2) corresponds to one iteration of NPG (Kakade, 2002), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017). In particular, PPO solves the same KL-regularized policy optimization subproblem as in (3.2) at each iteration, while TRPO solves an equivalent KL-constrained s... |
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed as OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po... | step with α→∞→𝛼\alpha\rightarrow\inftyitalic_α → ∞ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy π∗superscript𝜋\pi^{*}italic_π start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT within K=H𝐾𝐻K=Hitalic_K = italic_H episodes and hence equivalently induces... | A |
Compared to ResNets, DenseNets achieve similar performance, allow for even deeper architectures, and they are more parameter and computation efficient.
However, the DenseNet architecture is highly non-uniform which complicates the hardware mapping and ultimately slows down training. | Section 5.1 explored the impact of several network quantization approaches and structured pruning on the prediction quality.
In this section. we use the well-performing LQ-Net approach for quantization and PSP (for channel pruning) to measure the inference throughput of the quantized and pruned models separately on an ... | In this regard, resource-efficient neural networks for embedded systems are concerned with the trade-off between prediction quality and resource efficiency (i.e., representational efficiency and computational efficiency). This is highlighted in Figure 1.
Note that this requires observing overall constraints such as pre... | By using depthwise-separable convolutions, the number of trainable parameters as well as the number of multiply-accumulate operations (MACs) can be substantially reduced.
It is empirically shown that this has little to no negative impact on prediction quality. | The challenge is to reduce the number of bits as much as possible while at the same time keeping the prediction accuracy close to that of a well-tuned full-precision DNN.
Subsequently, we provide a literature overview of approaches that train reduced-precision DNNs, and, in a broader view, we also consider methods that... | C |
(iλ,λ′)∗(ω0)=ω1+ω2subscriptsubscript𝑖𝜆superscript𝜆′subscript𝜔0subscript𝜔1subscript𝜔2(i_{\lambda,\lambda^{\prime}})_{*}(\omega_{0})=\omega_{1}+\omega_{2}( italic_i start_POSTSUBSCRIPT italic_λ , italic_λ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT end_POSTSUBSCRIPT ) start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ( ita... |
ω2 is the degree-1 homology class induced bysubscript𝜔2 is the degree-1 homology class induced by\displaystyle\omega_{2}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT is the degree-1 homology class induced by |
ω0 is the degree-1 homology class induced bysubscript𝜔0 is the degree-1 homology class induced by\displaystyle\omega_{0}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the degree-1 homology class induced by | ω1 is the degree-1 homology class induced bysubscript𝜔1 is the degree-1 homology class induced by\displaystyle\omega_{1}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT is the degree-1 homology class induced by
| and seeks the infimal r>0𝑟0r>0italic_r > 0 such that the map induced by ιrsubscript𝜄𝑟\iota_{r}italic_ι start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at n𝑛nitalic_n-th homology level annihilates the fundamental class [M]delimited-[]𝑀[M][ italic_M ] of M𝑀Mitalic_M. This infimal value defines FillRad(M)FillRad𝑀\m... | B |
In our use case, we chose the Pima Indian Diabetes data set [62] to illustrate how t-viSNE can lead to a better overview, quality of the results, dimension understanding, and even performance improvements. The data set includes 768 female patients of Pima Indian heritage, aged between 21 to 81. The main task in this e... |
The main goal of the Shepard Heatmap is to offer a broad, simplified overview of the accuracy of the projection in terms of distance preservation: cells close to the main diagonal of the heatmap indicate that the respective pairs of instances have been represented in the 2222-D space with distances that are comparable... |
Adaptive Parallel Coordinates Plot Our first proposal to support the task of interpreting patterns in a t-SNE projection is an Adaptive PCP [59], as shown in Figure 1(k). It highlights the dimensions of the points selected with the lasso tool, using a maximum of 8 axes at any time, to avoid clutter. The shown axes (... | Overall Accuracy
We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q... | After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ... | C |
When should a new nature-inspired algorithm be introduced?: The authors analyze the cases in which it is necessary to create novel algorithms. In their words, “They could be used as global optimizers, while a heuristic algorithm could be added for acting as local search technique for the solutions provided by the natur... |
A critical point of reflection associated with this explosion of proposals has been that novel metaphors do not lead to new solvers, and that comparisons undergo serious methodological problems. Although there are increasingly more bio-inspired algorithms, many of them rely on so-claimed novel metaphors that do not cr... |
In Section 7, we pay attention from a triple critical position as it was pointed out in [2], highlighting the good (a present and future plenty of exciting applications), the bad (novel metaphors not leading to innovative solvers, going deeper into the group of works that criticize the lack of novelty of the new propo... | The rest of this paper is organized as follows. In Section 2, we examine previous surveys, taxonomies, and reviews of nature- and bio-inspired algorithms reported so far in the literature. Section 3 delves into the taxonomy based on the inspiration of the algorithms. In Section 4, we present and populate the taxonomy b... | Due to “useless metaphors”, “lack of novelty” and “poor experimental validation and comparison”, in [16] authors took the decision in this letter to “call upon all editors-in-chief in the field to adapt their editorial policies” to reject the publication of novel metaphor-based metaheuristics. More than 80 important re... | D |
In this paper, matrices and vectors are represented by uppercase and lowercase letters respectively.
A graph is represented as 𝒢=(𝒱,ℰ,𝒲)𝒢𝒱ℰ𝒲\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{W})caligraphic_G = ( caligraphic_V , caligraphic_E , caligraphic_W ) and |⋅||\cdot|| ⋅ | is the size of some set. Vectors whose ... | Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node while the latter learns to distinguish whether an edge exists between two nodes directly.
In recent y... | As well as the well-known k𝑘kitalic_k-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method.
Graph-based clustering methods can capture manifold information so that they are available for the non-Euclidean type data, which is not provided by k𝑘kitalic_k-means. Therefore,... |
In recent years, GCNs have been studied a lot to extend neural networks to graph type data. How to design a graph convolution operator is a key issue and has attracted a mass of attention. Most of them can be classified into 2 categories, spectral methods [24] and spatial methods[25]. | However, the existing methods are limited to graph type data while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods.
In this paper, we propo... | C |
Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 20... | Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 20... | Requirements on Internet studies. The key requirements for conducting Internet studies upon which conclusions can be drawn include scalable measurement infrastructure, good coverage of the Internet and a representative selection of measurement’s vantage points. We summarise the limitations of the previous studies below... |
∙∙\bullet∙ Limited representativeness. Volunteer or crowd-sourcing studies, such as the Spoofer Project (Lone et al., 2018), are inherently limited due to bias introduced by the participants. These measurements are performed using a limited number of vantage points, which are set up in specific networks, and hence are... |
Our work provides the first comprehensive view of ingress filtering in the Internet. We showed how to improve the coverage of the Internet in ingress filtering measurements to include many more ASes that were previously not studied. Our techniques allow to cover more than 90% of the Internet ASes, in contrast to best ... | B |
While context did introduce more parameters to the model (7,57575757{,}5757 , 575 parameters without context versus 14,3151431514{,}31514 , 315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer ... |
One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regio... |
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design... | This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ... | The estimation of context by learned temporal patterns should be most effective when the environment results in recurring or cyclical patterns, such as in cyclical variations of temperature and humidity and regular patterns of human behavior generating interferents. In such cases, the recurrent pathway can identify use... | D |
We use the same definition for A(1)[i,B]superscript𝐴1𝑖𝐵A^{(1)}[i,B]italic_A start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT [ italic_i , italic_B ] for all B∈ℬi(1)𝐵superscriptsubscriptℬ𝑖1B\in\mathcal{B}_{i}^{(1)}italic_B ∈ caligraphic_B start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( 1 ) e... | A(2)[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi(2) and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching M.assignsuperscript𝐴2𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B... |
A[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching M.assign𝐴𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi and x is a re... |
A[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching M.assign𝐴𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B and x is a real num... | A(1)[i,B]:={A representative set containing pairs (M,x), where M is a perfect matching on B∈ℬi(1) and x is a real number equal to the minimum total length of a path cover of P0∪⋯∪Pi−1∪B realizing the matching Massignsuperscript𝐴1𝑖𝐵casesA representative set containing pairs (M,x), where M is a perfect matching on B∈... | D |
We conclude this section by presenting a pair S,T𝑆𝑇S,Titalic_S , italic_T of semigroups without a homomorphism S→T→𝑆𝑇S\to Titalic_S → italic_T or T→S→𝑇𝑆T\to Sitalic_T → italic_S where S𝑆Sitalic_S and T𝑇Titalic_T possess typical properties of automaton semigroups, which makes them good candidates for also belong... |
The word problem of a semigroup finitely generated by some set Q𝑄Qitalic_Q is the decision problem whether two input words over Q𝑄Qitalic_Q represent the same semigroup element. The word problem of any automaton semigroup can be solved in polynomial space and, under common complexity theoretic assumptions, this cann... | The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem... | A semigroup arising in this way is called self-similar. Furthermore, if the generating automaton is finite, it is an automaton semigroup.
If the generating automaton is additionally complete, we speak of a completely self-similar semigroup or of a complete automaton semigroup. | A semigroup S𝑆Sitalic_S is generated by a set Q𝑄Qitalic_Q if every element s∈S𝑠𝑆s\in Sitalic_s ∈ italic_S can be written as a product q1…qnsubscript𝑞1…subscript𝑞𝑛q_{1}\dots q_{n}italic_q start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT … italic_q start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT of factors from Q𝑄Qitalic... | A |
As shown in Table 1, we present results when this loss is used on: a) Fixed subset covering 1%percent11\%1 % of the dataset, b) Varying subset covering 1%percent11\%1 % of the dataset, where a new random subset is sampled every epoch and c) 100%percent100100\%100 % of the dataset. Confirming our hypothesis, all varian... | It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in ... | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible... |
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the p... | While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented... | A |
We downloaded the URL dump of the May 2019 archive.333https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2019-22/cc-index.paths.gz Common Crawl reports that the archive contains 2.65 billion web pages or 220 TB of uncompressed content which were crawled between 19th and 27th of May, 2019. We applied a selection cr... |
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da... | We selected those URLs which had the word “privacy” or the words “data” and “protection” from the Common Crawl URL archive. We were able to extract 3.9 million URLs that fit this selection criterion. Informal experiments suggested that this selection of keywords was optimal for retrieving the most privacy policies with... |
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ... |
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus PrivaSeer Corpus on privacy policies ... | B |
Workflow. E1, E2, and E3 agreed that the workflow of StackGenVis made sense.
They all suggested that data wrangling could happen before the algorithms’ exploration, but also that it is usual to first train a few algorithms and then, based on their predictions, wrangle the data. | Interpretability and explainability is another challenge (mentioned by E3) in complicated ensemble methods, which is not necessarily always a problem depending on the data and the tasks. However, the utilization of user-selected weights for multiple validation metrics is one way towards interpreting and trusting the re... |
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model leads us to stable design goals and analytical tasks that were realized by StackGenVis. With the c... |
To illustrate how to choose different metrics (and with which weights), we start our exploration by selecting the heart disease data set in StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(a). Knowing that the data set is balanced, we pick accuracy (weight... | Thus, it is considered an iterative process: the expert might start with the algorithms’ exploration and move to the data wrangling, or vice versa. “The former approach is even more suitable for your VA system, because you use the accuracy of the base ML models as feedback/guidance to the expert in order to understand ... | D |
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and
(v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these | (E𝐂,(2¯,(u2,[013])))superscript𝐸𝐂¯2subscript𝑢2delimited-[]013(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))( italic_E start_POSTSUPERSCRIPT bold_C end_POSTSUPERSCRIPT , ( over¯ start_ARG 2 end_ARG , ( italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , [ 013 ] ) ) ),
(E𝐂,((u1,[112]),(u2,[010])))superscript𝐸𝐂subscr... | cannot be adjacent to 2¯¯2\overline{2}over¯ start_ARG 2 end_ARG nor 3¯¯3\overline{3}over¯ start_ARG 3 end_ARG,
and so f′superscript𝑓′f^{\prime}italic_f start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT is [013]delimited-[]013[013][ 013 ] or [010]delimited-[]010[010][ 010 ]. | Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of
(v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that | By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and
(v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these | C |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparative poorly in task-specific adaptation.
Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the languag... | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... | In text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | B |
In such mission-driven UAV networks, high-data-rate inter-UAV communications play a pivotal role. MmWave band has abundant spectrum resource, and is considered as a potential avenue to support high-throughput data transmission for UAV networks [9, 10, 7]. If the Line-of-Sight (LoS) propagation is available, mmWave comm... |
When considering UAV communications with UPA or ULA, a UAV is typically modeled as a point in space without considering its size and shape. Actually, the size and shape can be utilized to support more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduce... |
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da... | In such mission-driven UAV networks, high-data-rate inter-UAV communications play a pivotal role. MmWave band has abundant spectrum resource, and is considered as a potential avenue to support high-throughput data transmission for UAV networks [9, 10, 7]. If the Line-of-Sight (LoS) propagation is available, mmWave comm... | For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac... | A |
The sentences PRESϕ∞superscriptsubscriptPRESitalic-ϕ\textsf{{PRES}}_{\phi}^{\infty}PRES start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ∞ end_POSTSUPERSCRIPT and PRESϕsubscriptPRESitalic-ϕ\textsf{{PRES}}_{\phi}PRES start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT
are as required by Theorem 3.7. | Note that we assume that the number of behavior functions of column j𝑗jitalic_j in A𝐴Aitalic_A
is the same as the number of behavior functions of column j′superscript𝑗′j^{\prime}italic_j start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT in B𝐵Bitalic_B for every j∈[m]𝑗delimited-[]𝑚j\in[m]italic_j ∈ [ italic_m ] and ever... | a Type-Behavior Partitioned Graph Vector associated to a graph representation G𝒜subscript𝐺𝒜G_{\mathcal{A}}italic_G start_POSTSUBSCRIPT caligraphic_A end_POSTSUBSCRIPT for a model 𝒜𝒜\mathcal{A}caligraphic_A of ϕitalic-ϕ\phiitalic_ϕ.
The sentence PRESϕsubscriptPRESitalic-ϕ\textsf{{PRES}}_{\phi}PRES start_POSTSUBSCRI... | We can then consider the vector of subgraphs G𝒜,πsubscript𝐺𝒜𝜋G_{\mathcal{A},\pi}italic_G start_POSTSUBSCRIPT caligraphic_A , italic_π end_POSTSUBSCRIPT and G𝒜,π,π′subscript𝐺𝒜𝜋superscript𝜋′G_{\mathcal{A},\pi,\pi^{\prime}}italic_G start_POSTSUBSCRIPT caligraphic_A , italic_π , italic_π start_POSTSUPERSCRIPT ′ en... | Note that in a Type-Behavior Partitioned Graph Vector, information about 2222-types is coded in both the edge relation and in the partition, since the partition
is defined via behavior functions. Thus there are additional dependencies on sizes for a Type-Behavior Partitioned Graph Vector of a model of ϕitalic-ϕ\phiital... | D |
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... | In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
|
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... | B |
Multilingual translation uses a single model to translate between multiple language pairs Firat et al. (2016); Johnson et al. (2017); Aharoni et al. (2019). Model capacity has been found crucial for massively multilingual NMT to support language pairs with varying typological characteristics Zhang et al. (2020); Xu et ... | For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et... | It is a common problem that increasing the depth does not always lead to better performance, whether with residual connections Li et al. (2022b) or other previous studies on deep Transformers Bapna et al. (2018); Wang et al. (2019); Li et al. (2022a), and the use of wider models is the usual method of choice for furthe... |
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ... |
To test the effectiveness of depth-wise LSTMs in the multilingual setting, we conducted experiments on the challenging massively many-to-many translation task on the OPUS-100 corpus Tiedemann (2012); Aharoni et al. (2019); Zhang et al. (2020). We tested the performance of 6-layer models following the experiment settin... | D |
topology ττ\uptauroman_τ whenever ∀U∈τ,∀A∈U,∃V∈ℬ,A∈V⊆Uformulae-sequencefor-all𝑈τformulae-sequencefor-all𝐴𝑈formulae-sequence𝑉ℬ𝐴𝑉𝑈\forall U\in\uptau,\forall A\in U,\exists V\in\mathcal{B},A\in V\subseteq U∀ italic_U ∈ roman_τ , ∀ italic_A ∈ italic_U , ∃ italic_V ∈ caligraphic_B , italic_A ∈ italic_V ⊆ italic_U.
A ... | and ⟦𝖤𝖥𝖮[σ]⟧Struct(σ)\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}⟦ sansserif_EFO [ roman_σ ] ⟧ start_POSTSUBSCRIPT roman_Struct ( roman_σ ) end_POSTSUBSCRIPT are the same, i.e.,
⟨τ⊆i∩⟦𝖥𝖮[σ]⟧Struct(σ)⟩=⟨⟦𝖤𝖥𝖮[σ]⟧Struct(σ)⟩\left\langle\uptau_{\subseteq_{i}}\cap\llbracket\mathsf{F... | \llbracket\varphi\rrbracket_{\operatorname{Struct}(\upsigma)}\subseteq Uitalic_A ∈ ⟦ italic_ψ start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ⟧ start_POSTSUBSCRIPT roman_Struct ( roman_σ ) end_POSTSUBSCRIPT ⊆ ⟦ italic_φ ⟧ start_POSTSUBSCRIPT roman_Struct ( roman_σ ) end_POSTSUBSCRIPT ⊆ italic_U. Therefore, ⟦𝖤𝖥𝖮[σ]⟧St... | ⟦ψA⟧Struct(σ)∈⟦𝖤𝖥𝖮[σ]⟧Struct(σ)\llbracket\psi_{A}\rrbracket_{\operatorname{Struct}(\upsigma)}\in\llbracket%
\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}⟦ italic_ψ start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ⟧ start_POSTSUBSCRIPT roman_Struct ( roman_σ ) end_POSTSUBSCRIPT ∈ ⟦ sansserif_EFO ... | ⟦𝖥𝖮[σ]⟧Struct(σ)\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}⟦ sansserif_FO [ roman_σ ] ⟧ start_POSTSUBSCRIPT roman_Struct ( roman_σ ) end_POSTSUBSCRIPT
and ⟦𝖤𝖥𝖮[σ]⟧Struct(σ)\llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}⟦ sansserif_EFO [ roman_σ ] ⟧ sta... | D |
We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen... |
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify... |
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, including the highest metrics on PSNR and SSIM, as well as the lowest metric on MDLD. Specifically, compared with the traditional methods [23, 24] based on the hand-crafted features, our approach overcomes the scene l... | We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen... |
The comparison results of the real distorted image are shown in Fig. 13. We collect the real distorted images from the videos on YouTube, captured by popular fisheye lenses, such as the SAMSUNG 10mm F3, Rokinon 8mm Cine Lens, Opteka 6.5mm Lens, and GoPro. As illustrated in Fig. 13, our approach generates the best rect... | D |
Table 3 shows the training time per epoch of SNGM with different batch sizes. When B=128𝐵128B=128italic_B = 128, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with the size of 16, which can not fully utilize the computation power. Hence, compared to other results, SNGM r... | Table 3 shows the training time per epoch of SNGM with different batch sizes. When B=128𝐵128B=128italic_B = 128, SNGM has to execute communication frequently and each GPU only computes a mini-batch gradient with the size of 16, which can not fully utilize the computation power. Hence, compared to other results, SNGM r... | Please note that EXTRAP-SGD has two learning rates for tuning and needs to compute two mini-batch gradients in each iteration. EXTRAP-SGD requires more time than other methods to tune hyperparameters and train models.
Similarly, CLARS needs to compute extra mini-batch gradients to estimate the layer-wise learning rate ... |
A direct corollary is that the batch size is constrained by the smoothness constant L𝐿Litalic_L, i.e., B≤𝒪(1/L)𝐵𝒪1𝐿B\leq{\mathcal{O}}(1/L)italic_B ≤ caligraphic_O ( 1 / italic_L ). Hence, we cannot increase the batch size casually in these SGD based methods. Otherwise, it may slow down the convergence rate, and ... | argued that SGD with a large batch size needs to increase the number of iterations. Further, authors in [32]
observed that gradients at different layers of deep neural networks vary widely in the norm and proposed the layer-wise adaptive rate scaling (LARS) method. A similar method that updates the model parameter in a... | B |
Our main goal is to develop algorithms for the black-box setting. As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms ... |
We remark that if we make an additional assumption that the stage-II cost is at most some polynomial value ΔΔ\Deltaroman_Δ, we can use standard SAA techniques without discarding scenarios; see Theorem 2.6 for full details. However, this assumption is stronger than is usually used in the literature for two-stage stocha... | Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ... | An outbreak is an instance from 𝒟𝒟\mathcal{D}caligraphic_D, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions.
To continue this example, there may be further constraints on FIsubscrip... |
Unfortunately, standard SAA approaches [26, 7] do not directly apply to radius minimization problems. On a high level, the obstacle is that radius-minimization requires estimating the cost of each approximate solution; counter-intuitively, this may be harder than optimizing the cost (which is what is done in previous ... | D |
In real networked systems, the information exchange among nodes is often affected by communication noises, and the structure of the network often changes randomly due to packet dropouts, link/node failures and recreations, which are studied in [8]-[10].
| such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), et al. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function... | However, a variety of random factors may co-exist in practical environment.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the d... |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... | Besides, the network graphs may change randomly with spatial and temporal dependency (i.e. Both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent.) rather than i.i.d. graph sequences as in [12]-[15],
and additive and... | B |
Comparing to generalization, bucketization technique [33, 18] maintains excellent information utility because it preserves all the original QI values. However, most existing approaches cannot prevent identity disclosure, and the existence of individuals in published table is likely to be disclosed [27]. Furthermore, t... |
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics... | Note that, the application scenarios of differential privacy and the models of k𝑘kitalic_k-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of k𝑘kitalic_k-anonymity family sanitize the origi... | In recent years, the massive digital information of individuals has been collected by numerous organizations. The data holders, also known as curators, use the data for data mining tasks, meanwhile they also exchange or publish microdata for further comprehensive research. However, the publication of microdata poses cr... | Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ... | D |
We implement PointRend using MMDetection Chen et al. (2019b) and adopt the modifications and tricks mentioned in Section 3.3. Both X101-64x4d and Res2Net101 Gao et al. (2019) are used as our backbones, pretrained on ImageNet only. SGD with momentum 0.9 and weight decay 1e-4 is adopted. The initial learning rate is set... | Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | Table 3: PointRend’s performance on testing set (trackB). “EnrichFeat” means enhance the feature representation of coarse mask head and point head by increasing the number of fully-connected layers or its hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvements, we guess... | As shown in Table 3, all PointRend models achieve promising performance. Even without ensemble, our PointRend baseline, which yields 77.38 mAP, has already achieved 1st place on the test leaderboard. Note that several attempts, like BFP Pang et al. (2019) and EnrichFeat, give no improvements against PointRend baseline,... | D |
(0log0:=0assign0000\log 0:=00 roman_log 0 := 0). The base of the log\logroman_log does not really matter here. For concreteness we take the log\logroman_log to base 2222. Note that if f𝑓fitalic_f has L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT norm 1111 then the sequence {|f^(A)|2}A⊆[n]subsc... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma s... |
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... |
where for A⊆[n]𝐴delimited-[]𝑛A\subseteq[n]italic_A ⊆ [ italic_n ], |A|𝐴|A|| italic_A | denotes the cardinality of A𝐴Aitalic_A. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info... | A |
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inh... |
After showing the action-value function estimate is the optimistic upper bound of the optimal action-value function, we can derive the dynamic regret bound within one epoch via recursive regret decomposition. The dynamic regret within one epoch for Algorithm 1 with the knowledge of B𝜽,ℰsubscript𝐵𝜽ℰB_{\bm{\theta},\m... |
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202... |
In practice, the transition function ℙℙ\mathbb{P}blackboard_P is unknown, and the state space might be so large that it is impossible for the learner to fully explore all states. If we parametrize the action-value function in a linear form as ⟨ϕ(⋅,⋅),𝒘⟩bold-italic-ϕ⋅⋅𝒘\langle\bm{\phi}(\cdot,\cdot),\bm{w}\rangle⟨ bo... |
Our proposed algorithm LSVI-UCB-Restart has two key ingredients: least-squares value iteration with upper confidence bound to properly handle the exploration-exploitation trade-off (Jin et al., 2020), and restart strategy to adapt to the unknown nonstationarity. Our algorithm is summarized in Algorithm 1. From a high-... | D |
End of preview. Expand in Data Studio
README.md exists but content is empty.
- Downloads last month
- 3