Dataset Viewer
Auto-converted to Parquet
Columns: context (string, 250–7.19k chars) · A (string, 250–3.38k) · B (string, 250–5.14k) · C (string, 250–5.47k) · D (string, 250–5.47k) · label (string, 4 classes)
…\Big] + \frac{(n-m)(n-m-2)(D+n+m)(D+n+m+2)}{8(D+2m)(D+2m+2)}\,x^{4} + \cdots\Big].
x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x) + x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x)...
R_{n}^{m}(x)=(-1)^{(n-m)/2}\,x^{m}\,P_{(n-m)/2}^{(m+1-D/2,\,0)}(1-2x^{2})=\binom{n+1-D/2}{(n-m)/2}\,x^{m}\,G_{-a}(2+m-D/2,\,2+m-D/2,\,x^{2})...
x^{3}(x^{2}-1)^{2}\frac{d^{3}}{dx^{3}}R_{n}^{m}(x)+\cdots+\Big\{\cdots\big[\cdots^{2}-m^{2}\big]x^{2}+D^{2}+D(m-1)-2m+m^{2}\Big\}\frac{d}{dx}R_{n}^{m}(x)...
R_{n}^{m}(x)=(-1)^{(n-m)/2}\binom{(D+m+n)/2\,\cdots}{\cdots}\left(\begin{array}{c}\cdots\\ m+D/2\end{array}\,\middle|\;x^{2}\right),...
B
For example, computing the Bruhat decomposition of a random matrix in GL(250, 2) resulted in an SLP of length 353 969. During the evaluation, our MSLP required 32 memory slots and it was easily possible to evaluate this MSLP on the standard generators of...
does not yield an upper bound for the memory requirement in a theoretical analysis. Moreover, the result of SlotUsagePattern improves the memory usage but it is not necessarily optimized overall and, hence, the number of slots can still be greater than the number of slots of a carefully computed MSLP. It should also be...
This adds only one extra MSLP instruction, in order to form and store the element xv⁻¹ needed in the conjugate on the right-hand side of (2) (this element can later be overwritten and so does not add to the overall maximum memory quo...
The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
We note that after applying the function SlotUsagePattern, the resulting SLP only required 12 memory slots and could be evaluated in the same time as our MSLP. This is due to the fact that SlotUsagePattern was handed a well-designed SLP. When faced with an SLP not designed to be memory efficient, one might not ex...
D
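The memory-slot bookkeeping described in the row above can be illustrated with a toy straight-line program evaluator. This is a hypothetical instruction format, not the GAP SlotUsagePattern machinery the text refers to: each instruction multiplies two slots and stores the result in a slot, so slots can be overwritten and reused, which is what keeps the maximum memory requirement small.

```python
# Toy evaluator for a memory-restricted straight-line program (MSLP).
# An instruction (i, j, k) multiplies slot i by slot j and stores the
# result in slot k; overwriting slots keeps the slot count low.

def evaluate_mslp(instructions, memory):
    """Evaluate an MSLP over a fixed list of memory slots."""
    memory = list(memory)
    for i, j, k in instructions:
        memory[k] = memory[i] * memory[j]
    return memory

# Compute x^8 from x with only 2 slots via repeated squaring.
program = [(0, 0, 1),   # slot1 = x^2
           (1, 1, 1),   # slot1 = x^4  (slot1 overwritten)
           (1, 1, 1)]   # slot1 = x^8
result = evaluate_mslp(program, [3, 1])
print(result[1])  # 3^8 = 6561
```

The same evaluator works over any multiplicative structure (e.g. matrix groups), since it only relies on `*`.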
The idea of using exponential decay to localize global problems was already considered in the interesting approach developed under the name of Localized Orthogonal Decomposition (LOD) [MR2831590, MR3591945, MR3246801, MR3552482], which is related to ideas of Variational Multiscale Methods [MR1660141, MR2300286]. In the...
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov...
The key to approximating (25) is the exponential decay of Pw, as long as w ∈ H¹(𝒯_H) has local support. That al...
The main bottleneck in dealing with high-contrast coefficients is that the decay is slower, albeit still exponential, forcing the use of larger patches. To deal with this situation, we use a subspace of Λ̃_h^f...
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ...
D
We think Alg-A is better in almost every aspect because it is essentially simpler. Among other merits, Alg-A is much faster: it has a smaller constant behind the asymptotic complexity O(n) than the others:
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on P’s vertices and (2) searching for the next candidate from a given one is much easier: the ratio of code length for this task is 1:7 between Alg-A and Alg-CM.
Alg-A computes at most n candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most 5n triangles (proved in [8]), and so does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly 4.66n candidate triangles.)
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases.
B
We consider two types of Ensemble Features: features accumulating crowd wisdom and an averaging feature for the Tweet Credit Scores. The former are extracted at the surface level while the latter comes from the low-dimensional level of tweet embeddings, which in a way augments the sparse crowd at an early stage.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We counter this by debunking at the single-tweet le...
For analyzing the employed features, we rank them by importance using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of new...
CrowdWisdom: Similar to [18], the core idea is to leverage the public’s common sense for rumor detection: If there are more people denying or doubting the truth of an event, this event is more likely to be a rumor. For this purpose,  [18] use an extensive list of bipolar sentiments with a set of combinational rules. In...
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text content, which is the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha...
C
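The feature-ranking step mentioned in the row above (ranking features by importance with a random forest) can be sketched with scikit-learn. The data and feature names here are synthetic stand-ins, not the paper's actual feature set:

```python
# Sketch: rank features by random-forest importance, as in the
# RF-based feature ranking described above (synthetic data; the
# feature names are illustrative assumptions).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5,
                           n_informative=3, random_state=0)
names = ["polarity", "crowd_wisdom", "credit_score", "hashtags", "length"]

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranked = sorted(zip(names, rf.feature_importances_),
                key=lambda t: t[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

`feature_importances_` is normalized to sum to 1, so scores are directly comparable across features.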
…′(u) = 0 (lim_{u→∞} ℓ(u) = lim_{u→∞} ℓ′(u) = 0), a β-smooth function, i.e. its derivative is β-Lipsh...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a...
Assumption 1 includes many common loss functions, including the logistic and the exp-loss² (²The exp-loss does not have a global β smoothness parameter. However, if we initialize with η < 1/ℒ(w(0)) then it is straightforward to...
loss function (Assumption 1) with an exponential tail (Assumption 3), any stepsize η < 2β⁻¹σ_max⁻²(X) ...
C
For the evaluation, we shuffle the 180 selected events and split them into 10 subsets which are used for 10-fold cross-validation (we make sure to include near-balanced folds in our shuffle). For the experiments, we implement the 3 non-neural network models with the Scikit-learn library¹¹ (scikit-learn.org/). Furthermore,...
The results of the tested models are shown in Table 3. The best performance is achieved by the CNN+LSTM model with a good accuracy of 81.19%. The non-neural network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy and the other two non-neural models are even worse. So the cl...
The performance of the user features is similar to that of the Twitter features; both are quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours in the user feature group is UserTweetsPerDays, and it is the best feature overall in the first 4 hours, but its rank decreases with ...
As we can see in Figure 9, the best result on average over 48 hours is the BestSet. The second best is All features. Apart from those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we ...
For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even...
A
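The evaluation protocol in the row above (shuffling 180 events into 10 near-balanced folds) can be sketched with scikit-learn's stratified cross-validation. The data here is a synthetic stand-in, since the paper's events are not part of this excerpt:

```python
# Sketch of 10-fold cross-validation with near-balanced (stratified)
# folds, as described above (synthetic stand-in for the 180 events).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=180, n_features=10, random_state=0)

# shuffle=True shuffles the events before splitting; stratification
# keeps the class balance similar in every fold.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)
print(f"mean accuracy over 10 folds: {scores.mean():.3f}")
```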
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather them from the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during, and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ...
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, SVM_all, with all features, gives a rather stable performance for both NDCG and Recall...
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather them from the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res...
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event, which is driven by a great variety of factors. We address the two major factors that are assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we...
A
In this case, the agent must sequentially learn both the underlying dynamics (L_a, Σ_a; ∀a) and the conditional reward function’s variance ...
We now describe in detail how to use the SMC-based posterior random measure p_M(θ_{t+1,a} | ℋ_{1:t})...
If the support of q(·) includes the support of the distribution of interest p(·), one computes the IS estimator of a test function based on the normalized weights w^(m),
We observe noticeable (almost linear) regret increases when the dynamics of the parameters swap the identity of the optimal arm. However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters,
For the more interesting case of unknown parameters, we marginalize the parameters L_a and Σ_a of the transition distributions
C
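The normalized-weight construction in the row above is the standard self-normalized importance-sampling estimator: draw from a proposal q, weight each draw by p/q, normalize the weights, and average the test function. A minimal sketch with a Gaussian target and a wider Gaussian proposal (both chosen here for illustration):

```python
# Self-normalized importance sampling: estimate E_p[x] for target
# p = N(1, 1) using samples from the proposal q = N(0, 2^2).
import numpy as np

rng = np.random.default_rng(0)
M = 100_000
x = rng.normal(0.0, 2.0, size=M)             # samples from q

# Unnormalized log-densities (shared constants cancel after
# normalization, so they are dropped).
log_p = -0.5 * (x - 1.0) ** 2                # target N(1, 1)
log_q = -0.5 * (x / 2.0) ** 2 - np.log(2.0)  # proposal N(0, 4)

w = np.exp(log_p - log_q)
w /= w.sum()                                  # normalized weights w^(m)

estimate = np.sum(w * x)                      # IS estimate of E_p[x] ≈ 1
print(estimate)
```

Because the proposal is wider than the target, the weights stay bounded and the estimator is well behaved; support coverage of q over p is exactly the condition quoted above.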
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i...
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal...
For example, the correlation between blood glucose and carbohydrate for patient 14 was highest (0.47) at no lagging time step (ref. 23(c)), whereas the correlation between blood glucose and insulin was highest (0.28) with lagging time = 4 (ref. 24(d)).
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients. For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t...
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i...
A
Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone...
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba...
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met...
Our proposed encoder-decoder model clearly demonstrated competitive performance on two datasets towards visual saliency prediction. The ASPP module incorporated multi-scale information and global context based on semantic feature representations, which significantly improved the results both qualitatively and quantita...
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer...
B
Even though the reduction from MinLoc to MinPathwidth yields an O(√(log opt) · log n)-approximation algorithm for MinLoc, it is also important to directly investigate ...
We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21,...
Expecting an improvement of cutwidth approximation – a heavily researched area – by translating the problem into a string problem and then investigating the approximability of this string problem seems naive. This makes it even more surprising that linking cutwidth with pathwidth via the locality number is in fact hel...
One of the main results of this section is a reduction from the problem of computing the locality number of a word α to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr...
Even though the reduction from MinLoc to MinPathwidth yields an O(√(log opt) · log n)-approximation algorithm for MinLoc, it is also important to directly investigate ...
B
They applied selective data sampling on a CNN which increased the speed of the training by dynamically selecting misclassified negative samples during training. Weights are assigned to the training samples and informative samples are included in the next training iteration.
First, optimal paths in a computed flow field are found and then a CNN classifier is used for removing extraneous paths in the detected centerlines. The method was enhanced using a model-based detection of coronary specific territories and main branches to constrain the search space.
They applied the mean of a series of Gabor filters with varying frequencies and sigma values to the output of the network to determine whether a pixel represents a vessel or not. Besides finding that the optimal filters vary between channels, the authors also state the ‘need’ to enforce the networks to align with hum...
A graph was then constructed from the retinal vascular network where the nodes are defined as the vessel branches and each edge gets associated to a cost that evaluates whether the two branches should have the same label. The CNN classification was propagated through the minimum spanning tree of the graph.
The arrows at the bottom denote the flow of the backpropagation starting after the calculation of the loss using the cost function J, the original output y and the predicted output ŷ. This loss is backpropagated through the filters of the network adjustin...
C
We focused our work on learning games with 100K interaction steps with the environment. In this section we present additional results for settings with 20K, 50K, 200K, 500K and 1M interactions; see Figure 5 (a). Our results are poor with 20K interactions. For 50K th...
The results in these figures are generated by averaging 5 runs for each game. The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5 runs was often significantly better. For 6 of the games, it exceeds the average human score (...
In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in the low data regime of 100k samples, on more than half of the games, our method achieves a score...
Finally, we verified whether a model obtained with SimPLe using 100K is a useful initialization for model-free PPO training. Based on the results depicted in Figure 5 (b) we can positively answer this conjecture. Lower asymptotic performance is probably due to worse exploration. A policy pre-trained with SimPLe was...
This demonstrates that SimPLe excels in a low data regime, but its advantage disappears with a bigger amount of data. Such behavior, with fast growth at the beginning of training but lower asymptotic performance, is commonly observed when comparing model-based and model-free methods (Wang et al., 2019). As observed ...
D
However, more work needs to be done to fully replace non-trainable S2Is, not only in terms of achieving higher accuracy but also in increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level...
Future work could include testing this hypothesis by initializing a ‘base model’ using transfer learning or other initialization methods. Moreover, trainable S2Is and 1D ‘base model’ variations could also be used for other physiological signals besides EEG such as Electrocardiography, Electromyography and Galvanic Skin...
This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for ...
However, more work needs to be done to fully replace non-trainable S2Is, not only in terms of achieving higher accuracy but also in increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level...
For the purposes of this paper and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para...
A
The paper’s organization is as follows. The second section presents the two main locomotion methods employed by the Cricket robot, rolling and walking, along with a description of two gaits designed for negotiating steps. In the third section, we outline the mathematical framework used for quantifying energy expenditur...
This section describes the primary locomotion modes, rolling and walking locomotion of our hybrid track-legged robot named Cricket shown in Fig. 2. It also introduces two proposed gaits designed specifically for step negotiation in quadrupedal wheel/track-legged robots.
The paper’s organization is as follows. The second section presents the two main locomotion methods employed by the Cricket robot, rolling and walking, along with a description of two gaits designed for negotiating steps. In the third section, we outline the mathematical framework used for quantifying energy expenditur...
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ...
This paper introduced an energy-centric method for automatic transitioning between locomotion modes in quadruped track-legged robots during step negotiation. Exhibiting flexibility, our methodology could be applied to a diverse range of wheel/track-legged robots, deriving transition thresholds from energy assessments d...
A
Suppose that you have an investment account with a significant amount in it, and that your financial institution advises you periodically on investments. One day, your banker informs you that company X will soon receive a big boost, and advises to use the entire account to buy stocks. If you were to completely trust th...
In future work, we would like to expand the model so as to incorporate the concept of advice error into the analysis. More specifically, given an advice string of size k, let η denote the number of erroneous bits (which may not be known to the algorithm). In this setting, the objective would...
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ...
In this work we focus on the online computation with advice. Our motivation stems from observing that, unlike the real world, the advice under the known models is often closer to “fiat” than “recommendation”. Our objective is to propose a model which allows the possibility of incorrect advice, with the objective of ob...
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with a larger number of ...
C
In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical details and we will study how the local and global value of a term is actual...
Note that this allows us to compare words across different categories since their values are all normalized in relation to stop words, which should have a similar frequency across all the categories¹¹ (¹¹Note that we are assuming here that we are working with textual information in which there exist highly frequent ele...
This subsection describes how classification is carried out. However, before we illustrate the overall process, and for the sake of simplicity, we are going to assume there exists a function gv(w, c) to value words in relation to categories, whose formal defini...
In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical details and we will study how the local and global value of a term is actual...
In Subsection 4.2 we will introduce the time-aware metric used to evaluate the effectiveness of the classifiers, in relation to the time taken to make the decision. Finally, Subsection 4.4 describes the different types of experiments carried out and the obtained results.
B
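The assumed word-valuation function gv(w, c) from the row above can be made concrete with a toy stand-in: value a word by its frequency in a category relative to the category's most frequent (stop-word-like) term, so values are comparable across categories. This is an illustrative sketch, not the SS3 paper's actual formula:

```python
# Toy stand-in for gv(w, c): word value in [0, 1], normalized by the
# category's most frequent term (illustrative corpus and formula).
from collections import Counter

docs = {
    "sports": "goal match goal team the the the".split(),
    "tech": "code bug code release the the the".split(),
}
freqs = {c: Counter(words) for c, words in docs.items()}

def gv(w, c):
    """Frequency of w in category c relative to c's top term."""
    top = freqs[c].most_common(1)[0][1]
    return freqs[c][w] / top

print(round(gv("goal", "sports"), 2))  # 2/3 -> 0.67
```

Because every category is normalized by its own dominant term, a stop word like "the" gets value 1.0 everywhere, which is what makes cross-category comparison meaningful.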
Stochastic gradient descent (SGD) and its variants (Robbins and Monro, 1951; Bottou, 2010; Johnson and Zhang, 2013; Zhao et al., 2018, 2020, 2021) have been the dominant optimization methods for solving (1). In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameter...
Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model trai...
With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training. These methods can be implemented on distributed frameworks like parameter server and al...
Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-red...
GMC can be easily implemented on the all-reduce distributed framework in which each worker sends the sparsified vector 𝒞(e_{t+1/2, k})...
B
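The MSGD update underlying the row above can be sketched in a few lines of plain heavy-ball momentum: a momentum buffer accumulates the stochastic gradient and is then applied to the parameters. The hyperparameters and the toy quadratic objective are illustrative, not the paper's settings:

```python
# Minimal momentum-SGD (heavy-ball) step: the buffer u accumulates
# gradients, and the parameter moves along the buffer.
def msgd_step(w, u, grad, lr=0.1, beta=0.9):
    """One MSGD step; returns updated (parameter, momentum buffer)."""
    u = beta * u + grad   # accumulate momentum
    w = w - lr * u        # parameter update
    return w, u

# Minimize f(w) = w^2 (exact gradient 2w stands in for the
# stochastic gradient of the text).
w, u = 5.0, 0.0
for _ in range(300):
    w, u = msgd_step(w, u, grad=2 * w)
print(abs(w) < 1e-2)  # converged near the minimum
```

In the distributed (DMSGD) setting discussed above, `grad` would be the aggregated stochastic gradient from the workers; the update rule itself is unchanged.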
An advantage of SANs compared to Sparse Autoencoders [37] is that the constraint of activation proximity can be applied individually to each example instead of requiring a forward pass over all examples. Additionally, SANs create exact zeros instead of near-zeros, which reduces co-adaptation between instance...
Previous work by Blier et al. [31] demonstrated the ability of DNNs to losslessly compress the input data and the weights, but without considering the number of non-zero activations. In this work we relax the lossless requirement and also consider neural networks purely as function approximators instead of probabilist ...
Regarding the φ metric and considering Eq. 17, our target is to estimate as accurate a representation of x as possible through α^(i) and w^(i)...
Olshausen et al. [43] presented an objective function that considers subjective measures of sparseness of the activation maps; in this work, however, we use the direct measure of compression ratio. Previous work [44] has used a weighted combination of the number of neurons, percentage root-mean-squared difference and...
φ could be seen as an alternative formalization of Occam’s razor [38] to Solomonoff’s theory of inductive inference [39], but with a deterministic interpretation instead of a probabilistic one. The cost of the description of the data could be seen as proportional to the number of weights and the number o...
D
Game theory provides an efficient tool for cooperation through resource allocation and sharing [20][21]. A computation offloading game has been designed in order to balance the UAV’s tradeoff between execution time and energy consumption [25]. A sub-modular game is adopted in the scheduling of beaconing periods fo...
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change its strategy per iteration based on the current game state, and then another UAV ch...
In the literature, most works search for PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, this algorithm has limitations. In BLLA, each UAV can calculate and predict its utility for any $s_i \in S_i$...
Compared with other algorithms, the novel SPBLLA algorithm has advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely used algorithm, LLA, is an ideal method for NE approachin...
Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit...
C
as $\widehat{\overline{\nabla}}\cdot\widehat{\mathbf{P}}$, where $\widehat{\mathbf{P}}=\overline{\widehat{\nabla}}\,\overline{U}$...
$= -\overline{\mathbf{v}}\cdot\left(\overline{\overline{\nabla}}\,\overline{\psi}\right)+\overline{\eta}\left(\overline{\overline{\Delta^{*}}}\,\overline{\psi}\right)$...
The $\overline{\overline{\Delta}}$ and $\overline{\overline{\Delta^{*}}}$
$\overline{dV}^{T}*\left[\overline{\overline{\Delta}}\,\overline{U}\right]$
and $\overline{U}=\overline{\eta}\left(\overline{\overline{\Delta^{*}}}\,\overline{\psi}\right)$...
C
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_A, x_A) = 1_A$...
$f_A(u,v)=f_B(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\ v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$
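A minimal executable sketch of this piecewise comparability function. The intermediate lattice values $a$ and $b$ are abstract in the text; the numbers below are illustrative placeholders, with Python's `None` standing in for null:

```python
# Placeholder scores for the abstract lattice values a and b (assumptions).
A_SCORE = 0.5   # a: both values present but different
B_SCORE = 0.25  # b: both values are null

def comparability(u, v, a=A_SCORE, b=B_SCORE):
    """f_A(u, v) = f_B(u, v) as defined above; None plays the role of null."""
    if u == v and u is not None:
        return 1          # equal, non-null values
    if u is not None and v is not None and u != v:
        return a          # both present, but different
    if u is None and v is None:
        return b          # both missing
    return 0              # exactly one value missing

print(comparability("x", "x"))   # 1
print(comparability("x", "y"))   # 0.5
print(comparability(None, None)) # 0.25
print(comparability("x", None))  # 0
```

Note that reflexivity holds only for non-null values here, matching the relaxation discussed below.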
Intuitively, if an abstract value $x_A$ of $\mathcal{L}_A$ is interpreted as $1$ (i.e., equality) by $h_A$...
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_A, x_A) = 1_A$...
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
D
Figure 6 shows the loss metrics of the three algorithms in the CartPole environment, which implies that the Dropout-DQN methods introduce more accurate gradient estimation of policies through iterations of different learning trials than DQN. The rate of convergence of one of the Dropout-DQN methods has done more iterations t...
In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e...
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout, because in such an environment the optim...
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene...
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies with less variability, as the reduced standard deviation across the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and aft...
A
Table 3: A summary of medical image segmentation papers along with their type of proposed improvement. * indicates the count at the highest level. For example, if a paper reports counts of patients, volumes, slices, etc., we report the count of patients.
Table 2 lists a summary of selected papers from this review, the nature of their proposed contributions, and the datasets that they were evaluated on. For the papers that evaluated their models on the PASCAL VOC 2012 dataset (Everingham et al., 2012), one of the most popular image semantic segmentation datasets for natu...
PASCAL Context: The PASCAL Context dataset (Mottaghi et al., 2014) extended the PASCAL VOC 2010 Challenge dataset by providing pixel-wise annotations for the images, resulting in a much larger dataset with 19,740 annotated images and labels belonging to 540 categories.
Kim and Hwang (2016) proposed a weakly supervised semantic segmentation network using unpooling and deconvolution operations, used feature maps from the deconvolution layers to learn scale-invariant features, and evaluated their model on the PASCAL VOC and chest X-ray image datasets. Lee et al. (2019) used dropout...
Pascal VOC datasets: The PASCAL Visual Object Classes (VOC) Challenge (Everingham et al., 2010) was an annual challenge that ran from 2005 through 2012 and had annotations for several tasks such as classification, detection, and segmentation. The segmentation task was first introduced in the 2007 challenge and featured...
D
As a result the graph collapses, becoming densely connected and losing its original structure. On the other hand, topological pooling methods can preserve the graph structure by operating on the whole adjacency matrix at once to compute the coarsened graphs and are not affected by uninformative node features.
Then, we train a simple classifier consisting of a word embedding layer [53] of size 200, followed by a dense layer with a ReLU activation, a dropout layer [54] with probability 0.5, and a dense layer with sigmoid activation. After training, we extract the embedding vector of each word in the vocabulary and construct a...
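The classifier above can be sketched as a toy forward pass. Only the embedding size (200) and the dropout probability (0.5) come from the text; the vocabulary size, hidden width, and weights below are illustrative placeholders:

```python
import math
import random

# Word embedding (size 200) -> dense + ReLU -> dropout(0.5) -> dense + sigmoid.
rng = random.Random(0)
VOCAB, EMB, HIDDEN = 1000, 200, 16  # VOCAB and HIDDEN are assumptions

E = [[rng.gauss(0, 1) for _ in range(EMB)] for _ in range(VOCAB)]      # embedding table
W1 = [[rng.gauss(0, 0.01) for _ in range(HIDDEN)] for _ in range(EMB)] # dense layer 1
W2 = [rng.gauss(0, 0.01) for _ in range(HIDDEN)]                       # dense layer 2

def forward(word_ids, train=False):
    # average the embeddings of the words in the review
    x = [sum(E[w][d] for w in word_ids) / len(word_ids) for d in range(EMB)]
    # dense + ReLU
    h = [max(0.0, sum(x[d] * W1[d][j] for d in range(EMB))) for j in range(HIDDEN)]
    if train:  # inverted dropout with p = 0.5
        h = [v * 2.0 if rng.random() >= 0.5 else 0.0 for v in h]
    # dense + sigmoid -> probability-like score
    z = sum(h[j] * W2[j] for j in range(HIDDEN))
    return 1.0 / (1.0 + math.exp(-z))

p = forward([1, 5, 42])
print(0.0 < p < 1.0)  # True
```

After training such a model, the rows of `E` would be the word vectors used to build the word-similarity graph.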
Interestingly, the GNNs configured with GRACLUS and NDP always achieve better results than the Dense network, even if the latter generates the word embeddings used to build the graph on which the GNN operates. This can be explained by the fact that the Dense network immediately overfits the dataset, whereas the graph s...
We use a graph that encodes the similarity of all words in the vocabulary. Each graph signal represents a review and consists of a binary vector whose size equals the vocabulary size, which takes the value 1 for each word that appears at least once in the review, and 0 otherwise.
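A minimal sketch of building such a binary graph signal; the vocabulary and review are invented for illustration:

```python
# Hypothetical vocabulary; in the paper this is the vocabulary of the corpus.
vocab = ["bad", "good", "great", "movie", "plot", "boring"]
word_index = {w: i for i, w in enumerate(vocab)}

def review_to_signal(review_tokens, word_index):
    """Binary vector over the vocabulary: 1 iff the word occurs in the review."""
    signal = [0] * len(word_index)
    for tok in review_tokens:
        if tok in word_index:
            signal[word_index[tok]] = 1
    return signal

print(review_to_signal(["great", "movie", "great", "plot"], word_index))
# -> [0, 0, 1, 1, 1, 0]
```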
We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges. Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yield...
C
Sparse connectivity maintains the tree structures and has fewer weights to train. In practice, sparse weights require a special differentiable implementation, which can drastically decrease performance, especially when training on a GPU. Full connectivity optimizes all parameters of the fully connected network. Massice...
In this work, we present an imitation learning approach to generate neural networks from random forests, which results in very efficient models. We introduce a method for generating training data from a random forest that creates any amount of input-target pairs. With this data, a neural network is trained to imitate t...
These techniques, however, are only applicable to trees of limited depth. As the number of nodes grows exponentially with the increasing depth of the trees, inefficient representations are created, causing extremely high memory consumption. In this work, we address this issue by proposing an imitation learning-based me...
For training, we generate input-target pairs $(x, y)$ as described in the last section. These training examples are fed into the training process to teach the network to predict the same results as the random forest. To avoid overfitting, the data is generated on-the-fly so that each traini...
The number of parameters of the networks becomes enormous as the number of nodes grows exponentially with the increasing depth of the decision trees. Additionally, many weights are set to zero so that an inefficient representation is created. Due to both reasons, the mappings do not scale and are only applicable to sim...
B
Theoretically, we establish the sample efficiency of OPPO in an episodic setting of Markov decision processes (MDPs) with full-information feedback, where the transition dynamics are linear in features (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020). In particular, we allow the trans...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
Moreover, we prove that, even when the reward functions are adversarially chosen across the episodes, OPPO attains the same regret in terms of competing with the globally optimal policy in hindsight (Cesa-Bianchi and Lugosi, 2006; Bubeck and Cesa-Bianchi, 2012). In comparison, existing algorithms based on value iterati...
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
B
Inspired by ResNets whose skip connections have shown to reduce the vanishing gradient problem, densely connected CNNs (DenseNets) introduced by Huang et al. (2017) drive this idea even further by connecting each layer to all previous layers. DenseNets are conceptually very similar to ResNets—instead of adding the outp...
This reduces the number of features at the transition drastically, and by having the same number of channels as there are classes, it can also be used to completely remove fully connected layers. Secondly, they used $1\times 1$ convolutions with weight kernels $\mathbf{W}\in\mathbb{R}^{1\times 1\times C\times D}$...
We use a DenseNet architecture (Huang et al., 2017) consisting of 100 layers with bottleneck and compression layers, i.e., a DenseNet-BC-100. We select the default growth rate of $k=12$ for the model, i.e., the number of feature maps added per layer.
The spatial size of features detected within an image is bounded by the receptive field, i.e., the section of the input image that influences the value of a particular spatial location in some hidden layer. The receptive field is increased by stacking multiple convolutional layers, e.g., performing two consecutive $3\times 3$...
Since this stacking necessarily increases the number of feature maps with each layer, the number of new feature maps computed by each layer is typically small. Furthermore, it is proposed to use compression layers after downscaling the spatial dimension with pooling, i.e., a $1\times 1$ convolution is used to r...
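The channel bookkeeping of dense blocks and compression layers can be sketched as follows. Only the growth rate $k=12$ and the idea of halving compression come from the text; the stem width and block sizes are illustrative assumptions:

```python
# Each layer of a dense block concatenates k = 12 new feature maps onto
# all previous ones; a transition (compression) layer then halves channels.
def dense_block_channels(c_in, n_layers, k=12):
    # every one of the n_layers adds k feature maps via concatenation
    return c_in + n_layers * k

def compress(c, theta=0.5):
    # 1x1 convolution reducing channels by the compression factor theta
    return int(c * theta)

c = 24                                  # assumed stem width (2 * k)
for i, n_layers in enumerate((16, 16, 16)):  # assumed three blocks of 16 layers
    c = dense_block_channels(c, n_layers)
    if i < 2:                           # transitions sit between blocks only
        c = compress(c)
print("channels entering the classifier:", c)  # 342
```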
D
Since $\mathrm{VR}_r(X)\subseteq\mathrm{VR}_s(X)$ for all $0<r\leq s$, this construction then naturally...
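The nesting of Vietoris-Rips complexes by scale can be checked concretely on the 1-skeleton; a small sketch with a made-up point set, where an edge enters the complex once its endpoints are within the scale parameter:

```python
from itertools import combinations

# Illustrative point set; any finite metric space works the same way.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.5, 2.5)]

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def vr_edges(points, scale):
    """Edges of the Vietoris-Rips complex at the given scale."""
    return {frozenset((i, j))
            for i, j in combinations(range(len(points)), 2)
            if dist(points[i], points[j]) <= scale}

e_small, e_big = vr_edges(points, 1.0), vr_edges(points, 2.0)
print(e_small <= e_big)  # True: VR_r(X) is contained in VR_s(X) for r <= s
```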
In particular, one can apply the homology functor to the Vietoris-Rips filtration of a metric space $X$. This induces a persistence module (with $T=\mathbb{R}_{>0}$) where the morphisms are those induced by inclusions. As a...
The persistent homology of the Vietoris-Rips filtration of a metric space provides a functorial way (footnote 1: where for metric spaces $X$ and $Y$ morphisms are given by $1$-Lipschitz maps $\phi:X\rightarrow Y$, and for persistence modules $V_*$...
One main contribution of this paper is establishing a precise relationship (i.e. a filtered homotopy equivalence) between the Vietoris-Rips simplicial filtration of a metric space and a more geometric (or extrinsic) way of assigning a persistence module to a metric space, which consists of first isometrically embedding...
The notion of persistent homology arose from work by Frosini, Ferri, and Landi [40, 41], Robins [74], and Edelsbrunner [27, 37] and collaborators. After that, considering the persistent homology of the simplicial filtration induced from Vietoris-Rips complexes was a natural next step. For example, Carlsson and de Silv...
D
Nevertheless, while it may not reflect reality in the same way as, e.g., a large-scale field study performed with real-world experts in their actual working environment [81], the positive results from the study showed that our approach is promising and deserves to be developed and tested further, which will be done in ...
Overall Accuracy   We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q...
The second option of the Visual Mapping panel, the Remaining Cost, indicates (in the points' sizes, by default) the final value of $KLD(P_i \| Q_i)$...
The remaining costs are one aspect of estimating the projection quality. This means that projected points with high remaining costs can be moved by an additional optimization step. Akin to this idea, t-viSNE might show a preview of the data points in the next optimization step. In consequence, users could determine whe...
After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main ...
C
The first analysis focuses on taxonomies. Specifically, we provide several recommendations to improve research practices in this area. The growing number of nature-inspired proposals could be seen as a symptom of the active status of this field; however, its sharp evolution suggests that research efforts should be als...
Both the taxonomies and the analysis provide a full overview of the situation of the bio-inspired optimization field. Moreover, Figure 1 reflects the research interest in this field, as the number of papers continues to grow. We believe that it is essential to highlight and reflect on what is expected ...
The above statement is quantitatively supported by Figure 1, which depicts the increasing number of papers/book chapters published in the last years with bio-inspired optimization and nature-inspired optimization in their title, abstract and/or keywords. We have considered both bio-inspired and nature-inspired optimiz...
The second analysis delves into a critical perspective on bio-inspired optimization. It discusses the strengths, weaknesses, and challenges that have been identified in the field in recent years, while it also highlights the potential held for future developments in bio-inspired optimization.
The rest of this paper is organized as follows. In Section 2, we examine previous surveys, taxonomies, and reviews of nature- and bio-inspired algorithms reported so far in the literature. Section 3 delves into the taxonomy based on the inspiration of the algorithms. In Section 4, we present and populate the taxonomy b...
C
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they can handle non-Euclidean data, which $k$-means cannot. Therefore,...
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they can handle non-Euclidean data, which $k$-means cannot. Therefore,...
Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node while the latter learns to distinguish whether an edge exists between two nodes directly. In recent y...
Network embedding is a fundamental task for graph type data such as recommendation systems, social networks, etc. The goal is to map nodes of a given graph into latent features (namely embedding) such that the learned embedding can be utilized on node classification, node clustering, and link prediction.
To apply graph convolution on unsupervised learning, GAE is proposed [20]. GAE firstly transforms each node into latent representation (i.e., embedding) via GCN, and then aims to reconstruct some part of the input. GAEs proposed in [20, 29, 22] intend to reconstruct the adjacency via decoder while GAEs developed in [21...
C
These findings show that SMap offers benefits over the existing methods, providing better coverage of the ASes in the Internet and not requiring agents or conditions for obtaining traceroute loops, hence improving visibility of networks not enforcing ingress filtering.
We also want to understand the types of networks that we could test via domains-wide scans. To derive the business types we use the PeeringDB. We classify the ASes according to the following business types: content, enterprise, Network Service Provider (NSP), Cable/DSL/ISP, non-profit, educational/research, route serve...
There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th...
In order to understand if there are differences in enforcement of ingress filtering between different network types and different countries, we perform characterisation of the networks that we found to not be filtering spoofed packets. Specifically, we ask the following questions: Does business type of networks or geo-...
$\bullet$ Consent of the scanned. It is often impossible to request permission from owners of all the tested networks in advance; this challenge similarly applies to other Internet-wide studies (Lyon, 2009; Durumeric et al., 2013, 2014; Kührer et al., 2014). Like the other studies (Durumeric et al., 2013, 2014), we ...
C
More specifically, natural odors consist of complex and variable mixtures of molecules present at variable concentrations [4]. Sensor variance arises from environmental dynamics of temperature, humidity, and background chemicals, all contributing to concept drift [5], as well as sensor drift arising from modification ...
This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ...
The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer....
While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape...
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data a...
A
Third, we gave a $2^{O(\delta^{1-1/d})}n$ expected-time algorithm for random point sets.
The proof also gives a way to relate the expected running times of algorithms for any problem on two different kinds of random point sets: a version where the $x$-coordinates of the points are taken uniformly at random from $[0,n]$, and a version where the differences between two consecut...
Let $X_n$ be a random point set of $n$ points in $\mathbb{R}^d$, where the $x$-coordinates of the points are taken independently uniformly at random from...
Let $Y_n$ be a random point set of $n$ points in $\mathbb{R}^d$, where the spacings $\Delta_i = x_{i+1} - x_i$...
Random point sets. In the third scenario the points in $P$ are drawn independently and uniformly at random from the hypercylinder $[0,n]\times\mathrm{Ball}^{d-1}(\delta/2)$...
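For $d=2$ the hypercylinder degenerates to a strip, so this random model is easy to sample; a small sketch (the function name and parameters are ours):

```python
import random

# For d = 2, Ball^{d-1}(delta/2) is the interval [-delta/2, delta/2],
# so the model draws points uniformly from the strip [0, n] x [-delta/2, delta/2].
def random_strip_points(n, delta, rng=None):
    rng = rng or random.Random(0)
    return [(rng.uniform(0, n), rng.uniform(-delta / 2, delta / 2))
            for _ in range(n)]

pts = random_strip_points(100, delta=4.0)
print(len(pts), all(0 <= x <= 100 and -2 <= y <= 2 for x, y in pts))
```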
A
If $S$ and $T$ are automaton semigroups such that there exist automata for $S$ and $T$ with state sets $P$ and $Q$ respectively, and maps $\phi: P \rightarrow Q$ and $\psi: Q \rightarrow P$...
The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing ...
from one to the other, then their free product $S \star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups (footnote 1: note that the c...
In this paper, we extend the idea of the constructions used for these results in multiple directions. First, we generally consider partial automata for all of our results, i.e., we do not require the generating automaton to be complete (contrary to many other results in the literature, for example those mentioned ab...
However, there do not seem to be constructions for presenting arbitrary free products of self-similar groups in a self-similar way. For semigroups, on the other hand, such results do exist. In fact, the free product of two automaton semigroups S𝑆Sitalic_S and T𝑇Titalic_T is always at least very close to being an auto...
C
Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization Grand and Belinkov (2019); Ramakrishnan et al. (2018) or to re-scale the loss based on the difficulty o...
HINT uses a ranking loss, which penalizes the model if the pairwise rankings of the sensitivities of visual regions towards the ground-truth answers $a_{gt}$ are different from the ranks computed from the human-based attention maps.
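A hedged sketch of such a pairwise ranking loss: a hinge-style variant we write for illustration, where a region pair contributes a penalty whenever the model's sensitivity order disagrees with the human-attention order (the exact loss used by HINT may differ in details):

```python
def pairwise_ranking_loss(model_sens, human_attn, margin=0.0):
    """Hinge penalty for region pairs ranked inconsistently with human attention."""
    loss, n = 0.0, len(model_sens)
    for i in range(n):
        for j in range(n):
            if human_attn[i] > human_attn[j]:  # humans rank region i above j
                # penalize if the model's sensitivities do not preserve that order
                loss += max(0.0, margin - (model_sens[i] - model_sens[j]))
    return loss

print(pairwise_ranking_loss([0.9, 0.5, 0.1], [0.8, 0.6, 0.2]))      # 0.0, orders agree
print(pairwise_ranking_loss([0.1, 0.5, 0.9], [0.8, 0.6, 0.2]) > 0)  # True, orders flipped
```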
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende...
Both Human Importance Aware Network Tuning (HINT) Selvaraju et al. (2019) and Self Critical Reasoning (SCR) Wu and Mooney (2019), train the network to be more sensitive towards salient image regions by improving the alignment between visual cues and gradient-based sensitivity scores. HINT proposes a ranking loss betwe...
To reduce the reliance on linguistic priors, visual sensitivity enhancement methods attempt to train the model to be more sensitive to relevant visual regions when answering questions. Following Wu and Mooney (2019), we define the sensitivity of an answer $a$ with respect to a visual region $v_i$...
C
We created the PrivaSeer Corpus which is the first large scale corpus of contemporary website privacy policies and consists of just over 1 million documents. We designed a novel pipeline to build the corpus, which included web crawling, language detection, document classification, duplicate removal, document cross ver...
Figure 3 shows how the number of topics in privacy policies vary with respect to the PageRank value. The whiskers in the plot represent the 95% confidence interval of the means of the number of topics in the privacy policies in each PageRank value bin. The PageRank values were binned with a constant value of 0.25 such ...
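The binning described here can be sketched as follows; the (PageRank, topic-count) pairs are invented for illustration, and only the bin width of 0.25 comes from the text:

```python
from collections import defaultdict

# Hypothetical (pagerank, n_topics) pairs for a handful of domains.
records = [(1.1, 5), (1.2, 7), (1.6, 8), (1.7, 10), (2.3, 12)]

def bin_means(records, width=0.25):
    """Group records into PageRank bins of the given width and average the topic counts."""
    bins = defaultdict(list)
    for pagerank, n_topics in records:
        bins[int(pagerank / width)].append(n_topics)
    return {k * width: sum(v) / len(v) for k, v in sorted(bins.items())}

print(bin_means(records))  # {1.0: 6.0, 1.5: 9.0, 2.25: 12.0}
```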
Topic modelling showed the distribution of themes of privacy practices in policies, corresponding to the expectations of legal experts in some ways, but differing in others. The positive relationship between PageRank of a domain and the number of topics covered in its policy indicates that more popular domains have a s...
Topic Modelling. Topic modelling is an unsupervised machine learning method that extracts the most probable distribution of words into topics through an iterative process (Wallach, 2006). We used topic modelling to explore the distribution of themes of text in our corpus. Topic modelling using a large corpus such as P...
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ...
B
Considering all that, E3 noted that our system could be useful in solving competition problems, e.g., on Kaggle, and for her team to run tests before applying specific models to their huge data sets. Progressive VA workflows [53] could also be useful for improving the scalability of our approach for larger data sets.
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model leads us to stable design goals and analytical tasks that were realized by StackGenVis. With the c...
Figure 7: The exploration of the models’ and predictions’ spaces and the metamodel’s results. (a) presents the initial models’ space and how it can be simplified with the removal of unnecessary models. The predictions’ space is then updated, and the user is able to select instances that are not well classified by the ...
Interpretability and explainability are another challenge (mentioned by E3) in complicated ensemble methods, which is not necessarily always a problem depending on the data and the tasks. However, the utilization of user-selected weights for multiple validation metrics is one way towards interpreting and trusting the re...
Thus, it is considered an iterative process: the expert might start with the algorithms’ exploration and move to the data wrangling, or vice versa. “The former approach is even more suitable for your VA system, because you use the accuracy of the base ML models as feedback/guidance to the expert in order to understand ...
C
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of (v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that
cannot be adjacent to 2¯¯2\overline{2}over¯ start_ARG 2 end_ARG nor 3¯¯3\overline{3}over¯ start_ARG 3 end_ARG, and so f′superscript𝑓′f^{\prime}italic_f start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT is [013]delimited-[]013[013][ 013 ] or [010]delimited-[]010[010][ 010 ].
(E𝐂,(2¯,(u2,[013])))superscript𝐸𝐂¯2subscript𝑢2delimited-[]013(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))( italic_E start_POSTSUPERSCRIPT bold_C end_POSTSUPERSCRIPT , ( over¯ start_ARG 2 end_ARG , ( italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , [ 013 ] ) ) ), (E𝐂,((u1,[112]),(u2,[010])))superscript𝐸𝐂subscr...
B
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met...
The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation. Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the languag...
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance. In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy: RQ1. Since the parameter initialization lear...
B
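The rows above study MAML's meta-learned parameter initialization. A hedged, first-order sketch on 1-D quadratic tasks (each task t has loss (theta - a_t)^2; the tasks and learning rates are illustrative assumptions, not the paper's text-generation setup):

```python
# First-order MAML sketch: adapt once per task with the inner rate,
# then move the meta-initialization using the gradient at the adapted
# parameters, averaged over tasks.

alpha, beta = 0.1, 0.05   # inner / outer (meta) learning rates
tasks = [-2.0, 0.0, 2.0]  # task optima a_t
theta = 5.0               # meta-initialization

for _ in range(200):
    meta_grad = 0.0
    for a in tasks:
        adapted = theta - alpha * 2 * (theta - a)  # one inner step
        meta_grad += 2 * (adapted - a)             # first-order outer grad
    theta -= beta * meta_grad / len(tasks)

# the meta-initialization drifts toward the task mean (0.0), i.e. a point
# from which one adaptation step does well on every task
```

This illustrates the trade-off the row discusses: the initialization is optimized for post-adaptation performance, not for raw performance on any single task.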
In such mission-driven UAV networks, high-data-rate inter-UAV communications play a pivotal role. The mmWave band has abundant spectrum resources and is considered a potential avenue to support high-throughput data transmission for UAV networks [9, 10, 7]. If Line-of-Sight (LoS) propagation is available, mmWave comm...
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV da...
In such mission-driven UAV networks, high-data-rate inter-UAV communications play a pivotal role. The mmWave band has abundant spectrum resources and is considered a potential avenue to support high-throughput data transmission for UAV networks [9, 10, 7]. If Line-of-Sight (LoS) propagation is available, mmWave comm...
When considering UAV communications with UPA or ULA, a UAV is typically modeled as a point in space without considering its size and shape. Actually, the size and shape can be utilized to support a more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduce...
For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac...
C
The sentences $\textsf{PRES}_{\phi}^{\infty}$ and $\textsf{PRES}_{\phi}$ are as required by Theorem 3.7.
a Type-Behavior Partitioned Graph Vector associated to a graph representation $G_{\mathcal{A}}$ for a model $\mathcal{A}$ of $\phi$. The sentence $\textsf{PRES}_{\phi}$ ...
We can then consider the vector of subgraphs $G_{\mathcal{A},\pi}$ and $G_{\mathcal{A},\pi,\pi^{\prime}}$ ...
Note that we assume that the number of behavior functions of column $j$ in $A$ is the same as the number of behavior functions of column $j^{\prime}$ in $B$ for every $j\in[m]$ and ever...
Note that in a Type-Behavior Partitioned Graph Vector, information about $2$-types is coded in both the edge relation and in the partition, since the partition is defined via behavior functions. Thus there are additional dependencies on sizes for a Type-Behavior Partitioned Graph Vector of a model of $\phi$...
D
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Although Assumption 6.1 is strong, we are not aware of any weaker regularity condition in the literature, even in the linear setting (Melo et al., 2008; Zou et al., 2019; Chen et al., 2019b) and the NTK regime (Cai et al., 2019). Let the initial distribution $\nu_{0}$ ...
Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Assumption 4.1 can be ensured by normalizing all state-action pairs. Such an assumption is commonly used in the mean-field analysis of neural networks (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Araújo et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). We remark that our analysis straightforwardly generalize...
B
Yu et al. (2018) suggest that skip connections are “shallow” themselves, and only fuse by simple, one-step operations, and therefore Yu et al. (2018) augment standard architectures with deeper aggregation to better fuse information across layers to improve recognition and resolution. Shen et al. (2018) propose a densel...
Yu et al. (2018) suggest that skip connections are “shallow” themselves, and only fuse by simple, one-step operations, and therefore Yu et al. (2018) augment standard architectures with deeper aggregation to better fuse information across layers to improve recognition and resolution. Shen et al. (2018) propose a densel...
For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et...
As for the costs, the decoder depth has a strong impact on inference speed, as the decoder has to be computed once for each decoding step during auto-regressive decoding Kasai et al. (2021); Xu et al. (2021c), and the use of only deep encoders Bapna et al. (2018); Wang et al. (2019); Li et al. (2022a); Chai et al. (20...
For the convergence of deep Transformers, Bapna et al. (2018) propose the Transparent Attention mechanism which allows each decoder layer to attend weighted combinations of all encoder layer outputs. Wang et al. (2019) present the Dynamic Linear Combination of Layers approach that additionally aggregates shallow layers...
D
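The rows above discuss fusing information across stacked layers, e.g. by letting later stages attend to weighted combinations of all earlier layer outputs. A hedged toy sketch of that aggregation idea (the layer function and the uniform weights are illustrative assumptions, not the cited papers' learned parameters):

```python
# Aggregating all layer outputs with a (here fixed) linear combination,
# in the spirit of combining shallow and deep encoder representations.

def layer(x):
    # stand-in for a Transformer layer with a residual connection:
    # output = x + F(x), where F(x) = 0.1 * x in this toy example
    return [xi + 0.1 * xi for xi in x]

def encode(x, depth, weights):
    outputs = [x]                      # keep every layer's output
    for _ in range(depth):
        outputs.append(layer(outputs[-1]))
    total = sum(weights)
    # normalized weighted sum over the embedding and all layer outputs
    return [sum(w * o[i] for w, o in zip(weights, outputs)) / total
            for i in range(len(x))]

out = encode([1.0, 2.0], depth=3, weights=[1.0, 1.0, 1.0, 1.0])
```

With uniform weights this is a plain average of the four representations; learned, per-layer weights would recover a dynamic linear combination of layers.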
through the map $f\colon\prod_{i\in I}X_{i}\to\operatorname{Struct}(\upsigma)$ that associates to each $(A_{i}$...
$\mathcal{K}^{\circ}(X_{i})=\uptau_{i}\cap\llbracket\mathsf{FO}[\upsigma_{i}]\rrbracket_{X_{i}}$ ...
the disjoint union of the structures $A_{i}$ with $\varepsilon_{i}$ true on the structure $A_{i}$
$\psi_{i}\triangleq\exists x.\exists y.\,\varepsilon_{i}(x)\wedge\varepsilon_{i}(y)\wedge\neg(x=y)$ for $i\in I$ and $\theta_{i,j}\triangleq\exists x.\exists y$...
The set $X$ is the union (disjoint by construction) $\bigcup_{i\in I}f_{i}(X_{i})$ where...
B
Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and causes insufficient distortion perception. To bridge the gap between image feature and calibration objective...
To exhibit the performance fairly, we employ three common network architectures VGG16, ResNet50, and InceptionV3 as the backbone networks of the learning model. The proposed MDLD metric is used to express the distortion estimation error due to its unique and fair measurement for distortion distribution. To be specific...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
Figure 5: Comparison of two learning representations for distortion estimation, distortion parameter (left) and ordinal distortion (right). In contrast to the ambiguous relationship between the distortion distribution and distortion parameter, the proposed ordinal distortion displays an evident positive correlation to ...
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimate...
D
We can observe that for almost all batch sizes, the methods that adopt normalized gradients, including LARS, CLARS, and SNGM, achieve better performance than others. Compared to LARS and CLARS, SNGM achieves better test accuracy for different batch sizes.
showed that existing SGD methods with a large batch size will lead to a drop in the generalization accuracy of deep learning models. Figure 1 shows a comparison of training loss and test accuracy between MSGD with a small batch size and MSGD with a large batch size. We can find that large-batch training indeed
Figure 3 shows the validation perplexity of the three methods with a small batch size of 20 and a large batch size of 2000. In small-batch training, SNGM and LARS achieve validation perplexity comparable to that of MSGD. Meanwhile, in large-batch training, SNGM achieves better performance than MSGD and LARS.
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
D
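The rows above compare normalized-gradient methods (LARS, CLARS, SNGM) for large-batch training. A hedged sketch of the core idea, gradient normalization with a decaying step size (momentum omitted for brevity; the objective and schedule are illustrative assumptions, not the paper's exact SNGM update):

```python
# Normalized gradient descent: the step direction is grad / ||grad||,
# so the step length is controlled entirely by the learning rate,
# which helps stability when gradients vary wildly across batches.
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

p = [3.0, 4.0]                      # minimize f(p) = p1^2 + p2^2
for t in range(400):
    grad = [2 * x for x in p]
    g = norm(grad) or 1.0           # avoid division by zero at optimum
    lr = 0.5 / math.sqrt(t + 1)     # decaying step size
    p = [x - lr * gi / g for x, gi in zip(p, grad)]
```

Because each step has length exactly `lr`, the iterate walks straight toward the optimum and then oscillates within a shrinking radius, the behavior that makes normalized updates attractive for large batches.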
When the algorithm terminates with $C_{s}=\emptyset$, Lemma 5.2 ensures the solution $z^{\text{final}}$ is integral. By Lemma 5.5, any client $j$ with $d(j,S)>$...
For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here, $\mathcal{F}$ and $\mathcal{C}$...
        do $F_{A}\leftarrow\{i^{A}_{j}\mid j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$
  $F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}\mid j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$ ...
Brian Brubach was supported in part by NSF awards CCF-1422569 and CCF-1749864, and by research awards from Adobe. Nathaniel Grammel and Leonidas Tsepenekas were supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. Aravind Srinivasan was supported in part by NSF awa...
D
This together with the convergence of $\{\|X(k,\omega)-\mathbf{1}_{N}\otimes z^{*}(\omega)\|,\ k\geq 0\}$ ...
$\Gamma_{1}=\big\{\{\mathcal{G}(k),k\geq 0\}\ \big|\ E\big[\mathcal{A}_{\mathcal{G}(k)}\mid\mathcal{F}(k-1)\big]\succeq O_{N\times N}\ \text{a.s.},\ \mathcal{G}(k|k-1)\text{ is balanced a.s.},\ k\geq 0\big\}$.
From the definition of $\Gamma_{2}$, we know that $\Gamma_{2}\subseteq\Gamma_{1}$. Then, similar to the proof of Theorem 2 in [25], we get ...
At first, we suppose $\{\mathcal{G}(k),k\geq 0\}$ is a Markov chain with countable state space. For this case, Condition (b.1) of Theorem III.1 becomes more intuitive and Condition (b.2) is weakened.
The proof of Theorem III.2 is similar to that of Theorem 3.1 and is omitted here. For details, see Appendix A. The only difference is that by the independence between $\mathcal{L}_{\mathcal{G}(i)}$ and $\mathcal{L}_{\mathcal{G}(j)}$...
C
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata, while the approaches of the $k$-anonymity family sanitize the origi...
D
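The rows above mention randomized response as the standard perturbation mechanism in local differential privacy. A hedged sketch of binary randomized response with the usual debiasing step (the population and privacy budget are illustrative assumptions; this is not the MuCo mechanism):

```python
# Binary randomized response: each user reports their bit truthfully
# with probability p = e^eps / (e^eps + 1), else flips it. The curator
# then debiases the observed frequency to estimate the true one.
import math, random

random.seed(0)
eps = math.log(3)                           # privacy budget; gives p = 0.75
p = math.exp(eps) / (math.exp(eps) + 1)

def perturb(bit):
    return bit if random.random() < p else 1 - bit

true_bits = [1] * 600 + [0] * 400           # 60% of users hold "1"
reports = [perturb(b) for b in true_bits]
observed = sum(reports) / len(reports)
# unbiased estimate: E[observed] = f*p + (1-f)*(1-p), solve for f
estimate = (observed - (1 - p)) / (2 * p - 1)
```

Each user's report is plausibly deniable, yet the aggregate estimate concentrates around the true frequency as the number of users grows.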
Figure 2: Example of segmentation results on validation dataset from three best single models: (a)(d) HTC, (b)(e) SOLOv2 and (c)(f) PointRend. PointRend predicts masks with substantially finer details around object boundaries. All figures are best viewed digitally with zoom.
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
As shown in Figure 2, we compare HTC, SOLOv2 and PointRend by visualizing their predictions. It can be seen that PointRend generates much finer and smoother segmentation boundaries than HTC and SOLOv2, and it also handles overlapped instances gracefully (see top-left corner in Figure 2). Meanwhile, PointRend succeeds in disti...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
C
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$...
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s...
A
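The rows above concern the Fourier coefficients of functions on the discrete cube and the entropy of the spectral distribution. A small sketch computing the coefficients of the 3-bit majority function (my choice of example, not the note's function) and checking Parseval and the entropy with the 0 log 0 := 0 convention:

```python
# Fourier coefficients hat{f}(A) = E[f(x) * chi_A(x)] over x in {-1,1}^n,
# where chi_A(x) = prod_{i in A} x_i. For f with L2 norm 1 the weights
# |hat{f}(A)|^2 sum to 1 (Parseval), so their entropy is well defined.
import itertools, math

n = 3
points = list(itertools.product([-1, 1], repeat=n))

def f(x):
    # majority on 3 bits, a +-1 valued boolean function with L2 norm 1
    return 1 if sum(x) > 0 else -1

def chi(A, x):
    return math.prod(x[i] for i in A)   # empty product is 1

coeffs = {}
for A in itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(n + 1)):
    coeffs[A] = sum(f(x) * chi(A, x) for x in points) / len(points)

weights = [c * c for c in coeffs.values()]
total = sum(weights)                    # Parseval: equals 1
entropy = -sum(w * math.log2(w) for w in weights if w > 0)  # 0 log 0 := 0
```

For majority on 3 bits the weight 1/4 sits on each singleton and on the full set, so the spectral entropy is exactly 2 bits.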
Compared to OPT-WLSVI and MASTER, our proposed algorithms achieve comparable empirical performance. More specifically, MASTER outperforms our proposed algorithm which agrees with its dynamic regret upper bound. However, the variance of MASTER is larger due to the random scheduling of multiple base algorithms. Our algo...
From Figure 1, we find that the restart strategy works better under abrupt changes than under gradual changes, since the gap between our algorithms and the baseline algorithms designed for stationary environments is larger in this setting. The reason is that the algorithms designed to explore in stationary MDPs are gen...
Figure 1: Comparisons of different methods on cumulative reward under two different environments. The results are averaged over 10 trials and the error bars show the standard deviations. The environment changes abruptly in the left subfigure, whereas the environment changes gradually in the right subfigure.
For the case when the environment changes abruptly $L$ times, our algorithm enjoys an $\tilde{O}(L^{1/3}T^{2/3})$ dy...
$\bm{\mu}_{h,k}(\mathcal{S})=\Big(1-\frac{k-100i}{100}\Big)\bm{\mu}_{h}^{i\bmod 5}(\mathcal{S})+\frac{k-100i}{100}\bm{\mu}_{h}^{(i+1)\bmod 5}(\mathcal{S}),$
B