Dataset Viewer
Auto-converted to Parquet

Columns:
- context: string (250 to 4.36k characters)
- A: string (250 to 4.12k characters)
- B: string (250 to 5.11k characters)
- C: string (250 to 5.1k characters)
- D: string (250 to 4.17k characters)
- label: string (4 classes)
$x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)=\left[nx^{2}(n+D)-m(D-2+m)\right]R_{n}^{m}(x)+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).$
$\frac{{R_{n}^{m}}''(x)}{{R_{n}^{m}}'(x)}=\frac{1}{x^{2}-1}\left[\left(n(n+D)-\frac{m(D-2+m)}{x^{2}}\right)\frac{R_{n}^{m}(x)}{{R_{n}^{m}}'(x)}+\frac{D-1-(D+1)x^{2}}{x}\right].$
$\int_{0}^{1}x^{D-1}R_{n}^{m}(x)R_{n'}^{m}(x)\,dx\propto\delta_{n,n'}.$
$x^{3}(x^{2}-1)^{2}\frac{d^{3}}{dx^{3}}R_{n}^{m}(x)=\cdots+\Big\{\left[\cdots-m^{2}\right]x^{2}+D^{2}+D(m-1)-2m+m^{2}\Big\}\frac{d}{dx}R_{n}^{m}(x).$
$x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)=\left[nx^{2}(n+D)-m(D-2+m)\right]R_{n}^{m}(x)+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).$
D
This is achieved by using specific upper and lower triangular transvections to avoid using a discrete logarithm oracle. Building on Lemma 3.2, we construct transvections which are upper triangular matrices. Here, as per Section 3.1, $\omega$ denotes a primitive element of $\mathbb{F}_{q}$ …
Let $i\in\{1,\dotsc,d-1\}$. Getting the diagonal entry of $h$ at position $(i,i)$ to $1$ requires the following number of operations. We start by adding the column $i+1$ to column $i$ as in Line 5. We alre…
The idea is to eliminate all other entries in the $c$th column, namely to apply elementary row operations to make the entries in rows $i=r+1,\ldots,d$ of column $c$ equal to zero. Specifically, $g$ is multiplied on the left by the transvec…
Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$. Then the elementary column operations can be used to reduce the other entries in row $r$ to zero.
The key idea is to transform the diagonal matrix, with the help of row and column operations, into the identity matrix, in a way similar to an algorithm for computing the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left…
D
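The excerpts above reduce a matrix to the identity by elementary row and column operations. A hedged sketch of that general reduction pattern over $\mathbb{F}_q$ (plain Gaussian elimination with row swaps and scalings, not the paper's transvection-only construction; q is assumed prime):

```python
# Hedged illustration, not the paper's algorithm: reduce an invertible
# matrix over F_q (q prime) to the identity with elementary row
# operations, the same style of reduction the excerpt describes.
def reduce_to_identity(g, q):
    d = len(g)
    g = [row[:] for row in g]                 # entries in {0, ..., q-1}
    for c in range(d):
        # find a pivot in column c and move it into row c
        r = next(i for i in range(c, d) if g[i][c] % q)
        g[c], g[r] = g[r], g[c]
        inv = pow(g[c][c], q - 2, q)          # Fermat inverse mod q
        g[c] = [(x * inv) % q for x in g[c]]  # scale pivot to 1
        for i in range(d):                    # clear the rest of column c
            if i != c and g[i][c]:
                f = g[i][c]
                g[i] = [(a - f * b) % q for a, b in zip(g[i], g[c])]
    return g

assert reduce_to_identity([[2, 1], [1, 1]], 5) == [[1, 0], [0, 1]]
```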
where $\Omega\subset\mathbb{R}^{d}$, with $d=2$ or $3$ for simplicity, is an open bounded domain with polyhedral boundary $\partial\Omega$, and the symmetric tensor $\mathcal{A}\in[L^{\infty}(\Omega)]_{\mathrm{sym}}^{d\times d}$ …
In [MR2718268] it is shown that the number of very large eigenvalues is related to the number of connected sub-regions of $\bar{\tau}\cup\bar{\tau}'$ with large coefficien…
It is hard to approximate such problems in full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85…
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method …
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput…
B
We remark that the previously best known algorithms for finding the minimum area / perimeter all-flush triangle take nearly linear time [6, 1, 2, 3, 23], that is, $O(n\log n)$ or $O(n\log^{2}n)$ …
Using a Rotate-and-Kill process (shown in Algorithm 5), we find all the edge pairs and vertex pairs in $\mathsf{U}_{r,s,t}$ that are not G-dead.
Then, during the Rotate-and-Kill process, the pair $(e_{b},e_{c})$ will meet all pairs that are not DEAD, which implies that the algorithm finds the minimum perimeter (all-…
The inclusion / circumscribing problems usually admit the property that the locally optimal solutions are pairwise interleaving [6]. Once this property is admitted and $k=3$, we show that an iteration process (also referred to as Rotate-and-Kill) can be applied to search for all the locally optim…
in the Rotate-and-Kill process, and we are at the beginning of another iteration $(b',c')$ satisfying (2).
C
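The Rotate-and-Kill iteration referenced above has a simple overall shape; a hedged sketch (the concrete rotate/kill test of the cited Algorithm 5 is not reproduced, so `better_to_rotate` is a hypothetical predicate):

```python
# Generic Rotate-and-Kill pattern: each step advances one of the two
# indices, so at most ~2n candidate pairs are ever visited, which is the
# source of the near-linear running time discussed above.
def rotate_and_kill(n, better_to_rotate):
    pairs = []
    b, c = 0, 1
    while b < n and c < n:
        pairs.append((b, c))
        if better_to_rotate(b, c):
            b += 1   # rotate: advance the first element of the pair
        else:
            c += 1   # kill: discard candidate c and move on
    return pairs

# Usage with a toy predicate; a real instance would encode the paper's
# geometric kill condition here.
print(len(rotate_and_kill(8, lambda b, c: b < c - 1)))
```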
Due to the importance of information propagation for rumors and their detection, there are also various simulation studies [25, 27] of rumor propagation on Twitter. Those works provide relevant insights, but such simulations cannot fully reflect the complexity of real networks. Furthermore, there are recent work…
We tested all models using 10-fold cross-validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time-series model learned with Random Forest including all ensemble features; TS-SVM …
As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on t…
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha…
Most relevant for our work is the work presented in [20], where a time series model is used to capture the time-based variation of social-content features. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analys…
D
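The DSTS representation mentioned above aggregates features over time bins; a hedged sketch (bin count and the toy features are illustrative choices, not the cited model's exact configuration):

```python
import numpy as np

# DSTS-style feature vector: split an event's tweets into time bins and
# concatenate per-bin feature averages. Bin count and feature extractors
# are illustrative, not the exact setup of the cited DSTS model.
def dsts_vector(tweets, t_start, t_end, features, n_bins=12):
    edges = np.linspace(t_start, t_end, n_bins + 1)
    dim = len(features)
    out = np.zeros(n_bins * dim)
    for b in range(n_bins):
        in_bin = [t for t in tweets if edges[b] <= t["time"] < edges[b + 1]]
        if in_bin:
            out[b * dim:(b + 1) * dim] = [
                np.mean([f(t) for t in in_bin]) for f in features]
    return out

# Toy usage: tweet length and question-mark presence as two features.
tweets = [{"time": 3.0, "text": "is this real?"},
          {"time": 8.0, "text": "confirmed by officials"}]
vec = dsts_vector(tweets, 0.0, 12.0,
                  [lambda t: len(t["text"]), lambda t: "?" in t["text"]])
print(vec.shape)  # (24,)
```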
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$
where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen…
In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz…
where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the K-class SVM:
B
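The directional-convergence statement above is easy to observe numerically; a hedged sketch (toy 2-D separable data, plain gradient descent on logistic loss; the $O(\sqrt{\log\log t/\log t})$ rate itself is the cited theory and is not verified here):

```python
import numpy as np

# On separable data, gradient descent on logistic loss grows ||w(t)||
# without bound while the direction w(t)/||w(t)|| stabilizes, as in the
# excerpt's bound.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.3, (20, 2)),
               rng.normal([-2, -2], 0.3, (20, 2))])
y = np.array([1.0] * 20 + [-1.0] * 20)

w = np.zeros(2)
for t in range(50000):
    margins = y * (X @ w)
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.1 * grad

print(np.linalg.norm(w))       # norm keeps growing with more iterations
print(w / np.linalg.norm(w))   # direction settles to a fixed unit vector
```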
Widely spreading rumors can be harmful to the government, markets and society, and reduce the usefulness of social media channels such as Twitter by affecting the reliability of their content. Therefore, effective methods for detecting rumors on Twitter are crucial, and rumors should be detected as early as possible before…
The performance of the Twitter features is stable over time from the beginning to the end. The 3 best Twitter features are all based on URLs contained in tweets: ContainNEWS, UrlRankIn5000, WotScore, as shown in Table 8. It is quite reasonable that a news event has a higher probability of being reported by news or …
Widely spreading rumors can be harmful to the government, markets and society, and reduce the usefulness of social media channels such as Twitter by affecting the reliability of their content. Therefore, effective methods for detecting rumors on Twitter are crucial, and rumors should be detected as early as possible before…
The city police had to warn the population to refrain from spreading related news on Twitter as it was getting out of control: “Rumors are wildfires that are difficult to put out and traditional news sources or official channels, such as police departments, subsequently struggle to communicate verified information to t…
For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a large bias between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news even…
C
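The feature ranking described above is a standard Random Forest importance computation; a hedged sketch (generic scikit-learn usage on placeholder data, with feature names borrowed from the excerpt for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Rank features by Random Forest importance, as in the excerpt's analysis.
# The data and labels are synthetic placeholders, not the paper's dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)
names = ["PolarityScore", "ContainNEWS", "UrlRankIn5000", "WotScore"]

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(names, rf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```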
Discounted Cumulative Gain (NDCG) and $recall@k$ ($r@k$). We measure the retrieval effectiveness of each metric at 3 and 10 ($m$@3 and $m$@10, where $m\in$ {NDCG,…
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res…
Evaluation methodology. For RQ1, given an event entity $e$ at time $t$, we need to classify it into either the Breaking or the Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of b…
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipated) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type …
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of real-world events, which are driven by a great variety of factors. We address the two major factors that are assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we…
B
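Both effectiveness measures named above are standard and easy to state in code; a hedged, self-contained sketch of one common NDCG variant (linear gains) and recall@k on toy data:

```python
import numpy as np

def ndcg_at_k(rels, k):
    """NDCG@k for a ranked list of graded relevance values (linear gain)."""
    rels = np.asarray(rels, dtype=float)
    top = rels[:k]
    discounts = 1.0 / np.log2(np.arange(2, top.size + 2))
    dcg = float((top * discounts).sum())
    ideal = np.sort(rels)[::-1][:k]
    idcg = float((ideal * discounts).sum())
    return dcg / idcg if idcg > 0 else 0.0

def recall_at_k(retrieved, relevant, k):
    """Fraction of relevant items appearing in the top-k results."""
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

print(round(ndcg_at_k([3, 2, 0, 1], 3), 3))          # m@3 with graded labels
print(recall_at_k(["a", "b", "c"], ["a", "d"], 3))   # 0.5
```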
In this case, the agent must sequentially learn both the underlying dynamics ($L_{a},\Sigma_{a};\forall a$) and the conditional reward function's variance …
For the more interesting case of unknown parameters, we marginalize the parameters $L_{a}$ and $\Sigma_{a}$ of the transition distributions
If the support of $q(\cdot)$ includes the support of the distribution of interest $p(\cdot)$, one computes the IS estimator of a test function based on the normalized weights $w^{(m)}$,
We now describe in detail how to use the SMC-based posterior random measure $p_{M}(\theta_{t+1,a}|\mathcal{H}_{1:t})$ …
We observe noticeable (almost linear) regret increases when the dynamics of the parameters swap the identity of the optimal arm. However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters,
D
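The normalized-weight IS estimator mentioned above is a standard construction; a hedged sketch with toy Gaussian target and proposal (stand-ins, not the paper's model):

```python
import numpy as np

# Self-normalized importance sampling: estimate E_p[f(x)] from samples of
# a proposal q whose support covers the target p. Toy choice: p = N(1, 1),
# q = N(0, 2), f(x) = x^2, so the true value is Var + mean^2 = 2.
rng = np.random.default_rng(0)
M = 100_000
x = rng.normal(0.0, 2.0, M)              # x^(m) ~ q

log_p = -0.5 * (x - 1.0) ** 2            # log p up to a constant
log_q = -0.5 * (x / 2.0) ** 2            # log q up to a constant
log_w = log_p - log_q
w = np.exp(log_w - log_w.max())          # stabilized unnormalized weights
w /= w.sum()                             # normalized weights w^(m)

print(np.sum(w * x ** 2))                # close to 2.0
```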
This very low threshold for now serves to measure very basic movements and to check the validity of the data. Patients 11 and 14 are the most active, both having a median of more than 50 active intervals per day (corresponding to more than 8 hours of activity).
Overall, the distributions of all three kinds of values throughout the day roughly correspond to each other. In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the day.
Table 2 gives an overview of the number of different measurements that are available for each patient. (For patient 9, no data is available.) The study duration varies among the patients, ranging from 18 days for patient 8 to 33 days for patient 14.
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i…
Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
A
Table 5: Details regarding the hardware and software specifications used throughout our evaluation of computational efficiency. The system ran under the Debian 9 operating system and we minimized usage of the computer during the experiments to avoid interference with measurements of inference speed.
We further evaluated the model complexity of all relevant deep learning approaches listed in Table 1. The number of trainable parameters was computed based on either the official code repository or a replication of the described architectures. In case a reimplementation was not possible, we faithfully estimated a lowe...
Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity, and the superscript † represents pre-trai…
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones ba...
Table 5: Details regarding the hardware and software specifications used throughout our evaluation of computational efficiency. The system ran under the Debian 9 operating system and we minimized usage of the computer during the experiments to avoid interference with measurements of inference speed.
A
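Parameter counts like those in the excerpt's Table 3 are typically computed with a one-liner; a hedged sketch (standard PyTorch idiom; the toy model is a placeholder, not one of the benchmarked architectures):

```python
import torch.nn as nn

# Count trainable parameters the way model-complexity tables are built:
# sum of element counts over parameters with requires_grad=True.
def count_trainable(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

toy = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Flatten(), nn.Linear(16 * 32 * 32, 10))
print(count_trainable(toy))  # placeholder network, not a benchmark model
```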
Pathwidth and cutwidth are classical graph parameters that play an important role in graph algorithms, independently of our application to computing the locality number. Therefore, the main purpose of this section is to translate the reduction from MinCutwidth to MinPathwidth that takes MinLoc as an intermediate s…
One of the main results of this section is a reduction from the problem of computing the locality number of a word $\alpha$ to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr…
A reason why this direct reduction from cutwidth to pathwidth has been overlooked might be that the literature on cutwidth and pathwidth approximation is focused on more general approximation techniques (i.e., vertex and edge separators), which then yield approximation algorithms for these graph parameters. Another r…
The relationship between cutwidth and pathwidth revealed by this direct reduction is best illustrated via a third graph parameter that we call second order cutwidth. To the best of our knowledge, this parameter has not explicitly been studied before.
In this work, we have answered several open questions about the string parameter of the locality number. Our main tool was to relate the locality number to the graph parameters cutwidth and pathwidth via suitable reductions. As an additional result, our reductions also pointed out an interesting relationship between th...
C
Shallow learning models such as decision trees and Support Vector Machines (SVMs) are ‘inefficient’, meaning that they require a large number of computations during training/inference, a large number of observations to achieve generalizability, and significant human labour to specify prior knowledge in the model [8].
The literature phrase search is the combined presence of each one of the cardiology terms indicated by (∗) in Table I with each one of the deep learning terms related to architecture, indicated by (+) in Table II, using Google Scholar (https://scholar.google.com), Pubmed (https://ncbi.nlm.nih.gov/pubmed/) and Scopus…
The network architecture was adapted from VGG-16, similar to the DeepLab architecture [164], while the final segmentation was refined using a Conditional Random Field (CRF). The authors show that the introduction of unlabelled data improves segmentation performance when the training set is small.
At each iteration the expert annotates the most uncertain ECG beats in the test set, which are then used for training, while the output of the network assigns confidence measures to each test beat. Experiments performed on MITDB, INDB and SVDB indicate the robustness and computational efficiency of the method.
A one-hidden-layer network was used for the initial testing of all voxels to obtain a small number of candidates, followed by a more accurate classification with a deep network. The learned image features are further combined with Haar wavelet features to increase the detection accuracy.
A
We presented SimPLe, a model-based reinforcement learning approach that operates directly on raw pixel observations and learns effective policies to play games in the Atari Learning Environment. Our experiments demonstrate that SimPLe learns to play many of the games with just 100K interactions with the envir…
In this paper our focus was to demonstrate the capability and generality of SimPLe only across a suite of Atari games; however, we believe similar methods can be applied to other environments and tasks, which is one of our main directions for future work. As a long-term challenge, we believe that model-based reinforcem…
Given the stochasticity of the proposed model, SimPLe can be used with truly stochastic environments. To demonstrate this, we ran an experiment where the full pipeline (both the world model and the policy) was trained in the presence of sticky actions, as recommended in (Machado et al., 2018, Section 5). Our world mod…
Our predictive model has stochastic latent variables, so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a…
Figure 2: Architecture of the proposed stochastic model with discrete latent. The input to the model is four stacked frames (as well as the action selected by the agent), while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-…
C
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data making them scalable and genera...
This is achieved with the use of multilayer networks, consisting of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for …
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals.
For the purposes of this paper and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable para…
Deep learning is emerging as a powerful solution for a wide range of problems in biomedicine achieving superior results compared to traditional machine learning. The main advantage of methods that use deep learning is that they automatically learn hierarchical features from training data making them scalable and genera...
A
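A hedged sketch of the S2I idea defined above (a non-trainable spectrogram front-end ahead of a small 2-D CNN "base model"; layer sizes and the STFT settings are illustrative, not the paper's modules):

```python
import torch
import torch.nn as nn

# Signal2Image (S2I) sketch: turn a 1-D signal into a 2-D "image" and feed
# it to a base model built for imaging problems. A spectrogram is one
# illustrative, non-trainable S2I choice.
class SpectrogramS2I(nn.Module):
    def forward(self, x):                     # x: (batch, 178) EEG segment
        window = torch.hann_window(32, device=x.device)
        spec = torch.stft(x, n_fft=32, hop_length=8, window=window,
                          return_complex=True).abs()
        return spec.unsqueeze(1)              # (batch, 1, freq, time)

base = nn.Sequential(                         # stand-in "base model"
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 4 * 4, 5))

model = nn.Sequential(SpectrogramS2I(), base)
print(model(torch.randn(2, 178)).shape)       # torch.Size([2, 5])
```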
As depicted in Fig. 10, for the step negotiation operation with a height of $h$, both $E_{Rw}<E_{Cw}$ and $E_{Rr}<E_{Cr}$ …
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas...
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result...
To assess the efficacy of the suggested autonomous locomotion mode transition strategy, simulation experiments featuring step heights of h, 2h, and 3h were conducted. These simulations involved continuous tracking of energy consumption for both total body negotiation ($E_{Rw}$ …
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ...
B
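The transition rule described in these excerpts is a running energy comparison against a pre-established walking-gait estimate; a hedged sketch (variable names, the power trace, and the threshold are illustrative toy values):

```python
# Energy-based mode transition sketch: keep rolling while the accumulated
# energy stays below the walking-gait estimate; switch to walking once it
# is exceeded. E_walk_estimate and the power samples are toy values.
def negotiate_step(power_samples, dt, E_walk_estimate):
    mode, E_accum = "rolling", 0.0
    for p in power_samples:              # instantaneous power while rolling
        E_accum += p * dt                # online accumulated energy
        if E_accum > E_walk_estimate:
            mode = "walking"             # rolling is no longer cheaper
            break
    return mode, E_accum

mode, used = negotiate_step([5.0] * 40, 0.1, E_walk_estimate=12.0)
print(mode, round(used, 1))              # walking once 12.0 J is exceeded
```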
In this work, we address a significant drawback of the online advice model: namely, all previous works assume that advice is, in all circumstances, completely trustworthy and precisely as defined by the algorithm. Since the advice is infallible, no reasonable online algorithm with advice would choose to ignor…
Furthermore, we show an interesting difference between the standard advice model and the model we introduce: in the former, an advice bit can be at least as powerful as a random bit, since an advice bit can effectively simulate any efficient choice of a random bit. In contrast, we show that in our model, there are situ...
Notwithstanding such interesting attributes, the known advice model has certain drawbacks. The advice is always assumed to be error-free information that may be used to encode some property, often explicitly connected to the optimal solution. In many settings, one can argue that such information cannot be readily a…
We begin in Section 2 with a simple, yet illustrative online problem as a case study, namely the ski rental problem. Here, we give a Pareto-optimal algorithm with only one bit of advice. We also show that this algorithm is Pareto-optimal even in the space of all (deterministic) algorithms with advice of any size.
It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, as all information, are prone to transmission errors. In addition, the known advice models often allow information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution....
D
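Ski rental, the case study named above, makes the value and the risk of advice concrete; a hedged sketch (textbook formulation with rent cost 1/day and buy cost B; the single advice bit predicts a long season, and the naive "always trust" policy below illustrates the setting, not the paper's Pareto-optimal algorithm):

```python
# Ski rental with one (possibly wrong) advice bit claiming "season >= B
# days". Blindly trusting the bit is optimal when it is correct and can
# be very costly when it lies, which motivates trading off trust against
# robustness.
def trusting_cost(season_days, B, advice_bit):
    return B if advice_bit else season_days   # buy at once vs. rent forever

def opt_cost(season_days, B):
    return min(season_days, B)

B = 10
for days, bit in [(100, 1), (100, 0), (3, 1), (3, 0)]:
    alg, best = trusting_cost(days, B, bit), opt_cost(days, B)
    print(f"days={days} advice={bit}: ALG={alg} OPT={best} ratio={alg/best:.2f}")
```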
Traditionally, expert systems have been used to deal with complex problems that require human expertise to be solved. These intelligent systems usually need knowledge engineers to manually code all the facts and rules acquired from human experts through interviews into the system’s knowledge base (KB).
Traditionally, expert systems have been used to deal with complex problems that require human expertise to be solved. These intelligent systems usually need knowledge engineers to manually code all the facts and rules acquired from human experts through interviews into the system’s knowledge base (KB).
However, this is a vital aspect, especially when the task involves sensitive or risky decisions in which, usually, people are involved. Figure 9 shows an example of a piece of what could be a visual description of the classification process for subject 9579 (note that this is the same subject who was prev…
A scenario that is gaining increasing interest in the classification of sequential data is the one referred to as “early classification”, in which the problem is to classify the data stream as early as possible without a significant loss in terms of accuracy.
Nonetheless, this manual process is very expensive and error-prone, since the KB of a real expert system includes thousands of rules. This, added to the rise of big data and cheaper GPU-powered computing hardware, is causing a major shift in the development of these intelligent systems, in which machine learning is incr…
D
Most existing error feedback based sparse communication methods are for vanilla DSGD (Aji and Heafield, 2017; Alistarh et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al., 2019). There has appeared one error feedback based sparse communication method for DMSGD, called Deep Gradient Compression (…
$\mathbf{m}_{t,k},k\in[K]$ is called local momentum since it only accumulates local gradient information from worker $k$.
We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is ...
However, the theory about the convergence of DGC is still lacking. Furthermore, although DGC combines momentum and error feedback, the momentum in DGC only accumulates stochastic gradients computed by each worker locally. Therefore, the momentum in DGC is a local momentum without global information.
GMC combines error feedback and momentum to achieve sparse communication in distributed learning. But different from existing sparse communication methods like DGC which adopt local momentum, GMC adopts global momentum. To the best of our knowledge, this is the first work to introduce global momentum into sparse commun...
C
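The interplay of momentum, sparsification, and error feedback discussed above fits in a few lines; a hedged single-worker sketch (top-k compression with an error-feedback residual on a toy quadratic; a simplification, not the exact GMC or DGC update rules):

```python
import numpy as np

# Error feedback with top-k sparsification (single-worker sketch): the
# residual e stores what compression dropped and is added back before the
# next compression, so no gradient information is permanently lost.
def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
dim, lr, beta = 10, 0.1, 0.9
w, m, e = np.zeros(dim), np.zeros(dim), np.zeros(dim)

for step in range(200):
    grad = w - rng.normal(1.0, 0.1, dim)    # toy quadratic objective
    m = beta * m + grad                     # momentum accumulation
    u = top_k(e + lr * m, k=2)              # sparse update actually sent
    e = e + lr * m - u                      # keep the dropped remainder
    w = w - u
print(np.round(w, 2))                       # all coordinates approach ~1
```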
These results suggest that reconstruction error by itself is not a sufficient metric for decomposing data in interpretable components. Trying to solely achieve lower reconstruction error (such as the case for the Identity activation function) produces noisy learned kernels, while using the combined measure of reconstru...
The three separate clusters depicted in Fig. 3 and the aggregated density plot in Fig. 4 between the Identity activation function, the ReLU and the rest show the effect of a sparser activation function on the representation.
These results suggest that reconstruction error by itself is not a sufficient metric for decomposing data in interpretable components. Trying to solely achieve lower reconstruction error (such as the case for the Identity activation function) produces noisy learned kernels, while using the combined measure of reconstru...
Comparing the differences of $\bar{\varphi}$ between the Identity, the ReLU and the other sparse activation functions in Fig. 4, we notice that the latter produce a minimum region in which we observe interpretable kernels.
During validation we selected the models with the kernel size that achieved the best $\bar{\varphi}$ out of all epochs. During testing we feed the test data into the selected model and calculate $CR^{-1}$…
C
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better.
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better.
We construct a UAV ad-hoc network in a post-disaster scenario with $M$ identical UAVs randomly deployed, where $M$ is a huge number compared with a normal multi-UAV system. All the UAVs have the same battery volume $E$ and communication capability. The topological structure of the mult…
Fig. 12 shows how the number of UAVs affects the computational complexity of SPBLLA. Since the total number of UAVs varies, the goal functions are different. The goal functions' values in the optimum states increase with the growth in the number of UAVs. Since goal functions are the summation of utility functions, …
Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied to potential games can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit…
B
… $\big/\,\widehat{r}^{2}\big)\big] = \underline{r}^{2}\left[-\overline{\overline{\nabla}}\cdot\left(\overline{f}\,\overline{\mathbf{v}}\,/\,\overline{r}^{2}\right)\cdots\right]$ …
… $\left(\overline{\widehat{\nabla}}\,\overline{\omega}\right)^{2} = \overline{\widehat{W}}*\Big[\widehat{\mu}\Big\{2\left(\overline{\widehat{Dr}}*\overline{v}_{r}\right)^{2}\cdots$ …
… $\widehat{\eta}/\mu_{0}\,\left(\left(\overline{\widehat{\nabla}}\,\overline{f}\right)/\widehat{r}\right)^{2}\Big] + \eta'\,J_{\phi}^{2}\cdots$ …
… $\widehat{\eta}\left(\overline{\widehat{\nabla}}\,\overline{f}\right)/\widehat{r}^{2}\Big\} = \frac{1}{2\pi}\,\overline{dV}^{T}*\Big\{-\overline{\overline{\nabla}}\cdots$ …
Here, $\widehat{\eta}=\langle\overline{\eta}\rangle^{e}$, and $\widehat{\omega}=\langle\overline{v}_{\phi}/\overline{r}\rangle^{e}$ …
D
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$ …
$f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\ v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$ …
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality) by $h_{A}$ …
C
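The case definition above translates directly into code; a hedged sketch (null modeled as Python's None, and the four truth values kept symbolic as strings):

```python
# The comparability function from the excerpt, with null modeled as None
# and the lattice values kept symbolic as "1", "a", "b", "0".
def f(u, v):
    if u is not None and v is not None:
        return "1" if u == v else "a"   # both present: equal vs. differing
    if u is None and v is None:
        return "b"                      # both values are missing
    return "0"                          # exactly one value is missing

assert f(3, 3) == "1"
assert f(3, 7) == "a"
assert f(None, None) == "b"
assert f(3, None) == "0"
```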
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigating variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene…
It is the original Dropout method, introduced in 2012. Standard Dropout provides a simple technique for avoiding over-fitting in fully connected neural networks [12]. During each training phase, each neuron is excluded from the network with a probability p. Once trained, in the testing phase the full network is u…
Deep neural networks are the state-of-the-art learning models used in artificial intelligence. The large number of parameters in neural networks makes them very good at modelling and approximating any arbitrary function. However, the larger number of parameters also makes them particularly prone to over-fitting, requirin…
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein…
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation between the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and aft…
B
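A hedged sketch of a Q-network with standard Dropout as described above (generic PyTorch; widths, the dropout rate, and the 4-dimensional observation are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

# Q-network with standard Dropout between fully connected layers.
# model.train() enables the stochastic neuron exclusion during updates;
# model.eval() uses the full (rescaled) network when acting.
q_net = nn.Sequential(
    nn.Linear(4, 128), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(128, 2),               # one Q-value per action
)

state = torch.randn(1, 4)            # e.g. a CartPole-like observation
q_net.eval()                         # deterministic Q-values for acting
print(q_net(state).argmax(dim=1).item())
```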
Medical images, both 2D and volumetric, generally have larger file sizes than natural images, which inhibits the ability to load them entirely into memory for processing. As such, they need to be processed either as patches or sub-volumes, making it difficult for the segmentation models to capture spatial relati…
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic...
Vorontsov et al. (2019), using a dataset defined in Cohen et al. (2018), proposed an image-to-image framework to transform an input image with an object of interest (presence domain), like a tumor, into an image without the tumor (absence domain), i.e. to translate a diseased image into a healthy one; next, their model learns to add …
Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important pr...
Exploring reinforcement learning approaches similar to Song et al. (2018) and Wang et al. (2018c) for semantic (medical) image segmentation to mimic the way humans delineate objects of interest. Deep CNNs are successful in extracting features of different classes of objects, but they lose the local spatial information...
D
Black line: the threshold from [28] indicating the value of $\lambda^{s}_{\mathrm{max}}/2$ below which one should switch to the random cut to obtain a solution $\geq 0.53$…
Black line: the threshold from [28] indicating the value of $\lambda^{s}_{\mathrm{max}}/2$ below which one should switch to the random cut to obtain a solution $\geq 0.53$…
We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges. Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yield...
We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges. Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yield...
Fig. 4 illustrates how the size of the cut $\gamma(\mathbf{z})$ induced by the spectral partition $\mathbf{z}$ changes as more edges are added and the original structure of the graph is corrupted (blue line). The figure also reports the size of the random cut (orange line) and the…
B
Neural random forest imitation enables an implicit transformation of random forests into neural networks. Usually, data samples are propagated through the individual decision trees and the split decisions are evaluated during inference. We propose a method for generating input-target pairs by reversing this process and...
We now compare the proposed method to state-of-the-art methods for mapping random forests into neural networks, and to classical machine learning classifiers such as random forests and support vector machines with a radial basis function kernel, which have been shown to be the best two classifiers across all UCI datasets (Fernán…
Random forests and neural networks share some similar characteristics, such as the ability to learn arbitrary decision boundaries; however, both methods have different advantages. Random forests are based on decision trees. Various tree models have been presented – the most well-known are C4.5 (Quinlan, 1993) and CART ...
The generalization performance has been widely studied. Zhang et al. (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data. Bornschein et al. (2020) analyze the performance across different dataset sizes. Olson et al. (2018) evaluate the performance of modern...
In contrast to neural networks, random forests are very robust to overfitting due to their ensemble of multiple decision trees. Each decision tree is trained on randomly selected features and samples. Random forests have demonstrated remarkable performance in many domains (Fernández-Delgado et al., 2014).
B
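The generation idea stated in the first excerpt can be illustrated with its simplest stand-in: label sampled inputs with a trained forest and fit a network on them (the cited method derives samples by inverting the trees' split decisions, which is more targeted than the uniform sampling used in this hedged sketch):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Imitation baseline: generate input-target pairs from a trained random
# forest (the teacher) and fit a neural network on them.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
X_gen = rng.uniform(X.min(axis=0), X.max(axis=0), size=(5000, 8))
y_gen = rf.predict(X_gen)                  # forest labels the generated data

nn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                   random_state=0).fit(X_gen, y_gen)
print("agreement with forest:", (nn.predict(X) == rf.predict(X)).mean())
```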
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al.…
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
C
In this section, we provide a comprehensive overview of methods that enhance the efficiency of DNNs regarding memory footprint, computation time, and energy requirements. We have identified three different major approaches that aim to reduce the computational complexity of DNNs, i.e., (i) weight and activation quantiza...
Quantization approaches reduce the number of bits used to store the weights and the activations of DNNs. While quantization approaches obviously reduce the memory footprint of a DNN, the selected weight representation potentially also facilitates faster inference using cheaper arithmetic operations.
Furthermore, representations using fewer bits often facilitate faster computation. For instance, when quantization is driven to the extreme with binary weights $w\in\{-1,1\}$ and binary activations $x\in\{-1,1\}$, floating-point or fixed-point dot products…
Quantization in DNNs is concerned with reducing the number of bits used for the representation of the weights and the activations. The reduction in memory requirements is obvious: using fewer bits for the weights results in a lower memory overhead for storing the corresponding model, and using fewer bits for the activ…
The results for the number of floating-point operations (FLOPs), parameters, activations, and memory (= parameters + activations) are reported in Figure 4. When considering the number of FLOPs and parameters, which are the main metrics in the literature on resource-efficient DNNs, it is clear that channel and group pru…
C
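With weights and activations restricted to {−1, 1} as above, the dot product reduces to an XNOR followed by a popcount; a hedged pure-Python sketch of the arithmetic identity (an illustration, not an optimized kernel):

```python
# With w, x in {-1, +1}^n encoded bitwise (+1 -> 1, -1 -> 0), the dot
# product satisfies: w . x = 2 * popcount(XNOR(w, x)) - n, because XNOR
# marks exactly the positions where the two signs agree.
def binary_dot(w_bits, x_bits, n):
    xnor = ~(w_bits ^ x_bits) & ((1 << n) - 1)
    return 2 * bin(xnor).count("1") - n

def encode(v):                              # {-1,+1} list -> bitmask
    return sum(1 << i for i, s in enumerate(v) if s == 1)

w = [1, -1, 1, 1]
x = [1, 1, -1, 1]
assert binary_dot(encode(w), encode(x), 4) == sum(a * b for a, b in zip(w, x))
```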
$\mathrm{PH}_{*}(\mathrm{VR}_{*}(X\vee Y);\mathbb{F})\cong\mathrm{PH}_{*}(\mathrm{VR}_{*}(X);\mathbb{F})\oplus\mathrm{PH}_{*}(\mathrm{VR}_{*}(Y);\mathbb{F}).$
By Azumaya's theorem [10], persistence barcodes, whenever they exist, are unique: any two persistence barcodes associated to a given $V_{*}$ must agree (up to reordering). The most important existence result for persistence barcodes is Crawley-Boevey's theorem …
Note that the tensor product of two simple persistence modules corresponding to intervals $I$ and $J$ is the simple persistence module corresponding to the interval $I\cap J$. Therefore, the first part of Theorem 8 implies that
We will first prove the claim surrounding Equation (5). By [23, Lemma 3.16 and Proposition 3.29], the multiplicity of $(r,s]$ in $\mathrm{barc}^{\mathrm{VR}}_{k}(X;\mathbb{F})$ …
The second part of the proposition above, equation (3), implies that the right endpoint of any interval $I$ (often referred to as the death time of $I$) cannot exceed the radius $\mathrm{rad}(X)$ of $X$ (cf. Remark 9.1).
B
The rest of this paper is organized as follows. In the next two sections, we discuss literature that is related to visual, interactive assessment and interpretation of t-SNE projections as well as the necessary background information on how the t-SNE algorithm works. Section 4 presents our visualization approach inclu...
The problem that DR tries to solve is, in general, to find a low-dimensional representation of a high-dimensional data set that retains—as much as possible—its original structure. When used for visualization, the output is set to two or three dimensions, and the results are commonly visualized with scatterplots, where ...
Overall Accuracy. We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q…
A DR method is an algorithm that projects a high-dimensional data set to a low-dimensional representation, preserving the structure of the original data as much as possible. Most of these algorithms have some (or many) hyper-parameters that may considerably affect their results, but setting them correctly is not a triv...
Most similarly to one of our proposed interactions (the Dimension Correlation, Subsection 4.4), in AxiSketcher [47] (and its prior version InterAxis [48]) the user can draw a polyline in the scatterplot to identify a shape, which results in new non-linear high-dimensional axes to match the user’s intentions. Since the...
C
Guided learning strategy: A novel update mechanism for metaheuristic algorithms design and improvement - 2024 [30]: This work provides guidelines for improving the performance of metaheuristics. The authors have developed a strategy for recalling the algorithm’s requirements based on the current population. Authors ann...
A Simple statistical test against origin-biased metaheuristics - 2024 [31]: The authors have developed a test to determine algorithm bias. The test is based on the idea that an unbiased algorithm can choose either direction for one of two different local optima in a function. If there is a difference in behavior betwee...
Benchmarks: The choice of benchmarking in algorithm evaluation can vary between real-world scenarios and comparisons against existing algorithms. Selecting the right benchmark is crucial, as study conclusions heavily rely on the test environment. However, chosen benchmarks often exhibit biases that can unfairly advanta...
In [21], the authors compare seven bio-inspired algorithms on various benchmarks and discover that “these (algorithms) contain a centre-bias operator that lets them find optima in the centre of the benchmark set with ease”. The conclusion is that making more “comparison with other methods (that do …
Metaheuristic optimization algorithms: a comprehensive overview and classification of benchmark test functions - 2024 [37]: This work focuses on the practical scenario of developing a new metaheuristic. It reviews over 200 mathematical test functions and more than 50 real-world engineering design problems. For each fu...
A
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which is not possible with $k$-means. Therefore, …
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information, so they are applicable to non-Euclidean data, which is not possible with $k$-means. Therefore, …
Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]. The former tries to model a connectivity distribution for each node, while the latter learns to distinguish whether an edge exists between two nodes directly. In recent y…
Network embedding is a fundamental task for graph-type data arising in recommendation systems, social networks, etc. The goal is to map the nodes of a given graph into latent features (namely embeddings) such that the learned embeddings can be utilized for node classification, node clustering, and link prediction.
To apply graph convolution to unsupervised learning, GAE was proposed [20]. GAE first transforms each node into a latent representation (i.e., embedding) via a GCN, and then aims to reconstruct some part of the input. The GAEs proposed in [20, 29, 22] reconstruct the adjacency via a decoder, while the GAEs developed in [21…
C
Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping, and requests/responses to name servers, and we apply the suitable test depending on the server that we identify on the tested networ…
Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping, and requests/responses to name servers, and we apply the suitable test depending on the server that we identify on the tested networ…
Statistics of the IPID value distribution among the tested servers are plotted in Figure 2. When ICMP is filtered, the result is ERROR; when run with TCP, the IPID values are often zero (ZERO IPID in the graph) in Figure 2. To improve the coverage of the IPID technique, we merge the ICMP&TCP and ICMP&UDP results for each server …
Measuring IPID increment rate. The traffic to the servers is stable and hence can be predicted (Wessels et al., 2003). We validate this by sampling the IPID value at the servers which we use for running the test. One example evaluation of IPID sampling on one of the busiest servers is plotted in Figure 3. In this eva…
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID values [RFC6864] (Touch, 2013) leak the traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the…
B
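IPID sampling as described above can be done with a few crafted probes; a hedged sketch using scapy (the target address is a documentation placeholder; raw-socket privileges are required, and probes should only be sent to hosts you are authorized to test):

```python
import time
from scapy.all import IP, TCP, sr1   # requires root privileges

# Sample a server's IPID over time: for a globally incremental counter,
# the increments between samples reflect the server's outbound traffic.
TARGET = "192.0.2.1"                  # placeholder (TEST-NET) address

samples = []
for _ in range(5):
    reply = sr1(IP(dst=TARGET) / TCP(dport=80, flags="S"),
                timeout=2, verbose=False)
    if reply is not None:
        samples.append(reply[IP].id)  # IPID field of the response
    time.sleep(1)

increments = [b - a for a, b in zip(samples, samples[1:])]
print("IPID samples:", samples, "increments:", increments)
```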
The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer....
For each batch $T$ from 3 through 10, the batches $1,2,\ldots,T-1$ were used to train the skill NN and context+skill NN models for 30 random initializations of the starting weights. The accuracy was measured by classifying examples from batch $T$ (Fig. 3A, Table 1, Skill…
The second comparison is between the weighted ensembles of SVMs, i.e., the state of the art [7], and the weighted ensembles of neural networks. For each batch, an SVM and a neural network were trained with that batch as the training set. Weighted ensembles were constructed for each batch $T$ by assigning weig…
While context did introduce more parameters to the model (7,575 parameters without context versus 14,315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer …
The context pathway is based on a recurrent neural network (RNN) approach. It reuses weights and biases across the steps of a sequence and can thus process variable-length sequences. The alternative was to use a long short-term memory (LSTM) network, which employs gating variables to better remember information in long sequen…
D
Unfortunately, $T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)$ …
Unfortunately, $T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)$ …
Algorithm $\mathcal{B}$ is simply algorithm $\mathcal{A}$, but after every step it waits as long as necessary to make its expected running time for that step equal to the bound calculated for this step. To be precise, there are two types of waiting, best explained by an example.
3: Compute the sets $\mathcal{B}_{1}^{(1)},\ldots,\mathcal{B}_{|\mathcal{T}|+1}^{(1)}$ …
Note that the time waited is independent of $Q$. Together, these two types of waiting ensure that (i) the time needed by $\mathcal{B}$ is monotone in $|Q|$ and (ii) the total expected time needed by $\mathcal{B}$ equals the calculated upper bound for $\mathcal{A}$…
B
We conclude this section by presenting a pair S,T𝑆𝑇S,Titalic_S , italic_T of semigroups without a homomorphism S→T→𝑆𝑇S\to Titalic_S → italic_T or T→S→𝑇𝑆T\to Sitalic_T → italic_S where S𝑆Sitalic_S and T𝑇Titalic_T possess typical properties of automaton semigroups, which makes them good candidates for also belong...
The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem...
A semigroup $S$ is generated by a set $Q$ if every element $s\in S$ can be written as a product $q_{1}\dots q_{n}$ of factors from $Q$...
A semigroup arising in this way is called self-similar. Furthermore, if the generating automaton is finite, it is an automaton semigroup. If the generating automaton is additionally complete, we speak of a completely self-similar semigroup or of a complete automaton semigroup.
The word problem of a semigroup finitely generated by some set $Q$ is the decision problem of whether two input words over $Q$ represent the same semigroup element. The word problem of any automaton semigroup can be solved in polynomial space and, under common complexity-theoretic assumptions, this cann...
D
We probe the reasons behind the performance improvements of HINT and SCR. We first analyze whether the results improve even when the visual cues are irrelevant (Sec. 4.2) or random (Sec. 4.3) and examine whether their differences are statistically significant (Sec. 4.4). Then, we analyze the regularization effects by evaluating ...
We compare four variants of HINT and SCR to study the causes behind the improvements: models that are fine-tuned on 1) relevant regions (the state-of-the-art methods), 2) irrelevant regions, 3) fixed random regions, and 4) variable random regions. For all variants, we fine-tune a pre-trained UpDn, whi...
Consistent with Selvaraju et al. (2019), and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we compare against the improvements on VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then the p...
Following Selvaraju et al. (2019), we report Spearman’s rank correlation between the network’s sensitivity scores and human-based scores in Table A3. For HINT and our zero-out regularizer, we use human-based attention maps. For SCR, we use textual explanation-based scores. We find that HINT trained on human attention maps...
We compare the baseline UpDn model with HINT and SCR-variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Fu...
D
Table 3 shows the results for the answer sentence selection task, comparing the performance of BERT and PrivBERT. Results for BERT are as reported by Ravichander et al. (2019). PrivBERT achieves state-of-the-art results, improving on BERT by about 6%. PrivBERT therefore has been shown to achieve sta...
Duplicate and Near-Duplicate Detection. Examination of the corpus revealed that it contained many duplicate and near-duplicate documents. We removed exact duplicates by hashing all the raw documents and discarding all but one copy of each exact hash (see the sketch after this row). Through manual inspection, we found that a number of privacy policies fro...
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus the PrivaSeer Corpus on privacy policies ...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)...
We created the PrivaSeer Corpus which is the first large scale corpus of contemporary website privacy policies and consists of just over 1 million documents. We designed a novel pipeline to build the corpus, which included web crawling, language detection, document classification, duplicate removal, document cross ver...
D
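The deduplication step referenced above is simple enough to show concretely. A minimal sketch, assuming documents arrive as raw bytes; SHA-256 is our choice here, since the row does not name the pipeline's hash function.

```python
import hashlib

def deduplicate(documents):
    # documents: iterable of raw document bytes; keep the first copy per hash.
    seen, unique = set(), []
    for doc in documents:
        digest = hashlib.sha256(doc).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique
```

Near-duplicates, unlike exact ones, would not share a hash and need fuzzier techniques such as shingling or MinHash.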
G4: Facilitate human interaction and offer guidance. During development of any VA tool, it is key to decide on concrete visual representations and interaction techniques between multiple coordinated views. It is not uncommon to find gaps between visualization design guidelines and their applicability in implemented ...
G5: Reveal and reduce cognitive biases. Visualizations should be carefully chosen in order to reduce cognitive biases. Cognitive bias is, in simple terms, a human judgment that drifts away from the actual information that should be conveyed by a visualization, i.e., it “involves a deviation from reality that is predic...
T5: Inspect the same view with alternative techniques and visualizations. To help avoid the appearance of cognitive biases, alternative interaction methods and visual representations of the same data from another perspective should be offered to the user (G5).
Thus, it is considered an iterative process: the expert might start with the exploration of the algorithms and move to the data wrangling, or vice versa. “The former approach is even more suitable for your VA system, because you use the accuracy of the base ML models as feedback/guidance to the expert in order to understand ...
The use of visualization for ensemble learning could introduce further biases into an already unclear situation arising from the different ML models involved. Thus, the thorough selection of both interaction techniques and visual representations that highlight and potentially overcome any cognitive biases is a major...
A
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))$...
$\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}$.
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
$p(v,[013])=p(v,[313])=p(v,[113])=1$. Similarly, when $f=[112]$,
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
D
To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML: Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor...
Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks differ from each other. We shuffle the samples and randomly divide them into tasks to construct a setting in which tasks are similar to each other. For a fair comparison, each task in this setting also has 120 and 1200 utterances o...
Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2). When the data quantity i...
To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML: Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor...
Model-Agnostic Meta-Learning (MAML) [Finn et al., 2017] is one of the most popular meta-learning methods. It is trained on plenty of tasks (i.e., small data sets) to obtain a parameter initialization that is easy to adapt to target tasks with a few samples. As a model-agnostic framework, MAML has been successfully employed in d...
B
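Because several rows above revolve around MAML's inner/outer loop, a compact sketch may clarify the control flow. This is a first-order variant (second-order terms dropped) on a toy functional linear model; the task format, (support, query) pairs of (x, y) batches, and all hyper-parameters are assumptions for illustration.

```python
import torch

def forward(params, x):
    # Toy functional model: linear regression with an explicit parameter list.
    w, b = params
    return x @ w + b

def task_loss(params, batch):
    x, y = batch
    return ((forward(params, x) - y) ** 2).mean()

def maml_step(params, tasks, inner_lr=0.01, outer_lr=0.001):
    """One meta-update over a batch of tasks (first-order approximation)."""
    meta_grads = [torch.zeros_like(p) for p in params]
    for support, query in tasks:
        # Inner loop: adapt the shared initialization to this task.
        grads = torch.autograd.grad(task_loss(params, support), params)
        fast = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer objective: adapted parameters evaluated on held-out data.
        q_grads = torch.autograd.grad(task_loss(fast, query), fast)
        for mg, g in zip(meta_grads, q_grads):
            mg += g / len(tasks)
    # Outer loop: move the initialization toward easy adaptability.
    with torch.no_grad():
        return [(p - outer_lr * mg).requires_grad_(True)
                for p, mg in zip(params, meta_grads)]
```

A typical call initializes params = [torch.randn(3, 1, requires_grad=True), torch.zeros(1, requires_grad=True)] and iterates over mini-batches of tasks; full MAML would additionally backpropagate through the inner update with create_graph=True.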
Although the GP-based UAV position and attitude prediction results fit the position and attitude data well, the prediction performance is affected by the UAV’s mobility. When the UAV has higher mobility, such as a more random trajectory and higher velocity, the prediction error may influence the beam tracking. The c...
In this subsection, two beam tracking schemes with different types of antenna array are illustrated by simulation results. One is the proposed DRE-covered CCA scheme, where all the t-UAVs are equipped with a CCA of size $N_{t}=64$, $M_{t}=16$...
The SEs of the two array schemes against the transmit power with $K=2$ t-UAVs are illustrated in Fig. 13. The TE-aware codeword selection uses the proposed Algorithm 2 and Algorithm 3. Serving as a reference, the minimum-beamwidth scheme always selects the minimum beamwidth, i.e., the maximum number of anten...
Without loss of generality, let us focus on the TE-aware codeword selection for the $k$-th t-UAV at the r-UAV side. The beam gain is selected as the optimization objective, and the problem of beamwidth control is translated into choosing the appropriate subarray size, which corresponds to the appropriate layer in ...
As shown in Fig. 11, the SEs of the CCA codebook scheme and the traditional codebook scheme are compared. The proposed DRE-covered CCA codebook is used in the CCA codebook scheme. In the traditional codebook scheme, a codebook without subarray partition is used. The CCA on the r-UAV is equally partitioned into $K$...
B
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument...
This will be bootstrapped to the multi-color case in later sections. Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on the left must be connected, via the unique edge relation, to every node on the ri...
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the argument...
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
The requirement that $\bar{M}|\bar{N}$ is extra big enough ensures that we have enough edges to perform the edge swapping. This completes the proof for case 2 when the assumptions (a1) and (a2) hold.
A
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe...
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
B
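The rows above analyze TD with a two-layer network approximator; at the implementation level, the update they study is the familiar semi-gradient TD(0) step sketched below. The environment interface, state dimension, and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Two-layer value network over a 4-dimensional state (dimension is arbitrary).
value = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.SGD(value.parameters(), lr=1e-3)
gamma = 0.99

def td0_update(s, r, s_next, done):
    # s, s_next: float tensors of shape (batch, 4); r, done: shape (batch, 1).
    with torch.no_grad():  # semi-gradient: the bootstrap target is not differentiated
        target = r + gamma * value(s_next) * (1.0 - done)
    loss = (value(s) - target).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```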
As the number of Transformer layers is pre-specified, the parameters of the depth-wise LSTM can either be shared across layers or be independent. Table 3 documents the importance of the capacity of the module for the hidden state computation, and sharing the module is likely to hurt its capacity. We additionally study ...
As the number of Transformer layers is pre-specified, the parameters of the depth-wise LSTM can either be shared across layers or be independent. Table 3 documents the importance of the capacity of the module for the hidden state computation, and sharing the module is likely to hurt its capacity. We additionally study ...
Table 5 shows that: 1) Sharing parameters for the computation (Equation 6) of the depth-wise LSTM hidden state significantly hampers performance, which is consistent with our conjecture. 2) Sharing parameters for the computation of gates (Equations 2, 3, 4) leads to slightly higher BLEU with fewer parameters introduce...
Directly replacing residual connections with LSTM units would introduce a large number of additional parameters and much additional computation. Given that the role of computing the LSTM hidden state is similar to that of the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the ne...
In our approach (“with depth-wise LSTM”), we used a 2-layer neural network for the computation of the LSTM hidden state (Equation 6), and for computing the LSTM gates (Equations 2, 3, 4) we shared one set of LSTM parameters across stacked encoder layers and a different shared set across decoder layers. Details are provided in...
B
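To make the depth-wise LSTM idea above concrete, here is a sketch of one cell connecting stacked Transformer layers: standard LSTM gating, with the candidate hidden state produced by a 2-layer feed-forward network standing in for the FFN sub-layer (cf. the row mentioning Equations 2, 3, 4 and 6). Dimensions and naming are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class DepthWiseLSTMCell(nn.Module):
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.gates = nn.Linear(2 * d_model, 3 * d_model)  # input/forget/output gates
        self.candidate = nn.Sequential(                   # 2-layer net in place of the FFN sub-layer
            nn.Linear(2 * d_model, d_ff), nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x, h, c):
        # x: attention sub-layer output of the current Transformer layer;
        # h, c: hidden and cell state carried across the layer depth.
        z = torch.cat([x, h], dim=-1)
        i, f, o = torch.sigmoid(self.gates(z)).chunk(3, dim=-1)
        c = f * c + i * torch.tanh(self.candidate(z))
        h = o * torch.tanh(c)
        return h, c
```

Sharing one such cell across all encoder layers (as in the ablation above) simply means calling the same module instance at every depth.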
commute. As $I$ is nonempty, consider some $i\in I$; we have $g_{i}=\mathrm{id}_{i}\circ g=\mathrm{id}_{i}\circ g^{\prime}$...
uniqueness in $\mathbf{Top}$. This shows that $\{f_{i}\colon X\to X_{i}\}_{i\in I}$...
that the identity map $\mathrm{id}_{i}\colon X\to Y_{i}$ is a spectral map for all $i\in I$.
in $\mathbf{Top}$. Since the maps $\mathrm{id}_{i}\colon X\to X_{i}$ are spectral, Lemma 7.2
Assume that $\{g_{i}\colon Z\to Y_{i}\}_{i\in I}$ is a collection o...
C
The proposed learning representation offers three unique advantages. First, the ordinal distortion is directly perceivable from a distorted image, and it poses a more straightforward estimation problem than the implicit metric regression. As we can observe, the farther a pixel is from the principal point, the l...
Accurately estimating the distortion parameters derived from a specific camera is a crucial step in distortion rectification. However, two main limitations make learning the distortion parameters challenging. (i) The distortion parameters are not observable and are hard to learn from a single distorted image, such as...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even when the amount of training data is only 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
As listed in Table II, our approach significantly outperforms the compared approaches on all metrics, achieving the highest PSNR and SSIM as well as the lowest MDLD. Specifically, compared with the traditional methods [23, 24] based on hand-crafted features, our approach overcomes the scene l...
Second, the ordinal distortion is homogeneous as all its elements share a similar magnitude and description. Therefore, the imbalanced optimization problem no longer exists during the training process, and we do not need to focus on the cumbersome factor-balancing task anymore. Compared to the distortion parameters wi...
D
Apart from these empirical findings, there have been some theoretical studies on large-batch training. For example, the convergence analyses of LARS have been reported in [34]. The work in [37] analyzed the inconsistency bias in decentralized momentum SGD and proposed DecentLaM for decentralized large-batch training.
Figure 2 shows the learning curves of the five methods. We can observe that in small-batch training, SNGM and the other large-batch training methods achieve performance similar to MSGD in terms of training loss and test accuracy. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
We do not use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy, which is the default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
Furthermore, researchers in [19] argued that the extrapolation technique is suitable for large-batch training and proposed EXTRAP-SGD. However, experimental implementations of these methods still require additional training tricks, such as warm-up, which may make the results inconsistent with the theory.
Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33] proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11]
C
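The rows above compare large-batch optimizers but give no update rule. As a hedged illustration only, here is one plausible form of a stochastic normalized gradient step with momentum in the spirit of SNGM; it is not necessarily the paper's exact update, and all hyper-parameters are placeholders.

```python
import torch

@torch.no_grad()
def normalized_momentum_step(params, momenta, lr=0.1, beta=0.9, eps=1e-12):
    # Assumes .grad fields are populated by a prior loss.backward() call;
    # momenta should be initialized as [torch.zeros_like(p) for p in params].
    norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params)) + eps
    for p, m in zip(params, momenta):
        m.mul_(beta).add_(p.grad / norm)   # momentum on the normalized gradient
        p.add_(m, alpha=-lr)
```

Normalizing by the global gradient norm bounds the step length, which is one common rationale for such methods in large-batch regimes.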
Given a newly arriving scenario $A$, we can set $(H_{A},\pi^{A})\leftarrow\textsc{GreedyCluster}(A,R,-R)$,...
5-approximation for homogeneous 2S-MuSup-Poly, with $|\mathcal{S}|\leq 2^{m}$ and runtime $\operatorname{poly}(n,m,\Lambda)$.
There is a polynomial-time 3-approximation for homogeneous RW-MatSup. There is a 3-approximation algorithm for RW-MuSup, with runtime $\operatorname{poly}(n,m,\Lambda)$.
We now describe a generic method of transforming a given $\mathcal{P}$-Poly problem into a single-stage deterministic robust outlier problem. This will give us a 5-approximation algorithm for homogeneous 2S-MuSup and 2S-MatSup instances nearly for free; in the next section, we also use it to obtain our 11-a...
In this section we tackle the simplest problem setting, designing an efficiently generalizable 3-approximation algorithm for homogeneous 2S-Sup-Poly. To begin, we are given a list of scenarios $Q$ together with their probabilities $p_{A}$,...
B
Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic convex optimization, in which networked local optimizers cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
That is, the mean square error at the next time can be controlled by that at the previous time together with the consensus error. However, this cannot be obtained for the case of linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded and the inequality (28) in [15] does n...
As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between the local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error, which leads the nonnegative supermartingale converg...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi...
(Lemma 3.1). To this end, we first estimate an upper bound on the mean square growth rate of the local optimizers’ states (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (...
C
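The iteration analyzed in the rows above combines a consensus (mixing) step with a local subgradient step. A generic sketch follows, assuming a row-stochastic mixing matrix W derived from the digraph; the interface and step-size rule are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def distributed_subgradient_step(X, W, subgrads, step):
    # X: (n_agents, dim) stacked local states;
    # W: (n_agents, n_agents) row-stochastic mixing matrix of the digraph;
    # subgrads: (n_agents, dim) local subgradients evaluated at X.
    # Each agent averages its in-neighbors' states, then takes a local step.
    return W @ X - step * subgrads
```

A diminishing step size such as step = c / (t + 1) is the standard choice for convergence of subgradient methods.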
Compared to generalization, the bucketization technique [33, 18] maintains excellent information utility because it preserves all the original QI values. However, most existing approaches cannot prevent identity disclosure, and the existence of individuals in the published table is likely to be disclosed [27]. Furthermore, t...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
Note that the application scenarios of differential privacy and of the $k$-anonymity family of models are different. Differential privacy adds random noise to the answers of queries issued by recipients rather than publishing microdata, while the approaches of the $k$-anonymity family sanitize the origi...
In recent years, massive amounts of individuals’ digital information have been collected by numerous organizations. The data holders, also known as curators, use the data for data mining tasks; meanwhile, they also exchange or publish microdata for further comprehensive research. However, the publication of microdata poses cr...
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
D
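Since one row above cites randomized response as the core local-DP mechanism, here is the textbook one-bit version with its unbiased frequency estimator; the truth probability p = 0.75 is an arbitrary example.

```python
import random

def randomized_response(bit, p=0.75):
    # Report the true bit with probability p, otherwise a uniform random bit.
    return bit if random.random() < p else random.randint(0, 1)

def estimate_frequency(reports, p=0.75):
    # E[report] = p * f + (1 - p) / 2, so invert the mechanism to debias.
    observed = sum(reports) / len(reports)
    return (observed - (1 - p) * 0.5) / p
```

Each user only ever reveals a noisy bit, so the curator never sees the true value, which is precisely what makes the mechanism suitable for untrusted collectors.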
In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both the box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. A mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Due to the limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger masks to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (20...
Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62....
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
A
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
We denote by $\varepsilon_{i}\colon\{-1,1\}^{n}\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_{i}(\delta_{1},\ldots,\delta_{n})=\delta_{i}$...
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
B
The proof idea is similar to that of Theorem 1. The only difference is that within each piecewise-stationary segment, we use the hard instance constructed by Zhou et al. (2021); Hu et al. (2022) for inhomogeneous linear MDPs. Optimizing the length of each piecewise-stationary segment $N$ and the variation magni...
The rest of the paper is organized as follows. Section 2 presents our problem definition. Section 3 establishes the minimax regret lower bound for nonstationary linear MDPs. Section 4 and Section 5 present our algorithms LSVI-UCB-Restart, Ada-LSVI-UCB-Restart and their dynamic regret bounds. Section 6 shows our experi...
In this section, we derive minimax regret lower bounds for nonstationary linear MDPs in both the inhomogeneous and homogeneous settings, which quantify the fundamental difficulty as measured by the dynamic regret in nonstationary linear MDPs. More specifically, we consider the inhomogeneous setting in this paper, where the t...
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inh...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into the LSVI-UCB algorithm (Jin et al., 202...
C
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio...
The survey was written in English and made available to anyone with the hyperlink. Participation was fully voluntary. For dissemination, various channels were employed including a mailing list of students from a local Singapore university, an informal Telegram supergroup joined by students, alumni, and faculty of the ...
In this study, we seek to answer these research questions. RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collectio...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
75 of the 104 responses fulfilled the criterion that the respondent was currently based in Singapore. This set was extracted for further analysis and will be henceforth referred to as ‘SG-75’. The details on the participant demographics of SG-75 are shown in Table 1. From SG-75, two more subsets were formed via ...
A
Conventional KG embedding approaches broadly fall into two types: Triplet-based and GNN-based methods. Triplet-based methods include translational methods [11, 27, 28], semantic matching methods [29, 30, 31, 32], and neural methods [33, 34]. For a detailed understanding, interested readers can refer to surveys [3, 35, ...
The existing methods for KG embedding and word embedding exhibit even more similarities. As shown in Figure 1, the KG comprises three triplets conveying similar information to the example sentence. Triplet-based KG embedding models like TransE [11] transform the embedding of each subject entity and its relation into a ...
GNN-based methods [13, 37, 38, 39, 40, 41, 42] introduce relation-specific composition operations to combine neighbors and their corresponding relations before performing neighborhood aggregation. They usually leverage existing GNN models, such as GCN and GAT [43, 44], to aggregate an entity’s neighbors. It is worth no...
The proposed DAN is compatible with most existing GNN-based methods, allowing these methods to leverage our DAN as the GNN module for entity encoding. Furthermore, the computational cost is comparable to that of existing methods. Therefore, we offer an efficient and general GNN architecture for KG embedding.
Although GCN and GAT are generally regarded as inductive models for graph representation learning, our analysis in previous sections suggests their limited applicability to relational KG embedding. To further validate this, we compare the performance of decentRL with AliNet and GAT on datasets containing new enti...
B
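The TransE intuition described above, that the subject embedding translated by the relation should land near the object embedding, reduces to a few lines; the L1 norm and margin value below are illustrative choices.

```python
import torch

def transe_score(h, r, t):
    # Smaller distance = more plausible triplet (head, relation, tail).
    return torch.norm(h + r - t, p=1, dim=-1)

def transe_margin_loss(pos, neg, margin=1.0):
    # Rank true triplets above corrupted ones by at least the margin;
    # pos and neg are (h, r, t) tuples of embedding tensors.
    return torch.clamp(margin + transe_score(*pos) - transe_score(*neg),
                       min=0).mean()
```

Corrupted triplets are typically produced by replacing the head or tail of a true triplet with a random entity.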
9: Take $t_{\rm vdm}$ stochastic gradient ascent steps to maximize $L_{\rm VDM}$ and update the parameters $(\varphi,\psi,\theta)$...
Upon fitting VDM, we propose an intrinsic reward based on an upper bound of the negative log-likelihood, and conduct self-supervised exploration based on the proposed intrinsic reward. We evaluate the proposed method on several challenging image-based tasks, including 1) Atari games, 2) Atari games with sticky actions, which...
To validate the effectiveness of our method, we compare the proposed method with the following self-supervised exploration baselines. Specifically, we conduct experiments to compare the following methods: (i) VDM. The proposed self-supervised exploration method. (ii) ICM [10]. ICM first builds an inverse dynamics mode...
In this section, we conduct experiments to compare the proposed VDM with several state-of-the-art model-based self-supervised exploration approaches. We first describe the experimental setup and implementation detail. Then, we compare the proposed method with baselines in several challenging image-based RL tasks. The ...
In this section, we introduce VDM for exploration. In Section III-A, we introduce the theory of VDM based on conditional variational inference. In Section III-B, we present the details of the optimization process. In Section III-C, we analyze the results of VDM on ‘Noisy-MNIST’, which models the multimodality and stoch...
C