Dataset columns: context (string, 250–4.99k chars), A (string, 250–5.11k chars), B (string, 250–3.8k chars), C (string, 250–8.2k chars), D (string, 250–3.9k chars), label (string, 4 classes).
that adds the results of $1+(n-m)/2$ Gaussian integrations for moments $x^{D-1+n-2s}$. The disadvantage
$x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x)$ …
Gaussian integration rules for integrals $\int_{0}^{1}x^{D-1}R_{n}^{m}(x)f(x)\,dx$ …
rules for the lifted integrals $\int_{0}^{1}x^{D-1}[1+R_{n}^{m}(x)]f(x)\,dx$ …
the orthogonality relation $\int_{0}^{1}x^{D-1}R_{n}^{m}(x)R_{n^{\prime}}^{m}(x)\,dx\propto\delta_{n,n^{\prime}}$.
C
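The moment integrals behind the Gaussian rules in the excerpts above are elementary, $\int_0^1 x^{D-1+n-2s}\,dx = 1/(D+n-2s)$. A minimal sketch checking this with a Gauss–Legendre rule; the values of $D$, $n$, $s$ are illustrative choices, not taken from the source:

```python
# Hedged sketch: verify the elementary moment integral that feeds the
# Gaussian rules, int_0^1 x^(D-1+n-2s) dx = 1/(D+n-2s).
# D, n, s below are illustrative, not values from the source.
import numpy as np

def moment(D, n, s, num=20):
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
    x, w = np.polynomial.legendre.leggauss(num)
    x, w = 0.5 * (x + 1.0), 0.5 * w
    return float(np.sum(w * x ** (D - 1 + n - 2 * s)))

D, n, s = 3, 4, 1
print(moment(D, n, s), 1.0 / (D + n - 2 * s))  # both equal 1/5
```

A 20-node rule is exact for polynomial integrands of degree up to 39, so the two numbers agree to machine precision.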
This is achieved by using specific upper and lower triangular transvections to avoid using a discrete logarithm oracle. Building on Lemma 3.2 we construct transvections which are upper triangular matrices. Here, as per Section 3.1, $\omega$ denotes a primitive element of $\mathbb{F}_{q}$ …
Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$. Then the elementary column operations can be used to reduce the other entries in row $r$ to zero.
The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left...
The idea is to eliminate all other entries in the $c$th column, namely to apply elementary row operations to make the entries in rows $i=r+1,\ldots,d$ of column $c$ equal to zero. Specifically, $g$ is multiplied on the left by the transvec…
Let $i\in\{1,\dotsc,d-1\}$. Getting the diagonal entry of $h$ at position $(i,i)$ to $1$ requires the following number of operations. We start by adding column $i+1$ to column $i$ as in Line 5. We alre…
B
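The row/column reduction to the identity described above can be sketched as follows. This is a simplified, hedged illustration: it allows row scaling (so it uses more than the transvections of the excerpt's construction) and assumes the input matrix is invertible over $\mathbb{F}_p$ with $p$ prime:

```python
# Hedged sketch: reduce an invertible matrix over F_p to the identity
# with elementary row and column operations, in the spirit of the
# elementary-divisor-style reduction described above. Assumes p prime
# and g invertible mod p; row scaling is used as a simplification.
import numpy as np

def reduce_to_identity(g, p):
    g = g.copy() % p
    d = g.shape[0]
    for c in range(d):
        # find a row at or below c with a nonzero entry in column c
        r = next(i for i in range(c, d) if g[i, c] % p)
        if r != c:  # add row r to row c to bring a nonzero entry to (c, c)
            g[c] = (g[c] + g[r]) % p
        inv = pow(int(g[c, c]), p - 2, p)   # Fermat inverse in F_p
        g[c] = (g[c] * inv) % p             # scale the pivot to 1
        for i in range(d):
            if i != c and g[i, c]:
                g[i] = (g[i] - g[i, c] * g[c]) % p        # clear column c
        for j in range(d):
            if j != c and g[c, j]:
                g[:, j] = (g[:, j] - g[c, j] * g[:, c]) % p  # clear row c
    return g

print(reduce_to_identity(np.array([[1, 2], [3, 4]]), 5))
```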
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85…
Of course, the numerical scheme and the estimates developed in Section 3.1 still hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case our method is similar to that of [MR3591945], with some differences. First we consider that T…
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput...
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov…
The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis…
B
We remark that the previously best known algorithms for finding the minimum area / perimeter all-flush triangle take nearly linear time [6, 1, 2, 3, 23], that is, $O(n\log n)$ or $O(n\log^{2}n)$ …
in the Rotate-and-Kill process, and we are at the beginning of another iteration $(b^{\prime},c^{\prime})$ satisfying (2).
Then, during the Rotate-and-Kill process, the pair $(e_{b},e_{c})$ will meet all pairs that are not DEAD, which implies that the algorithm finds the minimum perimeter (all-…
The inclusion / circumscribing problems usually admit the property that the set of locally optimal solutions is pairwise interleaving [6]. Once this property is admitted and $k=3$, we show that an iteration process (also referred to as Rotate-and-Kill) can be applied for searching all the locally optim…
Using a Rotate-and-Kill process (which is shown in Algorithm 5), we find out all the edge pairs and vertex pairs in $\mathsf{U}_{r,s,t}$ that are not G-dead.
C
Single Tweet Model Settings. For the evaluation, we shuffle the 180 selected events and split them into 10 subsets which are used for 10-fold cross-validation (we make sure to include near-balanced folds in our shuffle). We implement the 3 non-neural network models with Scikit-learn (scikit-learn.org). Furthermore, ne…
Single Tweet Classification Results. The experimental results are shown in Table 2. The best performance is achieved by the CNN+LSTM model with a good accuracy of 81.19%. The non-neural network model with the highest accuracy is RF. However, it reaches only 64.87% accuracy and the other two non-neural models are eve…
We tested all models by using 10-fold cross validation with the same shuffled sequence. The results of these experiments are shown in Table 4. Our proposed model (Ours) is the time series model learned with Random Forest including all ensemble features; TS-SVM …
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure, when building our approach for early rumor detection with our extended dataset, and we provide a deep analys...
As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore fluctuates a bit afte…
A
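The evaluation protocol in the excerpts above (180 events, one fixed shuffle, 10 folds reused for every model) can be sketched with NumPy alone; the classifiers themselves (RF, CNN+LSTM, etc.) are out of scope here, and `event_ids` is a placeholder for the real event list:

```python
# Hedged sketch of the evaluation protocol: shuffle the 180 events once,
# split into 10 folds, and reuse the same folds for every model.
import numpy as np

rng = np.random.default_rng(0)            # fixed seed -> same shuffled sequence
event_ids = rng.permutation(180)          # placeholder ids for the 180 events
folds = np.array_split(event_ids, 10)     # 10 folds of 18 events each

for test_fold in folds:
    train_ids = np.setdiff1d(event_ids, test_fold)
    # fit each model on train_ids, evaluate on test_fold (model code omitted)
print([len(f) for f in folds])
```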
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the K-class SVM:
In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated
where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen…
C
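A small numerical illustration of the directional-convergence statements above, on a toy binary analogue: gradient descent with exponential loss on two separable points. The data and step size are illustrative; by symmetry the normalized iterate here aligns with the max-margin direction $(1,1)/\sqrt{2}$:

```python
# Hedged toy illustration: gradient descent on exponential loss over
# separable data; the iterate direction w(t)/||w(t)|| slowly aligns
# with the max-margin (hard SVM) direction, as discussed above.
import numpy as np

X = np.array([[2.0, 1.0], [1.0, 2.0]])   # two positive examples (toy data)
y = np.array([1.0, 1.0])
w = np.zeros(2)
for t in range(20000):
    margins = y * (X @ w)
    grad = -(X * (y * np.exp(-margins))[:, None]).sum(axis=0)
    w -= 0.01 * grad
direction = w / np.linalg.norm(w)
print(direction)  # approaches (1, 1)/sqrt(2), the max-margin direction
```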
The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi…
In the lower part of the pipeline, we extract features from tweets and combine them with the CreditScore to construct the feature vector in a time series structure called the Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. news (non-rumor) classification.
The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi…
The effective cascaded model that engages both low and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402). The model uses the time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to…
As observed in (madetecting; ma2015detect), rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in (ma2015detect). W…
A
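A minimal, hedged sketch of a DSTS-style feature representation as described in this row: tweets are bucketed into equal time intervals and per-interval feature means are concatenated. The interval count, the zeros-for-empty-intervals convention, and the function name are illustrative, not the papers' exact formulation:

```python
# Hedged sketch of a time-series (DSTS-style) feature vector: bucket an
# event's tweets into equal time intervals and concatenate the
# per-interval feature means into one vector.
import numpy as np

def dsts_vector(timestamps, features, n_intervals):
    """timestamps: (N,) hours since event start; features: (N, d)."""
    edges = np.linspace(0.0, timestamps.max() + 1e-9, n_intervals + 1)
    parts = []
    for k in range(n_intervals):
        mask = (timestamps >= edges[k]) & (timestamps < edges[k + 1])
        # empty intervals contribute zeros (one simple convention)
        parts.append(features[mask].mean(axis=0) if mask.any()
                     else np.zeros(features.shape[1]))
    return np.concatenate(parts)

ts = np.array([0.0, 1.0, 2.0, 3.0])
feats = np.arange(8.0).reshape(4, 2)
print(dsts_vector(ts, feats, 2))
```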
$score(\bar{a})=\sum_{m\in M}P(\mathcal{C}_{k}\mid e,t)\,P(\mathcal{T}_{\ldots}\mid\mathcal{C}_{k})\,\mathsf{f}^{*}_{m}(\bar{a})$ …
to add additional features from $\mathcal{M}^{1}$. The feature vector of $\mathcal{M}_{LR}^{2}$ consists of …
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res…
We propose two sets of features, namely, (1) salience features (taking into account the general importance of candidate aspects), mainly mined from Wikipedia, and (2) short-term interest features (capturing a trend or timely change), mined from the query logs. In addition, we also leverage click-flow relatednes…
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall…
C
where $\Theta_{t_{0}:t_{1},a}=\left[\theta_{t_{0},a},\ldots\right]$ …
the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ...
The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015].
—i.e., the dependence on past samples decays exponentially, and is negligible after a certain lag— one can establish uniform-in-time convergence of SMC methods for functions that depend only on recent states, see [Kantas et al., 2015] and references therein.
More broadly, one can establish uniform-in-time convergence for path functionals that depend only on recent states, as the Monte Carlo error of $p_{M}(\theta_{t-\tau:t}\mid\mathcal{H}_{1:t})$ …
C
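In the spirit of the SMC discussion above, a minimal bootstrap-filter step for a scalar, random-walk reward parameter. All names, noise scales, and the Gaussian reward likelihood are illustrative assumptions, not the cited papers' models:

```python
# Hedged sketch of one bootstrap SMC step for a time-varying parameter
# theta_t (random walk) with Gaussian rewards; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
M = 500                                  # number of particles
particles = rng.normal(0.0, 1.0, M)      # sample from p(theta_0)

def smc_step(particles, reward, sigma_theta=0.1, sigma_r=0.5):
    # propagate through the transition model theta_t = theta_{t-1} + noise
    particles = particles + rng.normal(0.0, sigma_theta, particles.size)
    # weight by the reward likelihood, then resample (bootstrap filter)
    w = np.exp(-0.5 * ((reward - particles) / sigma_r) ** 2)
    w /= w.sum()
    idx = rng.choice(particles.size, particles.size, p=w)
    return particles[idx]

for reward in [0.9, 1.1, 1.0]:           # toy stream of observed rewards
    particles = smc_step(particles, reward)
print(particles.mean())                   # posterior mean moves toward ~1
```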
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t…
This very low threshold for now serves to measure very basic movements and to check for validity of the data. Patients 11 and 14 are the most active, both having a median of more than 50 active intervals per day (corresponding to more than 8 hours of activity).
D
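The activity measure described above (10-minute intervals containing at least 10 steps) can be expressed directly; the per-minute step counts below are synthetic stand-ins for Google Fit data:

```python
# Hedged sketch: count "active" 10-minute intervals, i.e. intervals with
# at least 10 tracked steps, over one day of synthetic per-minute counts.
import numpy as np

steps_per_minute = np.random.default_rng(2).poisson(1.0, 1440)  # one day
intervals = steps_per_minute.reshape(-1, 10).sum(axis=1)        # 144 intervals
active = int((intervals >= 10).sum())
print(active, "active intervals out of", intervals.size)
```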
Table 7: A list of the four image categories from the CAT2000 validation set that showed the largest average improvement by the ASPP architecture based on the cumulative rank across a subset of weakly correlated evaluation measures. Arrows indicate whether the metrics assess similarity
The categorical organization of the CAT2000 database also allowed us to quantify the improvements by the ASPP module with respect to individual image classes. Table 7 lists the four categories that benefited the most from multi-scale information across the subset of evaluation metrics on the validation set: Noisy, Sat...
Table 7: A list of the four image categories from the CAT2000 validation set that showed the largest average improvement by the ASPP architecture based on the cumulative rank across a subset of weakly correlated evaluation measures. Arrows indicate whether the metrics assess similarity
Figure 3: A visualization of four example images from the CAT2000 validation set with the corresponding fixation heat maps, our best model predictions, and estimated maps based on the ablated network. The qualitative results indicate that multi-scale information augmented with global context enables a more accurate es...
To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that result…
A
Many existing algorithms constructing path decompositions are of theoretical interest only, and this disadvantage carries over to the possible algorithms computing the locality number or cutwidth (see Section 6) based on them. However, the reduction of 5.7 is also applicable in a purely practical scenario, since any ki...
In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). In Section 3, we discuss the concept of the locality number with some examples and some word-combinatorial considerations. The purpose of this section is to develop a better under…
The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local...
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection....
In this section, we introduce polynomial-time reductions from the problem of computing the locality number of a word to the problem of computing the cutwidth of a graph, and vice versa. This establishes a close relationship between these two problems (and their corresponding parameters), which lets us derive several u...
C
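For intuition about the locality number discussed in this row, here is a brute-force sketch: try every marking order of the alphabet and track the maximum number of marked blocks, following the standard definition of $k$-locality. It is exponential in the alphabet size, so it is practical only for tiny words:

```python
# Hedged brute-force sketch of the locality number: minimize, over all
# marking orders of the alphabet, the worst number of marked blocks
# that appears while marking the word symbol by symbol.
from itertools import permutations

def locality_number(word):
    alphabet = sorted(set(word))
    best = len(word)
    for order in permutations(alphabet):
        marked = [False] * len(word)
        worst = 0
        for sym in order:
            for i, c in enumerate(word):
                if c == sym:
                    marked[i] = True
            # count maximal blocks of consecutive marked positions
            blocks = sum(1 for i, m in enumerate(marked)
                         if m and (i == 0 or not marked[i - 1]))
            worst = max(worst, blocks)
        best = min(best, worst)
    return best

print(locality_number("aba"), locality_number("abab"))
```

For "aba", marking b first keeps a single block throughout, so the word is 1-local; for "abab" every order produces two blocks at some point.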
In [148] the authors adopt a 3D multi-scale CNN to identify pixels that belong to the RV. The network has two convolutional pathways and their inputs are centered at the same image location, but the second segment is extracted from a down-sampled version of the image.
In their article Hong et al.[201] trained a DBN using image patches for the detection, segmentation and severity classification of Abdominal Aortic Aneurysm region in CT images. Liu et al.[202] used an FCN with twelve layers for left atrium segmentation in 3D CT volumes and then refined the segmentation results of the ...
In their article Tran et al.[142] trained a four layer FCN model for LV/RV segmentation on SUN09, STA11. They compared previous state-of-the-art methods along with two initializations of their model: a fine-tuned version of their model using STA11 and a Xavier initialized model with the former performing best in almost...
In their article Acharya et al.[85] trained a four layer CNN on AFDB, MITDB and CREI, to classify between normal, AF, atrial flutter and ventricular fibrillation. Without detecting the QRS they achieved comparable performance with previous state-of-the-art methods that were based on R-peak detection and feature enginee...
Compared with vanilla conv-deconv and u-net, this model performs better by an average of 5% in terms of Dice. Patravali et al. [140] trained a model based on u-net using Dice combined with cross entropy as a metric for LV/RV and myocardium segmentation.
B
Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using...
Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster...
Oh et al. (2015) and Chiappa et al. (2017) show that learning predictive models of Atari 2600 environments is possible using appropriately chosen deep learning architectures. Impressively, in some cases the predictions maintain low $L_{2}$ error over timespans…
Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using...
have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control. Our video models of Atari en...
B
Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification. Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke.
Figure 1: High level overview of a feed-forward pass of the combined methods. $x_{i}$ is the input, $m$ is the Signal2Image module, $b_{d}$ is the 1D or 2D architecture ‘base …
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of a S2I is whether it consists of trainable para...
The spectrogram S2I results are contrary to the expectation that the interpretable time-frequency representation would help in finding good features for classification. We hypothesize that the spectrogram S2I was hindered by its lack of trainable parameters.
The names of the classes are depicted at the right along with the predictions for this example signal. The image between $m$ and $b_{d}$ depicts the output of the one layer CNN Signal2Image module, while the ‘signal as image’ and spectrogram h…
B
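A minimal, hedged sketch of a non-trainable Signal2Image module of the kind discussed in this row: a magnitude spectrogram mapping a 1D signal to a 2D "image". The window/hop sizes and the toy sine signal are illustrative:

```python
# Hedged sketch of a non-trainable S2I module: turn a 1D signal into a
# 2D image via a magnitude spectrogram, using only NumPy.
import numpy as np

def spectrogram_s2i(signal, win=64, hop=32):
    frames = np.stack([signal[i:i + win] * np.hanning(win)
                       for i in range(0, len(signal) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq, time) image

x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 512))  # toy EEG-like signal
img = spectrogram_s2i(x)
print(img.shape)
```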
The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To assure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial along with their first and second derivatives. The trajectory design took into account six constra...
The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful desi...
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas...
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ...
The whole-body climbing gait involves utilizing the entire body movement of the robot, swaying forwards and backwards to enlarge the stability margins before initiating gradual leg movement to overcome a step. This technique optimizes stability during the climbing process. To complement this, the rear-body climbing ga...
D
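The fifth-order joint trajectories with six constraints mentioned above can be sketched as a quintic polynomial fit: position, velocity, and acceleration are prescribed at both ends, giving six equations for six coefficients. Boundary values below are illustrative:

```python
# Hedged sketch of a fifth-order (quintic) trajectory: six boundary
# constraints (position, velocity, acceleration at start and end)
# determine the six polynomial coefficients.
import numpy as np

def quintic(q0, qf, T, v0=0.0, vf=0.0, a0=0.0, af=0.0):
    A = np.array([
        [0, 0, 0, 0, 0, 1],                       # position at t = 0
        [T**5, T**4, T**3, T**2, T, 1],           # position at t = T
        [0, 0, 0, 0, 1, 0],                       # velocity at t = 0
        [5*T**4, 4*T**3, 3*T**2, 2*T, 1, 0],      # velocity at t = T
        [0, 0, 0, 2, 0, 0],                       # acceleration at t = 0
        [20*T**3, 12*T**2, 6*T, 2, 0, 0],         # acceleration at t = T
    ])
    b = np.array([q0, qf, v0, vf, a0, af])
    return np.linalg.solve(A, b)  # coefficients, highest power first

c = quintic(0.0, 1.0, 2.0)
print(np.polyval(c, 2.0))  # reaches the target position 1.0 at t = T
```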
Our solution uses an algorithm introduced by Boyar et al. [12] which achieves a competitive ratio of 1.5 using $O(\log n)$ bits of advice. We refer to this algorithm as Reserve-Critical in this paper and describe it briefly. See Figure 2 for an illustration.
Formally, on the arrival of a critical item, the algorithm places it in a critical bin, opening a new one if necessary. Each arriving tiny item $x$ is packed in the first critical bin which has enough space, with the restriction that the tiny items do not exceed a fraction 1/3 in these bins. If this fails, the…
bins include two items of weight 1/2 (except possibly the last one) which gives a total weight of 1 for the bin. Critical bins all include a critical item of weight 1. So, if $w_{\ell}$, $w_{s}$ …
Intuitively, Rrc works similarly to Reserved-Critical except that it might not open as many critical bins as suggested by the advice. The algorithm is more “conservative” in the sense that it does not keep two thirds of many (critical) bins open for critical items that might never arrive. The smaller the value of $\alpha$ …
The algorithm classifies items according to their size. Tiny items have their size in the range $(0,1/3]$, small items in $(1/3,1/2]$, critical items in $(1/2,2/3]$, and large items in $(2/3,1]$. In addition, the algorithm…
D
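The size classification above is direct to express in code, with the interval boundaries exactly as stated (half-open on the left):

```python
# Hedged sketch of the item classification: tiny (0,1/3], small (1/3,1/2],
# critical (1/2,2/3], large (2/3,1].
def classify(x):
    assert 0 < x <= 1
    if x <= 1 / 3:
        return "tiny"
    if x <= 1 / 2:
        return "small"
    if x <= 2 / 3:
        return "critical"
    return "large"

print([classify(x) for x in (0.2, 0.4, 0.6, 0.9)])
```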
In the context of online environments such as social media, an ADD scenario that is gaining interest, as we will see in Subsection 2.2, is the one known as early depression detection (EDD). In EDD the task is, given users’ data streams, to detect possibly depressive people as soon and as accurately as possible.
Algorithms capable of dealing with this scenario are said to support incremental learning and/or incremental classification. In the present article we will focus on incremental classification since, so far, it is the only EDD scenario we have data to compare to, as we will see in Subsection 2.2.
However, EDD poses really challenging aspects to the “standard” machine learning field. As with any other ERD task, we can identify at least three of these key aspects: incremental classification of sequential data, support for early classification and explainability (having the ability to explain its ration…
In this article, we proposed SS3, a novel text classifier that can be used as a framework to build systems for early risk detection (ERD). The SS3’s design aims at dealing, in an integrated manner, with three key challenging aspects of ERD: incremental classification of sequential data, support for early classification...
At this point, it should be clear that any attempt to address ERD problems, in a realistic fashion, should take into account 3 key requirements: incremental classification, support for early classification, and explainability. Unfortunately, to the best of our knowledge, there is no text classifier able to support thes...
B
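Two of the ERD requirements named in this row, incremental classification and early classification, can be sketched as a single evidence-accumulating loop. The `score_fn` and threshold policy below are illustrative stand-ins, not SS3's actual scoring:

```python
# Hedged sketch of incremental, early classification: process a user's
# writings one chunk at a time, accumulate evidence, and decide as soon
# as the accumulated score crosses a policy threshold.
def early_decision(chunks, score_fn, threshold):
    total = 0.0
    for t, chunk in enumerate(chunks, start=1):
        total += score_fn(chunk)       # incremental: state is just `total`
        if total >= threshold:
            return "positive", t       # early decision after t chunks
    return "negative", len(chunks)

# toy usage: score = number of flagged words per chunk (illustrative)
flagged = {"sad", "alone"}
score = lambda chunk: sum(w in flagged for w in chunk.split())
print(early_decision(["feeling sad", "sad and alone today"], score, 2))
# → ('positive', 2)
```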
We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is ...
process. As for global momentum, the momentum term $-(\mathbf{w}_{t}-\mathbf{w}_{t-1})/\eta$ contains global information from all the workers. Since we are…
We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is ...
We can find that both local momentum and global momentum implementations of DMSGD are equivalent to the serial MSGD if no sparse communication is adopted. However, when it comes to adopting sparse communication, things become different. In the later sections, we will demonstrate that global momentum is better than loca...
We find that due to the momentum factor masking (mfm) in DGC (Lin et al., 2018), DGC (w/ mfm) will degenerate to DSGD rather than DMSGD if sparse communication is not adopted, while GMC will degenerate to DMSGD if sparse communication is not adopted.
D
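The local vs. global momentum contrast in this row can be illustrated on a single worker without sparsification, where the global-momentum term $-(\mathbf{w}_t-\mathbf{w}_{t-1})/\eta$ coincides with the local momentum buffer. The toy quadratic loss and hyperparameters are illustrative:

```python
# Hedged sketch: on one worker without sparse communication, the
# global-momentum term -(w_t - w_{t-1})/eta recovers the local MSGD
# momentum buffer exactly.
import numpy as np

eta, beta = 0.1, 0.9
w = np.array([1.0, -1.0])
u = np.zeros(2)
grad = lambda w: 2 * w                     # gradient of ||w||^2 (toy loss)
for _ in range(5):
    w_prev = w.copy()
    u = beta * u + grad(w)                 # local momentum buffer
    w = w - eta * u
global_term = -(w - w_prev) / eta          # equals u on a single worker
print(np.allclose(global_term, u))         # → True
```

With sparse communication the two notions diverge, which is exactly the gap the excerpts discuss.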
The Extrema-Pool indices activation function (defined at Algorithm 2) keeps only the index of the activation with the maximum absolute amplitude from each region outlined by a grid as granular as the kernel size $m^{(i)}$ and zeros out the …
The Extrema activation function (defined at Algorithm 3) detects candidate extrema using zero crossings of the first derivative, then sorts them in descending order and gradually eliminates those extrema that have less absolute amplitude than a neighboring extremum within a predefined minimum extrema distance ($med$…
The sparser an activation function is, the more it compresses, sometimes at the expense of reconstruction error. However, by visual inspection of Fig. 5 one could confirm that the learned kernels of the SAN with sparser activation maps (Extrema-Pool indices and Extrema) correspond to the reoccurring patterns in the data…
Imposing a $med$ on the extrema detection algorithm makes $\boldsymbol{\alpha}$ sparser than the previous cases and solves the problem of double extrema activations that Extrema-Pool indices have (as shown in Fig. 1). The sparsity parameter in this case …
Figure 1: Visualization of the activation maps of five activation functions (Identity, ReLU, top-k absolutes, Extrema-Pool indices and Extrema) for 1D and 2D input in the top and bottom row respectively. The 1D input to the activation functions is denoted with the continuous transparent green line using an example from...
A
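A minimal sketch of the Extrema-Pool indices activation described in this row, in the 1D case: within each grid cell of width $m$, keep only the entry with maximum absolute amplitude and zero out everything else (boundary handling for a trailing partial cell is an assumption):

```python
# Hedged sketch of the Extrema-Pool indices activation (1D): per grid
# cell of width m, keep the max-absolute-amplitude entry, zero the rest.
import numpy as np

def extrema_pool_indices(x, m):
    out = np.zeros_like(x)
    for start in range(0, len(x), m):
        cell = x[start:start + m]
        k = start + int(np.argmax(np.abs(cell)))
        out[k] = x[k]
    return out

print(extrema_pool_indices(np.array([0.1, -2.0, 0.5, 1.5, -0.2, 0.3]), 3))
```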
The learning rate of the existing algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed in [14]. However, in this algorithm, only one UAV is allowed to change its strategy per iteration based on the current game state, and then another UAV ch…
In this part, we investigate the influence of environment dynamics on the network states. With different scenarios’ dynamic degrees $\tau\in(0,\infty)$, PBLLA and SPBLLA will converge to the maximizer of the goal function with different strategy-altering probabilities. Fig. 6 presents the influence…
We organize this paper as follows. In Section II, we introduce the related works. In Section III, we first introduce the UAV’s power control in the multi-channel communication and coverage problems, then form a system model in highly dynamic scenarios. Moreover, in Section IV, we formulate our work as an aggregative ga…
We establish a multi-factor system model based on large-scale UAV networks in highly dynamic post-disaster scenarios. Considering the limitations of existing algorithms, we devise a novel algorithm that is capable of updating strategies simultaneously to fit highly dynamic environments. The main contributions of ...
We propose a novel UAV ad-hoc network model with an aggregative game, which is compatible with large-scale, highly dynamic environments in which several influences are coupled together. In the aggregative game, the interference from other UAVs can be regarded as one integral influence, which makes the model more pr...
C
nodal locations), is equal to one. This property also holds for the pyramid side functions, i.e., $\sum_{n=1}^{N_{n}}\phi_{n}(\mathbf{r})=\sum_{n=1}^{N_{n}}\psi_{n}(\mathbf{r})=1$.
$\phi_{n}(\mathbf{r})=\sum^{e_{n}}\psi_{n}^{e}(\mathbf{r})$
$u(\mathbf{r})\approx U(\mathbf{r})=\sum_{n=1}^{N_{n}}U_{n}\,\phi_{n}(\mathbf{r})$
In total, there are $K$ tilted planes defined in the solution domain, where $K=\sum_{n=1}^{N_{n}}e_{n}=3N_{e}$...
$\sum_{n=1}^{N_{n}}\psi_{n}(\mathbf{r})=1$...
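The partition-of-unity and nodal-interpolation identities above can be checked numerically; here is a 1D analogue with linear hat functions standing in for the pyramid functions (the mesh and the test function are illustrative):

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 6)        # uniform 1D mesh
h = nodes[1] - nodes[0]                 # mesh spacing

def hat(n, r):
    """Piecewise-linear nodal basis function attached to node n."""
    return np.maximum(0.0, 1.0 - np.abs(r - nodes[n]) / h)

r = np.linspace(0.0, 1.0, 101)
phi = np.array([hat(n, r) for n in range(len(nodes))])

U_n = 2.0 * nodes + 1.0                  # nodal values of u(r) = 2r + 1
U = (U_n[:, None] * phi).sum(axis=0)     # U(r) = sum_n U_n phi_n(r)
```

Because the hats sum to one at every point, any linear field is reproduced exactly by the nodal interpolant, mirroring the role of $\sum_{n}\phi_{n}(\mathbf{r})=1$ in the text.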
D
When using the framework, one can further require reflexivity on the comparability functions, i.e., $f(x_{A},x_{A})=1_{A}$...
$f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\ v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$
Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality) by $h_{A}$...
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
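A small Python sketch of such a comparability function (the numeric stand-ins used for the lattice values $a$ and $b$ are hypothetical):

```python
NULL = None  # stand-in for the null value in the data

def comparability(u, v, a=0.5, b=0.5):
    """Piecewise comparability function from the text: reflexivity is
    relaxed on null, so comparing two missing values yields b, not 1."""
    if u == v and u is not NULL:
        return 1        # equal, present values
    if u is not NULL and v is not NULL:
        return a        # both present but different
    if u is NULL and v is NULL:
        return b        # both missing
    return 0            # exactly one value missing
```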
D
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to assess the effect of Dropout on variance and on the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to assess the effect of Dropout, because in such an environment the optim...
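For reference, the Dropout mechanism itself is standard; a minimal numpy sketch of inverted dropout (the Dropout-DQN specifics, such as the network architecture, are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(x, p, train=True):
    """Zero each unit with probability p during training and rescale
    the survivors by 1/(1-p) so the expected activation is unchanged;
    at evaluation time the layer is the identity."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```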
The sources of DQN variance are the Approximation Gradient Error (AGE) [23] and the Target Approximation Error (TAE) [24]. In the Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and widely differing predictions on the learning trajectory across episodes b...
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. We thus ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Class...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a fundamentally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein...
C
Mask R-CNN has also been used for segmentation tasks in medical image analysis such as automatically segmenting and tracking cell migration in phase-contrast microscopy (Tsai et al., 2019), detecting and segmenting nuclei from histological and microscopic images (Johnson, 2018; Vuola et al., 2019; Wang et al., 2019a, b...
The quantitative evaluation of segmentation models can be performed using pixel-wise and overlap based measures. For binary segmentation, pixel-wise measures involve the construction of a confusion matrix to calculate the number of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) pix...
As one of the first high impact CNN-based segmentation models, Long et al. (2015) proposed fully convolutional networks for pixel-wise labeling. They proposed up-sampling (deconvolving) the output activation maps from which the pixel-wise output can be calculated. The overall architecture of the network is visualized ...
Figure 13: Comparison of cross entropy and Dice losses for segmenting small and large objects. The red pixels show the ground truth and the predicted foregrounds in the left and right columns respectively. The striped and the pink pixels indicate false negative and false positive, respectively. For the top row (i.e., ...
Two popular overlap-based measures used to evaluate segmentation performance are the Sørensen–Dice coefficient (also known as the Dice coefficient) and the Jaccard index (also known as the intersection over union or IoU). Given two sets $\mathcal{A}$ and $\mathcal{B}$, these metrics are def...
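These standard definitions translate directly into code; a numpy sketch for binary masks:

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice coefficient 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| (intersection over union)."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```

Both scores are 1 for a perfect overlap and 0 for disjoint masks; Dice counts the intersection twice, so it is always at least as large as IoU.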
A
We use a graph that encodes the similarity of all words in the vocabulary. Each graph signal represents a review and consists of a binary vector with size equal to the vocabulary, which takes the value 1 for each word that appears at least once in the review, and 0 otherwise.
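Constructing such a binary graph signal can be sketched as follows (the toy vocabulary is illustrative, not from the dataset):

```python
vocabulary = ["good", "bad", "plot", "acting"]        # toy vocabulary
word_index = {w: i for i, w in enumerate(vocabulary)}

def graph_signal(review_tokens):
    """Binary vector over the vocabulary: 1 iff the word occurs
    at least once in the review, 0 otherwise."""
    signal = [0] * len(vocabulary)
    for tok in review_tokens:
        if tok in word_index:          # out-of-vocabulary words are dropped
            signal[word_index[tok]] = 1
    return signal
```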
In the following, we compare NDP with GRACLUS [11], NMF [38], DiffPool [40], and Top-$K$ [41]. In each experiment we adopt a fixed network architecture, MP(32)-P(2)-MP(32)-P(2)-MP(32)-AvgPool-Softmax, where MP(32) stands for an MP layer as described in (1), configured with 32 hidden units and ReLU activations, P...
The first baseline (LSTM) is a network where the dense hidden layer is replaced by an LSTM layer [55], which allows capturing the temporal dependencies in the sequence of words in the review. The other baseline (TCN) is a network where the hidden layers are 1D convolutions with different dilation rates [56].
Then, we train a simple classifier consisting of a word embedding layer [53] of size 200, followed by a dense layer with a ReLU activation, a dropout layer [54] with probability 0.5, and a dense layer with sigmoid activation. After training, we extract the embedding vector of each word in the vocabulary and construct a...
D
Sparse connectivity maintains the tree structures and has fewer weights to train. In practice, sparse weights require a special differentiable implementation, which can drastically decrease performance, especially when training on a GPU. Full connectivity optimizes all parameters of the fully connected network. Massive...
The number of parameters of the networks becomes enormous as the number of nodes grows exponentially with the increasing depth of the decision trees. Additionally, many weights are set to zero so that an inefficient representation is created. Due to both reasons, the mappings do not scale and are only applicable to sim...
These techniques, however, are only applicable to trees of limited depth. As the number of nodes grows exponentially with the increasing depth of the trees, inefficient representations are created, causing extremely high memory consumption. In this work, we address this issue by proposing an imitation learning-based me...
In this work, we present an imitation learning approach to generate neural networks from random forests, which results in very efficient models. We introduce a method for generating training data from a random forest that creates arbitrary amounts of input-target pairs. With this data, a neural network is trained to imitate t...
For training, we generate input-target pairs (x,y)𝑥𝑦(x,y)( italic_x , italic_y ) as described in the last section. These training examples are fed into the training process to teach the network to predict the same results as the random forest. To avoid overfitting, the data is generated on-the-fly so that each traini...
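The on-the-fly generation loop can be sketched as follows; the `teacher` below is a trivial stand-in for the trained random forest's prediction function, not the authors' model:

```python
import random

def teacher(x):
    """Stand-in for the random forest's prediction on one sample."""
    return 1 if x[0] + x[1] > 1.0 else 0

def generate_batch(n, dim=2, seed=0):
    """Sample fresh inputs and label them with the teacher, so every
    (x, y) pair is seen exactly once by the imitating network."""
    rng = random.Random(seed)
    return [(x, teacher(x))
            for x in ([rng.random() for _ in range(dim)] for _ in range(n))]
```

Generating the pairs fresh for every step is what prevents the student network from overfitting a fixed training set.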
B
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p...
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;...
Assuming the transition dynamics are known but only bandit feedback of the received rewards is available, the work of Neu et al. (2010a, b); Zimin and Neu (2013) establishes an $H^{2}\sqrt{|\mathcal{A}|T}/\beta$...
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient...
Our work is closely related to another line of work (Even-Dar et al., 2009; Yu et al., 2009; Neu et al., 2010a, b; Zimin and Neu, 2013; Neu et al., 2012; Rosenberg and Mansour, 2019a, b) on online MDPs with adversarially chosen reward functions, which mostly focuses on the tabular setting.
D
The computational cost of performing inference should match the (usually limited) resources in deployed systems and exploit the available hardware optimally in terms of time and energy. Computational efficiency, in particular, also includes mapping the representational efficiency to available hardware structures.
While the previous section indicates that quantized DNNs do not provide throughput improvements on general-purpose processors without explicit hardware support, there are other hardware platforms where quantization is mandatory. Data-flow architectures, as found typically on FPGAs, where the main objective is to keep a...
In this regard, resource-efficient neural networks for embedded systems are concerned with the trade-off between prediction quality and resource efficiency (i.e., representational efficiency and computational efficiency). This is highlighted in Figure 1. Note that this requires observing overall constraints such as pre...
A pre-trained full-precision DNN is then quantized using these bit widths and fine-tuned for one epoch to obtain a reward signal that is subsequently used to update the controller. Their method incorporates hardware-specific constraints, such as latency and energy consumption, that must be met by the controller.
This is in contrast to theoretical inference costs, such as numbers of parameters and required mathematical operations, that often do not reflect inference running time on real hardware well. Furthermore, power constraints are key for autonomous and embedded systems, as the device lifetime for a given battery charge ne...
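As a point of reference for the quantization schemes discussed above, the simplest form is a symmetric uniform weight quantizer (a generic sketch, not any specific method from the surveyed work):

```python
import numpy as np

def quantize_uniform(w, bits):
    """Map float weights onto 2^(bits-1) - 1 evenly spaced levels per
    sign in [-max|w|, +max|w|]; real pipelines typically fine-tune the
    network afterwards to recover accuracy, as described in the text."""
    scale = np.abs(w).max()
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    return np.round(w / scale * levels) / levels * scale
```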
D
Choose the canonical inclusion map $X\hookrightarrow X^{\prime}$ as $\rho$, the identity map on $\Lambda$ as $\pi$, and apply Theorem 5. ∎
By Azumaya’s theorem [10], persistence barcodes, whenever they exist, are unique: any two persistence barcodes associated to a given $V_{*}$ must agree (up to reordering). The most important existence result for persistence barcodes is Crawley-Boevey’s theorem ...
Finally, we recently became aware of [81, Lemma 5.1], which is similar to Theorem 5. The author considers spaces with numerable covers (i.e. the spaces admit a locally finite partition of unity subordinate to the covers), whereas in our version that condition is automatically satisfied since we only consider paracompact ...
In [80, Theorem 8.10], Z. Virk provided a proof of the Corollary below which takes place at the simplicial level. The proof we give below exploits the hyperconvexity properties of $L^{\infty}(X)$ and also our isomorphism theorem, Theorem...
A result similar to Corollary 4.1 was already proved in [24, Lemma 3.4] for finite index sets, whereas in our version index sets can have arbitrary cardinality. In [75, Theorem 25, Theorem 26], the authors prove a simplicial complex version of Corollary 4.1 for finite index sets and invoke a certain functorial v...
D
Some attempts to enrich scatterplots with automatically-derived statistical descriptions of patterns [38, 39, 40] have shown that static mappings may be useful in simple scenarios, but the complex relations between low- and high-dimensional space in non-linear projections cannot be well represented.
Labels   In order to better explain the contribution of t-viSNE, the data sets used in our use cases contain predefined labels, which is not the case in general when using unsupervised learning techniques, such as t-SNE. There is no restriction, however, to having labels when using t-viSNE; one might use the results of...
Cytosplore [11] is an example of a tool that uses t-SNE for visual data exploration within a specific domain: single-cell analysis with mass cytometry data. Apart from showing a t-SNE projection of the data, Cytosplore is also supported by a domain-specific clustering technique that serves as the basis for the rest of th...
Other than the ones discussed so far, some interactive tools have been designed with either specific DR methods in mind, such as SIRIUS [49], and FocusChanger [50], or for specific domains, such as Cytosplore [11]. t-SNE can also be used to explore and judge different clustering partitions of the same data set, as in ...
In such cases, interactive visual interfaces are necessary, as noted by Sacha et al. [15] in their survey on interaction techniques for DR. Interactive solutions for specific domains such as text [19, 20] and images [41, 7] use inherent characteristics of the data in order to explain layouts; however, they are not easi...
D
Aquatic animals: This type of meta-heuristic algorithm focuses on aquatic animals. The aquatic ecosystem in which they live has inspired different exploration mechanisms. It contains popular algorithms such as Krill Herd (KH, [259]), Whale Optimization Algorithm (WOA, [380]), and algorithms based on the echolocation u...
50 years of metaheuristics - 2024 [40]: This overview traces the last 50 years of the field, starting from the roots of the area to the latest proposals to hybridize metaheuristics with machine learning. The revision encompasses constructive (GRASP and ACO), local search (iterated local search, Tabu search, variable n...
Foraging: Rather than the movement strategy, in some other algorithmic variants it is the mechanism used to obtain food that drives the behavior of the animal and, ultimately, the design of the meta-heuristic algorithm. This foraging behavior can in turn be observed in many flavors, from the tactics used by the a...
Microorganisms: Meta-heuristics based on microorganisms are related to the food search performed by bacteria. A bacterial colony moves to search for food. Once the bacteria have found and consumed it, they split up to search the environment again. Other types of meta-heuristics that can be part of this category ar...
Terrestrial animals: Meta-heuristics in this category are inspired by foraging or movements in terrestrial animals. The most renowned approach within this category is the classical ACO meta-heuristic [115], which replicates the stigmergic mechanism used by ants to locate food sources and inform of the existence of the...
C
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update ...
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information and are therefore applicable to non-Euclidean data, which $k$-means cannot handle. Therefore,...
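When no graph is provided, a sparse graph is commonly built from k-nearest neighbours; a numpy sketch of that generic construction (this is not AdaGAE's learned generative model of Eq. (7)):

```python
import numpy as np

def knn_graph(X, k):
    """Symmetric k-nearest-neighbour adjacency matrix from raw feature
    vectors, the usual starting point for graph-based clustering."""
    n = X.shape[0]
    # pairwise Euclidean distance matrix
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d[i])[1:k + 1]    # skip self (distance 0)
        A[i, nn] = 1.0
    return np.maximum(A, A.T)             # symmetrize
```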
(1) By extending generative graph models to general data, GAE is naturally employed as the basic representation learning model, and weighted graphs can further be applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec...
However, the existing methods are limited to graph type data while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods. In this paper, we propo...
In recent years, GCNs have been extensively studied to extend neural networks to graph data. How to design a graph convolution operator is a key issue and has attracted considerable attention. Most approaches can be classified into two categories: spectral methods [24] and spatial methods [25].
C
The downside of this approach is that the Spoofer Project requires users to download, compile, and execute a software tool - which also needs administrative privileges to run - once per measurement. This requires not only technically knowledgeable volunteers who agree to run untrusted code, but also networks which agree to...
Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 20...
In a recent longitudinal data analysis by the Spoofer Project (Luckie et al., 2019), the authors observed that despite an increase in the coverage of ASes that do not perform ingress filtering in the Internet, the test coverage across networks and geo-locations is still non-uniform.
Our work provides the first comprehensive view of ingress filtering in the Internet. We showed how to improve the coverage of the Internet in ingress filtering measurements to include many more ASes that were previously not studied. Our techniques allow us to cover more than 90% of the Internet's ASes, in contrast to the best ...
Vantage Points. Measurement of networks which do not perform egress filtering of packets with spoofed IP addresses was first presented by the Spoofer Project in 2005 (Beverly and Bauer, 2005). The idea behind the Spoofer Project is to craft packets with spoofed IP addresses and check receipt thereof on the vantage poin...
B
For each batch $T$ from 3 through 10, the batches $1,2,\ldots,T-1$ were used to train skill NN and context+skill NN models for 30 random initializations of the starting weights. The accuracy was measured by classifying examples from batch $T$ (Fig. 3A, Table 1, Skill...
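The evaluation protocol of training on batches $1,\ldots,T-1$ and testing on batch $T$ can be sketched as follows (the majority-class "model" below is a trivial stand-in for the skill NN, used only to show the loop structure):

```python
from collections import Counter

def majority_train(batches):
    """Fit the stand-in model: return the most common training label."""
    labels = [y for batch in batches for (_, y) in batch]
    return Counter(labels).most_common(1)[0][0]

def evaluate(batches):
    """For each T in 3..len(batches): train on batches 1..T-1, test on T."""
    accs = {}
    for T in range(3, len(batches) + 1):
        pred = majority_train(batches[:T - 1])   # batches 1..T-1
        test = batches[T - 1]                    # batch T
        accs[T] = sum(pred == y for (_, y) in test) / len(test)
    return accs
```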
An alternative approach is to emulate adaptation in natural sensor systems. The system expects and automatically adapts to sensor drift, and is thus able to maintain its accuracy for a long time. In this manner, the lifetime of sensor systems can be extended without recalibration.
Second, skill NN and context+skill NN models were compared. The context-based network extracts features from preceding batches in sequence in order to model how the sensors drift over time. When added to the feedforward NN representation, such contextual information resulted in improved ability to compensate for senso...
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal...
While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape...
C
Unfortunately, $T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)$...
Note that the time waited is independent of $Q$. Together, these two types of waiting ensure that (i) the time needed by $\mathcal{B}$ is monotone in $|Q|$ and (ii) the total expected time needed by $\mathcal{B}$ equals the calculated upper bound for $\mathcal{A}$...
Algorithm $\mathcal{B}$ is simply algorithm $\mathcal{A}$, but after every step it waits as long as necessary to make its expected running time for that step equal to the bound calculated for this step. To be precise, there are two types of waiting, best explained by an example.
3: Compute the sets $\mathcal{B}_{1}^{(1)},\ldots,\mathcal{B}_{|\mathcal{T}|+1}^{(1)}$...
B
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
During the research and writing for this paper, the second author was previously affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment...
idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then $S\star T$ is an automaton semigroup. For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups:
The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem...
B
$P(a|\mathcal{Q},\mathcal{I})=f_{VQA}(v,\mathcal{Q}).$
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea...
Without additional regularization, existing VQA models, such as the baseline used in this work, UpDn Anderson et al. (2018), tend to rely on the linguistic priors $P(a|\mathcal{Q})$ to answer questions. Such models fail on VQA-CP, because the priors in ...
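The linguistic prior is just the empirical answer distribution conditioned on the question (type); a toy sketch of how such priors can be estimated from a train split (the data below are illustrative, not from VQA-CP):

```python
from collections import Counter, defaultdict

def answer_priors(pairs):
    """Estimate P(answer | question type) from (question type, answer)
    pairs; a prior-only model fails when the test answer distribution
    shifts, as it does in VQA-CP."""
    counts = defaultdict(Counter)
    for qtype, ans in pairs:
        counts[qtype][ans] += 1
    return {q: {a: c / sum(cnt.values()) for a, c in cnt.items()}
            for q, cnt in counts.items()}
```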
Table A4 shows VQA accuracy for each answer type on VQA-CP v2's test set. HINT/SCR and our regularizer show large gains on 'Yes/No' questions. We hypothesize that these methods help the model forget linguistic priors, which improves test accuracy on such questions. In the train set of VQA-CP v2, the answer 'no' is more frequent than t...
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende...
B
We crawled the 3.9 million selected URLs using Scrapy (https://scrapy.org/) for about 48 hours between the 4th and 10th of August 2019, for a few hours each day. 3.2 million URLs were successfully crawled, henceforth referred to as candidate privacy policies, while 0.4 million led to error pages and 0.3 million URLs w...
In order to address the requirement of a language model for the privacy domain, we created PrivBERT. BERT is a contextualized word representation model that is pretrained using bidirectional transformers (Devlin et al., 2019). It was pretrained on the masked language modelling and the next sentence prediction tasks an...
The complete set of documents was divided into 97 languages and an unknown language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.
Language Detection. We focused on privacy policies written in the English language, to enable comparisons with prior corpora of privacy policies. To identify the natural language of each candidate document, we used the open-source Python package Langid (Lui and Baldwin, 2012). Langid is a Naive Bayes-based classifier ...
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020)...
C
Furthermore, we allow the users to define specific weights for each metric and focus on the models that perform well for both the entire data space and specific instances. Finally, StackGenVis does not support direct manipulation of model ensembles [47], as it focuses on exploration of a large solution space before nar...
Stacking methods (or stacked generalizations) refer to a group of ensemble learning methods [45] where several base models are trained and combined into a metamodel with improved predictive power [63]. In particular, stacked generalization can reduce the bias and decrease the generalization error when compared to the u...
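A minimal numeric sketch of stacked generalization (two deliberately biased base models combined by a least-squares meta-model; all names and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.05, 200)   # true relation: 3x + 0.5

def base1(x):                 # base model 1: biased slope
    return 2.0 * x

def base2(x):                 # base model 2: wrong functional form
    return x ** 2

# Meta-model: least-squares combination of base predictions + intercept.
Z = np.column_stack([base1(x), base2(x), np.ones_like(x)])
w, *_ = np.linalg.lstsq(Z, y, rcond=None)

def stacked(xs):
    return np.column_stack([base1(xs), base2(xs), np.ones_like(xs)]) @ w
```

The meta-model learns to re-weight the biased base predictions, recovering the true relation more closely than either base model alone.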
Visualization systems have been developed for the exploration of diverse aspects of bagging, boosting, and further strategies such as “bucket of models”. Stacking, however, has so far not received comparable attention from the InfoVis/VA communities: indeed, we have not found any literature describing the construction ...
In a bucket of models, the best model for a specific problem is automatically chosen from a set of available options. This strategy is conceptually different to the ideas of bagging, boosting, and stacking, but still related to ensemble learning. Chen et al. [6] utilize a bucket of latent Dirichlet allocation (LDA) mod...
Predictions’ Space. The goal of the predictions’ space visualization (f) is to show an overview of the performance of all models of the current stack for different instances.
C
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))$...
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
$p(v,[013])=p(v,[313])=p(v,[113])=1$. Similarly, when $f=[112]$,
$\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}.$
B
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personali...
We use the Transformer [Vaswani et al., 2017] as the base model in the dialogue generation experiment. In Persona, we use pre-trained GloVe embeddings [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019].
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as “tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla ...
In the field of Natural Language Processing (NLP), the abundance of training data plays a crucial role in the performance of deep learning models [Dodge et al., 2021]. However, numerous NLP applications face a substantial challenge due to the scarcity of annotated data [Schick and Schütze, 2021]. For example, in person...
In the text classification experiment, we use accuracy (Acc) to evaluate classification performance. In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r...
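As a concrete reading of the PPL metric: perplexity is the exponential of the average negative log-likelihood the model assigns to the reference tokens. A minimal sketch (the token log-probabilities below are made up for illustration):

```python
import math

def perplexity(token_log_probs):
    """Corpus perplexity: exp of the mean negative log-likelihood
    (natural log) over all reference tokens."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model assigning probability 0.25 to every token has perplexity 4.
lp = [math.log(0.25)] * 10
assert abs(perplexity(lp) - 4.0) < 1e-9
```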
A
$i\in\mathcal{I}_{n_{s}}=\left[1,I=\lceil\frac{2\pi}{BW_{\text{a}}}\rceil\right]$, $j\in\mathcal{J}_{m_{s}}=\left[1,J=\lceil\frac{\pi}{BW_{\text{e}}}\rceil\right]$...
After the discussion on the characteristics of CCA, in this subsection, we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the ...
For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac...
Given the maximum resolution of the codebook, we continue to discuss the characteristic of the multi-resolution and the beamwidth with the CCA codebook. For the multi-resolution codebook, the variable resolution is tuned by the beamwidth, which is determined by the number of the activated elements [12]. Note that the ...
The conventional UPA/ULA codebook design mainly controls the beamwidth by the subarray activation/deactivation with different numbers of elements. In contrast, the codebook for DRE-covered CCA focuses on both the number of subarray elements and the specific subarray localization. The number of subarray elements determ...
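A back-of-the-envelope sketch of the codeword count implied by the beamwidths: covering the full azimuth range (360°) and elevation range (180°) with beams of widths $BW_{\text{a}}$ and $BW_{\text{e}}$ needs roughly $\lceil 360/BW_{\text{a}}\rceil \times \lceil 180/BW_{\text{e}}\rceil$ codewords at the finest resolution. Angles are in degrees here purely to keep the arithmetic exact; the function name is illustrative, not from the paper.

```python
import math

def codebook_grid(bw_az_deg, bw_el_deg):
    """Number of beams needed to cover azimuth [0, 360) and elevation
    [0, 180) with beamwidths bw_az_deg and bw_el_deg (in degrees)."""
    I = math.ceil(360 / bw_az_deg)   # azimuth codewords
    J = math.ceil(180 / bw_el_deg)   # elevation codewords
    return I, J

# 30-degree beams in both dimensions give a 12 x 6 codeword grid.
I, J = codebook_grid(30, 30)
# (I, J) == (12, 6)
```

Wider beams (fewer activated elements) shrink the grid; narrower beams refine it, which is the multi-resolution trade-off described above.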
D
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
After the merging, the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$. We perform the...
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
C
Related Work. When the value function approximator is linear, the convergence of TD is extensively studied in both continuous-time (Jaakkola et al., 1994; Tsitsiklis and Van Roy, 1997; Borkar and Meyn, 2000; Kushner and Yin, 2003; Borkar, 2009) and discrete-time (Bhandari et al., 2018; Lakshminarayanan and
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che...
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
B
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ...
Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the de...
Table 6 shows that though the BLEU improvements start saturating with deep depth-wise LSTM Transformers of more than 12 layers, depth-wise LSTM is able to ensure convergence of up to 24-layer Transformers. The experiments also show that the size differences between these datasets did not lead to differences...
In our deep Transformer experiments, Table 6 shows that our depth-wise LSTM Transformer with fewer layers, parameters and computations can lead to competitive/better performance and faster decoding speed than vanilla Transformers with more layers but a similar BLEU score, and the depth-wise LSTM Transformer is in fact ...
We show that the 6-layer Transformer using depth-wise LSTM can bring significant improvements in both WMT tasks and the challenging OPUS-100 multilingual NMT task. We show that depth-wise LSTM also has the ability to support deep Transformers with up to 24 layers, and that the 12-layer Transformer using depth-wis...
B
$Y_{n}$ for some $n$, hence $g^{-1}(U)=g_{n}^{-1}(U)$...
commute. As $I$ is non-empty, consider some $i\in I$; we have $g_{i}=\mathrm{id}_{i}\circ g=\mathrm{id}_{i}\circ g'$...
$f_{i}\circ g=g_{i}$ for every $i\in I$. In particular, consider a compact open set of $X$: it can be written as $f_{i}^{-1}(K)$...
We finally prove that $X$ is the limit in $\mathbf{PreSpec}$. Consider $\{g_{i}\colon Y\to X_{i}\}_{i\in I}$...
Assume that $\{g_{i}\colon Z\to Y_{i}\}_{i\in I}$ is a collection o...
A
As mentioned above, most previous learning methods correct the distorted image based on distortion parameter estimation. However, due to the implicit and heterogeneous representation, the neural network suffers from an insufficient learning problem and an imbalanced regression problem. These problems seriously limit...
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify...
The ordinal distortion represents the image feature in terms of the distortion distribution, which is jointly determined by the distortion parameters and location information. We assume that the camera model is the division model, and the ordinal distortion 𝒟𝒟\mathcal{D}caligraphic_D can be defined as
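Under the division model, a sketch of what an ordinal-distortion-style quantity can look like: the ratio between the distorted radius and its rectified counterpart, sampled at increasing radii. The coefficient value and sample radii are made up; this illustrates the camera model, not the paper's exact definition of $\mathcal{D}$.

```python
def division_model(r, ks):
    """Rectified radius for a distorted radius r under the division
    model: r_u = r / (1 + k1*r**2 + k2*r**4 + ...)."""
    denom = 1.0
    for i, k in enumerate(ks, start=1):
        denom += k * r ** (2 * i)
    return r / denom

def distortion_levels(radii, ks):
    """Ratio distorted/rectified radius at each sample (1.0 = none)."""
    return [r / division_model(r, ks) for r in radii]

# With a single positive coefficient, the distortion level grows
# monotonically from the image center towards the boundary.
levels = distortion_levels([0.25, 0.5, 0.75, 1.0], ks=[0.2])
# levels increase: [1.0125, 1.05, 1.1125, 1.2] (up to float rounding)
```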
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even when the amount of training data is only 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl...
B
First, we use the CIFAR-10 dataset and the ResNet20 model [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU computes a gradient with batch size $B/8$. If $B/8\geq 128$, we use gradient accumulation [28] with batch size 128. ...
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b...
We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
We can observe that for almost all batch sizes, the methods that adopt normalized gradients, including LARS, CLARS, and SNGM, achieve better performance than others. Compared to LARS and CLARS, SNGM achieves better test accuracy for different batch sizes.
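The common thread of the normalized-gradient methods compared here can be sketched as follows; this is a generic normalized SGD-with-momentum update, not the paper's exact SNGM formulation, and the hyperparameter values are illustrative.

```python
import math

def normalized_momentum_step(w, grad, buf, lr=0.1, beta=0.9, eps=1e-12):
    """One update of SGD with momentum where the stochastic gradient is
    normalized first, so the step size is bounded regardless of the
    gradient's scale -- the property that helps large-batch training."""
    norm = math.sqrt(sum(g * g for g in grad)) + eps
    buf = [beta * b + g / norm for b, g in zip(buf, grad)]
    w = [wi - lr * bi for wi, bi in zip(w, buf)]
    return w, buf

# Even a huge gradient moves w by roughly lr per step, not lr * 1e6.
w, buf = [1.0, -2.0], [0.0, 0.0]
w, buf = normalized_momentum_step(w, [1e6, 0.0], buf)
# w[0] is 0.9: a step of lr along the unit gradient direction
```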
D
There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality $|\mathcal{S}|$. Our polynomial-scenarios algorithms are carefully designed to make $|\mathcal{S}|$...
For this guess $R$, the polynomial-scenarios algorithm returns a stage-I set $F_{I}$, and a stage-II set $F_{A}$ for each $A\in Q$. Our polynomial-sce...
Following the lines of [26], it may be possible to replace this dependence with a notion of dimension of the underlying convex program. However, such general bounds would lead to significantly larger complexities, consisting of very high order polynomials of $n$, $m$.
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions. To continue this example, there may be further constraints on $F_{I}$...
C
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the graph at the same time instant and the graphs at different time instants may be mutually dependent), rather than being the i.i.d. graph sequences of [12]-[15], and additive and...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi...
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independency with identical distribution, Markovian...
We have studied the distributed stochastic subgradient algorithm for the stochastic optimization by networked nodes to cooperatively minimize a sum of convex cost functions. We have proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditio...
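A minimal sketch of the algorithm class analyzed here: each node mixes its neighbors' states through a weight matrix and then steps along its own local subgradient. The fixed doubly stochastic weights, step size, and toy quadratic costs are illustrative simplifications; the paper's setting has random time-varying digraphs and only conditionally balanced weights.

```python
def distributed_subgradient_step(xs, W, subgrads, step):
    """One iteration: node i averages states with weights W[i][j],
    then descends along its local subgradient at the mixed point."""
    n = len(xs)
    mixed = [sum(W[i][j] * xs[j] for j in range(n)) for i in range(n)]
    return [mixed[i] - step * subgrads[i](mixed[i]) for i in range(n)]

# Toy problem: minimize sum_i (x - a_i)^2 cooperatively over 3 nodes.
a = [0.0, 1.0, 2.0]
subgrads = [lambda x, ai=ai: 2 * (x - ai) for ai in a]
W = [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]]
xs = [5.0, -3.0, 0.0]
for _ in range(200):
    xs = distributed_subgradient_step(xs, W, subgrads, step=0.05)
# all states cluster around the global minimizer x* = mean(a) = 1.0,
# with a residual consensus spread on the order of the step size
```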
B
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces ...
Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of queries issued by recipients rather than publishing microdata, while the approaches of the $k$-anonymity family sanitize the origi...
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics...
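As a concrete instance of the mechanism just mentioned, a minimal randomized-response sketch for one-bit statistics (the truthful-report probability p and the toy population are illustrative):

```python
import random

def randomized_response(bit, p=0.75, rng=random):
    """Report the true bit with probability p, the flipped bit
    otherwise; this satisfies eps-LDP with eps = ln(p / (1 - p))."""
    return bit if rng.random() < p else 1 - bit

def debias(reports, p=0.75):
    """Unbiased estimate of the true fraction of 1s:
    E[obs] = p*pi + (1-p)*(1-pi)  =>  pi = (obs - (1-p)) / (2p - 1)."""
    obs = sum(reports) / len(reports)
    return (obs - (1 - p)) / (2 * p - 1)

rng = random.Random(0)
truth = [1] * 600 + [0] * 400        # true fraction of 1s: 0.6
noisy = [randomized_response(b, rng=rng) for b in truth]
estimate = debias(noisy)             # concentrates around 0.6
```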
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i...
C
The 3D-FUTURE dataset is a recently released large-scale indoor dataset with 34 categories. Following the official splits, we adopt 12,144 images for training, 2,024 for validation and 6,072 for testing. From the size distribution of bounding boxes in 3D-FUTURE and COCO shown in Figure 1, the medium object size of 3D-FUTURE ...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
Table 2: PointRend’s step-by-step performance on our own validation set (splitted from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “...
Table 3: PointRend’s performance on the testing set (trackB). “EnrichFeat” means enhancing the feature representation of the coarse mask head and point head by increasing the number of fully-connected layers or their hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvement; we guess...
C
$I(f)<1,\quad\text{and}\quad H(|\hat{f}|^{2})>\frac{n}{n+1}\log n.$
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
In version 1 of this note, which can still be found on the arXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma s...
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base 2. Note that if $f$ has $L_{2}$ norm 1 then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$...
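The entropy in the conjecture can be computed directly once the spectral weights $|\hat f(A)|^2$ are in hand; a minimal sketch with the $0\log 0:=0$ convention and log base 2, on made-up weight vectors:

```python
import math

def entropy(weights):
    """Shannon entropy (base 2) of nonnegative weights summing to 1,
    with the convention 0*log(0) := 0."""
    return -sum(w * math.log2(w) for w in weights if w > 0)

# Spectrum spread uniformly over 8 coefficients: entropy log2(8) = 3.
assert entropy([1 / 8] * 8) == 3.0
# All spectral mass on a single coefficient: entropy 0.
assert entropy([1.0, 0.0, 0.0]) == 0.0
```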
B
The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202...
The rest of the paper is organized as follows. Section 2 presents our problem definition. Section 3 establishes the minimax regret lower bound for nonstationary linear MDPs. Section 4 and Section 5 present our algorithms LSVI-UCB-Restart, Ada-LSVI-UCB-Restart and their dynamic regret bounds. Section 6 shows our experi...
In this section, we derive minimax regret lower bounds for nonstationary linear MDPs in both inhomogeneous and homogeneous settings, which quantify the fundamental difficulty when measured by the dynamic regret in nonstationary linear MDPs. More specifically, we consider the inhomogeneous setting in this paper, where the t...
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inh...
B
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst...
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et a...
D
Although GCN and GAT are generally regarded as inductive models for graph representation learning, our analysis in previous sections suggests their limited applicability on relational KG embedding. In further validation of this, we compare the performance of decentRL with AliNet and GAT on datasets containing new enti...
Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggreg...
We conduct experiments to explore the impact of the numbers of unseen entities on the performance in open-world entity alignment. We present the results on the ZH-EN datasets in Figure 6. Clearly, the performance gain achieved by leveraging our method significantly increases when there are more unseen entities. For ex...
The results in Table 10 demonstrate that all variants of decentRL achieve state-of-the-art performance on Hits@1, empirically proving the superiority of using the neighbor context as the query vector for aggregating neighbor embeddings. The proposed decentRL outperforms both decentRL w/ infoNCE and decentRL w/ L2, provid...
Table 4 presents the results of conventional entity alignment. decentRL achieves state-of-the-art performance, surpassing all others in Hits@1 and MRR. AliNet [39], a hybrid method combining GCN and GAT, performs better than the methods solely based on GAT or GCN on many metrics. Nonetheless, across most metrics and da...
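For reference, the two headline metrics can be computed from the rank of the aligned entity per query; the rank list below is made up for illustration:

```python
def hits_at_k(ranks, k):
    """Fraction of queries whose correct entity ranks in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks):
    """Mean reciprocal rank of the correct entity (ranks start at 1)."""
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 1, 2]            # rank of the true counterpart per query
h1 = hits_at_k(ranks, 1)        # 0.5
m = mrr(ranks)                  # (1 + 1/3 + 1 + 1/2) / 4
```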
A
We illustrate the results in Fig. 9. We observe that the episode length becomes longer over training time with the intrinsic reward estimated from VDM, as anticipated. We observe that our method reaches the episode length of $10^{4}$ with the minimum iterati...
At the beginning of each episode, we put three objects in the workspace. Using fewer objects makes it harder for the robot arm to interact with the objects by taking actions randomly. We use a set of 10 different objects for training and 5 objects for testing. We follow [13] and use the Object-Interaction Frequency (OIF) ...
Upon fitting VDM, we propose an intrinsic reward by an upper bound of the negative log-likelihood, and conduct self-supervised exploration based on the proposed intrinsic reward. We evaluate the proposed method on several challenging image-based tasks, including 1) Atari games, 2) Atari games with sticky actions, which...
Finally, to evaluate our proposed method in real-world tasks, we conduct experiments on the real-world robot arm to train a self-supervised exploration policy. We highlight that policy learning in a real robot arm needs to consider both the stochasticity in the robot system and the different dynamics corresponding to d...
We demonstrate the setup of the experiment in Fig. 10. The equipment mainly includes an RGB-D camera that provides the image-based observations, a UR5 robot arm that interacts with the environment, and different objects in front of the robot arm. An example of the RGB-D image is shown in Fig. 11. We develop a robot en...
C
The number of coefficients $|A_{m,n,1}|=\binom{m+n}{n}\in\mathcal{O}(m^{n})$...
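For intuition on this growth, the count $\binom{m+n}{n}$ can be tabulated directly (a small sanity check, not from the paper):

```python
from math import comb

def num_coeffs(m, n):
    """|A_{m,n,1}| = C(m+n, n): number of coefficients of an n-variate
    polynomial of total degree at most m."""
    return comb(m + n, n)

# Fixed degree m = 4, growing dimension n: O(m^n) growth.
counts = [num_coeffs(4, n) for n in (1, 2, 3, 4)]
# counts == [5, 15, 35, 70]
```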
Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to scale sub-exponential with space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are abl...
Thus, combining sub-exponential node numbers with exponential approximation rates, interpolation with respect to $l_{2}$-degree polynomials might yield a way of lifting the curse of dimensionality and answering Question 1.
convergence rates for the Runge function, as a prominent example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality....
In any case, any answer to Question 2 that is to be of practical relevance must provide a recipe to construct interpolation nodes $P_{A}$ that allow efficient approximation while resisting the curse of dimensionality in terms of Question 1.
B
Typical examples include principal component analysis [27], linear discriminant analysis [28], etc. It is intuitive to understand the differences between two collections of high-dimensional samples by projecting those samples into low-dimensional spaces in some proper directions [29, 30, 31, 6, 32, 33, 34].
Recently, [32, 33, 34] naturally extend this idea by projecting data points into a $k$-dimensional linear subspace with $k>1$ such that the 2-Wasserstein distance after projection is maximized. Our proposed projected Wasserstein distance is similar to this framework, but we use 1-Wasserst...
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized. The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by ...
While the Wasserstein distance has wide applications in machine learning, the finite-sample convergence rate of the Wasserstein distance between empirical distributions is slow in high-dimensional settings. We propose the projected Wasserstein distance to address this issue.
The computation of projected Wasserstein distance was recently studied in [43, 32, 34]. We use the Riemannian gradient method discussed in [32, Algorithm 3] to compute the projected Wasserstein distance, where the details of the corresponding algorithm are summarized in Appendix B.
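A minimal sketch of the quantity being computed (not the Riemannian algorithm of [32]): for one-dimensional projections and equal sample sizes, the 1-Wasserstein distance reduces to sorting, and the max-sliced variant takes the worst case over a set of candidate directions. The two toy point clouds and the direction set below are illustrative.

```python
def w1_1d(xs, ys):
    """1-Wasserstein distance between equal-size 1-D samples: mean
    absolute difference after sorting (the optimal 1-D coupling)."""
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def max_sliced_w1(X, Y, directions):
    """Project both samples on each unit direction; keep the worst
    projected 1-D distance."""
    def proj(P, d):
        return [sum(p * di for p, di in zip(pt, d)) for pt in P]
    return max(w1_1d(proj(X, d), proj(Y, d)) for d in directions)

# Two 2-D samples differing only along the first axis: direction
# (1, 0) exposes the unit shift, while (0, 1) sees no difference.
X = [(0.0, 0.3), (0.0, -0.3)]
Y = [(1.0, 0.3), (1.0, -0.3)]
d = max_sliced_w1(X, Y, [(1.0, 0.0), (0.0, 1.0)])
# d == 1.0
```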
A
Learning disentangled factors $h\sim q_{\phi}(H|x)$ that are semantically meaningful representations of the observation $x$ is highly desirable because such interpreta...
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and corre...
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervise...
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
A
The graph described in Fig. 4 is an implementation of an XOR gate combining NAND and OR, expressed in 33 vertices and 46 main lines. Red and blue numbers in the graphs distinguish the cases where a main line has no direction (it can be traversed in both directions) from those where it has a direction (the ma...
Fig. 3 shows the AND and OR gates consisting of 3-pin based logic; Fig. 3 also shows the connection status of the output pin when A=0, B=1 is entered into the AND gate. When A=0 and B=1, with A connected and B connected, output C is connected only to the following two pins, and this is the correct result for the AND operation.
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations since series and parallel connections were required. However, one can ask whether the four-pin designs are the...
We examine the inputs through 18 test cases to see whether the circuit is acceptable. Next, we verify with DFS that the output is possible for the actual pin-connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab...
DFS (depth-first search) verifies that the output is possible for the actual pin-connection state. As described above, the output is determined by the 3-pin input, so we enter 1 for the A2–A1 and B2–B1 connections (the reverse is treated as 0), and the corresponding output will be recognized...
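As a sketch of this verification step, reachability of an output pin from an input pin can be checked with a plain iterative DFS (the vertex names below are hypothetical stand-ins for the paper's pin labels):

```python
def reachable(adj, start):
    """Depth-first search returning the set of vertices reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj.get(v, ()))
    return seen

# Hypothetical pin graph: A1 wired to A2, which connects to output C;
# B1 is left disconnected (input 0).
adj = {"A1": ["A2"], "A2": ["A1", "C"], "C": ["A2"], "B1": []}
print("C" in reachable(adj, "A1"))  # True: output pin C is reachable
```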
$= 3(x) + 3(x^{3} + 2x^{2} + 3x + 3) + 4(2x^{3} + 3x^{2} + 4x + 2)$
The paper is organized as follows. Section 2 focuses on linear representation for maps over finite fields $\mathbb{F}$, develops conditions for invertibility, computes the compositional inverse of such maps, and estimates the cycle structure of permutation polynomials. In Section 3, this linear representat...
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Although there is a vast literature on the invertibility of polynomials and the construction of inverses of permutation polynomials over $\mathbb{F}$...
In this section, we focus on additional results on the linear representation of $f$ when $f$ is a monomial function. The following theorem re-establishes the condition for invertibility of a monomial while adding further results on the linear complexity.
We now show that whenever $f$ is a permutation function in $\mathbb{F}_{q}$, the inverse function can be represented similarly over the same space $S$. First, we prove a condition of invertibility of $f$ in terms of the ...
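As a small illustration of monomial invertibility (the standard fact that $x^{e}$ permutes $\mathbb{F}_{q}$ exactly when $\gcd(e, q-1) = 1$), the criterion can be cross-checked against brute force over a prime field:

```python
from math import gcd

def is_permutation_monomial(e, q):
    """x^e permutes F_q iff gcd(e, q-1) == 1 (q prime here, so F_q is
    simply the integers mod q)."""
    return gcd(e, q - 1) == 1

def brute_force(e, q):
    """Directly check that x -> x^e mod q is a bijection on {0, ..., q-1}."""
    return len({pow(x, e, q) for x in range(q)}) == q

q = 7
for e in range(1, 7):
    assert is_permutation_monomial(e, q) == brute_force(e, q)
print([e for e in range(1, 7) if is_permutation_monomial(e, q)])  # [1, 5]
```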
Since feature selection often comes at a cost in terms of stability (Xu et al., 2012), it is to be expected that view selection stability ($\hat{\Phi}$) is higher for meta-learners that select more views. The results of two meta-learners do not align with this pattern, name...
The results for the breast cancer data can be observed in Table 3. The interpolating predictor and the lasso are the best performing meta-learners in terms of all three classification measures, with the interpolating predictor having higher test accuracy and H, and the lasso having higher AUC. However, the interpolatin...
The true positive rate in view selection for each of the meta-learners can be observed in Figure 2. Ignoring the interpolating predictor for now, nonnegative ridge regression has the highest TPR, which is unsurprising seeing as it performs feature selection only through its nonnegativity constraints. Nonnegative ridge...
The results of applying MVS with the seven different meta-learners to the colitis data can be observed in Table 2. In terms of raw test accuracy the nonnegative lasso is the best performing meta-learner, followed by the nonnegative elastic net and the nonnegative adaptive lasso. In terms of AUC and H, the best performi...
Another line of research in anomaly detection exploits the dependency among variables, assuming normal objects follow the dependency while anomalies do not. Dependency-based methods [4, 5] evaluate the anomalousness of objects through how much they deviate from normal dependency possessed by the majority of objects.
A common way of examining dependency deviations in the dependency-based approach is to check the difference between the observed value and the expected value of an object, where the expected value is estimated based on the underlying dependency between variables [7, 4, 5]. Thus, dependency-based approach naturally lead...
The dependency-based approach works under the assumption that anomalies deviate from the normal dependency among variables, and the extent of anomalousness is evaluated based on this deviation. While the proximity-based approach focuses on relationships among objects, the dependency-based approach emphasizes t...
This example highlights the fundamental difference between proximity-based and dependency-based methods. Dependency-based methods focus on identifying anomalies based on underlying relationships between variables, whereas proximity-based methods rely on object similarity in terms of proximity. In cases like this, where...
The dependency-based approach is fundamentally different from the proximity-based approach because it considers relationships among variables, while the proximity-based approach examines relationships among objects. We use an example to explain the difference between the two approaches.
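A minimal sketch of the dependency-based idea (a linear stand-in for the estimators used in [4, 5], with made-up data): fit the normal dependency of one variable on the rest, and score each object by the deviation between its observed and expected values.

```python
import numpy as np

def dependency_scores(X, target_col):
    """Score each object by how much its observed value of one variable
    deviates from the value expected under a linear dependency fitted on
    the remaining variables."""
    y = X[:, target_col]
    A = np.delete(X, target_col, axis=1)
    A = np.hstack([A, np.ones((len(A), 1))])      # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # fit the "normal" dependency
    return np.abs(y - A @ coef)                   # deviation = anomaly score

# Most objects follow y ~= 2x; the middle one breaks the dependency.
X = np.array([[1, 2.0], [2, 4.1], [3, 12.0], [4, 8.0], [5, 10.1]])
scores = dependency_scores(X, target_col=1)
print(int(np.argmax(scores)))  # 2: the dependency-violating object
```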
Comparison with Oh & Iyengar [2021]. While the authors in Oh & Iyengar [2021] provide sharper bounds by a factor of $\tilde{\mathrm{O}}(\sqrt{d})$, they still retain the $\kappa$ multiplicative factor in their regret bounds. Thei...
Comparison with Abeille et al. [2021]. Abeille et al. [2021] recently proposed the idea of convex relaxation of the confidence set for the more straightforward logistic bandit setting. Our work can be viewed as an extension of their construction to the MNL setting.
Comparison with Filippi et al. [2010]. Our setting is different from the standard generalized linear bandit of Filippi et al. [2010]. In our setting, the reward due to an action (assortment) can depend on up to $K$ variables ($\theta_{*} \cdot x_{t,i},\ i \in \mathcal{Q}_{t}$...
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
A confidence set similar to $E_{t}(\delta)$ in Eq. (7) was recently proposed in Abeille et al. [2021] for the simpler logistic bandit setting. Here, we extend its construction to the MNL setting. The set $E_{t}(\delta)$...
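For concreteness, the multinomial-logit choice probabilities underlying this setting can be sketched as follows (a generic textbook form with hypothetical feature vectors, not the paper's exact parametrization):

```python
import numpy as np

def mnl_choice_probs(theta, assortment):
    """Multinomial-logit purchase probabilities for an assortment.
    The outside (no-purchase) option has utility 0, hence the "1 +"."""
    utilities = np.array([x @ theta for x in assortment])
    expu = np.exp(utilities)
    denom = 1.0 + expu.sum()       # "1 +" accounts for the no-purchase option
    return expu / denom

theta = np.array([1.0, -0.5])                              # hypothetical parameter
assortment = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # item features
p = mnl_choice_probs(theta, assortment)
print(p.sum() < 1.0)  # True: the remaining mass is the no-purchase probability
```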
3) VSGN shows obvious improvement on short actions over other concurrent methods, and also achieves new state-of-the-art overall performance. On THUMOS-14, VSGN reaches 52.4% mAP@0.5, compared to previous best score 40.4% under the same features. On ActivityNet-v1.3, VSGN reaches an average mAP of 35.07%, compared to t...
Fig. 2 demonstrates the overall architecture of our proposed Video self-Stitching Graph Network (VSGN). It comprises three components: video self-stitching (VSS), cross-scale graph pyramid network (xGPN), and scoring and localization (SoL), which will be elaborated in Sec. 3.2, 3.3, and 3.4, respectively. Before delv...
In this paper, to tackle the challenging problem of large action scale variation in the temporal action localization (TAL) problem, we target short actions and propose a multi-level cross-scale solution called video self-stitching graph network (VSGN). It contains a video self-stitching (VSS) component that generates ...
Figure 2: Architecture of the proposed video self-stitching graph network (VSGN). It takes a video sequence and generates detected actions with start/end times as well as their categories. It has three components: video self-stitching (VSS), cross-scale graph pyramid network (xGPN), and scoring and localization (SoL)....
Inspired by FPN [22], which computes multi-scale features with different levels, we propose a cross-scale graph pyramid network (xGPN). It progressively aggregates features from cross scales as well as from the same scale at multiple network levels via a hybrid module of a temporal branch and a graph branch. As shown ...
Important contributions of this research include the formalization of primary concepts [CDM15], the identification of methods for assessing hyperparameter importance [JWXY16, PBB19, vRH17, HHLB13, HHLB14, vRH18], and resulting libraries and frameworks for specific hyperparameter optimization methods [KGG∗18, THHLB13]. ...
Despite the success of automatic approaches and their advancement through the years, it is important to note that such approaches require extensive computing power and may lack critical features. Automatically (or manually) set thresholds may discard different models which could be informative but theoretically seem t...
In this paper, we presented VisEvol, a VA tool with the aim to support hyperparameter search through evolutionary optimization. With the utilization of multiple coordinated views, we allow users to generate new hyperparameter sets and store the already robust hyperparameters in a majority-voting ensemble. Exploring th...
Visualization tools have been implemented for sequential-based, bandit-based, and population-based approaches [PNKC21], and for more straightforward techniques such as grid and random search [LCW∗18]. Evolutionary optimization, however, has not experienced similar consideration by the InfoVis and VA communities, with t...
In the Sankey diagram (see Figure 3(a)), the user tracks the progress of the evolutionary process and is able to limit the number of models that will be generated through crossover and mutation for each algorithm (Step 4 in Figure 1). The default here is defined as the user-selected random search value / 2 for each algo...
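The crossover and mutation step can be sketched generically (a hypothetical hyperparameter space; VisEvol's actual operators may differ):

```python
import random

def crossover(a, b, rng):
    """Uniform crossover: each hyperparameter is inherited from either parent."""
    return {k: (a if rng.random() < 0.5 else b)[k] for k in a}

def mutate(cfg, space, rng, rate=0.3):
    """With probability `rate`, resample a hyperparameter from its search space."""
    return {k: (rng.choice(space[k]) if rng.random() < rate else v)
            for k, v in cfg.items()}

# Hypothetical search space for a tree ensemble.
space = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, 8]}
rng = random.Random(0)
parent_a = {"n_estimators": 50, "max_depth": 8}
parent_b = {"n_estimators": 200, "max_depth": 3}
child = mutate(crossover(parent_a, parent_b, rng), space, rng)
print(set(child) == set(space))  # True: the child keeps the same hyperparameter keys
```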
Consensus protocols, in contrast to Markov chains, operate without the limitations of non-negative nodes and edges or the requirement for the sum of nodes to equal one [18]. This broader scope enables consensus protocols to address a significantly wider range of problem spaces. Therefore, there is a significant interes...
Consensus protocols form an important field of research that has a strong connection with Markov chains [18]. Consensus protocols are a set of rules used in distributed systems to achieve agreement among a group of agents on the value of a variable [19, 20, 21, 22].
We introduce a consensus protocol with state-dependent weights to reach a consensus on time-varying weighted graphs. Unlike other proposed consensus protocols in the literature, the consensus protocol we introduce does not require any connectivity assumption on the dynamic network topology. We provide theoretical analy...
There are comprehensive survey papers that review the research on consensus protocols [19, 20, 21, 22]. In many scenarios, the network topology of the consensus protocol is a switching topology due to failures, formation reconfiguration, or state-dependence. There is a large number of papers that propose consensus prot...
Another algorithm is proposed in [28] that assumes the underlying switching network topology is ultimately connected. This assumption means that the union of graphs over an infinite interval is strongly connected. In [29], previous works are extended to solve the consensus problem on networks under limited and unreliab...
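A minimal consensus iteration on a fixed topology looks as follows (toy weights on a 3-agent path graph; the protocols discussed above handle switching topologies, which this sketch does not):

```python
import numpy as np

def consensus_step(x, W):
    """One synchronous consensus update x <- W x, with W row-stochastic:
    each agent moves toward a weighted average of its neighbours."""
    return W @ x

# Row-stochastic weights for a 3-agent path graph (agent 1 in the middle).
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
x = np.array([0.0, 3.0, 9.0])   # initial states
for _ in range(200):
    x = consensus_step(x, W)
print(np.allclose(x, x[0]))  # True: all agents agree on a common value
```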
A disadvantage of synchronisation-based multi-shape matching is that it is a two-stage procedure, where pairwise matchings are obtained in the first stage, and synchronisation is ensured in the second. With that, the matching results are often suboptimal – even if one reverts to an alternating procedure using a so...
A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisati...
There are various works that particularly target the matching of multiple shapes. In [30, 32], semidefinite programming relaxations are proposed for the multi-shape matching problem. However, due to the employed lifting strategy, which drastically increases the number of variables, these methods are not scalable to lar...
Although multi-matchings obtained by synchronisation procedures are cycle-consistent, the matchings are often spatially non-smooth and noisy, as we illustrate in Sec. 5. From a theoretical point of view, the most appropriate approach for addressing multi-shape matching is based on a unified formulation, where cycle con...
The main goal of our paper is: given a graph $G$, find a (directed) clique path tree of $G$ or determine that $G$ is not a (directed) path graph. To reach our purpose, we follow the same approach as [18], decomposing $G$ recursively by clique separators.
A chordal graph $G$ is a directed path graph if and only if $G$ is an atom or for a clique separator $C$ each graph $\gamma \in \Gamma_{C}$ is a path graph and the $\gamma_{i}$...
A clique is a clique separator if its removal disconnects the graph into at least two connected components. A graph with no clique separator is called an atom. For example, no cycle has a clique separator, and the butterfly/hourglass graph has two cliques and is an atom. In [18] it is proved that an atom is a path g...
A chordal graph $G$ is a path graph if and only if $G$ is an atom or for a clique separator $C$ each graph $\gamma \in \Gamma_{C}$ is a path graph and there exists $f \colon \Gamma_{C} \to [s]$...
A chordal graph $G$ is a directed path graph if and only if $G$ is an atom or for a clique separator $C$ each graph $\gamma \in \Gamma_{C}$ is a directed path graph, $\mathit{Upper}_{C} = (u_{1}, u_{2}, \ldots, u_{r})$...
The numerical results are given in the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
In this section, first, we investigate the performances of Mixed-SLIM methods for the problem of mixed membership community detection via synthetic data. Then we apply some real-world networks with true label information to test Mixed-SLIM methods’ performances for community detection, and we apply the SNAP ego-network...
In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the origi...
Table 2 records the error rates on the four real-world networks. The numerical results suggest that Mixed-SLIM methods enjoy satisfactory performances compared with SCORE, SLIM, OCCAM, Mixed-SCORE, and GeoNMF when detecting the four empirical datasets. In particular, the number of errors for Mixed-SLIM on the Polblogs network...
In this paper, we extend the symmetric Laplacian inverse matrix (SLIM) method to mixed membership networks and call the proposed method Mixed-SLIM. As mentioned in the SLIM paper, the idea of using the symmetric Laplacian inverse matrix to measure the closeness of nodes comes from the first hitting time in a random...
For any functional $F \colon \mathcal{M} \to \mathbb{R}$, we let $\operatorname{grad} F$ denote the functional gradient of $F$ with respect to the Riemannian metric $g$.
To study optimization problems on the space of probability measures, we first introduce the background knowledge of the Riemannian manifold and the Wasserstein space. In addition, to analyze the statistical estimation problem that arises in estimating the Wasserstein gradient, we introduce the reproducing kernel Hilber...
We prove that variational transport constructs a sequence of probability distributions that converges linearly to the global minimizer of the objective functional, up to a statistical error due to estimating the Wasserstein gradient with finite particles. Moreover, such a statistical error converges to zero as the numbe...
Here the statistical error is incurred in estimating the Wasserstein gradient by solving the dual maximization problem using functions in a reproducing kernel Hilbert space (RKHS) with finite data, which converges sublinearly to zero as the number of particles goes to infinity. Therefore, in this scenario, variational ...
Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to certain statistical...
Definition 3 (Average Travel Time). The travel time of a vehicle is the time discrepancy between entering and leaving a particular area. A vehicle's trip from origin to destination (OD) is regarded as one travel. The average travel time of all vehicles in a road network is the most frequently used measure to evaluate the per...
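The definition can be computed directly; a small sketch with made-up enter/leave times:

```python
def average_travel_time(trips):
    """Average travel time over all vehicles, where each trip is an
    (enter_time, leave_time) pair for the area of interest."""
    durations = [leave - enter for enter, leave in trips]
    return sum(durations) / len(durations)

# Three vehicles entering/leaving the network (times in seconds).
trips = [(0, 120), (30, 210), (60, 150)]
print(average_travel_time(trips))  # 130.0
```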
The most straightforward RL baseline considers each intersection independently and models the task as a single agent RL problem [12]. However, the observation, received reward and dynamics of each traffic signal are closely related to its neighbors, and the coordination between signals should be modeled. Hence, optimiz...
We can obtain the following findings: 1) Among these 5 models, the performance of Baseline is the worst. The reason is that it is hard to learn the effective decentralized policy independently in the multi-agent traffic signal control task, where one agent’s reward and transition are affected by its neighbors. 2) Compa...
Before formulating the problem, we first design the learning paradigm by analyzing the characteristics of traffic signal control (TSC). Due to the coordination among different signals, the most direct paradigm may be centralized learning. However, the global state information in TSC is not only highly redundant a...
Secondly, even for a specific task, the received rewards and observations are uncertain to the agent, as illustrated in Fig. 1, which makes policy learning unstable and non-convergent. Even if the agent performs the same action on the same observation at different timesteps, the agent may receive different rewards a...
A zero $\mathbf{x}_{*}$ of $\mathbf{f}$ is isolated if there is an open neighborhood $\Lambda$ of $\mathbf{x}_{*}$ in the domain of $\mathbf{f}$ such that
$\mathbf{f}^{-1}(\mathbf{0}) \cap \Lambda = \{\mathbf{x}_{*}\}$, or $\mathbf{x}_{*}$ is nonisol...
A nonisolated zero $\mathbf{x}_{*}$ of a smooth mapping $\mathbf{f}$ may belong to a curve, a surface, or a higher-dimensional subset of $\mathbf{f}^{-1}(\mathbf{0})$.
$\mathrm{rank}\left(\mathbf{f}_{\mathbf{x}}(\mathbf{x})\right) \geq n - k$ for all $\mathbf{x} \in \Delta_{*}$...
$\mathbf{f}_{\mathbf{x}}(\mathbf{x}_{j})^{\dagger}_{\text{rank-}r}\, \mathbf{f}(\mathbf{x}_{j}) = \mathbf{0}$
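A Newton-type iteration with a rank-truncated pseudoinverse of the Jacobian can be sketched in numpy (a toy example where the zero set is a circle, so every zero is nonisolated; this is an illustration of the idea, not the paper's exact algorithm):

```python
import numpy as np

def rank_r_pinv(J, r):
    """Rank-r truncated Moore-Penrose pseudoinverse via the SVD."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T

def newton_step(f, J, x, r):
    """One Newton-like step using the rank-r pseudoinverse of the Jacobian,
    suitable when the zero is nonisolated and J is rank-deficient there."""
    return x - rank_r_pinv(J(x), r) @ f(x)

# f(x, y) = [x^2 + y^2 - 1, 0]: the zero set is a whole circle (nonisolated),
# and the Jacobian has rank 1 everywhere on it.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, 0.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [0.0, 0.0]])
x = np.array([1.5, 0.0])
for _ in range(20):
    x = newton_step(f, J, x, r=1)
print(np.allclose(f(x), 0.0))  # True: converged onto the zero curve
```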
For any $\epsilon \in (0, 0.5]$, $H$-Aware using algorithm $A$ has competitive ratio $\min\{c_{A},\, 1 + (2 + 5\epsilon)\eta k + \epsilon\}$...
In this work, we focus on the online variant of bin packing, in which the set of items is not known in advance but is rather revealed in the form of a sequence. Upon the arrival of a new item, the online algorithm must either place it into one of the currently open bins, as long as this action does not violate the bin...
Last, we show that our algorithms can be applicable in other settings. Specifically, we show an application of our algorithms in the context of Virtual Machine (VM) placement in large data centers (?): here, we obtain a more refined competitive analysis in terms of the consolidation ratio, which reflects the maximum n...
An important application of online bin packing is Virtual Machine (VM) placement in large data centers. Here, each VM corresponds to an item whose size represents the resource requirement of the VM, and each bin corresponds to a physical machine (i.e., host) of a given capacity $k$. In the context of this appl...
The following result shows that we can express the competitive ratio of Hybrid($\lambda$) in Theorem 5 so that the capacity $k$ is replaced by the consolidation ratio $r$. We can thus exploit the fact that typically $r$ is much smaller than $k$, and improve the theoretical a...
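As a reference point for the online setting, the classical First-Fit heuristic can be sketched as follows (First-Fit is a standard baseline, not the Hybrid($\lambda$) algorithm analyzed here):

```python
def first_fit(items, capacity):
    """First-Fit online bin packing: place each arriving item into the
    first open bin with enough residual capacity, else open a new bin."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:                       # no open bin fits: open a new one
            bins.append([size])
    return bins

# Hypothetical VM resource demands arriving online, hosts of capacity k = 10.
print(len(first_fit([6, 4, 7, 2, 5, 3], capacity=10)))  # 3
```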
The results are presented in Table 1. LoCondA-HF obtains results comparable to the reference methods dedicated to point cloud generation. It can be observed that the values of the evaluated measures for HyperFlow(P) and LoCondA-HF (which uses HyperFlow(P) as a base model in the first part of the training) are on the same level...
In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Thro...
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperClou...
In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar...
$\bar{\mathcal{X}} \times \mathcal{P}_{i} \times \bar{\mathcal{Y}} \times \mathcal{Q}_{i}$ ($i = 1, \ldots, m$).
Paper organization. This paper is organized as follows. Section 2 presents a saddle point problem of interest along with its decentralized reformulation. In Section 3, we provide the main algorithm of the paper to solve such kinds of problems. In Section 4, we present the lower complexity bounds for saddle point problem...
Our technique can be generalized to non-smooth problems by using another variant of the sliding procedure [34, 15, 23]. By using a batching technique, the results can be generalized to stochastic saddle-point problems [15, 23]. Instead of the smooth convex-concave saddle-point problem, we can consider general sum-type s...
Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem, which we solve by the DMP algorithm. Similarly to Section 3, we consider an SPP in the proximal setup and introduce Lagrangian multipliers for the common variables. However, in t...
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ...
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class.
where $\hat{L} = \hat{D}^{t}\hat{D}$ is the lower-right $(|V|-1) \times (|V|-1)$ submatrix of the ...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric...
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
The remainder of this section is dedicated to expressing the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describing an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
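For intuition, a strictly fundamental cycle basis can be computed from a spanning tree by closing one cycle per non-tree edge; a small sketch:

```python
from collections import deque

def fundamental_cycles(n, edges):
    """Strictly fundamental cycle basis of a connected graph on vertices
    0..n-1: build a BFS spanning tree, then close one cycle per non-tree
    edge through the tree path between its endpoints."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    parent = {0: None}
    queue = deque([0])
    while queue:                          # BFS spanning tree rooted at 0
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)

    def root_path(v):                     # vertex list v, parent(v), ..., root
        out = []
        while v is not None:
            out.append(v); v = parent[v]
        return out

    tree_edges = {frozenset((v, p)) for v, p in parent.items() if p is not None}
    cycles = []
    for u, v in edges:
        if frozenset((u, v)) not in tree_edges:
            pu, pv = root_path(u), root_path(v)
            shared = set(pu) & set(pv)
            lca = next(x for x in pu if x in shared)   # lowest common ancestor
            cycles.append(pu[:pu.index(lca) + 1] + pv[:pv.index(lca)][::-1])
    return cycles

# 4-cycle with a chord: cyclomatic number |E| - |V| + 1 = 5 - 4 + 1 = 2.
cycles = fundamental_cycles(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
print(len(cycles))  # 2
```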
$N = N(b, k, m, \ell)$ such that for every $n \geq N$ and any group homomorphism $h \colon C_{k}(G[n]^{m}) \to (\mathbb{Z}_{2})^{b}$...
In this respect, the case of convex lattice sets, that is, sets of the form $C \cap \mathbb{Z}^{d}$ where $C$ is a convex set in $\mathbb{R}^{d}$...
In this paper we are concerned with generalizations of Helly's theorem that allow for more flexible intersection patterns and relax the convexity assumption. A famous example is the celebrated $(p,q)$-theorem [3], which asserts that for a finite family of convex sets in $\mathbb{R}^{d}$...
Two central problems in this line of research are to identify the weakest possible assumptions under which the classical theorems generalize, and to determine their key parameters, for instance the Helly number ($d+1$ for convex sets in $\mathbb{R}^{d}$...
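In $\mathbb{R}^{1}$, where the Helly number is $2$, Helly's theorem reduces to an elementary fact about intervals (pairwise intersection already forces a common point), which makes a compact sanity check:

```python
def pairwise_intersect(intervals):
    """True iff every pair of closed intervals intersects."""
    return all(a1 <= b2 and b1 <= a2
               for i, (a1, a2) in enumerate(intervals)
               for (b1, b2) in intervals[i + 1:])

def common_point(intervals):
    """Helly in R^1 (Helly number d+1 = 2): pairwise-intersecting closed
    intervals share a point iff max of left ends <= min of right ends."""
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return lo if lo <= hi else None

ivs = [(0, 5), (2, 8), (4, 9)]
print(pairwise_intersect(ivs), common_point(ivs))  # True 4
```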
Similar to the workflow described above, we start by choosing the appropriate thresholds for slicing the data space. As we want to concentrate more on the instances that are close to being predicted correctly, we move the left gray line from 25% to 35% (see Fig. 5(a.1 and a.2)). This makes the Bad slice much shorter. S...
To the best of our knowledge, little empirical evidence exists for choosing a particular measure over others. In general, target correlation and mutual information (both related to the influence between features and the dependent variable) may be good candidates for identifying important features [71]. After these firs...
The logarithmic transformation for F18_l1p in Fig. 6(a) boosts the correlation with the van class from 45% to 48%. For both features, the overall correlation with the target class increases with these transformations, while the others produce worse results, so we avoid them.
Figure 5: The process of features’ exploration in a vehicle recognition scenario. (a.1) to (a.4) depict the change of the thresholds for the different data slices to intensify the responses for borderline instances. In (b), the user excluded unimportant features and then validates the remaining features through the rad...
Transformation of features with guided decisions. For the feature transformation phase, the first step was to analyze every feature with a top-down approach according to the sorting implied in the table heatmap (Fig. 7(b)). FeatureEnVi facilitates the exploration of multiple contradictory criteria again, even while cho...
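The effect of such a transformation on target correlation can be sketched with a toy skewed feature (the values below are hypothetical; FeatureEnVi computes these statistics internally):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two 1-D arrays."""
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical skewed feature vs. target: the relation is linear in log-space.
x = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
raw = pearson(x, y)
logged = pearson(np.log1p(x), y)   # logarithmic transform, as for F18_l1p
print(logged > raw)  # True: the transform strengthens the linear correlation
```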
In machining, positioning systems need to be fast and precise to guarantee high productivity and quality. Such performance can be achieved by a model predictive control (MPC) approach tailored for tracking a 2D contour [1, 2], which however requires precise tuning and good computational abilities of the associated hardware. ...
This paper demonstrated a hierarchical contour control implementation for the increase of productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low-level cascade controller gains, to achieve contour tracking with micrometer accuracy. The MPC-planner is based on a combi...
MPC accounts for the real behavior of the machine, and the axis drive dynamics can be excited to compensate for the contour error to a large extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following variou...
D
Results for GQA-OOD are similar, with explicit methods failing to scale up to a large number of groups, while implicit methods show some improvements over StdM. As shown in Table 2, when the number of groups is small, i.e., when using a head/tail binary indicator as the explicit bias, explicit methods remain compara...
CelebA. We show accuracy for each group of CelebA in Table A3. SD and GDRO obtain the highest accuracies. As discussed previously, we observe trade-offs between blond and non-blond classes, with the improvements in the rare blond class incurring degradations in the non-blond class.
For example, in systems trained to infer hair color on the CelebA dataset [43], the majority group of non-blond males occurs 50 times more often than the minority group of blond males, resulting in systems incorrectly predicting non-blond as hair color for the minority group.
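A common baseline for mitigating such imbalance is to reweight examples inversely to their group frequency, so that rare groups contribute as much to the loss as common ones. The sketch below is a generic illustration of this idea (the function name and normalization are our own choices, not the method of any particular cited system):

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Per-example weights inversely proportional to group frequency,
    normalized so the weights average to 1; rare groups (e.g. blond
    males) receive the largest weights."""
    counts = Counter(group_labels)
    n, k = len(group_labels), len(counts)
    return [n / (k * counts[g]) for g in group_labels]

# 6 majority ("nm" = non-blond male) vs 2 minority ("bm" = blond male)
w = inverse_frequency_weights(["nm"] * 6 + ["bm"] * 2)
```

Each minority example here gets weight 2.0 versus about 0.67 for majority examples, counteracting the frequency gap while keeping the average loss scale unchanged.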
In this set of experiments, we compare the resistance to explicit and implicit biases. We primarily focus on the Biased MNISTv1 dataset, reserving each individual variable as the explicit bias in separate runs of the explicit methods, while treating the remaining variables as implicit biases. To ease analysis, we compu...
Results on a simpler setting. We further study bias exploitation on CelebA. For this, we plot the improvement over the standard model ($IOSM$) in Fig. 5, which is the accuracy gain over the standard model on each dataset group. The improvements in blond (minority group) incur...
D
In this paper, we provide a systematic review of appearance-based gaze estimation methods using deep learning algorithms. As shown in Fig. 1, we discuss these methods from four perspectives: 1) deep feature extraction, 2) deep neural network architecture design, 3) personal calibration, and 4) device and platform.
In this survey, we present a comprehensive overview of deep learning-based gaze estimation methods. Unlike conventional gaze estimation methods that require dedicated devices, the deep learning-based approaches regress the gaze from the eye appearance captured by web cameras. This makes it easy to implement the al...
Convolutional neural networks have been widely used in many computer vision tasks [88]. They also demonstrate superior performance in the field of gaze estimation. In this section, we first review the existing gaze estimation methods from the learning strategy perspective, i.e., the supervised CNNs and the semi-/self-/u...
From the deep feature extraction perspective, we describe the strategies for extracting features from eye images, face images and videos. Under the deep neural network architecture design perspective, we first review methods based on the supervised strategy, containing the supervised, self-supervised, semi-supervised a...
D
Table 2 reports the classification rates on the SMFRD dataset. The highest recognition rate, 88.9%, is achieved by ResNet-50 through the quantization of DRF features. This performance is achieved using 70 codewords that feed an MLP classifier. The AlexNet model achieved good recognition rates compared to the VGG-16 ...
The efficiency of each pre-trained model depends on its architecture and the abstraction level of the extracted features. When dealing with real masked faces, VGG-16 has achieved the best recognition rate, while ResNet-50 outperformed both VGG-16 and AlexNet on the simulated masked faces. This behavior can be explaine...
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieved a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner. It consists of applying a TL technique to fine-tune the ...
Covariance-based features have been applied in hariri20163d and achieved high recognition performance on 3D datasets in the presence of occluded regions. We have employed this method using 2D-based features (texture, gray level, LBP) to extract covariance descriptors. The evaluation on the RMFRD and SMFRD datasets co...
We have tested the face recognizer presented in luttrell2018deep that achieved a good recognition accuracy on two subsets of the FERET database phillips1998feret . This technique is based on transfer learning (TL), which employs pre-trained models and fine-tunes them to recognize masked faces from RMFRD and SMFRD dat...
D
Note: this is an extended version of an eponymous paper that appeared in FSCD 2022 that includes further examples (Examples 1, 1, and 1), a more straightforward presentation of the metatheory (Section 4) based on Kripke logical relations [Plo73], and a representative set of the corresponding proofs (Sections 3 and 4).
Sized types are a type-oriented formulation of size-change termination [LJBA01] for rewrite systems [TG03, BR09]. Sized (co)inductive types [BFG+04, Bla04, Abe08, AP16] gave way to sized mixed inductive-coinductive types [Abe12, AP16]. In parallel, linear size arithmetic for sized inductive types [CK01, Xi01, BR06] was...
One solution that avoids syntactic checks is to track the flow of (co)data size at the type level with sized types, as pioneered by Hughes et al. [HPS96] and further developed by others [BFG+04, Bla04, Abe08, AP16]. Inductive and coinductive types are indexed by the height and observable depth of their data and codata...
Moreover, some prior work, which is based on sequential functional languages, encodes recursion via various fixed point combinators that make both mixed inductive-coinductive programming [Bas18] and substructural typing difficult, the latter requiring the use of the ! modality [Wad12]. Thus, like $F_\omega^{\mathrm{cop}}$...
Adding (co)inductive types and terminating recursion (including productive corecursive definitions) to any programming language is a non-trivial task, since only certain recursive programs constitute valid applications of (co)induction principles. Briefly, inductive calls must occur on data smaller than the input and, ...
D
Figure 15: Comparison of computational efficiency with existing works. The bars and polyline correspond to the left and right Y-axes, respectively. (a) Efficiency comparison with the AFP scheme [10] on the owner side. (b) Efficiency comparison with [3, 27, 28] on the cloud side under different image pixels. (c) Effici...
The owner-side efficiency and scalability performance of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Compared to FairCMS-I, it is easy to see that in FairCMS-II the cloud’s overhead is increased considerably due to the ado...
In this section, we bring forward two cloud media sharing schemes, namely FairCMS-I and FairCMS-II. FairCMS-I essentially delegates the re-encryption management of LUTs to the cloud, thus significantly reducing the overhead of the owner side. Nevertheless, FairCMS-I cannot achieve IND-CPA security for the media conten...
This paper solves the three problems faced by cloud media sharing and proposes two schemes FairCMS-I and FairCMS-II. FairCMS-I gives a method to transfer the management of LUTs to the cloud, enabling the calculation of each user’s D-LUT in the ciphertext domain and its subsequent distribution. However, utilizing the s...
Second, we compare the cloud-side efficiency of FairCMS-I and FairCMS-II, and the results are presented in Fig. 13. As shown therein, the cloud-side efficiency of FairCMS-I is significantly higher than that of FairCMS-II, thus validating our analysis in Section VII. The main reason for the cloud-side efficiency gain of...
C
As deep neural networks (DNNs) have proven successful in a variety of fields, researchers have begun using them to learn high-order feature interactions due to their deeper structures and nonlinear activation functions. The general approach is to concatenate the representations of different feature fields and feed the...
Factorization-machine supported Neural Networks (FNNs) Zhang et al. (2016) use pre-trained factorization machines to create field embeddings before applying a DNN, while Product-based Neural Networks (PNNs) Qu et al. (2016) model both second-order and high-order interactions through the use of a product layer between t...
Modeling feature interactions is a crucial aspect of predictive analytics and has been widely studied in the literature. FM Rendle (2010) is a popular method that learns pairwise feature interactions through vector inner products. Since its introduction, several variants of FM have been proposed, including Field-aware ...
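FM's pairwise term $\sum_{i<j}\langle v_i, v_j\rangle x_i x_j$ can be evaluated in time linear in the number of features via a well-known algebraic identity, rather than the naive quadratic double sum. The NumPy sketch below illustrates this (the function name and the sanity check are our own, not from any cited implementation):

```python
import numpy as np

def fm_pairwise(x, V):
    """FM pairwise interaction sum_{i<j} <v_i, v_j> x_i x_j, computed
    with the O(k*d) identity
    0.5 * sum_f [(sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2]."""
    s = V.T @ x                      # (k,) per-factor linear sums
    s2 = (V ** 2).T @ (x ** 2)       # (k,) per-factor squared sums
    return 0.5 * float(np.sum(s ** 2 - s2))

# Sanity check against the naive O(d^2) double sum.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
V = rng.normal(size=(5, 3))
naive = sum(float(V[i] @ V[j]) * x[i] * x[j]
            for i in range(5) for j in range(i + 1, 5))
```

The identity is what keeps FM training cheap even with many features; the HOFM variants mentioned below lose this efficiency for orders above two.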
Neural Factorization Machines (NFM) He and Chua (2017) design a bi-interaction layer to learn the pairwise feature interaction and apply DNN to learn the higher-order ones. Wide&Deep Cheng et al. (2016) introduces a hybrid architecture containing both shallow and deep components to jointly learn low-order and high-orde...
One of the main limitations of FM is that it is not able to capture higher-order feature interactions, which are interactions between three or more features. While higher-order FM (HOFM) has been proposed Rendle (2010, 2012) as a way to address this issue, it suffers from high complexity due to the combinatorial expans...
A
The results are shown in Figure 7. On both of these instances, progress with the simple step size is slowed down, or even seems stalled, compared to the stateless version, because many halving steps were performed in the early iterations for the simple step size, which penalizes progress over the whole run.
Note that Algorithm 7, presented above, is stateless since it is equivalent to Algorithm 2 with $\phi_t$ reset to zero between every outer iteration. This resetting step also implies that the per-iteration convergence rate of the stateless step...
The stateless step size does not suffer from this problem; however, because the halvings have to be performed at multiple iterations when using the stateless strategy, the per-iteration cost of the stateless step size is about three times that of the simple step size.
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is...
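Tracking the minimum Frank-Wolfe gap over a run can be sketched with vanilla Frank-Wolfe on the probability simplex. Note this toy uses the classic $2/(t+2)$ step size, not the simple or stateless step-size strategies analyzed in the text, and all names are our own:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=500):
    """Vanilla Frank-Wolfe on the probability simplex, tracking the
    minimum Frank-Wolfe gap g_t = <grad(x_t), x_t - v_t>, where v_t is
    the vertex returned by the linear minimization oracle."""
    x = x0.astype(float).copy()
    min_gap = np.inf
    for t in range(steps):
        g = grad(x)
        i = int(np.argmin(g))            # LMO over the simplex: vertex e_i
        v = np.zeros_like(x)
        v[i] = 1.0
        min_gap = min(min_gap, float(g @ (x - v)))
        x += 2.0 / (t + 2) * (v - x)     # classic 2/(t+2) step size
    return x, min_gap

# Minimize ||x - c||^2 over the simplex; c lies inside it, so x -> c
# and the minimum gap shrinks roughly like O(1/t).
c = np.array([0.2, 0.3, 0.5])
x, min_gap = frank_wolfe_simplex(lambda x: 2.0 * (x - c),
                                 np.array([1.0, 0.0, 0.0]))
```

The gap is a computable optimality certificate, which is why its minimum over the run is a natural quantity to bound.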
B
In the semi-streaming model, which is the most commonly established variant of the graph stream model, the algorithm is given $\widetilde{O}(n)$ space (the $\widetilde{O}$ hides poly-logarithmic terms, thus $\widetilde{O}(n)=n\cdot\operatorname{poly}(\log n)$)...
Given a graph on $n$ vertices, there is a deterministic $(1+\varepsilon)$-approximation algorithm for maximum matching that runs in $\operatorname{poly}(1/\varepsilon)$ passes in the semi-streaming model.
Let $\Delta$ be an upper bound on the structure size and $h(\varepsilon)|M|$ be the maximum number of active nodes at the end of a phase. Fix a phase. Let $k\geq 3$ be an integer parameter. Let $M$ be the matching at the beginning of that ph...
In the first pass, we apply a simple greedy algorithm to find a maximal matching, hence a 2-approximation. This 2-approximate maximum matching is our starting matching. The rest of our algorithm is divided into multiple phases. In each phase, we iteratively improve the approximation ratio of our current matchin...
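The first-pass greedy step can be sketched as a generic one-pass routine (our own illustration, not the authors' code): an edge is kept exactly when both endpoints are still free, which yields a maximal matching and hence a 2-approximation in $O(n)$ space.

```python
def greedy_maximal_matching(edge_stream):
    """One-pass greedy matching: keep an edge iff both endpoints are
    still unmatched. The result is maximal, hence at least half the
    size of a maximum matching, and only O(n) space is used."""
    matched, matching = set(), []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On the path 1-2-3-4 streamed as below, greedy keeps only (2, 3),
# while the maximum matching {(1, 2), (3, 4)} has two edges -- the
# 2-approximation bound is tight.
m = greedy_maximal_matching([(2, 3), (1, 2), (3, 4)])
```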
It is known that finding an exact matching requires linear space in the size of the graph and hence it is not possible to find an exact maximum matching in the semi-streaming model [FKM+04], at least for sufficiently dense graphs. Nevertheless, this result does not apply to computing a good approximation to the maximu...
D
In decentralized optimization, efficient communication is critical for enhancing algorithm performance and system scalability. One major approach to reduce communication costs is considering communication compression, which is essential especially under limited communication bandwidth.
Recently, several compression methods have been proposed for distributed and federated learning, including [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40]. Recent works have tried to combine the communication compression methods with decentralized optimization.
Many methods have been proposed to solve the problem (1) under various settings on the optimization objectives, network topologies, and communication protocols. The paper [10] developed a decentralized subgradient descent method (DGD) with diminishing stepsizes to reach the optimum for convex objective functions over a...
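A DGD-style iteration can be sketched in a few lines (our own toy implementation, not the code of [10]): each agent mixes with its neighbors through a doubly stochastic matrix and then takes a local gradient step with a diminishing step size.

```python
import numpy as np

def dgd(W, grads, x0, steps=300):
    """Decentralized gradient descent sketch: every agent first averages
    with its neighbors via the doubly stochastic mixing matrix W, then
    takes a local gradient step with step size alpha_t = 1/sqrt(t+1)."""
    X = x0.astype(float).copy()                  # (n_agents, dim)
    for t in range(steps):
        alpha = 1.0 / np.sqrt(t + 1)
        G = np.stack([g(x) for g, x in zip(grads, X)])
        X = W @ X - alpha * G
    return X

# 3 agents, local objectives f_i(x) = 0.5 * (x - b_i)^2; the minimizer
# of the average objective is mean(b) = 2.0.
W = np.full((3, 3), 0.25) + 0.25 * np.eye(3)     # doubly stochastic
b = [1.0, 2.0, 3.0]
grads = [lambda x, bi=bi: x - bi for bi in b]
X = dgd(W, grads, np.zeros((3, 1)))
```

With the diminishing step size, all agents drift toward consensus at the global minimizer, illustrating why DGD needs decaying steps to reach the exact optimum.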
To reduce the error from compression, some works [48, 49, 50] increase compression accuracy as the iteration grows to guarantee the convergence. However, they still need high communication costs to get highly accurate solutions. Techniques to remedy this increased communication costs include gradient difference compres...
Subsequently, decentralized optimization methods for undirected networks, or more generally, with doubly stochastic mixing matrices, have been extensively studied in the literature; see, e.g., [11, 12, 13, 14, 15, 16]. Among these works, EXTRA [14] was the first method that achieves linear convergence for strongly conv...
A
The inclusion of noise $y_m$ in the optimization process adds an interesting dimension to the standard loss function of a machine learning model. While the primary objective is still the minimization of the loss, the maximization of the noise term pl...
Data and model. We consider the benchmark of image classification on the CIFAR-10 [46] dataset. It contains 50,000 and 10,000 images in the training and validation sets, respectively, equally distributed over 10 classes. To emulate the distributed scenario, we partition the ...
Discussions. We compare algorithms based on the balance of the local and global models, i.e., whether the algorithm is able to train both the local and global models well; if so, we say the algorithm achieves the FL balance. The results show that the Local SGD technique (Algorithm 3) outperformed Algorithm 1 only with a fairly fre...
Setting. To train ResNet18 on CIFAR-10, one can use stochastic gradient descent with momentum 0.9, a learning rate of 0.1 and a batch size of 128 (40 batches = 1 epoch). This is one of the default learning settings. Based on these settings, we build our settings using the intuitio...
Unlike classical distributed learning methods, the FL approach assumes that data is not stored within a centralized computing cluster but is stored on clients’ devices, such as laptops, phones, and tablets. This formulation of the training problem gives rise to many additional challenges, including the privacy of clien...
A
There is a rich polytope of possible equilibria to choose from; however, an MS must pick one at each time step. There are three competing properties which are important in this regard: exploitation, robustness, and exploration. For exploitation, maximum welfare equilibria appear to be useful. However, to prevent JPSRO...
In Section 2 we provide background on a) correlated equilibrium (CE), an important generalization of NE, b) coarse correlated equilibrium (CCE) (Moulin & Vial, 1978), a similar solution concept, and c) PSRO, a powerful multi-agent training algorithm. In Section 3 we propose novel solution concepts called Maximum Gini ...
We have shown that JPSRO converges to an NF(C)CE over joint policies in extensive form and stochastic games. Furthermore, there is empirical evidence that some MSs also result in high value equilibria over a variety of games. We argue that (C)CEs are an important concept in evaluating policies in n-player, general-sum ...
PSRO has proved to be a formidable learning algorithm in two-player, constant-sum games, and JPSRO, with (C)CE MSs, is showing promising results on n-player, general-sum games. The secret to the success of these methods seems to lie in (C)CEs ability to compress the search space of opponent policies to an expressive an...
In this work we propose using correlated equilibrium (CE) (Aumann, 1974) and coarse correlated equilibrium (CCE) as a suitable target equilibrium space for n-player, general-sum games (we mean games, also called environments, in a very general sense: extensive form games, multi-agent MDPs and POMDPs (stochastic games)...
B
The contribution of this paper is two-fold. In Section 3, we provide a tight measure of the level of overfitting of some query with respect to previous responses. In Sections 4 and 5, we demonstrate a toolkit to utilize this measure, and use it to prove new generalization properties of fundamental noise-addition mecha...
Recent years have seen growing recognition of the role of adaptivity in causing overfitting and thereby reducing the accuracy of the conclusions drawn from data. Intuitively, allowing a data analyst to adaptively choose the queries that she issues potentially leads to misleading conclusions, because the results of prio...
This generalization guarantee, which nearly avoids dependence on the range of the queries, raises the question of whether it is possible to extend these results to handle unbounded queries. Clearly such a result would not be true without some bound on the tail distribution for a single query, so we focus in the next theo...
One small extension of the present work would be to consider queries with range $\mathbb{R}^d$. It would also be interesting to extend our results to handle arbitrary normed spaces, using appropriate noise such as perhaps the Laplace mechani...
recently established a formal framework for understanding and analyzing adaptivity in data analysis, and introduced a general toolkit for provably preventing the harms of choosing queries adaptively—that is, as a function of the results of previous queries. This line of work has established that enforcing that computat...
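As a concrete instance of a noise-addition mechanism in this toolkit, the sketch below answers a bounded statistical query with Gaussian noise calibrated to the query's sensitivity, using the standard $(\varepsilon,\delta)$ calibration; the function name and parameters are illustrative, not from the cited framework.

```python
import numpy as np

def gaussian_mechanism(data, query, sensitivity, epsilon, delta, rng):
    """Answer a statistical query with Gaussian noise scaled to its
    sensitivity; sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
    is the classic calibration giving (epsilon, delta)-differential
    privacy for epsilon < 1."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return float(query(data) + rng.normal(scale=sigma))

rng = np.random.default_rng(0)
data = rng.uniform(size=1000)     # 1000 points in [0, 1]
# The empirical mean of [0, 1]-bounded data has sensitivity 1/n.
ans = gaussian_mechanism(data, np.mean, 1 / len(data),
                         epsilon=0.5, delta=1e-5, rng=rng)
```

Here the noise scale is about 0.01, so the answer stays close to the true mean while the randomization limits how much any adaptive analyst can overfit to the sample.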
C
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
The goal of this paper is to open up a new research direction aimed at understanding the power of preprocessing in speeding up algorithms that solve NP-hard problems exactly [26, 31]. In a nutshell, this new direction can be summarized as: how can an algorithm identify part of an optimal solution in an efficient prepro...
As the first step of our proposed research program into parameter reduction (and thereby, search space reduction) by a preprocessing phase, we present a graph decomposition for Feedback Vertex Set which can identify vertices S𝑆Sitalic_S that belong to an optimal solution; and which therefore facilitate a reduction fr...
This line of investigation opens up a host of opportunities for future research. For combinatorial problems such as Vertex Cover, Odd Cycle Transversal, and Directed Feedback Vertex Set, which kinds of substructures in inputs allow parts of an optimal solution to be identified by an efficient preprocessing phase? Is i...
D
With the emergence of image harmonization datasets consisting of paired training data (see Section IV-E), abundant image harmonization methods [156, 18, 20, 97, 102, 146, 14] using paired supervision have been developed. Tsai et al. [156] proposed the first end-to-end CNN network for image harmonization and leveraged a...
Inoue et al. [57] developed a multi-task framework with two decoders accounting for depth map prediction and ambient occlusion map prediction respectively. ARShadowGAN [92] proposed an attention-guided residual network. The network predicts two attention maps for background shadow and occluder respectively, which are c...
Cun and Pun [22] designed an additional Spatial-Separated Attention Module to deal with foreground and background feature maps separately. Hao et al. [49] employed self-attention [165] mechanism to propagate relevant information from background to foreground.
Zhang et al. [202] proposed to make sequential decisions to produce a reasonable placement by using reinforcement learning. Azadi et al. [2] employed STN to warp the foreground and relative appearance flow network to change the viewpoint of foreground. Additionally, they investigated on self-consistency constraint, tha...
Blind image harmonization: Most image harmonization methods require the foreground mask as input, which means that the inharmonious region is known in advance. However, in real-world applications, we may not know the exact inharmonious region in advance. Image harmonization without foreground mask is called blind imag...
B
Efficient taxi allocation is crucial for the passenger transportation services in smart cities. To address this challenge, we leverage the data available in CityNet and present benchmarks for the taxi dispatching task. In this task, operators are responsible for dispatching available taxis to waiting passengers in rea...
Table VIII presents the taxi dispatching results for Chengdu, where the completion rate denotes the ratio of completed requests within all requests, and accumulated revenue represents the total revenue earned by all taxis throughout the day. Based on the experimental results, we draw the following conclusions:
Problem Statement. To address the taxi dispatching task, we learn a real-time dispatching policy based on historical passenger requests. At every timestamp $\tau$, we use this policy to dispatch available taxis to current passengers, with the aim of maximizing the total revenue of all taxis in the long run. To...
The LPA algorithm is a reinforcement learning-based approach [6]. We first adopt SARSA [6] to learn the expected long-term revenue of each grid in each period. Based on these expected revenues, we dispatch taxis to passengers using the same optimization formulation as Eqn. (13), with the exception that we replace $A(i,j)$...
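As a cheap stand-in for the assignment step, the dispatch can be sketched greedily by descending score (this is our illustrative simplification; the optimization formulation of Eqn. (13) solves the matching exactly, and the score name `A(i, j)` is borrowed only as a placeholder for expected revenue):

```python
def greedy_dispatch(scores):
    """Greedily match taxis to passengers by descending score A(i, j)
    (e.g. expected revenue); each taxi and each passenger is assigned
    at most once."""
    taken_taxis, taken_passengers, plan = set(), set(), []
    for (taxi, passenger), s in sorted(scores.items(),
                                       key=lambda kv: -kv[1]):
        if taxi not in taken_taxis and passenger not in taken_passengers:
            plan.append((taxi, passenger, s))
            taken_taxis.add(taxi)
            taken_passengers.add(passenger)
    return plan

scores = {("t1", "p1"): 5.0, ("t1", "p2"): 3.0, ("t2", "p1"): 4.0}
plan = greedy_dispatch(scores)   # greedy keeps only ("t1", "p1", 5.0)
```

The example also shows why an exact formulation matters: greedy earns 5.0 here, while assigning t1 to p2 and t2 to p1 would earn 7.0 in total.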
B
The numbers $r$ and $\lambda$ are respectively the range of the response variable and a constant determining how much a deviation from optimal coverage should be penalized. The main idea is that the penalty should be proportional to the size of the intervals and that the penalty should be greater if ...
For each of the selected models, Fig. 4 shows the best five models in terms of average width, excluding those that do not (approximately) satisfy the coverage constraint (2). This figure shows that there is quite some variation in the models. There is not a clear best choice. Because on most data sets the models produc...
Although intuitively sensible, this method admits no true theoretical derivation. It is derived solely from heuristic arguments. Later, Pearce et al. introduced an alternative to the LUBE loss where some of the ad-hoc choices were formalised pearce2018high . The most important modifications are the replacement of MPIW ...
As stated before, there are two quantities that are mainly used to evaluate the performance of interval estimators: the degree of coverage (1) and the average size of the prediction intervals (3). The idea that optimal prediction intervals should saturate inequality (2) and minimize the average size was dubbed the Hig...
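Both evaluation quantities can be computed directly from the intervals; a small NumPy sketch follows (names ours, assuming the coverage degree (1) is the fraction of targets falling inside their intervals and (3) is the mean interval width):

```python
import numpy as np

def picp_mpiw(y, lower, upper):
    """PICP: fraction of targets covered by their interval (the
    coverage degree); MPIW: mean width of the prediction intervals."""
    inside = (lower <= y) & (y <= upper)
    return float(inside.mean()), float((upper - lower).mean())

y  = np.array([1.0, 2.0, 3.0, 4.0])
lo = np.array([0.5, 1.8, 3.2, 3.0])
hi = np.array([1.5, 2.5, 3.9, 5.0])
picp, mpiw = picp_mpiw(y, lo, hi)   # 3 of 4 targets covered; mean width 1.1
```

The trade-off discussed in the text is visible even here: the third interval is narrow but misses its target, so tightening widths tends to lower coverage.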
In this study several types of prediction interval estimators for regression problems were reviewed and compared. Two main properties were taken into account: the coverage degree and the average width of the prediction intervals. It was found that without post-hoc calibration the methods derived from a probabilistic mo...
B
It has been widely shown in NLP and related fields \parencitespeechbert,vilbert,videobert,proteinbert that, by storing knowledge in huge numbers of parameters and carrying out task-specific fine-tuning, the knowledge implicitly encoded in the parameters of a PTM can be transferred to benefit a variety of downstream tas...
Fig. 2(b) shows the fine-tuning architecture for note-level classification. While the Transformer uses the hidden vectors to recover the masked tokens during pre-training, it has to predict the label of an input token during fine-tuning, by learning from the labels provided in the training data of the downstream task ...
As a self-supervised method, MLM needs no labelled data relating to the downstream tasks for pre-training. Following BERT, among all the masked tokens, we replace 80% by MASK tokens, 10% by a randomly chosen token and leave the last 10% unchanged. Doing so has the effect of helping mitigate the mismatch between pre-tra...
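The 80/10/10 corruption rule can be sketched in a few lines of Python (a generic illustration over string tokens, not our actual preprocessing code; the masking probability and split follow the BERT recipe described above):

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", p=0.15, seed=0):
    """BERT-style corruption: select ~p of the positions; of the
    selected ones, 80% become [MASK], 10% a random vocabulary token,
    and 10% are left unchanged. Returns the corrupted sequence and a
    dict position -> original token (the reconstruction targets)."""
    rng = random.Random(seed)
    out, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < p:
            targets[i] = tok
            r = rng.random()
            if r < 0.8:
                out[i] = mask_token
            elif r < 0.9:
                out[i] = rng.choice(vocab)
            # else: keep the original token, mitigating the
            # pre-training/fine-tuning mismatch
    return out, targets

seq = list("abcdefgh") * 4
corrupted, targets = mask_tokens(seq, vocab=list("abcdefgh"))
```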
For PTMs, an unsupervised or self-supervised pre-training task is needed to set the objective function for learning. We employ the masked language modelling (MLM) pre-training strategy of BERT, randomly masking 15% of the tokens of an input sequence, and the Transformer will reconstruct these masked tokens from the context of...
Figure 2: Illustration of the (a) pre-training procedure of our model for a CP sequence, where the model learns to predict (reconstruct) randomly-picked super tokens masked in the input sequence (each consisting of four tokens, as the example one shown in the middle with time step $t$); and (b), (c) the fine-t...
A
And of course we have to use a different color for each vertex, so $BBC_\lambda(K_n,T)\geq n$ – thus $BBC_\lambda(K_n,T)$...
To achieve the same result for forest backbones we only need to add some edges that would make the backbone connected and spanning. However, we can always make a forest connected by adding edges between some leaves and isolated vertices, and we will not increase the maximum degree of the forest, as long as $\Delta(F)\geq 2$...
In this section we will proceed as follows: we first introduce the so-called red-blue-yellow $(k,l)$-decomposition of a forest $F$ on $n$ vertices, which finds a set $Y$ of size at most $l$ such that we can split $V(F)\setminus Y$...
In this paper, we turn our attention to the special case when the graph is complete (denoted $K_n$) and its backbone is a (nonempty) tree or a forest (which we will denote by $T$ and $F$, respectively). Note that it has a natural in...
The linear running time follows directly from the fact that we compute $c$ only once and we can additionally pass through the recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hen...
A