| context (string, 250–3.79k chars) | A (string, 250–4.99k chars) | B (string, 250–8.2k chars) | C (string, 250–4.17k chars) | D (string, 250–3.69k chars) | label (string, 4 classes) |
|---|---|---|---|---|---|
| $F^{\prime}(a,b;c;z)=\frac{ab}{c}F(a+1,b+1;c+1;z)$ ... | $\ldots\frac{d}{dx}x^{m}\frac{d}{dx}F(a,b;c;z)$, $(-1)^{a}\binom{b-1}{-a}\left[\frac{d^{2}}{dx^{2}}\ldots\right.$ ... | $\frac{F(a,b;c;z)}{F(a+1,b+1;c+1;z)}\equiv\frac{-bz\ldots}{\ldots}\,\frac{\frac{(a+1)(c-b)z}{c(c+1)}}{\frac{(a+1-b)z}{c+1}+1-\cdots}\,\frac{\frac{(a+2)(c+1-b)z}{(c+1)(c+2)}}{\frac{(a+2-b)z}{c+2}+1-\cdots}$ ... | $F(a,b;c;z)=(-1)^{a}x^{-m}\frac{1}{\binom{b-1}{-a}}R_{n}^{m}$ ... | $\frac{R_{n}^{m}(x)}{{R_{n}^{m}}^{\prime}(x)}=\frac{x}{m+2z\frac{F^{\prime}(a,b;c;z)}{F(a,b;c;z)}}.$ | B |
| Multiplying $g$ on the left by the transvection $t_{ij}(\alpha)$ effects an elementary row operation that adds $\alpha$ times the $j$th row to the $i$th row. Similarly, right multiplic... | The key idea is to transform the diagonal matrix with the help of row and column operations into the identity matrix, in a way similar to an algorithm to compute the elementary divisors of an integer matrix, as described for example in [23, Chapter 7, Section 3]. Note that row and column operations are effected by left... | Using the row operations, one can reduce $g$ to a matrix with exactly one nonzero entry in its $d$th column, say in row $r$. Then the elementary column operations can be used to reduce the other entries in row $r$ to zero. | Continuing recursively, $g$ can be reduced to a matrix with exactly one nonzero entry in each row and each column. Moreover, at the end of the procedure, the products of the transvections $t_{ij}(\alpha)$ on the... | The idea is to eliminate all other entries in the $c$th column, namely to apply elementary row operations to make the entries in rows $i=r+1,\ldots,d$ of column $c$ equal to zero. Specifically, $g$ is multiplied on the left by the transvec... | B |
| where $\Omega\subset\mathbb{R}^{d}$, with $d=2$ or $3$ for simplicity, is an open bounded domain with polyhedral boundary $\partial\Omega$, and the symmetric tensor $\mathcal{A}\in[L^{\infty}(\Omega)]_{\mathrm{sym}}^{d\times d}$... | It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85... | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ... | As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local comput... | In [MR2718268] it is shown that the number of eigenvalues that are very large is related to the number of connected sub-regions on $\bar{\tau}\cup\bar{\tau}^{\prime}$ with large coefficien... | A |
| Moreover, (iii) a back-stable edge (e.g. the one at $e_{r}$) remains back-stable when we change another edge (e.g. the one at $e_{s}$ or $e_{t}$... | It is easy to compute one 3-stable triangle in $O(n)$ time; we show how to do this in section 4 (footnote 1: Alg-DS fails to find one 3-stable triangle and so we introduce the algorithm in section 4. This algorithm in section 4 is not the same as and does not originate from Alg-DS; see appendix A.2). ... | Our algorithm given in section 4 (denoted by Alg-One) is different from Alg-DS. First, step 1 of Alg-One sets the initial value of $(r,s,t)$ differently from the initial value $(1,2,3)$ used by Alg-DS. | Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is called "involved" by its authors, as it contains complicated subroutines for handling many subcases. | B |
| In the lower part of the pipeline, we extract features from tweets and combine them with the creditscore to construct the feature vector in a time series structure called the Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification. | CrowdWisdom: Similar to [18], the core idea is to leverage the public's common sense for rumor detection: if more people deny or doubt the truth of an event, this event is more likely to be a rumor. For this purpose, [18] use an extensive list of bipolar sentiments with a set of combinational rules. In... | Early in an event, the related tweet volume is scanty and there is no clear propagation pattern yet. For the credibility model we therefore leverage the signals derived from tweet contents. Related work often uses aggregated content [18, 20, 32], since individual tweets are often too short and contain slender contex... | at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha... | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesired at critical moments. We trade this off by debunking at the single-tweet le... | B |
| The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile continuing to optimize long after we have zero training ... | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | We should not rely on plateauing of the training loss, or on the loss (logistic, exp, or cross-entropy) evaluated on validation data, as a measure to decide when to stop. Instead, we should look at the 0–1 error on the validation dataset. We might improve the validation and test errors even when the decrease ... | Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set for which $\exists\,\mathbf{x}\in\mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}}<0$... | C |
| To construct the training dataset, we collected rumor stories from the rumor tracking websites snopes.com and urbanlegends.about.com. In more detail, we crawled 4300 stories from these websites. From the story descriptions we manually constructed queries to retrieve the relevant tweets for the 270 rumors with highest i... | the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, that could capture more hidden meaningful signal than only enquiries to debunk rumor... | Training data for single tweet classification. An event might include sub-events for which relevant tweets are rumorous. To deal with this complexity, we train our single-tweet learning model only with manually selected breaking and subless events from the above dataset. In the end, we used 90 rumors and 90 news assoc... | To construct the training dataset, we collected rumor stories from the rumor tracking websites snopes.com and urbanlegends.about.com. In more detail, we crawled 4300 stories from these websites. From the story descriptions we manually constructed queries to retrieve the relevant tweets for the 270 rumors with highest i... | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesired at critical moments. We trade this off by debunking at the single-tweet le... | B |
| $\ldots\mathcal{C}_{k})\,\mathsf{f}^{*}_{m}(\bar{a})$, i.e. $\mathrm{score}(\bar{a})=\sum_{m\in M}P(\mathcal{C}_{k}\mid e,t)\,P(\mathcal{T}\ldots$ ... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | to add additional features from $\mathcal{M}^{1}$. The feature vector of $\mathcal{M}_{LR}^{2}$ consists of ... | We propose two sets of features, namely, (1) salience features (taking into account the general importance of candidate aspects), mainly mined from Wikipedia, and (2) short-term interest features (capturing a trend or timely change), mined from the query logs. In addition, we also leverage click-flow relatednes... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | C |
| RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | the fundamental operation in the proposed SMC-based MAB Algorithm 1 is to sequentially update the random measure $p_{M}(\theta_{t,a}\mid\mathcal{H}_{1:t})$... | we propagate forward the sequential random measure $p_{M}(\theta_{t,a}\mid\mathcal{H}_{1:t})$... | SMC weights are updated based on the likelihood of the observed rewards: $w_{t,a}^{(m)}\propto p_{a}(y_{t}\mid x_{t},\theta_{t,a}^{(m)})$... | The techniques used in these success stories are grounded on statistical advances in sequential decision processes and multi-armed bandits. The MAB crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making. | D |
| Table 2 gives an overview of the number of different measurements that are available for each patient (footnote 1: for patient 9, no data is available). The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14. | The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app. | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients. For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries; it i... | C |
| Image-to-image learning problems require the preservation of spatial features throughout the whole processing stream. As a consequence, our network does not include any fully-connected layers and reduces the number of downsampling operations inherent to classification models. We adapted the popular VGG16 architecture S... | Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which... | Image-to-image learning problems require the preservation of spatial features throughout the whole processing stream. As a consequence, our network does not include any fully-connected layers and reduces the number of downsampling operations inherent to classification models. We adapted the popular VGG16 architecture S... | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that result... | To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task (Cornia et al., 2018; Liu and Han, 2018), but we argue that a carefully chosen decoder architect... | A |
| On certain graph classes, the SSE conjecture is equivalent to the Unique Games Conjecture [35] (see [44, 45]), which, in its turn, was used to show that many approximation algorithms are tight (see [36]) and is considered a major conjecture in inapproximability. However, some works seem to provide evidence that could ... | Our strongest positive result about the approximation of the locality number will be derived from the reduction mentioned above (see Section 5.2). However, we shall first investigate in Section 5.1 the approximation performance of several obvious greedy strategies to compute the locality number (with "greedy strategie... | Since a marking sequence is just a linear arrangement of the symbols of the input word, computing marking sequences seems to be well tailored to greedy algorithms: until all symbols are marked, we choose an unmarked symbol according to some greedy strategy and mark it. Unfortunately, we can formally show that many nat... | This proposition points out that even simple words can have only optimal marking sequences that are not block-extending. In terms of greedy strategies, however, Proposition 5.4 only shows a lower bound of roughly 2 for the approximation ratio of any greedy algorithm that employs some block-extending greedy strategy (... | Even though the reduction from MinLoc to MinPathwidth yields an $O(\sqrt{\log(\textsf{opt})}\log(n))$-approximation algorithm for MinLoc, it is also important to directly investigate ... | B |
| Application/Notes (footnote 13: in parentheses, the databases used by the paper or by the papers in the subsection. In the 'PCG/Physionet 2016 Challenge' subtable all papers use PHY besides [113], and in the 'Other signals' subtable all papers use private databases besides [118].) | In [119] the authors trained a semi-supervised, multi-task bi-directional LSTM on data from 14011 users of the Cardiogram app for detecting diabetes, high cholesterol, high BP, and sleep apnoea. Their results indicate that the heart's response to physical activity is a salient biomarker for predicting the onset of a dis... | DBNs have also been used in combination with structured data, besides RNNs and AEs. In [73] the authors first performed a statistical analysis of a dataset with 4244 records to find variables related to cardiovascular disease from demographics and lifestyle data (age, gender, cholesterol, high-density lipoprotein, SBP, D... | Accuracy (footnote 12: there is a wide variability in results reporting. The result of [77] is for ventricular/supraventricular ectopic beats, [78] is for three types of arrhythmias, [82] is for five types of arrhythmias, [84] report precision, [90] report SNR and multiple results depending on added noise, the result of [91] ... | Accuracy (footnote 14: there is a wide variability in results reporting. [109] report specificity, [115] report results for SBP and DBP, [117] report sensitivity and specificity, [118] report positive predictive value, [119] report AUC for diabetes; results are also reported for high cholesterol, sleep apnea and high BP.) | D |
| A stochastic model can be used to deal with the limited horizon of past observed frames as well as sprite occlusion and flickering, which results in higher quality predictions. Inspired by Babaeizadeh et al. (2017a), we tried a variational autoencoder (Kingma & Welling, 2014) to model the stochasticity of the environment. ... | Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster... | We noticed two major issues with the above model. First, the weight of the KL divergence loss term is game dependent, which is not practical if one wants to deal with a broad portfolio of Atari games. Second, this weight is usually a very small number in the range of $[10^{-3},10^{-5}]$... | As visualized in Figure 2, the proposed stochastic model with discrete latent variables discretizes the latent values into bits (zeros and ones) while training an auxiliary LSTM-based (Hochreiter & Schmidhuber, 1997) recurrent network to predict these bits autoregressively. At inference time, the latent bits will be gen... | Figure 2: Architecture of the proposed stochastic model with discrete latents. The input to the model is four stacked frames (as well as the action selected by the agent) while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-... | B |
| For the CNN modules with one and two layers, $x_{i}$ is converted to an image using learnable parameters instead of some static procedure. The one layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels). | For the 'signal as image' module, we normalized the amplitude of $x_{i}$ to the range $[1,178]$. The results were inverted along the y-axis, rounded to the nearest integer and then used as the y-indices for the pixels with... | The two layer module consists of two 1D convolutional layers (kernel sizes of 3 with 8 and 16 channels), with the first layer followed by a ReLU activation function and a 1D max pooling operation (kernel size of 2). The feature maps of the last convolutional layer for both modules are then concatenated al... | For the CNN modules with one and two layers, $x_{i}$ is converted to an image using learnable parameters instead of some static procedure. The one layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels). | Here we also refer to a CNN as a neural network consisting of alternating convolutional layers, each one followed by a Rectified Linear Unit (ReLU) and a max pooling layer, with a fully connected layer at the end, while the term 'layer' denotes the number of convolutional layers. | B |
| As depicted in Fig. 10, for the step negotiation operation with a height of $h$, both $E_{Rw}<E_{Cw}$ and $E_{Rr}<E_{Cr}$... | Figure 10: The Cricket robot tackles a step of height $h$ using the rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas... | To assess the efficacy of the suggested autonomous locomotion mode transition strategy, simulation experiments featuring step heights of $h$, $2h$, and $3h$ were conducted. These simulations involved continuous tracking of energy consumption for both total body negotiation ($E_{Rw}$... | Similarly, when the robot encountered a step with a height of $3h$ (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result... | Figure 11: The Cricket robot tackles a step of height $2h$, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ... | C |
| We first show how to find a Pareto-optimal strategy when the advice encodes the hidden value, and thus can have unbounded size. Moreover, we study the competitiveness of the problem with only $k$ bits of advice, for some fixed $k$, and | An instance of the online bin packing problem consists of a sequence of items with different sizes in the range $(0,1]$, and the objective is to pack these items into a minimum number of bins, each with a capacity of 1. For each arriving item, the algorithm must place it in one of the current bins or ope... | In Sections 4 and 5, we study the bin packing and list update problems; these problems are central in the analysis of online problems and competitiveness, and have numerous applications in practice. For these problems, an efficient advice scheme should address the issues of "what constitutes good advice" as well as | Online bin packing finds applications in a broad range of practical problems, from server consolidation to cutting stock problems. We refer the reader to a survey by Coffman et al. [14] and a brief introduction by Johnson [19] for details on bin packing and its applications. Along with its practical significance, resea... | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-augmented online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as ... | B |
| Additionally, the framework's flexibility and incremental nature allow SS3 to be extended in very different ways. Some possible alternatives could be the implementation of more elaborate summary operators, $\oplus_{j}$, and more effective early stoppin... | In that context, our proposal is a potential tool with which systems could be developed in the future for large-scale passive monitoring of social media to help detect early traces of depression by analyzing users' linguistic patterns, for instance, filtering users and presenting possible candidates, along with rich... | Besides, with the aim of helping users to interpret more easily the reasons behind classification, for instance, for mental health professionals not familiar with the underlying computational aspects, we plan to continue working on better visualization tools. | In this context, we focused here on the first two aspects, with a remarkable performance of SS3 (lowest $ERDE_{o}$ measure) in the experimental work with a very simple criterion for early classification. SS3 showed... | Finally, we believe it is appropriate to highlight another of the highly desirable aspects of our framework: its descriptive capacity. As mentioned previously, most standard and state-of-the-art classifiers act as black boxes (i.e. the classification process is not self-explainable) and therefore humans are not able to nat... | B |
| Stochastic gradient descent (SGD) and its variants (Robbins and Monro, 1951; Bottou, 2010; Johnson and Zhang, 2013; Zhao et al., 2018, 2020, 2021) have been the dominating optimization methods for solving (1). In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameter... | GMC can be easily implemented on the all-reduce distributed framework, in which each worker sends the sparsified vector $\mathcal{C}(\mathbf{e}_{t+\frac{1}{2},k})$... | Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model trai... | With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training. These methods can be implemented on distributed frameworks like parameter server and al... | Recently, the parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-red... | C |
| Using backpropagation [2], the gradient of each weight w.r.t. the error of the output is efficiently calculated and passed to an optimization function, such as Stochastic Gradient Descent or Adam [3], which updates the weights, making the output of the network converge to the desired output. DNNs were successful in utilizi... | Previous literature addressing this problem has focused on weight pruning from trained DNNs [11] and weight pruning during training [12]. Pruning minimizes the model capacity for use in environments with low computational capabilities or low inference time requirements, and helps reduce co-adaptation between neurons,... | In neural networks, sparseness can be applied on the connections between neurons or in the activation maps [14]. Although sparseness in the activation maps is usually enforced in the loss function by adding an $L_{1,2}$ regularization or Kullback-Leibler... | After training, we consider $\bm{\alpha}^{(i)}$ (which is calculated during the feed-forward pass from Eq. 11) and $\bm{w}^{(i)}$ (which is calculat... | Previous literature has also demonstrated the increased biological plausibility of sparseness in artificial neural networks [24]. Spike-like sparsity on activation maps has been thoroughly researched on the more biologically plausible rate-based network models [25], but it has not been thoroughly explored as a design o... | A |
Fig. 12 shows the effect of m𝑚mitalic_m on the behavior of SPBLLA. Setting τ=0.01𝜏0.01\tau=0.01italic_τ = 0.01 and m>0.028𝑚0.028m>0.028italic_m > 0.028, we choose 5555 values from 0.030.030.030.03 to 0.050.050.050.05. As m𝑚mitalic_m getting higher, SPBLLA needs more time for convergence. Since higher m𝑚mitalic_m ... |
For power selection of $\mathrm{UAV}_{i}$, a large power does not necessarily result in high utility, due to the large interference that comes with it. Taking energy saving and longer lifetime into consideration, choosing the right amount of power that bal...
The essence of PBLLA is selecting an alternative UAV randomly in one iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of the two strategies and $\tau$. A UAV prefers to select the power and altitude that provide higher utility. Neve... | Fig. 12 presents a sketch of a UAV's utility as power is altered. The altitudes of the UAVs are fixed. When other UAVs' power profiles change, the interference increases and the curve moves down. The high interference reduces the utility of the UAV. Fig. 12 also shows that utility decreases and increase... | When UAVs need communications, the signal-to-noise ratio (SNR) mainly determines the quality of service. UAVs' power and inherent noise are interferences for each other. Since there are hundreds of UAVs in the system, each UAV is unable to sense all the other UAVs' power explicitly; it can only sense and measure aggreg... | C
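The "probability determined by the utilities of two strategies and $\tau$" can be illustrated with a standard log-linear (Boltzmann) choice rule. This is a generic sketch under that assumption, not necessarily PBLLA's exact update; `switch_probability` is an illustrative name.

```python
import math

def switch_probability(u_new, u_old, tau):
    """Log-linear choice between two strategies: the probability of
    adopting the new strategy grows with its utility advantage, and
    as tau -> 0 the rule approaches a hard argmax."""
    return 1.0 / (1.0 + math.exp((u_old - u_new) / tau))
```

With equal utilities the rule is indifferent (probability 0.5); with a small temperature it almost surely picks the better strategy.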
$n_{t0}(z)=n_{0}\,\left((n_{high}-n_{low})\,\widetilde{g}(z)+n_{low}\right)$ ... | $\bar{\eta}=\frac{m_{e}}{1.96\,e^{2}\mu_{0}}\,/\,\left(Z\,\cdots\right)^{-\frac{3}{2}}\;[\mathrm{m}^{2}\text{/s}]$ ... | $f_{form}(z,t)=\left(-e^{-\frac{t}{\tau_{LR}}}\cdots\right)/\left(\int\frac{g_{form}(z)}{r}\,dr\,dz\right)$ ... | $\bar{\tau}_{ei}=\left(\frac{6\sqrt{2}\,\pi^{1.5}\,\epsilon_{0}^{2}\cdots\,[\mathrm{m}^{-3}]}{Z^{2}}\right)\;[\mathrm{s}]$ ... | where $\widetilde{g}(z)=g(z)/\max(g(z))$. Here, $g(z)=\frac{1}{2\pi\sigma_{n}^{2}}\exp\!\left(-\frac{(z-z_{gp})^{2}}{2\sigma_{n}^{2}}\right)$ ... | D
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$ ... | $f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\
a&\text{if }u\neq\texttt{null},\,v\neq\texttt{null}\text{ and }u\neq v\\b&\text{if }u=v=\texttt{null}\\0&\text{otherwise.}\end{cases}$ | Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality)
by $h_{A}$ ... | Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it.
Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly | When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$ ... | C
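The piecewise comparability function defined above maps directly to code. In this sketch, `None` stands in for `null`, and the concrete values of `a` and `b` are illustrative placeholders (the framework leaves them as parameters).

```python
def comparability(u, v, a=0.5, b=0.5):
    """Comparability function f_A = f_B from the piecewise definition:
    1 for equal non-null values, a for differing non-null values,
    b when both values are null, and 0 otherwise."""
    if u is not None and u == v:
        return 1
    if u is not None and v is not None and u != v:
        return a
    if u is None and v is None:
        return b
    return 0
```

Note that `comparability(None, None)` returning `b` rather than `1` is exactly the relaxation of reflexivity on `null` discussed in the text.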
For the experiments, a fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons each, with two Dropout layers: one between the input layer and the first hidden layer, and one between the two hidden layers. To minimize the
DQN loss, the ADAM optimizer was used [25]. | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout, because in such an environment the optim...
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation across variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and aft...
Figure 6 shows the loss metrics of the three algorithms in the CartPole environment; this implies that the Dropout-DQN methods introduce more accurate gradient estimation of policies across iterations of different learning trials than DQN. One of the Dropout-DQN methods required more iterations t... | In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e... | B
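The Dropout layers used in these experiments can be sketched as standard inverted dropout, where surviving activations are rescaled by $1/(1-p)$ so the expected activation is unchanged and no rescaling is needed at test time. This is a generic sketch of the technique, not the authors' implementation.

```python
import random

def dropout(activations, p_drop, training=True, seed=None):
    """Inverted dropout: zero each activation with probability p_drop
    and rescale the survivors by 1/(1 - p_drop)."""
    if not training or p_drop == 0.0:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - p_drop
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

At evaluation time (`training=False`) the layer is the identity, which is why the same network can be used for greedy action selection without correction.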
Deep CNNs are heavily reliant on big data to avoid overfitting and class imbalance issues, and therefore this section focuses on data augmentation, a data-space solution to the problem of limited data. Apart from standard online image augmentation methods such as geometric transformations (LeCun et al., 1998; Simard et... | Chartsias et al. (2017) used a conditional GAN to generate cardiac MR images from CT images. They showed that utilizing the synthetic data increased the segmentation accuracy and that using only the synthetic data led to only a marginal decrease in the segmentation accuracy. Similarly, Zhang et al. (2018c) proposed a G... | Deep CNNs are heavily reliant on big data to avoid overfitting and class imbalance issues, and therefore this section focuses on data augmentation, a data-space solution to the problem of limited data. Apart from standard online image augmentation methods such as geometric transformations (LeCun et al., 1998; Simard et... | Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving almost similar performance to full supervision for segmentation of cardiac images. Afshari et al. (2019) used a fully convol... |
Neff et al. (2018) trained a Wasserstein GAN with gradient penalty (Gulrajani et al., 2017) to generate labeled image data in the form of image-segmentation mask pairs. They evaluated their approach on a dataset of chest X-ray images and the Cityscapes dataset, and found that the WGAN-GP was able to generate images wit... | D
where $\mathbf{z}$ is the vector containing the optimization variables $z_{i}$ for $i=1,\dots,N$, indicating to which side of the bi-partition node $i$ is assigned; $a_{ij}$... | Since computing the optimal MAXCUT solution is NP-hard, it is generally not possible to evaluate the quality of the cut found by the proposed spectral method (Sect. III-A) in terms of discrepancy from the MAXCUT.
Therefore, to assess the quality of a solution we consider the following bounds | Problem (2) is NP-hard and heuristics must be considered to solve it.
The heuristic that gives the best-known MAXCUT approximation in polynomial time is the Goemans-Williamson algorithm, which is based on the Semi-Definite Programming (SDP) relaxation [20]. | The examples encompass the two extreme cases where the MAXCUT solution is known: a bipartite graph where MAXCUT is 1 and the complete graph where MAXCUT is 0.5.
In every example, when $\lambda^{s}_{\text{max}}$ ... | The best case is the bipartite graph, where the MAXCUT is known and it cuts all the graph edges.
The partition $\mathbf{z}$ found by our spectral algorithm on bipartite graphs is optimal, i.e., $\gamma(\mathbf{z})=\texttt{MAXCUT}/|\mathcal{E}|=1$ ... | B
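The cut quality $\gamma(\mathbf{z})$, the fraction of edges cut by a bi-partition, can be computed directly. A minimal sketch, assuming nodes are indexed integers, sides are encoded as $\pm 1$, and the graph is given as an edge list:

```python
def cut_ratio(z, edges):
    """gamma(z): fraction of edges cut by the bi-partition encoded in z,
    where z[i] in {+1, -1} gives the side node i is assigned to."""
    cut = sum(1 for i, j in edges if z[i] != z[j])
    return cut / len(edges)

# Bipartite example: a 4-cycle split into its two color classes
# cuts every edge, so the ratio is 1 -- the optimal MAXCUT case.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
z = [+1, -1, +1, -1]
```

On the bipartite example the ratio is 1, matching the statement that the spectral partition is optimal there.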
In this work, we present an imitation learning approach to generate neural networks from random forests, which results in very efficient models.
We introduce a method for generating training data from a random forest that creates any amount of input-target pairs. With this data, a neural network is trained to imitate t... | Our method significantly reduces the number of parameters of the generated networks while reaching the same or even slightly better accuracy.
The current best-performing methods generate networks with an average number of parameters of either $142\,000$, if sparse processing is available, or $748\,000$... | Compared to state-of-the-art methods, the presented implicit transformation significantly reduces the number of parameters of the networks while achieving the same or even slightly improved accuracy due to better generalization.
Our approach has shown that it scales very well and is able to imitate highly complex class... | Experiments demonstrate that the accuracy of the imitating neural network is equal to the original accuracy or even slightly better than the random forest due to better generalization while being significantly smaller.
To summarize, our contributions are as follows: |
The proposed method generates data from a random forest and trains a neural network that imitates the random forest. The goal is that the neural network approximates the same function as the random forest. This also implies that the network reaches the same accuracy if successful. | C |
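The data-generation step (sample arbitrary inputs and label them with the random forest's predictions) can be sketched as follows. The uniform sampling, the `teacher_predict` callable, and `feature_ranges` are illustrative assumptions; the paper's exact sampling scheme may differ.

```python
import random

def generate_imitation_data(teacher_predict, feature_ranges, n_samples, seed=0):
    """Sample random inputs inside per-feature ranges and label them
    with the teacher's predictions, yielding (input, target) pairs
    on which an imitating network can be trained."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in feature_ranges]
        data.append((x, teacher_predict(x)))
    return data

# Hypothetical stand-in for a trained random forest:
teacher = lambda x: int(x[0] + x[1] > 1.0)
pairs = generate_imitation_data(teacher, [(0, 1), (0, 1)], 100)
```

Because the pairs are generated on demand, any amount of training data can be produced, which is the property the method relies on.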
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt... |
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... |
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient... | for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al.... | C
On the one hand, unstructured pruning is typically less sensitive to accuracy degradation, but special sparse matrix operations are required to obtain a computational benefit.
On the other hand, structured pruning is more delicate with respect to accuracy but the resulting data structures remain dense such that common ... | In the context of quantization, knowledge distillation has been used to reduce the accuracy gap between real-valued DNNs and quantized DNNs (Mishra and Marr, 2018; Polino et al., 2018).
In particular, a real-valued teacher DNN is used to improve the accuracy of a quantized student DNN. | Subsequently, the smaller student model is trained on data where the ground truth labels have been replaced by the soft labels obtained from the output of the teacher model, e.g., from the softmax output of a DNN.
It has been shown that this substantially increases the accuracy of the student model compared to directly... | Starting from a pre-trained teacher DNN, they first train an autoencoder which they call paraphraser to extract understandable factors from a selected intermediate layer of the teacher DNN.
The student DNN is extended by a regressor which they call translator whose purpose is to predict the paraphraser factors from the... | Knowledge distillation is an approach where a small student DNN is trained to mimic the behavior of a larger teacher DNN, which has been shown to yield improved results compared to training the small DNN directly.
The idea of weight sharing is to use a small set of weights that is shared among several connections of a ... | D |
$\simeq\gamma_{x_{i},p}\cdot\gamma_{p,x_{i+1}}$ | In [80, Theorem 8.10], Z. Virk provided a proof of the Corollary below which takes place at the simplicial level. The proof we give below exploits the hyperconvexity properties of $L^{\infty}(X)$ and also our isomorphism theorem, Theorem...
Note that whereas the proof of Lemma 1 in [54] takes place at the level of $L^{\infty}(X)$, the proof of Proposition 9.1 given above takes place at the level of simplicial complexes and simplicial maps. | The following corollary was already established by Gromov (who attributes it to Rips) in [47, Lemma 1.7.A]. The proof given by Gromov operates at the simplicial level. By invoking Proposition 8.1 we obtain an alternative proof which, instead of operating at the simplicial level, exploits the isometric embedding of $X$ ... | See Section 5 for the proof of Theorem 1. As we already mentioned earlier, our proof of Theorem 1 does not depend on Crawley-Boevey's theorem since we circumvented verifying the pointwise finite-dimensionality of $\mathrm{PH}_{k}(\mathrm{VR}_{*}(X);\mathbb{F})$ | A
On the other hand, t-viSNE obtained consistently higher scores for Tool Supportiveness, with a higher average in all the proposed tasks. The bulk of the distributions of the supportiveness scores from the two groups overlap little, mostly near outliers (the “N/A” option was chosen three times, all in the GEP group).
Wh... |
Adaptive Parallel Coordinates Plot. Our first proposal to support the task of interpreting patterns in a t-SNE projection is an Adaptive PCP [59], as shown in Figure 1(k). It highlights the dimensions of the points selected with the lasso tool, using a maximum of 8 axes at any time to avoid clutter. The shown axes (... | Task-Specific Qualitative Analysis
We proceed by comparing the results of the two groups in each task individually, using the task-specific histograms from the bottom row of Figure 9. Our goal here is to perform an informal and qualitative analysis of the results, using the data from the experiment as input, to obtai... | Figure 9 provides a summary of the data gathered during the experiment, more specifically: the task completion times, the reported supportiveness of the tools on each task, and the distributions of answers to each task. The analysis of the ICE-T results can be found further below in Subsection 6.3.
|
The results (i.e., relevances of each dimension) are finally shown in an interactive horizontal bar chart (Figure 1(j)), where the dimensions are sorted from top to bottom according to relevance (with the most relevant on the top). While the relevance is computed using the absolute value of the correlation, we decided... | B |
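Ranking dimensions by the absolute value of their correlation, as in the relevance bar chart described above, can be sketched with plain Pearson correlation. `rank_dimensions` is a hypothetical helper for illustration, not the tool's actual code, and it assumes non-constant columns.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_dimensions(data, target):
    """Sort dimension names by |correlation| with the target,
    most relevant first."""
    scores = {name: abs(pearson(col, target)) for name, col in data.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Using the absolute value means a strongly anti-correlated dimension ranks as high as a strongly correlated one, matching the described design choice.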
As we mentioned in the abstract, this fifth and last version of this series of documents ends with an analysis that addresses the double vision of a wide range of proposals which, after five years of analysis, we must say border on a lack of analysis of the real problems and useful proposals, and o... | The rest of this paper is organized as follows. In Section 2, we examine previous surveys, taxonomies, and reviews of nature- and bio-inspired algorithms reported so far in the literature. Section 3 delves into the taxonomy based on the inspiration of the algorithms. In Section 4, we present and populate the taxonomy b... | This section corresponds to the integration and extension of Section 3 of the article published in [2] within this report. In Section 7.1, we extend the original analysis on the importance of applications, stressing the numerous applications that leverage results from this research area (the good). In Section 7.2, w...
After reviewing the algorithms and both taxonomies, we have identified several key lessons learned, which serve as recommendations for the forthcoming years for those working on nature- and bio-inspired optimization. The lessons learned from the taxonomies and research outlined in [1] form the foundati...
Lastly, Section 9 presents an analysis of metaheuristics based on studies, guidelines, and other works of a more theoretical nature that help to solve the problems of the field. We perform a brief review of recent studies that address good practices for designing metaheuristics and discussions from this perspective, a... | A |
$Z=\varphi_{m}(\hat{A}\,\varphi_{m-1}(\cdots\varphi_{1}(\hat{A}XW_{1})\cdots))$ ...
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update ... | To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of the GAE and an update of the graph. The maximum number of epochs, $T$, is set as 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo...
Figure 2: Visualization of the learning process of AdaGAE on USPS. Figures 2(b)-2(i) show the embedding learned by AdaGAE at the $i$-th epoch, while the raw features and the final results are shown in Figures 2(a) and 2(j), respectively. An epoch corresponds to an update of the graph. | To study the impact of different parts of the loss in Eq. (12), the performance with different $\lambda$ is reported in Figure 4.
From it, we find that the second term (corresponding to problem (7)) plays an important role, especially on UMIST. If $\lambda$ is set as a large value, we may get the trivi... | C
In their recent measurement of ingress and egress filtering, Luckie et al. (2019) conclude that filtering of inbound spoofed packets is less deployed than filtering of outbound packets, despite the fact that spoofed inbound packets pose a threat to the receiving network. Korczyński et al. (2020b) analysed the network...
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, we call the Spoofing Mapper (SMap). We apply SMap for scanning ingress-filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that ... | False negatives in our measurements mean that a network that does not perform filtering of spoofed packets is not marked as such. We next list the causes of false negatives for each of our three techniques. Essentially the false negatives cannot be resolved, and therefore our measurement results of networks that enforc... |
In their recent measurement of ingress and egress filtering, Luckie et al. (2019) conclude that filtering of inbound spoofed packets is less deployed than filtering of outbound packets, despite the fact that spoofed inbound packets pose a threat to the receiving network. Korczyński et al. (2020b) analysed the network...
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design... | This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ... |
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal... | An alternative approach is to emulate adaptation in natural sensor systems. The system expects and automatically adapts to sensor drift, and is thus able to maintain its accuracy for a long time. In this manner, the lifetime of sensor systems can be extended without recalibration.
| While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape... | D |
Our algorithm is a dynamic program, where we define a subproblem for each separator
index $i$, and each set of endpoints $B\in\mathcal{B}_{i}$. The value of $A[i,B]$ is defined as f...
$A[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M\}$. |
$A[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M\}$. | $A^{(2)}[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(2)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M\}$. | $A^{(1)}[i,B]:=\{$a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\cdots\cup P_{i-1}\cup B$ realizing the matching $M\}$ | A
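The bookkeeping behind $A[i,B]$ — keep, for each perfect matching $M$, only the pair $(M,x)$ with minimum total length — can be sketched generically. Encoding matchings as frozensets of endpoint pairs is an assumption for illustration, not the paper's data structure.

```python
def reduce_representative(pairs):
    """Given (matching, length) pairs, keep for each matching only
    the minimum total length -- the filtering step that keeps the
    DP table entries small."""
    best = {}
    for matching, length in pairs:
        if matching not in best or length < best[matching]:
            best[matching] = length
    return best

# Two candidate path covers realize the same matching {(1, 2)};
# only the shorter one needs to be remembered.
pairs = [(frozenset({(1, 2)}), 5.0),
         (frozenset({(1, 2)}), 3.5),
         (frozenset({(1, 3)}), 4.0)]
```

This is why the table stores a *representative* set rather than all path covers: dominated entries can never be part of an optimal extension.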
Note that it is not known whether the class of automaton semigroups is closed under taking the opposite semigroup [3, Question 13].
In defining automaton semigroups, we make a choice as to whether states act on strings on the right (as in this paper) or the left, | idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then $S\star T$ is an automaton semigroup.
For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups: | The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
| from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups\footnote{Note that the c...}
During the research and writing for this paper, the second author was previously affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment... | B |
This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any spon... | Following Selvaraju et al. (2019), we train HINT on the subset with human-based attention maps (Das et al., 2017), which are available for 9% of the VQA-CPv2 train and test sets. The same subset is used for VQAv2 too. The learning rate is set to $2\times 10^{-5}$... | We compare the baseline UpDn model with HINT and SCR-variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Fu...
We compare four different variants of HINT and SCR to study the causes behind the improvements, including models that are fine-tuned on: 1) relevant regions (state-of-the-art methods), 2) irrelevant regions, 3) fixed random regions, and 4) variable random regions. For all variants, we fine-tune a pre-trained UpDn, whi... | Our regularization method, which is a binary cross entropy loss between the model predictions and a zero vector, does not use additional cues or sensitivities and yet achieves near state-of-the-art performance on VQA-CPv2. We set the learning rate to $\frac{2\times 10^{-6}}{r}$ ... | C
We downloaded the URL dump of the May 2019 archive.\footnote{https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2019-22/cc-index.paths.gz} Common Crawl reports that the archive contains 2.65 billion web pages or 220 TB of uncompressed content, which were crawled between the 19th and 27th of May, 2019. We applied a selection cr...
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus PrivaSeer Corpus on privacy policies ... | We selected those URLs which had the word “privacy” or the words “data” and “protection” from the Common Crawl URL archive. We were able to extract 3.9 million URLs that fit this selection criterion. Informal experiments suggested that this selection of keywords was optimal for retrieving the most privacy policies with... |
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used da... |
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of ... | B |
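The URL selection criterion described above (keep URLs containing "privacy", or both "data" and "protection") is straightforward to express. `is_candidate_policy_url` is a hypothetical helper name for this sketch, not the corpus pipeline's code.

```python
def is_candidate_policy_url(url):
    """Keyword filter over Common Crawl URLs: keep a URL if it
    contains 'privacy', or both 'data' and 'protection'."""
    u = url.lower()
    return "privacy" in u or ("data" in u and "protection" in u)
```

Applied to the full URL archive, such a filter is a cheap first pass; the retained candidates still need content-level verification to confirm they are actual privacy policies.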
Schneider et al. [47] employed both bagging and boosting ensembles in an effort to combine the data and model space. The authors applied scatterplots and DR projections for the visualization of the data space, with the goal to add, delete, or replace models from the ensemble model space. | Each point is one model from the stack, projected from an 8-dimensional space where each dimension of each model is the value of a user-weighted metric. Thus, groups of points represent clusters of models that perform similarly according to all the metrics.
A summary of the performance of each model according to all se... | Pairs of validation metrics allow the user to select the best models (sorted by performance or similarity). A selection results in an update of the data space.
Our approach of aligning the data and model spaces is influenced by this work, but we improved the process by aggregating the alternative performance metric res... |
In this paper, we introduced an interactive VA system, called StackGenVis, for the alignment of data, algorithms, and models in stacking ensemble learning. The adaptation of an already-existing knowledge generation model leads us to stable design goals and analytical tasks that were realized by StackGenVis. With the c... |
Figure 7: The exploration of the models’ and predictions’ spaces and the metamodel’s results. (a) presents the initial models’ space and how it can be simplified with the removal of unnecessary models. The predictions’ space is then updated, and the user is able to select instances that are not well classified by the ... | B |
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | $(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))$,
$(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))$ ... | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | cannot be adjacent to $\overline{2}$ nor $\overline{3}$,
and so $f^{\prime}$ is $[013]$ or $[010]$. | A
For both BLEU and C Score, Jac Score is around 1 in each cluster, which means the persona descriptions are not similar. The dialogue quantity also seems similar among different clusters. So we can conclude that data quantity and task profile do not have a major impact on the fine-tuning process.
| Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2).
When the data quantity i... | Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide them into tasks to construct a setting in which tasks are similar to each other. For a fair comparison, each task in this setting also has 120 and 1200 utterances o...
To answer RQ1, we compare the changing trends of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initializations at different MAML training epochs and evaluate them directly on the met... | To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML:
Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transfor... | D |
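Since MAML is the method under study, a minimal first-order MAML (FOMAML) sketch on a toy task family may help fix ideas. Everything below (1-parameter linear model, the task distribution, the learning rates) is our own illustrative assumption, not the paper's setup; true MAML would additionally differentiate through the inner adaptation step:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(shots=10):
    # Toy task family: y = w * x with a task-specific slope w.
    w = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, shots)
    return x, w * x

def loss(theta, x, y):
    return float(np.mean((theta * x - y) ** 2))

def grad(theta, x, y):
    # Gradient of the squared loss for the 1-parameter model y_hat = theta * x.
    return 2.0 * np.mean((theta * x - y) * x)

# FOMAML meta-training: one inner gradient step per task, then move the
# shared initialization using the post-adaptation gradient (first-order).
theta, inner_lr, outer_lr = 0.0, 0.1, 0.05
for _ in range(300):
    meta_g = 0.0
    for _ in range(8):                                   # meta-batch of tasks
        x, y = sample_task()
        theta_task = theta - inner_lr * grad(theta, x, y)  # inner adaptation
        meta_g += grad(theta_task, x, y)                   # first-order meta-grad
    theta -= outer_lr * meta_g / 8.0
```

After meta-training, a single inner gradient step on a fresh task should reduce that task's loss, which is the adaptation ability the experiments above measure.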
Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing a new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-base... | and the CCA scheme clearly achieves higher SE than the UPA scheme for different t-UAV numbers $K$. The main reason is that the UPA with DREs can only receive/transmit the signal within a limited angular range at a certain time slot, while the CCA does not have such a limitation. It is also shown that the gap be...
Activated Subarray with Limited DREs: As shown in Fig. 1, given a certain azimuth angle, there are limited DREs that can be activated. Due to the directivity, the DREs of the CCA subarray at different positions are anisotropic, and this phenomenon is different from the UPA. If an inappropriate subarray is activated, t... | After the discussion on the characteristics of CCA, in this subsection, we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the ... |
According to Theorem 1, only a subarray of CCA can be activated at a certain beam angle. Next, the relationship between the subarray and the beam angles is studied. The number and position of the activated elements determine the subarray. Assuming that the elements in the activated subarray are adjacent to each other ... | B |
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges.
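In the 1-color case, the counting identity that the Presburger formula expresses is simply that both sides of the bipartition count the same edge set:

```latex
% Edge-counting identity for a bipartite graph (A, B, E): each edge is
% counted once from the A-side and once from the B-side.
\[
  \sum_{a \in A} \deg(a) \;=\; |E| \;=\; \sum_{b \in B} \deg(b)
\]
```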
The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from | After the merging, the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$.
We perform the... | C |
The key to our analysis is a mean-field perspective, which allows us to associate the evolution of a finite-dimensional parameter with its limiting counterpart over an infinite-dimensional Wasserstein space (Villani, 2003, 2008; Ambrosio et al., 2008; Ambrosio and Gigli, 2013). Specifically, by exploiting the permutati... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Che... |
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et... |
Related Work. When the value function approximator is linear, the convergence of TD is extensively studied in both continuous-time (Jaakkola et al., 1994; Tsitsiklis and Van Roy, 1997; Borkar and Meyn, 2000; Kushner and Yin, 2003; Borkar, 2009) and discrete-time (Bhandari et al., 2018; Lakshminarayanan and | D |
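As a concrete instance of the linear-approximation TD setting surveyed above, here is a minimal TD(0) sketch on a toy Markov reward process; with one-hot features it reduces to tabular TD. The chain, rewards, and step-size schedule are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-state Markov reward process: transitions P, rewards r, discount gamma.
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.0, 0.8]])
r = np.array([1.0, 0.0, -1.0])
gamma = 0.9

# Linear value approximation V(s) = phi(s)^T theta; one-hot phi makes it tabular.
phi = np.eye(3)
theta = np.zeros(3)

s = 0
for t in range(50000):
    alpha = 10.0 / (100.0 + t)                      # diminishing step size
    s_next = rng.choice(3, p=P[s])
    td_error = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
    theta = theta + alpha * td_error * phi[s]       # TD(0) update
    s = s_next

# Exact solution of the Bellman equation V = r + gamma P V, for comparison.
V_exact = np.linalg.solve(np.eye(3) - gamma * P, r)
```

The iterates approach the fixed point of the Bellman equation, which is the convergence phenomenon the cited analyses make precise.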
To test the effectiveness of depth-wise LSTMs in the multilingual setting, we conducted experiments on the challenging massively many-to-many translation task on the OPUS-100 corpus Tiedemann (2012); Aharoni et al. (2019); Zhang et al. (2020). We tested the performance of 6-layer models following the experiment settin... |
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ... | Multilingual translation uses a single model to translate between multiple language pairs Firat et al. (2016); Johnson et al. (2017); Aharoni et al. (2019). Model capacity has been found crucial for massively multilingual NMT to support language pairs with varying typological characteristics Zhang et al. (2020); Xu et ... |
Compared to the baseline Zhang et al. (2020), Table 7 shows that: 1) our approach can lead to +3.02 and +3.38 BLEU improvements on average in the En→xx and xx→En directions respectively in the evaluation over 4 typologically different languages, and 2) using dept... | We report average BLEU over 94 language pairs (BLEU94), win ratio (WR, %) compared to Zhang et al. (2020), and average BLEU over 4 selected typologically different target languages with varied training data sizes (de, zh, br, te) (BLEU4). Results are shown in Table 7.
| D |
on $\langle\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\mathcal{D}_{\leq 2}}\cap\uptau_{\subseteq_{i}}\rangle$ | $\mathcal{K}^{\circ}(X)=\bigcup_{i\in I}\mathcal{K}^{\circ}(Y_{i})=\uptau_{\to}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{X}$ | $\mathcal{K}^{\circ}(Y_{i})=\uptau_{\to}\cap\llbracket\mathsf{FO}_{i}[\upsigma]\rrbracket_{X}$ | …$^{-1}(U)$, $(x,y^{\prime})\in V_{1}^{(x,y^{\prime})}\times V_{2}^{(x,y^{\prime})}$… | $\mathcal{K}^{\circ}(Y)=\mathcal{K}^{\circ}(Y^{\prime})=\{\llbracket\cdots\rrbracket_{X}\in\uptau_{\subseteq_{i}}\}\subsetneq\llbracket\mathsf{FO}[\upsigma]\rrbracket_{Y}\cap\uptau_{\subseteq_{i}}$
To overcome the above limitations, previous methods exploit more guided features such as the semantic information and distorted lines [9, 10], or introduce the pixel-wise reconstruction loss [11, 12, 13]. However, the extra features and supervisions impose increased memory/computation cost. In this work, we would like... | (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o... | After predicting the distortion labels of a distorted image, it is straightforward to use a distance metric loss such as the $\mathcal{L}_{1}$ loss or $\mathcal{L}_{2}$ loss to learn the network paramete... | 2. The local-global associate ordinal distortion estimation network considers different scales of distortion features, jointly reasoning about the local distortion context and the global distortion context. Also, the devised distortion-aware perception layer boosts the feature extraction of different degrees of distortion.
| In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl... | D |
We do not use training tricks such as warm-up [7]. We adopt the linear learning-rate decay strategy, which is the default in the Transformers framework.
Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings. | Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b... | Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD.
In large-batch training, SNGM achieves better training loss and test accuracy than the fou... | showed that existing SGD methods with a large batch size will lead to a drop in the generalization accuracy of deep learning models. Figure 1
shows a comparison of training loss and test accuracy between MSGD with a small batch size and MSGD with a large batch size. We can find that large-batch training indeed |
Figure 3 shows the validation perplexity of the three methods with a small batch size of 20 and a large batch size of 2000. In small-batch training, SNGM and LARS achieve validation perplexity comparable to that of MSGD. Meanwhile, in large-batch training, SNGM achieves better performance than MSGD and LARS. | D |
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions.
To continue this example, there may be further constraints on $F_{I}$… | Our main goal is to develop algorithms for the black-box setting. As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms ... | Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ...
There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality $|\mathcal{S}|$. Our polynomial-scenarios algorithms are carefully designed to make $|\mathcal{S}|$… | The most general way to represent the scenario distribution $\mathcal{D}$ is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios $A$ according to $\mathcal{D}$. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the ...
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than forming i.i.d. graph sequences as in [12]-[15],
and additive and... |
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independence with identical distribution, Markovian...
The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi... |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... | We have studied the distributed stochastic subgradient algorithm for the stochastic optimization by networked nodes to cooperatively minimize a sum of convex cost functions.
We have proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditio... | C |
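The kind of algorithm analyzed above (consensus step plus a diminishing-step stochastic subgradient step on a nondifferentiable local cost) can be sketched as follows. The 3-node network, mixing matrix, local costs, and step sizes are our own illustrative assumptions; a doubly stochastic mixing matrix is a simple deterministic stand-in for the balanced-digraph condition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three nodes cooperatively minimize f(x) = sum_i |x - a_i|, whose minimizer
# is the median of the a_i (here 1.0). Node i only knows f_i(x) = |x - a_i|.
a = np.array([0.0, 1.0, 5.0])
W = np.array([[0.50, 0.25, 0.25],      # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

x = np.array([10.0, -10.0, 3.0])       # local optimizer states
for t in range(1, 5001):
    g = np.sign(x - a)                 # subgradient of |x_i - a_i| at node i
    g = g + 0.1 * rng.standard_normal(3)   # stochastic subgradient noise
    x = W @ x - g / np.sqrt(t)         # consensus + diminishing-step descent
```

The local states reach consensus and hover around the global optimizer, which is the qualitative behavior the convergence result above guarantees under the stated conditions.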
In recent years, massive amounts of individuals' digital information have been collected by numerous organizations. The data holders, also known as curators, use the data for data mining tasks; meanwhile, they also exchange or publish microdata for further comprehensive research. However, the publication of microdata poses cr...
Typically, the attributes in microdata can be divided into three categories: (1) Explicit-Identifier (EI, also known as Personally-Identifiable Information), such as name and social security number, which can uniquely or mostly identify the record owner; (2) Quasi-Identifier (QI), such as age, gender and zip code, whi... | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution i... |
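To make the linking risk behind quasi-identifiers concrete, here is a toy check for records whose QI combination is unique in a published table (the data and attribute names are hypothetical, not from the paper):

```python
from collections import Counter

# Toy microdata: (age, gender, zipcode) act as quasi-identifiers (QI);
# "disease" is the sensitive attribute. Explicit identifiers (names) are
# assumed to have already been removed before publication.
records = [
    (29, "F", "10001", "flu"),
    (29, "F", "10001", "cold"),
    (41, "M", "10002", "hiv"),     # unique QI combination -> re-identifiable
    (35, "M", "10003", "flu"),
    (35, "M", "10003", "cancer"),
]

qi_counts = Counter(rec[:3] for rec in records)
unique_qi = [rec for rec in records if qi_counts[rec[:3]] == 1]
```

A record with a unique QI tuple can be linked to external data (e.g., a voter list) and its sensitive value disclosed, which is exactly the risk that generalization, Anatomy, and the mutual cover strategy aim to mitigate.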
For instance, suppose that we add another QI attribute, gender, as shown in Figure 4. The mutual cover strategy first divides the records into groups in which the records in the same group cover for each other by perturbing their QI values. Then, the mutual cover strategy calculates a random output table on each QI a...
This section evaluates the effectiveness of the proposed MuCo algorithm. We apply Mondrian [14], which is one of the most effective generalization approaches, and Anatomy [33], which always preserves the best information utility, as the baselines. We use the US Census data [29], eliminate the tuples with missing value... | A |
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in the original paper. The mask scoring head Huang et al. (2019) adopted on the third stage gains an... | Table 2: PointRend’s step-by-step performance on our own validation set (split from the original training set). “MP Train” means more-points training and “MP Test” means more-points testing. “P6 Feature” indicates adding P6 to the default P2-P5 levels of FPN for both the coarse prediction head and the fine-grained point head. “... | Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62....
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base 2. Note that if $f$ has $L_{2}$ norm 1 then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$…
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... |
In version 1 of this note, which can still be found on the arXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma s...
|
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info...
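For orientation, the spectral distribution just described gives rise to the Fourier entropy, and the conjecture of [FK] bounds it by the total influence. This is the standard statement, included here as a reader's aid rather than quoted from the note:

```latex
% Fourier entropy of f (with 0 log 0 := 0, log to base 2), bounded by a
% universal constant C times the total influence -- the [FK] conjecture.
\[
  \mathrm{Ent}(f)
  \;=\; -\sum_{A \subseteq [n]} |\hat{f}(A)|^{2} \log_{2} |\hat{f}(A)|^{2}
  \;\le\; C \sum_{A \subseteq [n]} |A|\, |\hat{f}(A)|^{2}
\]
```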
For any algorithm, the dynamic regret is at least $\Omega(B^{1/3}d^{5/6}HT^{2/3})$… | The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and... | We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ... | Motivated by empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhov... | The proof idea is similar to that of Theorem 1. The only difference is that within each piecewise-stationary segment, we use the hard instance constructed by Zhou et al. (2021); Hu et al. (2022) for inhomogeneous linear MDPs. Optimizing the length of each piecewise-stationary segment $N$ and the variation magni...
A series of 1-5 Likert scale questions (1: strongly disagree, 5: strongly agree) was presented to the respondents (in SeenFake-57) to further gain insights into their views on fake news. Respondents feel that the issue of fake news will remain for a long time ($M=4.33$, $SD=0.831$)… | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,... | B |
In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct compr... |
This work is funded by National Natural Science Foundation of China (NSFCU23B2055/NSFCU19B2027/NSFC91846204), Zhejiang Provincial Natural Science Foundation of China (No.LGG22F030011), and Fundamental Research Funds for the Central Universities (226-2023-00138). | The existing methods for KG embedding and word embedding exhibit even more similarities. As shown in Figure 1, the KG comprises three triplets conveying similar information to the example sentence. Triplet-based KG embedding models like TransE [11] transform the embedding of each subject entity and its relation into a ... |
The performance of decentRL at the input layer notably lags behind that of other layers and AliNet. As discussed in previous sections, decentRL does not use the embedding of the central entity as input when generating its output embedding. However, this input embedding can still accumulate knowledge by participating i... | In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct compr... | A |
At the beginning of each episode, we put three objects in the workspace. Using fewer objects makes it harder for the robot arm to interact with the objects by taking actions randomly. We use a set of 10 different objects for training and 5 objects for testing. We follow [13] and use the Object-Interaction Frequency (OIF) ... | In this section, we present the results of the ablation study of VDM. Recall that we have “Common” modules and “VDM-specific” modules according to Tab. II. Common modules are used for policy optimization rather than exploration; all compared methods use the same common modules, which are not tuned. “VDM-specific” module... | Optimization detail. We update the parameters of VDM $t_{\rm vdm}$ times after each episode by using the Adam optimizer with a learning rate of $10^{-4}$. The hyper-parameter $t_{\rm vdm}$… |
(i) For the network architecture, the important hyper-parameters include the dimensions of the latent space $Z$, the dimensions of the state features $d$, and the use of skip-connections between the prior and generative networks. We add an ablation study in Tab. IV to perform a grid search. The result shows t...
In this section, we introduce VDM for exploration. In Section III-A, we introduce the theory of VDM based on conditional variational inference. In Section III-B, we present the details of the optimization process. In Section III-C, we analyze the result of VDM used in ‘Noisy-Mnist’, which models the multimodality and stoch...
Several improvements have been presented, including Floater–Hormann interpolation [16, 38], that reach better approximation quality than splines.
However, all of them share the above weaknesses (A, B, C), as we demonstrate in the numerical experiments of Section 8. | reached by sparse samples that avoid the curse of dimensionality in high dimensions $m\in\mathbb{N}$, $m\leq 16$.
However, when asking such approaches to deliver approximations to machine precision, or to leave the tight class of well-behaving functions, | Though approximations of lower accuracy might be reached faster than by polynomial interpolation, this makes these approaches incapable of answering Question 1 when higher-precision
approximations are required. The multivariate polynomial interpolation method presented here reaches this goal. | Therefore, alternative interpolation schemes with better numerical condition and lower computational complexity are desirable.
While previous approaches to addressing this problem relied on tensorial interpolation schemes [33, 48, 59, 75], we here propose a different approach. | that these approaches are prevented from approximating a generic class of functions, being limited to well-behaving bounded analytic or holomorphic functions
occurring, for instance, as solutions of elliptic PDEs. In these scenarios, reasonable uniform approximations of the function $f$ can be
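For intuition on why the choice of interpolation nodes governs numerical conditioning, here is a minimal sketch of barycentric polynomial interpolation at Chebyshev points, applied to the classic Runge function; this is our own illustrative example, not the multivariate method proposed here:

```python
import math

def cheb_nodes(n):
    # Chebyshev points of the second kind on [-1, 1].
    return [math.cos(j * math.pi / n) for j in range(n + 1)]

def bary_eval(x, nodes, fvals):
    # Barycentric formula with the closed-form Chebyshev weights
    # w_j = (-1)^j, halved at the two endpoints.
    num = den = 0.0
    last = len(nodes) - 1
    for j, (xj, fj) in enumerate(zip(nodes, fvals)):
        if x == xj:                      # evaluation exactly at a node
            return fj
        w = (-1) ** j * (0.5 if j in (0, last) else 1.0)
        t = w / (x - xj)
        num += t * fj
        den += t
    return num / den

n = 50
nodes = cheb_nodes(n)
runge = lambda x: 1.0 / (1.0 + 25.0 * x * x)   # equispaced nodes diverge here
fvals = [runge(xj) for xj in nodes]
```

At equispaced nodes the same degree-50 interpolant of the Runge function oscillates wildly (the Runge phenomenon), whereas at Chebyshev nodes the error decays geometrically; this is the well-conditioned regime the text contrasts with tensorial schemes.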
$|\mathrm{IPM}(\mu,\nu)-\mathrm{IPM}(\hat{\mu}_{n},\hat{\nu}_{m})|<\epsilon+2\left[\mathfrak{R}_{n}(\mathcal{F},\mu)+\mathfrak{R}_{m}(\mathcal{F},\nu)\right]$. | The finite-sample convergence of general IPMs between two empirical distributions was established.
Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance has a minor dependence on the dimension of target distributions, which alleviates the curse of dimensionality. | In this section, we first discuss the finite-sample guarantee for general IPMs, then a two-sample test can be designed based on this statistical property. Finally, we design a two-sample test based on the projected Wasserstein distance.
Omitted proofs can be found in Appendix A. | A two-sample test is designed based on this theoretical result, and numerical experiments show that this test outperforms the existing benchmark.
In future work, we will study tighter performance guarantees for the projected Wasserstein distance and develop the optimal choice of $k$ to improve the performance ... | The proof of Proposition 1 essentially follows the one-sample generalization bound mentioned in [41, Theorem 3.1].
However, by following a similar proof procedure to that discussed in [20], we can improve this two-sample finite-sample convergence result when extra assumptions hold, but existing works on IPMs haven’t inves...
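A minimal sketch of a projection-based Wasserstein statistic for two-sample testing follows. We use random one-dimensional projections and the closed-form 1-D Wasserstein distance as a simplified stand-in for the paper's $k$-dimensional projection; all names and parameters are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def w1_1d(u, v):
    # 1-Wasserstein distance between equal-size 1-D empirical samples:
    # mean absolute difference of the sorted samples.
    return float(np.mean(np.abs(np.sort(u) - np.sort(v))))

def projected_wasserstein(X, Y, k=20):
    # Average the 1-D Wasserstein distance over k random unit projections.
    d = X.shape[1]
    dists = []
    for _ in range(k):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        dists.append(w1_1d(X @ theta, Y @ theta))
    return float(np.mean(dists))

n, d = 500, 10
X = rng.standard_normal((n, d))
Y_same = rng.standard_normal((n, d))        # same distribution as X
Y_shift = rng.standard_normal((n, d)) + 1.0  # mean-shifted alternative
```

Because each projection is one-dimensional, the empirical statistic concentrates at a rate with only mild dependence on $d$, which is the "alleviates the curse of dimensionality" point made above; a permutation procedure over the pooled sample would turn this statistic into a calibrated test.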
VAE-type DGMs use amortized variational inference to learn an approximate posterior $q_{\phi}(H|x)$ by maximizing an evidence lower bound (ELBO) on the log-marginal likelihood of the data under the mod... | Amortization of the inference is achieved by parameterising the variational posterior with another deep neural network (called the encoder or the inference network) that outputs the variational posterior parameters as a function of $X$. Thus, after jointly training the encoder and decoder, a VAE model can perf...
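The ELBO referred to above has the standard form; we use the text's $q_{\phi}(H|x)$ notation, while the decoder distribution $p_{\theta}(x|H)$ and prior $p(H)$ are our labels for the generative parts:

```latex
% Evidence lower bound: reconstruction term minus KL regularizer.
\[
  \log p_{\theta}(x) \;\ge\;
  \mathbb{E}_{q_{\phi}(H \mid x)}\!\left[\log p_{\theta}(x \mid H)\right]
  \;-\; \mathrm{KL}\!\left(q_{\phi}(H \mid x) \,\|\, p(H)\right)
\]
```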
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above mentioned VAEs. (In this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervise...) | Deep generative models (DGMs) such as variational autoencoders (VAEs) [dayan1995helmholtz, vae, rezende2014stochastic] and generative adversarial networks (GANs) [gan] have enjoyed great success at modeling high-dimensional data such as natural images. As the name suggests, DGMs leverage deep learning to model a data g... | Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$...
Now, we will define ‘window operators’ to have the same connections as a 3-pin based structural computer using the reverse signal pair described earlier. A ‘window operator’ is a 3x3 cube, each cell containing one of the elements 0, i, 1, -1, and 2. Each element (or cell) receives input in the same way as three-pin structural computing...
DFS (Depth First Search) verifies that the output is possible for the actual pin connection state. As described above, the output is determined by the 3-pin input, so we enter 1 for the A2-A1 and B2-B1 connections (the reverse is treated as 0), and the corresponding output will be recognized...
Fig. 3 shows AND and OR gates consisting of 3-pin based logic; it also shows the connection status of the output pin when A=0, B=1 is entered into the AND gate. When A=0 and B=1, i.e., A is connected and B is connected, output C is connected only to the following two pins, and this is the correct result for the AND operation. | This window operator calculates the connection between the pie and alpha, or beta, at A and B and transfers it to the right side (A AND B). The output can be measured by firing a laser onto a pie pin on the resulting side and checking whether it returns to either alpha or beta. The picture shows the c... | We will look at the inputs through 18 test cases to see if the circuit is acceptable. Next, it verifies with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab... | C
Hence any function x^n with gcd(n, q−1) ≠ 1, under the action of K, settles down to the function x^{q−1}... | The work [19] also provides a computational framework to compute the cycle structure of the permutation polynomial f by constructing a matrix A(f), of dimension q×q, through the coefficients of the (algebraic) powers of f^k...
In this section, we aim to compute the possible cycle lengths of the PP through the linear representation defined in (10). As discussed in Section 1.3, given a polynomial f(x), we associate a dynamical system through a difference equation of the form | The paper is organized as follows. Section 2 focuses on linear representation for maps over finite fields 𝔽, develops conditions for invertibility, computes the compositional inverse of such maps and estimates the cycle structure of permutation polynomials. In Section 3, this linear representat...
In this section, we provide examples of estimating the possible orbit lengths of permutation polynomials in the form of Dickson polynomials D_n(x, α) [10] of degree n through the linear representati... | B
The code used to perform nonnegative forward selection is based on stepAIC from MASS 7.3-47 (Venables & Ripley, 2002). The optimization required for fitting the interpolating predictor is performed using the package lsei 1.2-0 (Y. Wang et al., 2017). After optimization, coefficients smaller than... | The values of partial η² obtained from the mixed ANOVAs for each of the four outcome measures are given in Table 1. Note that we are primarily interested in the extent to which differences between the meta-learners are moderated by the experiment...
Table 1: Standardized measures of effect size (partial η²) for the interactions between the choice of meta-learner and the other experimental factors, for each of the four outcome measures of true positive rate, false positive rate, false discov...
Although we can average over the 100 replications within each condition, with 7 different meta-learners and 48 experimental conditions, this would still lead to 336 averages for each of the outcome measures. In our reporting of the results we will therefore focus only on the most important interactions between the met... |
Large or moderate effect sizes can be observed across all four outcome measures for the main effect of the meta-learner, as well as for the interactions with sample size and correlation structure. When accuracy or TPR is used as the outcome, the three-way interaction between meta-learner, sample size and correlation s... | A |
Some readers may wonder what the differences are between DepAD and subspace anomaly detection approaches since both use a subset of variables for anomaly detection. We differentiate them in this subsection. To tackle the problem of anomaly detection in high-dimensional data, subspace anomaly detection methods, like tho... |
In this section, we introduce the DepAD framework. We begin with an overview of the framework and then proceed to explain each phase in detail. For each phase, we discuss its goal, key considerations, and the off-the-shelf techniques that can be utilized. Finally, we present the algorithm for instantiating the DepAD f... | To address these gaps, this paper introduces a Dependency-based Anomaly Detection framework (DepAD) to provide a general approach to dependency-based anomaly detection. For each phase of the DepAD framework, this paper analyzes what and how to utilize the off-the-shelf techniques in the context of anomaly detection. We... |
In this paper, we introduce DepAD, a versatile framework for dependency-based anomaly detection. DepAD offers a general approach to construct effective, scalable, and flexible anomaly detection algorithms by leveraging off-the-shelf feature selection techniques and supervised prediction models for various data types a... | The rest of the paper is organized as follows. In Section 2, we survey the related work. Section 3 introduces the DepAD framework and presents the outline of the algorithms instantiated from DepAD. In Section 4, we empirically study the performance of the DepAD methods and present the comparison of the proposed methods... | A |
Comparison with Abeille et al. [2021] Abeille et al. [2021] recently proposed the idea of convex relaxation of the confidence set for the more straightforward logistic bandit setting. Our work can be viewed as an extension of their construction to the MNL setting.
| Comparison with Filippi et al. [2010] Our setting is different from the standard generalized linear bandit of Filippi et al. [2010]. In our setting, the reward due to an action (assortment) can be dependent on up to K variables (θ_* ⋅ x_{t,i}, i ∈ 𝒬_t, ... | Comparison with Abeille et al. [2021] Abeille et al. [2021] recently proposed the idea of convex relaxation of the confidence set for the more straightforward logistic bandit setting. Our work can be viewed as an extension of their construction to the MNL setting.
|
Comparison with Amani & Thrampoulidis [2021] While the authors in Amani & Thrampoulidis [2021] also extend the algorithms of Faury et al. [2020] to a multinomial problem, their setting is materially different from ours. They model various click-types for the same advertisement (action) via the multinomial distribution... | In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of... | C |
Compared to these methods, our VSGN builds a graph on video snippets as G-TAD, but differently, beyond modelling snippets from the same scale, VSGN also exploits correlations between cross-scale snippets and defines a cross-scale edge to break the scaling curse. In addition, our VSGN contains multiple-level graph neur... |
In this paper, to tackle the challenging problem of large action scale variation in the temporal action localization (TAL) problem, we target short actions and propose a multi-level cross-scale solution called video self-stitching graph network (VSGN). It contains a video self-stitching (VSS) component that generates ... | Specifically, we propose a Video self-Stitching Graph Network (VSGN) for improving performance of short actions in the TAL problem. Our VSGN is a multi-level cross-scale framework that contains two major components: video self-stitching (VSS); cross-scale graph pyramid network (xGPN). In VSS, we focus on a short period... |
Figure 2: Architecture of the proposed video self-stitching graph network (VSGN). It takes a video sequence and generates detected actions with start/end times as well as their categories. It has three components: video self-stitching (VSS), cross-scale graph pyramid network (xGPN), and scoring and localization (SoL).... | Fig. 2 demonstrates the overall architecture of our proposed Video self-Stitching Graph Network (VSGN). It comprises three components: video self-stitching (VSS), cross-scale graph pyramid network (xGPN), and scoring and localization (SoL), which will be elaborated in Sec. 3.2, 3.3, and 3.4, respectively. Before delv... | D
(2) active views relevant for both projections are positioned on the top (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(b and c)); and
(3) commonly-shared views that update on the exploration of either Projection 1 or 2 are placed at the bottom (see VisEvol: Visual Ana... | After another hyperparameter space search (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d)) with the help of supporter views (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c, f, and g)), out of the 290 models generated in... | (iv) control the evolutionary process by setting the number of models that will be used for crossover and mutation in each algorithm (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(b)); and
(v) compare the performances of the best so far identified ensemble against the acti... | (2) active views relevant for both projections are positioned on the top (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(b and c)); and
(3) commonly-shared views that update on the exploration of either Projection 1 or 2 are placed at the bottom (see VisEvol: Visual Ana... | Thus, VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(h) is always active for Projection 2, as it is related to the majority-voting ensemble.
Soft majority voting strategy (i.e., predicted probabilities) is always applied. | D |
Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transi... | In this section, we apply the DSMC algorithm to the probabilistic swarm guidance problem and provide numerical simulations that show the convergence rate of the DSMC algorithm is considerably faster as compared to the previous Markov chain synthesis algorithms in [7] and [14].
| Building on this new consensus protocol, the paper introduces a decentralized state-dependent Markov chain (DSMC) synthesis algorithm. It is demonstrated that the synthesized Markov chain, formulated using the proposed consensus algorithm, satisfies the aforementioned mild conditions. This, in turn, ensures the exponen... | Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transi... | Furthermore, unlike previous algorithms in [14, 15], the convergence rate of the DSMC algorithm does not rapidly decrease in scenarios where the state space contains sparsely connected regions.
Due to the decentralized nature of the consensus protocol, the Markov chain synthesis relies on local information, similar to ... | D |
There are various works that particularly target the matching of multiple shapes. In [30, 32], semidefinite programming relaxations are proposed for the multi-shape matching problem. However, due to the employed lifting strategy, which drastically increases the number of variables, these methods are not scalable to lar... | Although multi-matchings obtained by synchronisation procedures are cycle-consistent, the matchings are often spatially non-smooth and noisy, as we illustrate in Sec. 5.
From a theoretical point of view, the most appropriate approach for addressing multi-shape matching is based on a unified formulation, where cycle con... | There are various works that particularly target the matching of multiple shapes. In [30, 32], semidefinite programming relaxations are proposed for the multi-shape matching problem. However, due to the employed lifting strategy, which drastically increases the number of variables, these methods are not scalable to lar... | A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisati... |
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both ... | C |
A graph is an interval graph if it is the intersection graph of a family of intervals on the real line; or, equivalently, the intersection graph of a family of subpaths of a path. Interval graphs are characterized by Lekkerkerker and Boland [15] as chordal graphs with no asteroidal triples, where an asteroidal triple i... |
Path graphs and directed path graphs are classes of graphs between interval graphs and chordal graphs. A graph is a chordal graph if it does not contain a hole as an induced subgraph, where a hole is a chordless cycle of length at least four. Gavril [8] proves that a graph is chordal if and only if it is the intersect... | interval graphs ⊂ rooted path graphs ⊂ directed path graphs ⊂ path graphs ⊂ chordal graphs. | We presented the first recognition algorithm for both path graphs and directed path graphs. Both graph classes are characterized very similarly in [18], and we extended the simpler characterization of path graphs in [1] to include directed path graphs as well; this result can be of interest in itself. Thus, now these two ...
We now introduce a last class of intersection graphs. A rooted path graph is the intersection graph of directed paths in a rooted tree. Rooted path graphs can be recognized in linear time by using the algorithm by Dietz [7]. All inclusions between the introduced classes of graphs are summarized in the following: | D
P_{(ij)} = p [0.2, 0.05, 0.05; 0.05, 0.2, 0.05; 0.05, ...
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both approaches perform better than OCCAM and GeoNMF under the MMSB setting....
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as ρ increases they all perform worse. Under the DCMM model, the mixed Hamming ... | The numerical results of these two sub-experiments are shown in panels (i) and (j) of Figure 1, from which we can find that: all procedures enjoy improved performance when the simulated network becomes denser; Mixed-SLIM outperforms the other three approaches, especially under the DCMM setting.
|
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha... | C
In contrast, the feasible set of distributional optimization is the Wasserstein space on a subset 𝒳 of ℝ^d, which is an infinite-dimensional manifold.
As a result, unlike | variational inference (Gershman and Blei, 2012; Kingma and Welling, 2019), policy optimization (Sutton et al., 2000; Schulman et al., 2015; Haarnoja et al., 2018), and GAN (Goodfellow et al., 2014; Arjovsky et al., 2017), and has achieved tremendous empirical successes.
However, | See, e.g., Udriste (1994); Ferreira and Oliveira (2002); Absil et al. (2009); Ring and Wirth (2012); Bonnabel (2013); Zhang and Sra (2016); Zhang et al. (2016); Liu et al. (2017); Agarwal et al. (2018); Zhang et al. (2018); Tripuraneni et al. (2018); Boumal et al. (2018); Bécigneul and Ganea (2018); Zhang and Sra (2018... | See, e.g., Welling and Teh (2011); Chen et al. (2014); Ma et al. (2015); Chen et al. (2015); Dubey et al. (2016); Vollmer et al. (2016); Chen et al. (2016); Dalalyan (2017); Chen et al. (2017); Raginsky et al. (2017); Brosse et al. (2018); Xu et al. (2018); Cheng and Bartlett (2018); Chatterji et al. (2018); Wibisono (... | See, e.g., Cheng et al. (2017); Cheng and Bartlett (2018); Xu et al. (2018); Durmus et al. (2019) and the references therein for the analysis of the Langevin MCMC algorithm.
Besides, it is shown that (discrete-time) Langevin MCMC can be viewed as (a discretization of) the Wasserstein gradient flow of KL[p(z),p(z|x))... | C |
Mixedh. The mixedh setting is a mixed high-traffic flow with a total flow of 4770 vehicles in one hour, in order to simulate a heavy peak. The difference from the mixedl setting is that the arrival rate of vehicles during 1200-1800s increased from 0.33 vehicles/s to 4.0 vehicles/s. The data statistics are listed in Tab. II. | Definition 3 (Average Travel Time)
The travel time of a vehicle is the time discrepancy between entering and leaving a particular area. A vehicle from the origin to the destination (OD) is regarded as a travel. Average travel time of all vehicles in a road network is the most frequently used measure to evaluate the per... |
Reward. We define the reward for agent i as the negative of the queue length on incoming lanes. Note that optimizing queue length has been proved to be equivalent to optimizing average travel time in [38] under certain assumptions. Average travel time is a global criterion which cannot be optimized directly ... | Most conventional traffic signal control methods are designed based on fixed-time signal control [21], actuated control [22] or self-organizing traffic signal control [23]. These approaches rely on expert knowledge and often perform unsatisfactorily in complicated real-world situations. To solve this problem, several o...
Following existing studies [46, 13, 40, 41, 14], we use the average travel time to evaluate the performance of different methods for traffic signal control. The average travel time indicates the overall traffic situation in an area over a period of time. For a detailed definition of average travel time, see Section 3.... | D |
Since the system is consistent, namely 𝐛 ∈ Range(A), the
solution set is an (n−r)-dimensional affine subspace | := {A†𝐛 + 𝐲 : 𝐲 ∈ Kernel(A)} |
A†𝐛 + Kernel(A) | span{𝐯} ⊕ Range(ϕ_𝐳(𝐳_*)) ⊂ Kernel(𝐟_𝐱(𝐱_*)) | the solution set
A†𝐛 + Kernel(A) is A†𝐛 + N𝐳_0... | B
Last, we show that our algorithms can be applied in other settings. Specifically, we show an application of our algorithms in the context of Virtual Machine (VM) placement in large data centers (?): here, we obtain a more refined competitive analysis in terms of the consolidation ratio, which reflects the maximum n... | In terms of analysis techniques, we note that the theoretical analysis of the algorithms we present is specific to the setting at hand and treats items “collectively”. In contrast, almost all known online bin packing algorithms are analyzed using a weighting technique (?), which treats each bin “individually” and indep...
Last, we show that our algorithms can be applied in other settings. Specifically, we show an application of our algorithms in the context of Virtual Machine (VM) placement in large data centers (?): here, we obtain a more refined competitive analysis in terms of the consolidation ratio, which reflects the maximum n...
We first present and analyze an algorithm called ProfilePacking, that achieves optimal consistency, and is also efficient if the prediction error is relatively small. The algorithm builds on the concept of a profile set, which serves as an approximation of the items that are expected to appear in the sequence, given t... | We give the first theoretical and experimental study of online bin packing with machine-learned predictions. Previous work on this problem has assumed ideal and error-free predictions that must be provided by a very powerful oracle, without any learnability considerations, as we discuss in more detail in Section 1.2. I... | A |
We examine the generative capabilities of the provided LoCondA model compared to the existing reference approaches. In this experiment, we follow the evaluation protocol provided in (Yang et al., 2019). We use standard measures for this task like Jensen-Shannon Divergence (JSD), coverage (COV), and minimum matching dis... | In literature, there exist a huge variety of 3D shape reconstruction models. The most popular ones are dense, pixel-wise depth maps, or normal maps (Eigen et al., 2014; Bansal et al., 2016; Bednarik et al., 2018; Tsoli et al., 2019; Zeng et al., 2019), point clouds (Fan et al., 2017; Qi et al., 2017b; Yang et al., 2018... | In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar... |
Recently proposed object representations address this pitfall of point clouds by modeling object surfaces with polygonal meshes (Wang et al., 2018; Groueix et al., 2018; Yang et al., 2018b; Spurek et al., 2020a, b). They define a mesh as a set of vertices that are joined with edges in triangles. These triangles create... |
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperClou... | D |
For the non-strongly convex-concave case, distributed SPP with local and global variables were studied in [41], where the authors proposed a subgradient-based algorithm for non-smooth problems with an O(1/√N) convergence guarantee (N is the n... | Paper [61] introduced an Extra-gradient algorithm for distributed multi-block SPP with affine constraints. Their method covers the Euclidean case and the algorithm has an O(1/N) convergence rate.
Our paper proposes an algorithm based on adding Lagrangian multipliers to consensus constr... | Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem and solve it by the DMP algorithm. Similarly to Section (3), we consider a SPP in proximal setup and introduce Lagrangian multipliers for the common variables. However, in t... |
For the non-strongly convex-concave case, distributed SPP with local and global variables were studied in [41], where the authors proposed a subgradient-based algorithm for non-smooth problems with an O(1/√N) convergence guarantee (N is the n... |
We proposed a decentralized method for saddle point problems based on the non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. ... | A
The remainder of this section is dedicated to expressing the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describing an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
The set of cycles of a graph has a vector space structure over ℤ_2, in the case of undirected graphs, and over ℚ, in the case of directed graphs [5]. A basis of such a vector space is called a cycle basis and its dimensio...
The study of cycles of graphs has attracted attention for many years. To mention just three well-known results, consider Veblen’s theorem [2], which characterizes graphs whose edges can be written as a disjoint union of cycles, Maclane’s planarity criterion [3], which states that planar graphs are the only ones to admit a 2-ba...
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the stric... | In this section we present some experimental results to reinforce
Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g... | B |
N = N(b,k,m,ℓ) such that for every n ≥ N and any group homomorphism h: C_k(G[n]^m) → (ℤ_2)^b... | In this respect, the case of convex lattice sets, that is, sets of the form C ∩ ℤ^d where C is a convex set in ℝ^d...
Two central problems in this line of research are to identify the weakest possible assumptions under which the classical theorems generalize, and to determine their key parameters, for instance the Helly number (d+1 for convex sets in ℝ^d... | In this paper we are concerned with generalizations of Helly’s theorem that allow for more flexible intersection patterns and relax the convexity assumption. A famous example is the celebrated (p,q)-theorem [3], which asserts that for a finite family of convex sets in ℝ^d...
1111111111111111001111001111111100111111110011110000111111110011110011110000111100111100001111111111111111001111001111000000001111111111110000000000001111111111111111001111111111110011110011111111001111111100000000000000000011111111111111110000000011110000111111111111001111001111111100001111000000000011111111111100111... | D |
G4: Generation of new features and comparison with the original features.
With the same statistical evidence as defined in G3, users should get visually informed about strongly correlated features that perform the same for each class. Next, the tool can use automatic feature selection techniques to compare the new feat... | G5: Reassessment of the instances’ predicted probabilities and performance, computed with appropriate validation metrics. In the end, users’ interactions should be tracked in order to preserve a history of modifications in the features, and the performance should be monitored with validation metrics (T5). At all stages... |
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train XGBoost [29] on the original pool of features, and then divide the data space into four groups automati... | T5: Evaluate the results of the feature engineering process.
At any stage of the feature engineering process (T2–T4), a user should be able to observe the fluctuations in performance with the use of standard validation metrics (e.g., accuracy, precision, and recall) [32]. Also, users could possibly want to refer to the... |
Figure 1: Selecting important features, transforming them, and generating new features with FeatureEnVi: (a) the horizontal beeswarm plot for manually slicing the data space (which is sorted by predicted probabilities) and continuously checking the migration of data instances throughout the process; (b) the table heat... | A |
We set the mean functions as $\mu^{(j)}=0$, $j=0,1,2$ [21]. However, if we are given some prior information on the shape and structure of $g_{j}$... | This paper demonstrated a hierarchical contour control implementation to increase productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low-level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe...
We use two geometries to evaluate the performance of the proposed approach, an octagon geometry with edges in multiple orientations with respect to the two axes, and a curved geometry (infinity shape) with different curvatures, shown in Figure 4. We have implemented the simulations in Matlab, using Yalmip/Gurobi to so... | For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using a Latin hypercube design of experiments. The BO progress is shown in Figure 5, right panel, for the optimization with constraints on the jerk and on the tracking error. Af... | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low-level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi... | B
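The Latin hypercube initialization can be sketched in plain NumPy as below; the parameter ranges are placeholders and the implementation is a generic sketch, not the authors' Matlab toolchain:

```python
import numpy as np

def latin_hypercube(n, bounds, seed=None):
    # One sample per equal-width stratum in each dimension, with the
    # strata shuffled independently per dimension (stratum midpoints).
    rng = np.random.default_rng(seed)
    strata = (np.arange(n) + 0.5) / n
    u = np.column_stack([strata[rng.permutation(n)] for _ in bounds])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# e.g. 20 initial samples over two hypothetical MPC parameter ranges
X = latin_hypercube(20, [(0.1, 10.0), (0.0001, 0.01)], seed=0)
```

Each of the 20 strata of each parameter contains exactly one sample, which is what makes the design space-filling for initializing the GPs.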
The CelebA dataset [43] of celebrity faces is widely used to assess bias mitigation techniques [55, 56, 46, 50]. Following earlier work, it is used for binary hair color classification (blond or non-blond), which is correlated with gender. There are two major bias sources: a) class imbalance, with non-blond occurring ... | In addition, we posit that the commonly used benchmarks are not challenging enough to test generalization to realistic scenarios. For example CelebA and Colored MNIST, two of the most widely used benchmarks, contain a single bias variable to mitigate: gender and color respectively. It is unclear how well methods would ... | To test scalability on a natural dataset, we conduct four experiments per explicit method on GQA-OOD with the explicit bias variables: a) head/tail (2 groups), b) answer class (1833 groups), c) global group (115 groups), and d) local group (133328 groups). Unlike Biased MNISTv1, we do not test with combinations of thes... | We use the GQA visual question answering dataset [33] to highlight the challenges of using bias mitigation methods on real-world tasks. It has multiple sources of biases including imbalances in answer distribution, visual concept co-occurrences, question word correlations, and question type/answer distribution. It is u... | In this set of experiments, we compare the resistance to explicit and implicit biases. We primarily focus on the Biased MNISTv1 dataset, reserving each individual variable as the explicit bias in separate runs of the explicit methods, while treating the remaining variables as implicit biases. To ease analysis, we compu... | C |
Recent studies found that concatenating the features of two eyes helps to improve the gaze estimation accuracy [54, 55].
Fischer et al. [54] employ two VGG-16 networks to extract individual features from two eye images, and concatenate two eye features for regression. | They claim that the two eyes are asymmetric, and propose an asymmetric regression and evaluation network to extract different features from two eyes.
More recent studies propose to use attention mechanism to fuse two eye features. Cheng et al. [56] argue that the weights of two eye features are determined by face imag... | Cheng et al. [55] build a four-stream CNN network for extracting features from two eye images.
Two streams of CNN are used for extracting individual features from left/right eye images, and the other two streams are used for extracting joint features of two eye images. | Two eye asymmetry property.
Cheng et al. discover the ‘two eye asymmetry’ property: the appearances of the two eyes are different while the gaze directions of the two eyes are approximately the same [44]. Based on this observation, Cheng et al. propose to treat the two eyes asymmetrically in the CNN. They design an asymmetry r... | B
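The feature-concatenation idea surveyed above can be sketched as follows, with random linear maps standing in for the two CNN branches; all shapes, weights, and the regression head are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def eye_features(eye_img, W):
    # Stand-in for one CNN branch (e.g. one VGG-16 stream per eye):
    # a linear map followed by a ReLU, purely for illustration.
    return np.maximum(W @ eye_img.ravel(), 0.0)

D = 16 * 16                                  # toy eye-image size
W_left = rng.standard_normal((32, D))
W_right = rng.standard_normal((32, D))
W_head = rng.standard_normal((2, 64))        # regression head -> (yaw, pitch)

left, right = rng.random((16, 16)), rng.random((16, 16))

# Concatenate the two per-eye features, then regress the gaze direction.
feat = np.concatenate([eye_features(left, W_left),
                       eye_features(right, W_right)])
gaze = W_head @ feat
```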
The high accuracy obtained compared to other face recognizers is achieved due to the high-quality features extracted from the last convolutional layers of the pre-trained models, and the efficiency of the proposed BoF paradigm, which is lightweight and gives more discriminative power compared to a classical CNN with softmax f...
The comparison of the computation times between the proposed method and Almabdy et al.’s method [almabdy2019deep] shows that the use of the BoF paradigm decreases the time required to extract deep features and to classify the masked faces (see Table 4). Note that this comparison is performed using the same pre-trained ...
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in [almabdy2019deep] and achieves a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner. It consists of applying a TL technique to fine-tune the ...
The efficiency of each pre-trained model depends on its architecture and the abstraction level of the extracted features. When dealing with real masked faces, VGG-16 has achieved the best recognition rate, while ResNet-50 outperformed both VGG-16 and AlexNet on the simulated masked faces. This behavior can be explaine... | D |
Types are defined by the following grammar, presupposing some mutually recursive type definitions of the form $X[\overline{i}]=A_{X}(\overline{i})$... | There are eight kinds of processes: two for the structural rules (identity and cut), one for each combination of type polarity (positive or negative) and rule type (left or right), one for definition calls, and one for unreachable code.
Definition (Process). | To review SAX, let us make observations about proof-theoretic polarity. In the sequent calculus, inference rules are either invertible, meaning they can be applied at any point in the proof search process, like the right rule for implication, or noninvertible, meaning they can only be applied when the sequent “contains enough information,” ...
The first rule for $\to$ corresponds to the identity rule and copies the contents of one cell into another. The second rule, which is for cut, models computing with futures [Hal85]: it allocates a new cell to be populated by the newly spawned $P$. Concurrently, $Q$ may read from said new cell, which...
The first two kinds of processes correspond to the identity and cut rules. Values $V$ and continuations $K$ are specified on a per-type-and-rule basis in the following two tables. Note the address variable $x$ distinguished by each rule. | A
Implement privacy-preserving access control. On the one hand, the cloud should be prevented from obtaining the private plaintext of the data it encounters, including the owner’s media content, the users’ fingerprints, and the LUTs. On the other hand, only users authorized by the owner can access the media content.
| First, the owner requires that the cloud not be able to obtain the plaintext of the media content and the LUTs, and that access to the media content is controlled by his/her authorization.
Second, the owner asks for significant overhead savings from cloud media sharing. Third, the owner demands traitor tracing of us... | The whole FairCMS-I scheme is summarized as follows.
First, suppose an owner rents the cloud’s resources for media sharing; the owner and the cloud then execute Part 1 as shown in Fig. 2. Then, suppose the $k$-th user makes a request indicating that he/she wants to access one of the owner’s media content $\mathbf{m}$... | The whole FairCMS-II scheme is summarized as follows. First, suppose an owner rents the cloud’s resources for media sharing; the owner and the cloud then execute Part 1 as shown in Fig. 5. Then, suppose the $k$-th user makes a request indicating that he/she wants to access one of the owner’s media content $\mathbf{m}$... | Protect the owner’s copyright. We need to embed the user’s fingerprint in the owner’s media content to enable traitor tracing. As long as an unfaithful user makes an unauthorized redistribution, he/she can be detected by the embedded fingerprint in the media content.
| D |
It should be noted that $f_{s}$ is invariant to the order of its input, i.e., $f_{s}(\mathbf{e}_{i},\mathbf{e}_{j})=f_{s}(\mathbf{e}_{j},\mathbf{e}_{i})$...
From the estimated edge weight matrix $\mathbf{P}^{(k)}$ at each layer, we then sample the beneficial feature interactions, which amounts to sampling the neighborhood of each feature field. | At each layer of GraphFM, we select the beneficial feature interactions and treat them as edges in a graph. Then we utilize a neighborhood/interaction aggregation operation to encode the interactions into feature representations.
By design, the highest order of feature interaction increases at each layer and is determi... | To overcome this limitation, we replace the edge set $E$ with a weighted adjacency matrix $\mathbf{P}$, where $p_{ij}$ is interpreted as the probability of $(v_{i},v_{j})\in E$... | where $\mathbf{P}^{(k)}[i,:]$ denotes the $i$-th row of the matrix $\mathbf{P}^{(k)}$ at the $k$-th layer, $\mathbf{P}^{(k)}[i,-\mathrm{idx}_{i}]$... | A
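One standard way to obtain the order-invariance required of $f_s$ is to score the elementwise product of the two embeddings, which is symmetric by construction; the weight vector `w` below is a hypothetical parameterization, not necessarily the paper's:

```python
import numpy as np

def f_s(e_i, e_j, w):
    # Elementwise product is commutative, so the score is symmetric:
    # f_s(e_i, e_j) == f_s(e_j, e_i) for any weight vector w.
    return float(w @ (e_i * e_j))

rng = np.random.default_rng(1)
e1, e2, w = rng.random(8), rng.random(8), rng.random(8)
s = f_s(e1, e2, w)
```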
Linear Minimization Oracle (LMO): Given $\mathbf{d}\in\mathbb{R}^{n}$, return $\operatorname*{argmin}_{\mathbf{x}\in\mathcal{X}}\langle\mathbf{x},\mathbf{d}\rangle$. | Table 1:
Number of iterations needed to achieve an $\varepsilon$-optimal solution for Problem 1.1. We denote line search by LS, zeroth-order oracle by ZOO, second-order oracle by SOO, domain oracle by DO, local linear optimization oracle by LLOO, and the assumption that $\mathcal{X}$ is polyhed... | This means that Theorems 2.4 and 2.6 effectively bound the number of ZOO, FOO, DO, and LMO oracle calls needed to achieve a target primal gap or Frank-Wolfe gap accuracy $\varepsilon$ as a function of $T_{\nu}$ and $\varepsilon$; note... | If either of these two checks fails, we simply do not move: the algorithm sets $\mathbf{x}_{t+1}=\mathbf{x}_{t}$ in Line 6 of Algorithm 1.
As customary, we assume short-circ... |
The FOO and LMO oracles are standard in the FW literature. The ZOO oracle is often implicitly assumed to be included with the FOO oracle; we make this explicit here for clarity. Finally, the DO oracle is motivated by the properties of generalized self-concordant functions. It is reasonable to assume the availability o... | D |
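As a concrete instance of the LMO: over the probability simplex, a linear function is minimized at a vertex, so the oracle reduces to a single argmin. A sketch of the oracle together with one Frank-Wolfe-style update (step size chosen arbitrarily):

```python
import numpy as np

def lmo_simplex(d):
    # argmin over the probability simplex of <x, d> is the vertex e_i
    # with i = argmin_i d_i, so the oracle costs O(n).
    x = np.zeros_like(d)
    x[np.argmin(d)] = 1.0
    return x

# One Frank-Wolfe step: move toward the LMO output of the gradient.
x = np.full(3, 1 / 3)
grad = np.array([3.0, -1.0, 2.0])
gamma = 0.5
x = (1 - gamma) * x + gamma * lmo_simplex(grad)  # stays in the simplex
```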
For massive graphs the classical matching algorithms are not only prohibitively slow, but also space complexity becomes a concern. If a graph is too large to fit into the memory of a single machine, all the classical algorithms—which assume random access to the input—are not applicable.
This demand for a more realistic... | One that has attracted a lot of attention, especially in the past decade, is the graph stream model, which was introduced by Feigenbaum et al. [FKM+04, FKM+05, Mut05] in 2005.
In this model, the edges of the graph are not stored in the memory but appear in an arbitrary (that is, adversarially determined) sequential ord... | In particular, it is desirable that the number of passes is independent of the input graph size.
We call an algorithm a $k$-pass algorithm if the algorithm makes $k$ passes over the edge stream, possibly each time in a different order [MP80, FKM+05]. |
It is known that finding an exact matching requires linear space in the size of the graph and hence it is not possible to find an exact maximum matching in the semi-streaming model [FKM+04], at least for sufficiently dense graphs. Nevertheless, this result does not apply to computing a good approximation to the maximu... | The edge $e$ will remain in the memory of our algorithm until the end of the phase.
Suppose that after an edge from the stream is processed, the arcs $a$ and $\overleftarrow{b}$ do not belong to the same structure anymore, e.g., due to an invocation of Reduce-Label-an... | A
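For intuition on one-pass algorithms in this model: the classical greedy rule keeps an arriving edge iff both endpoints are still free, producing a maximal matching and hence a 2-approximation of the maximum matching, in memory proportional to the output. A minimal sketch (not the phase-based algorithm of the text):

```python
def greedy_matching(edge_stream):
    # One pass over the stream; store only matched vertices and edges.
    matched = set()
    matching = []
    for u, v in edge_stream:       # edges arrive in arbitrary order
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

m = greedy_matching([(1, 2), (2, 3), (3, 4), (5, 6)])  # [(1, 2), (3, 4), (5, 6)]
```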
Many methods have been proposed to solve the problem (1) under various settings on the optimization objectives, network topologies, and communication protocols.
The paper [10] developed a decentralized subgradient descent method (DGD) with diminishing stepsizes to reach the optimum for convex objective functions over a... | In this paper, we consider decentralized optimization over general directed networks and propose a novel Compressed Push-Pull method (CPP) that combines Push-Pull/$\mathcal{AB}$ with a general class of unbiased compression operators. CPP enjoys large flexibility in both the com...
We propose CPP – a novel decentralized optimization method with communication compression. The method works under a general class of compression operators and is shown to achieve linear convergence for strongly convex and smooth objective functions over general directed graphs. To the best of our knowledge, CPP is the... | Subsequently, decentralized optimization methods for undirected networks, or more generally, with doubly stochastic mixing matrices, have been extensively studied in the literature; see, e.g., [11, 12, 13, 14, 15, 16].
Among these works, EXTRA [14] was the first method that achieves linear convergence for strongly conv... | D |
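A toy instance of the DGD scheme of [10]: each agent mixes its iterate with its neighbors' via a doubly stochastic matrix and takes a diminishing gradient step on its private objective. The network, quadratic objectives, and stepsize schedule below are illustrative choices, not those of [10]:

```python
import numpy as np

# Three agents minimize f(x) = sum_i (x - a_i)^2; the minimizer is mean(a).
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])   # doubly stochastic mixing matrix
a = np.array([0.0, 3.0, 6.0])        # each agent's private target
x = np.zeros(3)                      # one scalar estimate per agent

for k in range(5000):
    grad = 2.0 * (x - a)                  # local gradients
    x = W @ x - (0.5 / (k + 1)) * grad    # mix, then diminishing step

# all agents end up close to the global minimizer mean(a) = 3.0
```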
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, to propose optimal algorithms, and to derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile... | Note that in the proposed formulation (1) we consider both the centralized and decentralized cases. In the decentralized setting, all nodes are connected within a network, and each node can communicate/exchange information only with their neighbors in the network. While the centralized architecture consists of master-s...
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low... | We propose lower bounds on both the communication and the number of local oracle calls for a general class of algorithms (those satisfying Assumption 3). The bounds naturally depend on the communication matrix $W$ (as in the minimization problem), but our results apply to SPPs (see “Lower” rows in Table 1
for variou... | We present a new SPP formulation of the PFL problem (1) as the decentralized min-max mixing model. This extends the classical PFL problem to a broader class of problems beyond the classical minimization problem. It furthermore covers various communication topologies and hence goes beyond the centralized setting.
| D |
Outside of normal form (NF) games, this problem setting arises in multi-agent training when dealing with empirical games (also called meta-games), where a game payoff tensor is populated with expected outcomes between agents playing an extensive form (EF) game, for example the StarCraft League (Vinyals et al., 2019) a... | We have shown that JPSRO converges to an NF(C)CE over joint policies in extensive form and stochastic games. Furthermore, there is empirical evidence that some MSs also result in high value equilibria over a variety of games. We argue that (C)CEs are an important concept in evaluating policies in n-player, general-sum ... | In this work we propose using correlated equilibrium (CE) (Aumann, 1974) and coarse correlated equilibrium (CCE) as a suitable target equilibrium space for n-player, general-sum games; we mean games (also called environments) in a very general sense: extensive form games, multi-agent MDPs and POMDPs (stochastic games)...
In Section 2 we provide background on a) correlated equilibrium (CE), an important generalization of NE, b) coarse correlated equilibrium (CCE) (Moulin & Vial, 1978), a similar solution concept, and c) PSRO, a powerful multi-agent training algorithm. In Section 3 we propose novel solution concepts called Maximum Gini ... |
This highlights the main drawback of MW(C)CE which does not select for unique solutions (for example, in constant-sum games all solutions have maximum welfare). One selection criterion for NEs is maximum entropy Nash equilibrium (MENE) (Balduzzi et al., 2018), however outside of the two-player constant-sum setting, th... | B |
$n=\Omega\left(\frac{\Delta^{2}\sqrt{k}}{\epsilon^{2}}\ln\left(\frac{k\sigma}{\delta\epsilon}\right)\right)$
The contribution of this paper is two-fold. In Section 3, we provide a tight measure of the level of overfitting of some query with respect to previous responses. In Sections 4 and 5, we demonstrate a toolkit to utilize this measure, and use it to prove new generalization properties of fundamental noise-addition mecha... |
In order to leverage Lemma 3.5, we need a stability notion that implies Bayes stability of query responses in a manner that depends on the actual datasets and the actual queries (not just the worst case). In this section we propose such a notion and prove several key properties of it. Missing proofs from this section ... |
We prove these theorems via a new notion, pairwise concentration (PC) (Definition 4.2), which captures the extent to which replacing one dataset by another would be “noticeable,” given a particular query-response sequence. This is thus a function of particular differing datasets (instead of worst-case over elements), ... | recently established a formal framework for understanding and analyzing adaptivity in data analysis, and introduced a general toolkit for provably preventing the harms of choosing queries adaptively—that is, as a function of the results of previous queries. This line of work has established that enforcing that computat... | C |
In fact, we prove a slightly stronger statement. If a graph $G$ can be reduced to a graph $G^{\prime}$ by iteratively removing $z$-antlers, each of width at most $k$, and the sum of the widths of this sequence of antlers is $t$...
As the first step of our proposed research program into parameter reduction (and thereby, search space reduction) by a preprocessing phase, we present a graph decomposition for Feedback Vertex Set which can identify vertices $S$ that belong to an optimal solution; and which therefore facilitate a reduction fr...
Our algorithmic results are based on a combination of graph reduction and color coding [6] (more precisely, its derandomization via the notion of universal sets). We use reduction steps inspired by the kernelization algorithms [12, 46] for Feedback Vertex Set to bound the size of $\mathsf{antler}$... | As described in Section 1, our algorithm aims to identify vertices in antlers using color coding. To allow a relatively small family of colorings to identify an entire antler structure $(C,F)$ with $|C|\leq k$, we need to bound $|F|$ in terms of... | The remainder of the paper is organized as follows. After presenting preliminaries on graphs and sets in Section 2, we prove the mentioned hardness results in Section 3. We present structural properties of antlers and how they combine in Section 4. In Section 5 we show how color coding can be used to find a large feedb... | B
During image composition, the foreground is usually extracted using image segmentation [108] or matting [180] methods. However, the segmentation or matting results may be noisy and the foregrounds are not precisely delineated. When the foreground with jagged boundaries is pasted on the background, there will be abrupt... | Traditional image blending methods aim to smooth the transition between foreground and background. Alpha blending [123] proposed to assign alpha values for boundary pixels indicating what fraction of the colors are from foreground or background, in which the alpha values need to be manually set. Alpha blending is a sim... |
We report the results of Poisson image blending [121], GP-GAN [172], Zhang et al. [198], and MLF [194]. We also report the ground-truth composite image obtained using ground-truth alpha matte for comparison. From Fig. 9, it can be seen that the obtained composite images using predicted alpha mattes are very close to t... | Specifically, the fusion network in Zhang et al. [194] used two separate encoders to extract and fuse multi-scale features from foreground and background. Because the fusion network relies on ground-truth composite images obtained by using accurate alpha matte as supervision, the work [194] also proposed an easy-to-har... | Another group of methods attempts to achieve smooth
boundary transition by enforcing gradient domain smoothness [31, 63, 74, 144]. The earliest work along this research direction is Poisson image blending [121], which proposed to enforce gradient-domain consistency with respect to the source i... | A
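The alpha-blending baseline mentioned above is a per-pixel convex combination of foreground and background; a minimal sketch:

```python
import numpy as np

def alpha_blend(fg, bg, alpha):
    # Each pixel takes the fraction `alpha` of its color from the
    # foreground and `1 - alpha` from the background.
    return alpha * fg + (1.0 - alpha) * bg

fg = np.full((2, 2), 1.0)            # white foreground patch
bg = np.full((2, 2), 0.0)            # black background patch
alpha = np.array([[1.0, 0.5],
                  [0.5, 0.0]])       # soft weights along the boundary
out = alpha_blend(fg, bg, alpha)
```

Gradient-domain methods such as Poisson blending instead solve for the composite whose gradients match the source, avoiding the need to set the weights by hand.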
In the present study, we have introduced CityNet, a multi-modal dataset specifically designed for urban computing in smart cities, which incorporates spatio-temporally aligned urban data from multiple cities and diverse tasks. To the best of our knowledge, CityNet is the first dataset of its kind, which provides a comp... | Transfer learning: Firstly, it can serve as an ideal testbed for transfer learning algorithms, including meta-learning [5], AutoML [23], and transfer learning on spatio-temporal graphs under homogeneous or heterogeneous representations. In the field of urban computing, it is highly probable that the knowledge required ... | CityNet’s comprehensive and correlated data make it a valuable resource for machine learning tasks in urban computing. These tasks include spatio-temporal predictions and its multi-task variant, spatio-temporal transfer learning, and reinforcement learning. In this paper, we present extensive benchmarking results for t... |
As depicted in Table V, deep learning models can generate highly accurate predictions when provided with ample data. However, the level of digitization varies significantly among cities, and it is likely that many cities may not be able to construct accurate deep learning prediction models due to a lack of data. One e... | To the best of our knowledge, CityNet is the first multi-modal urban dataset that aggregates and aligns sub-datasets from various tasks and cities. Using CityNet, we have provided a wide range of benchmarking results to inspire further research in areas such as spatio-temporal predictions, transfer learning, reinforcem... | A |
This loss function follows from the maximum likelihood principle with a Gaussian likelihood function $\mathcal{N}\big(\hat{y}(x),\hat{\sigma}^{2}(x)\big)$... | In Fig. 1, the coverage degree, average width, and $R^{2}$-coefficient are shown. For each model, the data sets are sorted according to increasing $R^{2}$-coefficient (averaged over th... | where $R$ again denotes the ensemble size. With this approach, the predictive distribution is modelled as an approximation of a uniformly-weighted Gaussian mixture model [reynolds2009gaussian], where the parameters of every component are estimated by a model in the ensemble, by a normal distribution with the sa...
When making predictions, the conditional mean is again approximated by MC integration (12), i.e. one takes the average of multiple forward passes. The total predictive variance is given by the sum of the empirical variance of the ensemble and the variance predicted by the model itself: | Every ensemble allows for a naive construction of a prediction interval [heskes1997practical] when the aggregation strategy in Algorithm 2 is given by the arithmetic mean. By treating the predictions of the individual models in the ensemble as elements of a data sample, one can calculate the empirical mean and variance ... | C
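The stated variance decomposition can be sketched as follows; the function name is ours, and `np.var` here is the population variance of the member means:

```python
import numpy as np

def ensemble_prediction(means, variances):
    # Uniform Gaussian mixture over R ensemble members: the predictive
    # mean is the average member mean; the total predictive variance is
    # the empirical variance of the member means plus the average of
    # the variances predicted by the members themselves.
    mu = np.mean(means)
    total_var = np.var(means) + np.mean(variances)
    return mu, total_var

# R = 3 members, each predicting a (mean, variance) pair for one input
mu, var = ensemble_prediction(np.array([1.0, 2.0, 3.0]),
                              np.array([0.1, 0.2, 0.3]))
```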
Instead of feeding the token embedding of each of them individually to the Transformer, we can combine the token embedding of either the four tokens for MIDI scores or six tokens for MIDI performances in a group by concatenation and let the Transformer model
process them jointly, as depicted in Fig. 1(b). We can also m... | These constitute the main ideas of the CP representation \parencite{hsiao21aaai},
which has at least the following two advantages over its REMI counterpart: 1) the number of time steps needed to represent a MIDI piece is much reduced, since the tokens are merged into a “super token” (a.k.a. a “compound word” \parencitehs... | In addition to REMI, we experiment with the “token grouping” idea of the compound word (CP) representation \parencite{hsiao21aaai},
We depict the two adopted token representations in Fig. 1 and provide some details below. | While each time step corresponds to a single token in REMI, each time step would correspond to a super token that assembles four tokens in total in CP. Without such a token grouping, the sequence length (in terms of the number of time steps) of REMI is longer than that of CP (in this example, 16 versus 4). Please note ... | A |
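The token-grouping idea can be sketched as follows: embed each token of the group separately and concatenate into one super-token vector, so the Transformer advances one time step per group rather than per token. Vocabulary sizes and embedding widths below are hypothetical, and the CP model's subsequent projection layer is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four hypothetical token families of a MIDI-score super token.
vocab_sizes = [4, 16, 128, 32]
emb_dims = [8, 16, 64, 32]
tables = [rng.standard_normal((v, d)) for v, d in zip(vocab_sizes, emb_dims)]

def super_token_embedding(token_ids):
    # Look up each grouped token in its own table, then concatenate.
    return np.concatenate([tab[t] for tab, t in zip(tables, token_ids)])

vec = super_token_embedding([1, 3, 60, 7])   # one 120-dim super token
```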
Let $G$ be a graph on $n$ vertices and $H$ its spanning subgraph. Then $\lambda(\chi(H)-1)+1\leq BBC_{\lambda}(G,H)\leq\lambda(\chi(H)-1)+n-\chi(H)+1$.
Moreover, it was proved before in [4] that there exists a 2-approximate algorithm for complete graphs with bipartite backbones and a 3/2-approximate algorithm for complete graphs with connected bipartite backbones. Both algorithms run in linear time. As a corollary, it was proved that we can compute BBC... | An obvious extension would be an analysis for a class of split graphs, i.e. graphs whose vertices can be partitioned into a maximum clique $C$ (of size $\omega(G)=\chi(G)$) and an independent set $I$.
A simple application of Theorem 2.18 gi... | Additionally, [16] proved that for comparability graphs we can find a partition of $V(G)$ into at most $k$ sets which induce semihamiltonian subgraphs in the complement of $G$ (i.e. each contains a Hamiltonian path), and from that it follows that $BBC_{2}(K_{n},G)$... | The $\lambda$-backbone coloring problem was studied for several classes of graphs, for example split graphs [5], planar graphs [3], complete graphs [6], and for several classes of backbones: matchings and disjoint stars [5], bipartite graphs [6] and forests [3].
For a special case $\lambda=2$ i... | D