Columns: context (string, 250–4.63k chars), A (string, 250–4.73k), B (string, 250–4.85k), C (string, 250–4.32k), D (string, 250–8.2k), label (string, 4 classes)
$a_{1,n-1}f_{n}(x)=(a_{2,n-1}+a_{3,n-1}x)f_{n-1}(x)-a_{4,n-1}f_{n-2}(x),$
$\dfrac{f_{n-1}(x)}{f_{n}(x)}=\dfrac{a_{1,n-1}}{(a_{2,n-1}+a_{3,n-1}x)-a_{4,n-1}\dfrac{f_{n-2}(x)}{f_{n-1}(x)}}.$
$\dfrac{f_{n}(x)}{f_{n}'(x)}=\dfrac{g_{2}\cdots}{\cdots(x)\,\dfrac{f_{n-1}(x)}{f_{n}(x)}}.$ (interior of the expression truncated in extraction)
$h_{i,n,m}=(-)^{(n-m)/2}\,(n+D/2)\,\dfrac{\left(\frac{m-i}{2}\right)_{(n-i)\cdots}}{\left(\cdots\right)_{1+(n-m)/2}}.$ (interior of the expression truncated in extraction)
$\dfrac{{R_{n}^{m}}''(x)}{\cdots}=\cdots\left[\left(n(n+D)-\dfrac{m(D-2+m)}{x^{2}}\right)\dfrac{R_{n}^{m}(x)}{{R_{n}^{m}}'(x)}+\dfrac{D-1-(D+1)x^{2}}{x}\right].$
A
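The three-term recurrence above is straightforward to evaluate numerically, and the ratio $f_{n-1}(x)/f_{n}(x)$ follows directly from the computed values. A minimal sketch, using the hypothetical coefficient instance $a_{1}=a_{4}=1$, $a_{2}=0$, $a_{3}=2$ (which reproduces the Chebyshev polynomials $T_n$) rather than the paper's general $a_{i,n-1}$:

```python
# Sketch: evaluating the three-term recurrence
#   a1*f_k = (a2 + a3*x)*f_{k-1} - a4*f_{k-2}
# and the ratio f_{n-1}(x)/f_n(x) it induces. The coefficient choice
# a1 = a4 = 1, a2 = 0, a3 = 2 is a hypothetical instance (Chebyshev T_n).

def recurrence(n, x, f0=1.0, f1=None):
    """Return [f_0, ..., f_n] via the recurrence."""
    if f1 is None:
        f1 = x          # T_1(x) = x for the Chebyshev instance
    f = [f0, f1]
    for _ in range(2, n + 1):
        f.append((0.0 + 2.0 * x) * f[-1] - 1.0 * f[-2])
    return f

def ratio(n, x):
    """f_{n-1}(x)/f_n(x), computed from the recurrence values."""
    f = recurrence(n, x)
    return f[n - 1] / f[n]

# T_0..T_3 at x = 0.5: [1.0, 0.5, -0.5, -1.0]
f = recurrence(3, 0.5)
```

In practice the continued-fraction form above is useful precisely when the $f_n$ themselves overflow or underflow, since only ratios are propagated.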
For example, computing the Bruhat decomposition of a random matrix in $\mathrm{GL}(250,2)$ resulted in an SLP of length 353 969. During the evaluation, our MSLP required 32 memory slots and it was easily possible to evaluate this MSLP on the standard generators of...
This adds only one extra MSLP instruction, in order to form and store the element $xv^{-1}$ needed in the conjugate on the right-hand side of (2) (this element can later be overwritten and so does not add to the overall maximum memory quo...
We note that after applying the function SlotUsagePattern, the resulting SLP required only 12 memory slots and could be evaluated in the same time as our MSLP. This is because SlotUsagePattern was handed a well-designed SLP. When faced with an SLP not designed to be memory efficient, one might not ex...
does not yield an upper bound for the memory requirement in a theoretical analysis. Moreover, the result of SlotUsagePattern improves memory usage but is not necessarily optimal overall; hence, the number of slots can still be greater than that of a carefully computed MSLP. It should also be...
The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
B
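The slot-based evaluation discussed above can be sketched in a few lines. This is a hedged illustration, not the paper's MSLP format: the instruction triple (dest, i, j) and the use of integers under multiplication as stand-in group elements are assumptions for the example; real MSLPs also support inversion.

```python
# Sketch of evaluating a straight-line program with a fixed number of
# memory slots, in the spirit of the MSLPs discussed above. Each
# instruction (dest, i, j) overwrites slot `dest` with memory[i]*memory[j],
# so the memory footprint never exceeds len(memory).

def evaluate_mslp(program, memory):
    """memory: list of group elements (integers here as a stand-in)."""
    for dest, i, j in program:
        memory[dest] = memory[i] * memory[j]   # overwrite: slot reuse
    return memory

# Two slots suffice to compute g^5 from g = 3:
mem = evaluate_mslp([(1, 0, 0),   # slot1 = g^2
                     (1, 1, 1),   # slot1 = g^4
                     (1, 0, 1)],  # slot1 = g^5
                    [3, 3])
```

The point of the slot discipline is visible even here: intermediate powers are overwritten as soon as they are no longer needed, which is what keeps the maximum memory requirement analyzable in advance.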
The key to approximating (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That al...
Solving (22) efficiently is crucial for the good performance of the method, since it is the only large-dimensional system in (21), in the sense that its size grows with order $h^{-d}$.
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method ...
It is essential for the performance of the method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it is reasonable to solve them locally using patches of elements. We note that the ide...
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends only weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro element remov...
C
We think Alg-A is better in almost every aspect, because it is essentially simpler. Among other merits, Alg-A is much faster, since it has a smaller constant behind the asymptotic complexity $O(n)$ than the others:
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases.
Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), as does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.)
Alg-A has simpler primitives because (1) the candidate triangles it considers have all corners lying on $P$’s vertices and (2) searching for the next candidate from a given one is much easier: the ratio of code length for this step is 1:7 between Alg-A and Alg-CM.
C
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We address this by debunking at the single-tweet le...
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text content, which is the most reliable source before a rumor spreads widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha...
Most relevant for our work is the work presented in [20], where a time-series model is used to capture the time-based variation of social-content features. We build upon their Series-Time Structure when constructing our approach for early rumor detection with our extended dataset, and we provide a deep analys...
In this work, we propose an effective cascaded rumor detection approach using deep neural networks at tweet level in the first stage and wisdom of the “machines”, together with a variety of other features in the second stage, in order to enhance rumor detection performance in the early phase of an event. The proposed ...
As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on t...
C
$\lim_{u\rightarrow\infty}\ell(u)=\lim_{u\rightarrow\infty}\ell^{\prime}(u)=0$), a $\beta$-smooth function, i.e. its derivative is $\beta$-Lipschitz...
loss function (Assumption 1) with an exponential tail (Assumption 3), any stepsize $\eta<2\beta^{-1}\sigma_{\max}^{-2}(\mathbf{X})$ ...
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is a...
Assumption 1 includes many common loss functions, including the logistic and the exp-loss. (The exp-loss does not have a global $\beta$ smoothness parameter; however, if we initialize with $\eta<1/\mathcal{L}(\mathbf{w}(0))$ then it is straightforward to...
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz...
C
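The stepsize condition $\eta<2\beta^{-1}\sigma_{\max}^{-2}(\mathbf{X})$ quoted above is easy to compute for a given data matrix. A minimal sketch, assuming the logistic loss (whose smoothness constant is $\beta=1/4$) and a made-up matrix $X$; both are illustrative choices, not values from the excerpt:

```python
# Sketch: the stepsize bound eta < 2 * beta^{-1} * sigma_max^{-2}(X)
# quoted above, instantiated for the logistic loss (beta = 1/4).
import numpy as np

def max_stepsize(X, beta):
    """Upper limit of stepsizes satisfying the (open) bound."""
    sigma_max = np.linalg.svd(X, compute_uv=False)[0]  # largest singular value
    return 2.0 / (beta * sigma_max ** 2)

X = np.array([[1.0, 0.0], [0.0, 2.0]])    # sigma_max(X) = 2
eta_bound = max_stepsize(X, beta=0.25)    # 2 / (0.25 * 4) = 2.0
```

Any fixed $\eta$ strictly below this value satisfies the assumption of the theorem; in practice one would pick, say, half the bound.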
The text feature set contains 16 features in total. The feature ranking is shown in Table 7. The best one is NumOfChar, the average number of distinct characters in tweets. PolarityScores was the best feature when we tested the single-tweet model, but its performance in the time-series model is not ideal. It is true ...
As we can see in Figure 9, the best result on average over 48 hours is the BestSet, followed by All features. Apart from those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we ...
The performance of Twitter features is stable over time from beginning to end. The 3 best Twitter features are all based on URLs contained in tweets: ContainNEWS, UrlRankIn5000, and WotScore, as shown in Table 8. It is quite reasonable that a news event has a higher probability of being reported by news or ...
The performance of user features is similar to that of the Twitter features; both are quite stable from the first hour to the last. As shown in Table 9, the best feature over 48 hours in the user feature group is UserTweetsPerDays; it is the best feature overall in the first 4 hours, but its rank decreases with ...
C
Results. The baseline and the best results of our first-stage event-type classification are shown in Table 3 (top). The accuracy of the basic majority vote is high for imbalanced classes, yet it is lower on weighted F1. Our learned model achie...
For this part, we first focus on evaluating the performance of single L2R models learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate feature performance, i.e., salience and timeliness, with time and type ...
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall...
Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming independent loss functions that do not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is to learn a linear...
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the previously mentioned event times. We compare the result of the cascaded model with a non-cascaded logistic regression. The res...
D
$R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\},$
one uses $p(\theta_{t}|\mathcal{H}_{1:t})$ to compute the probability of an arm being optimal, i.e., $\pi(A|x_{t+1},\mathcal{H}_{1:t})=\mathbb{P}(A=a^{*}_{t+1}|x_{t+1},\theta_{t},...$
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ...
Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many. TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018].
D
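The probability-matching idea described above (sample from the posterior, play the arm that is optimal under the sample) fits in a few lines. A minimal sketch for a Bernoulli bandit with Beta(1,1) priors; the function name and the fixed seed are choices made for the example, not anything from the excerpt:

```python
# Minimal Thompson sampling step for a Bernoulli bandit: sample
# theta_k ~ Beta(s_k + 1, f_k + 1) for each arm and return the argmax.
import random

def thompson_step(successes, failures, rng=random.Random(0)):
    """successes/failures: per-arm observed counts. Returns the arm to play."""
    samples = [rng.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

# After many observed successes on arm 1, TS selects it with
# overwhelming probability:
arm = thompson_step(successes=[0, 500], failures=[500, 0])
```

Repeating this step, updating the counts with each observed reward, yields the full TS policy; exploration happens automatically because uncertain arms occasionally draw large posterior samples.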
Overall, the distributions of all three kinds of values throughout the day roughly correspond to each other. In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the day.
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
Insulin intake tends to be higher in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes occur earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning.
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it i...
C
Further improvements of benchmark results could potentially be achieved by a number of additions to the processing pipeline. Our model demonstrates a learned preference for predicting fixations in central regions of images, but we expect performance gains from modeling the central bias in scene viewing explicitly Kümme...
The spatial allocation of attention when viewing natural images is commonly represented in the form of topographic saliency maps that depict which parts of a scene attract fixations reliably. Identifying the underlying properties of these regions would allow us to predict human fixation patterns and gain a deeper under...
Overcoming these issues requires a higher-level scene understanding that models object interactions and predicts implicit gaze and motion cues from static images. Robust object recognition could however be achieved through more recent classification networks as feature extractors Oyama and Yamanaka (2018) at the cost ...
Figure 1: A visualization of four natural images with the corresponding empirical fixation maps, our model predictions, and estimated maps based on the work by Itti et al. (1998). The network proposed in this study was not trained on the stimuli shown here and thus exhibits its generalization ability to unseen instanc...
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer...
B
In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). In Section 3, we discuss the concept of the locality number with some examples and some word-combinatorial considerations. The purpose of this section is to develop a better under...
Our strongest positive result about the approximation of the locality number will be derived from the reduction mentioned above (see Section 5.2). However, we shall first investigate in Section 5.1 the approximation performance of several obvious greedy strategies to compute the locality number (with “greedy strategie...
One of the main results of this section is a reduction from the problem of computing the locality number of a word $\alpha$ to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed gr...
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection....
The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the local...
D
These predictions formed a vector field which was then used for evolving the contour using the Sobolev active contour framework. Anh et al. [130] created a non-rigid segmentation method based on the distance regularized level set method that was initialized and constrained by the results of a structured inference using ...
In [163] the authors created an image super-resolution method based on a residual CNN that allows the use of input data acquired from different viewing planes for improved performance. They compared it with other interpolation methods (linear, spline, multi-atlas patch match, shallow CNN, CNN), achieving better results i...
They used multi-scale discrete WT to facilitate the extraction of MI features at specific frequency resolutions and softmax regression to build a multi-class classifier based on the learned features. Their validation experiments show that their method performed better than previous methods in terms of sensitivity and s...
They tested mean squared error and structural similarity based loss functions on two patch-based CNNs with four layers and compared them for different types and levels of noise. Nirschl et al.[230] used endomyocardial biopsy images from 209 patients to train and test a patch-based six layer CNN to identify heart failur...
The learned features are further utilized in defining a similarity measure for MRI atlas selection. They compared their method with majority voting, patch-based label fusion, multi-atlas patch match and SVM with augmented features achieving superior results in terms of accuracy.
D
We noticed two major issues with the above model. First, the weight of the KL divergence loss term is game dependent, which is not practical if one wants to deal with a broad portfolio of Atari games. Second, this weight is usually a very small number in the range of $[10^{-3},10^{-5}]$...
As visualized in Figure 2, the proposed stochastic model with discrete latent variables discretizes the latent values into bits (zeros and ones) while training an auxiliary LSTM-based Hochreiter & Schmidhuber (1997) recurrent network to predict these bits autoregressively. At inference time, the latent bits will be gen...
Figure 2: Architecture of the proposed stochastic model with discrete latent. The input to the model is four stacked frames (as well as the action selected by the agent) while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-...
A stochastic model can be used to deal with the limited horizon of past observed frames as well as sprite occlusion and flickering, which results in higher quality predictions. Inspired by Babaeizadeh et al. (2017a), we tried a variational autoencoder (Kingma & Welling, 2014) to model the stochasticity of the environment. ...
Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a...
A
The spectrogram S2I results are contrary to the expectation that an interpretable time-frequency representation would help in finding good features for classification. We hypothesize that the spectrogram S2I was hindered by its lack of trainable parameters.
However, more work needs to be done to fully replace non-trainable S2Is, not only in the scope of achieving higher accuracy but also in increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch, based on the hypothesis that pretrained low-level...
Future work could include testing this hypothesis by initializing a ‘base model’ using transfer learning or other initialization methods. Moreover, trainable S2Is and 1D ‘base model’ variations could also be used for other physiological signals besides EEG such as Electrocardiography, Electromyography and Galvanic Skin...
We finally define the combinations of $m$ and $b_{d}$ as the members $c$ of the set of combined models $C$. Using the previous definitions, the aim of this paper is the evaluation of the set of models $C$, where $C$...
A
In this equation, $T$ is the time taken to negotiate, $E_{i}$ is the energy used by actuated joint $i$, $n$ denotes the number of actuated joints, and $dt$ signifies the time step.
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The result...
Fig. 7 illustrates the hierarchical control design for the autonomous locomotion mode transition. The decision-making process for this transition is accomplished in MATLAB, whereas the control of each separate locomotion mode is enacted in CoppeliaSim. The connection between MATLAB and the physical robot model in Copp...
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal...
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ...
C
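The energy bookkeeping behind the cost terms defined above ($E_i$ per actuated joint, accumulated over time steps $dt$) can be sketched as follows. The exact cost expression is truncated in the excerpt, so this hedged sketch only shows a plausible per-joint energy accumulation from mechanical power $|\tau_i\,\omega_i|$; the function name and the sample values are made up:

```python
# Hedged sketch: accumulate E_i = sum_t |tau_i(t) * omega_i(t)| * dt
# for each actuated joint i, as one plausible reading of the energy
# terms described above.

def joint_energies(torques, velocities, dt):
    """torques, velocities: per-joint lists of per-step samples.
    Returns the list [E_1, ..., E_n]."""
    return [sum(abs(t * w) for t, w in zip(tau, omega)) * dt
            for tau, omega in zip(torques, velocities)]

# Two joints, three 0.1 s steps (made-up samples):
E = joint_energies([[1.0, 2.0, 1.0], [0.5, 0.5, 0.5]],
                   [[0.2, 0.2, 0.2], [1.0, 1.0, 1.0]], dt=0.1)
```

Comparing such accumulated energies against a threshold is exactly the kind of test the mode-transition logic above performs when deciding between rolling and walking.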
$(1.5,4.5)$-competitive; otherwise, we set $\alpha$ to a smaller value. For $\alpha=0.9$, for example, the algorithm is asymptotically $(1.5625,3.75)$-competitive. Similarly, for $\alpha=0.868$, we ...
Johnson [18] proved that the competitive ratio of First-Fit and Best-Fit is 1.7. Many other algorithms with improved competitive ratios have been studied. The best known algorithm was introduced by Balogh et al. [6] and has a competitive ratio of at most 1.5783. Moreover, it is known that no online algorithm can achiev...
This can be accomplished with a binary search in the interval $[1,\lceil\log u\rceil]$, since we know that the doubling strategy in which the $i$-th bid equals $2^{i}$ is $w$-competitive for al...
To address this issue, we introduce an algorithm named Toggle (Tog) that has a parameter $\beta\in[0,1/2]$ and uses 2 advice bits to select one of the algorithms Timestamp, Mtfe or Mtfo; see Figure 3. This algorithm achieves a competitive ratio of $r_{\mathrm{Tog}}=5/3+\frac{5\beta}{6+3\beta}$...
Our solution uses an algorithm introduced by Boyar et al. [12] which achieves a competitive ratio of 1.5 using $O(\log n)$ bits of advice. We refer to this algorithm as Reserve-Critical in this paper and describe it briefly. See Figure 2 for an illustration.
D
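The doubling strategy mentioned above is simple enough to sketch directly: bid $2^1, 2^2, \ldots$ until the unknown target $u$ is reached. The function name is made up, and the competitiveness check shown is for the classical bound (total spend under $4u$), which is one concrete instance of the $w$-competitiveness referred to in the text:

```python
# Sketch of the doubling strategy: the i-th bid equals 2^i, and bidding
# stops as soon as a bid reaches the unknown target u >= 1.

def doubling_bids(u):
    """Return the sequence of bids 2^1, 2^2, ... placed until u is reached."""
    bids = []
    i = 1
    while not bids or bids[-1] < u:
        bids.append(2 ** i)
        i += 1
    return bids

bids = doubling_bids(10)      # [2, 4, 8, 16]
assert sum(bids) < 4 * 10     # total spend < 4u for this instance
```

The geometric-series argument is the same for every $u$: if $2^{k-1} < u \le 2^k$, the total spend is $2^{k+1}-2 < 2\cdot 2^k < 4u$.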
As said earlier, each chunk contained 10% of the subject’s writing history, a value that for some subjects could be just a single post while for others hundreds or even thousands of posts. Furthermore, the use of chunks assumes we know all of a subject’s posts in advance, which is not the case in real-life scenarios, in whic...
Given that we do not have previous results available from other participants under this new scenario, for comparison we had to perform experiments not only with SS3 but also with other standard classifiers: Logistic Regression (LOGREG), Support Vector Machine (SVM), Multinomial Naive Bayes (MNB) and $K$-Neare...
from the first chunk on, the cumulative confidence value of one of the classes (negative in this case) stays above the other and grows faster. In this example, the subject was, correctly, not classified as depressed after reading all of its chunks.
We could make use of this “dynamic information” to apply certain policies to decide when to classify subjects as depressed. For example, one such policy would be “classify a subject as positive when the accumulated positive value becomes greater than the negative one”, in which case, note that our subject would be...
Since the dataset was highly unbalanced, we optimized the penalty parameter $C$ $(C>0)$ and the class weight parameter $w$ $(w\geq 1)$ for SVM and LOGREG; for MNB only the class weight $w$ was varied, while for $K$NN the $K$ param...
A
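The incremental decision policy described above ("classify as positive once the accumulated positive value overtakes the negative one") can be sketched as a small function. The confidence streams below are made-up numbers, not SS3 outputs:

```python
# Sketch of the early-decision policy: accumulate per-chunk positive and
# negative confidence values and flag the subject as soon as the positive
# total exceeds the negative one.

def early_decision(confidences):
    """confidences: iterable of (pos, neg) pairs, one per writing chunk.
    Returns the 1-based chunk index at which the subject is flagged
    positive, or None if never flagged."""
    pos_acc = neg_acc = 0.0
    for k, (pos, neg) in enumerate(confidences, start=1):
        pos_acc += pos
        neg_acc += neg
        if pos_acc > neg_acc:
            return k          # classify as positive (e.g. depressed)
    return None               # subject never flagged

assert early_decision([(0.1, 0.5), (0.3, 0.1), (0.9, 0.2)]) == 3
assert early_decision([(0.1, 0.9), (0.2, 0.8)]) is None
```

Because the decision can fire before all chunks are read, this kind of policy is what makes the approach usable in the streaming, real-life scenario the passage contrasts with the fixed-chunk setup.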
Each worker computes stochastic gradients locally and communicates with the server or other workers to obtain the aggregated stochastic gradients for updating the model parameter. Recently, more and more large-scale deep learning models, such as large language models (Devlin et al., 2019; Brown et al., 2020; Touvron et...
Researchers have proposed two main categories of communication compression methods for reducing communication cost: quantization (Wen et al., 2017; Alistarh et al., 2017; Jiang and Agrawal, 2018) and sparsification (Aji and Heafield, 2017; Alistarh et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al....
Among existing error-feedback-based sparse communication methods, most are for vanilla DSGD (Aji and Heafield, 2017; Alistarh et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al., 2019). One error-feedback-based sparse communication method has appeared for DMSGD, called Deep Gradient Compression (...
Due to the presence of compression error, naively compressing the communicated vectors in DSGD or DMSGD will damage convergence, especially when the compression ratio is high. The most representative technique designed to tackle this issue is error feedback (Stich et al., 2018; Karimireddy et al., 2019), also called...
A
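The error-feedback mechanism described above has a very compact core: the components dropped by the compressor are stored locally and added back to the next gradient, so nothing is lost permanently. A minimal sketch with top-$k$ sparsification as the compressor; plain Python lists stand in for gradient vectors, and the function names are made up:

```python
# Minimal error-feedback sketch with top-k sparsification: compress
# (gradient + carried error), send the sparse part, carry the residual.

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest."""
    idx = sorted(range(len(v)), key=lambda i: -abs(v[i]))[:k]
    out = [0.0] * len(v)
    for i in idx:
        out[i] = v[i]
    return out

def error_feedback_step(grad, error, k):
    """One worker-side step: returns (sent vector, new carried error)."""
    corrected = [g + e for g, e in zip(grad, error)]
    sent = topk_compress(corrected, k)
    new_error = [c - s for c, s in zip(corrected, sent)]
    return sent, new_error

sent, err = error_feedback_step([3.0, -1.0, 0.5], [0.0, 0.0, 0.0], k=1)
# only the largest entry is sent; the rest is carried to the next step
```

The carried residual is what distinguishes this from naive sparsification: every coordinate is eventually transmitted once its accumulated error grows large enough.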
Previous literature has also demonstrated the increased biological plausibility of sparseness in artificial neural networks [24]. Spike-like sparsity on activation maps has been thoroughly researched on the more biologically plausible rate-based network models [25], but it has not been thoroughly explored as a design o...
Using backpropagation [2] the gradient of each weight w.r.t. the error of the output is efficiently calculated and passed to an optimization function such as Stochastic Gradient Descent or Adam [3] which updates the weights making the output of the network converge to the desired output. DNNs were successful in utilizi...
Previous work by Blier et al. [31] demonstrated the ability of DNNs to losslessly compress the input data and the weights, but without considering the number of non-zero activations. In this work we relax the lossless requirement and also consider neural networks purely as function approximators instead of probabilistic ...
The increased number of weights and non-zero activations makes DNNs more complex, and thus more difficult to use in problems that require corresponding causality of the output with a specific set of neurons. The majority of domains where machine learning is applied, including critical areas such as healthcare [26], requ...
$\varphi$ could be seen as an alternative formalization of Occam’s razor [38] to Solomonoff’s theory of inductive inference [39], but with a deterministic interpretation instead of a probabilistic one. The cost of the description of the data could be seen as proportional to the number of weights and the number o...
C
In summary, our work differs significantly from each of the above-mentioned works, and from other literature on UAV ad-hoc networks. As far as we know, our proposed algorithm is capable of learning from previous utilities and strategies, achieving NE with restricted information and constrained strategy sets, and updating strategi...
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better.
When there are multiple UAVs in the network, the coverage areas of different UAVs may overlap. When a UAV overlaps with another, they do not each serve all users but share the mission: the users in the overlap are served randomly with equal probability by each UAV. Fig. 2 presents the overlaps b...
Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm wit...
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR, and coverage size are the key factors deciding the performance of a UAV network [6]. Respectively, power control determines the energy consumption and the signal-to-noise ratio (SNR)...
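To make the coupling between distance (altitude), power, and SNR concrete, here is a toy computation under a free-space path-loss assumption; the surveyed works may use different channel models, and every constant below is illustrative:

```python
import math

def snr_db(p_tx_w, distance_m, f_hz=2.4e9, noise_w=1e-12):
    """Received SNR in dB under a free-space path-loss model with unit antenna
    gains. The channel model and all constants here are illustrative."""
    wavelength = 3e8 / f_hz
    gain = (wavelength / (4 * math.pi * distance_m)) ** 2  # Friis free-space gain
    return 10 * math.log10(p_tx_w * gain / noise_w)

# Raising the UAV (longer link distance) lowers the SNR; power control can
# compensate at the price of higher energy consumption.
near, far = snr_db(0.1, 100.0), snr_db(0.1, 200.0)
print(round(near - far, 2))  # doubling the distance costs ~6 dB
```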
A
In this derivation, it is assumed that the boundary term $\int u\,\mathbf{p}\cdot d\boldsymbol{\Gamma}=0$, i.e., there is no flux of the continuous vector field $u\mathbf{p}$
to density, $\overline{n}$ will automatically evolve to satisfy the natural boundary condition $\left(\zeta\,\nabla_{\perp}n\right)|_{\Gamma}=0$
impose the natural boundary conditions $\left(\nabla_{\perp}\,\omega\right)|_{\Gamma}=0$, and the third term in equation 3.21 will vanish due to
has no component perpendicular to the boundary (i.e., $\mathbf{p}_{\perp}|_{\Gamma}=0$). In the following, we will refer to these conditions as the natural
if no boundary conditions are explicitly applied to $\overline{v}_{\phi}$, the properties of the element-to-node divergence operation will automatically
C
Let $r$ be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12. Its abstract lattice $\mathcal{L}_{r}$ is represented to the right.
If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use $\leq,\land,\lor$ instead of $\leq_{R},\land_{R},\lor_{R}$.
First, remark that both $A\rightarrow B$ and $B\rightarrow A$ are possible. Indeed, if we set $g=\langle b,a\rangle$ or $g=\langle a,1\rangle$, then $r\models_{g}A\rightarrow$...
For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to $A\rightarrow B$ or $B\rightarrow A$.
The tuples $t_{1}$, $t_{4}$ represent a counter-example to $BC\rightarrow A$ for $g_{1}$...
C
$Q^{*}(s,a)=\max_{\pi}Q^{\pi}(s,a)$...
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. We thus ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Class...
Q-learning is among the most widely used reinforcement learning (RL) algorithms [4]. It is an incremental dynamic programming technique that determines the optimal policy through a step-by-step look-up table representation [22]. The Q-learning algorithm employs a table to estimate the optimal action v...
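The step-by-step look-up table update can be sketched on a toy chain environment; the environment and all hyperparameters below are illustrative, not taken from the evaluated setup:

```python
import random

random.seed(0)

# Tabular Q-learning on a toy 1-D chain: states 0..4, actions {0: left, 1: right};
# reaching state 4 yields reward 1 and ends the episode.
N_STATES, ALPHA, GAMMA, EPS = 5, 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection from the look-up table
        a = random.randrange(2) if random.random() < EPS else max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        bootstrap = 0.0 if s2 == N_STATES - 1 else GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (r + bootstrap - Q[s][a])  # step-by-step table update
        s = s2

print(round(Q[0][1], 3))  # converges to gamma**3 = 0.729
```

The table entry for "move right from the start" converges to the discounted value of the three remaining steps to the goal.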
The Gridworld problem (Figure 4) is a common RL benchmark. Its relatively small state space permits the Experience Replay (ER) buffer to store all possible state-action pairs. Moreover, this setup allows for the precise computation of the optimal action value function.
Reinforcement Learning (RL) is a paradigm that addresses learning through interaction with an environment. This is fundamentally different from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein...
B
The scarcity of richly annotated medical images is limiting supervised deep learning-based solutions to medical image analysis tasks (Perone and Cohen-Adad, 2019), such as localizing discriminatory radiomic disease signatures. Therefore, it is desirable to leverage unsupervised and weakly supervised models.
Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while achieving performance comparable to full supervision for the segmentation of cardiac images. Afshari et al. (2019) used a fully convol...
Vorontsov et al. (2019), using a dataset defined in Cohen et al. (2018), proposed an image-to-image framework to transform an input image with an object of interest (presence domain), such as a tumor, into an image without the tumor (absence domain), i.e., to translate a diseased image to a healthy one; next, their model learns to add ...
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic...
Chartsias et al. (2017) used a conditional GAN to generate cardiac MR images from CT images. They showed that utilizing the synthetic data increased the segmentation accuracy and that using only the synthetic data led to only a marginal decrease in the segmentation accuracy. Similarly, Zhang et al. (2018c) proposed a G...
A
In Sec. IV-E we introduced the spectral similarity distance to quantify how much the spectrum of the Laplacian associated with the sparsified adjacency matrix changes when edges smaller than $\epsilon$ are dropped. In Fig. 13 we show how the graph structure (in terms of spectral similarity) varies, when ...
In every example, for small values of $\epsilon$ the structure of the graphs changes only slightly while a large number of edges is dropped. Notably, the spectral similarity increases almost linearly with $\epsilon$, while the edge density decreases exponentially.
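This kind of measurement can be sketched as follows; the maximum eigenvalue gap of the combinatorial Laplacian is used here as a simple stand-in for the spectral similarity distance, and the toy graph and thresholds are illustrative:

```python
import numpy as np

def laplacian_spectrum(A):
    """Sorted eigenvalues of the combinatorial Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

def sparsify(A, eps):
    """Drop (zero out) every edge whose weight is below eps."""
    B = A.copy()
    B[B < eps] = 0.0
    return B

# Toy symmetric weighted graph with one strong edge that survives both thresholds.
rng = np.random.default_rng(0)
W = np.triu(rng.random((8, 8)) * 0.2, 1)
A = W + W.T
A[0, 1] = A[1, 0] = 1.0

dists = []
for eps in (0.05, 0.15):
    d = np.max(np.abs(laplacian_spectrum(A) - laplacian_spectrum(sparsify(A, eps))))
    dists.append(float(d))
    print(eps, round(float(d), 3))  # a larger threshold perturbs the spectrum more
```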
C
SVM: Support vector machine (Chang & Lin, 2011) is a popular classifier that tries to find the best hyperplane that maximizes the margin between the classes. As evaluated by Fernández-Delgado et al. (2014), the best performance is achieved with a radial basis function kernel.
RF: Random forest (Breiman, 2001) is an ensemble-based method consisting of multiple decision trees. Each decision tree is trained on a different randomly selected subset of features and samples. The classifier follows the same overall setup, i.e., 500 decision trees and a maximum depth of ten.
Decision trees learn rules by splitting the data. The rules are easy to interpret and additionally provide an importance score of the features. Random forests (Breiman, 2001) are an ensemble method consisting of multiple decision trees, with each decision tree being trained using a random subset of samples and features...
Random forests are trained with 500 decision trees, which is a common choice in practice (Fernández-Delgado et al., 2014; Olson et al., 2018). The decision trees are constructed up to a maximum depth of ten. For splitting, the Gini impurity is used and $\sqrt{N}$ features ...
In contrast to neural networks, random forests are very robust to overfitting due to their ensemble of multiple decision trees. Each decision tree is trained on randomly selected features and samples. Random forests have demonstrated remarkable performance in many domains (Fernández-Delgado et al., 2014).
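The Gini-based splitting criterion mentioned above can be sketched in a few lines; this is a pure-Python illustration, not the library implementation:

```python
from collections import Counter

def gini(labels):
    """Gini impurity 1 - sum_k p_k^2 of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_gain(left, right):
    """Impurity decrease of a candidate split: parent minus weighted children."""
    n = len(left) + len(right)
    children = (len(left) * gini(left) + len(right) * gini(right)) / n
    return gini(left + right) - children

print(gini(["a", "a", "b", "b"]))          # 0.5: maximally impure two-class node
print(split_gain(["a", "a"], ["b", "b"]))  # 0.5: a perfect split removes all impurity
```

A tree grows by choosing, at each node, the feature threshold with the largest such gain; a forest repeats this over random subsets of samples and features.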
A
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ...
We studied the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed OPPO, which incorporates the principle of “optimism in the face of uncertainty” into po...
step with $\alpha\rightarrow\infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K=H$ episodes and hence equivalently induces...
The policy improvement step defined in (3.2) corresponds to one iteration of NPG (Kakade, 2002), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017). In particular, PPO solves the same KL-regularized policy optimization subproblem as in (3.2) at each iteration, while TRPO solves an equivalent KL-constrained s...
To answer this question, we propose the first policy optimization algorithm that incorporates exploration in a principled manner. In detail, we develop an Optimistic variant of the PPO algorithm, namely OPPO. Our algorithm is also closely related to NPG and TRPO. At each update, OPPO solves a Kullback-Leibler (KL)-regu...
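In the tabular case, the KL-regularized subproblem shared by these updates has a closed-form solution: the new policy is the old one exponentially reweighted by the action values. A minimal sketch (the toy policy and Q-values are illustrative):

```python
import math

def kl_regularized_update(pi, q, alpha):
    """Closed-form maximiser of <p, q> - (1/alpha) * KL(p || pi) over the
    probability simplex: p(a) proportional to pi(a) * exp(alpha * q(a))."""
    unnorm = [p * math.exp(alpha * qa) for p, qa in zip(pi, q)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

pi = [0.25, 0.25, 0.25, 0.25]  # current policy at some state
q = [1.0, 0.0, 0.0, 0.0]       # toy action values
for alpha in (0.1, 1.0, 10.0):
    print(alpha, round(kl_regularized_update(pi, q, alpha)[0], 3))
# small alpha: a conservative step near pi; alpha -> infinity: the greedy policy
```

This makes the remark about $\alpha\to\infty$ concrete: as the regularization vanishes, the update collapses onto the greedy (policy-iteration) step.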
D
We use a DenseNet architecture (Huang et al., 2017) consisting of 100 layers with bottleneck and compression layers, i.e., a DenseNet-BC-100. We select the default growth rate of $k=12$ for the model, i.e., the number of feature maps added per layer.
In this section, we provide a comprehensive overview of methods that enhance the efficiency of DNNs regarding memory footprint, computation time, and energy requirements. We have identified three different major approaches that aim to reduce the computational complexity of DNNs, i.e., (i) weight and activation quantiza...
The horizontal red line shows the error of the real-valued baseline. Quantization is performed using different bit widths in three different modes: activation-only (blue), weight-only (green), and combined weight and activation quantization (purple).
We selected some of the most popular quantization approaches (see Section 3.1) for our comparison: binary weight networks (BWN) (Courbariaux et al., 2015b), binarized neural networks (BNN) (Hubara et al., 2016), DoReFa-Net (Zhou et al., 2016), trained ternary quantization (TTQ) (Zhu et al., 2017), and LQ-Net (Zhang et ...
In Wu et al. (2018b), weights, activations, weight gradients, and activation gradients are subject to customized quantization schemes that allow for variable bit widths and facilitate integer arithmetic during training and testing. In contrast to Zhou et al. (2016), the work of Wu et al. (2018b) accumulates weight chan...
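The variable-bit-width quantizers these methods build on can be sketched as a uniform k-bit rounding step; this is a forward-pass illustration only, and real methods additionally define a straight-through estimator for the backward pass:

```python
def quantize(x, bits):
    """Uniform k-bit quantizer on [0, 1] (a DoReFa-style forward-pass sketch):
    clip, then round to the nearest of 2**bits - 1 evenly spaced levels."""
    levels = (1 << bits) - 1
    x = min(max(x, 0.0), 1.0)  # clip to the quantization range
    return round(x * levels) / levels

print(quantize(0.3, 1))  # 1 bit: only the levels 0.0 and 1.0 survive -> 0.0
print(quantize(0.3, 2))  # 2 bits: levels {0, 1/3, 2/3, 1} -> 1/3
print(quantize(0.3, 8))  # 8 bits: already close to the real value
```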
C
$\{v_{0},v_{27}\}+\{v_{27},v_{28}\}+\{v_{28},v_{14}\}+\{v_{14},v_{29}\}+\{v_{29},v_{23}\}+\{v_{23},v_{30}\}+\{v_{30},v_{31}\}+\{v_{31},v_{0}\},$
$\omega_{0}$ is the degree-1 homology class induced by
$\omega_{1}$ is the degree-1 homology class induced by
$\omega_{2}$ is the degree-1 homology class induced by
and seeks the infimal $r>0$ such that the map induced by $\iota_{r}$ at the $n$-th homology level annihilates the fundamental class $[M]$ of $M$. This infimal value defines $\mathrm{FillRad}(M)$...
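Spelled out, the definition sketched in that sentence reads as follows; the notation is reconstructed from the surrounding text, with $U_r(M)$ denoting the neighbourhood into which $\iota_r$ includes $M$:

```latex
\mathrm{FillRad}(M) \;=\; \inf\bigl\{\, r > 0 \;:\; (\iota_r)_{*}\,[M] = 0
\ \text{in}\ H_{n}\bigl(U_r(M)\bigr) \,\bigr\}.
```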
B
A quick visual inspection of the two tables already hints that t-viSNE has higher scores than GEP in all components, with all cells being green-colored (as opposed to GEP’s table, which contains many red-colored cells). Indeed, the smallest score for t-viSNE was 4.75, while GEP got many scores under 4 (or even unde ...
The observed conclusions are confirmed when we compare the component-wise CIs for both groups—since none of them overlap—and the results of all component-wise Mann-Whitney U tests, with all U’s well below the critical value of 47, showing that t-viSNE had significantly larger scores in all four ICE-T components. These ...
As described in Subsection 6.1, we complemented the data from the tasks themselves by using the ICE-T methodology and questionnaire to gather and compare structured user feedback from both groups. The scores obtained from all participants, for all ICE-T components, can be seen in Table II. Larger is better, with green ...
The goals of the experiment are defined by two research questions, RQ1: “Will the users spend the same time performing the tasks in both tools?”, and RQ2: “Will both tools provide, from the users’ perspective, the same level of support for the given tasks?” Thus, we were interested in checking the completion time for t...
A
Metaheuristic optimization algorithms: an overview - 2024 [39]: This paper focuses on studying the main components and concepts of optimization. More specifically, the overview provides the advantages (agnostic to the problem being solved, gradient independence, global search capability, the capability of dealing with...
Theoretical studies: by means of the fitness landscape, for a better understanding of how a search algorithm can perform on a family of problem instances, and of multidisciplinary theories to study the role of diversity and the balance of local search and global search required to tackle a certain problem efficientl...
50 years of metaheuristics - 2024 [40]: This overview traces the last 50 years of the field, starting from the roots of the area to the latest proposals to hybridize metaheuristics with machine learning. The revision encompasses constructive (GRASP and ACO), local search (iterated local search, Tabu search, variable n...
In the last update of this report, released four years after its original version, we note that the nature- and bio-inspired optimization field has evolved. There is an excessive use of the biology-inspired approach as opposed to the real problem-solving approach to tackle real and complex...
B
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update ...
Like the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is a representative kind of clustering method. Graph-based clustering methods can capture manifold information and are thus applicable to non-Euclidean data, which $k$-means cannot handle. Therefore,...
However, the existing methods are limited to graph-type data, while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on graphs, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods. In this paper, we propo...
(1) By extending generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec...
In recent years, GCNs have been studied extensively to extend neural networks to graph-type data. How to design a graph convolution operator is a key issue and has attracted much attention. Most approaches fall into two categories: spectral methods [24] and spatial methods [25].
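As a stand-in for the learned sparse graph, a fixed $k$-nearest-neighbour construction illustrates how a graph can be imposed on general (non-graph) data; this is illustrative and not AdaGAE's generative model from Eq. (7), and the Gaussian bandwidth is a free choice:

```python
import numpy as np

def knn_graph(X, k):
    """Symmetric k-nearest-neighbour adjacency with Gaussian edge weights.
    The bandwidth (mean squared distance) is an illustrative choice."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma2 = d2.mean() + 1e-12
    A = np.zeros_like(d2)
    for i in range(len(X)):
        nn = np.argsort(d2[i])[1:k + 1]  # nearest neighbours, skipping the point itself
        A[i, nn] = np.exp(-d2[i, nn] / sigma2)
    return np.maximum(A, A.T)  # symmetrise

# Two well-separated Gaussian clusters of 5 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(3.0, 0.1, (5, 2))])
A = knn_graph(X, k=2)
print(float(A[:5, 5:].sum()))  # 0.0: no edges cross between the two clusters
```

Because the graph is sparse and block-structured, any downstream graph-based clustering (or a GAE) can recover the two clusters from it.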
B
SMap first collects the dataset of services. Our dataset is constructed as follows: we periodically download the entire IPv4 scan from the Sonar Project (son, [n. d.]). We use the scan results on UDP port 53 as input for name servers and DNS resolvers, scan data on TCP port 25 for mail servers, and scan results on TCP port ...
The SMap architecture consists of two parts: a dataset scan and an ingress-filtering scan. The dataset scan collects the popular services using two methods: a domain-based scan and an IPv4-based scan. In the IPv4 scan, to locate the services, SMap probes every IP, checking for open ports that correspond to the services we need; for i...
There is a strong correlation between the AS size and the enforcement of filtering of spoofed packets; see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th...
Our work provides the first comprehensive view of ingress filtering in the Internet. We showed how to improve the coverage of ingress filtering measurements to include many more ASes that were previously not studied. Our techniques allow us to cover more than 90% of the Internet’s ASes, in contrast to the best ...
Both the domain scan and the IPv4 scan show that the number of spoofable ASes grows with the overall number of ASes in the Internet; see Figure 1. Furthermore, there is a correlation between the fraction of scanned domains and ASes: essentially, the more domains are scanned, the more ASes are covered, and the more spoofable ASes a...
D
More specifically, natural odors consist of complex and variable mixtures of molecules present at variable concentrations [4]. Sensor variance arises from environmental dynamics of temperature, humidity, and background chemicals, all contributing to concept drift [5], as well as sensor drift arising from modification ...
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data a...
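The leave-future-batch-out protocol in the caption can be sketched as a simple split generator; the batch contents below are placeholder tuples:

```python
# Train on batches 1..T-1, evaluate on the unseen batch T, for every T in turn.
batches = [[("sample", b, i) for i in range(4)] for b in range(1, 11)]

def splits(batches, min_train=2):
    """Yield (train, test) pairs: all batches before T flattened, then batch T."""
    for t in range(min_train, len(batches)):
        yield [s for b in batches[:t] for s in b], batches[t]

pairs = list(splits(batches))
print(len(pairs))                          # 8 evaluation rounds for 10 batches
print(len(pairs[0][0]), len(pairs[0][1]))  # first round: 8 train samples, 4 test
```

Keeping the test batch strictly in the future of the training batches is what makes the evaluation sensitive to sensor and concept drift.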
This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ...
The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer....
While natural systems cope well with changing environments and embodiments, such changes pose a serious challenge for artificial systems. For instance, to remain reliable over time, gas sensing systems must be continuously recalibrated as their physical environment changes. Drawing motivation from nature, this pape...
B
Let $t^{+}_{i}\in\mathcal{T}^{+}$, and let $q_{1}$ be a poi...
$A^{(3)}[i,q_{1},q_{2}]:=$ the length of the shortest path from $q_{1}$ to $q_{2}$ that visits all points in $P[1,\mathrm{st}(i)]$, such that the neighbour of $q_{2}$ is a point in $P[1,\mathrm{st}(i)-5k^{2}]$.
Not shown is the property that the neighbour of $q_{2}$ is a point in $P[1,\mathrm{st}(i)-5k^{2}]$.
Case (ii): $p\in P[st^{+}\!(i-1)-5k^{2}+1,\,\mathrm{st}(i)-5k^{2}]$...
A
We conclude this section by presenting a pair $S,T$ of semigroups without a homomorphism $S\to T$ or $T\to S$, where $S$ and $T$ possess typical properties of automaton semigroups, which makes them good candidates for also belong...
A semigroup $S$ is generated by a set $Q$ if every element $s\in S$ can be written as a product $q_{1}\dots q_{n}$ of factors from $Q$...
A semigroup arising in this way is called self-similar. Furthermore, if the generating automaton is finite, it is an automaton semigroup. If the generating automaton is additionally complete, we speak of a completely self-similar semigroup or of a complete automaton semigroup.
The word problem of a semigroup finitely generated by some set $Q$ is the decision problem whether two input words over $Q$ represent the same semigroup element. The word problem of any automaton semigroup can be solved in polynomial space and, under common complexity-theoretic assumptions, this cann...
The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem...
C
In our next experiment we studied how random visual cues performed with HINT and SCR. We assign random importance scores to the visual regions: $\mathcal{S}_{rand}\sim\mathrm{uniform}(0,1)$...
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible...
Percentage of Overlaps: To further check if the variants trained on irrelevant or random regions gain performance in a manner similar to the models trained on relevant regions, we compute the overlap between their predictions on VQA-CPv2’s test set. The percentage of overlap is defined as:
To perform the tests, we first randomly sample 5000 subsets of non-overlapping test instances. We then average the accuracy of each subset across 5 runs, obtaining 5000 values. Next, we run the t-tests for HINT and SCR separately on the subset accuracies. As shown in Table 2, the $p$...
To test whether the changes in results were statistically significant, we performed Welch’s t-tests (Welch, 1938) on the predictions of the variants trained on relevant, irrelevant, and random cues. We pick Welch’s t-test over Student’s t-test because the latter assumes equal variances for predictions from different var...
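Welch's statistic differs from Student's only in the denominator: the two sample variances are not pooled. A minimal sketch (the accuracy values below are made up):

```python
import math

def welch_t(a, b):
    """Welch's t statistic: unlike Student's t, the two sample variances are
    not pooled, so unequal variances are handled correctly."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Made-up accuracy samples for two variants with nearly identical means:
# |t| stays small, i.e., the difference is not significant.
relevant = [0.62, 0.63, 0.61, 0.64, 0.62]
random_cues = [0.61, 0.64, 0.62, 0.63, 0.61]
print(round(welch_t(relevant, random_cues), 3))
```

The p-value then follows from the t distribution with the Welch–Satterthwaite degrees of freedom.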
D
Prior collections of privacy policy corpora have led to progress in privacy research. Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categorie...
Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague word...
Natural language processing (NLP) provides an opportunity to automate the extraction of salient details from privacy policies, thereby reducing human effort and enabling the creation of tools for internet users to understand and control their online privacy. Existing research has achieved some success using expert ann...
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application...
A
Evaluation of the Results with the Test Data Set. To confirm that our findings are solid, we applied the resulting metamodel to the same test data as Skeppstedt et al. [51], see Table 1. For the hypotheticality category, the reported f1-score for the baseline approach was 66%.
In our case, we reached the following results with the final stack and weighted-average: 94.46% for accuracy, 93.87% for precision, 94.46% for recall, and f1-score of 93.87%. Additionally, as an extra validation, we checked the results for the prediction category (again as a binary classification problem).
To illustrate how to choose different metrics (and with which weights), we start our exploration by selecting the heart disease data set in StackGenVis (a). Knowing that the data set is balanced, we pick accuracy (weight...
Weighted-average calculates the metrics for each label and finds their average weighted by support (the number of true instances for each label). The data set is a binary classification problem and contains 165 diseased and 138 healthy patients. Hence, we choose weighted-average to weight the importance of the largest cla...
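Support-weighted averaging as described can be sketched for one metric (recall); the tiny imbalanced example is illustrative:

```python
from collections import Counter

def per_label_recall(y_true, y_pred, label):
    """Recall for one label: correctly predicted instances over its support."""
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    support = sum(1 for t in y_true if t == label)
    return hits / support

def weighted_average_recall(y_true, y_pred):
    """Per-label recalls averaged with weights proportional to support."""
    n = len(y_true)
    return sum(per_label_recall(y_true, y_pred, lab) * c / n
               for lab, c in Counter(y_true).items())

# Tiny imbalanced example: the majority class dominates the weighted average.
y_true = ["sick"] * 4 + ["healthy"] * 1
y_pred = ["sick"] * 5  # a degenerate model that always predicts "sick"
print(weighted_average_recall(y_true, y_pred))  # 0.8 = 1.0 * 4/5 + 0.0 * 1/5
```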
A
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]),\,p(v,[323]),\,p(v,[313]),\,p(v,[003]))$...
$\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}.$
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
$p(v,[013])=p(v,[313])=p(v,[113])=1$. Similarly, when $f=[112]$,
C
Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2). When the data quantity i...
Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks differ from each other. We shuffle the samples and randomly divide them into tasks to construct a setting in which tasks are similar to each other. For a fair comparison, each task in this setting also has 120 and 1200 utterances o...
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as "tasks" varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla ...
For both BLEU and C Score, Jac Score is around 1 in each cluster, which means the persona descriptions are not similar. The dialogue quantity also seems similar among different clusters. So we can conclude that data quantity and task profile do not have a major impact on the fine-tuning process.
B
For the CCA-enabled UAV mmWave network, the array size is usually large and the corresponding inter-element distance $\Delta\phi$ is small. Therefore, it is assumed that $\Delta\alpha$ and $\Delta\beta$ satisfy $\Delta\phi_{c}\leq\Delta\alpha$
The analog precoding architecture adopted for DRE-covered CCA is shown in Fig. 2 [13], which tunes the partially-connected precoding architecture by adapting the connection between the RF chains and the antenna elements to the channel variation and forming dynamic subarrays. For a fixed time slot, the precoding archite...
After the discussion on the characteristics of CCA, in this subsection, we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the ...
Given the maximum resolution of the codebook, we continue to discuss the multi-resolution characteristics and the beamwidth of the CCA codebook. For the multi-resolution codebook, the variable resolution is tuned by the beamwidth, which is determined by the number of activated elements [12]. Note that the ...
For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac...
A
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
After the merging, the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$. We perform the...
C
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et...
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T...
corresponding to $\theta^{(m)}(k)=(\theta_{1}(k),\ldots,\theta_{m}(k))\in\mathbb{R}^{D\times m}$
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe...
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear...
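For contrast with the nonlinear setting discussed above, the following minimal sketch runs plain semi-gradient TD(0) with linear (one-hot) features on an invented two-state chain; it is illustrative only and is not the overparameterized two-layer network setting analyzed in the paper:

```python
import random

# Minimal semi-gradient TD(0) with linear (one-hot) features on an
# invented two-state chain with uniform random transitions; purely
# illustrative, not the two-layer network setting analyzed here.

random.seed(0)
features = {0: [1.0, 0.0], 1: [0.0, 1.0]}  # one-hot state features
rewards = {0: 0.0, 1: 1.0}                 # reward received on leaving a state
gamma, alpha = 0.9, 0.1
theta = [0.0, 0.0]

def value(s):
    return sum(w * x for w, x in zip(theta, features[s]))

s = 0
for _ in range(2000):
    s_next = random.choice([0, 1])
    # TD error: r + gamma * V(s') - V(s); update along grad_theta V(s)
    delta = rewards[s] + gamma * value(s_next) - value(s)
    theta = [w + alpha * delta * x for w, x in zip(theta, features[s])]
    s = s_next

# TD fixed point of this chain: V(0) = 4.5, V(1) = 5.5
print([round(w, 1) for w in theta])
```

With one-hot features the update is tabular, so the iterates settle near the TD fixed point; with genuinely nonlinear approximators this stability is exactly what is at stake above.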
D
He et al. (2016) present the residual learning framework to ease the training of deep neural networks. Srivastava et al. (2015) propose the highway network which contains a transform gate and a carry gate to control the produced output and the input. Chai et al. (2020) propose a highway Transformer with a self-gating ...
For the convergence of deep Transformers, Bapna et al. (2018) propose the Transparent Attention mechanism which allows each decoder layer to attend weighted combinations of all encoder layer outputs. Wang et al. (2019) present the Dynamic Linear Combination of Layers approach that additionally aggregates shallow layers...
For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et...
Yu et al. (2018) suggest that skip connections are “shallow” themselves and only fuse by simple, one-step operations; they therefore augment standard architectures with deeper aggregation to better fuse information across layers and improve recognition and resolution. Shen et al. (2018) propose a densel...
We examine whether depth-wise LSTM has the ability to ensure the convergence of deep Transformers and measure performance on the WMT 14 English to German task and the WMT 15 Czech to English task following Bapna et al. (2018); Xu et al. (2020a), and compare our approach with the pre-norm Transformer in which residual ...
C
$\langle X,\uptau,\mathcal{L}\rangle$. Hence $\langle\mathcal{L}^{\prime}\rangle=\langle\uptau\cap\mathcal{L}\rangle$.
Since $U\in\mathcal{L}^{\prime}$, $U$ is compact in $\langle\mathcal{L}^{\prime}\rangle$, which means
$\mathcal{L}^{\prime}$ to only define compact sets is asking for $\langle\mathcal{L}^{\prime}\rangle$ not to contain too many sets.
property. Finally, sets in $\mathcal{L}^{\prime}$ define compact sets in $\langle\mathcal{L}^{\prime}\rangle$ because it is
B
As mentioned above, most previous learning methods correct the distorted image based on the distortion parameters estimation. However, due to the implicit and heterogeneous representation, the neural network suffers from the insufficient learning problem and imbalance regression problem. These problems seriously limit...
In contrast to the long history of traditional distortion rectification, learning-based methods have studied distortion rectification only in the last few years. Rong et al. [8] quantized the values of the distortion parameter into 401 categories based on the one-parameter camera model [22] and then trained a network to classify...
In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a probl...
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even when the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed o...
The ordinal distortion represents the image feature in terms of the distortion distribution, which is jointly determined by the distortion parameters and location information. We assume that the camera model is the division model, and the ordinal distortion $\mathcal{D}$ can be defined as
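Since the defining equation is not reproduced in this excerpt, the following is only a plausible sketch under a one-parameter division model: we take the distortion level at normalized radius $r$ to be $1/(1+kr^2)$ and the ordinal distortion to be the sequence of levels at increasing radii. The function names, radii, and coefficient are our own assumptions, not the paper's exact definition:

```python
# Plausible sketch only: the distortion level at normalized radius r under
# a one-parameter division model is taken here as 1 / (1 + k * r^2), and
# the ordinal distortion as the levels at increasing radii. Names, radii,
# and k are our assumptions, not the paper's exact definition.

def distortion_level(r, k):
    return 1.0 / (1.0 + k * r * r)

def ordinal_distortion(radii, k):
    return [distortion_level(r, k) for r in radii]

radii = [0.25, 0.5, 0.75, 1.0]              # distances from the image center
levels = ordinal_distortion(radii, k=-0.3)  # k < 0: barrel-like distortion
print([round(v, 3) for v in levels])        # levels increase with radius
```

The ordinal property is visible directly: for a fixed sign of $k$, the levels are monotone in the radius, which is what makes the representation learnable as an ordered sequence.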
D
Apart from these empirical findings, there have been some theoretical studies on large-batch training. For example, the convergence analyses of LARS have been reported in [34]. The work in [37] analyzed the inconsistency bias in decentralized momentum SGD and proposed DecentLaM for decentralized large-batch training.
We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
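The linear decay schedule mentioned here can be sketched as follows (a generic linear decay without warm-up, consistent with the no-warm-up setup above; the exact default scheduler in the Transformers framework may differ in details):

```python
# Generic linear learning-rate decay without warm-up (sketch; the exact
# default scheduler in the Transformers framework may differ in details).

def linear_decay(step, total_steps, base_lr):
    return base_lr * max(0.0, 1.0 - step / total_steps)

schedule = [round(linear_decay(t, 10, 0.1), 3) for t in range(11)]
print(schedule)  # decays linearly from 0.1 down to 0.0
```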
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the fou...
Furthermore, researchers in [19] argued that the extrapolation technique is suitable for large-batch training and proposed EXTRAP-SGD. However, experimental implementations of these methods still require additional training tricks, such as warm-up, which may make the results inconsistent with the theory.
Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33] proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11]
C
For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here, $\mathcal{F}$ and $\mathcal{C}$
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ...
The most general way to represent the scenario distribution $\mathcal{D}$ is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios $A$ according to $\mathcal{D}$. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the ...
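A minimal sketch of the black-box model: the only access to the distribution is an oracle that draws scenarios, so expected stage-II costs are estimated by sampling. The oracle, the one-dimensional client scenarios, and the candidate facility positions below are all invented for illustration:

```python
import random

# Black-box model sketch: the distribution D is available only through a
# sampling oracle. Everything concrete below (the oracle, one-dimensional
# clients, candidate facility positions) is invented for illustration.

random.seed(1)

def oracle():
    # sample a scenario: 3 client locations drawn uniformly from [0, 1]
    return [random.random() for _ in range(3)]

def stage2_cost(facility, scenario):
    # connection cost of the realized clients to one open facility
    return sum(abs(c - facility) for c in scenario)

def estimate_cost(facility, num_samples=5000):
    # sample-average approximation of the expected stage-II cost
    samples = (stage2_cost(facility, oracle()) for _ in range(num_samples))
    return sum(samples) / num_samples

# a central facility should beat a boundary one in expectation
print(round(estimate_cost(0.5), 2), round(estimate_cost(0.0), 2))
```

This is the sample-average-approximation view of the black-box model: decisions are compared through Monte Carlo estimates because $\mathcal{D}$ itself is never written down.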
$\mathcal{D}$ corresponds to a distribution of disease outcomes. Health departments prepared some vaccination and testing sites in advance, based on projected demands [5], i.e., in stage-I, which may have multiple benefits; for example, the necessary equipment and materials might be cheaper and easier to...
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions. To continue this example, there may be further constraints on $F_{I}$
C
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp...
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi...
As a result, the existing methods are no longer applicable. In fact, the inner product of the subgradients and the error between the local optimizers' states and the global optimal solution inevitably appears in the recursive inequality of the conditional mean square error, which leads the nonnegative supermartingale converg...
(Lemma 3.1). To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (...
That is, the mean square error at the next time can be controlled by that at the previous time and the consensus error. However, this cannot be obtained for the case with linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded and the inequality (28) in [15] does n...
A
Observing from Figure 7(a), the information loss of MuCo increases as the parameter $\delta$ decreases. According to Corollary 3.2, each QI value in the released table corresponds to more records as $\delta$ is reduced, so that more records have to be involved in covering on the QI ...
In this experiment, we use the approach of aggregate query answering [37] to check the information utility of MuCo. We randomly generate 1,000 queries and calculate the average relative error rate for comparison. The queries take the following form: SELECT SUM(salary) FROM Microdata
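The relative-error measurement described here can be sketched as follows, with a synthetic salary table and a hypothetical anonymized view standing in for MuCo's actual output:

```python
import random

# Sketch of the aggregate-query utility check: run random range queries of
# the SUM(salary) form quoted above against the original table and an
# anonymized view, and report the average relative error. The table and
# the +/-5% perturbation are synthetic stand-ins, not MuCo itself.

random.seed(7)
original = [random.randint(20_000, 90_000) for _ in range(300)]
anonymized = [s * random.uniform(0.95, 1.05) for s in original]

def relative_error(true_answer, noisy_answer):
    return abs(noisy_answer - true_answer) / true_answer

errors = []
for _ in range(1000):
    i, j = sorted(random.sample(range(300), 2))
    t = sum(original[i:j + 1])      # SELECT SUM(salary) over a range
    a = sum(anonymized[i:j + 1])    # same query on the anonymized view
    errors.append(relative_error(t, a))

print(round(sum(errors) / len(errors), 4))
```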
This experiment measures the information loss of MuCo. Note that the mechanism of MuCo differs substantially from that of generalization. Thus, for the sake of fairness, we compare the information loss of MuCo and Mondrian when they provide the same level of protection. Then, the experiment measures the effectivene...
We observe that the results of MuCo are much better than those of Mondrian and Anatomy. The primary reason is that MuCo retains most of the distributions of the original QI values and the results of queries are specific records rather than groups. Consequently, the accuracy of query answering of MuCo is much better and mo...
Results from Figure 10 show that the increase of $l$ lowers the information loss but raises the relative error rate. This is mainly because the number of tuples in each group increases with the growth of $l$. On the one hand, in random output tables, the probabilities that tuples have to cover on the Q...
A
Table 3: PointRend’s performance on the testing set (track B). “EnrichFeat” means enhancing the feature representation of the coarse mask head and point head by increasing the number of fully-connected layers or their hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat yield little improvement; we guess...
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared...
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe...
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both the box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. The mask scoring head Huang et al. (2019) adopted on the third stage gains an...
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62....
A
$I(f)<1,\quad\text{and}\quad H(|\hat{f}|^{2})>\frac{n}{n+1}\log n.$
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s...
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma...
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$...
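The entropy in question can be computed directly: by Parseval, a function of $L_2$ norm $1$ yields squared Fourier coefficients that form a probability distribution, and $H$ is its base-2 Shannon entropy under the $0\log 0:=0$ convention. A small sketch (the example spectra are our own):

```python
from math import log2

# H(|f_hat|^2) as above: if f has L2 norm 1, Parseval makes the squared
# Fourier coefficients {|f_hat(A)|^2} a probability distribution over
# subsets A of [n]; H is its Shannon entropy to base 2, with 0*log 0 := 0.

def entropy(weights):
    assert abs(sum(weights) - 1.0) < 1e-9  # must be a distribution
    return sum(-w * log2(w) for w in weights if w > 0)

# A character (e.g. a parity function) puts all mass on one coefficient:
print(entropy([1.0]))                  # -> 0.0
# Uniform mass over all 2^n coefficients attains the maximum, n:
n = 4
print(entropy([1 / 2 ** n] * 2 ** n))  # -> 4.0
```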
C
Using the master algorithm to select the window size is reminiscent of model selection approaches to online RL (Agarwal et al., 2017; Pacchiano et al., 2020; Lee et al., 2021; Abbasi-Yadkori et al., 2020). Typically model selection approaches achieve worse rates compared to the best base algorithm. However, in our sett...
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. They are much lower than those of MASTER, OPT-WLSVI, LSVI-UCB, and Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and th...
From Figure 1, we see that LSVI-UCB-Restart with knowledge of the global variation drastically outperforms all other methods designed for stationary environments, in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the $Q$ function with knowledge of the total variatio...
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 202...
In this section, we perform empirical experiments on synthetic datasets to illustrate the effectiveness of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart. We compare the cumulative rewards of the proposed algorithms with five baseline algorithms: Epsilon-Greedy (Watkins, 1989), Random-Exploration, LSVI-UCB (Jin et al., 2020...
D
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures 1 and 2), which is statistically significant ($r(9)=-0.81$, $p<.005$). Trust is built on transparency and truthfulness, and t...
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover...
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic...
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,...
D
In the entity alignment task, we leverage the widely used DBP15K datasets [15, 16, 17, 28, 34, 36, 40, 18] in our experiment. These datasets encompass three entity alignment settings, each comprising two linked KGs in different languages. For instance, the ZH-EN dataset involves the alignment between Chinese and Englis...
In the entity prediction task, we use four prominent datasets: (1) FB15K which is a dataset that has been widely used for many years and includes Freebase entities and their relations [11, 33, 66, 67]; (2) WN18 which is another extensively used dataset comprising entities and relations from WordNet [11, 30, 31, 32, 34,...
In Table 8, we present more detailed entity prediction results on open-world FB15K-237, considering the influence of different decoders. Our observations indicate that decentRL consistently outperforms the other methods across most metrics when using TransE and DistMult as decoders. Furthermore, we provide results on ...
Table 6 and Table 7 present the results for conventional entity prediction. decentRL demonstrates competitive or even superior performance when compared to state-of-the-art methods on the FB15K and WN18 benchmarks, showcasing its efficacy in entity prediction. While on the FB15K-237 and WN18RR datasets, the performanc...
Table 2 provides a comparison between the datasets before and after the re-splitting. Although we sample 20% of the entities in the testing set as new entities, the number of triplets removed from the training set exceeds 20%. As a result, the new datasets present a greater challenge for all KG embedding methods. Notabl...
A
In this work, we consider self-supervised exploration without extrinsic reward. In such a case, the above trade-off narrows down to a pure exploration problem, aiming at efficiently accumulating information from the environment. Previous self-supervised exploration typically utilizes ‘curiosity’ based on prediction-err...
We observe that our method performs the best in most of the games, in both the sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are ...
The related exploration methods aim to remove the stochasticity of the dynamics rather than modeling it. For example, Inverse Dynamics [10], Random Features [11], and EMI [30] learn a feature space to remove the task-irrelevant information in feature space such as white-noise. Curiosity-Bottleneck [31] and Dynamic Bot...
An ordinary encoder-decoder based dynamics model that makes deterministic predictions often fails to capture the multimodality and stochasticity in dynamics and outputs an averaged prediction. An intuitive example is given in Fig. 1: there are two roads (one from the left, and the other from the right) to reach the go...
In this paper, we propose the Variational Dynamic Model (VDM), which models the multimodality and stochasticity of the dynamics explicitly based on conditional variational inference. VDM considers the environmental state-action transition as a conditional generative process by generating the next-state prediction unde...
B
To this day, the classic Gauss quadrature formula is the best approach to approximating integrals $I_{\mathrm{Gauss}}(f)\approx\int_{\Omega}f(x)\,\mathrm{d}x$...
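As a concrete instance, the two-point Gauss-Legendre rule on $[-1,1]$ uses nodes $\pm 1/\sqrt{3}$ with unit weights and integrates all polynomials up to degree $2n-1=3$ exactly:

```python
from math import sqrt

# Two-point Gauss-Legendre rule on [-1, 1]: nodes at +/- 1/sqrt(3) with
# unit weights; exact for all polynomials up to degree 2n - 1 = 3.

NODES = [-1 / sqrt(3), 1 / sqrt(3)]
WEIGHTS = [1.0, 1.0]

def gauss2(f):
    return sum(w * f(x) for w, x in zip(WEIGHTS, NODES))

print(gauss2(lambda x: x * x))   # integral of x^2 over [-1, 1] is 2/3
print(gauss2(lambda x: x ** 3))  # odd integrand: the integral is 0
```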
However, we only use the $P_{A}$, $A=A_{m,n,p}$, $p=1,2$, unisolvent nodes to determine the interpolants, whereas Tr...
convergence rates for the Runge function, as a prominent example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality....
We complement the established notion of unisolvent nodes by the dual notion of unisolvence. That is: for given arbitrary nodes $P$, determine the polynomial space $\Pi$ such that $P$ is unisolvent with respect to $\Pi$. In doing so, we revisit earlier results by Carl de Boor and Amon Ros...
Leslie Greengard, Christian L. Mueller, Alex Barnett, Manas Rachh, Heide Meissner, Uwe Hernandez Acosta, and Nico Hoffmann are deeply acknowledged for their inspiring hints and helpful discussions. Further, we are grateful to Michael Bussmann and thank the whole CASUS institute (Görlitz, Germany) for hosting stimulatin...
D
The Wasserstein distance, as a particular case of IPM, is popular in many machine learning applications. However, a significant challenge in utilizing the Wasserstein distance for two-sample tests is that the empirical Wasserstein distance converges at a slow rate due to the complexity of the associated function space....
While the Wasserstein distance has wide applications in machine learning, the finite-sample convergence rate of the Wasserstein distance between empirical distributions is slow in high-dimensional settings. We propose the projected Wasserstein distance to address this issue.
Our two-sample testing algorithm also gives us interpretable characterizations for understanding differences between two high-dimensional distributions, by studying the worst-case projection mappings and projected samples in low dimensions. See Fig. 2(a) for the optimized linear mapping so that the Wasserstein distanc...
Recently, [32, 33, 34] naturally extend this idea by projecting data points into a $k$-dimensional linear subspace with $k>1$ such that the 2-Wasserstein distance after projection is maximized. Our proposed projected Wasserstein distance is similar to this framework, but we use the 1-Wasserst...
Typical examples include principal component analysis [27], linear discriminant analysis [28], etc. It is intuitive to understand the differences between two collections of high-dimensional samples by projecting those samples into low-dimensional spaces in some proper directions [29, 30, 31, 6, 32, 33, 34].
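A minimal sketch of the projection idea: map samples to one dimension along a direction and evaluate the 1-Wasserstein distance via its sorted-samples closed form. The fixed projection directions and the Gaussian toy data below are our own choices; the framework above optimizes over projections rather than fixing one:

```python
import random

# Sketch: project samples to 1-D along a fixed direction, then use the
# closed form for W1 in one dimension (mean absolute difference of sorted
# samples). The directions and Gaussian toy data are our own choices; the
# projected Wasserstein distance optimizes over directions instead.

random.seed(3)

def project(samples, direction):
    norm = sum(d * d for d in direction) ** 0.5
    return [sum(x * d for x, d in zip(s, direction)) / norm for s in samples]

def w1_1d(a, b):
    # equal sample sizes assumed: W1 = (1/n) * sum |a_(i) - b_(i)|
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

# two 2-D distributions that differ only in the first coordinate
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2000)]
Y = [(random.gauss(2, 1), random.gauss(0, 1)) for _ in range(2000)]

d_informative = w1_1d(project(X, (1, 0)), project(Y, (1, 0)))
d_blind = w1_1d(project(X, (0, 1)), project(Y, (0, 1)))
print(round(d_informative, 2), round(d_blind, 2))
```

The informative direction recovers the mean shift of about 2, while the orthogonal direction sees only sampling noise, which is why the worst-case (maximizing) projection is interpretable.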
D
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs. (Footnote: in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervise...
Figure 1: Image reconstruction using $\beta$-TCVAE (Figure 1b) and DS-VAE (Figure 1d). DS-VAE is able to take the blurry output of the underlying $\beta$-TCVAE model and learn to render a much better approximation to the target (Figure 1a). Figure 1c shows the effect of perturbing $Z$. DS-VA...
We introduce the DS-VAE framework for learning DR without compromising on the reconstruction quality. DS-VAE can be seamlessly applied to existing DGM-based DR learning methods, therefore, allowing them to learn a complete representation of the data.
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
C
Exploration based on previous experiments and graph theory revealed errors in structural computers that use electricity as a medium. The cause of these errors is a basic property of electric charge: it flows from high potential to low. In short, the direction of current, which is the flow of electricity, is determined only...
The structure-based computer mentioned in this paper are based on Boolean Algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic 1 and 0, and mathematically describes digital electrical si...
Unlike these characteristics of electricity, however, light is so structure-dependent that there is geometrical optics, the study of the placement of media and of light trajectories determined by their shape, trajectories that are straight by Fermat's principle of least time. Thus, to address errors in electricity, structural computers will be us...
Optical logic aggregates can be designed in the same way as in "Implementation of Structural Computer Using Mirrors and Translucent Mirrors"; for convenience of expression and for exploring mathematical properties (especially their association with matrices), the numbering shown in Fig. 5 can be applied to the ...
Initially, the Koopman operator framework was used extensively for dynamics over real (or complex) state spaces, where the function space is infinite-dimensional; this leads to resorting to finite-dimensional numerical approximations of the Koopman operator [28, 29] for practical computations. In our setting of dynamica...
A finite field, by definition, is a finite set, and the set of all permutation polynomials over the finite field forms a group under composition. Given a finite subset of such permutations, we can compute the group generated by this set. In this paper, we propose a representation of such a group using the concept of lin...
Given a group G of permutations over a finite set, the (group) representation represents the group action in terms of invertible matrices over a finite-dimensional vector space, and the group operation is replaced by matrix multiplication. Such representations are imperative in studying abstract groups as it...
This paper defines a linear representation of polynomial maps F over finite fields 𝔽 as matrices M over 𝔽 of smallest size N. The number N is defined as the Linear Complexity of F over 𝔽. Th...
A finite group, G_F, can be generated from F_i using composition as the group operation. In this section, we devise a procedure to compute the linear representation of the gro...
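Generating the group from a finite set of permutations amounts to closing the set under composition (finiteness guarantees termination and supplies inverses). A small illustrative sketch with tuple-encoded permutations; the names are invented here:

```python
from itertools import product

def compose(p, q):
    """Composition of permutations given as tuples: (p∘q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def generated_group(generators):
    """Closure of a generating set under composition (finite, so this halts)."""
    group = set(generators)
    frontier = set(generators)
    while frontier:
        new = {compose(p, q) for p, q in product(group, frontier)} | \
              {compose(q, p) for p, q in product(group, frontier)}
        frontier = new - group
        group |= frontier
    return group

# Two generators of S_3 acting on {0, 1, 2}: a transposition and a 3-cycle.
G = generated_group([(1, 0, 2), (1, 2, 0)])
print(len(G))  # 6, the full symmetric group S_3
```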
Since feature selection often comes at a cost in terms of stability (Xu et al., 2012), it is to be expected that view selection stability (Φ̂) is higher for meta-learners that select more views. The results of two meta-learners do not align with this pattern, name...
The results for the breast cancer data can be observed in Table 3. The interpolating predictor and the lasso are the best performing meta-learners in terms of all three classification measures, with the interpolating predictor having higher test accuracy and H, and the lasso having higher AUC. However, the interpolatin...
The true positive rate in view selection for each of the meta-learners can be observed in Figure 2. Ignoring the interpolating predictor for now, nonnegative ridge regression has the highest TPR, which is unsurprising seeing as it performs feature selection only through its nonnegativity constraints. Nonnegative ridge...
The results of applying MVS with the seven different meta-learners to the colitis data can be observed in Table 2. In terms of raw test accuracy the nonnegative lasso is the best performing meta-learner, followed by the nonnegative elastic net and the nonnegative adaptive lasso. In terms of AUC and H, the best performi...
We evaluate the performance of anomaly detection methods with two commonly used metrics: the Area under the Receiver Operating Characteristic Curve (ROC AUC) [74] and Average Precision (AP) [73]. ROC AUC measures the overall performance of the method, ranging from 0 to 1, where a value of 1 indicates perfect performan...
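ROC AUC admits a simple rank-statistic form: it equals the probability that a randomly chosen anomaly is scored above a randomly chosen normal point (ties counting one half). A small pure-Python sketch, independent of any particular library:

```python
def roc_auc(labels, scores):
    """ROC AUC as the Wilcoxon-Mann-Whitney statistic: the probability that
    a randomly chosen positive is scored above a randomly chosen negative,
    with ties counting one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.6]
print(roc_auc(labels, scores))  # 1.0, every anomaly outscores every normal
```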
The experimental results (ROC AUC and AP) of the five relevant variable selection techniques are shown in Figure 3. For each technique, its 25 results (each is the average results over the 32 datasets) are presented with a violin plot overlaid by a dot plot. For the dot plot, each black dot corresponds to a result. Fo...
The synthetic datasets used in our experiments are generated using the ARTH150 benchmark Bayesian network from the Bnlearn repository [88]. Each dataset comprises 5000 objects and 107 variables, known as original variables, with Gaussian distributions whose means depend linearly on their parent variables in the Bayesi...
For each dataset, we conduct repeated experiments to ensure robustness. If the ratio of anomalies to the total number of objects in the dataset is greater than 1%, we randomly sample 1% of the total number of objects from the anomalous class as anomalies. This sampling process is repeated 20...
We follow the common process to obtain ground truth labels. When a dataset consists of two classes, we designate the majority class as the normal class and the minority class as the anomalous class. For datasets with multiple classes of imbalanced sizes, we select one or a few minority classes as anomaly class(es).
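The labeling-and-subsampling protocol described above might be sketched as follows; the function name, variable names, and exact API are illustrative assumptions, not the authors' code:

```python
import numpy as np

def sample_anomalies(y, anomaly_class, rate=0.01, seed=0):
    """Keep all normal objects and subsample the anomalous class down to
    `rate` of the total dataset size, only when anomalies exceed that rate."""
    rng = np.random.default_rng(seed)
    normal_idx = np.flatnonzero(y != anomaly_class)
    anom_idx = np.flatnonzero(y == anomaly_class)
    n_anom = int(rate * len(y))
    if len(anom_idx) > n_anom:
        anom_idx = rng.choice(anom_idx, size=n_anom, replace=False)
    return np.sort(np.concatenate([normal_idx, anom_idx]))

y = np.array([0] * 990 + [1] * 100)   # 100 anomalies among 1090 objects
idx = sample_anomalies(y, anomaly_class=1, rate=0.01, seed=0)
print(len(idx))  # 990 normals + 10 sampled anomalies = 1000
```

Repeating this with different seeds yields the repeated experiments described in the text.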
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a m...
In this work, we proposed an optimistic algorithm for learning under the MNL contextual bandit framework. Using techniques from Faury et al. [2020], we developed an improved technical analysis to deal with the non-linear nature of the MNL reward function. As a result, the leading term in our regret bound does not suffe...
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of...
Comparison with Faury et al. [2020] Faury et al. [2020] use a bonus term for optimization in each round, and their algorithm performs non-trivial projections on the admissible log-odds. While we do reuse the Bernstein-style concentration inequality as proposed by them, their results do not seem to extend directly to th...
CB-MNL enforces optimism via an optimistic parameter search (e.g. in Abbasi-Yadkori et al. [2011]), which is in contrast to the use of an exploration bonus as seen in Faury et al. [2020], Filippi et al. [2010]. Optimistic parameter search provides a cleaner description of the learning strategy. In non-linear reward mo...
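The optimism principle itself can be illustrated with the classic UCB1 algorithm on Bernoulli arms. This is a generic sketch of optimism in the face of uncertainty, not the CB-MNL algorithm, its confidence sets, or the MNL reward model:

```python
import math, random

def ucb1(means, horizon, seed=0):
    """Generic UCB1 on Bernoulli arms: play the arm whose optimistic index
    (empirical mean plus confidence radius) is largest."""
    random.seed(seed)
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                          # play each arm once
        else:
            arm = max(range(k), key=lambda i:
                      sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]))
        counts[arm] += 1
        sums[arm] += 1.0 if random.random() < means[arm] else 0.0
    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=5000)
print(counts.index(max(counts)))  # the best arm (index 2) dominates the plays
```

Optimistic parameter search replaces the explicit bonus with a maximization over a confidence set of parameters, but the incentive structure is the same: uncertainty inflates the value of under-explored actions.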
The training batch size is 32 for both datasets. We train 10 epochs at learning rate 0.00005 for THUMOS and 15 epochs at learning rate 0.0001 for ActivityNet. We directly predict the 20 action categories for THUMOS; we conduct binary classification and then fuse our prediction scores with video-level classification sc...
We compare the performance of our proposed VSGN to recent representative methods in the literature on the two datasets in Table 1 and Table 2, respectively. On both datasets, VSGN achieves state-of-the-art performance, reaching mAP 52.4% at tIoU 0.5 on THUMOS and average mAP 35.07% on ActivityNet. It significantly outp...
We compare the inference time of different methods on the ActivityNet validation set on a 1080ti GPU in Table 8. Compared to end-to-end frameworks such as PBRNet, the methods using pre-extracted features such as BMN, G-TAD and VSGN can re-use the features extracted for other tasks, and these methods do not introduce c...
Table 2: Action localization results on validation set of ActivityNet-v1.3, measured by mAPs (%) at different tIoU thresholds and the average mAP. Our VSGN achieves the state-of-the-art average mAP and the highest mAP for short actions. Note that our VSGN, which uses pre-extracted features without further finetuning, s...
3) VSGN shows obvious improvement on short actions over other concurrent methods, and also achieves new state-of-the-art overall performance. On THUMOS-14, VSGN reaches 52.4% mAP@0.5, compared to previous best score 40.4% under the same features. On ActivityNet-v1.3, VSGN reaches an average mAP of 35.07%, compared to t...
In another example, a medical expert might focus more on eliminating false-negative predictions (e.g., a patient who is actually ill but predicted as healthy) than false-positives, even at the cost of more of the latter. However, this trade-off is necessary when a person's life is at stake.
The tool consists of eight main interactive visualization panels (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization): (a) data sets and validation metrics (→ G1), (b) process tracker and algorithms/models selector (→ G3), (c) overall performance for e...
Exploration and Selection of Algorithms and Models. Similar to the workflow described in Section 4, we start by setting the most appropriate validation metrics for the imbalanced data set (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(a)). The projection in Figure 5(a)...
The available metrics are divided into two groups: balanced data sets (→ accuracy, precision, recall, and f1-score) and imbalanced data sets (→ g-mean, ROC AUC, log loss, and MCC). For the initialization of VisEvol, the user should direct his/her attention to the top-left panel shown in VisEvo...
Support for (1) selecting proper validation metrics for balanced and imbalanced data sets and (2) directing the experts’ attention to different classes for the given problem constitute two of the critical open challenges in ML. For instance, accuracy is preferred to the g-mean metric for a balanced data set [BDA13].
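The contrast between accuracy and g-mean on imbalanced data can be made concrete. Here g-mean is the usual geometric mean of sensitivity and specificity, and the confusion-matrix counts are invented for illustration:

```python
import math

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def g_mean(tp, tn, fp, fn):
    """Geometric mean of sensitivity and specificity, robust to imbalance."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)

# Imbalanced data: 95 negatives, 5 positives; classifier predicts all negative.
tp, tn, fp, fn = 0, 95, 0, 5
print(accuracy(tp, tn, fp, fn))  # 0.95, looks great
print(g_mean(tp, tn, fp, fn))    # 0.0, exposes the useless classifier
```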
Markov chain synthesis has garnered attention from various disciplines, including physics, systems theory, computer science, and numerous other fields of science and engineering. This attention is particularly notable within the context of Markov chain Monte Carlo (MCMC) algorithms [1, 2, 3].
Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transi...
This algorithm treats the spatial distribution of swarm agents, called the density distribution, as a probability distribution and employs the Metropolis-Hastings (M-H) algorithm to synthesize a Markov chain that guides the density distribution toward a desired state. The probabilistic guidance algorithm led to the dev...
The fundamental idea underlying MCMC algorithms is to synthesize a Markov chain that converges to a specified steady-state distribution. Random sampling of a large state space while adhering to a predefined probability distribution is the predominant use of MCMC algorithms.
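A minimal Metropolis-Hastings sampler over a small discrete state space illustrates this idea. This is a textbook sketch with a uniform symmetric proposal, not the probabilistic-guidance synthesis algorithm itself:

```python
import numpy as np

def metropolis_hastings(target, n_states, steps, seed=0):
    """Sample a discrete distribution with M-H: propose a uniform random state,
    accept with probability min(1, target[proposal] / target[current])."""
    rng = np.random.default_rng(seed)
    x = 0
    counts = np.zeros(n_states)
    for _ in range(steps):
        proposal = rng.integers(n_states)
        if rng.random() < min(1.0, target[proposal] / target[x]):
            x = proposal
        counts[x] += 1
    return counts / steps

target = np.array([0.1, 0.2, 0.3, 0.4])   # desired steady-state distribution
freq = metropolis_hastings(target, 4, steps=50_000)
print(np.round(freq, 2))                   # close to [0.1, 0.2, 0.3, 0.4]
```

The empirical visit frequencies converge to the target distribution, which is exactly the synthesis property the paragraph describes.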
Moreover, a complex communication architecture is not required for the estimation of the distribution. By presenting numerical evidence within the context of the probabilistic swarm guidance problem, we demonstrate that the convergence rate of the swarm distribution to the desired steady-state distribution is substantially f...
Although multi-matchings obtained by synchronisation procedures are cycle-consistent, the matchings are often spatially non-smooth and noisy, as we illustrate in Sec. 5. From a theoretical point of view, the most appropriate approach for addressing multi-shape matching is based on a unified formulation, where cycle con...
In this work we fill this gap by introducing a generalisation of state-of-the-art isometric two-shape matching approaches towards isometric multi-shape matching. We demonstrate that explicitly exploiting the isometry property leads to a natural and elegant formulation that achieves improved results compared to previous...
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both ...
A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisati...
It was shown that deep learning is an extremely powerful approach for extracting shape correspondences [40, 27, 59, 26]. However, the focus of this work is on establishing a fundamental optimisation problem formulation for cycle-consistent isometric multi-shape matching. As such, this work does not focus on learning me...
A graph G is a chordal graph if and only if there exists a tree T, called a clique tree, with vertex set 𝐂 such that, for every v ∈ V, T(𝐂_v) is a tree ...
We present the algorithm RecognizePG. Note that it is an implementation of Theorem 6 with very small changes. W.l.o.g., we assume that G is connected; indeed, a graph G is a path graph if and only if all its connected components are path graphs. Moreover, we can obtain the clique path tree of G...
The tree T of the previous theorem is called the clique path tree of G if G is a path graph, or the directed clique path tree of G if G is a directed path graph. In Figure 1, the left part shows a path graph G, and on the right there is a clique path tree...
The main goal of our paper is: given a graph G, find a (directed) clique path tree of G or say that G is not a (directed) path graph. To reach our goal, we follow the same approach as in [18], by recursively decomposing G by clique separators.
If there exists a polynomial algorithm that tests whether a graph G is a path graph and returns a clique path tree of G when the answer is “yes”, then there exists an algorithm with the same complexity to test whether a graph is a directed path graph.
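Chordality, the property underlying clique trees, can be tested via maximum cardinality search followed by a perfect-elimination-ordering check (the Tarjan-Yannakakis method). This is a standard textbook sketch for small graphs; it does not construct the clique path tree of the paper:

```python
def is_chordal(adj):
    """Chordality test: maximum cardinality search yields a perfect
    elimination ordering iff the graph is chordal (Tarjan & Yannakakis)."""
    n = len(adj)
    weight = [0] * n
    order, numbered = [], set()
    for _ in range(n):                          # maximum cardinality search
        v = max((u for u in range(n) if u not in numbered),
                key=lambda u: weight[u])
        order.append(v)
        numbered.add(v)
        for u in adj[v]:
            if u not in numbered:
                weight[u] += 1
    order.reverse()                             # candidate elimination ordering
    pos = {v: i for i, v in enumerate(order)}
    for i, v in enumerate(order):               # verify it is a PEO
        later = [u for u in adj[v] if pos[u] > i]
        if later:
            w = min(later, key=lambda u: pos[u])
            if any(u != w and u not in adj[w] for u in later):
                return False
    return True

# A 4-cycle 0-1-2-3 has no chord, so it is not chordal; adding one fixes that.
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_chordal(cycle4))          # False
cycle4[0].add(2); cycle4[2].add(0)
print(is_chordal(cycle4))          # True
```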
We study how the purity of mixed nodes under different settings affects the performances of these overlapping community detection methods in sub-experiments 1(e) and 1(f). Fix (n_0, ρ) = (100, 0.1), and l...
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE, while both methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha...
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as ρ increases they all perform worse. Under the DCMM model, the mixed Hamming ...
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly, and both approaches perform better than OCCAM and GeoNMF under the MMSB setting....
For any functional F : ℳ → ℝ, we let grad F denote the functional gradient of F with respect to the Riemannian metric g.
Here the statistical error is incurred in estimating the Wasserstein gradient by solving the dual maximization problem using functions in a reproducing kernel Hilbert space (RKHS) with finite data, which converges sublinearly to zero as the number of particles goes to infinity. Therefore, in this scenario, variational ...
To study optimization problems on the space of probability measures, we first introduce the background knowledge of the Riemannian manifold and the Wasserstein space. In addition, to analyze the statistical estimation problem that arises in estimating the Wasserstein gradient, we introduce the reproducing kernel Hilber...
Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to certain statistical...
we prove that variational transport constructs a sequence of probability distributions that converges linearly to the global minimizer of the objective functional up to a statistical error due to estimating the Wasserstein gradient with finite particles. Moreover, such a statistical error converges to zero as the numbe...
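The linear-convergence claim rests on the PL (Polyak-Lojasiewicz) condition. As a finite-dimensional analogue, purely for illustration and not the variational transport algorithm itself, gradient descent on a smooth function satisfying the PL inequality converges geometrically:

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * x^T A x, which satisfies the
# Polyak-Lojasiewicz inequality with mu = smallest eigenvalue of A.
A = np.diag([1.0, 4.0])          # L = 4 (smoothness), mu = 1 (PL constant)
x = np.array([1.0, 1.0])
step = 1.0 / 4.0                 # step size 1/L
values = []
for _ in range(50):
    values.append(0.5 * x @ A @ x)
    x = x - step * (A @ x)       # gradient step

# Linear (geometric) convergence: f(x_k) <= (1 - mu/L)^k * f(x_0).
rate = 1 - 1.0 / 4.0
assert all(values[k] <= rate ** k * values[0] + 1e-12 for k in range(50))
print(values[10] / values[0])    # far below (3/4)^10, i.e. below about 0.056
```

In the functional setting of the paper the same geometric decay holds only up to the statistical error of the estimated Wasserstein gradient.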
The overall training algorithm is provided in Alg. 1. In our experiments, we train the policy and the mVAE using different replay buffers: ℬ^π only collects recent trajectories since we use on-policy RL algorithms, and for th...
Before formulating the problem, we first design the learning paradigm by analyzing the characteristics of traffic signal control (TSC). Due to the coordination among different signals, the most direct paradigm may be centralized learning. However, the global state information in TSC is not only highly redundant a...
To make the policy transferable, traffic signal control is also modeled as a meta-learning problem in [14, 49, 36]. Specifically, the method in [14] performs meta-learning on multiple independent MDPs and ignores the influences of neighbor agents. A data augmentation method is proposed in [49] to generate diverse traf...
In this paper, we propose a novel Meta RL method MetaVIM for multi-intersection traffic signal control, which can make the policy learned from a training scenario generalizable to new unseen scenarios. MetaVIM learns the decentralized policy for each intersection which considers neighbor information in a latent way. W...
We conduct the experiments on CityFlow [20], a city-level open-source simulation platform for traffic signal control. The simulator is used as the environment to provide state for traffic signal control, the agents execute actions by changing the phase of traffic lights, and the simulator returns feedback. Specifical...
x_{k+1} = x_k − J(x_k)† f(x_k)   for k = 0, 1, …
Input: matrix A ∈ ℂ^{m×n} with m < n, vector 𝐛 ∈ ℂ^{m}...
denoted by ℝ^n and ℂ^n respectively. The vector space of m×n complex matrices including real matrices
is holomorphic, where the domain Ω is an open subset of ℂ^n or ℝ^n.
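The iteration x_{k+1} = x_k − J(x_k)† f(x_k) can be sketched with numpy's Moore-Penrose pseudoinverse for an underdetermined system (m < n). The example system here is invented purely for illustration:

```python
import numpy as np

def newton_pinv(f, jac, x0, iters=20):
    """Newton-type iteration x_{k+1} = x_k - J(x_k)^† f(x_k) using the
    Moore-Penrose pseudoinverse, suitable when J is m x n with m < n."""
    x = x0.astype(float)
    for _ in range(iters):
        x = x - np.linalg.pinv(jac(x)) @ f(x)
    return x

# Underdetermined system: one equation, two unknowns: x^2 + y^2 - 4 = 0.
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]]])
root = newton_pinv(f, jac, np.array([1.0, 1.0]))
print(np.round(f(root), 8))  # residual ~ 0: a point on the circle of radius 2
```

Because the system is underdetermined, the iteration converges to one of infinitely many solutions, the one reachable from the chosen starting point.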
In this section, we present an experimental evaluation of the performance of our algorithms (the code on which the experiments are based is available at https://github.com/shahink84/BinPackingPredictions). Specifically, in Section 6.1 we describe the benchmarks and the input generation model; in Section 6.2, we expan...
publicly available benchmarks, such as the BPPLIB benchmarks (?), but also on distributions studied specifically in the context of offline bin packing, such as the Weibull distribution (?). The results show that our algorithms outperform the known efficient algorithms without any predictions. We also evaluate a heurist...
In this work, we focus on the online variant of bin packing, in which the set of items is not known in advance but is rather revealed in the form of a sequence. Upon the arrival of a new item, the online algorithm must either place it into one of the currently open bins, as long as this action does not violate the bin...
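The online placement rule described above can be made concrete with the classic First Fit heuristic, a standard baseline for online bin packing (shown here as a generic sketch, not necessarily the algorithm of this paper):

```python
def first_fit(items, capacity=1.0):
    """Classic online bin-packing heuristic: place each arriving item into the
    first open bin that still has room; open a new bin otherwise."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])      # no open bin fits: open a new one
    return bins

bins = first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1])
print(len(bins))  # 4 bins for this arrival sequence
```

Note the irrevocability: once an item is placed, it is never moved, which is exactly what separates the online variant from the offline problem.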
Several benchmarks have been used in previous work on offline bin packing; we refer to the discussion by (?) for a list of related work. Many of these previous benchmarks typically rely on either uniform or normal distributions. There are two important issues to take into account. First, such simple distributions are ...
the objective is to assign the items to the minimum number of bins so that the sum of item sizes in each bin does not exceed the bin capacity. Bin packing is instrumental in modelling resource allocation problems such as load balancing and scheduling (?), and has many applications in areas such as supply chain manageme...
Hypernetworks (Ha et al., 2016) are defined as neural models that generate weights for a separate target network solving a specific task. The authors aim to reduce the number of trainable parameters by designing a hypernetwork with fewer parameters than the original network. Making an analogy between hypernetworks a...
To mitigate the issue of the discrete atlas, we define the Continuous Atlas, a novel paradigm for meshing any object with an atlas, which is leveraged in our method. In the first step, we construct a mapping that models the local structure of the object S. By Continuous Atlas (𝒞𝒜) ...
In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Thro...
In this section, we introduce the Continuous Atlas, a novel paradigm for creating meshes from patches. We point out the limitations of current approaches based on Discrete Atlas representations and show how we overcome them using our model.
We present Locally Conditioned Atlas (LoCondA), a framework for generating and reconstructing meshes of objects using an atlas of localized charts that leverage the introduced notion of the continuous atlas. It consists of two parts. Firstly, we map the target object into a known prior distribution (training) or sample...
max_i [ max { k ∈ 𝒩 : ∃ x ∈ H_i^x s.t. x_k ≠ 0, y_k ≠ 0 } ] ≤ ⌊(T − 2t)/(t + ρτ)⌋ + 2.
The previous lemma gives us an understanding of how quickly we can approximate the solution. In particular, in the coordinates that can be non-zero we can attain a value that exactly coincides with the solution, but in the zero coordinates this is impossible. It remains to understand what the solution even looks like....
This fact leads to the main idea of the proof. At the initial moment of time T = 0, we have all zero coordinates in the global output, since the starting points x_0, y_0 ar...
If B_ρ ≠ ∅, in the global output of any procedure that satisfies Assumption 4.1, after T units of time, only the first k = ⌊(T − 2t)/(t + ρτ)⌋ + 2 ...
The global function f(x, y) in (66) is exactly an example of a bilinear function from the work on non-distributed lower bounds for the strongly convex-strongly concave case [60], but with strong convexity and strong concavity constants μ ∼ ε/R²...
And from the bijection we can deduce that ∩(T_w) < ∩(G_w ∧ T_s) for so...
The study of cycles of graphs has attracted attention for many years. To mention just three well-known results, consider Veblen's theorem [2], which characterizes graphs whose edges can be written as a disjoint union of cycles; Maclane's planarity criterion [3], which states that planar graphs are the only ones to admit a 2-ba...
necessarily complete) G = (V, E) that admits a star spanning tree T_s. In the first part we present a formula to calculate ∩(T_s)...
In this section we present some experimental results to reinforce Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g...
The remainder of this section is dedicated to express the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describe an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i...
Fix a simplicial complex K, a value δ ∈ (0, 1], and integers b ≥ 1 and m > μ(K). If ℱ is a sufficiently large (K, b)-free cover such that π_m(ℱ) ≥ δ·(|ℱ| choose m)...
Through a series of papers [18, 35, 22], the Helly numbers, Radon numbers, and fractional Helly numbers for (⌈d/2⌉, b)-covers in ℝ^d were bounded in terms of d and...
One immediate application of Theorem 1.2 is the reduction of fractional Helly numbers. For instance, it easily improves a theorem of Patáková [35, Theorem 2.3] (that theorem was not phrased in terms of (K, b)-free covers but readily generalizes to that setting; see Section 1.4.1) in...
Note that the constant number of points given by the (p, q)-theorem in this case depends not only on p, q, and d, but also on b. For the setting of (1, b)-covers in surfaces (by a surface we mean a compact 2-dimensional ...
It is known that the Helly number of a (K, b)-free cover is bounded from above in terms of K and b [18] (this follows directly from a combination of Proposition 30 and Lemma 26 in [18]), as is the Radon number [35, Proposit...
G4: Generation of new features and comparison with the original features. With the same statistical evidence as defined in G3, users should get visually informed about strongly correlated features that perform the same for each class. Next, the tool can use automatic feature selection techniques to compare the new feat...
Figure 1: Selecting important features, transforming them, and generating new features with FeatureEnVi: (a) the horizontal beeswarm plot for manually slicing the data space (which is sorted by predicted probabilities) and continuously checking the migration of data instances throughout the process; (b) the table heat...
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train the XGBoost [29] on the original pool of features, and then divide the data space into four groups automati...
T5: Evaluate the results of the feature engineering process. At any stage of the feature engineering process (T2–T4), a user should be able to observe the fluctuations in performance with the use of standard validation metrics (e.g., accuracy, precision, and recall) [32]. Also, users could possibly want to refer to the...
G5: Reassessment of the instances’ predicted probabilities and performance, computed with appropriate validation metrics. In the end, users’ interactions should be tracked in order to preserve a history of modifications in the features, and the performance should be monitored with validation metrics (T5). At all stages...
We set the mean functions as μ^(j) = 0, j = 0, 1, 2 [21]. However, if we are given some prior information on the shape and structure of g_j...
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi...
For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using a Latin hypercube design of experiments. The BO progress is shown in Figure 5, right panel, for the optimization with constraints on the jerk and on the tracking error. Af...
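A Latin hypercube design like the one used for initialization can be sketched as follows. This is a generic implementation; the parameter ranges below are hypothetical placeholders, not the paper's actual MPC bounds:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube design: split each parameter range into n_samples
    equal strata, draw one point per stratum, and permute strata per dimension."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # one uniform draw inside each of the n strata, per dimension
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):  # decouple the dimensions with independent permutations
        u[:, j] = u[rng.permutation(n_samples), j]
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# e.g. 20 initial samples over two hypothetical parameter ranges
samples = latin_hypercube(20, bounds=[(0.1, 10.0), (1e-4, 1e-1)])
print(samples.shape)  # (20, 2)
```

Stratifying each dimension guarantees that even 20 samples spread across the whole range of every parameter, which is why the design is popular for initializing GP surrogates.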
We use two geometries to evaluate the performance of the proposed approach: an octagon geometry with edges in multiple orientations with respect to the two axes, and a curved geometry (an infinity shape) with different curvatures, shown in Figure 4. We have implemented the simulations in Matlab, using Yalmip/Gurobi to so...
This paper demonstrated a hierarchical contour control implementation for increasing productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low-level controller. This control framework requires tuning of multiple parameters associated with an extensive numbe...
For example, in systems trained to infer hair color on the CelebA dataset [43], the majority group of non-blond males occurs 50 times more often than the minority group of blond males, resulting in systems incorrectly predicting non-blond as hair color for the minority group.
While this is a toy problem, in the real world, hidden minority patterns are common and failing on them can have dire consequences. Systems designed to aid human resources, help with medical diagnosis, determine probation, or loan qualification could be biased against minority groups based on age, gender, religion, sex...
Deep learning systems are trained to minimize their loss on a training dataset. However, datasets often contain spurious correlations and hidden biases which result in systems that have low loss on the training data distribution, but then fail to work appropriately on minority groups because they exploit and even ampli...
Methods are typically highly sensitive to hyperparameter choices, and papers report numbers on systems in which the hyperparameters were tuned using the test set distribution [18, 50, 64]. In the real world, biases may stem from multiple factors and may change in different environments, making this setup unrealistic. ...
In this set of experiments, we compare the resistance to explicit and implicit biases. We primarily focus on the Biased MNISTv1 dataset, reserving each individual variable as the explicit bias in separate runs of the explicit methods, while treating the remaining variables as implicit biases. To ease analysis, we compu...
Cross-Dataset Evaluation. We conduct four tasks, including $\mathcal{D}_{E}\rightarrow\mathcal{D}_{M}$, $\mathcal{D}_{E}\rightarrow\mathcal{D}_{D}$, ...
TABLE VI: Benchmark of cross-domain gaze estimation. ‘Source-free’ indicates whether the method requires source images during domain adaptation. ‘Target num’ presents the number of images used for domain adaptation. $\mathcal{D}_{E}$, $\mathcal{D}_{G}$, ...
The calibration problem can be considered as a domain adaptation problem, where the training set is the source domain and the test set is the target domain. The test set usually contains unseen subjects or unseen environments. Researchers aim to improve the performance in the target domain using calibration samples.
The result is shown in Tab. VI. Unsupervised domain adaptation methods are usually proposed to solve the cross-dataset problem. These methods require target images for domain adaptation. We summarize the number of required target images. The source-free column indicates whether the method requires source images durin...
Cheng et al. [72] propose a domain generalization method. They improve the cross-dataset performance without knowing the target dataset or touching any new samples. They propose a self-adversarial framework to remove the gaze-irrelevant features in face images. Cui et al. define a new adaptation problem [138]: adaptatio...
Experiments are carried out on the Real-world Masked Face Recognition Dataset (RMFRD) and the Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (...
has been successfully employed for image classification tasks krizhevsky2017imagenet . This deep model is pre-trained on a few million images from the ImageNet database and comprises eight learned layers: five convolutional layers and three fully-connected layers. The last fully-connected layer classifies one tho...
simonyan2014very is trained on the ImageNet dataset which has over 14 million images and 1000 classes. Its name VGG-16 comes from the fact that it has 16 layers. It contains different layers including convolutional layers, Max Pooling layers, Activation layers, and Fully Connected (fc) layers. There are 13 convolution...
Despite the recent breakthroughs of deep learning architectures in pattern recognition tasks, they must estimate millions of parameters in the fully connected layers, which requires powerful hardware with high processing capacity and memory. To address this problem, we present in this paper an efficient quantization b...
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in almabdy2019deep and achieved a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner. It consists of applying a TL technique to fine-tune the ...
recursive definitions of the form $y \leftarrow f~\overline{i}~\overline{x} = P_{f}(\overline{i},\overline{x},y)$
There are eight kinds of processes: two for the structural rules (identity and cut), one for each combination of type polarity (positive or negative) and rule type (left or right), one for definition calls, and one for unreachable code. {defi}[Process]
The first two kinds of processes correspond to the identity and cut rules. Values $V$ and continuations $K$ are specified on a per-type-and-rule basis in the following two tables. Note the address variable $x$ distinguished by each rule.
The first rule for $\to$ corresponds to the identity rule and copies the contents of one cell into another. The second rule, which is for cut, models computing with futures [Hal85]: it allocates a new cell to be populated by the newly spawned $P$. Concurrently, $Q$ may read from said new cell, which...
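The future-based reading of cut can be loosely mimicked with standard library futures; this Python sketch is only an analogy to the process semantics, not a faithful encoding:

```python
# Rough analogue of the cut rule's future-based semantics: "cut" spawns the
# producer P in parallel and hands the consumer Q a read capability for the
# freshly allocated cell; Q blocks only when it actually reads the cell.
from concurrent.futures import ThreadPoolExecutor

def cut(producer, consumer):
    with ThreadPoolExecutor(max_workers=1) as pool:
        cell = pool.submit(producer)    # allocate the cell, spawn P
        return consumer(cell.result)    # Q gets a blocking read capability

result = cut(lambda: 21 * 2, lambda read: read())
print(result)  # -> 42
```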
where the arithmetic variables in $\mathcal{V}$ are free in the constraints (arithmetic formulas) in $\mathcal{C}$, the types in $\Gamma$, the process $P$, and the type $C$; moreover, the address variables in $\Gamma$, which are free in $P$, stand for ...
In this section, we bring forward two cloud media sharing schemes, namely FairCMS-I and FairCMS-II. FairCMS-I essentially delegates the re-encryption management of LUTs to the cloud, thus significantly reducing the overhead of the owner side. Nevertheless, FairCMS-I cannot achieve IND-CPA security for the media conten...
Thirdly, there are also studies that deal with both privacy-protected access control and traitor tracing. Xia et al. [26] introduced the watermarking technique to privacy-protected content-based ciphertext image retrieval in the cloud, which can prevent the user from illegally distributing the retrieved images. However...
In the user-side embedding AFP, since the encrypted media content shared with different users is the same, the encryption of the media content is only executed once. In contrast, due to the personalization of D-LUTs, once a new user initiates a request, the owner must interact with this user to securely distribute the ...
The owner-side efficiency and scalability performance of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Compared to FairCMS-I, it is easy to see that in FairCMS-II the cloud’s overhead is increased considerably due to the ado...
This is particularly important because, as we mentioned in Section IV-B, only in this way will an increase in the number of users not overload the owner with costs, which is a major problem faced in the AFP scheme. In other words, the owner can achieve significant cost savings by renting the cloud’s resources, there...
Graph Neural Networks (GNNs) Kipf and Welling (2017); Hamilton et al. (2017); Veličković et al. (2018) have recently emerged as an effective class of models for capturing high-order relationships between nodes in a graph and have achieved state-of-the-art results on a variety of tasks such as computer vision...
In addition to not being able to effectively capture higher-order feature interactions, FM is also suboptimal because it considers the interactions between every pair of features, even if some of these interactions may not be beneficial for prediction Zhang et al. (2016); Su et al. (2020). These unhelpful feature inter...
At their core, GNNs learn node embeddings by iteratively aggregating features from the neighboring nodes, layer by layer. This allows them to explicitly encode high-order relationships between nodes in the embeddings. GNNs have shown great potential for modeling high-order feature interactions for click-through rate pr...
The high-order relations between nodes can be modeled explicitly by stacking layers. Gated Graph Neural Networks (GGNN) Li et al. (2015) uses GRU Cho et al. (2014) to update the node representations based on the aggregated neighborhood feature information.
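The layer-by-layer aggregation described above can be sketched minimally; the mean aggregator and dictionary representation below are illustrative simplifications (real models such as GGNN add learned weights and gated updates):

```python
# Minimal sketch of one GNN aggregation layer: each node's new embedding is
# the element-wise mean over its own feature vector and those of its
# neighbors. Stacking such layers propagates information over longer paths,
# which is how high-order relations are encoded.

def aggregate(features, adjacency):
    """features: node -> vector; adjacency: node -> list of neighbors."""
    new_features = {}
    for node, neigh in adjacency.items():
        vecs = [features[node]] + [features[n] for n in neigh]
        dim = len(features[node])
        new_features[node] = [sum(v[i] for v in vecs) / len(vecs)
                              for i in range(dim)]
    return new_features

feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
adj = {0: [1, 2], 1: [0], 2: [0]}
layer1 = aggregate(feats, adj)  # node 0 now mixes features of nodes 1 and 2
```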
It first proposes to connect all the feature fields, and thus the multi-field features can be treated as a fully-connected graph. Then it utilizes GGNN Li et al. (2015) to model high-order feature interactions on the feature graph. KD-DAGFM Tian et al. (2023) uses knowledge distillation and proposes a lightweight stude...
$$f(\mathbf{y}) - f(\mathbf{x}) - \left\langle \nabla f(\mathbf{x}), \mathbf{y}-\mathbf{x}\right\rangle \geq \mu_{f}^{\mathcal{L}_{0}}\,\omega_{\nu}(-d_{\nu}(\mathbf{x},\mathbf{y}))\left\|\mathbf{x}-\mathbf{y}\right\|^{2}.$$
One of the quantities that we have used in the proof of Theorem 2.4 is $L_{f}^{\mathcal{L}_{0}}$. Note that the function $f$...
we have that, as the backtracking line search makes monotonic primal progress, for $t\geq 0$ we will have $\mathbf{x}_{t}\in\mathcal{L}_{0}$. As the f...
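For concreteness, here is a generic Armijo backtracking step of the kind referred to (a standard textbook sketch, not the paper's exact procedure); its sufficient-decrease test is what guarantees monotonic primal progress, and hence that all iterates stay in the initial level set:

```python
# Generic Armijo backtracking: halve the step size until the sufficient
# decrease condition f(x + t d) <= f(x) + c * t * <grad f(x), d> holds.

def backtracking_step(f, grad_dot_dir, x, direction, t0=1.0, beta=0.5, c=1e-4):
    t, fx = t0, f(x)
    while f([xi + t * di for xi, di in zip(x, direction)]) > fx + c * t * grad_dot_dir:
        t *= beta
    return t

f = lambda v: v[0] ** 2               # simple 1-D quadratic
x, d = [1.0], [-2.0]                  # d = -grad f(x)
t = backtracking_step(f, -4.0, x, d)  # grad.d = 2 * 1 * (-2) = -4
```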
Complexity comparison: number of iterations needed to reach a solution with $h(\mathbf{x})$ below $\varepsilon$ for Problem 1.1 for Frank-Wolfe-type algorithms in the literature. The asterisk on FW-LLOO highlights the fact that the procedure is different from the standard LMO procedur...
When we compare the bounds obtained from local strong convexity in (2.16) and that obtained directly from generalized self-concordance in (2.18), we can see that the former is tighter than the latter, albeit local. For this reason, we have used the former bound in the proof of Theorem 2.11.
Maximum Matching is one of the most fundamental problems in combinatorial optimization and has been extensively studied in the classic centralized model of computation for almost half a century. We refer to [Sch03] for an overview. In particular, several exact polynomial-time deterministic maximum matching algorithms a...
This model is not only interesting for massive data sets but also whenever there is no random access to the input, for instance, if the input is only defined implicitly. Moreover, many insights and techniques from this model naturally carry over to a variety of areas in theoretical computer science, including communica...
For massive graphs the classical matching algorithms are not only prohibitively slow, but also space complexity becomes a concern. If a graph is too large to fit into the memory of a single machine, all the classical algorithms—which assume random access to the input—are not applicable. This demand for a more realistic...
One that has attracted a lot of attention, especially in the past decade, is the graph stream model, which was introduced by Feigenbaum et al. [FKM+04, FKM+05, Mut05] in 2005. In this model, the edges of the graph are not stored in the memory but appear in an arbitrary (that is, adversarially determined) sequential ord...
It is known that finding an exact matching requires linear space in the size of the graph and hence it is not possible to find an exact maximum matching in the semi-streaming model [FKM+04], at least for sufficiently dense graphs. Nevertheless, this result does not apply to computing a good approximation to the maximu...
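The textbook single-pass greedy baseline behind such approximations can be sketched as follows (a generic illustration, not any specific algorithm from the works cited above); it keeps an edge iff both endpoints are still unmatched, yields a maximal matching, and therefore a 1/2-approximation in O(n) words of memory:

```python
# Single-pass greedy matching over an adversarially ordered edge stream.
# The output is maximal, hence at least half the size of a maximum matching.

def greedy_streaming_matching(edge_stream):
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

stream = [(1, 2), (2, 3), (3, 4), (5, 6)]
print(greedy_streaming_matching(stream))  # -> [(1, 2), (3, 4), (5, 6)]
```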
When $b=6$ or $k=20$, the trajectories of CPP are very close to those of exact Push-Pull/$\mathcal{AB}$, which indicates that when the compression errors are small, they are no longer the bottleneck of convergence.
To see why CPP outperforms Push-Pull/$\mathcal{AB}$, note that the vectors sent in CPP have been compressed, and hence the transmitted bits at each iteration are greatly reduced compared to Push-Pull/$\mathcal{AB}$.
Figure 3: Performance of Push-Pull/$\mathcal{AB}$, CPP, and B-CPP against the number of transmitted bits: the left column shows the results with quantization ($b=2,4,6$) and the right column shows the results with Rand-$k$ ($k=5,10,20$)...
Figure 1: Linear convergence of Push-Pull/$\mathcal{AB}$, CPP, and B-CPP with $b$-bit quantization ($b=2,4,6$) and Rand-$k$ ($k=5,10,20$) compressors.
We can see from all of the sub-figures of Fig. 3 that, to reach a high accuracy within about $10^{-15}$, the number of transmitted bits required by these methods follows the ranking B-CPP $<$ CPP $<$ Push-Pull/$\mathcal{AB}$...
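As an illustration of the kind of compressor involved, here is a minimal unbiased Rand-k sketch (the function name and scaling convention are expository assumptions, not the paper's implementation):

```python
import random

# Rand-k: keep k coordinates chosen uniformly at random and scale them by
# d/k, so that the compressor is unbiased in expectation: E[C(x)] = x.

def rand_k(x, k, seed=0):
    rng = random.Random(seed)
    d = len(x)
    kept = rng.sample(range(d), k)
    out = [0.0] * d
    for i in kept:
        out[i] = x[i] * d / k
    return out

x = [1.0, 2.0, 3.0, 4.0]
compressed = rand_k(x, 2)  # two nonzero entries, each scaled by 4/2 = 2
```

Only the k surviving coordinates (plus their indices) need to be transmitted, which is where the bit savings in Fig. 3 come from.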
One can note a branch of recent work devoted to solving non-smooth problems by reformulating them as saddle point problems [8, 9], as well as applying such approaches to image processing [10, 11]. Recently, significant attention was devoted to saddle problems in machine learning. For example, Generative Adversarial Net...
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low...
Furthermore, many personalized federated learning problems utilize a saddle point formulation, in particular Personalized Search Generative Adversarial Networks (PSGANs) [22]. As mentioned in the examples above, saddle point problems often arise as an auxiliary tool for a minimization problem. It turns out ...
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms, and derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile...
Kuhn Poker (Kuhn, 1950; Southey et al., 2009; Lanctot, 2014) is a zero-sum poker game with only two actions per player. The two-player variant is solvable with PSRO, however the three-player version benefits from JPSRO. The results in Figure 2(a) show rapid convergence to equilibrium.
We propose that (C)CEs are good candidates as meta-solvers (MSs). They are more tractable than NEs and can enable coordination to maximize payoff between cooperative agents. In particular we propose three flavours of equilibrium MSs. Firstly, greedy (such as MW(C)CE), which select highest payoff equilibria, and attempt...
Measuring convergence to NE (NE Gap, Lanctot et al. (2017)) is suitable in two-player, constant-sum games. However, it is not rich enough in cooperative settings. We propose to measure convergence to (C)CE ((C)CE Gap in Section E.4) in the full extensive form game. A gap, $\Delta$, of zero implies convergence t...
PSRO has proved to be a formidable learning algorithm in two-player, constant-sum games, and JPSRO, with (C)CE MSs, is showing promising results on n-player, general-sum games. The secret to the success of these methods seems to lie in (C)CEs ability to compress the search space of opponent policies to an expressive an...
Trade Comm is a two-player, common-payoff trading game, where players attempt to coordinate on a compatible trade. This game is difficult because it requires searching over a large number of policies to find a compatible mapping, and can easily fall into a sub-optimal equilibrium. Figure 2(b) shows a remarkable domina...
In this section, we give a clean, new characterization of the harms of adaptivity. Our goal is to bound the distribution error of a mechanism that responds to queries generated by an adaptive analyst. This bound will be achieved via a triangle inequality, by bounding both the posterior accuracy and the Bayes stability ...
Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific $q$ and $K(\cdot,v)$ as discussed in Section 6. The second part of this lemma implies that bounding the appropriate divergence is necessary and sufficient...
Since achieving posterior accuracy is relatively straightforward, guaranteeing Bayes stability is the main challenge in leveraging this theorem to achieve distribution accuracy with respect to adaptively chosen queries. The following lemma gives a useful and intuitive characterization of the quantity that the Bayes sta...
The simpler part of the argument is posterior accuracy, which we prove can be inherited directly from the sample accuracy of a mechanism. This lemma resembles Lemma 6 in Jung et al. (2020), but has the advantage of being independent of the range of the queries.
These results extend to the case where the variance (or variance proxy) of each query $q_{i}$ is bounded by a unique value $\sigma_{i}^{2}$...
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the techni...
The goal of this paper is to open up a new research direction aimed at understanding the power of preprocessing in speeding up algorithms that solve NP-hard problems exactly [26, 31]. In a nutshell, this new direction can be summarized as: how can an algorithm identify part of an optimal solution in an efficient prepro...
As the first step of our proposed research program into parameter reduction (and thereby, search space reduction) by a preprocessing phase, we present a graph decomposition for Feedback Vertex Set which can identify vertices $S$ that belong to an optimal solution; and which therefore facilitate a reduction fr...
This line of investigation opens up a host of opportunities for future research. For combinatorial problems such as Vertex Cover, Odd Cycle Transversal, and Directed Feedback Vertex Set, which kinds of substructures in inputs allow parts of an optimal solution to be identified by an efficient preprocessing phase? Is i...
To the best of our knowledge, there are only a few deep learning methods [172, 198, 194] for image blending, and there is no unified benchmark dataset. Zhang et al. [198] do not mention the source of the images used. Wu et al. [172] manually crop objects from the transient attributes database [70] to create input composite images...
The existing deep image blending works [172, 198, 194] adopt the following evaluation metrics: 1) calculating realism score using the pretrained model [209] which reflects the realism of a composite image; 2) conducting user study by asking engaged users to select the most realistic images; 3) Zhang et al. [194] deem t...
For quantitative evaluation, existing works adopt metrics including Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity index (SSIM) [130], Learned Perceptual Image Patch Similarity (LPIPS) [201] to calculate the distance between harmonized result and ground-truth. These metrics can also ...
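Two of these metrics are simple enough to sketch directly; the following assumes flattened images with pixel values in [0, 1], where PSNR is derived from MSE as 10·log10(MAX²/MSE):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; infinite for identical inputs."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(max_val ** 2 / m)

pred = [0.5, 0.5, 0.5, 0.5]
gt = [0.5, 0.5, 0.5, 0.6]
print(round(mse(pred, gt), 4))  # -> 0.0025
```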
We report the results of Poisson image blending [121], GP-GAN [172], Zhang et al. [198], and MLF [194]. We also report the ground-truth composite image obtained using ground-truth alpha matte for comparison. From Fig. 9, it can be seen that the obtained composite images using predicted alpha mattes are very close to t...
Figure 9: The leftmost column is the initial composite image obtained using the alpha matte predicted by LFPNet [95]. The rightmost column is the ground-truth composite image obtained using ground-truth alpha matte. The middle columns are the refined results obtained by Poisson image blending [121], GP-GAN [172], Zhang...
Spatio-temporal Alignment: In order to facilitate the analysis of correlations between cities and tasks, we have adopted a consistent spatio-temporal configuration for all sub-datasets from the same city. This includes a uniform range of geographical coordinates, temporal frames, and sampling intervals.
Comprehensiveness: Fig. 1(a) illustrates that CityNet comprises three types of raw data (mobility data, geographical data, and meteorological data) collected from seven different cities. Furthermore, we have processed the raw data into several sub-datasets (as shown in Fig. 1(b)) to capture a wider range of urban p...
Interrelationship: We have classified the sub-datasets into two categories: service data and context data, as depicted in Fig. 1(c). Service data pertains to the status of urban service providers (e.g. taxi companies), while context data refers to the urban environment (e.g. weather). Based on this categorization, we h...
In order to facilitate a clear understanding of the data used in this study, we have classified all taxi-related mobility data (including flow, pickup, and idle driving and traffic speed data) as service data, as they pertain to the operational states of transport service providers. Accordingly, all other data have bee...
Our analyses and experiments on CityNet have yielded valuable insights for researchers. Our studies have confirmed the correlations among sub-datasets and have demonstrated that urban modeling and analyses can be enhanced by appropriately utilizing the mutual knowledge among correlated sub-datasets. To this end, we hav...
$$\Gamma^{\alpha}_{\mathrm{MVE}}(\mathbf{x}^{*}) := \big[\hat{\mu}(\mathbf{x}^{*})-z^{\alpha}\hat{\sigma}(\mathbf{x}^{*})\,,\,\hat{\mu}(\mathbf{x}^{*})+z^{\alpha}\hat{\sigma}(\mathbf{x}^{*})\big].$$
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat...
One can immediately expect that, analogous to general mean-variance estimators with a Gaussian prediction interval, this procedure does not give optimal intervals for data sets that do not follow a normal distribution. One of the consequences is that this model might suffer from the validity problems discussed in Secti...
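For reference, the Gaussian mean-variance interval this paragraph refers to can be computed as follows (a minimal sketch; `mve_interval` is an illustrative name, and the interval is only calibrated when the residuals are actually normal):

```python
from statistics import NormalDist

# z^alpha is the standard-normal quantile for a two-sided interval with
# miscoverage level alpha, so the interval is mu +/- z * sigma.

def mve_interval(mu, sigma, alpha=0.1):
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return (mu - z * sigma, mu + z * sigma)

lo, hi = mve_interval(0.0, 1.0, alpha=0.05)  # roughly (-1.96, 1.96)
```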
For each of the selected models, Fig. 4 shows the best five models in terms of average width, excluding those that do not (approximately) satisfy the coverage constraint (2). This figure shows that there is quite some variation in the models. There is not a clear best choice. Because on most data sets the models produc...
In the section on quantile regression it was noted that this approach tends to have a problem with modelling the tails of the distribution, with the added consequence that this can influence the validity at extreme significance levels. However, when combining such models with conformal prediction, validity is not an i...
For MIDI scores, our final token vocabulary for REMI contains 16 unique Sub-bar tokens, 86 Pitch tokens, 64 Duration tokens, one Bar token, one Pad token and one Mask token, in total 169 tokens. For CP, we do not use a Pad token but represent a zero-padded super token by Bar(Pad), Sub-bar(Pad), Pitch(Pad) and Duration...
For the sequence-level tasks, which require only a prediction for an entire sequence, we follow \textcite{emopia} and choose the Bi-LSTM-Attn model from \textcite{lin2017structured} as our baseline, which was originally proposed for sentiment classification in NLP. The model combines LSTM with a self-attention module for t...
Table 2: The testing classification accuracy (in %) of different combinations of MIDI token representations and models for four downstream tasks: three-class melody classification, velocity prediction, style classification and emotion classification. “CNN” represents the ResNet50 model used by \textcite{lee20ismirLBD}, ...
We evaluate PTMs on four piano music classification tasks. These include two note-level classification tasks, i.e., melody extraction \parencite{simonettaCNW19,note-affinity} and velocity prediction \parencite{widmer94aaai,jeongKKLN19ismir,jeongKKN19icml}, and two sequence-level classification tasks, i.e., style classificat...
Throughout this article, we refer to note-level classification tasks as tasks that perform a prediction for each individual note in a music sequence and sequence-level tasks as tasks that require a single prediction for an entire music sequence. We consider two note-level tasks and two sequence-level tasks in our exper...
In this paper, we turn our attention to the special case when the graph is complete (denoted $K_{n}$) and its backbone is a (nonempty) tree or a forest (which we will denote by $T$ and $F$, respectively). Note that it has a natural in...
This description draws a comparison e.g. to the $L(k,1)$-labeling problem (see e.g. [10] for a survey), where the colors of any two adjacent vertices have to differ by at least $k$ and the colors of any two vertices within distance 2 have to be distinct.
We will color $F$ by assigning colors to $Y_{1}$, $B_{1}$ and $R_{1}$ first, and then to $Y_{2}$...
Since all vertices in $c$ have different colors, it is true that $|Y|\leq l$. Moreover, the optimality of $c$ implies that both $R$ and $B$ are non-empty. From the fact that $c$ is a coloring of $K_{n}$...
First, we note that $Z(S_{2})$, by property $(A)$ of the Zeckendorf representation, does not have two consecutive ones. Thus, the only combinations available when we sum the rightmost blocks of type A (i.e. the ones which do...
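For readers unfamiliar with it, here is a minimal greedy sketch of the Zeckendorf representation; the greedy choice of the largest Fibonacci number not exceeding the remainder is what yields property (A), i.e. no two consecutive Fibonacci numbers:

```python
# Zeckendorf's theorem: every positive integer is a unique sum of
# non-consecutive Fibonacci numbers. The greedy algorithm below always
# takes the largest Fibonacci number that still fits.

def zeckendorf(n):
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    rep = []
    for f in reversed(fibs):
        if f <= n:
            rep.append(f)
            n -= f
    return rep  # greedy choice guarantees non-consecutive Fibonacci indices

print(zeckendorf(100))  # -> [89, 8, 3]
```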