robench-2024b — Collection (48 items, updated)
| context (string, 102–1.68k chars) | A (string, 102–2.6k chars) | B (string, 106–2.67k chars) | C (string, 101–2.91k chars) | D (string, 105–2.35k chars) | label (4 classes) |
|---|---|---|---|---|---|
$\{n,n'\}$. $\int_0^1 x^{D-1}R_n^m(x)R$… | the product of $x^m$ by a polynomial of degree $n-m$: | The inversion of (8) assembles powers $x^i$ by sums | Zernike Polynomials $R_n^m$ by computation of the ratios | $x^{i}\equiv\sum_{n=m\pmod{2}}^{i}h_{i,n,m}R_{n}^{m}(x);\quad i-m=0,2,4,6,\ldots$ | B |
As we run through the algorithm described by Taylor, we deal with the columns of $g$ in reverse order, beginning with column $d$. | For each column $c$ with $r=r(c)\leq d-2$, | Having ‘cleared’ column $c$, we clear the entries in position $j=1,\ldots,c-1$ in the $r$th row by multiplying $g$ on the right by the transvections | At this stage, $g$ has been reduced to a matrix in which columns $c-1,\ldots,d$ have exactly one nonzero entry (and these entries are in different rows). | Suppose we have reached column $c$, for some $c\in\{1,\ldots,d\}$. | D |
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computa... | The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis... | As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computa... | The idea of using exponential decay to localize global problems was already considered in the interesting approach developed under the name of Localized Orthogonal Decomposition (LOD) [MR2831590, MR3591945, MR3246801, MR3552482], which are | and denoted by Localized Orthogonal Decomposition Methods (LOD) were introduced and analyzed in [MR3246801, MR2831590, MR3552482, MR3591945]. | C |
As we will see, the killing functions in these problems are also simple. Nevertheless, it requires painstaking effort and creativity to derive the killing functions. | Chandran and Mount [8] compute all the $P$-stable triangles in linear time by the Rotating-Caliper technique, | from which we can see this technique is different from the Rotating-Caliper technique (see discussions in subsection 1.2.3). | One major difference is that the Rotating-Caliper uses only one parameter (e.g. some angle $\theta$), whereas | An application of Toussaint’s Rotating-Caliper (RC) [26] technique usually adopts the following framework: | D |
For analyzing the employed features, we rank them by importance using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of new... | CrowdWisdom: Similar to [18], the core idea is to leverage the public’s common sense for rumor detection: If there are more people denying or doubting the truth of an event, this event is more likely to be a rumor. For this purpose, [18] use an extensive list of bipolar sentiments with a set of combinational rules. In... | It has to be noted here that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable only when the user gains more knowledge about the event. | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade this off by debunking at the single-tweet lev... | the idea of focusing on early rumor signals in text contents, which is the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, that could capture more hidden meaningful signal than only enquiries to debunk rumor... | B |
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$ | where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in zero measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ componen... | In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) | converges to some limit $\mathbf{w}_{\infty}$, then we can write $\mathbf{w}(t)=g(t)\mathbf{w}_{\infty}+\boldsymbol{\rho}(t)$ | and in the last line we used the fact that $\boldsymbol{\rho}(t)$ | B |
$V(E_i)=(\textbf{F}^{D}_{i,0},\textbf{F}^{D}_{i,1},\ldots,\textbf{S}^{D}_{i,1},\ldots,\textbf{S}^{D}_{i,N})$ | $\textbf{F}^{D}_{i,t}=(\widetilde{f}_{i,t,1},\widetilde{f}_{i,t,2},\ldots$ … | $\textbf{S}^{D}_{i,t}=\frac{\textbf{F}^{D}_{i,t+1}-\textbf{F}^{D}_{i,t}}{\mathit{Interval}(E_i)}$ | $V(E_i)=(\textbf{F}^{D}_{i,0},\textbf{F}^{D}_{i,1},\ldots,\textbf{S}^{D}_{i,1},\ldots,\textbf{S}^{D}_{i,N})$ | We split this event time frame into N intervals and associate each tweet to one of the intervals according to its creation time. Thus, we can generate a vector $V(E_i)$ of features for each time interval. In order to capture the changes of feature ov... | A |
Table 4: Performance of the baselines (RWR relatedness scores, RWR+MLE, RWR+MLE-W, LNQ, and PNQ) compared with our ranking models; | For RQ1, given an event entity e, at time t, we need to classify it into either the Breaking or Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of the breaking class and 3,050 ... | an unsupervised ensemble method to produce the final ranking score. Suppose $\bar{a}$ is a testing entity aspect of entity $e$. We run each of the ranking models in $\mathbf{M}$ against the instance of $\bar{a}$, multiplied by the t... | $\ast$, $\dagger$, $\mp$ indicate statistical improvement over the baseline using a t-test, significant at $p<0.1$, $p<0.05$, $p<0.01$ respectively. | Results. The baseline and the best results of our $1^{st}$-stage event-type classification are shown in Table 3 (top). The accuracy for basic majority vote is high for imbalanced classes, yet it is lower at weighted F1. Our learned model achiev... | C |
More importantly, these algorithms are commonly designed under the assumption of stationary reward distributions, | Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; Li et al., 2016]. | in many science and engineering problems [Ristic et al., 2004; van Leeuwen, 2009; Ionides et al., 2006; Creal, 2012], | SMC methods [Arulampalam et al., 2002; Doucet et al., 2001; Djurić et al., 2003] have been widely used | from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | C |
Patients 11 and 14 are the most active, both having a median of more than 50 active intervals per day (corresponding to more than 8 hours of activity). | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. | Table 2: Descriptive statistics for the number of patient data entries per day. Active intervals are 10-minute intervals with at least 10 steps taken. | Patient 10 on the other hand has a surprisingly low median of 0 active 10-minute intervals per day, indicating missing values due to, for instance, not carrying the smartphone at all times. | The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | C |
To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architect... | Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer... | A prerequisite for the successful application of deep learning techniques is a wealth of annotated data. Fortunately, the growing interest in developing and evaluating fixation models has led to the release of large-scale eye tracking datasets such as MIT1003 Judd et al. (2009), CAT2000 Borji and Itti (2015), DUT-OMRO... | Weight values from the ASPP module and decoder were initialized according to the Xavier method by Glorot and Bengio (2010). It specifies parameter values as samples drawn from a uniform distribution with zero mean and a variance depending on the total number of incoming and outgoing connections. Such initialization sch... | To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation met... | C |
$\overline{\mathtt{Einzelelement}}$… | In the following, we investigate another aspect of greedy strategies. Any symbol that is marked next in a marking sequence can have isolated occurrences (i. e., occurrences that are not adjacent to any marked block) and block-extending occurrences (i. e., occurrences with at least one adjacent marked symbol). Each isol... | The important property of the word $\alpha_e$ is that for every edge $\{x,y\}$ of $H$ (except $e$), it contains two distinct size-$2$ factors that are $xy$- or $yx$… | For the sake of convenience, let $\ell=2k$ for some $k\geq 1$. Let $\sigma$ be any block-extending marking sequence for $\alpha$. If $\sigma$ marks $y$ first, then we have $2k$ marked blocks and if some $x_i$... | For this example marking sequence, it is worth noting that marking the many occurrences of $e$ joins several individual marked blocks into one marked block. This also intuitively explains the correspondence between the locality number and the maximum number of occurrences per symbol (in condensed words): if th... | D |
Results were obtained using an independent dataset of 3039 PPG achieving better results than previous methods that were based on handcrafted features. | Besides AF detection, wearable data have been used to search for optimal cardiovascular disease predictors. | Gotlibovych et al. [117] trained a one-layer CNN followed by an LSTM using 180h of PPG wearable data to detect AF. | CRFs have been jointly trained with CNNs and have been used in depth estimation in endoscopy [269] and liver segmentation in CT [270]. | Wearable devices, which impose restrictions on size, power and memory consumption for models, have also been used to collect cardiology data for training deep learning models for AF detection. | A |
Given the stochasticity of the proposed model, SimPLe can be used with truly stochastic environments. To demonstrate this, we ran an experiment where the full pipeline (both the world model and the policy) was trained in the presence of sticky actions, as recommended in (Machado et al., 2018, Section 5). Our world mode... | The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5 runs was often significantly better. For 6 of the games, it exceeds the average human score (as reported in Table 3 of Pohlen et al. (2018)). This suggests that further stabi... | In search for an effective world model we experimented with various architectures, both new and modified versions of existing ones. This search resulted in a novel stochastic video prediction model (visualized in Figure 2) which achieved superior results compared to other previously proposed models. In this section, we... | We evaluate our method on 26 games selected on the basis of being solvable with existing state-of-the-art model-free deep RL algorithms (specifically, for the final evaluation we selected games which achieved non-random results using our method or the Rainbow algorithm using 100K interactions), which in... | To evaluate the design of our method, we independently varied a number of the design decisions. Here we present an overview; see Appendix A for detailed results. | D |
Out of the 11500 signals we used 76%, 12% and 12% of the data (8740, 1380, 1380 signals) as training, validation and test data respectively. | All networks were trained for 100 epochs and model selection was performed using the best validation accuracy out of all the epochs. | The convolutional and linear layers of all $b_d$ were initialized according to their original implementation. | Three identical channels were also stacked for all $m$ outputs to satisfy the input size requirements for $b_d$. | Architectures of all $b_d$ remained the same, except for the number of the output nodes of the last linear layer which was set to five to correspond to the number of classes of $D$. | A |
A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measure... | Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ... | In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and wal... | There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that ... | In the literature review, Gorilla [2] is able to switch between bipedal and quadrupedal walking locomotion modes autonomously using criteria developed based on motion efficiency and stability margin. WorkPartner [8] demonstrated its capability to seamlessly transition between two locomotion modes: rolling and rolking. ... | D |
$\gamma\leq c/(c+t)\leq\gamma+1/2^{k}$. In case $c/(c+t)$ is a positive integer multiple of $1/2^{k}$… | The advice for Rrc is a fraction $\gamma$, integer multiple of $1/2^{k}$, that is encoded in $k$ bits such that if the advice is trusted then | The remaining cases are more interesting and involve scenarios when the advice is untrusted, or when the advice is trusted but the algorithm maintains a ratio of $\alpha$ instead of $\gamma$ as indicated by the advice. | Note that for a sufficiently large, yet constant, number of bits, $\gamma$ provides a good approximation of the critical ratio. Indeed having $\gamma$ as advice is sufficient to achieve a competitive ratio that approaches $1.5$ in the trusted advice model, as shown in [2]. | First, note that when $\gamma\leq\alpha$, then the algorithm works with the ratio $\gamma$ as indicated by the advice. | C |
$\overrightarrow{c}\leftarrow$ Classify-At-Level($text$, $MAX\_LEVEL$) | $\left(\frac{\overrightarrow{\Delta c}[1]}{\overrightarrow{\Delta c}[0]}>4\right)$ or $\left(\overrightarrow{c}[1]>\overrightarrow{c}[0]\right)$ | local variables: $\overrightarrow{c}$, the subject confidence vector | return a set of indexes selected by applying a policy, $\pi$, to $\overrightarrow{c}$ | $\overrightarrow{c}\leftarrow\overrightarrow{c}+\overrightarrow{\Delta c}$ | C |
There has appeared one error feedback based sparse communication method for DMSGD, called Deep Gradient Compression (DGC) (Lin et al., 2018), which has achieved better performance than vanilla DSGD with sparse communication in practice. | However, the theory about the convergence of DGC is still lacking. Furthermore, although DGC combines momentum and error feedback, the momentum in DGC only accumulates stochastic gradients computed by each worker locally. Therefore, the momentum in DGC is a local momentum without global information. | We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is ... | GMC combines error feedback and momentum to achieve sparse communication in distributed learning. But different from existing sparse communication methods like DGC which adopt local momentum, GMC adopts global momentum. | To make a comprehensive comparison of these methods, we will compare GMC with two implementations of DGC: DGC (w/ mfm) and DGC (w/o mfm). Different from DGC (w/ mfm), DGC (w/o mfm) will degenerate to DMSGD if sparse communication is not adopted. | A |
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals in total. | In Section II we define the $\varphi$ metric, then in Section III we define the five tested activation functions along with the architecture and training procedure of SANs, in Section IV we experiment SANs on the Physionet [32], UCI-epilepsy [33], MNIST [34] and FMNIST [35] databases and provide visualizations... | We use 10000 images from the training dataset as a validation dataset and train on the rest 50000 for 5 epochs with a batch size of 64. | First, we merge the tumor classes (2 and 3) and the eyes classes (4 and 5) resulting in a modified dataset of three classes (tumor, eyes, epilepsy). | The CNN feature extractor consists of two convolutional layers with 3 and 16 filters and kernel size 5, each one followed by a ReLU and a Max-Pool with pool size 2. | C |
20: $x_i(t+1)=0$. | Let us denote by $\tau$ the dynamic degree of the scenarios. The harsher the environment the network suffers, the higher $\tau$ is. In the highly dynamic scenarios, we suppose that $\tau\geq 0.01$. With proper $\tau$, PBLLA asymptotically converges and leads the UAV ad-h... | The essence of PBLLA is selecting an alternative UAV randomly in one iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of two strategies and $\tau$. The UAV prefers to select the power and altitude which provide higher utility. Never... | However, we have to recognize that the strategy-altering probability $\omega$ severely impacts the efficiency of SPBLLA. If Theorem 5 limits $m$ to be a large value, the probability will decrease. When $m$ is too large, UAVs are hard to move, and the learning rate will decrease. To some ... | The process of SPBLLA frees UAVs from message exchange. Therefore, there is no waste of energy or time consumption between two iterations, which significantly improves learning efficiency. All UAVs alter strategies with a certain probability $\omega$, which is determined by $\tau$ and $m$... | D |
Once again, with boundary conditions $\overline{\mathbf{v}}_{\perp}|_{\Gamma}=\mathbf{0}$, | For the particular case of $\overline{U}=\overline{1}$, this implies | Note that $\overline{\overline{\Delta}}\,\overline{U}$ may be expressed | radii of the wall. Initial system toroidal flux is zero, and $\Phi_{PI}(t)$ | Hence, system toroidal flux is conserved. Note that in this case $\overline{f}$ | D |
FD $g(x)\rightarrow g(y)$ is valid in $r$. | This is the case, for instance, of the reality $g_1$ depicted to the left of Figure | Instead of $g_1$, we consider the interpretation $g_1^{\prime}$ depicted in Figure | according to the realities $g_1$ to $g_6$ given in Figure 9. | Let $r$ be the relation on $\mathcal{C}_R$ given to the left of Figure 12. | A |
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in gradient direction estimation of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes be... | We detected the variance between DQN and Dropout-DQN visually and numerically as Figure 3 and Table I show. | In the experiments we detected variance using the standard deviation from the average score collected from many independent learning trials. | Figure 3: Dropout DQN with different Dropout methods in the CARTPOLE environment. The bold lines represent the average scores obtained over 10 independent learning trials, while the shaded areas indicate the range of the standard deviation. | To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We have evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Classi... | B |
Weakly supervised segmentation using image-level labels versus a few images with segmentation annotations. Most new weakly supervised localization methods apply attention maps or region proposals in multiple instance learning formulations. While attention maps can be noisy, leading to erroneously highlighted regions,... | Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic... | While most deep segmentation models for medical image analysis rely on only clinical images for their predictions, there is often multi-modal patient data in the form of other imaging modalities as well as patient metadata that can provide valuable information, which most deep segmentation models do not use. Therefore,... | We provide comprehensive coverage of research contributions in the field of semantic segmentation of natural and medical images. In terms of medical imaging modalities, we cover the literature pertaining to both 2D (RGB and grayscale) as well as volumetric medical images. | Because of the large number of imaging modalities, the significant signal noise present in imaging modalities such as PET and ultrasound, and the limited amount of medical imaging data mainly because of high acquisition cost compounded by legal, ethical, and privacy issues, it is difficult to develop universal solution... | B |
Tab. VI-B reports the average results achieved over 10 independent runs by a GNN implemented with different pooling operators. | Similarly to the MNIST experiment, we notice that neither DiffPool nor Top$K$ are able to solve this graph signal classification task. | As expected, GNNs configured with GRACLUS, NMF, and NDP are much faster to train compared to those based on DiffPool and Top$K$, with NDP being slightly faster than the other two topological methods. | We consider two tasks on graph-structured data: graph classification and graph signal classification. | Contrarily to graph classification, DiffPool and Top$K$ fail to solve this task and achieve an accuracy comparable to random guessing. | D |
Massiceti et al. (2017) extend this approach and introduce a network splitting strategy by dividing each decision tree into multiple subtrees. The subtrees are mapped individually and share common neurons for evaluating the split decision. | When using all decision trees, data samples are created where all trees agree with a high probability. | The number of parameters of the networks becomes enormous as the number of nodes grows exponentially with the increasing depth of the decision trees. | These techniques, however, are only applicable to trees of limited depth. As the number of nodes grows exponentially with the increasing depth of the trees, inefficient representations are created, causing extremely high memory consumption. | Additionally, many weights are set to zero so that an inefficient representation is created. Due to both reasons, the mappings do not scale and are only applicable to simple random forests. | C |
In particular, when specialized to the tabular setting, our setting corresponds to the third setting with $d=|\mathcal{S}|^{2}|\mathcal{A}|$, where OPPO attains an $H^{3/2}|\mathcal{S}|^{2}|\mathcal{A}|\sqrt{T}$... | for any function $f:{\mathcal{S}}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al.... | step, which is commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), lacks such a notion of robustness. | A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient ... | Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; ... | D |
In a first step, channel (WRN: Channel), kernel (WRN: Kernel), and group pruning (WRN: Group) are evaluated separately on the WRN architecture. | The results for number of floating-point operations (FLOPs), parameters, activations, and memory (= parameters + activations) are reported in Figure 4. | Typically, the weights of a DNN are stored as 32-bit floating-point values and during inference millions of floating-point operations are carried out. | Ultimately, it can be stated that group convolutions are excellent at reducing FLOPs and parameters but can harm the overall memory requirements by increasing the amount of activations. | The dense architecture outperforms the residual blocks in terms of number of FLOPs as well as parameters. | A
In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori no... | In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori no... | In Section 8, we reprove Rips and Gromov’s result about the contractibility of the Vietoris-Rips complex of hyperbolic geodesic metric spaces, by using our method consisting of isometric embeddings into injective metric spaces. As a result, we will be able to bound the length of intervals in Vietoris-Rips persistence b... | In Section 4, we show that the Vietoris-Rips filtration can be (categorically) seen as a special case of persistent homology obtained through metric embeddings via the isomorphism theorem (Theorem 1). In this section, we also establish the stability of the filtration obtained via metric embeddings. | In this paper we significantly generalize this point of view by proving an isomorphism theorem between the Vietoris-Rips filtration of any compact metric space $X$ and its Kuratowski filtration: | C
We aim to enhance the trust into and interpretability of t-SNE through visualization and exploration of the model, the data, and the hyper-parameters. An overall picture of the interface is shown in Figure 1, and each of its different views is described below, divided into our four design goals: Hyper-parameter Explora... | Significantly-different t-SNE projections can be generated from the same data set, due to its well-known sensitivity to hyper-parameter settings [14]. We propose to support users in finding a good t-SNE projection for their data by using visual exploration, as follows. A Grid Search mode (Figure 1(a)) initiates a syste... | The answers to Q.1.2 also show that t-viSNE users needed fewer iterations to find a good parameter setting. | We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are quite different and ... | VisCoDeR [22] supports the comparison between multiple projections generated by different DR techniques and parameter settings, similarly to our initial parameter exploration, using a scatterplot view with an on-top heatmap visualization for evaluating the quality of these projections. In contrast to t-viSNE, it does n... | A |
Metaheuristics “In the Large” - 2022 [28]: The objective of this work is to provide a useful tool for researchers. To address the lack of novelty, the authors propose a new infrastructure to support the development, analysis, and comparison of new approaches. This framework is based on (1) the use of algorithm template... | Good practices for designing metaheuristics: It gathers several works that are guidelines for good practices related to research orientation to measure novelty [26], to measure similarity in metaheuristics [27], Metaheuristics “In the Large” (to support the development, analysis, and comparison of new approaches) [28],... | An exhaustive review of the metaheuristic algorithms for search and optimization: taxonomy, applications, and open challenges - 2023 [34]: This taxonomy provides a large classification of metaheuristics based on the number of control parameters of the algorithm. In this work, the authors question the novelty of new pro... | The correct design of a bio-inspired algorithm involves the execution of a series of steps in a conscientious and organized manner, both at the time of algorithm development and during subsequent experimentation and application to real-world optimization problems. In [5], a complete tutorial on the design of new bio-in... | Designing new metaheuristics: Manual versus automatic approaches - 2023 [29]: This study discusses two methods for the design of new metaheuristics, manual or automatic. Although authors give credit to the manual design of metaheuristics because this development is based on the designer’s intuition and often involves l... | D |
A feasible approach is to recompute the connectivity distribution based on the embedding $Z$, which contains the potential manifold information of data. | However, the following theorem shows that the simple update based on latent representations may lead to the collapse. | (2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective st... | To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, $T$, is set as 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes mo... | Therefore, the update step with the same sparsity coefficient $k$ may result in collapse. To address this problem, we assume that | A
Measuring IPID increment rate. The traffic to the servers is stable and hence can be predicted (Wessels et al., 2003). We validate this by sampling the IPID value at the servers which we use for running the test. One example evaluation of IPID sampling on one of the busiest servers is plotted in Figure 3. In this eval... | Accuracy of IPID measurements. The IPID techniques are known to be difficult to leverage, requiring significant statistical analyses to ensure correctness. Recently, (Ensafi et al., 2014; Pearce et al., 2017) developed statistical methods for measuring IPID. However, in contrast to our work, the goal in (Ensafi et al.,... | How widespread is the ability to spoof? There are significant research and operational efforts to understand the extent and the scope of (ingress and egress)-filtering enforcement and to characterise the networks which do not filter spoofed packets; we discuss these in Related Work, Section 2. Although the existing stu... | ∙ Consent of the scanned. It is often impossible to request permission from owners of all the tested networks in advance, this challenge similarly applies to other Internet-wide studies (Lyon, 2009; Durumeric et al., 2013, 2014; Kührer et al., 2014). Like the other studies, (Durumeric et al., 2013, 2014), we p... | What SMap improves. The infrastructure of SMap is more stable than those used in previous studies, e.g., we do not risk volunteers moving to other networks. Our measurements do not rely on misconfigurations in services which can be patched, blocking the measurements. The higher stability also allows for more accurate r... | A
Natural systems need to adapt to a changing world continuously; seasons change, food sources and shelter opportunities vary, cooperation and competition with other animals evolves over time. Moreover, their embodiment also changes over their lifetime. Young animals experience a period of growth where their size increas... | It is common to try to avoid such changes in artificial agents, machines, and industrial processes. When something changes, the entire system is taken offline and modified to fit the new situation. This process is costly and disruptive; adaptation similar to that in nature might make such systems more reliable and long... | Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a br... | While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this pape... | An alternative approach is to emulate adaptation in natural sensor systems. The system expects and automatically adapts to sensor drift, and is thus able to maintain its accuracy for a long time. In this manner, the lifetime of sensor systems can be extended without recalibration. | A |
Each of the six cases has several subcases, depending on the left-to-right order of the vertices inside the gray rectangles in the figure. | Each point has a fixed $x$-coordinate and a $y$-range specified by the array $Y$; | The vertical order of the edges is also not fixed, as the points can have any $y$-coordinate in the range $[0,2\sqrt{2}]$. | A scenario contains for each point $q$ an $x$-coordinate $x(q)$ from the set of allowed $x$-coordinates for $q$, and a range $y\text{-range}(q)\subseteq[0,2\sqrt{2}]$... | For any fixed ordering we can still vary the $y$-coordinates in the range $[0,\delta]$. | D
Let $S$ be a finitely generated simple or $0$-simple idempotent-free semigroup. Then $S$ is not residually finite. | For our proof, we will show that no simple or $0$-simple idempotent-free semigroup is residually finite. A semigroup $S$ is called residually finite if, for all $s,t\in S$ with $s\neq t$, there is a homomorphism $\varphi:S\to F$... | By 16, $A$ or $C$ embeds into $S$. Since neither of the two is residually finite (by 20), we obtain that $S$ cannot be residually finite either by 17. | If a semigroup $S$ is not residually finite and embeds into a semigroup $T$, then $T$ cannot be residually finite either. | If there is an injective homomorphism $S\to T$ between two semigroups $S$ and $T$, then $S$ is isomorphic to a subsemigroup of $T$ and we also say that $S$ embeds into $T$ (or that $S$ can be embedded into $T$). Thus, 16 st... | B
Here, we showed that existing visual grounding based bias mitigation methods for VQA are not working as intended. We found that the accuracy improvements stem from a regularization effect rather than proper visual grounding. We proposed a simple regularization scheme which, despite not requiring additional annotations,... | It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in ... | Since Wu and Mooney (2019) reported that human-based textual explanations Huk Park et al. (2018) gave better results than human-based attention maps for SCR, we train all of the SCR variants on the subset containing textual explanation-based cues. SCR is trained in two phases. For the first phase, which strengthens the... | As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the p... | This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any spon... | D |
Language Detection. We focused on privacy policies written in the English language, to enable comparisons with prior corpora of privacy policies. To identify the natural language of each candidate document, we used the open-source Python package Langid (Lui and Baldwin, 2012). Langid is a Naive Bayes-based classifier p... | The complete set of documents was divided into 97 languages and an unknown language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates. | The 1,600 labelled documents were randomly divided into 960 documents for training, 240 documents for validation and 400 documents for testing. Using 5-fold cross-validation, we tuned the hyperparameters for the models separately with the validation set and then used the held-out test set to report the test results. Du... | Document Classification. Some of the web pages in the English language candidate document set may not have been privacy policies and instead simply satisfied our URL selection criteria. To separate privacy policies from other web documents we used a supervised machine learning approach. Two researchers in the team labe... | Language Detection. We focused on privacy policies written in the English language, to enable comparisons with prior corpora of privacy policies. To identify the natural language of each candidate document, we used the open-source Python package Langid (Lui and Baldwin, 2012). Langid is a Naive Bayes-based classifier p... | A |
Multiple metrics are important to avoid the dangers of using single metrics, such as accuracy [32, 54], for every data set. | As mentioned in section 1, the selection of the right performance metrics for different types of analytical problems and/or data sets is challenging. | However, comparison and selection between multiple performance indicators is not trivial, even for widely used metrics [12, 46]; alternatives such as Matthews correlation coefficient (MCC) might be more informative for some problems [8]. | Optimized Models for Specific Predictions. In Figure 7(a), we see the initial projection of the 200 models selected up to this point (i.e., S1). Some models perform well according to our metrics, but others could be removed due to lower performance. However, we should try not to break the ba... | T3: Manage the performance metrics for enhancing trust in the results. Many performance or validation metrics are used in the field of ML. For each data set, there might be a different set of metrics to measure the best-performing stacking. Controlling the process by alternating these metrics and observing their influe... | B
$\{(a,b,c)\in N^{3}\mid\{a,b,c\}\neq N\}$; | By Theorem 2.1, each surjective mapping $t^{\prime}$ in | $R(\overline{0},(v,[02]),(v,[12]))$; | Surjective homomorphisms appear naturally in the linear-algebraic theory of homomorphism-related combinatorial quantities that was pioneered by Lovász [25, 26, 12, 6]. | on the variables in $V$ that appear in the $\alpha_{i}$; | C
Some works use MAML for few-shot text classification, such as relation classification [Obamuyide and Vlachos, 2019] and topic classification [Bao et al., 2020]. | Many techniques have been employed to address the issue of data scarcity, including self-supervised pre-training [Achiam et al., 2023, OpenAI, 2022], transfer learning [Gero et al., 2018, Kumar et al., 2022], and meta-learning [Madotto et al., 2019, Song et al., 2020, Zhao et al., 2022]. Compared to other approaches, m... | Other works use MAML for multi-domain and low-resource language generation, such as few-shot dialogue system [Mi et al., 2019, Madotto et al., 2019, Qian and Yu, 2019, Song et al., 2020] and low-resource machine translation [Gu et al., 2018]. | Some works use MAML for few-shot text classification, such as relation classification [Obamuyide and Vlachos, 2019] and topic classification [Bao et al., 2020]. | When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as ”tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla M... | B |
For the CCA-enabled UAV mmWave network, the array size is usually large and the corresponding inter-element distance $\Delta\phi$ is small. Therefore, it is assumed that $\Delta\alpha$ and $\Delta\beta$ satisfy $\Delta\phi_{c}\leq\Delta\alpha$... | For the LOS channel, the AOAs and AODs in (5) are mainly determined by the position and attitude of the t-UAVs and r-UAV. | Given the maximum resolution of the codebook, we continue to discuss the characteristic of the multi-resolution and the beamwidth with the CCA codebook. For the multi-resolution codebook, the variable resolution is tuned by the beamwidth, which is determined by the number of the activated elements [12]. Note that the b... | The analog precoding architecture adopted for DRE-covered CCA is shown in Fig. 2 [13], which tunes the partially-connected precoding architecture by adapting the connection between the RF chains and the antenna elements to the channel variation and forming dynamic subarrays. For a fixed time slot, the precoding archite... | For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam trac... | C
After the merging, we obtain an “almost” $A|B$-biregular graph with size $\bar{M}|\bar{N}$. | As in the example, the “almost” is because it is possible that there are parallel edges between two vertices in $G$ | Again the “almost” is because it is possible that there are parallel edges between two vertices in $G$. | the edges are between the vertices in $A_{\pi}$ | whereas $R_{2}$ contains only edges between vertices in $U_{1}$ and vertices in $V_{2}$. | A
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et ... | Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, T... | corresponding to $\theta^{(m)}(k)=(\theta_{1}(k),\ldots,\theta_{m}(k))\in\mathbb{R}^{D\times m}$... | Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represen... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... | D
We also study the merging operations, concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers, to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4. | Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and t... | We also study the merging operations, concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers, to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4. | Another way to take care of the outputs of these two sub-layers in the decoder layer is to replace their residual connections with two depth-wise LSTM sub-layers, as shown in Figure 3 (b). This leads to better performance (as shown in Table 4), but at the costs of more parameters and decoder depth in terms of sub-layer... | Table 4 shows that, even though this is counter-intuitive, element-wise addition (with fewer parameters) empirically results in slightly higher BLEU than the concatenation operation. Furthermore, even though using 2 depth-wise LSTM sub-layers connecting cross- and masked self-attention sub-layers leads to the highest B... | D |
$(\uptau_{\subseteq_{i}},\mathsf{EFO}[\upsigma_{\mathcal{G}}])$-preservatio... | ...$\left\langle\downarrow\mathcal{C},\uptau_{\subseteq_{i}},\mathsf{FO}[\upsigma_{\mathcal{G}}]\right\rangle$ the space of all the finite | from Example 5.5. The subset $\mathcal{C}\subseteq\mathcal{D}_{\leq 2}$ | Consider the class $\mathcal{C}\subseteq\mathcal{G}$ of all finite simple | in Example 5.7 is $\mathcal{D}_{\leq 2}\subseteq\mathcal{G}$ the set of finite simple graphs of degree | C
In the training stage, we crop each distorted image into four distortion elements and learn the parameters of the neural network using all data. Note that this training process is data-independent, where each part of the entire image is fed into the network one by one without the data correlation. In the test stage, we... | 9: Compute the distortion coefficients $\hat{\mathcal{K}}$ using the $\hat{\mathcal{D}}$ based on Eq. 18 | As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, including the highest metrics on PSNR and SSIM, as well as the lowest metric on MDLD. Specifically, compared with the traditional methods [23, 24] based on the hand-crafted features, our approach overcomes the scene li... | To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images based on the PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and the proposed MDLD (mean distortion level deviation). All the comparison methods are used to conduct the distortion recti... | Evaluation Metrics: Crucially, evaluating the performance of different methods with reasonable metrics benefits experimental comparisons. In the distortion rectification problem, the corrected image can be evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). For the evaluatio... | D
Hence, the computation complexity for achieving an $\epsilon$-stationary point is $\mathcal{O}(1/\epsilon^{4})$. | In this section, we prove the convergence rate of SNGM for both smooth and relaxed smooth objective functions. First, we introduce the auxiliary variable as follows: | Table 1: Comparison between MSGD and SNGM for an $L$-smooth objective function. $\mathcal{C}$ denotes the computation complexity (total number of gradient computations). | Theorem 5.5 and Corollary 5.6 extend the convergence analyses in Theorem 5.2 and Corollary 5.3 for a smooth objective function to a relaxed smooth objective function, which is a more general scenario. | Recently, the authors in [38] observed the relaxed smooth property in deep neural networks. According to Definition 2.2, the relaxed smooth property is more general than the $L$-smooth property. | C
$\mathcal{F}$ and $\mathcal{C}$ correspond to such locations and the population affected by the outbreak, and needing services, respectively. | Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ... | To continue this example, there may be further constraints on $F_{I}$, irrespective of the stage-II decisions, which cannot be directly reduced to the budget $B$. For instance, there may be a limited number of personnel available prior to the ... | An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions. | Health departments prepared some vaccination and testing sites in advance, based on projected demands [5], i.e., in stage-I, which may have multiple benefits; for example, the necessary equipment and materials might be cheaper and easier to obtain. | D
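Each row above follows the same pipe-delimited layout: a context cell, four candidate continuations A–D, and a final label naming the correct option. A minimal sketch of parsing one such row is shown below; the function and field names are assumptions for illustration, not part of any published loader for this dataset, and rows whose cells themselves contain " | " would need a structured format (e.g. CSV or JSON) instead.

```python
# Minimal sketch: split a row of the form
# "context | option A | option B | option C | option D | label"
# into its parts. Field names ("context", "options", "label") are assumed.

def parse_row(row: str) -> dict:
    cells = row.split(" | ")
    if len(cells) != 6:
        # Cells containing a literal " | " would break this simple split.
        raise ValueError("expected exactly 6 cells: context, A-D, label")
    context, a, b, c, d, label = cells
    if label not in {"A", "B", "C", "D"}:
        raise ValueError("label must name one of the options")
    return {
        "context": context,
        "options": {"A": a, "B": b, "C": c, "D": d},
        "label": label,
    }

example = "Some context | first | second | third | fourth | B"
parsed = parse_row(example)
print(parsed["label"])           # → B
print(parsed["options"]["B"])    # → second
```

The answer text for a row is then simply `parsed["options"][parsed["label"]]`.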