Dataset schema: context (string, 250–5.97k chars), A (string, 250–8.2k), B (string, 250–3.83k), C (string, 250–5.02k), D (string, 250–5.14k), label (string, 4 classes).
“In our family, we are very open. I wouldn’t want to know that my teens might be keeping an app from me, but obviously, I wouldn’t want to encourage any kind of situation that might cause them to keep things private from me, like, it’s not a quality that I want to encourage in my family. I don’t like things that would...
Parents also did not like the bi-directional transparency of the app usage. They did not appreciate the fact that CO-oPS allowed their teens to have the same level of privacy in their app usage as they did. 68% of the parents (N=13) explicitly said that they would not want their teens to have the ability to hide apps,...
Apart from the above concerns, N=5 (26%) parents and N=8 (42%) teens said they did not want to hide apps that they had installed on their devices. Most teens said their parents already knew what apps were installed on their phones. Similarly, parents said they would not use any app that they would not share with their...
Overall, we found that most parents and teens gave little consideration to their own online safety or privacy when installing new apps or granting permissions to the apps they installed (RQ1). Meanwhile, parents often manually monitored the apps their teens installed but gave little thought to the permissions granted...
In reviewing the apps installed and the permissions granted on one another’s phones, parents were found to be more concerned about their teens’ app usage but teens generally found more concerning privacy permissions on parents’ phones. 74% of the parents (N=14) found apps on their teen’s phones that could cause concer...
B
We compare the performances of the proposed filtration against that of the distance-to-measure filtration. The sample points are shown in Figure 9, the persistence diagrams are shown in Figure 11, and the significant loops found by oracle and subsample bootstrapping are shown in Figure 10.
We compare the performances of the proposed filtration against that of the distance-to-measure filtration. The sample points are shown in Figure 9, the persistence diagrams are shown in Figure 11, and the significant loops found by oracle and subsample bootstrapping are shown in Figure 10.
The novel Robust Density-Aware Distance filtration is proposed in the present work for studying data with a non-uniform density. It is designed to make small holes of high-density regions more prominent. It is scale-invariant, and the persistences of homology classes in the proposed filtration depend on the shapes rath...
The distance-to-measure (DTM) function is a modification of the distance function that is designed to enhance robustness against potential noise and outliers. Roughly speaking, the distance-to-measure of a point x to a probability measure μ is the average distance of x from the neares...
As shown in Figure 10, while the proposed method misses some of the bigger cells detected by distance-to-measure, with the sizes of different loops normalized by density, it detects many smaller cells in the middle that distance-to-measure cannot detect.
D
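To make the DTM idea above concrete, here is a minimal Python sketch of a common empirical estimator (root-mean-square distance to the k nearest sample points); the function name, the k-NN form, and the toy data are illustrative assumptions, not the paper's exact construction.

```python
import math

def dtm(point, sample, k):
    """Empirical distance-to-measure: root-mean-square distance from
    `point` to its k nearest sample points. A common estimator used
    for illustration; the paper's exact definition may differ."""
    dists = sorted(math.dist(point, s) for s in sample)
    return math.sqrt(sum(d * d for d in dists[:k]) / k)

sample = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
# Averaging over k neighbors smooths out the effect of any single
# outlier, which is what makes DTM more robust than the plain
# distance function.
```

With k = 1 the estimator reduces to the ordinary nearest-neighbor distance; larger k trades localization for robustness.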
Considering the locality of AUs, methods such as (Zhao, Chu, and Zhang 2016; Li, Abtahi, and Zhu 2017; Li et al. 2017; Song et al. 2021a; Chen et al. 2021a) attempt to learn better facial appearance features by emphasizing important local facial regions. Zhao et al. (Zhao, Chu, and Zhang 2016) proposed Deep Region...
As for the subject variation problem, works such as (Chen et al. 2013) provide a solution for enhancing the generalizability of AU recognition models by training personalized AU classifiers for each subject, and works such as (Zen et al. 2016; Wang and Wang 2018) attempt to relieve the subject-related prediction bias t...
Recent works have made progress in capturing high-level AU semantic relations in an implicit way (Corneanu, Madadi, and Escalera 2018; Niu et al. 2019) by exploiting correlations between AUs via probabilistic graphic models or in an explicit way (Li et al. 2019; Shao et al. 2020) by constructing an AU semantic graph ac...
Considering the semantic relations among AUs, some works (Wang et al. 2013; Walecki et al. 2017) model such relations via probabilistic graphical models or graph neural networks. Wang et al. (Wang et al. 2013) introduced a restricted Boltzmann machine to model facial action units, thereby capturing n...
We compare the performance of CISNet with the previous state-of-the-art methods including DRML (Zhao, Chu, and Zhang 2016), EAC-Net (Li et al. 2017), ROI-Net (Li, Abtahi, and Zhu 2017), DSIN (Corneanu, Madadi, and Escalera 2018), JAA-Net (Shao et al. 2018), LP-Net (Niu et al. 2019), SRERL (Li et al. 2019), UGN-B (Song...
C
Countermeasure for Attacks II and III: Attacks II and III prolong the block propagation delay of BBP by violating TSO algorithms to generate invalid PPBs, e.g., selecting transactions with low GAS prices rather than high GAS prices. Fortunately, malicious nodes launching Attacks II and III can be simply identified by a sco...
The block propagation time in current blockchain networks, consisting of block validation time and transmission time, is the TPS performance bottleneck. Furthermore, the tradeoff between TPS and security is fundamental in today’s blockchains: many solutions that boost TPS come at the expense of lowered security. This ...
Limited TPS is a fundamental problem for public blockchains, and there are quite a few works on improving TPS, such as DAG, Sharding, new consensus protocols, and layer 2 [31, 32]. DAG is a new data structure in which all transactions are directly or indirectly connected and can be validated in parallel [33, 34]; Sharding ...
TPS Limited by Node: While BBP holds the potential for nearly constant block propagation time, its achievable TPS is mainly constrained by the BBP block processing time at each node, i.e., the EVM limitation. In the ideal case, BBP can finish the network consensus as soon as a new block is successfully processed by the...
A weakness of present blockchains is their low data-processing capability, as measured by transactions per second (TPS). For example, as shown in [7, 8], the TPS of Bitcoin and Ethereum are 7 and 15, respectively, significantly lower than that of centralized systems. The low TPS cannot meet the needs of large-scal...
B
In a broader context of reinforcement learning with partial observability, our work is related to several recent works on POMDPs with special structures. For example, Kwon et al. (2021) considers latent POMDPs, where each process has only one latent state, and the proposed algorithm efficiently infers the latent state...
Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2...
More specifically, partial observability poses both statistical and computational challenges. From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property. In particular, predicting the future often involves inferring the distribution of the ...
Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate...
In the context of reinforcement learning with function approximation, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinfo...
D
Furthermore, we found that articulation disorder was the most frequent disorder addressed by the included studies, with three studies dedicated to it. This may be due to the fact that articulation disorders are commonly found in persons with other SSDs (Flipsen Jr, 2015). The results show that most studies aim...
Furthermore, we found that articulation disorder was the most frequent disorder addressed by the included studies, with three studies dedicated to it. This may be due to the fact that articulation disorders are commonly found in persons with other SSDs (Flipsen Jr, 2015). The results show that most studies aim...
There are some limitations in our study which are worth mentioning. We relied on three databases: ACM DL, IEEE Xplore, and Scopus; therefore, we may have missed relevant papers published in other databases. Another limitation is the inapplicability of quality appraisal methods such as the “Risk of Bias Assessment” in ou...
were conducted in Europe (11 papers) and North America (6 papers). Studies conducted in Europe include four studies from Spain and one study each from Germany, Hungary, Romania, Portugal, the Czech Republic, and Italy. On the other hand, studies from North America include four studies from the USA, one collaborative st...
We conducted this systematic literature review based on a sample of 24 out of 678 research papers derived from the Scopus, IEEE Xplore, and ACM DL databases. Exciting insights and trends emerged from our analysis of these papers. In recent years, we observed an increasing interest in AI-based automated speech therapy to...
B
Finally, we note that the compression ratio is not overly sensitive to the choice of PCA dimension, and if we use more dimensions than the number of communities, we still get favorable results. For theoretical support, we show in Section E.4 of the appendix that the compression ratios of most points change only mildly ...
LOF has the best performance in the Zheng4eq and Zheng4uneq datasets. We also add the results for 5% removal in the Appendix, in Section E.3. We additionally report the improvements in the purity index, another popular measure of clustering accuracy, for the 5% and 10% point removal cases. As aggregate infor...
Finally, we note that the compression ratio is not overly sensitive to the choice of PCA dimension, and if we use more dimensions than the number of communities, we still get favorable results. For theoretical support, we show in Section E.4 of the appendix that the compression ratios of most points change only mildly ...
Next, we study the usefulness of our outlier detection method in these datasets. Unlike our simulations, there is no ground-truth labeling for outliers. Rather, we assume that in each community (as defined by the labels provided with the dataset), some points may behave more like an outlier, in that they may be a mixt...
Finally, we note a few limitations with our outlier removal algorithm. First, the algorithm is dependent on selecting a reasonable removal percentage. While we observed greater NMI improvement with greater removal rates, it is important to understand what is a suitable choice for different datasets. Another concern is ...
D
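The clustering-accuracy measures mentioned above (NMI, purity) are easy to state in code; here is a self-contained sketch of NMI with square-root normalization, one of several common variants (the helper names are ours; tested libraries such as scikit-learn provide canonical implementations):

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two labelings,
    I(A;B) / sqrt(H(A) * H(B)) (one of several common
    normalizations). Illustrative sketch only."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    cab = Counter(zip(labels_a, labels_b))

    def entropy(c):
        return -sum((v / n) * math.log(v / n) for v in c.values())

    # Mutual information from the joint and marginal counts.
    mi = sum((v / n) * math.log(n * v / (ca[a] * cb[b]))
             for (a, b), v in cab.items())
    denom = math.sqrt(entropy(ca) * entropy(cb))
    return mi / denom if denom > 0 else 1.0
```

NMI equals 1 for labelings identical up to a relabeling and 0 for independent labelings, which is why it is a natural score for cluster recovery after outlier removal.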
The results for the generated scene graphs from incomplete images incorporated with the proposed SI-Dial are also presented in Table 1. We observe that dialog can serve as an effective supplementary information source in the case of missing visual information, especially for the semantically masked visual input with the most severe missi...
In addition, we also show, as in the baseline situations, that the first two levels of visual missingness are innocuous for the SGG tasks. This provides empirical evidence and further insight into deliberately introducing obfuscation for privacy concerns as in [13].
The results for the generated scene graphs from incomplete images incorporated with the proposed SI-Dial are also presented in Table 1. We observe that dialog can serve as an effective supplementary information source in the case of missing visual information, especially for the semantically masked visual input with the most severe missi...
The experimental results show promising performance improvement with our proposed framework compared to multiple baselines. Notably, similar to the findings from [13], where face-obfuscated images cause only a trivial performance drop for classification and object detection, we also observe empirical evidence that no...
We observe that PredCls does not fluctuate much in the case of missing visions compared to the other two metrics, SGCls and SGDet. This is consistent with the previous findings in [2], where the authors find that the object labels are highly predictive of relation labels but not vice versa. In contrast, SGCls and SGDet drop ...
A
Since then, the facility location game has become one of the main grounds for approximate mechanism design without money and has attracted numerous follow-up research [20, 21, 1, 14]. In particular, Fotakis and Tzamos [14] show that no deterministic strategyproof mechanism can achieve a bounded approximation ratio when...
In all the above models, the cost of an agent is measured by her distance to the closest facility. This cost can be considered the travel fee. In many real-life scenarios, besides the travel fee, the agent may also need to pay the facility a service or entrance fee, such as tickets for swimming pools and museums....
Each facility, once located, has an entrance fee determined by its location. The cost of an agent is the sum of the travel fee (distance to the facility) and the entrance fee of the facility. Each agent will use one facility at a minimum cost. In this paper, we make the assumption that facilities are homogeneous in ...
The recent survey by Chan et al. [8] depicts the state of the art. Here, we mention some of the models: obnoxious facility games, where every agent wants to stay away from the facility [11, 13]; heterogeneous facility games, where the acceptable set of facilities for each agent could be different [25, 17, 12, 16];
The seminal work of Procaccia et al.​ [23] initiates the study of approximate mechanism design without money for the facility location game. They study strategyproof mechanisms for one-facility and two-facility games through the lens of approximation ratio of the objective, which provides a way to quantify the fundamen...
C
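The cost model sketched above (travel fee plus entrance fee, each agent choosing the cheapest facility) can be written in a few lines; the 1D line metric and the example fees below are assumptions for illustration only, not any mechanism from the paper:

```python
def agent_cost(x, facilities):
    """Cost of an agent located at x on the real line: travel fee
    (distance) plus the entrance fee of the chosen facility; the
    agent uses the facility minimizing this sum. A sketch of the
    cost model only, not a strategyproof mechanism."""
    return min(abs(x - loc) + fee for loc, fee in facilities)

def social_cost(agents, facilities):
    """Utilitarian objective: sum of agent costs."""
    return sum(agent_cost(x, facilities) for x in agents)

facilities = [(0.0, 2.0), (10.0, 0.5)]  # (location, entrance fee)
# An agent at 5.5 travels farther to reach the cheaper facility:
# min(5.5 + 2.0, 4.5 + 0.5) = 5.0.
```

Note how entrance fees change the choice structure: an agent may bypass the nearest facility, which is exactly what distinguishes this model from the classical distance-only setting.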
It is clear that after the execution of the above procedure, the number of connected components of G(F*) has decreased by 1, since there is a path connecting v1 and v2...
Firstly we describe a polynomial-time reduction from an input instance of Connected cubic planar positive 1in3SAT, a formula F, to an input instance of Connected subcubic planar C4-free positive 1in3SAT, a formula F′...
The polynomial reduction given in [30] transforms connected instances of PLANAR POSITIVE 1in3SAT to connected instances of CUBIC PLANAR POSITIVE 1in3SAT. The NP-Completeness of this variant is guaranteed by the NP-Completeness of CONNECTED CUBIC PLANAR POSITIVE 1in3SAT and the correctness of this reduction.
The NP-Completeness proof of this more restricted variant can be found in [30], where a polynomial reduction is presented to transform an input instance F of PLANAR POSITIVE 1in3SAT to an instance F′ of CUBIC PLANAR POSITIVE 1in3SAT....
We prove this restricted variant of 1in3SAT is also NP-Complete. Let F be an input instance of PLANAR POSITIVE 1in3SAT; we will construct a positive formula F* in CNF, an input instance of CONNECTED PLANAR POSITIVE 1in3SAT, and show that F...
B
Embodying a suitable compromise between model expressiveness and mathematical tractability, polytopic linear systems represent widely employed modelling paradigms able to capture structural uncertainties and parameter-varying dynamics [43]. Their analysis and control design, however, are frequently complicated by the pres...
Once fixed feasible control inputs at the vertices of the invariant set have been computed, a variable structure controller either takes a convex combination of those values by exploiting the vertex reconstruction of any state belonging to such a set, or coincides with a purely linear gain stemming from a triangulatio...
We have considered the design of ReLU-based approximations of traditional controllers for polytopic systems, enabling implementation even on very fast embedded control systems. We have shown that our reliability certificate requires one to construct and solve an MILP offline, whose associated optimal value characterizes...
Since the existence of a polyhedral CLF for a polytopic system is a necessary and sufficient condition for its exponential stabilizability inside S [9, Prop. 7.39], and since Φ(·) as defined by any of the methods described in §2 makes Ψ(·) a...
Among the available approaches, the concept of control invariant set is one of the most exploited historically, since it ensures the existence of some feedback law able to steer the closed-loop trajectories of the uncertain system within a prescribed state set [25, 6, 8, 37]. This is traditionally achieved by associating...
D
All of these approaches require large amounts of data to train the complex encoder and decoder modules. In contrast, our approach does not rely on trainable encoder or decoder structures. Instead it combines neural implicit representations to model the scene appearance with the estimation of the parameters of a known, ...
Recently, neural implicit representations have gained popularity due to their theoretical elegance and performance in novel view synthesis. The idea is to use a neural network to parametrize a function that maps a spatial location to a spatial feature. For example occupancy values [32, 9, 39], or signed distance functi...
Several of the previously mentioned works model physical systems using Lagrangian or Hamiltonian energy formulations [31, 16, 11, 10, 53, 59, 29, 60], or other general physics models [27]. While these are elegant approaches that allow the model to adapt to different physical systems, they have two drawbacks. First, ...
We model the dynamics of the objects using an ordinary differential equation (ODE) and use implicit neural representations to model the appearance, where the static background and the planar dynamics allow us to model the appearance in 2D. Our objective is to estimate the unknown physical parameters, and the initial co...
In this work we presented a solution for identifying the parameters of a physical model from a video while also creating a photorealistic representation of the appearance of the scene objects. To this end, we proposed to combine neural implicit representations and neural ODEs in an analysis-by-synthesis fashion. Unlike...
A
The majority of QCN models optimize the quantum resource allocation and network overall performance by embedding classical data into quantum states that are shared over quantum channels between distant nodes [3, 4, 5, 6]. Additionally, numerous approaches have been proposed to develop resource-efficient QCNs, including...
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework, dubbed the framework. This framework draws upon recent advancements in two key quantum information science areas. First, it utilizes high-dimensional quantum information and to extract underlying structures of classi...
In contrast to conventional QCN frameworks, where the receiver typically aims to execute quantum gates and measurements for the precise reconstruction of the embedded data structure within quantum states, our framework distinguishes itself: here, the receiver’s goal is to draw specific logical conclusions [13], which m...
As discussed earlier, the QSC framework ensures minimality of quantum communication resources by extracting and compressing the semantic representations of the data, unlike existing semantic-agnostic QCNs. Moreover, to assess the accuracy of the QSC performance within the quantum semantics’ extraction, transmission, r...
As shown in Fig. 1, the constructed quantum semantic representations in the form of d2-dimensional quantum states must be transmitted through quantum channels to the receiving quantum node. The quantum communication process must preserve the accuracy of ...
A
That is, the defensive function can capture the set of possible sequences of defensive actions for a given observation sequence generated by the system. A defensive projection (footnote: for example, consider a set of observable events Eo = {a, b}...
Note that the E-verifier may generate some problematic states, i.e., for state (x_v, x_D) ∈ X_E ...
The defensive function proposed in this section can alter observable output events of the system G by deletions, insertions, or replacements. The problem of enforcing concealability of the system aims to determine whether the defensive function is C-enforcing, i.e., given constraints in terms of h...
The first condition requires that, as the interface of the system, the defensive function should be able to react to every observable sequence of events that can be generated by the system, such that defensive actions (deletions, insertions, or replacements) can be utilized.
From a practical viewpoint, there may exist some constraints in the use of the defensive function (e.g., the intruder may have direct access to a sensor that generates a certain label, therefore that label can never be replaced or inserted). To capture such constraints, we consider the following scenario in the remaind...
D
However, it is hard for these techniques to provide real-time solutions for dynamic mobile mmWave IAB networks. Indeed, the optimal solution derived under ideal link conditions remarkably underperforms when facing the stochastic on-off link behavior caused by mobile users and the varying signal attenuation due to mobil...
Once convergence is reached (e.g., after a maximum number of steps or when minimal NN weight updates are performed), the training procedure stops and distributed agents continue interacting with the environment based on local observations and fixed local policies. However, the training procedures can be re-acti...
In several cases, random link conditions can even eliminate all the advantages of a careful optimization. The network could, in principle, be re-optimized periodically or every time it undergoes a change. However, this can induce huge computational costs and is most likely not practical, because a non-negligible amount...
A heuristic algorithm is proposed in [saad2019millimeter] to perform link scheduling. This algorithm generates a sequence of link sets, each of which contains a group of links that can be simultaneously activated in a slot satisfying the SINR conditions required by the activated MCSs. Based on this algorithm, we periodically ...
However, it is hard for these techniques to provide real-time solutions for dynamic mobile mmWave IAB networks. Indeed, the optimal solution derived under ideal link conditions remarkably underperforms when facing the stochastic on-off link behavior caused by mobile users and the varying signal attenuation due to mobil...
B
This means that the marginal value of one additional sample is growing with sample size. If the cost of statistical evidence grows linearly with the sample size, this means that the agent with utility function min(L, R) would either (a) decline to run a trial or ...
To shed light on the case in which u(θ, L) is not linear in L, we make the following observation. The menu of all rescaled e-values is the largest menu resulting in an incentive-aligned statistical contract, by Proposition 1. It then foll...
To grant approval, the principal requires that the agent provide evidence for its product having sufficient quality; e.g., by conducting a randomized controlled trial (RCT) and testing a null hypothesis, H0 : θ ≤ θ0...
By formalizing the bet the researcher is already taking, the regulator establishes an economic basis for statistical inference. Betting scores have recently resurfaced as a distinct and powerful approach to statistical inference (see, e.g. Shafer, 2021; Ramdas et al., 2023). Our work builds on this line of work and con...
We thank Nivasini Ananthakrishnan, Jon McAuliffe and Aaditya Ramdas for helpful discussions. This work was supported in part by the Vannevar Bush Faculty Fellowship program under grant number N00014-21-1-2941 and in part by the European Union (ERC-2022-SYG-OCEAN-101071601).
D
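The betting-score view above can be illustrated with the simplest case: a likelihood-ratio e-value for a point null θ = θ0 against an alternative θ1, with Binomial data. This is a hedged sketch of the general idea (the composite null H0: θ ≤ θ0 would require a supremum over the null, omitted here; the function names are ours):

```python
import math

def evalue_bernoulli(successes, n, theta0, theta1):
    """Likelihood-ratio e-value (betting score) for the point null
    theta = theta0 against the alternative theta1, with Binomial(n,
    theta) data. Under the null its expectation is exactly 1, so
    large values are evidence against the null. Sketch only."""
    k, f = successes, n - successes
    return (theta1 / theta0) ** k * ((1 - theta1) / (1 - theta0)) ** f

# Sanity check of the fair-bet property: the null expectation is 1.
n, t0, t1 = 5, 0.3, 0.6
null_expectation = sum(
    math.comb(n, k) * t0**k * (1 - t0)**(n - k)
    * evalue_bernoulli(k, n, t0, t1)
    for k in range(n + 1)
)
```

The expectation-one property is exactly what makes an e-value a fair bet against the null: its realized value is the factor by which the researcher's stake multiplies.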
Effectively optimizing Equation (1) with MPC or FHE is particularly challenging, due to the computational bottleneck of these techniques when applied to large-dimensional objects [23, 11], notably affecting the computation time and the occupation of communication bandwidth between parties.
We test two different techniques: (i) Uniformly Random Selection (URS), proposed by [45, 30], in which a random subset of dimension l ≤ d of spatial coordinates is sampled at every iteration with uniform probabilities, p(x) = 1/d...
Image registration is a crucial task in medical imaging applications, allowing imaging features to be spatially aligned between two or multiple scans. Registration methods are today a central component of state-of-the-art methods for atlas-based segmentation [41, 14], morphological and functional analysis [17, 4], multi-mod...
Since the registration gradient is generally driven mainly by a fraction of the image content, such as the image boundaries in the case of SSD cost, a reasonable approximation of Equations (4) and (6) can be obtained by evaluating the cost only on relevant image locations. This idea has been introduced in medical image...
This work presents privacy-preserving image registration (PPIR), a new methodological framework allowing image registration under privacy constraints. To this end, we reformulate the image registration problem to integrate cryptographic tools, namely MPC or FHE, thus preserving the privacy of the image data. Due to the...
C
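The URS idea described above amounts to evaluating the SSD cost on a random coordinate subset; here is a minimal sketch (flat lists stand in for images, and the d/l rescaling is our choice to keep the estimate unbiased):

```python
import random

def ssd_subset(img_a, img_b, l):
    """Approximate the sum-of-squared-differences (SSD) cost using a
    uniformly random subset of l <= d coordinates, rescaled by d/l so
    the estimate is unbiased. Illustrative sketch: images are flat
    lists of intensities here for simplicity."""
    d = len(img_a)
    idx = random.sample(range(d), l)  # uniform subset, no replacement
    return (d / l) * sum((img_a[i] - img_b[i]) ** 2 for i in idx)

a = [1.0, 2.0, 3.0, 4.0]
b = [0.0, 2.0, 1.0, 4.0]
full_ssd = sum((x - y) ** 2 for x, y in zip(a, b))  # = 5.0
```

With l = d the subset estimate coincides with the full SSD; smaller l trades variance for the reduced per-iteration cost that matters under MPC/FHE.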
The initial learning rate is 0.1 (0.01 for MNIST), and we conduct a multi-step learning rate schedule which decreases the learning rate by a factor of 0.1 at the 116th and 233rd...
DCGAN consists of a generator realized by transposed convolution layers and a discriminator realized by ordinary convolution layers, which greatly reduces the number of network parameters and improves the image generation effect. As an extension of our method, we believe that generative models of different architectur...
Regardless of the method used, the essence of KD is to learn the mapping function of the teacher model from input to output, i.e., f_T. However, it is hard to deduce the mapping function from the existing parameters of the teacher model.
This verifies the effectiveness of using DCGAN as the emulator to learn the inverse mapping of the teacher function, and also proves that DCGAN can indeed alleviate the problem of mode collapse and generate images consistent with the distribution of real images. These synthetic images can not only effectively integrate...
The first two aim to guide the student to mimic the responses of the output layer or the feature maps of the hidden layers of the teacher, and the last approach uses the relationships between the teacher’s different layers to guide the training of the student model. Feature-based and relation-based methods [24, 57], d...
A
The second reason is efficient compression of smooth functions. It is known that for functions with m continuous derivatives, the n-th coefficient is O(n^{-m}) for both Chebyshev [MH02, Theorem 5.14] and ...
First, the output of the neural operator is a neural network, i.e., essentially a black-box function. In classical numerical methods such as FEM [Cia02], spectral methods [Boy01] and others, the parametrization allows for extracting bounds on the function, its derivatives, and any other local or global information in a co...
All these properties combined make trigonometric and Chebyshev polynomials especially suitable for PDE solutions by spectral methods [Boy01] and adaptive lossless computations with functions [Tre07]. The latter goal is fully realized in the Chebfun software (https://www.chebfun.org). Chebfun demonstrates that computati...
Our solution to both of the outlined problems is to detach the process of function approximation (interpolation) from the mapping between functional spaces, and to explicitly fix the highest possible resolution. To do that, for both the domain and codomain of the neural operator we consider functions represented by finite series of t...
It is important to understand that Equation 3 implies that SNO only realizes a mapping between two functions given by truncated series. If one needs to compute these functions on a finer grid, Chebyshev or trigonometric interpolation should be used. In the same vein, Chebyshev/trigonometric smoothing (the chopping of the ser...
B
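The O(n^{-m}) coefficient-decay claim above is easy to check numerically; here is a small plain-Python sketch (Gauss–Chebyshev quadrature, not the paper's code) computing Chebyshev coefficients of exp on [-1, 1]:

```python
import math

def cheb_coeffs(f, n):
    """Chebyshev coefficients of f on [-1, 1] via Gauss-Chebyshev
    quadrature at n nodes: a_k = (2/n) * sum_j f(x_j) * T_k(x_j)
    with x_j = cos(pi*(j+0.5)/n), so T_k(x_j) = cos(k*theta_j).
    A standard discrete approximation, for illustration."""
    thetas = [math.pi * (j + 0.5) / n for j in range(n)]
    vals = [f(math.cos(t)) for t in thetas]
    return [(2.0 / n) * sum(v * math.cos(k * t)
                            for v, t in zip(vals, thetas))
            for k in range(n)]

# For an analytic function such as exp the coefficients decay
# geometrically, which is what makes a truncated Chebyshev series
# such a compact representation.
c = cheb_coeffs(math.exp, 16)
```

For exp the k-th coefficient equals 2·I_k(1) (a modified Bessel function), so by k = 10 it is already below 1e-8, consistent with the geometric decay the text relies on.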
Table 3: Comparing the semantic segmentation results (in mIoU%) of different methods on Pascal VOC. We can observe that our method surpasses all previous baselines by a significant margin. Specifically, on the popular compact architecture MobileNetV2, our method improves the student by 5.39% compared to the stand-alon...
We report the result of our method in Table A.2.2 (Hyperparameters) on CIFAR-100 with the L_KL loss to compare with the baselines under the same settings. Our method with the KD loss surpasses all the baselines again.
We evaluate the effectiveness of our method on two popular computer vision tasks, image classification and semantic segmentation. On the ImageNet classification dataset, the tiny ResNet18 student can be boosted from 70.04% to 72.41% in terms of the top-1 accuracy, and surpasses the state-of-the-art knowledge distillati...
Since KR is the only baseline that provides code for the COCO dataset, we compare our method only to KR in this case. We reproduce the baseline using the official code with the same training procedure. Our method surpasses the baseline by nearly 2%, which further demonstrates the effectiveness of our approach.
The results when distilling to ResNet20 are interesting. In this case, using a less powerful teacher, ResNet56, results in better student performance on average compared to using ResNet110. In particular, directly distilling the feature in a one-to-one fashion deteriorates the student’s performance compared to vanilla tra...
The final experiment tests the effect of the number of perspectives on the accuracy of the algorithm. We generate data as follows: we uniformly distribute 1000 points in a solid 3D ball, and randomly select several perspectives. We label each point in each perspective according to which side of the persp...
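The generation protocol above can be sketched as follows (the seed and the number of perspectives are illustrative choices on our part; each perspective is taken as a random plane through the origin):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 points uniform in the solid 3D unit ball, via rejection sampling.
cube = rng.uniform(-1.0, 1.0, size=(4000, 3))
points = cube[np.linalg.norm(cube, axis=1) <= 1.0][:1000]

# Each "perspective" is a random plane through the origin; a point's label
# in that perspective is the side of the plane it falls on.
normals = rng.normal(size=(4, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
labels = (points @ normals.T > 0).astype(int)   # shape (1000, 4)
```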
Although we cannot directly quantitatively compare to t-SNE (or other similar dimension reduction embeddings), we can measure how similar a set of projections are. It is known that for many comparative tasks it is desirable to have as little change as possible while still being faithful to the data. This notion is ofte...
Naturally, there are many limitations to ENS-t-SNE. Here we consider only a partial list, starting with scalability. It is known that t-SNE is computationally expensive and in this prototype we have not yet considered applying ideas for speeding it up, such as those in [36, 33].
While MPSE [19] focuses on simultaneously capturing global distances between objects and ENS-t-SNE aims to capture local neighborhoods, other approaches for dimension reduction, such as UMAP [29], optimize both at the same time. It would be worthwhile to quantitatively verify the extent to which these goals can be real...
In the ENS-t-SNE embedding, each point belongs to two clusters; one for its species and one for its sex. In an interactive environment, one can follow a datapoint from one projection to the other. In other words, there is a transition between the two views in three dimensions that is missing when using small multiples....
An Overview of Embedding Learning. We now summarize the learning procedure of the embedding. First, we estimate the density mappings defined in (3.6) and (3.7) under the true parameter $\theta^*$ based on interaction history. Second, we estimate the Be...
We analyze the sample efficiency of ETC under the future and past sufficiency assumptions. In particular, such assumptions ensure that the future and past observations are sufficient for identifying the belief state, which captures the information-theoretic difficulty of POMDPs. We prove that ETC attains an $O(1/\epsilon^{2})$...
In the sequel, we describe the procedure of ETC. In summary, ETC iteratively (i) interacts with the environment to collect observations, (ii) fits the density mappings defined in (3.6) and (3.7), respectively, to the observations, (iii) identifies a confidence set of parameters by fitting the Bellman equations according to ...
In this paper, we propose Embed to Control (ETC) as a unified framework for embedding and control in POMDPs. In particular, by exploiting the low-rank transition and the future sufficiency condition, we decompose the embedding learning into the learning of Bellman operators across multiple steps. By assembling the Bell...
$$\mu_{h}(S_{h},\Gamma_{h-1})\coloneqq\frac{\mathcal{P}^{\pi}_{h}(S_{h},\Gamma_{h-1})}{\mathcal{P}^{b}_{h}(S_{h},\Gamma_{h-1})}.$$
However, in many real-world applications, due to certain privacy concerns or limitations of the sensor apparatus, the states of the environment cannot be directly stored in the offline datasets. Instead, only partial observations generated from the states of the environments are stored (Dulac-Arnold et al., 2021). For ...
Now given Assumption 3.1 and Assumption 3.5 on the existence of proxy variables and bridge functions, we are ready to present the main identification result. It represents the true policy value $J(\pi)$ via the value bridge functions (3.1),
From a theoretical perspective, the identification result and the backward induction property of the bridge functions provide a way of decomposing the suboptimality of the learned policy in terms of statistical errors of the bridge functions. When combined with the pessimism and the fast statistical rates enjoyed by an...
The existence of such bridge functions is justified, e.g., by conditions on the rank of certain conditional probabilities or the singular values of certain conditional expectation linear operators. We present the following examples to explain the existence in the tabular case with reactive policies.
First, they studied unconstrained regression problems with objectives of the form $F(\bm{x}^{T}\xi)$, resulting in objective Hessians that admit rank-one updates, which cannot be employed for our general problem ...
To our knowledge, this is the first work that performs online inference by taking into account not only the randomness of samples but also the randomness of computation (i.e., sketching and stepsize); the latter is particularly important for making second-order methods computationally promising.
In particular, the StoSQP method computes a stochastic Newton direction in each iteration by solving a quadratic program, whose objective model is estimated using the new sample. Then, the method selects a proper stepsize to achieve a sufficient reduction on the merit function, which balances the optimality and feasibi...
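The quadratic subproblem in each such iteration can be illustrated in its generic form (our sketch with toy data, not the paper's AI-StoSQP): minimize $\frac{1}{2}d^{T}Bd+g^{T}d$ subject to $Ad+c=0$, solved through the KKT system.

```python
import numpy as np

def sqp_step(B, g, A, c):
    """Solve one equality-constrained QP via its KKT system."""
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c]))
    return sol[:n], sol[n:]      # Newton direction d, Lagrange multipliers

# Toy instance: minimize 0.5*||d||^2 + d_1 subject to d_1 + d_2 = 0.
B = np.eye(2)
g = np.array([1.0, 0.0])
A = np.array([[1.0, 1.0]])
c = np.array([0.0])
d, lam = sqp_step(B, g, A, c)    # d = (-0.5, 0.5)
```

In a StoSQP scheme, $B$ and $g$ would be stochastic estimates built from the new sample; here they are fixed only to make the subproblem concrete.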
The asymptotics of second-order Newton’s methods for unconstrained problems have recently been investigated. Bercu2020Efficient designed an online Newton’s method for logistic regression, and Boyer2023asymptotic generalized that method to general regression problems. Compared to first-order methods that often consider...
In this paper, we answer this question by complementing the global convergence guarantees and establishing the local asymptotic properties of existing StoSQP methods. Specifically, we focus on an Adaptive Inexact StoSQP scheme, referred to as AI-StoSQP. By adaptive we mean that the scheme inherits the critical merit of...
This condition was discussed by Bercovier and Pironneau in [2], where it turned out to be an enabler of (3), see [15]. We will refer to this inf-sup condition as the discrete BP condition. In [2] the proof was given for $k=2$ and for meshes made of rectangles for $d=2$ and bricks for $d=3$...
The rest of the paper is organized as follows. In Section 2 the technique of T𝑇Titalic_T-coercivity is discussed, which provides important auxiliary results for Section 3, which is the main section of the paper and contains the analysis of the discrete inf-sup conditions. In Appendix A known results on the continuous...
Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family $Q_{k}$–$Q_{k-1}$ on quadrilateral/hexahedral meshes focused on the case...
The LBB condition is essential for the well-posedness of the Stokes problem. Its verification for the case (a) is typically done by an indirect proof using a compactness argument. We are aware of only one reference, namely [1], which contains a proof for the case $|\Gamma_{N}|>0$...
A popular class of finite element methods for discretizing the Stokes problem is the generalized Taylor-Hood family $P_{k}$–$P_{k-1}$ on triangular/tetrahedral meshes with cont...
Token mixers that replace the self-attention in transformers with fixed token-mixing mechanisms, such as the Fourier transform [47, 48], achieve comparable generalization with lower computational requirements on NLP tasks [49]. Other token-mixing architectures [50] have also been proposed that use standard neural comp...
As shown in Figure 1 (c), the design of the WaveMix block is such that it does not collapse the spatial resolution of the feature maps, unlike CNN blocks that use pooling operations [9]. And yet, it reduces the number of computations required by reducing the spatial dimensions of the feature maps using 2D-DWT, which tr...
Due to the properties of wavelet transform mentioned in the previous section – shift-invariance, multi-resolution analysis, edge-detection, local operations, and energy compaction – we propose using it in neural network architectures for computer vision as token mixers, feature reorganizers, and spatial compactors wit...
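The spatial-compaction property can be illustrated with a single-level 2D Haar DWT implemented in plain NumPy (our minimal sketch, not the WaveMix code; WaveMix uses multi-level 2D-DWT within a larger block):

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT: (H, W) -> four (H/2, W/2) sub-bands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
bands = haar_dwt2(img)
```

Stacking the four sub-bands along the channel dimension quarters the number of spatial positions while keeping all the information, which is what makes subsequent mixing layers cheaper.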
Natural images have a number of priors that are not comprehensively exploited in any single type of neural network architecture. For instance, (1) convolutional neural networks (CNNs) only model shift-invariance using convolutional design elements [5, 6, 7, 8, 9, 10, 11, 12], (2) vision transformers (ViT) model long-ra...
We relate WaveMix to previous works in Section 2, where we delve further into the image priors modeled by various classes of neural architectures for vision, and the use of wavelet transform. Our key innovations – the WaveMix blocks, use of multi-level 2D-DWT in each block, channel mixing, and the preservation of featu...
$\cos(\chi)\updelta x+\sin(\chi)\updelta y$, $-\sin(\chi)\updelta x+\cos(\chi)\updelta y$,
$\updelta z$ and $\updelta\beta$ (or $\updelta\mu$) respectively. When using the flat outputs $x,y,z,F$, $I_{4}$ is no
controls $\hat{\delta}_{l}+\updelta\delta_{l}$, $\hat{\delta}_{m}+\updelta\delta_{m}$...
with $1\leq i\leq 4$, and $1\leq j\leq 5$ for $i=2$ or $i=3$, and $1\leq j\leq 3$ for $i=1$ or $i=4$. The value of $\updelta F$, $\updelta\delta_{l}$...
Formal automated theorem proving in logic is among the most advanced and abstract forms of reasoning materialised in the AI space. Future models capable of harnessing powerful internal language representation and abstract reasoning models, able to handle informal language outside of heavily curated formal environments ...
supervised translation. They compare this with unsupervised translation, where models attempt to find a mapping between two languages without any initial alignment data. Supervised RNN-based neural machine translation luong2017neural actually outperforms the later transformer-based lample2018phrase and MLM pre-trained ...
Need for developing robust interactive Natural Language theorem provers. Efficient guided exploration of large mathematical state spaces has no direct equivalent for mathematical natural language, and even pure mathematical reasoning without language is benefiting from approximate methods without using formal theorem p...
Autoformalisation through direct machine translation is difficult, and perhaps should be tackled with approximation. A long-studied and extremely challenging endeavour (zinn1999understanding; zinn2003computational), autoformalisation involves developing automatic methods for converting informal mathematical text into la...
In that case, $S=\begin{pmatrix}1&1&1\\1&0&1\end{pmatrix}$ and the number of parame...
However, assuming that the blocks are represented in the same proportions in each network is a strong assumption that may lead to the model being of little practical use. In food webs, the proportion of species at a given trophic level may differ between networks that nevertheless share the same structure, for example...
A small sub-collection of 6 networks with density ranging from .06 to .11. All networks are represented in 5 or 6 of the 7 blocks, including the first three blocks. The sub-collection consists of 3 of the 5 networks of dataset 48, the separation being based on the collecting s...
Finally, let us consider two networks with partially overlapping structures. The two networks share block 1 (for instance basal species) but the remaining nodes of each network cannot be considered as equivalent in terms of connectivity. One may think of species belonging to trophic chains with different connectivi...
Now imagine two networks with nested structures. Blocks 1 and 3 are represented in the two networks while block 2 only exists in network 1. In this illustration, block 2 may refer to a block of parasites, which are not always included in food webs (Lafferty et al., 2008).
This is based on the observation of the learning curves of the three mechanisms (in Fig. 12 in Appendix A). Fast learning on translation and scaling and a slow one on rotation can be noticed for all models, which indicates that CNN models have greater difficulty learning the mechanism of rotation.
Vanilla CNN performs relatively better in translation learning, because the position of $X_{t}$ (the original images in this case) is always in the center and independent of $U$. However, while being able to estimate rotation angles accurately...
Considering the translation-equivariance property of CNNs, positional information can be encoded and operated on by a CNN at higher efficiency. An extensive investigation into other inductive biases is necessary for a more solid claim to be made in the future.
Figure 4: The models in knowledge learning. (a) FactorNet: $\mathbf{x}_{t}$ and $\mathbf{x}_{t+1}$ are concatenated in the channel dimension before being fed into the CNN; (b) Siamese Ne...
Owing to its importance in several real-world problems (e.g., recommendation), developing specialized learning algorithms for the PU setting has received renewed impetus in the machine learning community. Most of the recent research in this area can be broadly categorized into two major classes of algorithms, based on how t...
Contrastive learning with supervision. Supervised Contrastive Learning (SCL) (Khosla et al., 2020; Zhong et al., 2021; Graf et al., 2021; Assran et al., 2020) is a supervised variant of infoNCE that considers multiple positive pairs from other samples belonging to the same class as the anchor, in addition to the aug...
Contrastive Loss.  Self-supervised learning has demonstrated superior performances over supervised methods on various benchmarks. Joint-embedding methods (Chen et al., 2020b; Grill et al., 2020; Zbontar et al., 2021; Caron et al., 2021) are one of the most promising approaches for self-supervised representation lear...
Owing to its recent widespread success in both computer vision and natural language processing tasks (Chen et al., 2020c; Grill et al., 2020; Radford et al., 2021; Gao et al., 2021; Dai and Le, 2015; Radford et al., 2018), self-supervised pretraining followed by supervised finetuning has become the de-facto ...
There have been a wide range of approaches to generalize the SBM to multilayer networks. In Valles-Catala et al. (2016) a multilayer SBM is developed by fitting a new SBM to each layer, assuming that neither node-membership nor group-to-group connectivity is fixed across layers. Stanley et al. (2016) develop a related...
In Wang and Zeng (2019), the authors propose using a Tucker decomposition as a multilayer SBM, but limit their factor matrices to only take on binary values. Thus, the extent to which layer dependence is addressed is limited to the binary clustering of layers and is more similar to the strata work of Stanley et al. (2...
We build upon these motivations from previous work (Schein et al., 2016; De Domenico et al., 2015; Stanley et al., 2016; De Domenico and Biamonte, 2016; De Bacco et al., 2017; Kao and Porter, 2018) and develop the NNTuck as a natural way to identify a latent space in the dimension of the layers. Analogous to how the fa...
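The tensor view behind such a layer factor can be sketched as a generic Tucker-style construction (dimensions and random factors are made up for illustration; this is not the NNTuck code):

```python
import numpy as np

n, L, k, c = 6, 3, 2, 2              # nodes, layers, node groups, layer groups
rng = np.random.default_rng(1)
U = rng.random((n, k))               # node-by-group memberships
W = rng.random((L, c))               # layer-by-layer-group factor (the "layer latent space")
G = rng.random((k, k, c))            # core: group-to-group rates per layer group

# Expected adjacency tensor: Lam[i, j, m] = sum_{a,b,q} U[i,a] U[j,b] W[m,q] G[a,b,q]
Lam = np.einsum('ia,jb,mq,abq->ijm', U, U, W, G)
```

Rows of $W$ that are (nearly) identical correspond to layers that share community structure, which is one way a layer-dimension latent space expresses dependence between layers.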
Recent work has productively cast the study of multilayer community structure in the language of multilinear algebra (Wu et al., 2019), furnishing tensor-based definitions of multilayer stochastic block models (SBMs) (Schein et al., 2016; De Bacco et al., 2017; Gauvin et al., 2014; Carlen et al., 2022; Tarrés-Deulofeu...
In Carlen et al. (2022) and De Bacco et al. (2017), a node’s membership vectors are held fixed across layers, but a new affinity matrix is fit for each layer. A similar model is proposed in Paul and Chen (2016) but with node membership vectors constrained to take on binary values and with a Bernoulli distribution assum...
For questions with small subgraphs, in which entities do not have sufficient contexts, improvements are observed (e.g., +11.5% on MetaQA 1-hop and +5.5% on PathQuestion 2-hop). In Q&G-Encoders (Linear), based on G-Encoder (Linear), we further replace the LSTMs in the question encoder with feedforward neural netwo...
The model consists of a question encoder and a graph encoder that compute semantic representations (embeddings) of the question and subgraph entities, respectively, through several layers of encoding. We select answers from subgraph entities according to their distances to the question in the output embedding space and...
Given a question, if an entity is the correct answer, the reasoning chain of this question would be part of the KG context (i.e., the neighboring triples) of that entity. For example, considering the above question, the reasoning chain is part of the context of Jess Talamantes within three hops in the KG.
It can be observed that initially (i.e., in Layer-1) the correct answer is not at all close to the question in the embedding space. However, after more layers of encoding, more information consistent with the given question is passed to Regensburg by the graph encoder, and the question embedding is accordingly transfor...
For this question, three GCN layers are used in the graph encoder to encode entities in the subgraph that covers all paths of length one, two, and three starting from the topic entity Isabella of Portugal. The question is accordingly encoded by the question encoder with three layers of LSTMs.
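One propagation layer of the kind such graph encoders stack can be sketched as follows (a generic mean-aggregation GCN layer on a toy path graph; names, weights, and the aggregation rule are our illustrative choices, not the paper's exact model):

```python
import numpy as np

def gcn_layer(A, H, W):
    """H' = ReLU(D^{-1} (A + I) H W): mean-aggregate neighbors (with
    self-loops), then transform and apply a nonlinearity."""
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])        # 3-entity path graph
H = np.eye(3)                       # one-hot initial entity features
W = np.ones((3, 2))                 # toy weight matrix
H1 = gcn_layer(A, H, W)
```

Stacking three such layers lets information flow along paths of length up to three from the topic entity, matching the subgraph coverage described above.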
A detailed analysis of the gate matrices in Figure 3 shows that the probability of measuring a state $\lvert 1\rangle$ in the sum qubit $q_{2}$ is given by $\sin^{2}(\theta)\cos^{2}(\varphi)+\cos^{2}(\theta)\sin^{2}(\varphi)$...
This is the case when the input rotations are $X$-rotations (nielsen2002quantum, §1.3.1). Using other fractional rotations as generators gives combinations with different algebraic properties, investigated more thoroughly in a mathematical paper by Widdows
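The stated probability can be checked numerically with a small statevector simulation (our own toy setup: $R_x(2\theta)$ and $R_x(2\varphi)$ on the two inputs, then two CNOTs computing the XOR onto the sum qubit; this is an illustrative reading of the circuit, not a copy of Figure 3):

```python
import numpy as np

def rx(a):
    """Single-qubit X-rotation matrix R_x(a)."""
    return np.array([[np.cos(a / 2), -1j * np.sin(a / 2)],
                     [-1j * np.sin(a / 2), np.cos(a / 2)]])

def cnot(control, target, n=3):
    """CNOT as a permutation matrix on n qubits (q0 is the most significant bit)."""
    U = np.zeros((2 ** n, 2 ** n))
    for b in range(2 ** n):
        bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[int("".join(map(str, bits)), 2), b] = 1.0
    return U

theta, phi = 0.7, 1.1
I2 = np.eye(2)
psi = np.zeros(8, dtype=complex)
psi[0] = 1.0
psi = np.kron(np.kron(rx(2 * theta), I2), I2) @ psi   # Rx(2*theta) on q0
psi = np.kron(np.kron(I2, rx(2 * phi)), I2) @ psi     # Rx(2*phi) on q1
psi = cnot(1, 2) @ (cnot(0, 2) @ psi)                 # q2 <- q0 XOR q1
p_one = float(sum(abs(psi[b]) ** 2 for b in range(8) if b & 1))
predicted = np.sin(theta) ** 2 * np.cos(phi) ** 2 + np.cos(theta) ** 2 * np.sin(phi) ** 2
```

The agreement reflects that the sum qubit reads out the XOR of the two inputs, whose marginal one-probabilities are $\sin^2\theta$ and $\sin^2\varphi$.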
and to avoid redundancy and get more information out of each dimension, more compact distributional vector embeddings are often preferred. The work by Alexander and Widdows alexander2022quantum details the method used to classify words from their vector embeddings using a quantum support vector machine (QSVM), and dem...
When we say “I learned C++”, this indicates that a skill was acquired — so the verb learned in this case takes technologies as input and creates skills as output. It is easy to write down matrices that perform these operations (Table 3, right hand side).
This part of the process was run classically as a preprocessing step. The generated vectors were then used as input parameters for a feature map quantum circuit. Measurement of the feature map yields a value representing the relationship between two word vectors, which is stored in a kernel matrix (havlicek2019supervis...
Let $\mathcal{G}=(V,E,\bm{A},\bm{X})$ be an undirected graph with $N$ nodes, where $V$, $E$, and $\bm{A}\in\{0,1\}^{N\times N}$...
Pei et al. (2020) first drew attention to the limitation of GNNs on less-homophilic graphs. Since then, various GNNs have been proposed to improve performance on these graphs. H2GCN (Zhu et al. 2020) shows that proper utilization of ego-embedding, higher-order neighbourhoods, and intermediate embeddings can improve res...
Vandergheynst 2016), and GAT (Velickovic et al. 2018) are lacking the ability to work with less-homophilic graphs, they still stand out in several nice properties such as efficiency (Zeng et al. 2020), simplicity (Wu et al. 2019), and explainability (Ying et al. 2019). Our work aims to develop a graph restructuring met...
It is known that the Laplacian spectrum and spectral basis carry important information on the connectivity of a graph (Shuman et al. 2013). Lower frequencies correspond to global and smooth information on a graph, while higher frequencies correspond to local information, discontinuities and possible noise (Shuman et al...
We propose an approach to enhance GNN performance on less-homophilic graphs by restructuring the graph to maximize homophily. Our method is inspired and closely related to Spectral Clustering (SC). It extends SC beyond the leading eigenvalues and learns the frequencies that are best suited to cluster a graph. To achie...
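The link between the Laplacian spectrum and cluster structure can be seen on a toy graph (our illustration, not the paper's method): the low-frequency Fiedler eigenvector of two loosely connected cliques splits them cleanly.

```python
import numpy as np

# Two 3-cliques joined by a single bridging edge.
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0)
A[2, 3] = A[3, 2] = 1.0

L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian
w, v = np.linalg.eigh(L)              # eigenvalues ascending
fiedler = v[:, 1]                     # lowest nonzero frequency
clusters = (fiedler > 0).astype(int)  # sign pattern recovers the two cliques
```

Higher-frequency eigenvectors, by contrast, oscillate within the cliques, which is the sense in which they carry local detail and noise.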
The connection between stability and the total risk function is significant in at least two ways: first, it means that under general classes of myopic and self-interested behaviors on the part of subpopulations and learners, the total risk is driven to at least a local minimum.
For allocation dynamics like multiplicative weights, such configurations are clearly equilibria for any parameter choice ΘΘ\Thetaroman_Θ on the part of the learners. We thus consider the set of possible segmented equilibria and characterize which are asymptotically stable.
Second, it is a technically useful connection that will enable us to characterize and classify the stable equilibria for dynamics which are risk minimizing in the limit. We remark that Theorem 4.3 leaves open the question of stability for equilibria which are non-isolated minima of the total risk function.
For any learners and subpopulations who are risk minimizing in the limit, an equilibrium $(\alpha^{\mathsf{eq}},\Theta^{\mathsf{eq}})$ is asympt...
Figure 3: A summary of our main results on equilibria classification for a given participation $\alpha_{0}$ and model parameters $\Theta_{0}$. These results hold for dynamics which are risk minimizing i...
In the second set of experiments, reported in Section 9.1.2, we demonstrate possible uses and outcomes of the calculation of $\texttt{mindisc}_{\beta}$ in various applications. We study several classifiers for which we only have...
In the case of binary classification, this describes two linear constraints for each $g\in\mathcal{G}$. However, since $\mathbf{A}\in\mathcal{M}_{\Delta}$ and $\bm{\pi}_{g},\hat{\mathbf{p}}_{g}\in\Delta$...
In this section, we discuss and demonstrate two phenomena that can arise when minimizing the discrepancy for binary classification over distributions in which all or almost all of the probability mass is concentrated on only two sub-populations. These phenomena can sometimes limit the usefulness of lower-bounding the d...
We discuss related work in Section 2. The setting and notation are defined in Section 3. We extend the equalized odds criterion to multiclass classifiers in Section 4. In Section 5, we discuss lower-bounding the error using label proportions when the classifier is known to be fair. In Section 6, we discuss possible way...
The experiments in Section 9.1.1 and Section 9.1.2 focus on classification problems in which there are many sub-populations with non-negligible probability mass. In these cases, the derived bounds are usually relatively tight. In contrast, in Section 9.1.3, we discuss and demonstrate phenomena that can occur when mini...
Data Sets: We collected six challenging real-life data sets and generated 25 semi-synthetic ones. For four out of these, the literature describes the existence of motifs, though without actually annotating them; see Table 2 for an overview. Muscle Activation was collected from in-line speed skating (Mörchen and Ultsch, 2007)...
Finally (Right): We introduce elbow plots for a guided extraction of meaningful motif set sizes. Here, rapid changes in similarity when increasing k represent a characteristic change from one motif to another. Overall, we will show that these improvements reduce the runtime and human efforts to find motif sets consider...
ECG Heartbeats contains a patient's (ID 71) heartbeat from the LTAF database (Petrutiu et al., 2007). It contains 3,000 measurements at 128 Hz, equal to ~23 s. The heartbeat rate is 60 to 80 bpm. Known motifs are a calibration signal...
Figure 7. The Area Under the EF (AU_EF) captures the frequency of approximate repeats. The minima roughly capture known motifs in the two datasets, corresponding to the sleep spindles and K-Complex in sleep data (left) and heartbeats at a rate of 60 to 80 bpm in ECG data (right).
Ice Ice Baby by Vanilla Ice: This song contains one famous motif set with 20 repetitions, roughly 4 s long, from the introductory example. Learning-k (Section 5) took 3.4 s. Given these silver-standard parameters, all competitor methods find this riff, but with up to twice as large an extent. $k$-Motif...
Chen et al. [35] proposed a saturated innovation update algorithm for the decentralized estimation under sensor attacks, where the interagent communication is noiseless. They proved that if the communication graph is undirected and fixed, the nodes are locally observable, and the number of attacked nodes is less than h...
Wang et al. [38] investigated a consensus plus innovation based decentralized linear regression algorithm over random networks with random regression matrices. They proved that if the regression matrices and communication graphs satisfy the stochastic spatio-temporal persistence of excitation condition, properly choosi...
To overcome the difficulties mentioned above, we develop the nonnegative supermartingale inequality of the estimation error, and further establish the sample path spatio-temporal persistence of excitation condition by combining information of the regression matrices, graphs and algorithm gains, under which sufficient c...
At each time step, every node runs an online estimation algorithm consisting of an innovation term processing its own new measurement, a consensus term taking a weighted sum of estimations of its own and its neighbors with additive and multiplicative communication noises and a regularization term preventing over-fittin...
Numerical results confirm that using approximate calculations to solve the least-squares problem for AA saves computational time without sacrificing convergence to a desired accuracy. The proposed method has appealing properties for HPC since it reduces computational requirements, inter-process communications, and stor...
In this section we present numerical results to illustrate that AA with approximate calculations converges when the accuracy of approximate calculations is guided by the proposed heuristics. We applied the proposed solvers to example fixed-point problems, including a variety of linear systems considered in Section 5.1...
We provide rigorous theoretical bounds for AAR on linear fixed-point iterations that establish general guidelines to reduce the accuracy of the calculations performed by AAR while still ensuring that the final residual of the fixed-point scheme drops below a user-defined convergence threshold.
In Section 5, we illustrate the convergence property of AAR on linear fixed-point problems and demonstrate the computational advantage of the Reduced AAR solvers on examples including both linear and non-linear fixed-point problems. Even though the theoretical results are not directly applicable to the non-linear fixed...
The remainder of the paper is organized as follows. In Section 2.2 we briefly recall the stationary Richardson’s method that solves linear systems by recasting them as linear fixed-point iterations. In Section 3 we provide rigorous error bounds that allow approximate calculations of AA for linear fixed-point iterations...
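For reference, the basic Anderson-accelerated fixed-point loop that such solvers build on can be sketched as follows (a generic depth-$m$ implementation with exact least-squares solves, not the Reduced AAR solver with approximate calculations):

```python
import numpy as np

def anderson(g, x0, m=5, iters=50, tol=1e-10):
    """Minimal Anderson acceleration for the fixed-point problem x = g(x)."""
    x = x0.copy()
    X, F = [], []                         # histories of iterates and residuals
    for _ in range(iters):
        gx = g(x)
        f = gx - x                        # fixed-point residual
        if np.linalg.norm(f) < tol:
            break
        X.append(gx)
        F.append(f)
        if len(F) > m:                    # keep a window of depth m
            X.pop(0)
            F.pop(0)
        if len(F) == 1:
            x = gx                        # plain fixed-point step
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]   # the AA least-squares problem
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            x = gx - dX @ gamma           # mixed update
    return x
```

The `lstsq` call is exactly the least-squares problem whose approximate solution is studied above; replacing it with a cheaper, inexact solve is the source of the reported savings.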
Given the set of representative words for each topic, a document, and the desired topic, the tagging mechanism works as follows. All the words of the input document are lemmatized to their roots. Then, we identify the common words between the existing lemmatized tokens and the representative words for the desired topic...
For the tagging-based method, all the words of the input document are lemmatized to their roots using NLTK [32]. Then, we tag the common words between the existing lemmatized tokens and the representative words for the desired topic, based on the top-N=100 most representative terms for each topic.
Going one step further, we propose another method for controlling the output of a model based on tagging the most representative terms for each thematic category using a special control token. The proposed method assumes the existence of a set of the most representative terms for each topic. Then, given a document and...
For example, suppose that we pre-process the sentence below, as a part of an input document, from which we aim to guide the generation towards the topic “Business & Finance”. Following the aforementioned procedure, we will enclose with the special token [TAG], the words “businesses”, “billion” and “tax” since they belo...
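The tagging step above can be sketched as follows. This is a toy illustration: the tiny lemma dictionary stands in for NLTK's lemmatizer, and the helper names are our own.

```python
# Toy lemma map standing in for NLTK's WordNetLemmatizer (an assumption).
TOY_LEMMAS = {"businesses": "business"}

def lemma(word):
    return TOY_LEMMAS.get(word.lower(), word.lower())

def tag_document(text, topic_terms, tag="[TAG]"):
    """Enclose every word whose lemma matches a representative term of the
    desired topic with a special control token."""
    topic = {lemma(t) for t in topic_terms}
    out = []
    for word in text.split():
        core = word.strip(".,;:!?")  # keep trailing punctuation outside the tag
        if core and lemma(core) in topic:
            out.append(word.replace(core, f"{tag} {core} {tag}"))
        else:
            out.append(word)
    return " ".join(out)
```

For the example sentence above, the words "businesses", "billion", and "tax" would be enclosed with the [TAG] token.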
D
Figure 2: Quantum circuits are built from cells which are designed specifically for the underlying hardware architecture. In this example, the hardware is organised as a 3D lattice, and the standard cell (blue) might be the one illustrated in Fig. 1.
There exist multiple applications of standard cells and tiling. First, tiling quantum circuits can inform the co-design of computing architectures, where the qubit layout, for example, is developed in parallel to the circuits to execute. Such 2D and 3D architectural co-design can be implemented with neutral atoms [10]...
Our cells are compatible, at the same time, with a square 2D and a cubic 3D arrangement of qubits. The three qubits of the Toffoli gate make it almost a perfect candidate for nearest neighbour 2D and 3D interactions. The AND gate from [30] is compatible only with 2D without requiring any SWAP gates. The Toffoli gate fr...
Neutral-atom quantum computers have demonstrated the ability to operate on thousands of qubits (e.g. [23]). These qubits are typically arranged in planar 2D layouts, while 3D configurations are feasible but more challenging to engineer. Notably, neutral-atom systems enable a technique called qubit shuttling which allo...
Standard cell design approaches are suitable for the design of near-optimal quantum computing systems within reasonable design time constraints. Our work focuses on 3D lattices of qubits. Such lattices can be easily implemented by neutral atom computers, as discussed in the following section.
C
The parameters stacked between the layers are shaped to match the size of the layer they are stacked onto. The reason we pass the parameters in each block is that if we only passed them in the first block, it would be difficult for the later blocks to retain them. This problem is somewhat similar to the degradation prob...
After training the autoencoder, the outputs of 8 encoding layers are used as input image features for the later part of our network. The encoding layers transform the input image into a low-dimensional vector space. This compression removes redundancy in the input. Providing these low-dimensional representations of the i...
In this model, the parameters of the input image are not fixed. Hence, we also pass the input image parameters along with the output image parameters. This model is more generalizable than the previous one but also has a more complex non-linearity to learn and hence, lags behind in performance compared to default-to-p...
Our method consists of two technical modules: 1) an image reconstruction auto-encoder [14] to extract image features of the input image; and 2) a coarse-to-fine fully convolutional network which utilizes the image features from the auto-encoder and enforces the construction of output image with desired parameters. In ...
It can be observed that the default-to-param model achieves a higher mean PSNR and lower MAE compared to the param-to-param model. We believe this is due to the more complex nonlinearity associated with reparameterizing from an arbitrary parameter rather than from a fixed one.
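The two scores used in this comparison are standard image-quality metrics; a minimal sketch of their usual definitions (assuming 8-bit images, so a peak value of 255) is:

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def mae(x, y):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(np.asarray(x, float) - np.asarray(y, float))))
```

A higher PSNR and a lower MAE both indicate a closer match to the reference image.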
B
To overcome difficulties arising from neural network discretization, we develop a discretization approach that performs temporal discretizations before spatial discretizations. This approach leads to a computer-memory-efficient implementation of neural network-based algorithms. Since neural networks are advantageous du...
In this section, we present the structure-preserving EVNN discretization for solving both L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-gradient flows and generalized diffusions, As mentioned earlier, the goal is to construct a neural-network discretization based on the energy-dissipation l...
In this paper, we develop structure-preserving EVNN schemes for simulating the $L^2$-gradient flows and the generalized diffusions by utilizing a neural network as a tool for spatial discretization, within the framework of the discrete energetic variat...
In this section, we test the proposed EVNN methods for various $L^2$-gradient flows and generalized diffusions. To evaluate the accuracy of different methods, we define the $l^2$-erro...
The rest of the paper is organized as follows. Section 2 reviews the EnVarA and some existing neural network-based numerical approaches for solving PDEs. Section 3 of the paper is devoted to the development of the proposed EVNN schemes for $L^2$-gradi...
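The structure the schemes aim to preserve is the energy-dissipation law: along an $L^2$-gradient flow $x_t = -\nabla E(x)$, the energy decreases in time. The toy sketch below illustrates this with a plain explicit-Euler time discretization on a quadratic energy; it is only a stand-in for the temporal part of the discretization, and omits the neural-network spatial discretization entirely.

```python
import numpy as np

def l2_gradient_flow(x0, grad_E, dt=0.01, steps=200):
    """Explicit-Euler discretization of the L2 gradient flow x_t = -grad E(x).
    Returns the discrete trajectory so the energy decay can be inspected."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x - dt * grad_E(x)
        traj.append(x.copy())
    return traj
```

For the quadratic energy E(x) = ||x||^2/2 (so grad E(x) = x), the discrete energy is monotonically decreasing for sufficiently small dt.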
D
In this section, we introduce the notion of projected barcodes. The projected barcodes of a multi-parameter persistence module $F$ on $\mathbb{V}$ (that is a $\gamma$-sheaf) is the family of barcodes obtained by considering the direct images of $F$ by various maps from $\mathbb{V}$...
One of the challenges of multi-parameter persistence is to provide a meaningful notion of distance between persistence modules which can be computed in a reasonable time complexity. Indeed, it has been shown that the usual interleaving distance between persistence modules is NP-hard to compute in the multi-parameter ca...
The fibered barcode has been successfully used in a variety of machine learning tasks as a summary of multi-parameter persistence modules [7]. Nevertheless, it is easy to build examples of $\gamma$-sheaves (hence persistence modules) with the same fibered barcodes (hence at matching distance zero) though they...
So far, the main invariant that has been developed for multi-parameter persistence modules, efficiently implemented, and which enjoys the desired stability properties, is the fibered barcode [19] (which is equivalent to the rank-invariant [6]). Roughly speaking, the fibered barcode of an $n$-parameter persisten...
Nevertheless, there are several bottlenecks to the fibered barcode approach. First, it is easy to exhibit two persistence modules at matching distance zero but having arbitrarily large interleaving distance (see section 5.1). Second, computing and storing an entire $n$-parameter persistence module is time an...
B
Figure 3: F1 against sample size for each variable ordering and network for the HC algorithm. Each plot starts at the sample size at which there are no single-valued variables. (Note that the red and black lines are coincident for the Sachs network which is why the former is not visible).
Figures 6 and 7 in Appendix A characterise incorrect edges in the learnt graph when optimal and worst variable ordering is used respectively. These figures are based on the same experiments used for Figure 3, and use the edge characterisations defined in Table 2. The number of edges is scaled by the number of edges in ...
We compare the sensitivity of the F1 score relative to the default alphabetic ordering and two other orderings which we term “optimal” and “worst”. In optimal ordering, the variables are ordered so that they are consistent with the topological ordering of the nodes in the reference graph. This optimal ordering ensures ...
By examining the way a DAG develops iteration by iteration in the simple HC algorithm, we find that arbitrary decisions about edge modifications play an important role in determining the accuracy of the learnt graph and thus, in judging the structure learning capability of an algorithm. This is particularly so when HC...
The remaining plots show the impact where the sample size is kept the same but where another factor is varied. The green plot shows the change in accuracy resulting from using the optimal ordering rather than the alphabetic one, and the red plot shows the change in accuracy moving from the worst to optimal ordering. Th...
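One concrete reading of the F1 score plotted in these experiments is a plain F1 over directed edges of the learnt versus reference graph. This is an assumption-laden sketch: the paper's edge characterisations (Table 2) may score partially correct edges differently.

```python
def edge_f1(learned, reference):
    """F1 of directed edges in a learnt graph vs. a reference graph.
    Edges are (parent, child) tuples."""
    learned, reference = set(learned), set(reference)
    tp = len(learned & reference)  # correctly recovered edges
    if tp == 0:
        return 0.0
    precision = tp / len(learned)
    recall = tp / len(reference)
    return 2 * precision * recall / (precision + recall)
```

With one of two edges correct and one reversed, for example, both precision and recall are 0.5, giving F1 = 0.5.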
B
Various weight-only quantization methods have been proposed to improve the compression ratio while preserving accuracy, often accompanied by dedicated kernels for practical acceleration through quantization (Jeon et al., 2020; Frantar et al., 2022; Lin et al., 2023).
In this paper, we introduce LUT-GEMM, a highly efficient matrix multiplication kernel designed to operate directly on quantized weights, thereby eliminating the need for an additional dequantization step. Leveraging an extended BCQ format, LUT-GEMM exhibits the capability to process both uniformly and non-uniformly qua...
Figure 4: (a) Normalized LUT-GEMM latency when an $(m\times n)$ weight matrix is quantized by 3-bit with different $g$ values. (b) Relationship between latency and compression ratio when LUT-GEMM performs quantized matrix multiplications with $m=n=12288$...
LUT-GEMM inherently accommodates quantized weights and full-precision activations, enabling the acceleration of the inference process while preserving the desired level of precision. Specifically, LUT-GEMM employs the binary-coding quantization (BCQ) format (Rastegari et al., 2016) to capitalize on simple arithmetic op...
In this paper, we present LUT-GEMM, a kernel designed to facilitate quantized matrix multiplications with quantized weights and full-precision activations. As shown in Figure 1, LUT-GEMM addresses two issues prevalent in previous quantization approaches: 1) accuracy degradation due to quantized activations and 2) the n...
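The core lookup-table idea can be sketched for the simplest 1-bit BCQ case, where a row of W is approximated as a scale times a ±1 vector. For each group of g activations, all 2^g signed partial sums are precomputed once; every row then only indexes the table with its bit pattern instead of multiplying. This pure-NumPy formulation and its function names are our own, not the paper's kernel.

```python
import numpy as np
from itertools import product

def lut_matvec_1bit(B, alpha, x, g=4):
    """LUT-based matrix-vector product for 1-bit binary-coding quantization:
    W ~ diag(alpha) @ B with B in {-1,+1}^(m x n)."""
    m, n = B.shape
    assert n % g == 0, "n must be divisible by the group size g"
    signs = np.array(list(product([-1, 1], repeat=g)))  # (2**g, g), MSB first
    weights = 1 << np.arange(g - 1, -1, -1)             # bit weights, MSB first
    y = np.zeros(m)
    for j in range(0, n, g):
        lut = signs @ x[j:j + g]                    # all 2**g signed partial sums
        idx = ((B[:, j:j + g] + 1) // 2) @ weights  # each row's bit pattern
        y += lut[idx]                               # table lookup, no multiplies
    return alpha * y
```

The result matches the dense product alpha * (B @ x); the saving comes from amortizing the 2^g-entry table across all m rows.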
D
The above reduction of Min-TSS to Dist-Nonhalt uses an auxiliary graph that contains parallel edges. Therefore, the stated complexity results hold for general (i.e. not necessarily simple) graphs. However, in [13, Corollary 22], Hladký, Kráľ, and Norine proved that if one subdivides each edge of a graph with a new vert...
Part of the work was done while the first and second authors attended the “Discrete Optimization” trimester program of the Hausdorff Institute of Mathematics, Bonn. The authors are grateful to HIM for providing excellent working conditions and support.
The third author was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, and by the ÚNKP-21-5 New National Excellence Program of the Ministry of Innovation and Technology of Hungary. The research has been implemented with the support provided by the Lendület Programme of the Hungari...
Now we turn to the proof of the main result of the paper, and show that approximating the rank of a graph divisor within reasonable bounds is hard. The high level idea of the proof is the following. First, we show that the Minimum Target Set Selection problem reduces to computing the distance of a divisor on an auxilia...
Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee...
A
During the forward pass, the Heaviside function is retained, while a surrogate function replaces it during the backward pass. One simple choice for the surrogate function is the Spike-Operator [29], which exhibits a gradient resembling a shifted ReLU function. In our work, we go beyond the conventional surrogate gradie...
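The forward/backward split described above can be sketched in a few lines of NumPy. The triangular window used here is our stand-in for the Spike-Operator's shifted-ReLU-shaped gradient; the names and threshold value are assumptions for illustration.

```python
import numpy as np

THRESH = 1.0  # firing threshold (illustrative value)

def heaviside_forward(v):
    """Exact spike nonlinearity used in the forward pass."""
    return (v >= THRESH).astype(float)

def surrogate_backward(v):
    """Surrogate derivative used only in the backward pass: a triangular
    (shifted-ReLU-like) window around the threshold, standing in for the
    Spike-Operator gradient of [29]."""
    return np.maximum(0.0, 1.0 - np.abs(v - THRESH))
```

Membrane potentials far from the threshold receive zero gradient, while those near it receive a nonzero surrogate gradient, which is what makes backpropagation through the non-differentiable spike possible.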
Based on the aforementioned analysis, the utilization of a temporal-wise attention mechanism in SNNs has exhibited substantial progress in effectively processing time-related data streams. Moreover, it has been observed in both biological neural networks [32] and ANNs [15] that recalibrating channel features within con...
In the realm of ANNs, the Squeeze and Excitation (SE) block, introduced by Hu et al. [15], has proven to be a highly effective module for enhancing representation. The SE block can be seamlessly incorporated into a network, requiring only a minimal increase in parameters to recalibrate channel information. By employing...
Previous studies in ANNs [15, 16] have often utilized the attention mechanism as a means to address the challenges posed by multidimensional dynamic problems. The attention mechanism, inspired by human cognitive processes, enables the selective focus on relevant information while disregarding irrelevant data. This app...
Despite significant progress, SNNs have yet to fully exploit the superior representational capability of deep learning, primarily due to their unique training mode, which struggles to model complex channel-temporal relationships effectively. To address this limitation, Zheng et al. [11] introduced a batch normalization...
B
\[
\langle\cdot,\cdot\rangle_{\mathcal{E}_h^{\partial}}\coloneqq\sum_{e\in\mathcal{E}_h^{\partial}}\langle\cdot,\cdot\rangle_{e},
\qquad
a_K(u_h,v_h)\coloneqq(\nabla\cdot u_h,\nabla\cdot v_h)_{K}+(\nabla\times u_h,\nabla\times v_h)_{K}+\langle\hat{p}_h,v_h\rangle_{\partial K},
\]
\[
(\nabla\cdot u_h,\nabla\cdot v_h)_{\mathcal{T}_h}+(\nabla\times u_h,\nabla\times v_h)_{\mathcal{T}_h}+\langle\hat{p}_h,v_h\rangle_{\partial\mathcal{T}_h}+\bigl\langle\llbracket u_h\rrbracket,\llbracket v_h\rrbracket\bigr\rangle_{\mathcal{E}_h^{\circ}}+\langle\gamma\,u_h\times n,v_h\times n\rangle_{\mathcal{E}_h^{\partial}}=(f,v_h)_{\mathcal{T}_h}.
\]
D
Threats to Validity: The internal threat to validity mainly lies in human mistakes in the study. Specifically, we may misinterpret the results of FlashSyn or make mistakes in its implementation. All authors have extensive smart contract security analysis experience and software engineering expertise ...
The external threat to validity mainly lies in the subjects used in our study. The flash loan attacks we study might not be representative. We mitigate this risk by using diverse and reputable data sources, including academic papers (Qin et al., 2021; Cao et al., 2021) and an industrial database (SlowMist, 2023).
or the action is very common (such as Uniswap that has been widely studied (Qin et al., 2021; Xia et al., 2021; Xu et al., 2021)). FlashSyn implements two numerical methods using external libraries. FlashSyn-poly utilizes sklearn (Pedregosa et al., 2011; Buitinck et al., 2013), and FlashSyn-inter employs scipy (Virtane...
For all verified smart contracts, their ABIs are made public to make it easier for users to call functions and engage with the contracts. An ABI typically comprises (public or external) function names, argument names/types, function state mutability, and return types. During the process of selecting action candidates, certain...
A
For an agent to learn fast, additional structure of the problem is required. In Meta RL [4; 39; 7; 26], agents are allowed to train on a set of training tasks, sampled from the same task distribution as the task they will eventually be tested on. The hope is that similar structure between the tasks could be identified ...
Meta RL has seen extensive empirical study [4; 39; 7; 26], and the connection to Bayesian RL has been made in a series of recent works [20; 14; 25; 39]. Most meta RL studies assumed infinite tasks during training (effectively drawing different random MDPs from the prior at each training iteration), with few exceptions...
To complement our theoretical results, we demonstrate the practical potential of our approach by incorporating it in the VariBAD meta-RL method of Zintgraf et al. [39]. In VariBAD, a variational autoencoder (VAE) is used to learn a low-dimensional latent representation of the task distribution, and a deep neural networ...
While significant empirical progress in meta RL has been made, the theoretical understanding of the problem is still limited. A central question, which we focus on in this work, is the probably approximately correct (PAC) analysis of meta RL, namely, how many training tasks are required to guarantee performance that is...
D
\[
\mathrm{MRR}=\frac{1}{|Qr|}\sum_{i=1}^{|Qr|}\frac{1}{\mathrm{rank}_i}
\]
\[
\text{recall}=\frac{|\text{relevant items}\cap\text{retrieved items}|}{|\text{relevant items}|}
\]
Results. Table III reports the MRR performance for each model, in which EN → ZH means that we used English queries to find Chinese synonym candidates (i.e., Chinese translations), and ZH → EN was the reverse query direction. All three models outperformed the random baseline, indicating that a...
When the system places the relevant item in the first place, its MRR will be 1. In our case, a high MRR means relevant translations are identified earlier, which prevents people from reading through many irrelevant translations. Conversely, if a system hardly identifies relevant items or places those items in the latt...
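The two evaluation measures above can be sketched as small helper functions (names are our own):

```python
def mean_reciprocal_rank(ranks):
    """MRR from the 1-based rank of the first relevant item per query."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def recall(relevant, retrieved):
    """Fraction of relevant items that appear among the retrieved items."""
    relevant, retrieved = set(relevant), set(retrieved)
    return len(relevant & retrieved) / len(relevant)
```

For example, if three queries place their first relevant item at ranks 1, 2, and 4, the MRR is (1 + 1/2 + 1/4)/3.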
Test Set Preparation. We evaluated our proposed framework by testing the query performance of frequently used medical entities selected from our collected healthcare Q&A corpora. Only five divisions (community groups in MedHelp), including General-Health, Women-Health, Dermatology, Ear-Nose-Throat, and Neurology, were...
C
The molecular system for alanine dipeptide in vacuum was parametrized using the forcefield CHARMM36m Huang et al. (2017) and prepared using CHARMM-GUI Lee et al. (2016). A 100 ns MD simulation of alanine dipeptide in vacuum at 450 K temperature and 1 atm pressure was performed using Nose-Hoover therm...
One of the primary motivations behind our work is the recognition that model complexity can be an insufficient descriptor of human interpretability as shown in Fig. 1. In this case, if model complexity is used as a proxy for human interpretability, then both linear models shown in Fig. 1(a,b) will be assigned the sam...
A VAMPnets Mardt et al. (2018) deep neural network was constructed from two identical artificial neural network lobes that take trajectory order parameters (OPs) at time steps $t$ and $t+\tau$ respectively as inputs. The input data was passed through several layers of neurons and final...
The variational approach for Markov processes (VAMPnets) is a popular technique for analyzing molecular dynamics (MD) trajectories Mardt et al. (2018). VAMPnets can be used to featurize, transform inputs to a lower dimensional representation, and construct a Markov state model Bowman, Pande, and Noé (2013) in an automated m...
To establish that TERP indeed takes both the input data and the black-box model into account when generating explanations, we subject our protocol to the sanity tests developed by Adebayo et al. (2018). We achieve this by taking the fine-tuned ViT model and randomizing the model parameters in a top-to-bo...
B
Bounded model checkers such as ESBMC are now mature software, used industrially (Gadelha et al., 2018) and capable of finding bugs in production software. We leverage this power of model checkers as a method for seed generation. During greybox fuzzing, if a particular branch has not been explored, ESBMC can provide a m...
An important FuSeBMC subsystem discussed in this paper is the Tracer, which coordinates the bounded model checker and the various fuzzing engines. The Tracer monitors the test-cases produced by the fuzzers. It selects those with the highest impact (as measured by a couple of metrics discussed in Section 3) to act as s...
In this paper, we presented FuSeBMC v4, a test generator that relies on smart seed generation to improve the state-of-the-art in hybrid fuzzing and achieve high coverage for C programs. First, FuSeBMC analyses and injects goal labels into the given C program. Then, it ranks these goal labels according to the given stra...
The Tracer subsystem determines the goals covered by test-cases produced by the bounded model checker and the fuzzer. Whenever a test-case is produced, Tracer compiles the instrumented program together with the newly generated test-cases and runs the resulting executable. Before the compilation, it performs additional ...
FuSeBMC begins by analyzing C code and then injecting goal labels into the given C program (based on the code coverage criteria that we introduce in Section 3.2.1) and ranking them according to one of the strategies described in Section 3.2.2 (i.e., depending on the goal’s origin or depth in the PUT). From then on, FuS...
A
Since $C^{(\alpha,\beta)}$ is upper triangular, so is $\left(C^{(\alpha,\beta)}\right)^{-1}$...
Table 1: The action of operators on quasimatrices of Jacobi polynomials via their banded matrix representations. In the final column, $k,j\in\mathbb{N}_0$. See [40, Appendix A] for the entries of these matrices.
We shall use the matrices acting on Jacobi polynomials given in Table 1 as building blocks to construct matrix representations of the operators we need. For example, the integration matrix (which is a representation of the standard integration operator) is diagonal and follows from the weighted differentiation matrix i...
Our work is strongly influenced by the method proposed by Hale and Olver [25], in which direct sums of appropriately weighted Jacobi polynomials are used as basis functions. We shall refer to this method as the sum space method and we note that the basis functions are related to the “generalized Jacobi functions” of [...
The following is an outline of the paper: We introduce the basic constituents of the JFP method in Sections 2 and 3 (matrix representations of operators on quasimatrices, high-precision floating-point numbers, the JFP basis, etc.). We then focus on the properties (Section 4) and computation (Section 5) of fractional in...
B
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. For entities that do not hold the feature at all, they should have the same and the lowest value. If ...
Currently, there is no unified rule on what value should be the lowest. In the main text, we consider the lowest value to be zero. Indeed, all 21 indexes in this study assign the value 0 to entities that do not hold the feature they are designed to quantify. There could be exceptions. For example, one can design an ind...
In the main text, taking the common neighbor feature as an example, we show the theoretical finding can help us to determine the feature and index selection. To further validate the preceding analysis, we here conduct a second analysis using the path feature (Table S2). When using the index SPL to make unsupervised pr...
Let us first consider the best index value ranking in the unsupervised approach (Fig. 1c presented in the main text and Fig. S20), in which the lowest index value of $L_1$ is greater than the highest index value of $L_2$...
B
The most straightforward way is to only update the classifier layer [15, 23, 26, 62], but the accuracy is low when the domain shift is large [12]. Later studies investigate other tuning methods including updating biases [12, 71], updating normalization layer parameters [53, 25], updating small parallel branches [12, 32...
Though there are low-cost efficient transfer learning algorithms like training only the final classifier layer, bias-only update [12], etc., the accuracy drop is significant (Figure 9), and existing training systems cannot translate the theoretical saving into measured saving. Furthermore, devices like microcontrollers a...
However, on-device training on tiny edge devices is extremely challenging and fundamentally different from cloud training. Tiny IoT devices (e.g., microcontrollers) typically have a limited SRAM size like 256KB. Such a small memory budget is hardly enough for the inference of deep learning models [47, 46, 7, 11, 43, 2...
The huge gap (>1000×) makes it impossible to run on tiny IoT devices with current frameworks and algorithms. Current deep learning training systems like PyTorch [56], TensorFlow [4], JAX [10], MXNet [16], etc. do not consider the tight resources on edge devices. Edge deep learning inference frameworks like TVM [...
The success of deep learning is built on top of popular training frameworks such as PyTorch [56], TensorFlow [5], MXNet [16], JAX [10], etc. These systems usually depend on a host language (e.g. Python) and various runtime, which brings significant overhead (>300MB) and does not fit tiny edge devices.
D
\[
\tfrac{1}{2}\bigl((B+I)y^{\ast}+c\bigr)=\tfrac{1}{2}\bigl|(B-I)y^{\ast}+c\bigr|.
\]
\[
\frac{(I-\tilde{D})Ay^{\ast}}{2}=-\frac{(I-\tilde{D})Ay^{\ast}}{2}+\frac{(I-\tilde{D})By^{\ast}}{2}-\frac{(I-\tilde{D})(b-c)}{2},
\]
\[
\frac{I+\tilde{D}}{2}(x^{\ast}-y^{\ast})+\frac{I-\tilde{D}}{2}Ax^{\ast}=\frac{(I-\tilde{D})B}{2}y^{\ast}-\frac{(I-\tilde{D})(b-c)}{2},
\]
\[
\frac{I+\tilde{D}+(I-\tilde{D})A}{2}(x^{\ast}-y^{\ast})=-\frac{(I-\tilde{D})(A-B)}{2}y^{\ast}-\frac{(I-\tilde{D})(b-c)}{2},
\]
\[
\frac{\max\|(A-BD)^{-1}\|\,\|\Delta b\|}{\|x^{\ast}\|}=\frac{\max\|(A-BD)^{-1}\|\,\|\Delta b\|}{\|b\|}\,\frac{\|b\|}{\|x^{\ast}\|}.
\]
B
This construction immediately defines permutation-based versions of the classic benchmarks such as OneMax, LeadingOnes, and Jump functions. We note that the sorting problem with the Ham fitness function regarded in [STW04] is exactly what we obtain from applying this construction to the classic OneMax benchmark. We are...
The classic LeadingOnes benchmark on bit-strings was defined by Rudolph [Rud97] as an example of a function that is harder for typical EAs than OneMax, but still unimodal. The LeadingOnes function counts the number of successive ones from left to right, that is, we have
We also analyze the runtime of this heavy-tailed EA on the permutation-based LeadingOnes problem. Given that this is a unimodal problem and that the previous proofs obtained the asymptotically optimal runtime via local mutations (swapping two items) only, we do not expect a runtime different from $\Theta(n^3)$...
We now define its permutation version, following our general construction from Section 4. To ease the notation, let $g$ denote the function that counts the number of fixed points of a permutation, that is, the number $i\in[1..n]$ of elements that are “in place”, that is, that...
A
\[
\prod_{i\in I}\Bigl(\frac{\eta_i}{1-\eta_i}\Bigr)^{\omega_i}=e^{\sum_{i\in I}\omega_i\log\frac{\eta_i}{1-\eta_i}}...
\]
Such an adaption of the constraints is not needed in our proposed approach and therefore makes our approach more flexible. We notice however that the impact of the second level in the case of the projected gradient is stronger than in the geometric case. We believe that this is due to the fact that our coarse model loo...
We point out that the computation of Riemannian means typically requires solving an optimality condition of the form (4.28) by a fixed-point iteration. In the present case, the chosen geometry yields the corresponding geometric mean (4.26) in closed form.
We adapt the coarse grid model (3.3) of the objective function at $x_{0}=Ry_{0}$ to the present geometric setting,
Retractions [AMS08, Def. 4.1.1] and their inverses are basic ingredients of first-order optimization algorithms on Riemannian manifolds. The main motivation is to replace the exponential map with respect to the metric (Levi-Civita) connection by an approximation that can be evaluated efficiently or even in closed form....
To apply our reductions, Lemma 16 and Lemma 17, and complete the proof of Theorem 6, we now require a function $f:\{0,1\}^{d}\to\{0,1,\dots,d\}$ that is
Razborov’s original work [35] proves a similar result for finding a perfect matching. However, the bound is super-polynomial instead of truly exponential. In what follows, we could also have used the indicator function of a perfect matching in a graph to obtain another, albeit weaker, separation result.
The Tardos function, introduced in [43], will satisfy these conditions. The Tardos function builds upon the seminal work of Razborov, [35], who studied the hardness of monotone computation for perfect matching. The function is constructed as a graph invariant and is always sandwiched between the clique and chromatic nu...
The function $h$ we use is the harmonic extension of a graph invariant function, introduced by Éva Tardos in [43]. The Tardos function and its properties build upon the seminal works of Razborov [35], and Alon and Boppana [1]. The mentioned works constitute a highly influential line of work on the limitatio...
To supply some further intuition, beyond the equivalent definitions, it is instructive to think about graph properties. In this case, for a graph with vertex set $[n]$, the domain is the adjacency matrix of the graph, $\{0,1\}^{n\times n}$...
$0<a_{\min}(\boldsymbol{y})=\min_{\boldsymbol{x}\in\bar{D}}a(\boldsymbol{x},\boldsymbol{y})\quad\text{and}\quad a_{\max}(\boldsymbol{y})=\max_{\boldsymbol{x}\in\bar{D}}a(\boldsymbol{x},\boldsymbol{y})<\infty.$
$\boldsymbol{\gamma}=(\gamma_{\mathfrak{u}})_{\mathfrak{u}\subseteq\{1,\ldots,s\}}$ denotes a collection of positive weights, $C_{\boldsymbol{\gamma},n,s}>0$...
In DG, the idea is to modify the variational formulation, and we can no longer exploit conformity to obtain regularity bounds for the DG solutions to (1). This means that the regularity analysis must be rewritten for the DG system, which will also affect the choice of the optimal QMC rule for the computation of the exp...
In the case of conforming finite element discretizations for the elliptic PDE problem, it is enough to analyze the parametric regularity of the continuous problem. The parametric regularity results are inherited by the conforming FE solution. Below, we briefly recap the main parametric regularity results for the affin...
A common feature of virtually all of the aforementioned QMC literature related to PDE uncertainty quantification is that the QMC rule is designed for the non-discretized PDE problem (1) whereas, in practical computations, one only has access to a discrete approximation of the PDE system. Of course, as long as one uses ...
$\Pr\left[X\leq\mathbf{E}[X]-t\right]\leq\exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}}\right).$
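As a numeric sanity check of this bounded-differences lower-tail bound, here is a sketch for the special case $a_i=0$, $b_i=1$ for all $i$, so the denominator reduces to $n$ (helper name ours):

```python
import math

def lower_tail_bound(n, t):
    """Hoeffding-type bound Pr[X <= E[X] - t] <= exp(-2 t^2 / n) for [0, 1]-bounded terms."""
    return math.exp(-2 * t * t / n)
```

For example, for $n=100$ terms, a deviation of $t=10$ below the mean has probability at most $e^{-2}\approx 0.135$.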
We would like to highlight a couple of points regarding Theorem 1. First, this is the first lower-bound result that addresses local agent adaptivity in CL models. In particular, it shows that the capacity of each agent to utilize newly observed information within each round does not contribute to reducing the ...
Implicit Forms of Distribution Classes.   Our first technical innovation is that we implicitly define the classes of distributions for the generalized round elimination by quantifying the relationship between each distribution in the class and the original hard input distribution. The discussion below is again a simpli...
To handle this challenge, we choose to explicitly analyze for each heavy arm its probability of being the best arm after the first round of pulls, and then show that the sum of these probabilities is small. This analysis is much more complicated than that for the non-adaptive algorithms. We try to illustrate the main i...
In the rest of this section, we first introduce the hard input distribution that we use to prove the lower bound and discuss its properties. We then introduce the classes of distributions on which we will perform the generalized round elimination. After these preparation steps, we present our main lower bound proof for...
Moreover, failure in meeting the inequality does not deteriorate the global and subsequential convergence of SPIRAL, as long as $\|d^{k}\|$ is scaled whenever necessary, as demonstrated in Theorem 4.7. As a final remark, although failur...
Even though in Algorithm 1 quasi-Newton directions based on the residual mapping were suggested (cf. (3.5)), any superlinear direction can be employed in the algorithm. As a result, our theory provides a direct globalization strategy for works that employ quasi-Newton direction with only local convergence guarantees. F...
In the finite sum setting, approaches proposed by [47] and [60] have utilized quasi-Newton updates with global convergence guarantees and linear convergence rates. Furthermore, [74] has extended the utilization of quasi-Newton directions to decentralized learning scenarios.
To attain a superlinear convergence rate, the IQN method [45] has integrated quasi-Newton directions with incremental updates, albeit with only local convergence guarantees. Conversely, the approach introduced in [59] also exhibits a superlinear convergence rate but necessitates Hessian evaluation. It is noteworthy tha...
In its outer loop, one distinguishing characteristic of the proposed algorithm, which sets it apart from stochastic algorithms like SVRG and SARAH, is its utilization of quasi-Newton directions integrated with a linesearch while preserving the advantageous low-memory characteristic. In the context of this study, variou...
$\left\lceil\frac{I_{3}+1}{2}\right\rceil+1,\ldots,I_{3}.$
This decomposition has found many applications, including deep learning ([25, 26]), tensor completion ([27, 28]), numerical analysis ([29, 30, 31, 32]), and image reconstruction [33]. Several randomized algorithms ([34, 35, 36]) can decompose a tensor into the t-SVD format, but all of them need an estimation ...
Our algorithm is a generalization of the randomized fixed-precision algorithm developed in [37], which is a modification and improved version of the fixed-precision algorithm proposed in [38]. Similar to the idea presented in [37], we propose to gradually increase the size of the second and first modes of $\underline{\mathbf{Q}}$...
Having computed the multiplications in (2), the rest of the computations in (3) are not expensive. So this version of the t-product is faster than its naive implementation, and we use it in all our simulations. (The Matlab implementation of this multiplication and related tubal operations are provided in the toolbox https:...
In this section, we test the proposed randomized algorithm using synthetic and real-world data. All numerical simulations were performed on a laptop computer with a 2.60 GHz Intel(R) Core(TM) i7-5600U processor and 8GB memory. In all our simulations, we set the power iteration $q=1$. The compression ratio...
For each of the 10 runs, we optimize the parameter $w$ (weight) by an internal 3-fold cross-validation procedure performed on the labeled portion of the training set. The semi-supervised methods also use the available unlabeled examples. The values of the parameter $w$ vary from 0 to 1 with a step of...
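The internal CV selection of $w$ can be sketched generically as a grid search (a minimal sketch under assumed interfaces: `labeled_folds` is a list of train/validation splits and `score_fn` evaluates one candidate $w$ on one split; the step size is a placeholder, since the paper's exact step is not given here):

```python
from statistics import mean

def select_w(labeled_folds, score_fn, step=0.125):
    """Pick w in {0, step, 2*step, ..., 1} maximizing the mean CV score."""
    candidates, w = [], 0.0
    while w <= 1.0 + 1e-9:
        candidates.append(round(w, 6))
        w += step
    return max(candidates,
               key=lambda w: mean(score_fn(tr, va, w) for tr, va in labeled_folds))
```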
The algorithms are evaluated by means of the area under the Precision-Recall curve (AUPRC). Since the considered tasks are MLC and HMLC, we use a variant of the AUPRC – the area under the micro-averaged average Precision-Recall curve ($\mathrm{AU}\overline{\mathrm{PRC}}$)...
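A micro-averaged PR summary pools all (example, label) decisions before ranking. As an illustration, here is a sketch computing the micro-averaged average precision, a common discrete approximation of the area under the pooled PR curve (names ours; not the paper's exact evaluation code):

```python
def micro_average_precision(y_true, y_score):
    """Micro-averaged average precision: pool all (example, label) pairs,
    rank them by score, and compute average precision over the pooled list.

    y_true, y_score: lists of per-example label lists of equal shape."""
    pairs = sorted(
        ((s, t) for ts, ss in zip(y_true, y_score) for t, s in zip(ts, ss)),
        key=lambda p: -p[0],
    )
    positives = sum(sum(ts) for ts in y_true)
    if positives == 0:
        return 0.0
    tp, ap = 0, 0.0
    for rank, (_, label) in enumerate(pairs, start=1):
        if label:
            tp += 1
            ap += tp / rank  # precision at each positive's rank
    return ap / positives
```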
The statistical test is applied to the predictive performances ($\mathrm{AU}\overline{\mathrm{PRC}}$) of the supervised and semi-supervised single trees (SL-PCT, SSL-PCT and SSL-PCT-FR) on the datasets considered in this study: 12 for multi-label classification and 1...
Figure 3 presents the learning curves in terms of the predictive performance ($\mathrm{AU}\overline{\mathrm{PRC}}$) of semi-supervised (SSL-PCT, SSL-PCT-FR, SSL-RF and SSL-RF-FR) and supervised methods (SL-PCT and CLUS-RF) on the 12 hierarchical multi-label classifi...
Figure 2 presents the predictive performance ($\mathrm{AU}\overline{\mathrm{PRC}}$) of semi-supervised (SSL-PCT, SSL-PCT-FR, SSL-RF and SSL-RF-FR) and supervised methods (SL-PCT and CLUS-RF) on the 12 MLC datasets, with an increasing amount of labeled data.
The first and second waypoints are utilized by the PID agent for lateral and longitudinal control of the vehicle. After predicting the second waypoint, the latent space is adjusted using the latest waypoint (the second one), ensuring that the MLP agent receives a comparable level of information abstraction as the PID agen...
During the training process, waypoints represent the vehicle’s locations at times $t+1$, $t+2$, and $t+3$ seconds in the future, projected onto a Cartesian coordinate system where the position (0,0) corresponds to the vehicle’s current location at time $t$. In ess...
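Projecting a world-frame position into such an ego-centered frame is a rigid transform. A minimal sketch (the axis convention, with the vehicle's heading `eyaw` rotated out so that the frame is aligned with the vehicle, is our assumption; the paper does not specify it here):

```python
import math

def to_ego_frame(wx, wy, ex, ey, eyaw):
    """Express world point (wx, wy) in the frame centered at the vehicle pose
    (ex, ey, eyaw): translate to the vehicle, then rotate by -eyaw."""
    dx, dy = wx - ex, wy - ey
    c, s = math.cos(-eyaw), math.sin(-eyaw)
    return (c * dx - s * dy, s * dx + c * dy)
```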
The purpose of the online test is to evaluate the model’s drivability in driving the vehicle. The model must drive the vehicle safely by following a set of route points while avoiding obstacles (e.g., a vehicle stopped on the left side of the road). The experiment is conducted three times for each condition and on dif...
Incorporating the third waypoint into the loss calculation enhances the training signal for DeepIPC, enabling more accurate predictions of the vehicle’s future positions. Nonetheless, we restrict the number of predicted waypoints to three to ensure that the MLP agent processes information congruently with the PID agen...
It is NP-complete to decide whether a given graph $G$ has two disjoint subsets $U_{1},U_{2}\subseteq V(G)$ such that, for $i=1,2$, ...
We reduce from the 3-Coloring problem. Recall that the task of 3-Coloring is to decide whether a graph $G$ admits a proper 3-coloring, that is, its vertices can be colored by three colors in such a way that adjacent vertices receive distinct colors. Equivalently,
To see this, let $G'$ be the graph obtained from $G$ by constructing a new vertex $w_{uv}$ for every edge $uv\in E(G)$...
This problem is very closely related to 3-coloring; in particular, if instead of having vertex cover number at most 1 we would ask $U_{i}$ to be an independent set, and if every edge of $G$ were contained in a triangle, this problem in fact...
We supplement our main results with a second NP-completeness proof for a problem closely related to computing the tree-independence number and the algorithm of Theorem 1.1. We consider the problem where we are given a graph $G$, two non-adjacent vertices $u$ and $v$, and an integer $k$...
For ModelNet40, since clients with multi-view data are of similar importance, their local embeddings exhibit similarities and demonstrate linear separability. A scrutiny of these local embeddings confirms that the unique characteristics of input features in each client lead to varied local embeddings. Consequently, we e...
Figure 3: Client-level explainability of VIM. Row 1 visualizes the input features. Row 2 shows the weights norm of linear heads. Row 3 shows the test accuracy when each client’s test input features are perturbed (red line denotes the clean test accuracy). Row 4 shows the weights norm of linear heads under only one nois...
The obtained weights norm in Figure 3 row 4 shows that VIMADMM can automatically detect the noisy client and lower its weights (compared to the clean one in row 2). Table XIII in Appendix -D shows that VIMADMM outperforms baselines with faster convergence and higher accuracy under noisy clients.
Given a trained VIMADMM model, we plot the weights norm of each client’s corresponding linear heads in Figure 3 row 2. Combining it with row 1, we find that the client with important local features indeed results in high weights (here, the weights of a client refer to the weights of the client’s corresponding linear hea...
to verify the client-level importance indicated by the linear heads. Each time, we only perturb the features of one client and keep the other clients’ features unchanged. The results in Figure 3 row 3 show that perturbing the client with high weights affects the test accuracy more, which verifies that clients with ...
In both settings, we use the first 80% of the edges as the training set, the next 10% as the validation set, and the remaining 10% as the testing set. The difference is that in the inductive setting we predict future edges of nodes never seen in the training set, while in the transductive setting we predict future edges of n...
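The chronological 80/10/10 split can be sketched as follows (a minimal sketch; it assumes the edge list is already sorted by timestamp):

```python
def chronological_split(edges, train=0.8, valid=0.1):
    """Split time-ordered edges into train/validation/test by position in time."""
    n = len(edges)
    n_tr = int(n * train)
    n_va = int(n * valid)
    return edges[:n_tr], edges[n_tr:n_tr + n_va], edges[n_tr + n_va:]
```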
On the Wikipedia and Reddit datasets, CTDNE and DyGNN obtain worse results than the static models. We conjecture that this is because CTDNE and DyGNN fail to model the edge features, leading to information loss. Table IV and Table V show the experimental results in terms of AP and AUC for the future link prediction ta...
We further quantitatively verify the robustness of our model by adding different levels of noise to the UCI dataset. To demonstrate its robustness, we compare our method with DyRep and TGN, where DyRep achieves the second-best result in Table III and TGN achieves the second-best results in Tables IV and V.
As shown in Table III, our Ada-DyGNN model consistently outperforms all the static and dynamic approaches in terms of MRR. This illustrates that designing a robust knowledge adaptation mechanism is beneficial to dynamic graph neural networks. For the transductive setting, our Ada-DyGNN relatively improves the performance...
For the inductive setting, our Ada-DyGNN relatively improves the MRR by 437.0%, 2.6%, and 20.2% over DyRep on the UCI, Wikipedia, and Reddit datasets, respectively. The dynamic graph neural network models, including DyRep, TGAT, Jodie, TGN and our Ada-DyGNN, generally achieve better performance than the three static models G...
The key idea of our Progressive Feature Learning framework is that we design Progressive Mapping and Progressive Uncertainty to span the cross-view and cross-cloth features in a cascaded way. Extensive experiments were conducted to validate the effectiveness of our framework.
Yongzhen Huang received the B.E. degree from Huazhong University of Science and Technology in 2006, and the Ph.D. degree from Institute of Automation, Chinese Academy of Sciences in 2011. He is currently an Associate Professor with School of Artificial Intelligence, Beijing Normal University, and works in cooperation ...
Xuqian Ren received the B.E. degree from University of Science and Technology Beijing in 2019, received the M.S. degree from Beijing Institute of Technology in 2022. She is currently a Ph.D. candidate with Computer Science Unit, Faculty of Information Technology and Communication Sciences, Tampere Universities, Finlan...
Saihui Hou received the B.E. and Ph.D. degrees from University of Science and Technology of China in 2014 and 2019, respectively. He is currently an Assistant Professor with School of Artificial Intelligence, Beijing Normal University, and works in cooperation with Watrix Technology Limited Co. Ltd. His research inter...
Xu Liu received the B.E. and Ph.D. degrees from University of Science and Technology of China in 2013 and 2018, respectively. He is currently a Research Scientist with Watrix Technology Limited Co. Ltd. His research interests include gait recognition, object detection and image segmentation.
(1) LambdaX is the heuristic-search version of the DPAV DQN. The floating-point number X is the initial value $\lambda_{0}$ with range $(0,1)$, and LambdaX (e.g., Lambda0.5, Lambda0.6) searches over different values of X for the initial $\lambda_{0}$...
In the first few training epochs, we notice that the averaged maximum value of DPAV DQN is negative, which is consistent with the averaged reward of its evaluation shown in Figure 3(d): at the early training stage, the policy quality is too low to finish most of the dialogues, so the averaged reward is low and th...
Our DPAV DQN method performs better than the baselines in terms of general performance. Since the training starts with the experience pool initialized by the same rule-based dialogue policy, the models’ performance in the very first few episodes is very similar. After that, the performance improved for all models,...
This work is implemented with the PyTorch toolkit. Compared with the standard DQN algorithm, we change the loss to the one defined by DPAV DQN in Algorithm 1. For these RL-based dialogue policies, the action value network $Q(\cdot)$ is an MLP with one hidden layer of 80 nodes. ReLU is the activation...
The evaluation metrics are success rate and averaged reward. Success rate is the ratio of the number of tasks successfully completed by the dialogue system in evaluation to the total number of dialogues in the test set. Averaged reward refers to the average of the cumulative rewards obtained by the dialogue system for ...
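The two metrics can be sketched directly from their definitions (a minimal sketch; the helper name and the per-dialogue tuple format are ours):

```python
def evaluate_policy(dialogues):
    """Compute (success rate, averaged reward).

    dialogues: list of (success: bool, cumulative_reward: float), one per test dialogue."""
    n = len(dialogues)
    success_rate = sum(1 for ok, _ in dialogues if ok) / n
    avg_reward = sum(r for _, r in dialogues) / n
    return success_rate, avg_reward
```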
Therefore, we develop a methodology that aims to constrain the variability of future actions based on the human intention estimated from past observations. We predict a hierarchical structure from a sequence of videos, each depicting a particular human action. From this given video clip sequence, we define two differe...
Following the division of our methodology, we define our framework as a two-step process. First, we propose a Hierarchical Multitask Multi-Layer Perceptron (MLP) Mixer (H3M) to classify each observed video with an action label, as well as to extract the overall intention of the human. The MLP Mixer-based architecture [32] has ...
Long-Term Anticipation of human actions needs to exploit temporal dependencies among the observed actions to generate plausible human action sequences in the future. Our two-step approach first aims at understanding the observed actions through a Hierarchical MultiTask MLP Mixer (H3M), described in Section 3.2 in a bot...
Finally, we investigate the performance of our whole framework based on the end-to-end evaluation. First, H3M classifies the actions and the intention from the observed clips. Then, based on these predictions, our I-CVAE model anticipates the $Z=20$ actions in the future. In Table 4 we evaluate the L...
$\boldsymbol{c}=\frac{1}{|\mathcal{S}|}\sum_{\boldsymbol{s}\in\mathcal{S}}\psi_{0}(\phi_{0}(\boldsymbol{s}))$, where $\phi_{0}$ ...
In the future, we plan to investigate new strategies to estimate the mean and variance of the prior one-class distance distribution. Also, in producing native anomalies, tailored perturbation methods can be extended to a heuristic generation process.
These six perturbation functions can simulate abnormal behaviors in time series data. Fig. 5 presents a base time series sub-sequence $\boldsymbol{s}$ and corresponding native anomaly examples generated via these six data perturbation operations within $\Omega$.
Anomaly Types in Native Anomaly Generation. We define six perturbation functions with fixed parameters in $\Omega$. This component is also a good handle to embed readily accessible prior knowledge into the learning process. Some specific real-world applications may have their own definitions of anomalies.
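The six perturbation functions themselves are not reproduced here; as an illustration of the idea, two hypothetical perturbations of this kind (a point spike and a constant plateau, both with fixed parameters) could look like:

```python
def add_spike(s, idx, scale=5.0):
    """Point anomaly: inject a spike at position idx, scaled by the series' magnitude."""
    s = list(s)
    s[idx] += scale * max(abs(v) for v in s)
    return s

def flatten_segment(s, start, length):
    """Contextual anomaly: replace a segment with a constant plateau."""
    s = list(s)
    const = s[start]
    for i in range(start, min(start + length, len(s))):
        s[i] = const
    return s
```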
The bottom panel of Fig. 7 shows the anomaly scores produced by our method COUTA. COUTA successfully identifies all of these anomaly cases with distinguishably higher scores on true anomalies and consistently lower scores on normal moments. We use three pre-defined fixed operations in creating native anomaly examples. ...
Following the seminal work by Reiter and Dale (Reiter and Dale, 1997), the most comprehensive survey on D2T to-date has been that by Gatt and Krahmer (Gatt and Krahmer, 2018). Although several articles have taken a close examination of NLG sub-fields such as dialogue systems (Santhanam and Shaikh, 2019), poetry genera...
From a review of 284 correlations reported in 34 papers, Reiter (Reiter, 2018) notes that the correlations between BLEU and human evaluations are inconsistent - even in similar tasks. While automated metrics can aid in the diagnostic evaluation of MT systems, the author showcases the weakness of BLEU in the evaluation ...
In this section, we take a deeper look into the specifics of evaluation for D2T systems. Traditionally, the evaluation of D2T systems is compartmentalized into either intrinsic or extrinsic measures (Belz and Reiter, 2006). The former either uses automated metrics to compare the generated narrative to a reference text...
The practice of automating the translation of data to user-consumable narratives through such systems is known as data-to-text generation, as depicted in Fig. 1. Although encompassed by the general umbrella of Natural Language Generation, the nuance that differentiates D2T from the rest of the NLG landscape is that the...
As such, neural D2T borrows heavily from advances in other facets of NLG such as neural machine translation (NMT) (Bahdanau et al., 2015; Wu et al., 2016) and spoken dialogue systems (SDS) (Wen et al., 2015, 2016; Dušek and Jurcicek, 2016). The pertinence of such a survey thus also spans highlighting the stages of ...
In Fig. 9, we visualized some “unseen” visual relation triplet categories mined by Neg-NSD that never appear in the original VG dataset. Some of these triplets are easily overlooked by the annotators, such as the relation against between bike and bike, or the relation along between rock and street. These harvested new ...
Results. From the results in TABLE III and TABLE VII, we have the following observations: 1) Compared to the two strong baselines (i.e., Motifs and VCTree), our NICE can consistently improve model performance on metric mR@K over all three tasks (e.g., 5.9%∼14.3% and 3.7%∼14.7% absolute...
Fig. 10 demonstrates some qualitative results generated by Motifs, Motifs+NICE, and Motifs+NICEST. From Fig. 10, we can observe that Motifs tends to predict coarse-grained (i.e., head) predicates, such as near, while Motifs+NICE tends to predict fine-grained (i.e., tail) predicates, such as sitting on and covering. Th...
To prove the generality of NIST, we used the two most common baseline models (i.e., Motifs [24] and VCTree [26]) as teachers biased towards head predicates (i.e., high Recall), and two models trained using unbiased methods (i.e., Motifs-TDE [29] and Motifs-NICE) as teachers biased towards the tail (i.e., high mean Reca...
Figure 10: Scene graphs generated by Motifs (left), Motifs+NICE (middle), and Motifs+NICEST (right) on PredCls. Red predicates are errors (i.e., not GT and unreasonable), green predicates are correct (i.e., GT), and brown predicates are reasonable (not in GT but reasonable). Only detected boxes overlapping with GT are s...
In PSM, an attacker can get its optimal strategy from Equations (13) and (14). When both $RER_{A}$ values are positive, the attacker’s optimal strategy is to launch PSM attacks. Otherwise, it should take either honest mining ...
To further increase the attacker’s profits, we proposed A-PSM at the cost of acquiring the block height of the public branch timely. A-PSM assures an attacker of revenue no less than selfish mining. It can even outperform honest mining when the overall mining power is no more than 50% in the attacker’s private branch (...
In A-PSM, an attacker can get its optimal strategy from Equations (20) and (21) similarly. In Figure 14(c), we graphically illustrate the optimal strategy for attackers in A-PSM. As we can see from Figure 14(c), A-PSM can outperform selfish mining even if the attracted miners only control less than 1% of mining power. ...
We also discuss the scenario of multiple attackers. We evaluate and compare the revenue of both attackers and rational miners in the context of PSM and A-PSM strategies with the one of selfish mining and honest mining. In PSM, attracted miners can have more profits when their mining power is less than the attacker. An ...
At the EoS, gradient descent would still be moving into regions of higher curvature were it not being constantly repelled from these high-curvature regions by unstable dynamics. As we confirm below, these findings also generalize to preconditioned gradient descent (with a static preconditioner).
However, even though adaptive gradient methods train at the “Adaptive Edge of Stability” (AEoS), their behavior in this regime differs in a significant way from that of non-adaptive methods in the non-adaptive EoS regime: whereas non-adaptive optimizers in the non-adaptive EoS regime are blocked from accessing high-cu...
On quadratic objective functions, this behavior would lead to divergence. However, neural network training objectives are not quadratic, and gradient descent typically does not diverge; instead, it enters a regime called the Edge of Stability (EoS) [8] in which the sharpness hovers just above, or oscillates around, the...
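The divergence claim for quadratics can be seen in a two-line experiment: on $f(x)=\lambda x^2/2$, gradient descent iterates $x\leftarrow(1-\eta\lambda)x$, which contracts when $\lambda<2/\eta$ and blows up when $\lambda>2/\eta$ (a minimal sketch of the textbook stability condition, not code from the paper):

```python
def gd_quadratic(lam, eta, steps=50, x0=1.0):
    """Run gradient descent on f(x) = lam/2 * x^2; each step multiplies x by (1 - eta*lam)."""
    x = x0
    for _ in range(steps):
        x -= eta * lam * x
    return x
```

With step size $\eta$, sharpness below the threshold ($\lambda < 2/\eta$) drives $x$ to zero, while sharpness above it makes $|x|$ grow geometrically.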
In contrast to gradient descent (and preconditioned gradient descent), adaptive gradient methods do not evolve as linear recurrence relations on quadratic functions. Thus, it is a priori unclear whether their local stability can be modeled using an eigenvalue condition.
However, it has not been clear whether these findings have relevance for adaptive gradient methods. Because of adaptive preconditioning, adaptive gradient methods do not evolve as linear recurrences on the local quadratic Taylor approximation, and thus it is not clear why their local stability would be well-modeled by ...
In our simulations, we do not observe a misfolded state of CLN025 shown to be highly populated in several studies [78, 79] employing different force fields (Amber99 [80] and Amber99-SB [58], respectively) compared to CHARMM27 here [74]. This misfolded state is also not observed in the long unbiased simulation from Ref. 76 tha...
In Fig. 4, we can see the lower-lying free-energy basins in the reweighted stochastic embeddings are captured by both mrse and stke. We can also notice a slight difference between the metastable states lying higher in free energy. Specifically, mrse captures more states below a threshold of 25 kJ/mol in comparison to t...
Comparing the free-energy barriers between the different embeddings in Fig. 4, we can see that they are similar, particularly for the mrse embedding and the free-energy surface spanned by the distance and the radius of gyration, i.e., from 10 to 15 kJ/mol. We can compare our results to the unbiased simulation data from...
We can observe that the free-energy landscape in the low-dimensional manifold calculated by mrse is highly heterogeneous, with multiple partially unfolded intermediate states and many possible reaction pathways, as shown in Fig. 4(a). Such a complex free-energy landscape shows that the dynamics of CLN025 is...
Overall, both the separation of the CLN025 metastable states and the free-energy landscapes calculated for the low-dimensional embeddings suggest that the proposed framework can be used to find slow CVs and physically valid free-energy estimates. The presented results (Fig. 4) clearly show that using our approach, we c...
The American Heart Association division of the left ventricle (Manuel et al., 2002) is well established and was developed to standardise the nomenclature of the various subregions of the left ventricle. For consistency with the figures here, the longitudinal axis is taken to run up the centre of the ventricular cavity,...
In order to compare the divisions, we must construct the AHA regions. This is straightforward with the definitions given by Manuel et al. (Manuel et al., 2002). The 17 subregions can be seen in Fig. 13a in $\mathbb{R}^{3}$ with the same view as is seen in...
Such a map is useful as it could ease knowledge exchange between modellers and stakeholders in these models, such as interested clinicians. In the specification of the AHA regions, the septum corresponds to regions 3, 4, 9, 10, & 15. Here, the septum corresponds to regions 3, 9, 14, & 17. The alignment between the subd...
Other than the physiologically motivated subdivision augmented with what is essentially a nearest-neighbour algorithm based on a 3D analogue of the taxicab metric, it is possible to divide the ventricle with other metrics. Perhaps the easiest to implement is a Euclidean-distance-based algorithm.
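A Euclidean-distance-based division amounts to a nearest-centre assignment of mesh points to region seeds; a minimal sketch (helper name and interface ours):

```python
import math

def nearest_center(point, centers):
    """Index of the Euclidean-nearest centre: the region a point is assigned to."""
    return min(range(len(centers)),
               key=lambda i: math.dist(point, centers[i]))
```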
Given that this division is geometrically based, and the one discussed here is physiologically based, an exact correspondence should not be expected between the AHA subdivision and any seen here. However, the AHA division is physiologically motivated and is essentially intended as a tool to standardise and clarify dis...
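To make the geometric alternative concrete, a Euclidean-distance-based division can be realized as a nearest-centroid assignment. The sketch below is a minimal illustration under our own assumptions (the point coordinates, centroid list, and function name are hypothetical, not the paper's implementation, which uses a physiologically motivated taxicab-like metric):

```python
import numpy as np

def assign_regions_euclidean(points, centroids):
    """Assign each point to its nearest region centroid under the
    Euclidean metric (hypothetical helper for illustration only)."""
    points = np.asarray(points, dtype=float)        # (n, 3) mesh points
    centroids = np.asarray(centroids, dtype=float)  # (k, 3) region centroids
    # Pairwise Euclidean distances, shape (n, k)
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)                         # (n,) region indices

# Toy example: two region centroids on the x-axis.
labels = assign_regions_euclidean([[0.1, 0.0, 0.0], [2.9, 0.0, 0.0]],
                                  [[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
```

Because the assignment depends only on straight-line distance, it ignores the ventricular anatomy, which is precisely why an exact correspondence with a physiologically based division should not be expected.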
D
GCN models are constrained due to aggregating only local neighbors. For large enough graphs, GCN may fail to capture distant (long-range) dependencies entirely. Efforts have been made towards both building deeper GNN models [10] and building expressive GNN layers on long-range dependencies. For example, [11] proposed t...
We consider the pair-wise similarity between the role of a node and its neighbors to be crucial, since an OD pair’s source or destination can exert its impact on close neighbours of the nodes of interest, and we formulate their embedding as an “augmented source.” GAT is reported to be a well-suited solution to capture suc...
Based on the above results, one can see that our model outperforms all baselines in terms of accuracy, inference speed, and generalization ability to open-world input. In Table IV, we observe that our model outperforms the baselines in terms of accuracy. We consider that this verifies the validity of both our modified problem formulatio...
This work is one of the most related approaches to the one presented here. Contrary to the aforementioned contributions, which are based on transductive settings, our work focuses instead on an inductive setting. In our model, we are incorporating path-to-node attention, but passively between the path’s structurally si...
In our model, we leverage the conclusions from the aforementioned works and recast the graph-size-generalization problem into a transferred graph formulation, which is considered learnable by spectral graph-convolution methods. On the other hand, extrapolation towards out-of-distribution data is a fundamental chal...
C
where $\zeta_{h}^{k}$ and $\xi_{h}^{k}$ are defined as ...
To focus our analysis on the contrastive learning for the transition dynamics, we only consider the setting where the reward function $r_{h}(\cdot,\cdot)$ is known. One might further modify the proposed algorithm to the unknown reward...
The proof idea for Lemma 5.2 is nearly identical to that for Lemma 5.1, extending the action space from $\mathcal{A}$ to $\mathcal{A}\times\mathcal{B}$. We defer the proof to Appendix D.2. Based on Lemma 5.2, we further give the analysis of Theorem 4.2.
Single-Agent MDP. An episodic single-agent MDP is defined by $({\mathcal{S}},\mathcal{A},H,r,\mathbb{P})$, where ${\mathcal{S}}$ is the state space, $\mathcal{A}$ is the action space, $H$ is...
Zero-Sum Markov Game. Our work further studies the zero-sum two-player Markov game that can be defined by $({\mathcal{S}},\mathcal{A},\mathcal{B},H,r,\mathbb{P})$, where ${\mathcal{S}}$ is the inf...
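For readers less familiar with the episodic setting, the optimal values of a finite single-agent MDP $({\mathcal{S}},\mathcal{A},H,r,\mathbb{P})$ can be computed by backward induction over the horizon. The sketch below is a standard textbook computation on a toy instance, not the paper's algorithm; array names and shapes are our own assumptions:

```python
import numpy as np

def backward_induction(r, P):
    """Optimal state values for a finite episodic MDP (S, A, H, r, P).

    r: array (H, S, A) of rewards r_h(s, a)
    P: array (H, S, A, S) of transition probabilities P_h(s' | s, a)
    Returns V of shape (H + 1, S), with the terminal convention V[H] = 0.
    """
    H, S, A = r.shape
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):
        Q = r[h] + P[h] @ V[h + 1]   # (S, A): one-step lookahead values
        V[h] = Q.max(axis=1)         # greedy maximization over actions
    return V

# Toy MDP: 2 states, 2 actions, horizon 2; action 1 pays reward 1,
# and transitions keep the state fixed regardless of the action.
H, S, A = 2, 2, 2
r = np.zeros((H, S, A))
r[:, :, 1] = 1.0
P = np.zeros((H, S, A, S))
for s in range(S):
    P[:, s, :, s] = 1.0
V = backward_induction(r, P)   # V[0] == [2.0, 2.0]
```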
B
We propose combining importance sampling with the multilevel DLMC estimator to address rare events. We employ the time- and state-dependent control developed by (Ben Rached et al., 2023) for all levels in the multilevel DLMC estimator. Numerical simulations confirm a significant variance reduction in the multilevel DL...
In (Ben Rached et al., 2023), we introduced the DLMC estimator, based on a decoupling approach (dos Reis et al., 2023) to provide a simple importance sampling scheme implementation minimizing the estimator variance. The current paper extends the DLMC estimator to the multilevel setting, achieving better complexity tha...
We combine importance sampling with MLMC to reduce the relative estimator variance in the rare-event regime by extending the results in (Ben Rached et al., 2023) to the multilevel setting. Combining importance sampling with MLMC has been previously explored in other contexts, including standard SDEs (Ben Alaya et al., ...
We extend the DLMC estimator introduced by (Ben Rached et al., 2023) to the multilevel setting and propose a multilevel DLMC estimator for the decoupling approach (dos Reis et al., 2023) for MV-SDEs. We include a detailed discussion on the bias and variance of the proposed estimator and devise a complexity theorem, di...
The remainder of this paper is structured as follows. In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) a...
D
All the samples are considered successful in accomplishing the proposed operation. Because the approach time is half that in Section 5.2, the orbital injection error is higher. Thus, the autonomous spacecraft initially had trouble maintaining the orbit within reasonable bounds, which can be observed in the spikes of a few...
In the case of Eros, the results of the Monte Carlo analysis, as illustrated in Figure 6, align with the discussion in Section 5.2. Once again, the spacecraft executes its operation successfully. It is effectively inserted into orbit and completes the transfer smoothly, as depicted in Figure 6a. The histogram in Figur...
The large orbital insertion burn introduces much uncertainty in the estimation, as expected from the analysis in Section 5.2. The errors can spike to the order of a few hundred meters in position and a few centimeters per second in velocity. Nevertheless, they rapidly decrease during the orbital operation, with the mean rem...
A histogram for the budget $\Delta V$ is shown in Figure 5b. The $\Delta V$ consumption ranges from 7.6 to 10.4 m/s, with a mean of 8.41 m/s, most of it again spent in the Monte Carlo-Lambert guidance. Regarding the errors in estimating the spacecraft’s state, they...
We conducted another Monte Carlo simulation for the asteroid Bennu, this time considering a $\sigma_{R}$ of 0.05 (i.e., 5%) to evaluate the feasibility of using a shape model with considerable errors. Figure 10 presents the results for this scenario. ...
C
$$G:X\mapsto\Big(\sum_{j\in[m]}w_{j}A_{j}\big(A_{j}^{\ast}XA_{j}\big)^{-1}A_{j}^{\ast}\Big)^{-1}\;.$$
Figure 1: Empirical performance of the map $G$ (Eq. (1.4)) in comparison with first-order Riemannian optimization methods (using the Manopt implementation (Boumal et al., 2014)): Riemannian Steepest Descent (Riem-SD), Riemannian Trustregions (Riem-TR) and Riemannian Conjugate Gradient (Riem-CG). The Brascamp–L...
The map $G$ resembles the alternate scaling algorithm for Brascamp–Lieb constants (Garg et al., 2018, Alg. 1). The resemblance of both approaches derives from an exploitation of the difference-of-convex structure of problem 1.3 (see also (Weber and Sra, 2023)). However, the Thompson geometry perspective employ...
We will present a detailed non-asymptotic convergence analysis of a fixed-point approach based on iterating (1.4) for simple input data (see Def. 10). The key insight of our work is to analyze the nonlinear map $G$ through the lens of nonlinear Perron–Frobenius theory, which views $\mathbb{P}_{d}$...
The simplicity of the map (1.4) makes a fixed-point approach particularly attractive, since it avoids expensive Riemannian operations such as exponential maps and parallel transports, which are required by standard Riemannian optimization approaches. Empirically, iterating map (1.4) exhibits linear convergence – see F...
D
This section lays the groundwork for extracting multiscale topological signatures from weighted graphs. First, we model higher-order interactions among vertices in the graph using simplicial complexes, similar to the approach used in loop centrality [centrality]. This captures interactions beyond simple pairwise connec...
Many complex networks, such as social networks [socnet] and telecommunication networks [telnet] use graph-based centrality measures to determine the relative significance of nodes or cycles in the network. The derivations of the measures of central tendency in Statistics reflect the idea that a single value can represe...
The simplices of an abstract simplicial complex provide the fundamental building blocks for constructing chains in chain space. Each $k$-simplex acts as a single unit within a $k$-chain. By combining these simplices with coefficients from $\mathbb{Z}/2\mathbb{Z}$ (w...
We dive into the concept of simplicial homology and its application in modeling graphs. Although graphs represent connections between pairs of nodes (edges), simplicial complexes offer a more versatile framework. They allow us to capture not only pairwise interactions but also higher-order relationships between multipl...
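The $\mathbb{Z}/2\mathbb{Z}$ chain arithmetic described above can be made concrete: a chain is a set of simplices, addition is symmetric difference (simplices cancel in pairs), and the boundary of a $k$-simplex is the sum of its $(k-1)$-faces. A minimal sketch with our own helper names, not the paper's code:

```python
from itertools import combinations

def add_chains(c1, c2):
    """Z/2 chain addition: symmetric difference of simplex sets."""
    return c1 ^ c2

def boundary(chain):
    """Boundary of a k-chain over Z/2: the Z/2 sum of the (k-1)-faces
    of every simplex in the chain (faces appearing twice cancel)."""
    out = set()
    for simplex in chain:
        for face in combinations(simplex, len(simplex) - 1):
            out ^= {face}
    return out

# A filled triangle: the boundary of the 2-simplex (0,1,2) is its edges.
edges = boundary({(0, 1, 2)})       # {(0,1), (0,2), (1,2)}
# The boundary of a boundary is empty over Z/2.
assert boundary(edges) == set()
```

The final assertion is the defining property $\partial\circ\partial=0$ that makes homology (cycles modulo boundaries) well defined.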
D
where $\phi$ is the domain set, and $M_{\mu}(\phi)$ and $M_{\sigma}(\phi)$ represent the statistics for the $\phi$...
Remark. In our multi-task learning framework, using independent BN can effectively mitigate the interference between different domains, as shown in Eq. 4. In addition, in our method, most of the modules are shared across all domains, which can sufficiently exploit all samples to reduce the third term in Eq. 4. Hence, our m...
In this paper, our goal is to address the semi-supervised domain generalization (SSDG) task. In this task, a few labeled samples in each domain are available, while most of the samples lack label information; thus a key challenge is how to generate accurate pseudo-labels. Different from the conventional semi-supervis...
Based on the upper bound of the generalization error in Eq. 4, the proposed method needs to satisfy two requirements: 1) most of the modules in the model are shared across all domains, so they can be sufficiently trained by all samples, and 2) the model can reduce the interference of the domain gap between different domains. Theref...
In this part, we will describe our multi-task learning framework. Since there are multiple different domains in the SSDG task, we consider training each domain as an independent task (i.e., the local task for each domain), which can effectively reduce the interference between different domains during pseudo-labeling. ...
A
Apart from the receptive field mismatch, the spatial size of feature maps in ConvNets also significantly affects the transferability of adaptation. Earlier attempts to use adapters to transfer ConvNets usually downsample the feature’s spatial size for memory and parameter efficiency.
In summary, it is crucial to design the architecture and adapting scheme of the PET module computing $\Delta\mathbf{h}$ for ConvNets to have the same spatial size of feature maps and the same receptive field of convolutions for transferability.
For simplicity, we adopt the same activation function used in the backbone as the non-linearity at the middle of the bottleneck. The effective receptive field of the modulated feature maps produced by Conv-Adapter is thus similar to that of the adapted blocks in the backbone.
They can all process the sequential features globally with long-range dependencies as the computing blocks in Transformers. Although it is possible to apply linear layers, or equivalently $1\times 1$ convolutional layers [48], to adapt the feature maps of ConvNets, it is yet intuitive that this might produce in...
A
Online learning methods enable incremental model updates from sequential data, offering greater efficiency and scalability than traditional batch learning. Regularization techniques are widely used in online convex optimization problems [40]. Online Mirror Descent, an extension of Mirror Descent [41], utilizes a gradie...
Note that in the definition of Regret, $\hat{\mathbf{\Theta}}^{(k)}$ and $\hat{\lambda}(t)^{(k)}$...
The left panel of Figure 2 presents the detected changepoints over the temporal axis. In comparison, the PINNs (standard PINNs assume a constant coefficient over time, which performs much worse in changepoint scenarios; to ensure a fair comparison, we modify PINNs to allow for a time-dependent coefficient). This highl...
We introduce a novel method for identifying changepoints in dynamic systems governed by general PDEs dynamics. Our approach works with piecewise-constant time-changing parameters and leverages total variation regularization on the first-order differences of parameters. We also propose an online learning strategy that ...
In this work, we present an innovative methodology that combines changepoint detection with PINNs to address changes and instabilities in the dynamics of PDEs. This approach marks the first exploration into simultaneously detecting changepoints and estimating unknown parameters within PDE dynamics based on observed d...
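The total-variation penalty on first-order parameter differences mentioned above can be sketched numerically. This is a minimal illustration with hypothetical names; in the described method the penalty would be coupled with a PINN loss rather than used alone:

```python
import numpy as np

def tv_penalty(theta, lam=1.0):
    """Total-variation regularizer lam * sum_t |theta[t+1] - theta[t]|.
    For a piecewise-constant parameter sequence this penalizes the
    number and size of jumps (lam is a hypothetical weight)."""
    theta = np.asarray(theta, dtype=float)
    return lam * np.abs(np.diff(theta)).sum()

def changepoints(theta, tol=1e-8):
    """Indices t where the parameter jumps, i.e. |diff| exceeds tol."""
    return np.flatnonzero(np.abs(np.diff(np.asarray(theta, float))) > tol)

# Piecewise-constant toy sequence with one jump at index 2.
theta = [1.0, 1.0, 1.0, 3.0, 3.0]
penalty = tv_penalty(theta)        # 2.0: the single jump of size 2
cps = changepoints(theta)          # [2]
```

Because the penalty is the $\ell_1$ norm of the differences, minimizing it encourages sparse jumps, which is exactly the piecewise-constant structure the method exploits.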
D
For example, $babcd$ and $abcbcd$ are not primitive, because they are generated by $abcd$; but $abcbda$...
change our position in $u$ by more than one letter at a time, and we visit every position of $u$ at least once. If $w=u^{f}$ for $f$ a walk, we say that $u$ generates $w$.
for which such $u$, $v$, $f$ and $g$ exist. Observe that, since $u$ and $v$ are primitive, they feature no immediately repeated letter. So suppose $w$ does, i.e. is of the form $w=xaay$ for some ...
Theorem 1 is relatively surprising: let $u$ and $v$ be primitive words. Now suppose we go for a walk on $u$ and, independently, go for a walk on $v$; recalling the stipulation that the two walks visit every position in the words they apply to, the theorem says that, provided only tha...
(go to the $b$, then turn back to the $a$, then turn again and go to the end). For the converse, suppose that $w$ is not primitive, let $u$ be a generator of $w$ with $|u|<|w|$, and let
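The walk-based definitions above can be checked mechanically: a walk moves by at most one position per step and visits every position, and $u$ generates $w$ when $w_i = u_{f(i)}$. A small sketch with our own helper names (the starting position of a walk is unconstrained, consistent with the $babcd$ example):

```python
def is_walk(f, u_len):
    """A walk on a word of length u_len: consecutive positions differ
    by at most one, and every position 0..u_len-1 is visited."""
    steps_ok = all(abs(f[i + 1] - f[i]) <= 1 for i in range(len(f) - 1))
    return steps_ok and set(f) == set(range(u_len))

def generates(u, w, f):
    """u generates w via walk f, i.e. w = u^f with w[i] == u[f[i]]."""
    return (is_walk(f, len(u)) and len(f) == len(w)
            and all(w[i] == u[f[i]] for i in range(len(w))))

# abcd generates both non-primitive examples from the text:
assert generates("abcd", "babcd", [1, 0, 1, 2, 3])
assert generates("abcd", "abcbcd", [0, 1, 2, 1, 2, 3])
```

The first walk starts at the $b$, steps back to the $a$, then runs to the end; the second doubles back once over $bc$.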
C
As a first example of where we believe that information densities could be used, we consider the regularization of inverse problems. In a large number of practical applications, one regularizes inverse problems by adding a penalty term to the misfit function for the purpose of penalizing undesirable aspects of the rec...
accuracy of measurements on the one hand, and the uncertainty in the recovered parameters of the inverse problem on the other. But, none of the studies mentioned go on to specifically identify the role of information in the spatially variable ability to recover parameters in inverse problems in a systematic way. Let us...
A practical question is how large this factor $\beta$ should be. Many criteria have been proposed in the literature [34], but, in practice, many studies do not use any of these automatic criteria and instead choose values of $\beta$ that yield reasonable results based on trial and error.
One could similarly analyze papers from many other disciplines that use inverse problems. They may be using different words, but a common feature of the many definitions of resolution, adjoints, sensitivity, and identifiability that can be found in the literature, is that most of these notions originate in, and were de...
C
To display similarities between manuscripts, we compute several measures. The similarities are defined by the Euclidean distance between average vectors. For image similarity, we use the image embeddings of all images belonging to a manuscript. Label and description similarity are defined in a similar way, by usi...
On mouseover, the image is shown. A lasso selection can be used on the points to select a set of images for the labeling process. The selected points are increased in size to better highlight them. The re-annotation space is accessed from a button in the left-hand drawer. The current state of the graph and the point c...
The focus of our work lies on image labeling in combination with distant viewing, which is similar to visual annotation systems, interactive labeling and other human-in-the-loop processes. In order to label visual material it is important to see the object of interest, the associated metadata and the possible relation...
The annotation space (Figure 5) allows one to see the current annotation status of a number of images as well as to add and remove labels and zoom in on the details of the images. It is also possible to add new labels that are not part of the dataset and to filter images based on metadata.
We designed visualizations as part of our research thinking process to combine two (partially-)annotated image datasets representing the same genre, but originating in two different research initiatives exhibiting differences and inconsistencies in their vocabulary. The visualizations were used for labeling purposes a...
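The manuscript similarity described earlier, the Euclidean distance between average embedding vectors, can be sketched as follows; the helper name and toy vectors are hypothetical, and smaller distances mean more similar manuscripts:

```python
import numpy as np

def manuscript_distance(embeddings_a, embeddings_b):
    """Euclidean distance between the mean embedding vectors of two
    manuscripts, as in the similarity definition above (hypothetical
    helper; embeddings are stacked row-wise, one row per image)."""
    mean_a = np.mean(np.asarray(embeddings_a, dtype=float), axis=0)
    mean_b = np.mean(np.asarray(embeddings_b, dtype=float), axis=0)
    return float(np.linalg.norm(mean_a - mean_b))

# Toy 2-D embeddings: means are (2, 0) and (2, 3), so distance 3.
a = [[1.0, 0.0], [3.0, 0.0]]
b = [[2.0, 4.0], [2.0, 2.0]]
d = manuscript_distance(a, b)
```

The same computation applies unchanged to label and description embeddings, since only the averaged vectors enter the distance.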
B
Additionally, we report the statistical analysis results of our predicted similarities across different sampling numbers in Table IX. Note that “std” means the standard deviation (average score of all target tasks), and “| ours-avg |” denotes the absolute difference (average score) between the predicted similarities i...
To this end, we first propose a new metric to better predict the prompt transferability, and then improve the Prompt transfer via knowledge Distillation (PanDa for short). Specifically, for ❶, different from the prior metric [10] that simply uses the similarity of prompt parameters as prompt transferability, our propos...
In addition to the above analysis on PanDa approach, we also explore our proposed metric and other prior metrics in detail. Specifically, to the best of our knowledge, there are two existing prior metrics about measuring prompt transferability: 1) the metric (namely Eavg) in SPoT [10] that computes the cosine similarit...
To verify the effectiveness of prompt transferability metric, we follow SPoT [10] and use the metric to predict the Top-3 source tasks for each target task and report the best of Top-3 transfer results in Table III-C3 as well. For reference, the oracle best transfer results are also reported. It can be seen, compared t...
B
While the security industry has reported some insights (Netscout, 2022b, a; Google, 2023; Microsoft Threat Intelligence, 2023), empirical quantitative academic work analysing the link between armed conflicts and cybercrime has been limited. A notable report is by a Czech university’s incident response team, showing ne...
Unlike state-sponsored activities, defacement and DDoS attacks can be systematically collected and measured with reasonable coverage. Defacements are available on online archives (Kurzmeier, 2020), while DDoS attacks can be collected through honeypots (Thomas et al., 2017; Krämer et al., 2015; Nawrocki et al., 2023). ...
One type of attack linked with the low-level cybercrime actors is website defacement (Romagna and van den Hout, 2017), which accounted for around 20% of online attacks in 2014 (Hackmageddon, 2015) and is often organised into discrete campaigns (Maggi et al., 2018). Attackers (or defacers) gain unauthorised access using...
Information warfare has long been part of ‘hybrid’ modern conflicts, especially around the control of communications (Hoffman, 2007; Libiseller, 2023). The enemy’s ability to spread news and propaganda can be degraded by targeting crucial sites, public services, broadcast and telecom infrastructure. Censorship is ofte...
We do not dispute claims about the prevalence of state-sponsored attacks such as malware and phishing (Google, 2023; Microsoft Threat Intelligence, 2023), but rather provide additional perspectives on the role of low-level cybercrime actors. Some cybercrime-related activities may indeed contribute to the war effort. Le...
B
Table 3. The average transferability of a sample when ranking its potential perturbations for the CIFAR10 and ImageNet datasets over 100 trials. Columns represent the various ranking methods, and rows indicate the combination of victim and surrogate model architectures, ensuring that $F_{0}$...
Table 4. The average transferability of a sample when ranking its potential perturbations for the X-Ray and Road Sign datasets over 100 trials. Columns represent the various ranking methods, and rows indicate the combination of victim and surrogate model architectures, ensuring that $F_{0}$...
In Tables 3 and 4 we present the results when ranking images for different $k$ after applying the best perturbation to each image. Table 3 presents the findings for the CIFAR10 and ImageNet datasets, while Table 4 provides the results for the X-Ray and Road Sign datasets. Each cell within these tables indicate...
Figure 5. E1.1 Results - The performance of ranking strategies for the X-Ray (top) and Road Sign (bottom) datasets. Each cell plots the transferability-at-$k$ success rate for adversarial examples when ranked using different strategies across varied surrogate and victim model architectures. Columns represent ...
A
Given that exponential concentration leads to trivial data-independent models, it is important to determine when kernel values will, or will not, concentrate. In this section, we investigate the causes of exponential concentration for quantum kernels.
This concentration of quantum kernels can in broad terms be viewed as a result of the fact that it can be extremely difficult to extract any useful information from the (necessarily) exponentially large Hilbert space (especially in the presence of noise).
In broad terms, the exponential concentration of quantum kernels may be viewed as stemming from the fact that, in certain situations, it can be difficult to extract information from quantum states. In particular, we identify four key features that can severely hinder the information extraction process via kernels. These inclu...
The problem of exponential concentration for the fidelity quantum kernel was first observed in Ref. [6] and later analyzed in Ref. [7, 44, 45] in the context of generalization. Ref. [7] discusses exponential concentration in the context of a projected quantum kernel for a specific example embedding. On the other hand,...
We show that analogous to the causes of BPs for QNNs there are at least three different mechanisms that can lead to the exponential concentration of the encoded quantum states, including (i) the expressivity of the encoded quantum state ensemble, (ii) the entanglement in encoded quantum states with a local observable a...
B
The method for processing the data of our RGB+BB+3DN method is inspired by Izquierdo et al.’s work [10]. In [10], their best-performing model, GoogleNet + LSTM, yields 74.4% for lane change classification. Because 3D models can better extract spatio-temporal features than 2D CNNs, our RGB+BB+3DN method achieves top-1 c...
To investigate whether our models recognize the motion clues and learn spatio-temporal information, we generated class activation maps on X3D-S and RGB video data. The features of the last convolution layer were used to calculate the CAMs. The calculated scores were normalised across all patches and frames before visua...
Table 4.2 illustrates the classification and prediction results on video combined with bounding box data. As can be observed, regardless of the classification or prediction results, the performance of each method does not vary much. The best accuracy is only 3.00% higher than the lowest one, which is very different fr...
The 3D CNNs we employ in this work are initially designed for recognising general human behaviours and trained on human behaviours datasets such as Kinetics-400 and Kinetics-600. These datasets are formed by video clips with relatively high frame rates (25 fps) [3]. Therefore, in order to efficiently extract motion cl...
Two approaches involving seven 3D action recognition networks are adopted to classify and anticipate lane change events on the PREVENTION dataset. For our RGB+3DN method, the lane change recognition problem is formulated as an action recognition task only utilising visual information collected by cameras. The best per...
A
$c_{t}=c_{i}=\ldots=c_{i+k-1}$ and $\operatorname{argmax}\mathcal{A}(p)$...
$k$-anonymity. Let $p_{a}$ and $p_{b}$ be two programs written by developers $a$ and $b$ with $p_{a}\neq p_{b}$...
$p_{a}\equiv p_{b}$ but $p_{a}\neq p_{b}$...
representation and semantics: If we have $p_{a}=p_{b}$, it directly follows that $p_{a}\equiv p_{b}$...
$\mathcal{Y}(p_{a})=\mathcal{Y}(p_{b})\;\Longleftrightarrow\;p_{a}\equiv p_{b}.$
A
$p(c\,|\,\boldsymbol{x})=\dfrac{\exp(\langle e(\boldsymbol{x}),g(\boldsymbol{s}_{c})\rangle/\tau)}{\sum_{i=1}^{C}\exp(\langle e(\boldsymbol{x}),g(\boldsymbol{s}_{i})\rangle/\tau)},\quad c\in\{1,\cdots,C\},$
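The classification rule above, a temperature-scaled softmax over inner products between the image embedding $e(\boldsymbol{x})$ and the class prompt embeddings $g(\boldsymbol{s}_c)$, can be sketched as follows (a minimal illustration; the function name and the default $\tau$ are assumptions, not values from the paper):

```python
import numpy as np

def class_probs(image_emb, class_embs, tau=0.07):
    """p(c | x) = softmax_c( <e(x), g(s_c)> / tau ), as in the equation
    above. image_emb is e(x) with shape (d,); class_embs stacks the
    g(s_c) row-wise with shape (C, d). tau=0.07 is an assumed default."""
    logits = np.asarray(class_embs, float) @ np.asarray(image_emb, float) / tau
    logits -= logits.max()          # shift logits for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Two classes; the image embedding aligns with class 0's prompt.
p = class_probs([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], tau=1.0)
```

A smaller $\tau$ sharpens the distribution toward the best-matching class, which is why the temperature matters when tuning prompt contexts.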
To study the effectiveness of multi-task prompt tuning, our initial attempt involves adapting CoOp by sharing a class-agnostic prompt context across all target tasks and tuning this context on the joint multi-task few-shot training set (see Fig. 1(c)). However, we observe that this approach of hard prompt sharing exhibited...
During model training, CoOp solely focuses on learning the context by minimizing the cross-entropy loss on the training set while maintaining all other parameters constant. Depending on the setup, we can obtain two variants of CoOp: a class-agnostic version where all classes share a common context, and a class-specifi...
Task Feature Extraction. To extract task features, we assign a concise text description as the task name for each task, ensuring that the text effectively captures the task’s essence. For instance, for the DTD dataset [50], the task name “texture classification” is more descriptive than “image classification”. Leverag...
For this aim, we proposed soft context shared prompt tuning, SoftCPT, which adopts a shared meta network across all tasks to generate the context of a task based on its task description text. The meta network extracts task features with a pre-trained language model and transforms the task features to neede...
B