context
stringlengths
250
4.88k
A
stringlengths
250
4.73k
B
stringlengths
250
3.79k
C
stringlengths
250
8.2k
D
stringlengths
250
4.17k
label
stringclasses
4 values
”I was gonna say, like, I think the main reason I like this app [CO-oPS] than other ones, because it’s collaborative, rather than just like entirely one sided, or it’s just like the parents controlling everything. It’s like we are equal.” -T19, Female, 16 years
“Um, so like, an advantage would be like, you could download anything. And like, you could also hide it. So like, even if it [the app installed] was something weird, no one would know. You could also have your privacy for apps that you know, you don’t want people to know about.” -T16, Female, 14 years
”Because you have to physically go into settings and change those, because they [Parents] may not know how to go into the place and they can’t do it. If you’re able to do it from here [CO-oPS], that would be a thing like I need in this situation.”-T5, Male, 16 years
“I was puzzled when you [Interviewer] pointed out that I can’t deny access. You know, when I looked at his [Teen’s] apps, I couldn’t actually turn on or off stuff [change Teen’s app settings]. I could see what was there and what was happening, but I can’t change it. So my gut reaction was like, Oh, I don’t like that.”...
”I would mostly use it [CO-oPS] to check my family’s apps because that’s one way specifically to point at the permissions that they’re allowing or they’re not allowing to apps? Because, I don’t know, most of the time, how they installed those apps, what they’re allowing what, they’re sharing what. Because I see a coup...
B
where Ez,r={ξ:f⁢(ξ)1/D⁢d⁢(z,ξ)≤r}subscript𝐸𝑧𝑟conditional-set𝜉𝑓superscript𝜉1𝐷𝑑𝑧𝜉𝑟E_{z,r}=\{\xi:f(\xi)^{1/D}d(z,\xi)\leq\sqrt{r}\}italic_E start_POSTSUBSCRIPT italic_z , italic_r end_POSTSUBSCRIPT = { italic_ξ : italic_f ( italic_ξ ) start_POSTSUPERSCRIPT 1 / italic_D end_POSTSUPERSCRIPT italic_d ( italic_z ,...
Figure 11: Dimension-1 persistence diagrams of different filtration functions for the noisy Voronoi dataset with confidence bands. Blue points are points in the dimension-1 empirical persistence diagram. The green solid lines and the orange dashed lines are the confidence bands constructed by subsample and oracle boot...
Consider, again, the additive noise-corrupted “Antman” two-square dataset on the right of Figure 5. The persistence diagrams for the distance-to-measure filtration and the RDAD filtration are shown in Figure 8 with different confidence bands. Note that in both figures the bands constructed by oracle bootstrapping and b...
Figure 2: The distance filtration and its persistence diagrams. In the first subplot is a sample of points near a circle. Unions of balls centered at these points with different radii are shown the subsequent subplots. The last subplot shows the persistence diagrams of these unions of balls. The red diamond points corr...
Figure 8: Dimension-1 persistence diagrams of different filtration functions for the additive noise-corrupted “Antman" two-square dataset with confidence bands. Blue points are points in the dimension-1 empirical persistence diagram. The green solid lines and the orange dashed lines are the confidence bands constructe...
D
As for subject variation problem, works such as (Chen et al. 2013) provide a solution for enhancing the generalizability of AU recognition model by training personalized AU classifiers for each subject and works such as (Zen et al. 2016; Wang and Wang 2018) make attempt to relieve the subject-related prediction bias t...
Although these works have realized that the data distribution of training subjects differs from that of unseen subjects, they are still based on the assumption that the data distribution of source and target domains shares some similarities. In contrast, we formulate the causalities among variables in AU recognition ta...
We formulate subject variant problem in AU recognition using an AU causal diagram to explain the whys and wherefores. To the best of our knowledge, this is the first work to explain this problem with the help of causal inference theory and make attempt to remove the effect caused by subject variation via causal interv...
To this end, we formulate subject variation problem by constructing a causal diagram to analyze the causalities among facial images, subjects, latent AU semantic relations, and estimated AU occurrence probabilities. Our causal inference framework not only fundamentally explains how subject-specific AU semantic relatio...
To answer the whys and wherefores of subject variation problem, we use a structural causal model (Pearl et al. 2000) to illustrate the causalities among variables in AU recognition models. As shown in Fig. 2, there are four variables involved in our AU causal diagram, which are facial images X𝑋Xitalic_X, subjects S𝑆...
A
As shown in Fig. 1(b), the basic idea of bodyless block propagation (BBP) is that only the blockheader is transmitted between nodes, and a new block is pre-validated so that the in situ block validation during the block propagation is just a simple comparison of the pre-computed global state and the global state embedd...
When a node receives the new blockheader, for validation, the node only needs to compare the global state in the blockheader and the global state stored in the pre-validation module. However, the ideal scenario may not arise automatically in practice. We need to overcome several technical challenges for that to happen.
To implement BBP, we propose to extend the architecture of Ethereum Tx-Pool implementation by introducing a pre-packed blockbody (PPB) module and a pre-validation module, as depicted in Fig. 3. The PPB contains a selected subset of transactions from the Tx-Pool, and it is used to generate the next block by the mining ...
As shown in Fig. 1(b), the underpinning of BBP is that each node anticipates the transactions in the upcoming next block and pre-packs a blockbody based on the anticipated transactions. The node pre-executes and pre-validates the anticipated transactions before the next block is mined. To the extent that the anticipat...
To improve BBP in Ethereum, we put forth a pre-validation algorithm to accelerate block propagation. For block validation in Ethereum and most blockchains that support smart contracts, nodes need to execute the transactions to compute a updated global state. Since the execution order of the transactions matters to the ...
B
Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2...
More specifically, partial observability poses both statistical and computational challenges. From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property. In particular, predicting the future often involves inferring the distribution of the ...
In the contexture of reinforcement learning with function approximations, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinfo...
Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2...
Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate...
A
The effectiveness of AI-based automated speech therapy tools depends on their performance compared to the conventional mode of speech therapy provided by SLPs. Moreover, automated speech therapy tools providing wrong feedback can be disastrous to children’s speech improvement. Few studies (4 out of 24) compared the re...
There were 91 unique authors identified from the included studies. The VOSviewer software was used to calculate the most impactful authors, generate co-authorship clusters, and perform co-occurrences of keyword analysis (Van Eck NJ, \APACyear\bibnodate). All the authors were counted irrespective of the authorship orde...
This systematic literature review was based on the PRISMA Statement to analyze papers on AI-based automated speech therapy tools for persons with SSD. We extracted relevant data from the included articles based on four predefined research questions: Types of SSD addressed; Level of autonomy achieved by such tools; Mode...
Furthermore, we found that articulation disorder was the most frequent disorder addressed by the included studies, with three studies dedicated to them. This may be due to the fact that articulation disorder are commonly found in persons with other SSD (Flipsen Jr, \APACyear2015). The results show that most studies aim...
We conducted this systematic literature review based on a sample of 24 out of 678 research papers deriving from Scopus, IEEEXplore, and ACM DL databases. Exciting insights and trends emerged from our analysis of these papers. In recent years, we observed an increasing interest in AI-based auto- mated speech therapy to...
D
We consider the case that some points may have significantly higher noise perturbations than others. In this setting, we randomly select some points, and we generate some points with c⋅σ⋅𝑐𝜎c\cdot\sigmaitalic_c ⋅ italic_σ coordinate-wise variance, where c=8𝑐8c=8italic_c = 8 for our experiments (recall that the noise...
This gives us initial theoretical evidence that in the random-mixture model with outliers, our simple outlier detection method can detect outliers when a non-negligible fraction of the points are outliers. Next, we use simulations of our model to test the efficacy of our outlier detection method and its impact on the ...
This shows that our outlier detection method is adept at detecting different kinds of outliers, outperforming popular outlier detection tools in some settings, and being competitive to them in others. We also observe that as the overall noise in the dataset increases, the performance of our method compared to the othe...
Next, we study the usefulness of our outlier detection method in these datasets. Unlike our simulations, there is no ground-truth labeling for outliers. Rather, we assume that in each community (as defined by the labels provided with the dataset), some points may behave more like an outlier, in that they may be a mixt...
Outlier detection has been an active area of study in unsupervised learning, providing several influential algorithms. In a recent, comprehensive benchmarking of outlier detection algorithms, [HHH+22] compared the performance of several unsupervised learning algorithms on different datasets. They found that for unsuper...
B
We formulate the dialog with NRsubscript𝑁𝑅N_{R}italic_N start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT rounds of QA interactions. Specifically, given the visual input data with partially missing visions, the AI system is given NRsubscript𝑁𝑅N_{R}italic_N start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT chances to ask ...
Fig. 1: The overall architecture of our proposed SI-Dial framework. We first obtain the preliminary objects from the object detector based on the incomplete visual input, and propose to conduct an interactive dialog process. Note that the dashed lines denote the operations only after the dialog is completed) for the f...
Having obtained the interactive dialog xh⁢i⁢s,NRsubscript𝑥ℎ𝑖𝑠subscript𝑁𝑅x_{his,N_{R}}italic_x start_POSTSUBSCRIPT italic_h italic_i italic_s , italic_N start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT end_POSTSUBSCRIPT as the supplementary source to missing visual input, we update the preliminary objects O′superscri...
The results for the generated scene graphs from incomplete images incorporated with the proposed SI-Dial are also presented in Table 1. We observe that dialog can serve as an effective supplementary information source in case of missing visions, especially for the semantically masked visual input with most severe missi...
Overall speaking, our proposed SI-Dial takes the preliminary object representations (i.e., node and edge features from the object detector) as input, and outputs the updated representations with supplementary information incorporated from the dialog interactions:
D
In the one-dimensional facility location problem, agents are located on the real line, and a planner’s goal is to build one or more facilities on the line to serve the agents. The cost of an agent is her distance to the nearest facility. The problem asks to locate facilities to minimize the total cost of all agents (th...
We use the same entrance fee function as that in the proof of Theorem 10 except that we set the entrance fee at the location of agent 00 to 00. Then in order to get an approximation ratio less or equal to 3333, one of the two facilities must always be located in the position of the new agent 00 with probability 1111. T...
A key conversion in the models is that now each agent becomes strategic and may misreport her position to decrease her cost. These new problems are called the facility location games, which require to design mechanisms that elicit the true positions of agents and output facility locations to (approximately) minimize th...
In the one-dimensional facility location problem, agents are located on the real line, and a planner’s goal is to build one or more facilities on the line to serve the agents. The cost of an agent is her distance to the nearest facility. The problem asks to locate facilities to minimize the total cost of all agents (th...
The position of each agent is private information. We want to design strategyproof mechanisms that guarantee that the agents report their true positions and locate the facilities based on the reports such that either the total or the maximum cost approximates the optimal value of the corresponding optimization problem ...
B
Connected planar NSF (K4subscript𝐾4K_{4}italic_K start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT,gem,W4subscript𝑊4W_{4}italic_W start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT,butterfly,K1,5subscript𝐾15K_{1,5}italic_K start_POSTSUBSCRIPT 1 , 5 end_POSTSUBSCRIPT,H𝐻Hitalic_H,snail,press,C4,…,Cksubscript𝐶4…subscript𝐶𝑘C_{4},\dots,C...
There is some more bibliography to add to the already vast literature [3, 10, 23, 31, 33, 35] on dominating induced matchings. As we have seen before the papers on perfect edge domination are less frequent. There is a paper [16] where the authors describe ILP formulations for the PED problem, together with some experim...
We wonder if there is some algorithmic relation between efficient and perfect edge domination. More specifically, we remark that there are graph classes which admit polynomial time solutions for solving the efficient edge domination problem while being hard for solving the perfect edge domination problem. However, we ...
We say that G=(V,E)𝐺𝑉𝐸G=(V,E)italic_G = ( italic_V , italic_E ) is a neighborhood star-free graph, NSF graph for short, if for every vertex v∈V𝑣𝑉v\in Vitalic_v ∈ italic_V with degree at least 2, G⁢[N⁢[v]]𝐺delimited-[]𝑁delimited-[]𝑣G[N[v]]italic_G [ italic_N [ italic_v ] ], is not a star. In other words, every ...
e∈E𝑒𝐸e\in Eitalic_e ∈ italic_E is dominated exactly by one edge then E′superscript𝐸′E^{\prime}italic_E start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT is called an efficient edge dominating set (EED). On the other hand, if we relax the definition, and let each of e∈E∖E′𝑒𝐸superscript𝐸′e\in E\setminus E^{\prime}italic_...
B
Without loss of generality we can additionally assume that rank⁢(S~)=nrank~𝑆𝑛\mathrm{rank}(\tilde{S})=nroman_rank ( over~ start_ARG italic_S end_ARG ) = italic_n, since otherwise (11) can be reduced to an equivalent form over a smaller set of parameters 11.
For each 𝒮(h)superscript𝒮ℎ\mathcal{S}^{(h)}caligraphic_S start_POSTSUPERSCRIPT ( italic_h ) end_POSTSUPERSCRIPT, intersect the set with the associated mp-QP partition from (ii). Assemble a solution over the whole of 𝒮𝒮\mathcal{S}caligraphic_S from the resulting functions.
The problem (11) now looks like a standard mp-QP whose solution can be computed parametrically in x𝑥xitalic_x. Putting aside for the moment the issue of possible degeneracy, this parametric solution could even be computed over the whole of 𝒮𝒮\mathcal{S}caligraphic_S. Our proof will therefore proceed as follows (see ...
Note that standard methods for proving that the parametric solution of (10) in x𝑥xitalic_x is PWA continuous can not be applied because the right hand side S⁢(x)𝑆𝑥S(x)italic_S ( italic_x ) of the inequalities is not affine. We can be sure, however, that the problem has a solution for any x∈𝒮𝑥𝒮x\in\mathcal{S}itali...
The LICQ is sufficient to exclude the case where more than m𝑚mitalic_m constraints are active at a given feasible point v𝑣vitalic_v, thereby avoiding primal degeneracy 5 §4.1.1. For any region 𝒮(h)⊆𝒮superscript𝒮ℎ𝒮\mathcal{S}^{(h)}\subseteq\mathcal{S}caligraphic_S start_POSTSUPERSCRIPT ( italic_h ) end_POSTSUPERSC...
B
We present diverse experiments in which the ODE parametrizes a rigid-body transformation of the foreground objects. We emphasize that conceptually our model is not limited to rigid-body motions, and that it can directly be extended to other cases, for example to nonlinear transformations for modelling soft-body dynamic...
Overall, our per-scene model combines a unique set of favorable properties, including the interpretability of physical parameters, the ability to perform long-term predictions, and the synthesis of high-resolution images. We believe that our work may serve as inspiration for follow-up works on physics-based machine lea...
An exeption is the work by Song et al. that use the solution of an ODE as regularization of a motion network to crate dynamic NeRFs [47]. In contrast to our work, this approach does not enforce the physics to be exact. While the majority of works on implicit representations focuses on shape, [45] show the generality of...
Several of the previously mentioned works model physical systems using Lagrangian or Hamiltonian energy formulations [31, 16, 11, 10, 53, 59, 29, 60], or other general physics models [27]. While those are a elegant approaches that allow the model to adapt to different physical systems, they have two drawbacks. First, ...
In this work we presented a solution for identifying the parameters of a physical model from a video while also creating a photorealistic representation of the appearance of the scene objects. To this end, we proposed to combine neural implicit representations and neural ODEs in an analysis-by-synthesis fashion. Unlike...
A
Quantum communication networks (QCNs) utilize quantum mechanics principles to enhance information transfer. QCNs transmit data using quantum states that are entangled and can exist in a superposition of multiple states simultaneously, offering greater efficiency than classical networks [1]. However, these quantum stat...
Here, the stored quantum vectors are initialized to different clusters either in an arbitrary fashion or by utilizing efficient heuristic approaches. Then, multiple iterations are performed such that in each iteration, the goal is to minimize the loss function in (2), which ensures that each vector is assigned to the c...
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework, dubbed the framework. This framework draws upon recent advancements in two key quantum information science areas. First, it utilizes high-dimensional quantum information and to extract underlying structures of classi...
The majority of QCN models optimize the quantum resource allocation and network overall performance by embedding classical data into quantum states that are shared over quantum channels between distant nodes [3, 4, 5, 6]. Additionally, numerous approaches have been proposed to develop resource-efficient QCNs, including...
In contrast to conventional QCN frameworks, where the receiver typically aims to execute quantum gates and measurements for the precise reconstruction of the embedded data structure within quantum states, our framework distinguishes itself. Here, the receiver’s goal is to draw specific logical conclusions [13], which m...
C
We consider the case that the system is unconcealable. Then, under some system behavior, the eavesdropper can infer the occurrences of the secret events by utilizing its knowledge of the system model and by analyzing the observable behavior of the system.
The first condition requires that, as the interface of the system, the defensive function should be able to react to every observable sequence of events that can be generated by the system, such that defensive actions (deletions, insertions, or replacements) can be utilized.
The defensive function proposed in this section can alter observable output events of the system G𝐺Gitalic_G by deletions, insertions, or replacements. The problem of enforcing concealability of the system aims to determine whether the defensive function is C𝐶Citalic_C-enforcing, i.e., given constraints in terms of h...
(ii) We consider the problem of concealability enforcement when the system is unconcealable, using a so-called defensive function placed at the output of the system to obfuscate the eavesdropper observations by appropriately modifying the original observations generated by the system (via event deletions, insertions, o...
Then, the notion of C𝐶Citalic_C-enforceability is introduced, which characterizes the ability of a defensive function to manipulate the observations of output events such that the occurrences of secret events can be concealed from the eavesdropper regardless of system activity.
B
Indeed, an RL agent can automatically grasp relevant environment statistics by playing against the environment and eventually discover a strategy that can provide the best long-term reward. However, several challenges need to be overcome if it is adopted.
The proposed approach can coordinate the interference among concurrent backhaul and access links, promptly monitor and refill IAB-node buffers to prevent downstream access transmission starvation, adjust transmission beams to serve mobile UEs, and adapt to intermittent blockages caused by randomly moving 3D obstacles.
We provide an adaptive MARL-based framework that supports real-time operations and takes into account (1) physical constraints, including link interference, duplexing modes (i.e., full-duplex, half-duplex), hardware limitations, etc., (2) the amount of data cached in IAB-nodes’ buffers, to avoid multi-hop flow starvati...
The sector-based access transmissions in (1) allow to reduce the impact of the varying UE locations on the action policies, making them more stable and robust against mobility. In addition, this permits the same action space to remain widely applicable even if the number of UEs in the service area may not be constant....
First, access links are intermittently available due to UE mobility, which makes centralized single-agent RL (SARL) approaches infeasible. Indeed, their decision space is based on a set of potential concurrent transmissions (i.e., compatible links), which unfortunately changes as users randomly move around. Second, ran...
D
Tetenov (2016) pursued a similar analysis of Phase III incentives, and concluded that a Type I error of up to 15% is incentive-aligned for an average drug. In contrast, when we consider unusually low Phase III costs and unusually profitable drugs, we find that the FDA could already be at risk of violating incentive-ali...
The primary implication of this analysis is that if the standard of evidence required by the FDA is loosened, it may cease to be incentive-aligned for the more profitable drugs. The right standard of evidence for the FDA is a source of ongoing debate, and some call for much looser protocols. For example, the Bayesian ...
There are important limitations to the above analysis. In particular, our calculation omits additional regulatory checks against approving ineffective drugs and punishments for agents who intentionally run clinical trials for drugs they believe to be ineffective. These considerations include: additional evidence standa...
Case 2: large profit. Suppose that companies who receive approval make $1 billion in profit, 100 times their investment. In this case, agents of type θ=0𝜃0\theta=0italic_θ = 0 would choose to run trials: their expected profit from seeking approval is $40 million. On average, 5% of such agents would receive approval, s...
We begin with a stylized example to highlight the interaction between an agent’s incentives and the principal’s statistical protocol. Suppose there are two types of pharmaceutical companies: companies with ineffective drugs (θ=0𝜃0\theta=0italic_θ = 0) and companies with effective drugs (θ=1𝜃1\theta=1italic_θ = 1). F...
B
Concerning PPIR(FHE)-v1, we note the uneven computation time and bandwidth usage between clients, due to the asymmetry of the encryption operations and communication protocol (Figure 1). PPIR(FHE)-v2, which shares the computational workload between the two parties and avoids DotProduct, allows obtaining an important sp...
Whole body PET data: affine registration (SSD). Table 2 compares Clear, PPIR(MPC), PPIR(FHE)-v1 and v2, showcasing metrics resulting from the affine transformation of whole-body PET images. Notably, registration through PPIR(MPC) yields negligible differences compared to Clear in terms of the number of iterations, int...
We demonstrate and assess the different versions of PPIR illustrated in Section 3 on a variety of image registration problem, namely: (i) SSD for rigid transformation of point cloud data, (ii) SSD with linear and non-linear alignment of whole body positron emission tomography (PET) data; (iii) SSD and MI for mono- and ...
Brain MRI and PET data. The registration of brain gray matter density images was performed by non-linear registration based on SSD, without gradient approximation, based on a cubic spline model (one control point every five pixels along both dimensions), with multiresolution steps r1subscript𝑟1r_{1}italic_r start_POST...
Brain MRI data and whole body PET data: non-linear registration (SSD). Table 3, comparing Clear and PPIR(MPC), PPIR(FHE)-v1 and v2, showcases the metrics resulting from spline-based non-linear registration between grey matter density images without the application of gradient approximation. Additionally, the table incl...
D
MEKD outperforms most methods in the task of out-of-domain distillation, while DB3KD achieves higher performance due to the use of robust labels [54]. However, DB3KD leads to a very high data exchange cost between the server and client, since it requires multiple queries to find a mixed image located in the decision bo...
DCGAN composes of a generator realized by transposed convolution layer and a discriminator realized by an ordinary convolution layer, which greatly reduces the number of network parameters and improves the image generation effect. As an extension of our method, we believe that generative models of different architectur...
Different from aligning logits directly, we theoretically provide a new optimization direction from logits to cell boundaries, and propose a new method of MEKD. Taking a generator as an inverse mapping of the teacher function does not leak information about the internal structure or parameters of the teacher, because i...
It is equivalent to a teacher assistant transferring the teacher’s knowledge to the student. Fig. 2 illustrates the architecture of MEKD. We freeze the generator and graft it behind the teacher and student model in the same way, using the softened logits of both models as the generator input.
Regardless of the method used, the essence of KD is to learn the mapping function of the teacher model from input to output, i.e., fTsubscript𝑓𝑇f_{T}italic_f start_POSTSUBSCRIPT italic_T end_POSTSUBSCRIPT. However, it is hard to deduce the mapping function from the existing parameters of the teacher model.
B
The second reason is efficient compression of smooth functions. It is known that for functions with m𝑚mitalic_m continuous derivatives n𝑛nitalic_n-th coefficient is O⁢(n−m)𝑂superscript𝑛𝑚O(n^{-m})italic_O ( italic_n start_POSTSUPERSCRIPT - italic_m end_POSTSUPERSCRIPT ) for both Chebyshev [MH02, Theorem 5.14] and ...
All these properties combined make trigonometric and Chebyshev polynomials especially suitable for PDE solutions by spectral methods [Boy01] and adaptive lossless computations with functions [Tre07]. The later goal is fully realized in the Chebfun software.444https://www.chebfun.org Chebfun demonstrates that computati...
It is important to understand Equation 3 implies that SNO only realizes mapping between two functions given by truncated series. If one needs to compute these functions on the finer grid, Chebyshev or trigonometric interpolation should be use. In the same vein, Chebyshev/trigonometric smoothing (the chopping of the ser...
First, the output of the neural operator is a neural network, i.e., essentially a black-box function. In classical numerical methods such as FEM [Cia02], spectral methods [Boy01] and others, the parametrization allows for extracting bounds on function, its derivatives, and any other local or global information in a co...
Our solution to both of outlined problems is to detach the process of function approximation (interpolation) from the mapping between functional spaces, and explicitly fix the highest possible resolution. To do that, for both domain and codomain of neural operator we consider functions represented by finite series of t...
A
However, the computation complexity of TaT map will become intractable when it comes to a large feature map. Assuming the spatial dimensions of the feature map are H𝐻Hitalic_H and W𝑊Witalic_W, this means the computation complexity will reach 𝒪⁢(H2⋅W2)𝒪⋅superscript𝐻2superscript𝑊2\mathcal{O}(H^{2}\cdot W^{2})caligr...
As our method computes the correlation between feature spatial locations, it might become intractable when feature maps are large. To this end, we extend our pipeline in a two-step hierarchical fashion: 1) instead of computing correlation of all spatial locations, we split the feature maps into several groups of patche...
We address the conundrum by the proposed anchor-point distillation. As shown in Figure 2 (c), we summarize the local area to compact representation, referred to anchor, within a local area that is representative to describe the semantic of the given area, forming the new feature map of smaller size. Since the new feat...
Figure 2: Illustration of our framework. (a) Target-aware Transformer. Conditioned on the teacher feature and the student feature, the transformation map Corr. is computed and then applied on the student feature to reconfigure itself, which is then asked to minimize the L22{}_{2}start_FLOATSUBSCRIPT 2 end_FLOATSUBSCRI...
Therefore, we propose a hierarchical distillation approach to address this large feature map limitation. It contains two steps: 1) patch-group distillation that splits the entire feature maps into smaller patches, so to distill local information from the teacher to the student; 2) we further summarize the local patches...
D
Simultaneous Embedding. Some recent algorithms for simultaneous embedding/multiview embedding include Multiview Stochastic Neighbor Embedding (m-SNE) [39, 40], based on a probabilistic framework that integrates heterogeneous features of the dataset into one combined embedding and Multiview Spectral Embedding (MSE) [38]...
The related work section is organized as follows. First, we review several dimensionality reduction algorithms that are widely used in visualization. Second, we delve into the fundamentals of t-SNE, to provide the needed background information needed for its generalization, ENS-t-SNE. Third, we review algorithms for su...
Multi-view Data Visualization via Manifold Learning [31], proposes extensions of t-SNE, LLE and ISOMAP, for dimensionality reduction and visualization of multiview data by computing and summing together the gradient descent for each data-view. Multi-view clustering for multi-omics data using unified embedding [30] uses...
In Fig. 4, we compare MPSE embedding of Palmer’s Penguins dataset to ENS-t-SNE (Fig. ENS-t-SNE: Embedding Neighborhoods Simultaneously t-SNE) using the same variables. In the first view (Fig. 4(b)), blue and orange points are mixed, and in the second view (Fig. 4(c)), squared and circled shapes are mixed. ENS-t-SNE, h...
Dimension Reduction. A wide variety of dimension reduction techniques abound: Principal Component Analysis (PCA) [23], Multi-Dimensional Scaling (MDS) [32], Laplacian Eigenmaps [6], t-Distributed Stochastic Neighbor Embedding (t-SNE) [28], Uniform Manifold Approximation and Projection (UMAP) [29]. These techniques atte...
B
}_{h}({a^{h+k}_{h-\ell}})\cup\bigl{\{}{{}^{t}}{o^{h+k+1}_{h-\ell}}\bigr{\}}.caligraphic_D start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT ( italic_a start_POSTSUPERSCRIPT italic_h + italic_k end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_h - roman_ℓ end_POSTSUBSCRIPT ) ...
An Overview of Embedding Learning. We now summarize the learning procedure of the embedding. First, we estimate the density mappings defined in (3.6) and (3.7) under the true parameter θ∗superscript𝜃\theta^{*}italic_θ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT based on interaction history. Second, we estimate the Be...
We now fit the density mappings based on the density estimation oracle. For each step h∈[H]ℎdelimited-[]𝐻h\in[H]italic_h ∈ [ italic_H ] and action sequence ah−ℓh+k∈𝒜k+ℓ+1subscriptsuperscript𝑎ℎ𝑘ℎℓsuperscript𝒜𝑘ℓ1{a^{h+k}_{h-\ell}}\in\mathcal{A}^{k+\ell+1}italic_a start_POSTSUPERSCRIPT italic_h + italic_k end_POSTS...
Upon collecting the data, we follow the embedding learning procedure and fit the density mappings for the estimation of Bellman operator. In practice, various approaches are available in fitting the density by observations, including the maximum likelihood estimation (MLE), the generative adversial approaches, and the ...
where the dataset 𝒟htsubscriptsuperscript𝒟𝑡ℎ\mathcal{D}^{t}_{h}caligraphic_D start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT is updated based on the data collection procedure described in §4.1. Meanwhile, we define the following density mappings for the estimation of...
C
Moreover, both the observations and rewards depends on the state Shsubscript𝑆ℎS_{h}italic_S start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT and such dependency is depicted in red. We would like to highlight that Shsubscript𝑆ℎS_{h}italic_S start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT affects both Ahsubscript𝐴ℎA_{h}i...
In specific, based on the confounded dataset, we first constructing a novel confidence region CRπ⁢(ξ)superscriptCR𝜋𝜉\textnormal{CR}^{\pi}(\xi)CR start_POSTSUPERSCRIPT italic_π end_POSTSUPERSCRIPT ( italic_ξ ) for 𝐛πsuperscript𝐛𝜋\mathbf{b}^{\pi}bold_b start_POSTSUPERSCRIPT italic_π end_POSTSUPERSCRIPT based on leve...
We propose a novel policy optimization algorithm which leverages proximal causal inference for handling the confounding bias, and adopts pessimism to tackle the distributional shift. The core of our algorithm is a coupled sequence of confidence regions constructed via proximal causal inference and minimax estimation, w...
Furthermore, in order to handle the distributional shift between behavior policy and target policies, we construct a sequence of confidence regions for {bhπ}h=1Hsuperscriptsubscriptsuperscriptsubscript𝑏ℎ𝜋ℎ1𝐻\{b_{h}^{\pi}\}_{h=1}^{H}{ italic_b start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT start_POSTSUPERSCRIPT itali...
More concretely, P3O involves two components — policy evaluation via minimax estimation and policy optimization via pessimism. Specifically, to tackle the distributional shift, P3O returns the policy that maximizes pessimistic estimates of the values obtained by policy evaluation.
D
There are numerous methods for solving constrained optimization problems, such as projection-based methods, penalty methods, augmented Lagrangian methods, and sequential quadratic programming (SQP) methods (Nocedal2006Numerical). This paper particularly considers solving constrained stochastic optimization problems vi...
On the other hand, a growing body of literature leverages optimization procedures to facilitate online inference, starting with Robbins1951stochastic; Kiefer1952Stochastic and continuing through Robbins1971convergence; Fabian1973Asymptotically; Ermoliev1983Stochastic. To study the asymptotic distribution of stochastic ...
Given the prevalence of streaming datasets in modern problems, offline methods that require dealing with a large batch set in each step are less attractive. It is desirable to design fully online methods, where only a single sample is used in each step, and to perform online statistical inference by leveraging those me...
The asymptotics of second-order Newton’s methods for unconstrained problems have recently been investigated. Bercu2020Efficient designed an online Newton’s method for logistic regression, and Boyer2023asymptotic generalized that method to general regression problems. Compared to first-order methods that often consider...
In particular, the StoSQP method computes a stochastic Newton direction in each iteration by solving a quadratic program, whose objective model is estimated using the new sample. Then, the method selects a proper stepsize to achieve a sufficient reduction on the merit function, which balances the optimality and feasibi...
A
This condition was discussed by Bercovier and Pironneau in [2], which turned out as an enabler of (3), see [15]. We will refer to this inf-sup condition as the discrete BP condition. In [2] the proof was given for k=2𝑘2k=2italic_k = 2 and for meshes made of rectangles for d=2𝑑2d=2italic_d = 2 and bricks for d=3𝑑3d=3...
The rest of the paper is organized as follows. In Section 2 the technique of T𝑇Titalic_T-coercivity is discussed, which provides important auxiliary results for Section 3, which is the main section of the paper and contains the analysis of the discrete inf-sup conditions. In Appendix A known results on the continuous...
A popular class of finite element methods for discretizing the Stokes problem is the generalized Taylor-Hood family Pksubscript𝑃𝑘P_{k}italic_P start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT–Pk−1subscript𝑃𝑘1P_{k-1}italic_P start_POSTSUBSCRIPT italic_k - 1 end_POSTSUBSCRIPT on triangular/tetrahedral meshes with cont...
Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family Qksubscript𝑄𝑘Q_{k}italic_Q start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT–Qk−1subscript𝑄𝑘1Q_{k-1}italic_Q start_POSTSUBSCRIPT italic_k - 1 end_POSTSUBSCRIPT on quadrilateral/hexahedral meshes focused on the case...
The LBB condition is essential for the well-posedness of the Stokes problem. Its verification for the case (a) is typically done by an indirect proof using a compactness argument. We are aware of only one reference, namely [1], which contains a proof for the case |ΓN|>0subscriptΓ𝑁0|\Gamma_{N}|>0| roman_Γ start_POSTSUB...
A
Statistical properties of natural images that have been well-studied include shift-invariance, scale-invariance, high spatial auto-correlation and preponderance of certain colors, as well as spatial sparseness of edges, [18, 19, 20, 21]. Shift-invariance, which is a form of stationarity of signals, arises due to the a...
We adjusted the stride (or patch size) in the initial convolutional layers in all WaveMix models that handled high-resolution images to ensure that resolution of feature maps before it reached WaveMix blocks was always less than 256 for classification and 1024 for segmentation. We only used strided convolutions in the ...
As shown in Figure 1 (a) and (b), the macro-level idea behind the proposed framework is to stack N𝑁Nitalic_N (a hyperparameter) similar WaveMix blocks that are fully convolutional (by design) in both spatial dimensions and maintain the spatial resolution of the feature maps across the blocks. While some CNNs are full...
Among the different types of mother wavelets available, we used the Haar wavelet (a special case of the Daubechies wavelet [36] , also known as Db1), which is frequently used due to its simplicity and faster computation. Haar wavelet is both orthogonal and symmetric in nature, and has been extensively used to extract ...
Two-dimensional discrete wavelet transform (2D-DWT) has been extensively researched to exploit various properties of images for multiple applications, including denoising [23], super-resolution [24], recognition [25], and compression [26]. Features extracted using wavelet transforms have also been used extensively with...
D
rows of R¯¯𝑅\bar{R}over¯ start_ARG italic_R end_ARG and the columns of σ⁢(R¯)𝜎¯𝑅\sigma(\bar{R})italic_σ ( over¯ start_ARG italic_R end_ARG ) belong to B′′superscript𝐵′′B^{\prime\prime}italic_B start_POSTSUPERSCRIPT ′ ′ end_POSTSUPERSCRIPT and J′′superscript𝐽′′J^{\prime\prime}italic_J start_POSTSUPERSCRIPT ′ ′ end_...
ō-regular at point η𝜂\etaitalic_η and then returns Y𝑌Yitalic_Y such that 𝒪Y,Σ=0subscript𝒪𝑌Σ0\mathcal{O}_{Y,\Sigma}=0caligraphic_O start_POSTSUBSCRIPT italic_Y , roman_Σ end_POSTSUBSCRIPT = 0 and ∇Y,Σ(η)≠0subscript∇𝑌Σ𝜂0\nabla_{Y,\Sigma}(\eta)\neq 0∇ start_POSTSUBSCRIPT italic_Y , roman_Σ end_POSTSUBSCRIPT ( itali...
Output: “failed” or a set Y𝑌Yitalic_Y of rows such that 𝒪Y,Σ=0subscript𝒪𝑌Σ0\mathcal{O}_{Y,\Sigma}=0caligraphic_O start_POSTSUBSCRIPT italic_Y , roman_Σ end_POSTSUBSCRIPT = 0 and ∇Y,Σ(η)≠0subscript∇𝑌Σ𝜂0\nabla_{Y,\Sigma}(\eta)\neq 0∇ start_POSTSUBSCRIPT italic_Y , roman_Σ end_POSTSUBSCRIPT ( italic_η ) ≠ 0.
ii) and iii) We know that there exists Y𝑌Yitalic_Y such that 𝒪Y,Σ=0subscript𝒪𝑌Σ0\mathcal{O}_{Y,\Sigma}=0caligraphic_O start_POSTSUBSCRIPT italic_Y , roman_Σ end_POSTSUBSCRIPT = 0 and ∇Y,Σ(η)≠0subscript∇𝑌Σ𝜂0\nabla_{Y,\Sigma}(\eta)\neq 0∇ start_POSTSUBSCRIPT italic_Y , roman_Σ end_POSTSUBSCRIPT ( italic_η ) ≠ 0. So...
i) Let Y𝑌Yitalic_Y be such that 𝒪Y,Σ=0subscript𝒪𝑌Σ0\mathcal{O}_{Y,\Sigma}=0caligraphic_O start_POSTSUBSCRIPT italic_Y , roman_Σ end_POSTSUBSCRIPT = 0 and ∇Y,Σ(η)≠0subscript∇𝑌Σ𝜂0\nabla_{Y,\Sigma}(\eta)\neq 0∇ start_POSTSUBSCRIPT italic_Y , roman_Σ end_POSTSUBSCRIPT ( italic_η ) ≠ 0 and λ𝜆\lambdaitalic_λ be the mi...
C
Solving math word problems dates back to the dawn of artificial intelligence feigenbaum1963computers; bobrow1964natural; charniak1969computer. It is the task of translating a paragraph into a set of equations to be solved li2020graph. We focus on trends in the task since 2019, as a detailed survey zhang2019gap capture...
To highlight this we consider the following comparison. shen2020solving and zhang2020graph each extract two graphs from problem text. One is a number comparison graph, and the other relates word-word pairs shen2020solving or word-number pairs zhang2020graph. They both encode two graphs rather than one heterogeneous gra...
li2020graph learn the mapping between a heterogeneous graph representing the input problem, and an output tree. The graph does not consider mathematical elements of the text, and is instead constructed from two sources: a dependency parse tree representing relationships between words, and a constituency tree which cont...
We have described the path to the state-of-the-art for five representative areas considering the relationship between natural and mathematical language, either through necessity of the task or efficacy of approach. We describe the details, limitations and successes within each area and find that informal methods strug...
Increasing the number and diversity of text elements considered through graphs improves accuracy. Methodologies associated with extracting and encoding dependency graphs between different aspects of word problems and mathematical text in general, is now common practice. Explicitly representing relationships between li...
D
In this paper, we consider networks with no node correspondence and no link between networks as it may be the case in multilayer networks (Kivelä et al.,, 2014). Furthermore, the networks are assumed to represent interactions of the same type (directed or not) and with the same valuation (binary, discrete or continuous...
As a side effect, by modeling these networks together, provided that the networks have common connectivity patterns, we can use the information of certain networks to recover noisy information from other networks by improving the prediction of missing links (Clauset et al.,, 2008). Hence c⁢o⁢l⁢S⁢B⁢M𝑐𝑜𝑙𝑆𝐵𝑀colSBMi...
If the networks in a collection do not have the same connectivity structure, we aim to cluster them accordingly. In order to do this, we propose to use the BIC-L criterion in a similar fashion as we did for testing common connectivity structure in Section 5.2.2. We seek the partition of the collection which maximizes a...
The interest of our c⁢o⁢l⁢S⁢B⁢M𝑐𝑜𝑙𝑆𝐵𝑀colSBMitalic_c italic_o italic_l italic_S italic_B italic_M model is two-folds. The first one is to find a common connectivity pattern which explains the structure of the different networks in the collection and to assess via model selection whether these structures are a rea...
When observing such a collection, we aim to determine if the respective structures of the networks are similar. This paper focuses on the mesoscale structure of the networks by assuming that the nodes can be grouped into blocks on the basis of their connectivity pattern (White et al.,, 1976).
D
Compositionality: In this work, covariate shift is introduced in test set by intervening on only one mechanism (i.e. rotation). In the setting where multiple mechanisms are considered, it will be ideal if multiple E𝐸Eitalic_Es could leverage the knowledge learned separately and cooperate with each other.
It can also be noticed that the pre-trained modules E𝐸Eitalic_E and D𝐷Ditalic_D do not have to access MNIST during training, and do not rely on C𝐶Citalic_C too much either. Based on the fact that the training and test set of MNIST share the same class label space, we also explore a second architecture that only empl...
However, some preliminary results show that E𝐸Eitalic_Es will not generalize well, if the training is based on only the interventions of target mechanism and keeping the others fixed. This is in line with [26], where the generalization improves only if more combinations of two mechanisms (category and pose) are expose...
This task is extensively studied in various computer vision topics, such as 2D spatial invariance learning [16], text detection [46, 45, 44, 43], and 3D pose estimations [26, 6], among many others. However, in most of the existing studies, parameter estimation can only be performed restricted to object categories that ...
Compositionality: In this work, covariate shift is introduced in test set by intervening on only one mechanism (i.e. rotation). In the setting where multiple mechanisms are considered, it will be ideal if multiple E𝐸Eitalic_Es could leverage the knowledge learned separately and cooperate with each other.
B
In real settings when we do not have knowledge about the dataset exact π𝜋\piitalic_π cannot be computed and needs to be estimated - referred to as the Mixture proportion estimation (MPE) problem. Formally speaking, Mixture proportion estimation (MPE) refers to the task of estimating the weight of a component distribu...
Unlike recent supervised variants of infoNCE (Khosla et al., , 2020; Assran et al., , 2020; Zhong et al., , 2021) which can only leverage explicit (strong) supervision (e.g in form of labeled data), puNCE is also able to leverage implicit (weak) supervision from the unlabeled data. The main idea is to use the fact that...
Note that, this is a standard assumption made in PU Learning literature and is at the heart of most classical cost sensitive PU Learning algorithms (Elkan and Noto, , 2008; Kiryo et al., , 2017; Du Plessis et al., , 2014; Chen et al., 2020a, ; Niu et al., , 2016).
PU Learning.  Owing to its importance in several real world problems (e.g. recommendation), developing specialized learning algorithms for PU setting have received renewed impetus in the machine learning community. Most of the recent research in this area can be broadly categorized into two major class of algorithms - ...
Owing to its importance in several real world problems (e.g. recommendation), developing specialized learning algorithms for PU setting have received renewed impetus in the machine learning community. Most of the recent research in this area can be broadly categorized into two major class of algorithms - based on how t...
C
Statistical factor models of networks are generally not competitive with machine learning classifiers that use even simple topological features (see, e.g., Clauset et al., 2008; Liben-Nowell and Kleinberg, 2007; Ghasemian et al., 2020). As such, the absolute performance here should not be considered a metric of primary...
Interestingly, the cross-validation observations are different for each link prediction task. We observe a higher test-AUC associated with the layer dependent NNTuck in the independent link prediction task, and becomes more pronounced as K𝐾Kitalic_K increases. In the tubular link prediction task, however, the layer in...
The construction of the cross-validation approach is as follows. For each link prediction task we construct five different masking tensors and estimate a model based on only observed entries of the data tensor. We select the NNTuck with the highest test set log-likelihood from 50 different runs of the multiplicative u...
Cross validation results for the Village 0 network are shown in Figure 8. There is a slight gap in test-AUC across the two link-prediction tasks for this dataset, where the tubular link-prediction task is more difficult than the independent link-prediction task. However, in both tasks we observe that the layer redundan...
Note that for the cross-validation tasks in the following subsections, we report the average test AUC across 50 different random initializations for each combination of K𝐾Kitalic_K and C𝐶Citalic_C. We vary K𝐾Kitalic_K from 2222 to 12121212 in the Krackhardt multilayer network, and from 2222 to 20202020 in the Malari...
B
In this section, we evaluate the effectiveness and efficiency of our model on widely used benchmark datasets, scrutinize the contribution of each component of the model in an ablation study, and present a case study for a better understanding of our model.
This reflects that it is indeed challenging for the simple reasoning of QAGCN to answer complex 3-hop questions. However, the performance of QAGCN is better than most reasoning-based methods, e.g., 29.8% and 1.6% higher than the best-performing RL-based method SRN on MetaQA 3-hop and PQ-3hop, respectively.
Baselines We first evaluate the overall effectiveness of QAGCN in the multi-relation QA task. Given that the main goal of this paper is to propose a simple method that is competitive with existing reasoning-based methods that rely on complex reasoning mechanism, we mainly choose reasoning-based QA methods as baselines:...
However, please note that QAGCN also has a large margin of 5.8% in comparison with the third-best method NSM. This demonstrates that, on complex questions, the simple single-step reasoning of QAGCN could perform better than the SOTA methods with complex multi-step label propagation.
Also, TransferNet and NSM are state-of-the-art (SOTA) reasoning-based methods in multi-relation QA. Furthermore, considering that our answer search in the learned embedding space is similar to embedding-based methods, we also select a prominently adopted embedding-based method: EmbedKGQA [22].
B
When running the densely encoded version of the word embeddings classifier from Section 3.3, 100% accuracy was achieved using 16 embedding dimensions and only 4 qubits (alexander2022quantum, ). This model achieves perfect accuracy for the lambeq set and in the fewest qubits of all the methods covered.
To draw deeper conclusions on scalability, Alexander and Widdows alexander2022quantum tested the QSVM classifiers on a more complex dataset of IMDb movie reviews (maas2011imdb, ). Actual reviews taken from the database incorporated varied word combinations and colloquial language,
and results accuracy can be considered in the light of what problems users are trying to solve. For example, the work by Alexander and Widdows alexander2022quantum investigates solely the effects of decreasing space in the QSVM using a densely encoded feature map. Improved accuracy from 90% to 100% in fewer qubits on ...
and to avoid redundancy and get more information out of each dimension, more compact distributional vector embeddings are often preferred. The work by Alexander and Widdows alexander2022quantum details the method used to classify words from their vector embeddings using a quantum support vector machine (QSVM), and dem...
For the problem of correctly classifying general text data samples using quantum computation, properly reflecting the complexity of language in quantum representations is challenging. The accuracies from quantum circuits are dependent on the compatibility between the language dataset, type of classes (sentiment, topic...
A
We propose an approach to enhance GNN performance on less-homophilic graphs by restructuring the graph to maximize homophily. Our method is inspired and closely related to Spectral Clustering (SC). It extends SC beyond the leading eigenvalues and learns the frequencies that are best suited to cluster a graph. To achie...
where 𝒩Y⁢(k)subscript𝒩𝑌𝑘\mathcal{N}_{Y}(k)caligraphic_N start_POSTSUBSCRIPT italic_Y end_POSTSUBSCRIPT ( italic_k ) is a set of negative samples whose labels are different from node i𝑖iitalic_i, i.e. yi≠yksubscript𝑦𝑖subscript𝑦𝑘y_{i}\neq y_{k}italic_y start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ≠ italic_y st...
We propose an approach to enhance GNN performance on less-homophilic graphs by restructuring the graph to maximize homophily. Our method is inspired and closely related to Spectral Clustering (SC). It extends SC beyond the leading eigenvalues and learns the frequencies that are best suited to cluster a graph. To achie...
This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-01906, Artificial Intelligence Graduate School Program(POSTECH)) and National Research Foundation of Korea (NRF) grant funded by the Korea government...
Node classification tasks predict labels of nodes based on graph structure and node features. We aim to improve the prediction accuracy of GNN models by restructuring edges via the adaptive SC method, particularly for heterophilic graphs. The evaluation results are shown in Equation 1. On average, the performance of GN...
C
\mathsf{total}}(\alpha^{\mathsf{eq}},\Theta^{\mathsf{eq}})italic_V ( italic_α , roman_Θ ) := caligraphic_R start_POSTSUPERSCRIPT sansserif_total end_POSTSUPERSCRIPT ( italic_α , roman_Θ ) - caligraphic_R start_POSTSUPERSCRIPT sansserif_total end_POSTSUPERSCRIPT ( italic_α start_POSTSUPERSCRIPT sansserif_eq end_POSTSUPE...
show that there is a close connection between such dynamics and the total risk summed over subpopulations and learners. We provide a comprehensive characterization of stable equilibria and investigate the implications in terms of a utilitarian social welfare.
The connection between stability and the total risk function is significant in at least two ways: first, it means that under general classes of myopic and self-interested behaviors on the part of subpopulations and learners, the total risk is driven to at least a local minimum.
Second, it is a technically useful connection that will enable us to characterize and classify the stable equilibria for dynamics which are risk minimizing in the limit. We remark that Theorem 4.3 leaves open the question of stability for equilibria which are non-isolated minima of the total risk function.
For any learners and subpopulations who are risk minimizing in the limit, an equilibrium (α𝖾𝗊,Θ𝖾𝗊)superscript𝛼𝖾𝗊superscriptΘ𝖾𝗊(\alpha^{\mathsf{eq}},\Theta^{\mathsf{eq}})( italic_α start_POSTSUPERSCRIPT sansserif_eq end_POSTSUPERSCRIPT , roman_Θ start_POSTSUPERSCRIPT sansserif_eq end_POSTSUPERSCRIPT ) is asympt...
B
We consider fairness in the sense of equalized odds (Hardt et al., 2016) (also termed disparate mistreatment; see, e.g., Zafar et al., 2017a). This notion, originally defined for binary classification, defines a classifier as fair if its false positive rate and its false negative rate, conditioned on the value of the ...
Clearly, if the confusion matrix conditioned on each of the attribute values is known, then it is easy to check whether a classifier is fair and also to calculate its error. However, as in the scenarios described above, the confusion matrix may not be known. We study the conclusions that can be drawn based on label pro...
Obtain the necessary information about the classifier: The prevalence of each sub-population; The true frequency of each of the possible labels in each sub-population; The frequency of prediction of each of the possible labels in each sub-population. Set Inputs using this information.
In the label-proportions setting, we assume that the confusion matrices of the classifier under study are unknown, and the only available information on the labels is the proportions of the true labels and the predicted labels in each region. Denote the available information in the label-proportions setting by
In the previous section, we discussed the calculation of unfairness based on the confusion matrices of the classifier in each sub-population. However, as discussed in the Introduction, these confusion matrices might be unavailable, especially if we do not have access to the classifier or to individual-level validation...
A
To further ease the use of the new method, we present extensions of the algorithms, that can automatically learn the right values for the two input parameters, namely motif size k𝑘kitalic_k and length l𝑙litalic_l, to discover interesting motifs. This considerably reduces the time and effort needed in exploratory anal...
Our experimental evaluation is three-fold: Firstly in Section 6.1 we compare our approximate algorithm against SotA in a quantitative analysis on six real world sets. We evaluate methods by the similarity and cardinality of motif sets. Secondly in Section 6.2, we compare our approximate k𝑘kitalic_k-Motiflet algorithm ...
We first compare the results of the approximate k𝑘kitalic_k-Motiflet algorithm to that of four state-of-the-art competitors using the six real TS. For these comparisons we performed an unbiased computation of extents and cardinalities of found motif sets at equivalent values of r𝑟ritalic_r (respectively d=2⋅r𝑑⋅2𝑟d=...
We perform extensive quantitative and qualitative evaluation of our new methods on six real-world and 25 semi-synthetic TS and compare them to four state-of-the-art competitors. We show that our approximate algorithm finds larger motif sets given the same distance threshold and motif sets with smaller pairwise distance...
By qualitative and quantitative evaluation on six real-world and 25 semi-synthetic use cases, we showed that the approximate algorithm produces better motifs than all its competitors at lower runtimes, and that its results come very close to the exact algorithm despite an exponentially lower runtime. Future work will c...
C
\mathscr{F}(kh-1)])roman_inf start_POSTSUBSCRIPT italic_k ≥ 0 end_POSTSUBSCRIPT italic_λ start_POSTSUBSCRIPT roman_min end_POSTSUBSCRIPT ( blackboard_E [ ∑ start_POSTSUBSCRIPT italic_i = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT italic_H start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_...
At present, most results on decentralized online linear regression algorithms (e.g. [28]-[29] and [45]-[46]) all require that the regression matrices and graphs satisfy some special statistical properties, such as i.i.d., spatio-temporal independence or stationary, etc. However, these special statistical assumptions a...
To overcome the difficulties mentioned above, we develop the nonnegative supermartingale inequality of the estimation error, and further establish the sample path spatio-temporal persistence of excitation condition by combining information of the regression matrices, graphs and algorithm gains, under which sufficient c...
Historically, Guo [41] first proposed the stochastic persistence of excitation condition for analyzing the centralized Kalman filtering algorithm, which was then refined in [42]. Whereafter, the cooperative information condition on the conditional expectations of the regression matrices over the deterministic connected...
Here, by assuming that (i) the sequences of the graphs, regression matrices and noises are identically distributed, respectively; (ii) the mean graph is undirected; (iii) the sequences of regression matrices and noises are mutually independent, we obtain the above algorithm. In fact, even if the mean graphs are direct...
A
Since the theoretical bounds are defined in terms of quantities that are not always easily computable, we also propose computationally inexpensive heuristics while still maintaining a close relation to the rigorous theoretical estimates. The heuristics allow to dynamically adjust the accuracy in the AAR calculations wi...
Guided by the theoretical bounds, we constructed a heuristic that dynamically adjusts the dimension of the projection subspace at each iteration. As the process approaches convergence, the backward error decreases, which allows for an increase of the inaccuracy of the least-squares calculations while still maintaining...
Our theoretical results allow for accuracy reduction in different calculations performed by AAR on linear fixed-point problems. When the fixed-point operator evaluations are the dominant computational cost of AAR, one may choose to approximate the evaluations of the fixed-point operator to reduce the computational cost...
In the numerical sections, we assess the effectiveness of our heuristics using two different approaches to inject inaccuracies into AAR. The first approach injects error in the evaluations of the fixed-point operator, and the heuristics are used to dynamically adjust the magnitude of the injected error. The second app...
Since the theoretical bounds are defined in terms of quantities that are not always easily computable, we also propose computationally inexpensive heuristics while still maintaining a close relation to the rigorous theoretical estimates. The heuristics allow to dynamically adjust the accuracy in the AAR calculations wi...
C
We proposed STAS, a structured way to evaluate the generated summaries. In addition, we conducted a user study to validate and interpret the STAS score ranges. We also proposed topic-controllable methods that employ either topic embeddings or control tokens demonstrating that the latter can successfully influence the s...
There exist several controllable approaches that prepend information to the input source to influence the different aspects of the text such as the style [11] or the presence of a particular entity [12, 13]. Even though this technique can be readily combined with topic controllable summarization, this direction has not...
Going one step further, we propose another method for controlling the output of a model based on tagging the most representative terms for each thematic category using a special control token. The proposed method assumes the existence of a set of the most representative terms for each topic. Then, given a document and...
Controllable summarization belongs to the broader field of controllable text-to-text generation [15, 16]. Several approaches for controlling the model’s output exist, either using embedding-based approaches [5, 6], prepending information using special tokens [11, 12] or using decoder-only architectures [15]. Controllab...
Future research could examine other controllable aspects, such as style [11], entities [13] or length [17, 13]. In addition, the tagging-based method could be further extended to working with any arbitrary topic, bypassing the requirement of having a labeled document collection of a topic to guide the summary towards ...
D
A more detailed analysis of the usage and effectiveness ratios for the multiplier shows that, after some optimization on the positioning and structure of qubit storages, the multiplier can achieve a usage ratio of 33/48 for the 4-qubit case. Generalized, the multiplier requires 2*3*(N+1+⌈N/4⌉+⌊N/2⌋)23𝑁1𝑁4𝑁22*3*(N+1...
We define the usage ratio as the tile qubits to the total number of computer qubits necessary to hold the tiling. In the case of a single 3D tile we use seven qubits, and if the hardware would look like a cube (has eight vertices), the usage ratio would be 7/8. If one considers that only three are computational qubits ...
Critically, the fact that quantum circuits are often formed by repeating patterns of sub-circuits inspires an opportunity to use this information for speeding up the compilation and the routing of the qubits. For example, this is the case for many arithmetic circuits which were imported from classical computing (e.g. a...
Tiling is a method for compiling circuits for a device that has a regular layout of qubits, and can be used to improve the usage ratio of the quantum chips as a whole. Standard cells and tiling, together with layout-aware routing methods, allow for the extremely fast and efficient compilation of very large scale quantu...
We conclude that tiling standard cells allows for a faster and improved understanding of the layout of the compiled circuit without the processing time involved in compilation and routing. It is a valuable tool in estimating the resources required for compiling a given quantum circuit to hardware, and especially in cre...
C
Brainweb is a simulated brain database that contains a set of realistic MRI data volumes produced by an MRI Simulator. We used this tool to generate test scans in 5 different parameter settings. The results can be seen in Figures 6 and 6 for both models. The evaluation metrics on this test-set can be found in Table 2.
For our training, we require the MRI scans in two different parameter settings of {TE, TR}. One serves as input to the model, and the other as the ground truth corresponding to the desired parameter setting to compute the loss. We use MRiLab [7] which is an MRI Simulator to generate these synthetic brain scans in diff...
After training the autoencoder, the outputs of 8888 encoding layers are used as input image features for later part of our network. The encoding layers transform the input image into low dimensional vector space. This compression removes redundancy in the input. Providing these low dimensional representations of the i...
It can be observed that the default-to-param model achieves a higher mean PSNR and lower MAE compared to Param-to-Param model. We believe this is due to the more complex nonlinearity associated with the task of reparameterizing from any parameter than from a fixed parameter.
In this model, the parameters of the input image are not fixed. Hence, we also pass the input image parameters along with the output image parameters. This model is more generalizable than the previous one but also has a more complex non-linearity to learn and hence, lags behind in performance compared to default-to-p...
C
Namely, the computational complexity of the numerical algorithm increases exponentially as a function of the dimensionality of the problem [4, 27]. Although particle methods hold promise for addressing high-dimensional problems, standard particle methods often lack accuracy and are not suitable for problems involving d...
During the recent years, neural networks (NNs) have demonstrated remarkable success across a wide spectrum of scientific disciplines [12, 31, 41, 43, 47]. Leveraging the potent expressive power of neural network [3, 10, 20], particularly deep neural network (DNN) architectures,
The model is trained with 200200200200 outer iterations (τ=0.01𝜏0.01\tau=0.01italic_τ = 0.01) and at most 200200200200 iterations for inner optimization (30). An early stop criterion was applied if the l2subscript𝑙2l_{2}italic_l start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-norm of the model parameters between two consecut...
In principle, the proposed numerical framework is independent of the choice of neural network architectures. However, different neural network architectures may lead to different numerical performances, arising from a balance of approximation (representation power), optimization, and generalization. In this subsection...
Given that the proposed numerical scheme employs neural networks with time-dependent parameters to approximate the solution of a gradient flow, there is no need to employ a deep neural network. In all numerical experiments for L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-gradient flows, we...
A
On our way, we prove several results and provide examples that may be of independent interest to the TDA community. In particular, we show that increasing the number of parameters of two filtrations of topological spaces can only increase the interleaving/convolution distance between their associated persistence module...
We review the notion of γ𝛾\gammaitalic_γ-sheaves, and recall the precise relationship between this type of sheaves and persistence modules [3]. We then strengthen one of our previous results, asserting that the interleaving distance between persistence modules equals the convolution distance between their associated γ...
The interplay between sheaves on a real vector space and persistence theory necessitates the use of a topology on a vector space introduced by Kashiwara and Schapira [16], called the γ𝛾\gammaitalic_γ-topology. In this section, we first recall the basic definitions associated to the γ𝛾\gammaitalic_γ-topology. There is...
We review classical constructions of sheaf theory, such as integral transforms and kernel compositions. We recall the definition of the convolution distance between (derived) sheaves of 𝐤𝐤{\mathbf{k}}bold_k-vector spaces on a finite-dimensional real vector space, as developed by Kashiwara-Schapira [17] and provide pr...
It is possible to equip the derived categories of sheaves on a good metric space (X,dX)𝑋subscript𝑑𝑋(X,d_{X})( italic_X , italic_d start_POSTSUBSCRIPT italic_X end_POSTSUBSCRIPT ) with a pseudo-metric [24]. This pseudo-metric generalizes the convolution distance of [17] from normed finite dimensional real vector spa...
C
Figure 5: The sensitivity of different algorithms to variable ordering compared to other factors in terms of impact on F1 score (CPDAG). The comparisons are: sample size increased by 100 times; variable ordering changed from worst to optimal; score function changed from BIC to BDeu for score-based and hybrid algorithm...
Figure 5 shows the sensitivity to variable ordering for the algorithms described in section 2 and compares it with their sensitivity to other selected factors (details are given in the figure caption). TABU is a variant of HC and is, as one might expect, sensitive to variable ordering, but the mean F1 change of 0.278 i...
Table 3 shows the mean F1 scores that each of the other algorithms generate relative to the F1 scores of HC, averaged across all networks and sample sizes, for each of the three variable orderings. In addition, the final column in the table presents results where ten random variable orders are generated for each networ...
We start by investigating the bnlearn implementation of HC which is widely used in the literature [32, 35] and find that these arbitrary edge orientations are made on the basis of arbitrary variable ordering in the dataset, which therefore has an impact on the accuracy of the learnt CPDAG. For HC, variable ordering is ...
Figure 5 also shows that the constraint-based algorithms are sensitive to variable ordering, with a mean change in F1 of 0.036, 0.104 and 0.021 for PC-Stable, GS and Inter-IAMB respectively. Although these sensitivities are small relative to those observed in other algorithms, they are still larger than the sensitivity...
A
While this sacrifices the computational benefits of using integer arithmetic, empirical findings in LLMs suggest that weight-only quantization can achieve significantly higher compression ratios for a given target accuracy compared to quantizing both weights and activations (Zeng et al., 2022).
Recent studies have focused on the inefficiency of the generation step and, in response, proposed the utilization of the W4A16 format (Frantar et al., 2022; Zeng et al., 2022; Dettmers et al., 2023; Kim et al., 2023), which compresses model weights into 4-bit integers without quantizing the activations as weights typic...
Accordingly, in recent years, several large-scale generative language models, including GPT-3 (175B) (Brown et al., 2020), HyperCLOVA (204B) (Kim et al., 2021a), Gopher (280B) (Rae et al., 2021), Chinchilla (70B) (Hoffmann et al., 2022), Megatron Turing NLG (530B) (Smith et al., 2022), PaLM (540B) (Chowdhery et al., 20...
Various weight-only quantization methods have been proposed to improve the compression ratio while preserving accuracy, often accompanied by dedicated kernels for practical acceleration through quantization (Jeon et al., 2020; Frantar et al., 2022; Lin et al., 2023).
These efforts include quantization (Rastegari et al., 2016; Jacob et al., 2018; Nagel et al., 2017; Xu et al., 2018; Chung et al., 2020), pruning (Han et al., 2016; Zhu & Gupta, 2017; Gale et al., 2019), knowledge distillation (Hinton et al., 2015; Polino et al., 2018), and low-rank approximation (N. Sainath et al., 20...
C
Part of the work was done while the first and second authors attended the “Discrete Optimization” trimester program of the the Hausdorff Institute of Mathematics, Bonn. The authors are grateful to HIM for providing excellent working conditions and support.
Now we turn to the proof of the main result of the paper, and show that approximating the rank of a graph divisor within reasonable bounds is hard. The high level idea of the proof is the following. First, we show that the Minimum Target Set Selection problem reduces to computing the distance of a divisor on an auxilia...
The third author was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, and by the ÚNKP-21-5 New National Excellence Program of the Ministry of Innovation and Technology of Hungary. The research has been implemented with the support provided by the Lendület Programme of the Hungari...
Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee...
The above reduction of Min-TSS to Dist-Nonhalt uses an auxiliary graph that contains parallel edges. Therefore, the stated complexity results hold for general (i.e. not necessarily simple) graphs. However, in [13, Corollary 22], Hladký, Kráľ, and Norine proved that if one subdivides each edge of a graph with a new vert...
B
For the CIFAR10-DVS dataset, we adopt the VGG11-like architecture introduced in TET [36]. Due to the significant overfitting, we adopt the data augmentation as [41] and [36]. To maintain the same training settings as [36] for TCJA-TET-SNN, we use the triangle surrogate function, eliminate the last LIF layer, and replac...
TABLE I: The network architecture setting for each dataset. x𝑥xitalic_xCy𝑦yitalic_y/MPy𝑦yitalic_y/APy𝑦yitalic_y denotes is the Conv2D/MaxPooling/AvgPooling layer with output channels = x𝑥xitalic_x and kernel size = y𝑦yitalic_y. n𝑛nitalic_nFC denotes the fully connected layer with output feature = n𝑛nitalic_n, m...
For the DVS128 dataset, we utilize the same network structure and hyper-parameters as the [30] and add the TCJA module before the last two pooling layers. Dropout (DP) [51] rate is set to 0.5 in accordance with the original network. We add a 1-D average pooling voting layer in the last layer, which yielded a 10-dimens...
For N-Caltech 101 dataset, we combine two architectures together and add the TCJA module before the last two pooling layers. We first reserve a pooling for each layer; then, with the network going deeper, spatial resolution is reduced with the increasing channel number. To relieve the evident overfitting, the ratio of...
For the CIFAR10-DVS dataset, we adopt the VGG11-like architecture introduced in TET [36]. Due to the significant overfitting, we adopt the data augmentation as [41] and [36]. To maintain the same training settings as [36] for TCJA-TET-SNN, we use the triangle surrogate function, eliminate the last LIF layer, and replac...
C
that u∈[Hσ⁢(Ω)]2𝑢superscriptdelimited-[]superscript𝐻𝜎Ω2u\in\bigl{[}H^{\sigma}(\Omega)\bigr{]}^{2}italic_u ∈ [ italic_H start_POSTSUPERSCRIPT italic_σ end_POSTSUPERSCRIPT ( roman_Ω ) ] start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT with σ>12𝜎12\sigma>\frac{1}{2}italic_σ > divide start_ARG 1 end_ARG start_ARG 2 end_ARG.
K∈𝒯h𝐾subscript𝒯ℎK\in\mathcal{T}_{h}italic_K ∈ caligraphic_T start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT and h≔maxK∈𝒯h⁡hK≔ℎsubscript𝐾subscript𝒯ℎsubscriptℎ𝐾h\coloneqq\max_{K\in\mathcal{T}_{h}}h_{K}italic_h ≔ roman_max start_POSTSUBSCRIPT italic_K ∈ caligraphic_T start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT en...
(⋅,⋅)𝒯h≔∑K∈𝒯h(⋅,⋅)K≔subscript⋅⋅subscript𝒯ℎsubscript𝐾subscript𝒯ℎsubscript⋅⋅𝐾(\cdot,\cdot)_{\mathcal{T}_{h}}\coloneqq\sum_{K\in\mathcal{T}_{h}}(\cdot,\cdot% )_{K}( ⋅ , ⋅ ) start_POSTSUBSCRIPT caligraphic_T start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT end_POSTSUBSCRIPT ≔ ∑ start_POSTSUBSCRIPT italic_K ∈ caligraphi...
\qquad Q_{h}\coloneqq\prod_{K\in\mathcal{T}_{h}}\prod_{e\subset\partial K}% \bigl{[}P_{k-1}(e)\bigr{]}^{2}.italic_V start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT ≔ ∏ start_POSTSUBSCRIPT italic_K ∈ caligraphic_T start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT end_POSTSUBSCRIPT [ italic_P start_POSTSUBSCRIPT 2 italic_k -...
\mathcal{T}_{h}}+\langle\hat{p}_{h},v_{h}\rangle_{\partial\mathcal{T}_{h}}( ∇ ⋅ italic_u start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT , ∇ ⋅ italic_v start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT ) start_POSTSUBSCRIPT caligraphic_T start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT end_POSTSUBSCRIPT + ( ∇ × italic_u star...
A
If the estimation is accurate, this indicates that the transition functions of the action at the index k𝑘kitalic_k of 𝐂𝐂\mathbf{C}bold_C and its predecessors are accurate; so the procedure breaks the loop and returns the data points computed in the previous iterations (line 7). Otherwise, it indicates inaccurate tra...
Heuristic 3: necessary preconditions. Based on the observation that owning certain tokens is a necessary precondition for invoking some actions, FlashSyn prunes symbolic actions vectors that contain actions requiring tokens666Note a token can be standard tokens (ERC20, BEP20), or any other forms of tokens such as debt...
Flash loan attacks typically focus on victim contracts containing functions capable of transferring tokens,777Here, tokens refer to various forms of DeFi tokens, including stable coins, debt tokens, share tokens, liquidity tokens, asset tokens, etc. which can be invoked by regular users. The attacker manipulates the t...
For all verified smart contracts, their ABIs are made public to facilitate users to call functions and engage with the contracts. An ABI typically comprises (public or external) function names, argument names/types, function state mutability, and return types. During the process of selecting action candidates, certain...
internally Y pool automatically deposits USDT and USDC to Yearn,111Yearn is a DeFi protocol that generates yield on deposited assets. yTokens of Yearn represent the liquidity provided in a Yearn product. keeps yUSDT and yUSDC tokens, and retrieves them back when the users withdraw. We omit this complication for simplic...
B
}{n\cdot h^{d}}}\right),∥ over^ start_ARG italic_f end_ARG start_POSTSUBSCRIPT bold_H start_POSTSUBSCRIPT bold_0 end_POSTSUBSCRIPT , italic_h end_POSTSUBSCRIPT ( italic_x ) - italic_f ( italic_x ) ∥ start_POSTSUBSCRIPT ∞ end_POSTSUBSCRIPT < italic_C start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ⋅ ( divide start_ARG itali...
The sample complexity in the general bound of Theorem 6 grows exponentially with the dimension of the parameter space ΘΘ\Thetaroman_Θ. In many practical cases however, such as the HalfCircle domain of Example 3, there may be a low dimensional representation that encodes most of the important information in the tasks, e...
The first term on the right hand side of (1) is a bias term, which can be reduced by reducing hℎhitalic_h. However, the second term will grow when hℎhitalic_h is reduced. In general, the sample complexity under an optimal bandwidth scales exponentially in the dimension d𝑑ditalic_d (see Lemma 4 for a specific example)...
}}{{P_{M^{\prime}}\left(s^{\prime},c\mid s,a\right)}}italic_q = roman_sup start_POSTSUBSCRIPT italic_M , italic_M start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ∈ caligraphic_M , italic_s , italic_s start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ∈ caligraphic_S , italic_a ∈ caligraphic_A , italic_c ∈ caligraphic_C end_POSTSU...
The first term in the bound of Theorem 9 is the KDE error. Note that, compared to the KDE error in Theorem 6, the exponential dependence is on the low dimension d′superscript𝑑′d^{\prime}italic_d start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT, and not on the higher dimension d𝑑ditalic_d. The second term in the bound is d...
B
We collected the healthcare Q&A corpus from OHCs, the platforms for laypeople to exchange health-related information, where people tend to use colloquial expressions rather than professional jargon. Hence, we can exploit the usage of health vocabulary from such HCGCs on the OHCs.
Parameter Settings. We determined the word vector space of each language by feeding all texts from English and Chinese healthcare Q&A corpus. We employed the word2vec module from gensim888https://radimrehurek.com/gensim/models/word2vec.html, which implements the Skip-gram algorithm [27]. All parameters of the Skip-gram...
Test Set Preparation. We evaluated our proposed framework by testing the query performance of frequently used medical entities selected from our collected healthcare Q&A corpora. Only five divisions (community groups in MedHelp), including General-Health, Women-Health, Dermatology, Ear-Nose-Throat, and Neurology, were...
After collecting the corpora, we processed our textual data using existing NLP toolkits. For English, we applied sciSpacy [33], a package for biomedical text processing, for word tokenization and medical concepts extraction by consulting Metathesaurus of UMLS [34]. Regarding pre-processing Chinese texts, we employed ji...
MedHelp is a popular English OHC consisting of various communities for consumers to discuss diseases. We extracted 520,659 discussion threads from 106 different communities in MedHelp. To gather Chinese contents, we collected three Chinese healthcare Q&A websites, including eDoctor222http://taiwanedoctor.mohw.gov.tw/,...
D
As a specific instance, the Att-BLSTM classifier predicted that the story titled “AI predicts protein structures” is about Science and Technology and we implemented TERP to generate the optimal explanation behind this prediction as shown in Fig. 5. We see that the most influential keywords are ’species’, and ’science’ ...
Furthermore, we view the overall problem of AI model explanation from the lens of classical thermodynamics.Callen (1985) It is known in thermodynamics that the equilibrium state of a system is characterized by a minimum in its Helmholtz Free Energy F⁢(T,V):=U−T⁢Sassign𝐹𝑇𝑉𝑈𝑇𝑆F(T,V):=U-TSitalic_F ( italic_T , ital...
Recently there has been significant progress in addressing this issue and the proposed approaches can be classified into two categories: (a) AI models that are inherently explainable, or (b) post-hoc explanation schemes for AI models that are not inherently explainable (XAI).Rudin (2019) Since most of the existing bl...
The widespread adoption of AI-based black-box models has become a standard practice across various fields due to their ability to be deployed without requiring an in-depth understanding of the underlying processes. However, this advantage also poses challenges regarding trustworthiness and the explanation of AI models...
One of the primary motivations behind our work is the recognition that model complexity can be an insufficient descriptor of human interpretability as shown in Fig. 1. In this case, if model complexity is used as a proxy for human interpretability, then both linear models shown in Fig. 1(a,b) will be assigned the sam...
C
Following seed generation, FuSeBMC begins the main coverage analysis phase (lines 7—30 of Algorithm 1). FuSeBMC incorporates three engines to carry out this analysis: two fuzzers (main fuzzer and selective fuzzer) and a bounded model checker. Here, the main fuzzer and the BMC engine are run with longer timeouts than d...
FuSeBMC begins by analyzing C code and then injecting goal labels into the given C program (based on the code coverage criteria that we introduce in Section 3.2.1) and ranking them according to one of the strategies described in Section 3.2.2 (i.e., depending on the goal’s origin or depth in the PUT). From then on, FuS...
The standard algorithm implemented in the AFL tool works as follows. Firstly, an initial fixed-size input stream is generated using the provided seed (a random seed is used if not explicitly specified). Secondly, the target program is repeatedly executed with the randomly mutated input. If the target program does not ...
It controls the size of the generated test-cases via the Consumed Input Size. In detail, the minimum size of the test-cases produced by the fuzzer is set to the current value of the consumed input size. This allows counter-acting the size selection bias of the AFL mutation algorithms, which tend to favor a reduction o...
The selective fuzzer’s (Alshmrany et al., 2021) main function is to attempt to reach the remaining uncovered goals after the iterative process of applying the fuzzer, and the BMC engine has finished. Similarly to the main fuzzer, it utilizes information about the identified ranges of the input variables to produce inp...
B
=𝐌b/p1/p⁢(x)⁢Db,p⁢C(α,β).absentsubscriptsuperscript𝐌1𝑝𝑏𝑝𝑥subscript𝐷𝑏𝑝superscript𝐶𝛼𝛽\displaystyle=\mathbf{M}^{1/p}_{b/p}(x)D_{b,p}C^{(\alpha,\beta)}.= bold_M start_POSTSUPERSCRIPT 1 / italic_p end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_b / italic_p end_POSTSUBSCRIPT ( italic_x ) italic_D start_POSTSUBSC...
Another topic for future research is stable algorithms for fractional integration matrices in the JFP basis with optimal (i.e., 𝒪⁢(N2)𝒪superscript𝑁2\mathcal{O}(N^{2})caligraphic_O ( italic_N start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT )) complexity. As mentioned in Section 5.2, one possibility is to use asymptotic ...
The following is an outline of the paper: We introduce the basic constituents of the JFP method in Sections 2 and 3 (matrix representations of operators on quasimatrices, high-precision floating-point numbers, the JFP basis, etc.). We then focus on the properties (Section 4) and computation (Section 5) of fractional in...
The results derived in this section will motivate the two algorithms for computing fractional integration matrices that are discussed in the next section. The following result gives the action of ℐμsuperscriptℐ𝜇\mathcal{I}^{\mu}caligraphic_I start_POSTSUPERSCRIPT italic_μ end_POSTSUPERSCRIPT on the JFP basis and is th...
Our work is strongly influenced by the method proposed by Hale and Olver [25], in which direct sums of appropriately weighted Jacobi polynomials are used as basis functions. We shall refer to this method as the sum space method and we note that the basis functions are related to the “generalized Jacobi functions” of [...
C
Supplementary Figure S23: The maximum capability of the topological features measured by AUC-mROC in the supervised approach. The imbalanced positive and negative samples are generated by sample2. We use 21 indexes from four families and measure the performance of the supervised prediction by these indexes in all 550 n...
p1subscript𝑝1p_{1}italic_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT is the percentage of samples in LPsuperscript𝐿𝑃L^{P}italic_L start_POSTSUPERSCRIPT italic_P end_POSTSUPERSCRIPT that hold the topological feature, and p2subscript𝑝2p_{2}italic_p start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT is the percentage of samples in...
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. For entities that do not hold the feature at all, they should have the same and the lowest value. If ...
Currently, there is no unified rule on what value should be the lowest. In the main text, we consider the lowest value to be zero. Indeed, all 21 indexes in this study assign the value 0 to entities that do not hold the feature they are designed to quantify. There could be exceptions. For example, one can design an ind...
In the main text, taking the common neighbor feature as an example, we show the theoretical finding can help us to determine the feature and index selection. To further validate the preceding analysis, we here conduct a second analysis using the path feature (Table S2). When using the index SPL to make unsupervised pr...
B
The memory usage of a computation graph is related to its implementation [6, 44, 47, 46]. We provide two settings for memory measurement: (1) analytic profiling: we count the size of extra tensors required for backward computation, including the saved intermediate activation, binary truncation task, and the updated wei...
In this paper, we aim to bridge the gap and enable tiny on-device training with algorithm-system co-design. We investigate tiny on-device training and find two unique challenges: (1) the model is quantized on edge devices. A real quantized graph is difficult to optimize due to low-precision tensors and the lack of Batc...
We fine-tuned the last two blocks (simulate low-cost fine-tuning) of MCUNet to various downstream datasets (Table 1). With momentum SGD, the training accuracy of the quantized model (int8) falls behind the floating-point counterpart due to the optimization difficulty. Adaptive learning rate optimizers like Adam [36] ca...
Table 1: Updating real quantized graphs (int8) for the fine-tuning is difficult: the accuracy falls behind the floating-point counterpart (fp32), even with adaptive learning rate optimizers like Adam [36] and LARS [69]. QAS helps to bridge the accuracy gap without memory overhead (slightly higher due to randomness). Th...
We compare the performance of our searched sparse update schemes with two baseline methods: fine-tuning only biases of the last k𝑘kitalic_k layers; fine-tuning weights and biases of the last k𝑘kitalic_k layers (including fine-tuning the full model, when k𝑘kitalic_k equals to the total #layers). For each configurati...
B
1α¯⁢‖r⁢(x)‖≤‖x−x∗‖≤α¯⁢‖r⁢(x)‖,1¯𝛼norm𝑟𝑥norm𝑥superscript𝑥∗¯𝛼norm𝑟𝑥\frac{1}{\underline{\alpha}}\|r(x)\|\leq\|x-x^{\ast}\|\leq\overline{\alpha}\|r% (x)\|,divide start_ARG 1 end_ARG start_ARG under¯ start_ARG italic_α end_ARG end_ARG ∥ italic_r ( italic_x ) ∥ ≤ ∥ italic_x - italic_x start_POSTSUPERSCRIPT ∗ end_POST...
γ¯=max⁡{‖A−D‖:D=d⁢i⁢a⁢g⁢(di)⁢with⁢di∈[−1,1]}¯𝛾:norm𝐴𝐷𝐷𝑑𝑖𝑎𝑔subscript𝑑𝑖withsubscript𝑑𝑖11\underline{\gamma}=\max\{\|A-D\|:D=diag(d_{i})~{}\mbox{with}~{}d_{i}\in[-1,1]\}under¯ start_ARG italic_γ end_ARG = roman_max { ∥ italic_A - italic_D ∥ : italic_D = italic_d italic_i italic_a italic_g ( italic_d start_POSTS...
α¯=max⁡{‖A−B⁢D‖:D=diag⁢(di)⁢with⁢di∈[−1,1]}¯𝛼:norm𝐴𝐵𝐷𝐷diagsubscript𝑑𝑖withsubscript𝑑𝑖11\underline{\alpha}=\max\{\|A-BD\|:D=\mbox{diag}(d_{i})~{}\mbox{with}~{}d_{i}% \in[-1,1]\}under¯ start_ARG italic_α end_ARG = roman_max { ∥ italic_A - italic_B italic_D ∥ : italic_D = diag ( italic_d start_POSTSUBSCRIPT italic...
β¯=max⁡{‖A−D⁢B‖:D=diag⁢(di)⁢with⁢di∈[−1,1]}¯𝛽:norm𝐴𝐷𝐵𝐷diagsubscript𝑑𝑖withsubscript𝑑𝑖11\underline{\beta}=\max\{\|A-DB\|:D=\mbox{diag}(d_{i})~{}\mbox{with}~{}d_{i}\in% [-1,1]\}under¯ start_ARG italic_β end_ARG = roman_max { ∥ italic_A - italic_D italic_B ∥ : italic_D = diag ( italic_d start_POSTSUBSCRIPT italic_...
≥max⁡{‖(A−B⁢D)⁢(A−B⁢D)−1‖:D=d⁢i⁢a⁢g⁢(di)⁢with⁢di∈[−1,1]}absent:norm𝐴𝐵𝐷superscript𝐴𝐵𝐷1𝐷𝑑𝑖𝑎𝑔subscript𝑑𝑖withsubscript𝑑𝑖11\displaystyle\geq\max\{\|(A-BD)(A-BD)^{-1}\|:D=diag(d_{i})~{}\mbox{with}~{}d_{% i}\in[-1,1]\}≥ roman_max { ∥ ( italic_A - italic_B italic_D ) ( italic_A - italic_B italic_D ) start_POSTSU...
B
We proceed with a runtime analysis of the permutation variant of the Jump benchmark. In contrast to our analysis for LeadingOnes, where mild adaptations of the proofs for the bit-string case were sufficient, we now observe substantially new phenomena, which require substantially more work in the analysis. In particula...
In this section, we describe the most relevant previous works. In the interest of brevity, we only concentrate on runtime analysis works, knowing well that other theoretical aspects have been studied for permutation problems as well. Since the theory of evolutionary algorithms using bit-string representations has start...
Stagnation detection: Stagnation detection was proposed in [RW22] (and further developed in [RW21, RW23, DR23]) as a natural way to improve the performance of evolutionary algorithms when they get stuck in a local optimum. Given the power of this approach, it would be interesting to extend it to permuation-based optim...
As discussed in the introduction, the theory of evolutionary computation has massively profited from having a small, but diverse set of benchmark problems. These problems are simple enough to admit mathematical runtime analyses for a broad range of algorithms including more sophisticated ones such as ant colony optimiz...
The Jump benchmark as pseudo-Boolean optimization problem was proposed in [DJW02]. It is the by far most studied multimodal benchmark in the theory of evolutionary algorithms and has led to a broad set of interesting insights, mostly on crossover and on how evolutionary algorithms cope with local optima [DJW02, JW02, ...
D
This formulation adapts to the geometric setting the Euclidean multilevel approach outlined in Section 3.2, and similar comments concerning the trade-off of invoking coarse grid models at various levels apply. We leave a detailed study of this multilevel extension for future work.
In this section we numerically evaluate the proposed approach. In a first experiment we compare the Riemannian Gradient (RG) descent, see Algorithm  2, with a state-of-the-art Accelerated Bregmann Proximal Gradient (ABPG) method [HRX21]. In a second experiment we evaluate one-level vs. two-level (2L) schemes and compar...
Finally, we compared the proposed method to the Euclidean state-of-the-art multilevel method in [KM16], that is capable on handling box constraints. The results are shown as Figure 5.5. We call the latter method two-level projected gradient (2L PG) as it uses the projected gradient method on both levels for optimizing ...
Figure 5.5. Projected gradient (PG) vs. Riemannian gradient (RG) descent in terms of relative objective values is compared for the single-level and two-level (2L) scenario with 2% undersampled projection data. The left column corresponds to the first three phantoms: gear, bone, vessel in Figure 5.1, the right columns ...
In a second experiment we studied the influence of a using a second coarse level for computing efficiently descent directions on a 512×512512512512\times 512512 × 512 grid, as summarized by Algorithm 3. Figures 5.3 and 5.4 show the results for the Riemannian gradient descent: using one level versus two levels compared...
A
Our results reveal that for the ReLU activation, restricting the weights to be non-negative severally limits the ability of the model to express monotone functions. For threshold activation, we have shown that the restriction to positive parameters is less severe and that universality can be achieved at constant depth...
Given the above result, it may seem that, similarly to the case of monotone networks with ReLU activations, the class of monotone networks with threshold activations is too limited, in the sense that it cannot approximate any monotone function with a constant depth (allowing the depth to scale with the dimension was c...
Later, the problem of approximating arbitrary monotone functions with networks having non-negative parameters using more standard activation functions such as thresholds or sigmoids has been studied in [12]. In particular [12] gives a recursive construction showing how to approximate in the ℓ∞subscriptℓ\ell_{\infty}rom...
While it is well known that neural networks with ReLU activation are universal approximators (can approximate any continuous function on a bounded domain). Perhaps surprisingly, the same is not true for monotone networks and monotone functions. Namely, there are monotone functions that cannot be approximated within an...
We focused on the threshold activation function. It is an interesting direction to extend our results for other activation functions such as sigmoids. For the universality result of depth 4 monotone networks it seems plausible that one could approximate thresholds by sigmoids and prove that monotone networks of depth ...
D
)\|_{V_{h}}\leq|{\boldsymbol{\nu}}|!{\boldsymbol{b}}^{{\boldsymbol{\nu}}}\frac% {C_{\rm Poin}^{V_{h}}}{\alpha}\|f\|_{L^{2}(D)}.∥ ∂ start_POSTSUBSCRIPT bold_italic_y end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_ν end_POSTSUPERSCRIPT italic_u start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT ( ⋅ , bold_italic_y ) ∥ s...
Notably, the convergence order for SIPG can be increased by one using a duality argument as in [11, Sect. 4.2.4] if we have elliptic regularity. Importantly, the FE error in the QMC setting will be integrated with respect to 𝒚𝒚\boldsymbol{y}bold_italic_y. That is why, we additionally assume that |u⁢(⋅,𝒚)|Hk+1⁢(D)sub...
The remainder of the argument is completely analogous to the derivation presented for the continuous setting in [23]: the weights γ𝔲subscript𝛾𝔲\gamma_{\mathrm{\mathfrak{u}}}italic_γ start_POSTSUBSCRIPT fraktur_u end_POSTSUBSCRIPT enter the expression for the upper bound in the same manner as in the continuous settin...
This is an immediate consequence of [37, Lem. 9.1] (in fact, it already appears in [9]), but since the proof is short, we present it for completeness. The proof is carried out by induction with respect to the order of the multi-indices 𝛎∈ℱ𝛎ℱ{\boldsymbol{\nu}}\in\mathscr{F}bold_italic_ν ∈ script_F. In the affine setti...
Let 𝛎∈ℱ𝛎ℱ{\boldsymbol{\nu}}\in\mathscr{F}bold_italic_ν ∈ script_F and suppose that the claim has already been proved for all multi-indices with order less than |𝛎|𝛎|{\boldsymbol{\nu}}|| bold_italic_ν |. Then it is a consequence of Theorem 5.7 and the fact that ∂𝐦η⁢(𝐲)=0superscript𝐦𝜂𝐲0\partial^{\boldsymbol{m}}...
C
Minimizing communication in the CL model is critical due to network bandwidth constraints and latency, energy consumption (think of deep-sea/outer-space exploration), and data usage (e.g., if messages are sent by mobile devices). In this paper, we mainly focus on the round complexity. Like parallel/distributed computa...
In the CL model studied by (TZZ19, ) and (KZZ20, ), each agent interacts with the same environment; for the BAI problem in particular, by pulling the same arm, the agents sample from the same data distribution. However, as mentioned earlier, heterogeneous environments are inherent in many real-world collaborative learn...
As data continue to grow, multi-agent learning has emerged as an important direction in scalable machine learning and has attracted much attention under the name of federated learning (KMR15, ; KMRR16, ; MMR+17, ), where multiple agents try to learn an objective function in parallel via communication. While the majorit...
The (homogeneous) CL model was first used in the work (HKK+13, ) for studying multi-agent BAI, but the model was not formally defined there. The results for fixed-time BAI in (HKK+13, ) only consider the special case where there is only one communication phase (i.e., R=2𝑅2R=2italic_R = 2). The CL model was rigorously ...
The Collaborative Learning Model.   Most study for BAI has been done in the centralized model, in which just one agent pulls the set of arms sequentially. (TZZ19, ; KZZ20, ) studied BAI in the collaborative learning (CL) model, where there are K𝐾Kitalic_K agents, who try to learn the best arm in parallel via communica...
A
While the literature often assumes convexity for the nonsmooth term g𝑔gitalic_g, our proposed method, as indicated in Assumption 1, allows the nonconvex nature of this term. This enables the algorithm to effectively handle a wide range of nonconvex constraints, including rank constraints and ℓ0subscriptℓ0\ell_{0}roma...
Despite an upsurge in developing optimization methods to address such a problem, the potential of low-memory quasi-Newton methods has largely been neglected which can be partially attributed to the absence of theoretical foundations for handling nonsmooth settings. In the smooth strongly convex settings, competitive co...
We propose SPIRAL with convergence guarantees for a wide class of finite sum problems. Not only are both the nonsmooth regularizer g𝑔gitalic_g and the finite sum terms fisubscript𝑓𝑖f_{i}italic_f start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT all allowed to be nonconvex, but also fisubscript𝑓𝑖f_{i}italic_f start_P...
In order to achieve superlinear convergence, we assume z⋆superscript𝑧⋆z^{\star}italic_z start_POSTSUPERSCRIPT ⋆ end_POSTSUPERSCRIPT to be a strong local minimum of the cost φ𝜑\varphiitalic_φ, and that the envelope is twice (strictly) differentiable at this point. The subsequent lemma examines the second-order propert...
Motivated by the aforementioned advancements and recognizing the existing limitations in the literature, the proposed method addresses the optimization of regularized nonsmooth nonconvex cost functions, allowing the gradients of differentiable functions in the finite sum to be non-Lipschitz. To the best of our knowledg...
D
In this example, we apply Algorithm 4 to the COIL-100 dataset [51] which is the extension of the COIL-20 dataset. This data tensor consists of 7200 color images (100 objects under 72 rotations per object, see Figure 6 for some samples of this data tensor). The size of each image is 128×128×31281283128\times 128\times 3...
As discussed earlier, the randomized algorithms proposed in ([34, 35, 36, 50]) require an estimation of the tubal rank which may be a difficult task. To solve this limitation, we propose a new randomized fixed-precision or adaptive algorithm which for a given approximation error bound, it can find an optimal tubal rank...
In this paper, we proposed a new randomized fixed-precision algorithm for fast computation of tensor SVD (t-SVD). Unlike the existing randomized low tubal rank approximation methods, the proposed algorithm finds an optimal tubal rank and the corresponding low tubal rank approximation automatically given a data tensor a...
The sampling approach can also be used for low tubal rank approximation besides the random projection. Indeed, a randomized slice sampling algorithm was proposed in [35] in which horizontal and lateral slices are selected and a low tubal rank approximation is computed based on them, see Figure 3 for a graphical illust...
This decomposition has found many applications including deep learning ([25, 26]), tensor completion ([27, 28]), numerical analysis ([29, 30, 31, 32]), image reconstruction [33]. There are mainly several randomized algorithms ([34, 35, 36]) to decompose a tensor into the t-SVD format but all of them need an estimation ...
B
In [51] the authors address the task of multi-label learning with incomplete labels, by combining the label imputation function and multi-label prediction function in a mutually beneficial manner. Specifically, the proposed method conducts automatic label imputation within a low-rank and sparse matrix recovery framewo...
The semi-supervised multi-label learning task has also been investigated in the context of graph-structured data by incorporating the idea of label embedding to capture both network topology and higher-order multi-label correlations [43]. In this work, the label embedding is generated along with the node embedding base...
One of the explanations of the inner workings of the proposed methods is related to the interaction of semi-supervised learning with the label dependency of MLC and HMLC tasks. More specifically, we investigate whether the smoothness assumption (and, indirectly, since they are not independent, the low-density and the ...
In [51] the authors address the task of multi-label learning with incomplete labels, by combining the label imputation function and multi-label prediction function in a mutually beneficial manner. Specifically, the proposed method conducts automatic label imputation within a low-rank and sparse matrix recovery framewo...
is proposed to exploit both the feature distribution and the label relation between examples. Therefore, the optimization simultaneously takes into account instance-level relations across labeled and unlabeled samples in feature space and the relations across labels. This approach has been only applied in the image mul...
A
The offline test is used to evaluate the model’s performance in handling multiple perception and control tasks simultaneously. All models are deployed to predict driving records and evaluated with multi-task and task-wise scoring. The test dataset is recorded three times in a completely different area from the train-v...
In the navigational controls estimation task, DeepIPC also has the best performance in line with the waypoints prediction result. The MLP agent can leverage useful features encoded from both RGB and BEV semantic maps. Therefore, the MLP agent can perform as well as the PID agent in estimating steering and throttle. Wi...
Based on the experimental results, we disclosed several findings as follows. First, in line with our previous work [1], the BEV semantic feature is proven can improve the model performance in predicting waypoints and navigational controls. With a better perception, the model can leverage useful information which result...
Table III shows that DeepIPC achieves the best performance by having the lowest total metric score in all conditions. Moreover, it achieves the fastest inference speed (lowest latency) as it has the lowest number of parameters, yielding a very low computational load compared to the other models. However, all models inc...
Furthermore, in a comparison of drivability in the evening, DeepIPC and AIM-MT perform worse than Huang et al.’s model. In line with the offline result, the model that mainly takes RGB images failed to perceive the environment in the evening as the provided image is not as clearly visible as when driving at noon. On t...
C
Then, we observe that α⁢(Wi)=6⁢k𝛼subscript𝑊𝑖6𝑘\alpha(W_{i})=6kitalic_α ( italic_W start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) = 6 italic_k can hold for the initial input Wisubscript𝑊𝑖W_{i}italic_W start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT only if α⁢(Ci∩W)=4⁢k𝛼subscript𝐶𝑖𝑊4𝑘\alpha(C_{i}\cap W)=4kita...
Thus, the number of maximal paths of non-forget non-leaf nodes is at most n𝑛nitalic_n, and therefore the number of non-forget non-leaf nodes is at most n2superscript𝑛2n^{2}italic_n start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. The number of leaf nodes is at most three times the number of non-leaf nodes, so the total n...
It follows that non-forget non-leaf nodes can be decomposed into maximal paths going between a node and its ancestor, and these paths have a length at most n𝑛nitalic_n by the height of the tree. Each such path either starts at the root or its highest node is a child of a forget node.
In the latter case, as these added vertices were not initially in Wisubscript𝑊𝑖W_{i}italic_W start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT, they are not in the bag of the parent node, and therefore the node constructed in such a call will be a forget node if any such vertices are added. Therefore, the new node const...
First, by the guarantee of 4.7 that Cisubscript𝐶𝑖C_{i}italic_C start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is empty for at most one i∈{1,2,3}𝑖123i\in\{1,2,3\}italic_i ∈ { 1 , 2 , 3 }, we have that each Gisubscript𝐺𝑖G_{i}italic_G start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT has strictly fewer vertices than G𝐺...
B
Specifically, we follow Hong et al. hong2016convergence to assume convexity, Lipschitz smoothness, and the bounded loss for convergence analysis of VIMADMM. Furthermore, we acknowledge that analyzing the local model can be challenging, given the complexity of DNNs, so we introduce an additional assumption that bounds ...
(i) In the model splitting setting, each client trains a feature extractor as the local model that outputs local embeddings, and the server owns a model which predicts the final results based on the aggregated embeddings. (ii) In the VFL without model splitting setting, the clients host the entire model that outputs th...
Due to the privacy protection requirement of VFL, each client k𝑘kitalic_k does not share raw local feature set Xksubscript𝑋𝑘X_{k}italic_X start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT with other clients or the server. Instead, VFL consists of two steps: (1) local processing step: each client learns a local model th...
(1) Honest-but-curious server and clients: they follow the VFL protocol correctly but might try to infer private client information from information exchanged between the clients and server tran2023privacy . (2) External attackers: they are not directly involved in the VFL process but may observe the predicted results ...
While the raw features and local models are kept locally without sharing in VFL, sharing the model outputs such as local embeddings or predictions during the training process might also leak sensitive client information mahendran2015understanding ; papernot2018sok .
D
Thus, the embeddings of the two interactive nodes might need to be updated, and the neighbors of central nodes (i.e., the two interactive nodes) might be influenced. Previous studies, such as [39], attempt to update the embeddings of the central nodes and all of their neighborhoods once a new edge is established.
In such cases, the aforementioned methods suffer from the following limitations: if a neighbor node contains noisy information (e.g., v7subscript𝑣7v_{7}italic_v start_POSTSUBSCRIPT 7 end_POSTSUBSCRIPT in Fig. 1), propagating its knowledge to other nodes based on the existing message-passing mechanism is evidently unre...
In order to intuitively understand our reinforced neighbor selection module, we design a robustness visualization experiment by showing the actions output by the policy network under different levels of noise added to the UCI dataset. As shown in Fig. 4, the variance σ2superscript𝜎2\sigma^{2}italic_σ start_POSTSUPERSC...
However, such a learning strategy is not reasonable in many real-world applications, due to the following reasons: if the neighbor node contains noisy information, it might not be helpful to propagate its information to other nodes. In contrast, such a propagation mechanism could lead to the collapse of the learning mo...
DyGNN [39] presents an approach for learning node representations when new links are added in the graph. It focuses on modeling the sequential information of edges and the time intervals between interactions to effectively propagate new information to the influenced nodes.
C
We show the composition of a realistic cloth-changing dataset in Figure 2. It closely resembles the real-world cloth-changing challenges, unlike the current datasets, which are collected from the lab or in the wild but have assigned the labels manually.
Figure 2: The composition of a real cloth-changing benchmark. (a) Cross-view sub-dataset. Each person has view variations but only has the normal walking condition (NM). (b) Cross-cloth sub-dataset. Each identity has walking in different coats condition (CL) but only has limited views (only front views).
The first sub-dataset is used to extract cross-view features, including persons that have view variations, however, they only have the normal walking condition (NM). And the second sub-dataset is used to extract cross-cloth features, containing other different persons that have the normal walking condition (NM) and wal...
However, currently, ReID mostly relies on features such as the cloth’s color and type to relate the video clips of the same person, so this sub-dataset predominantly contains short-term tracks of one person that only has view variations with the same cloth. In contrast, clustering the cross-cloth sequences of the same ...
However, in our benchmarks, the cross-view sub-dataset has richer intra-class diversity and has a wider feature span in the high dimensional space, whereas the cross-cloth sub-dataset has a relatively smaller spatial span in the feature space due to its scarcity in intra-class variety. So if the feature extracted from ...
A
{i}\right|}.roman_Var ( over^ start_ARG italic_μ end_ARG start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_D italic_P italic_A italic_V end_POSTSUPERSCRIPT ) ≤ divide start_ARG italic_σ start_POSTSUBSCRIPT italic_M italic_a italic_x / italic_M italic_i italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT...
ME uses the maximum of sample means to estimate the ground truth maximal expected value (MEV), while DPAV takes the partial average over the maximum and minimum of sample means. The minimum will shift the DPAV estimation towards the ground truth, so the upper bound of the DPAV estimator bias will be smaller than that o...
Because the DPAV estimator utilizes the partial average between the maximum and minimum of sample means to estimate the ground truth. The weights assigned to the maximum and minimum are in the range (0,1), and the sum of weights is 1. According to the variance math properties (Casella and Berger, 2021), the estimation ...
λtsubscript𝜆𝑡\lambda_{t}italic_λ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT is a float number between [0,1] that is dynamic in time and problem-dependent such that the DPAV can take the average between the maximum and minimum of the action values. The weights assigned to the maximum and minimum are not the same, ...
M𝑀Mitalic_M means the number of sample means, σisubscript𝜎𝑖\sigma_{i}italic_σ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT means the variance of the it⁢hsubscript𝑖𝑡ℎi_{th}italic_i start_POSTSUBSCRIPT italic_t italic_h end_POSTSUBSCRIPT sample mean. For the bias of DPAV estimator, we have the following bounds.
B
Finally, we report quantitative results that lead our approach to win both CVPR@2022 and ECCV@2022 Ego4D Long-Term Action Anticipation (LTA) challenges. We extend the results with a detailed discussion based on our ablation study. To sum up, the contributions can be summarized as follows:
We report our results for the LTA task in Table 1 based on the test set of the Ego4D LTA dataset. In this experiment, our framework predicted the N=6𝑁6N=6italic_N = 6 observed actions and the overall intention from the past, to anticipate Z=20𝑍20Z=20italic_Z = 20 future actions by generating K=5𝐾5K=5italic_K = 5 seq...
Long-Term Anticipation of human actions needs to exploit temporal dependencies among the observed actions to generate plausible human action sequences in the future. Our two-step approach first aims at understanding the observed actions through a Hierarchical MultiTask MLP Mixer (H3M), described in Section 3.2 in a bot...
Long-Term Anticipation (LTA) has been a fundamental challenge in the computer vision research community. In the following section, we discuss the most relevant research in that field. Then, we review several works for hierarchical extraction and generative models.
To appropriately anticipate the future, it is necessary to understand in detail the observed actions. Human Action Recognition (HAR) from video is itself a large computer vision research field, with increasing interest over egocentric view datasets [7, 13]. Ego4D [13] is the most extensive daily-life egocentric video d...
C
We address two key challenges in the current one-class learning pipeline, i.e., the presence of anomaly contamination and the absence of knowledge about anomalies. COUTA achieves this goal through two novel calibration methods – uncertainty modeling-based calibration (UMC) and native anomaly-based calibration (NAC). In...
In NAC, we design tailored data perturbation operations to produce native anomaly examples based on original time series data, which provides one-class classification with valuable knowledge about primeval anomalous behaviors. These calibration methods enable COUTA to learn data normality in a noise-tolerant, anomaly-i...
Native anomaly example is created based on original time series data via tailored data perturbation operations. A new supervised training branch with a classification head ψcsubscript𝜓𝑐\psi_{c}italic_ψ start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT is added to empower COUTA to discriminate abnormal behaviors in time ...
The Proposed Approach. Based on the above motivation and insights, this paper proposes a novel Calibrated One-class classification-based Unsupervised Time series Anomaly detection method (COUTA for short). The approach fulfills contamination-tolerant, anomaly-informed normality learning by two novel normality calibrat...
According to the comparison results, COUTA successfully achieves state-of-the-art performance by addressing two key limitations in the current one-class learning pipeline. The superiority of COUTA can be attributed to the synergy of our two novel one-class calibration components, which achieves contamination-tolerant, ...
A
Delexicalization, often referred to as anonymization, is a common practice in D2T (Mairesse et al., 2010; Dušek and Jurcicek, 2016) wherein the slot-value pairs for the entities and their attributes in training utterances are replaced with a placeholder token such that weights between similar utterances can be shared (...
Re-ranking & Pruning: Dušek and Jurcicek (Dušek and Jurcicek, 2016) append the seq2seq paradigm with an RNN-based re-ranker to penalize narratives with missing and/or irrelevant attributes from the beam search output. Based on the Hamming distance between two 1-hot vectors representing the presence of slot-value pairs...
Delexicalization, often referred to as anonymization, is a common practice in D2T (Mairesse et al., 2010; Dušek and Jurcicek, 2016) wherein the slot-value pairs for the entities and their attributes in training utterances are replaced with a placeholder token such that weights between similar utterances can be shared (...
Often, appending contextual examples from outer sources to the training set, or permuting the training samples themselves to append variation, helps mitigate data sparsity. This is known as data augmentation. Nayak et. al. (Nayak et al., 2017) propose the creation of pseudo-samples by permuting the slot orderings of th...
From the notion that delexicalization of the data instance may cause the loss of vital information that can aid seq2seq models in sentence planning, where some data instance slots may even be deemed nondelexicalizable (Wen et al., 2015), Nayak et. al. (Nayak et al., 2017) explore different nondelexicalized input repre...
D
Visual Genome [19], the most widely utilized SGG dataset at present, has a consistent predicate distribution for each subject-object category pair in both the training and test sets. Some studies [24] have found that good performance can be obtained just through the frequency prior bias of the commonest predicate categ...
The statistics of the number of images, the number of triplets, and the difference in distribution (i.e., KL and KL-mean) between the VG and VG-OOD training and test sets are shown in TABLE I. Among them, KL divergence computes the disparity in predicate distributions between the training and test sets for all samples....
New Benchmark VG-OOD. In addition, for the widely-used SGG benchmarks (e.g., VG [19]), their predicate distributions of the training set and test set for each subject-object category pair are similar. As displayed in Fig. 8, the predicate distributions of woman-shirt in the original VG dataset are extremely similar (e...
Visual Genome [19], the most widely utilized SGG dataset at present, has a consistent predicate distribution for each subject-object category pair in both the training and test sets. Some studies [24] have found that good performance can be obtained just through the frequency prior bias of the commonest predicate categ...
We first count the number of all possible predicates for each subject-object category pair, and arrange the predicates based on their frequency in ascending order. Secondly, we add triplets whose total number is less than 20% of the overall subject-object category pairs in the test triplet list. For example, in Fig. 8,...
D
In Figure 13(a) and 13(b), we show the simulation results of the attacker’s RER when following A-PSM instead of the honest mining strategy with γ=0𝛾0\gamma=0italic_γ = 0 and 1111, respectively. The results of the attacker’s RER of A-PSM over selfish mining are shown in Figure 13(c) and 13(d) with γ=0𝛾0\gamma=0italic...
To verify the theoretical results, we simulate the RER of an attracted miner with a mining power of 0.2, using a Monte Carlo method over 109superscript10910^{9}10 start_POSTSUPERSCRIPT 9 end_POSTSUPERSCRIPT rounds, with an upper bound of 10−4superscript10410^{-4}10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT error. ...
We compare the Monte Carlo simulation results of the A-PSM attack with honest mining and selfish mining simultaneously. In the simulation, we show the profits for the attacker with a computation power of 0.1 over 109superscript10910^{9}10 start_POSTSUPERSCRIPT 9 end_POSTSUPERSCRIPT rounds. The upper bound for error is ...
To further verify the accuracy of our quantitative results, assuming the attacker with a computation power of 0.2, we compare the Monte Carlo simulation results of the RER of PSM over selfish mining with our evaluation results. We run the Java-based simulator over 109superscript10910^{9}10 start_POSTSUPERSCRIPT 9 end_P...
To further verify the accuracy of our quantitative results, we implement a Monte Carlo simulator in Java to verify our theoretical analysis. We simulate an attacker with αA=0.2subscript𝛼𝐴0.2\alpha_{A}=0.2italic_α start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT = 0.2 and run the simulator over 109superscript10910^{9}10...
B
This phemonemon was dubbed progressive sharpening in [8], and remains poorly understood. In the special case of full-batch gradient descent, the sharpness usually rises past the optimizer’s stability threshold, at which point (the “breakeven point” [21]) the optimizer becomes destabilized by exponentially growing movem...
How is this tension resolved? In the full-batch special case, gradient descent spends the bulk of training in a regime called the Edge of Stability (EoS) [8] in which the sharpness — the maximum eigenvalue of the training Hessian — hovers right at, or just above, the stability threshold.
However, it has not been clear whether these findings have relevance for adaptive gradient methods. Because of adaptive preconditioning, adaptive gradient methods do not evolve as linear recurrences on the local quadratic Taylor approximation, and thus it is not clear why their local stability would be well-modeled by ...
On quadratic objective functions, this behavior would lead to divergence. However, neural network training objectives are not quadratic, and gradient descent typically does not diverge; instead, it enters a regime called the Edge of Stability (EoS) [8] in which the sharpness hovers just above, or oscillates around, the...
In contrast to gradient descent (and preconditioned gradient descent), adaptive gradient methods do not evolve as linear recurrence relations on quadratic functions. Thus, it is a priori unclear whether their local stability can be modeled using an eigenvalue condition.
C
Importance w⁢(𝐱l)𝑤subscript𝐱𝑙w(\mathbf{x}_{l})italic_w ( bold_x start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT ): The statistical weights from enhanced sampling decide accordingly to the bias if a sample is important, i.e., metastable states where the weights are higher are more important then high free-energy reg...
As a next example, we consider alanine dipeptide (Ace-Ala-Nme) in gas phase described using the Amber99-SB force field 58. The data set is generated by a 100-ns molecular dynamics simulation 59, 60 using the gromacs 2019.2 code 61 patched with a development version of the plumed 54, 55 plugin. The simulation is perfor...
As an example for the two stochastic embedding methods mrse and stke, we consider folding and unfolding of a ten amino-acid miniprotein chignolin (CLN025) 73 in the solvent. We employ the CHARMM27 force field 74 and the TIP3P water model 75, and we perform the molecular dynamics simulation 59, 60 using the gromacs 201...
Our framework is implemented in a development version of plumed 2.7 54, 55 as the LowLearner module and will be made publicly available in the coming future. Its initial implementation incorporating several algorithms used in this work can be accessed at Zenodo (doi: 10.5281/zenodo.4756093) and from plumed-nest 55 rep...
The main result of our work is deriving the reweighting procedure for manifold learning methods that use transition probabilities for building low-dimensional embeddings. These advancements enable us to directly construct a low-dimensional representation of CVs from enhanced sampling simulations. We show how our approa...
C
The application of the desired filters to the tree may result in a connected subgraph, however this is often not the case and further cleaning is needed to generate usable and useful trees. This section addresses the techniques applied to a filtered or pruned subtree to generate further subtrees that better meet the u...
In all pruning techniques, a threshold is set of one or more of the attributes of a segment. If a given segment surpasses these, then it is eligible for inclusion in the final tree. Pruning may result in a disconnected tree. If this is the case, then either the disconnected segments should be removed, they should be re...
There are at least two possible approaches to removing disconnected segments: remove the disconnected components from the subtree, or to build a new subtree using only the connected components of the one to be cleaned. If properly implemented, these should result in the same network, but their implementations do diffe...
Due to the size of the data set, the application of organisational methods to the data is essential to gaining insight into the structure and creating subtrees that are useful in applications such as hemodynamic simulations. Here, we discuss the application of sorting and filtration methods to the 3910 segments. Filte...
Some filtration methods result in collections of segments that do not form a connected graph. In particular, this occurs in radius based filtration when the radii within the tree are not monotonically decreasing. Most of the use cases for the trees produced with the analysis discussed here require a connected network.
D
In this paper we consider the problem of estimating the latency between a source and a destination network node (denoted as OD pair) in a routing network. The problem setting is to learn a model on smaller networks and extrapolate the predictions on larger networks assuming open-world input, as illustrated in Fig. 1.
In this paper we consider the problem of estimating the latency between a source and a destination network node (denoted as OD pair) in a routing network. The problem setting is to learn a model on smaller networks and extrapolate the predictions on larger networks assuming open-world input, as illustrated in Fig. 1.
In our model, we leverage the conclusions from the aforementioned works and cast the graph-size-generalization problem to a transferred formulation of graph, which is considered to be learnable by spectral graph-convolution methods. On the other hand, extrapolation towards out-of-distribution data is a fundamental chal...
Conventional deep learning (DL) methods have focused in predictive user behavior modeling on service demand and data usage. In this setting, a line of work focuses on leveraging deep neural networks for adaptive optimisation of routing schemes, as by [4], which typically require similar data distribution in the test en...
Note, that the problem of estimating the network state is EXPTIME-complete, as shown by [2], and has been extensively studied in the literature [3], where most commonly conventional deep-learning methods are employed in the context of reinforcement learning (RL) based network orchestration as well as deep-learning-base...
C
Second, we propose the first provably efficient exploration strategy incorporated with contrastive self-supervised learning. Our proposed UCB-based method is readily adapted to existing representation learning methods for RL, which then demonstrates improvements over previous empirical results as shown in our experimen...
In contrast, as a special case of the low-rank model, linear MDPs have a similar form of structures but with an extra assumption that the linear representation is known a priori (Du et al., 2019b; Yang & Wang, 2019; Jin et al., 2020; Xie et al., 2020; Ayoub et al., 2020; Cai et al., 2020; Yang & Wang, 2020; Chen et al....
Related Work. Our work is closely related to the line of research on RL with low-rank transition kernels, which assumes that the transition dynamics take the form of an inner product of two unknown feature vectors for the current state-action pair and the next state (see Assumption 2.1 for details) (Jiang et al., 2017;...
To improve the sample efficiency of RL algorithms, recent works propose to learn low-dimensional representations of the states via solving auxiliary problems (Jaderberg et al., 2016; Hafner et al., 2019a, b; Gelada et al., 2019; François-Lavet et al., 2019; Bellemare et al., 2019; Srinivas et al., 2020; Zhang et al., ...
There is a large amount of literature studying contrastive learning in RL empirically. To improve the sample efficiency of RL, previous empirical works leverages different types of information for representation learning, e.g., temporal information (Sermanet et al., 2018; Dwibedi et al., 2018; Oord et al., 2018b; Anand...
B
The decoupled MV-SDE (8) for the given empirical law {μtP:t∈[0,T]}conditional-setsubscriptsuperscript𝜇𝑃𝑡𝑡0𝑇\left\{\mu^{P}_{t}:t\in[0,T]\right\}{ italic_μ start_POSTSUPERSCRIPT italic_P end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT : italic_t ∈ [ 0 , italic_T ] } is a standard SDE, making it po...
Note that ζ𝜁\zetaitalic_ζ is the optimal importance sampling control for the decoupled MV-SDE (8) and not the particle system (2). With this scheme, we reduce the variance of the inner expectation in (4). Consequently, we assess the variance reduction in the MC estimator of the inner expectation in the first experime...
This section applies stochastic optimal control theory to obtain the optimal change of measure for the decoupled MV-SDE (8). Then, we incorporate the above importance sampling scheme to the DLMC Algorithm 1, and formulate the DLMC estimator with importance sampling.
The remainder of this paper is structured as follows. In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) a...
The decoupling approach was developed for importance sampling in MV-SDEs (dos Reis et al., 2023; Ben Rached et al., 2023), where the idea is to approximate the MV-SDE law empirically as in (4), use the approximation as input to define a decoupled MV-SDE and apply a change of measure to it. We decouple the computation o...
B
This work presents a paradigm shift in autonomous asteroid exploration studies by proposing a departure from the conventional approach of extensively reducing uncertainties before close-proximity operations. Instead, the focus is on leveraging autonomous robust guidance and control to handle uncertainties effectively. ...
The large orbital insertion burn introduces much uncertainty in the estimation, as expected from the analysis in Section 5.2. The errors in position can spike to the order of a few hundred meters and centimeters per second for the velocity. Nevertheless, they rapidly decrease in the orbital operation, with the mean rem...
Aligned with recent studies aiming to develop robust algorithms for constraining an asteroid’s shape within 1% accuracy, our proposal exhibits great promise. Through Monte Carlo simulations, we evaluate the performance of the spacecraft in rapid approaching and transiting between tight orbits, starting with no prior k...
A histogram for the budget Δ⁢VΔ𝑉\Delta Vroman_Δ italic_V is shown in Figure 5b. The consumption of Δ⁢VΔ𝑉\Delta Vroman_Δ italic_V is within 7.6 to 10.4 m/s having a mean of 8.41 m/s, with most of the consumption again in the Monte Carlo-Lambert guidance. Regarding the errors in estimating the spacecraft’s state, they...
In the case of Eros, the results of the Monte Carlo analysis, as illustrated in Figure 6, align with the discussion in Section 5.2. Once again, the spacecraft executes its operation successfully. It is effectively inserted into orbit and completes the transfer smoothly, as depicted in Figure 6a. The histogram in Figur...
B
A shortcoming of the present approach lies in the difficulty of characterizing the structural dependency on the input datum more explicitly. In particular, while (Bennett et al., 2007, Proposition 5.2) guarantees the existence of suitable δ,Δ>0𝛿Δ0\delta,\Delta>0italic_δ , roman_Δ > 0 for each (feasible, simple) Brasca...
The map G𝐺Gitalic_G resembles the alternate scaling algorithm for Brascamp-Lieb constants (Garg et al., 2018, Alg. 1). The resemblance of both approaches derives from an exploitation of the difference-of-convex structure of problem 1.3 (see also (Weber and Sra, 2023)). However, the Thompson geometry perspective employ...
A growing body of literature has analyzed a formulation of the problem, which links to the more general operator scaling problem (Allen-Zhu et al., 2018; Garg et al., 2015, 2018; Kwok et al., 2019; Franks, 2018; Bürgisser et al., 2018, 2019). We briefly recall some of the key results in this line of work and comment on...
A shortcoming of the present approach lies in the difficulty of characterizing the structural dependency on the input datum more explicitly. In particular, while (Bennett et al., 2007, Proposition 5.2) guarantees the existence of suitable δ,Δ>0𝛿Δ0\delta,\Delta>0italic_δ , roman_Δ > 0 for each (feasible, simple) Brasca...
In future work, we hope to further investigate approaches for computing δ,Δ𝛿Δ\delta,\Deltaitalic_δ , roman_Δ explicitly. In addition, we hope to apply the techniques presented here to geodesically convex optimization problems similar to  (1.3), such as the operator scaling problem (Garg et al., 2018) and difference o...
D
Similar to our previous analysis, we compared how centrality and persistence identify holes based on thresholds. Figure 13 illustrates the number of holes detected by each method at different threshold levels (represented as a percentage of the maximum persistence value). The plot reveals key differences in how these ...
In general, our centrality measures are of the form Jn:Λ×W→ℝ+:subscript𝐽𝑛→Λ𝑊subscriptℝJ_{n}:\Lambda\times W\to\mathbb{R}_{+}italic_J start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT : roman_Λ × italic_W → blackboard_R start_POSTSUBSCRIPT + end_POSTSUBSCRIPT where ΛΛ\Lambdaroman_Λ is the collection of all non-trivial p...
In this study, we propose novel centrality measures that leverage the persistence of homology classes and their merge history along the filtration. Integral to this is the development of an algorithm that captures the merge history of homology classes. These homology-based centrality measures produce, for all cycle gen...
We introduced novel centrality measures that leverage both persistence and merge dynamics of homology classes. These measures aim to capture a more comprehensive picture of the topological structure within point cloud data compared to traditional summaries. The algorithm for computing the merge dynamics of homology cla...
The centrality plots suggest the presence of a relatively important signal within the point cloud data. This is evidenced by the large difference in the maximum centrality value for the highest ranked hole compared to the others. Notably, this hole coincides with the one previously identified using the hypothesis test...
C
Inspired by the theory of multi-domain learning, we extend the FixMatch (DBLP:conf/nips/SohnBCZZRCKL20, ) 111FixMatch is an excellent baseline in SSDG, which will be validated in the experimental section. to a multi-task learning method, named MultiMatch, for semi-supervised domain generalization. Specifically, we bui...
Based on the upper bound of the generalization error in Eq. 4, the proposed method needs to satisfy two requirements: 1) most of modules in the model are shared for all domains, which can be sufficiently trained by all samples, and 2) the model can reduce the interference of domain gap between different domains. Theref...
In this part, we will describe our multi-task learning framework. Since there are multiple different domains in the SSDG task, we consider training each domain as an independent task (i.e., the local task for each domain), which can effectively reduce the interference between different domains during pseudo-labeling. ...
Remark. In our multi-task learning framework, using the independent BN can effectively mitigate the interference of different domains, as shown in Eq. 4. In addition, in our method, most of the modules are shared for all domains, which can sufficiently exploit all samples to reduce the third item in Eq. 4. Hence, our m...
We propose a simple yet effective multi-task learning (i.e., MultiMatch) for semi-supervised domain generalization, which can effectively reduce the interference from different domains during pseudo-labeling. Also, most of the modules in the model are shared for all domains, which can be sufficiently trained by all sa...
D
For the CLIP vision models, Conv-Adapter consistently outperforms fine-tuning on Structured tasks of VTAB-1k. We observe a performance gap of Conv-Adapter on MoCov3 pre-trained [11], and we argue this is possibly due to the difference in feature space of self-supervised and supervised models in CV [25].
The lower the CKA similarity, the larger capacity is required for good transfer performance. We plot the CKA similarity and the relative accuracy gain of Conv-Adapter to fine-tuning in Fig. 7, where the same trends over datasets exhibit for different architectures.
Table 2: Comparing Conv-Adapter (CA) with full Fine-Tuning (FT) using various backbone architectures of different pre-training. Each setting includes three runs and averaged top-1 accuracy (%percent\%%) over datasets and the averaged total trainable parameters (M) over all datasets are reported. We report the number of...
For evaluation, we compare the 4 variants of Conv-Adapter with full fine-tuning (FT) and 3 baseline methods: linear probing (LP), bias tuning (Bias) [7], and visual prompt tuning (VPT) [25, 1]. We test each method on ResNet50 [19, 27] with ImageNet21k pre-training. To find the optimal hyper-parameters of Conv-Adapter (...
As shown in the experimental results, whether the performance of PET surpasses Fine-tuning varies from datasets and domains. From the perspective of trainable weights, PET replaces the whole backbone with much smaller number of parameters compared with Fine-tuning. With the pre-trained backbone and the fine-tuned backb...
B
Estimate jointly the PDEs solution uN⁢Nsubscript𝑢𝑁𝑁u_{NN}italic_u start_POSTSUBSCRIPT italic_N italic_N end_POSTSUBSCRIPT and the target parameter λ(k−1)⁢(t)superscript𝜆𝑘1𝑡\lambda^{(k-1)}\left(t\right)italic_λ start_POSTSUPERSCRIPT ( italic_k - 1 ) end_POSTSUPERSCRIPT ( italic_t ) by minimizing (9) using data fr...
Figure 1: The dataset is partitioned based on spatial information, with each batch encompassing the full temporal information. In the online learning approach, the network is trained using the previous distribution of loss weights and updated based on the data from the subsequent batch.
Figure 3 shows the performance of CP-PINNs in discovering changepoints and solving (16). Specifically, the leftmost panel illustrates the precise solution across a uniform temporal scale. Identifying the locations of changepoints remains challenging even when the solutions are known. In the second panel, the identical ...
Note that in the definition of Regret, 𝚯^(k)superscript^𝚯𝑘\mathbf{\hat{\Theta}}^{(k)}over^ start_ARG bold_Θ end_ARG start_POSTSUPERSCRIPT ( italic_k ) end_POSTSUPERSCRIPT and λ^⁢(t)(k)^𝜆superscript𝑡𝑘\hat{\lambda}(t)^{(k)}over^ start_ARG italic_λ end_ARG ( italic_t ) start_POSTSUPERSCRIPT ( italic_k ) end_POSTSUPE...
The left panel of Figure 2 presents the detected changepoints over the temporal axis. In comparison, the PINNs111Standard PINNs assume a constant coefficient over time, which performs much worse for changepoint scenarios. To ensure a fair comparison, we modify PINNs to allow for a time-dependent coefficient. This highl...
A
For example, b⁢a⁢b⁢c⁢d𝑏𝑎𝑏𝑐𝑑babcditalic_b italic_a italic_b italic_c italic_d and a⁢b⁢c⁢b⁢c⁢d𝑎𝑏𝑐𝑏𝑐𝑑abcbcditalic_a italic_b italic_c italic_b italic_c italic_d are not primitive, because they are generated by a⁢b⁢c⁢d𝑎𝑏𝑐𝑑abcditalic_a italic_b italic_c italic_d; but a⁢b⁢c⁢b⁢d⁢a𝑎𝑏𝑐𝑏𝑑𝑎abcbdaitalic_a ital...
(go to the b𝑏bitalic_b, then turn back to the a𝑎aitalic_a, then turn again and go to the end). For the converse, suppose that w𝑤witalic_w is not primitive, let u𝑢uitalic_u be a generator of w𝑤witalic_w with |u|<|w|𝑢𝑤|u|<|w|| italic_u | < | italic_w |, and let
Theorem 1 is relatively surprising: let u𝑢uitalic_u and v𝑣vitalic_v be primitive words. Now suppose we go for a walk on u𝑢uitalic_u and, independently, go for a walk on v𝑣vitalic_v; recalling the stipulation that the two walks visit every position in the words they apply to, the theorem says that, provided only tha...
for which such u𝑢uitalic_u, v𝑣vitalic_v, f𝑓fitalic_f and g𝑔gitalic_g exist. Observe that, since u𝑢uitalic_u and v𝑣vitalic_v are primitive, they feature no immediately repeated letter. So suppose w𝑤witalic_w does—i.e. is of the form w=x⁢a⁢a⁢y𝑤𝑥𝑎𝑎𝑦w=xaayitalic_w = italic_x italic_a italic_a italic_y for some ...
change our position in u𝑢uitalic_u by more than one letter at a time, and we visit every position of u𝑢uitalic_u at least once. If w=uf𝑤superscript𝑢𝑓w=u^{f}italic_w = italic_u start_POSTSUPERSCRIPT italic_f end_POSTSUPERSCRIPT for f𝑓fitalic_f a walk, we say that u𝑢uitalic_u generates w𝑤witalic_w.
B
β2⁢‖∇αq‖L22𝛽2superscriptsubscriptnormsuperscript∇𝛼𝑞subscript𝐿22\frac{\beta}{2}\|\nabla^{\alpha}q\|_{L_{2}}^{2}divide start_ARG italic_β end_ARG start_ARG 2 end_ARG ∥ ∇ start_POSTSUPERSCRIPT italic_α end_POSTSUPERSCRIPT italic_q ∥ start_POSTSUBSCRIPT italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_POSTSUBSCRIPT...
The bottom right panel of the figure also shows the information density j⁢(𝐱)𝑗𝐱j(\mathbf{x})italic_j ( bold_x ) that corresponds to this problem, as defined in (29). It illustrates that, given the location of detectors and the nature of the equation, information is primarily available upstream of detector locations....
accuracy of measurements on the one hand, and the uncertainty in the recovered parameters of the inverse problem on the other. But, none of the studies mentioned go on to specifically identify the role of information in the spatially variable ability to recover parameters in inverse problems in a systematic way. Let us...
The methods in the middle of the spectrum mentioned above and in the figure use deterministic methods to make the solution of statistical formulations more efficient. Yet, relatively little has been done to bring information from the (expensive) Bayesian perspective to selectively enrich the deterministic solution, at ...
For many inverse problems – such as ultrasound or seismic imaging, or electrical impedance tomography – the quantity we would like to identify is not a right-hand source term, but a coefficient in the operator on the left side of the equation. In these cases, the definition of the information density will have to be l...
D
In Figure 5c the most frequently co-occuring words with the currently selected words are displayed, ordered by their total number of co-occurrences. To distinguish between the co-occurrences with different words, we use a stacked bar chart for each word, ordered on the position in the annotation space. Because the num...
Usage Scenario. The re-annotation portion of the visual analytics system allows for a synoptic visualization of the labels for relabeling. A variety of approaches can be adopted to choose the materials. In general, aligning multiple images together in the same (re-)annotation space raises interesting questions about ho...
On mouseover, the image is shown. A lasso selection can be used at the points to select a set of images for the labeling process. The selected points are increased in size to better highlight them. The re-annotation space is accessed from a button in the left-hand drawer. The current state of the graph and the point c...
Of the possible filters at this stage (book, labels, subject), one can be used at a time. A filtered approach allows quite granular discovery of the images, their labels and the visual similarities. Each of the approaches was productive, with slight differences. Filtering by book provides the opportunity to explore vis...
User 2, for example, started the visual analytics process not by labeling the images with missing labels but by working on the hierarchy. Starting with the hierarchy helped to understand the variety of labels from a holistic point of view and to understand how they relate to each other. Annotating the illuminations wi...
A
Juhua Liu is currently a professor with the School of Computer Science and Institute of Artificial Intelligence, Wuhan University. His research interests mainly include image processing, computer vision, natural language processing and machine learning. He has published more than 40 research papers in CV/NLP/AI, incl...
Bo Du (M’10-SM’15) is currently a professor with the School of Computer Science and Institute of Artificial Intelligence, Wuhan University. He is also the director of National Engineering Research Center for Multimedia Software, Wuhan University, Wuhan, China. He has more than 80 research papers published in the IEEE ...
Bo Du (M’10-SM’15) is currently a professor with the School of Computer Science and Institute of Artificial Intelligence, Wuhan University. He is also the director of National Engineering Research Center for Multimedia Software, Wuhan University, Wuhan, China. He has more than 80 research papers published in the IEEE...
Bo Du (M’10-SM’15) is currently a professor with the School of Computer Science and Institute of Artificial Intelligence, Wuhan University. He is also the director of National Engineering Research Center for Multimedia Software, Wuhan University, Wuhan, China. He has more than 80 research papers published in the IEEE ...
Bo Du (M’10-SM’15) is currently a professor with the School of Computer Science and Institute of Artificial Intelligence, Wuhan University. He is also the director of National Engineering Research Center for Multimedia Software, Wuhan University, Wuhan, China. He has more than 80 research papers published in the IEEE ...
B
UDP Amplification DDoS Attacks. We use 1.7M DDoS attack records gathered by a honeypot network emulating protocols vulnerable to reflected UDP attacks (Thomas et al., 2017). A flow of packets is considered to be an attack if any sensor observes at least five packets for the same victim IP or IP prefix, and the attack ...
Underground Forum Discussions. Online forums are structured around subforums containing threads with multiple posts. To assess changes in discussion topics within the hacking community, we use a snapshot of the most popular hacking forum, Hack Forums from the CrimeBB dataset (Pastrana et al., 2018b). The forum is a pla...
One type of attack linked with the low-level cybercrime actors is website defacement (Romagna and van den Hout, 2017), which accounted for around 20% of online attacks in 2014 (Hackmageddon, 2015) and is often organised into discrete campaigns (Maggi et al., 2018). Attackers (or defacers) gain unauthorised access using...
Defacement Motives. The conflict caught the attention of existing defacers, who performed many attacks against other countries but not Russia and Ukraine until just after the invasion, suggesting their choice of targets was influenced. We also found some ‘new faces’ e.g., the second most active defacer targeting Russi...
This posting volume is tiny when set against the 62M-post size of Hack Forums, showing trivial contributions of the Russia-Ukraine discussions to the overall landscape (as with the previous evidence seen from defacement and DDoS attacks). These posts are centralised: 97.22% belongs to the top 5 popular subforums. Rank...
A
Interestingly, in our analysis we observed a high degree of transferability exhibited by the FGSM attack. Across these datasets, FGSM consistently demonstrated its effectiveness in crafting adversarial examples capable of successfully deceiving a range of diverse victim models. The FGSM works by performing a single lar...
The results garnered from our extensive experimentation with the HET ranking strategy offer compelling evidence of its effectiveness in enhancing the transferability of adversarial examples. Notably, the strategy demonstrates remarkable efficacy in the context of improving the transferability of a single specific samp...
For our all of our experiments, we consider a black-box adversary that has no knowledge of the victim’s architecture. To simulate this setting, we ensured that the architectures in used for f𝑓fitalic_f, f′superscript𝑓′f^{\prime}italic_f start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT and those in F0subscript𝐹0F_{0}itali...
In Fig. 6 we present the average transferability success rate of a sample when the highest ranked perturbation (out of 25) is used. Although perturbation ranking is less effective than image ranking, the figure shows that transferability can indeed be improved modestly in many situations. We also note that the attacker...
Using many attack steps may be preferable in a white box scenario since it allows targeting less prominent features in the victim model and thus creating less noticeable perturbations. Nevertheless, in the black-box scenario, this might hinder the attack transferability, since these features might only be present in th...
D
\right\|_{2}\leqslant q^{L+1}\left\|\rho_{0}-\frac{\mathbb{1}}{2^{n}}\right\|_% {2}\;.∥ over~ start_ARG italic_ρ end_ARG ( bold_italic_x ) - divide start_ARG blackboard_1 end_ARG start_ARG 2 start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT end_ARG ∥ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ⩽ italic_q start_POSTSUPERSC...
Fig. 6 shows results for the scaling of the kernel variance as a function of the number of qubits n𝑛nitalic_n and HEE layers L𝐿Litalic_L. As L𝐿Litalic_L increases, the expressivity of the ansatz increases, and for sufficiently large L𝐿Litalic_L we observe exponential concentration of both the fidelity and projected...
Theorem 3 shows that the concentration of quantum kernels due to noise is exponential in the number of layers L𝐿Litalic_L for both the fidelity and projected quantum kernels. This is a consequence of the encoded state concentrating towards the maximally mixed state, as captured in Eq. (34).
In the previous section we saw that high expressivity can be an issue due to the fact that kernels (such as the fidelity kernel) compare inner products of objects in exponentially large spaces. This issue can be mitigated using projected kernels, which reduce the dimension of the feature space. However, a different iss...
We show that analogous to the causes of BPs for QNNs there are at least three different mechanisms that can lead to the exponential concentration of the encoded quantum states, including (i) the expressivity of the encoded quantum state ensemble, (ii) the entanglement in encoded quantum states with a local observable a...
B
The temporal anticipation accuracy is not affected much in any of the models, although there is still a 1% to 1.5% drop in performance between TTE-00 and TTE-20 in all cases. The prediction accuracy however is considerably enhanced by knowing the location of vehicles by their bounding boxes giving much higher performan...
The method for processing the data of our RGB+BB+3DN method is inspired by Izquierdo et al.’s work [10]. In [10], their best performing model, GoogleNet + LSTM yields 74.4% for lane change classification. Because 3D models can better extract spatio-temporal features than 2D CNNs, our RGB+BB+3DN method achieves top-1 c...
As Table 3 illustrates, Simonyan et al.’s two stream based method obtains better accuracy than our RGB+3DN method, however, their method requires bounding box coordinates of the target vehicle for Region of Interest (ROI) cropping. Furthermore, their validation data of each class is highly unbalanced. Although our RGB...
Two approaches involving seven 3D action recognition networks are adopted to classify and anticipate lane change events on the PREVENTION dataset. For our RGB+3DN method, the lane change recognition problem is formulated as an action recognition task only utilising visual information collected by cameras. The best per...
Human drivers predict lane change intentions mainly use visual clues rather than physical variables. However, existing works that utilize appearance features for lane change are surprisingly few. In [19], two appearance features, the state of brake indicators and the state of turn indicators are used for lane change re...
B
are shown in Figure 7. The method of Caliskan et al. leads to a one-sided distribution. Between 40% to 60% of the authors cannot be identified well. In contrast, the approach of Abuhamad et al. induces a two-sided distribution. Some authors are well protected while
Norm.Imit.Obf. IObf. II00\displaystyle 011\displaystyle 11Uncertainty Score (GCJ)Norm.Imit.Obf. IObf. II00\displaystyle 011\displaystyle 11Uncertainty Score (GH)Abuhamad et al.Caliskan et al.Original Figure 4. Anonymization performance (uncertainty score) in the
Norm.Imit.Obf. IObf. II00\displaystyle 011\displaystyle 11Accuracy (GCJ)Norm.Imit.Obf. IObf. II00\displaystyle 011\displaystyle 11Accuracy (GH)Abuhamad et al.Caliskan et al.GuessingOriginal Figure 5. Attribution performance (accuracy) of candidate techniques
Norm.Imit.Obf. IObf. II00\displaystyle 011\displaystyle 11Uncertainty Score (GCJ)Norm.Imit.Obf. IObf. II00\displaystyle 011\displaystyle 11Uncertainty Score (GH)Abuhamad et al.Caliskan et al.Original Figure 6. Anonymization performance (uncertainty score) in the
of data in learning models (e.g., Chaudhuri et al., 2011; Su et al., 2016; Abadi et al., 2016; He et al., 2017; Jayaraman and Evans, 2019) and also in the field of natural language processing (e.g., Weggenmann and Kerschbaum, 2018; Lyu et al., 2020; Fletcher et al., 2021; Mattern et al., 2022).
C
Learning Task Features without Text Encoder. In the proposed meta network, the task features are extracted by the text encoder. By removing the text encoder from meta network and using a learnable vector for each task, the task features can be learned from scratch. The results with and without the text encoder on three...
Is Adding Class Feature to Task Feature Necessary? We conduct experiments of SoftCPT with different configurations listed in Table 4. While learning the context [V]1⁢⋯⁢[V]Ksubscriptdelimited-[]𝑉1⋯subscriptdelimited-[]𝑉𝐾[{V}]_{1}\cdots[{V}]_{K}[ italic_V ] start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ⋯ [ italic_V ] start_...
Learning Task Features without Text Encoder. In the proposed meta network, the task features are extracted by the text encoder. By removing the text encoder from meta network and using a learnable vector for each task, the task features can be learned from scratch. The results with and without the text encoder on three...
For this aim, we proposed the soft context shared prompt tuning, SoftCPT, which adopts a shared meta network across all tasks to generate the context of a task based on its task description text. The meta network involves extracting task features by pre-trained language model and transforming the task features to neede...
The main results of different methods on four datasets are listed in Table 10. For SoftCPT, we report the results of SoftCPT-NATA and SoftCPT-NATS as they could acquire desirable performance with lower computational cost. As a comparison, we also report a variant of SoftCPT, i.e., SoftCPT*. It is the method that learn...
D