| context (string, 250–4.63k chars) | A (string, 250–4.99k chars) | B (string, 250–4.17k chars) | C (string, 250–5.14k chars) | D (string, 250–8.2k chars) | label (4 classes) |
|---|---|---|---|---|---|
Although most participants disapproved of the feature that allowed them to hide apps from one another, some (37%, N=7 Parents and 47%, N=9 Teens) mentioned a positive aspect of this feature. They identified that this feature enabled users to have personal privacy on their app usage and a sense of independence. For exa... |
”Okay, so for example, the benefits might be like, if any personal app is installed, then I may not want to share all that with all the family members. So it gives you some kind of privacy. But for a child, I think up until she’s 18 there should be some control and there should not be an option for them to hide anythi... |
“I was puzzled when you [Interviewer] pointed out that I can’t deny access. You know, when I looked at his [Teen’s] apps, I couldn’t actually turn on or off stuff [change Teen’s app settings]. I could see what was there and what was happening, but I can’t change it. So my gut reaction was like, Oh, I don’t like that.”... |
“Um, so like, an advantage would be like, you could download anything. And like, you could also hide it. So like, even if it [the app installed] was something weird, no one would know. You could also have your privacy for apps that you know, you don’t want people to know about.” -T16, Female, 14 years | ”Because you have to physically go into settings and change those, because they [Parents] may not know how to go into the place and they can’t do it. If you’re able to do it from here [CO-oPS], that would be a thing like I need in this situation.”-T5, Male, 16 years
| C |
where $E_{z,r}=\{\xi:f(\xi)^{1/D}d(z,\xi)\leq\sqrt{r}\}$... | Consider, again, the additive noise-corrupted “Antman” two-square dataset on the right of Figure 5. The persistence diagrams for the distance-to-measure filtration and the RDAD filtration are shown in Figure 8 with different confidence bands. Note that in both figures the bands constructed by oracle bootstrapping and b... |
Figure 11: Dimension-1 persistence diagrams of different filtration functions for the noisy Voronoi dataset with confidence bands. Blue points are points in the dimension-1 empirical persistence diagram. The green solid lines and the orange dashed lines are the confidence bands constructed by subsample and oracle boot... | Figure 2: The distance filtration and its persistence diagrams. In the first subplot is a sample of points near a circle. Unions of balls centered at these points with different radii are shown the subsequent subplots. The last subplot shows the persistence diagrams of these unions of balls. The red diamond points corr... |
Figure 8: Dimension-1 persistence diagrams of different filtration functions for the additive noise-corrupted “Antman" two-square dataset with confidence bands. Blue points are points in the dimension-1 empirical persistence diagram. The green solid lines and the orange dashed lines are the confidence bands constructe... | D |
On the one hand, AUs usually do not occur alone when humans express certain emotions, and thus some combinations of AUs, which pertain to displayed emotions, can be frequently observed, e.g. AU6 (Cheek Raiser) and AU12 (Lip Corner Puller) tend to appear together and form facial expression happiness (Ekman 1992). On the... |
We compare the performance of CISNet with the previous state-of-the-art methods including DRML (Zhao, Chu, and Zhang 2016), EAC-Net (Li et al. 2017), ROI-Net (Li, Abtahi, and Zhu 2017), DSIN (Corneanu, Madadi, and Escalera 2018), JAA-Net (Shao et al. 2018), LP-Net (Niu et al. 2019), SRERL (Li et al. 2019), UGN-B (Song... | Recent works have made progress in capturing high-level AU semantic relations in an implicit way (Corneanu, Madadi, and Escalera 2018; Niu et al. 2019) by exploiting correlations between AUs via probabilistic graphic models or in an explicit way (Li et al. 2019; Shao et al. 2020) by constructing an AU semantic graph ac... | In other words, conventional AU recognition models which approximate $P(Y|X)$ learn a set of latent AU semantic relations $R$ from the training data, which can be regarded as a kind of priors influencing the estimation results $Y$ in an implicit wa... | Considering the semantic relations among AUs, some works (Wang et al. 2013; Walecki et al. 2017) make efforts in modeling such relations via probabilistic graphical models or graph neural networks. Wang et al. (Wang et al. 2013) introduced a restricted Boltzmann machine to model facial action units, thereby capturing n... | B
According to the measurement result, $p_{f}$ fluctuated within the range of [0.053, 0.067] with an average value of 0.059. Additionally, the block interval is 13-15s during this period, with an average value of 14s. Using eq. (3), we compute that the 9... | This part presents experimental results to validate the efficiency of BBP by comparing its basic performance with LBP, BHP, and CBP under the testbed without any malicious nodes. Specifically, we examine block processing time, block traffic load, memory costs, and block propagation delays of these propagation protocols...
Memory Cost: While our BBP achieves a significant benefit on block propagation, additional memory cost is required, since each BBP node needs to generate and store PPB before the next block arrives. We measured the detailed memory costs for four block propagation protocols. The experimental results are shown in Fig. 6... |
Block Traffic Load: We then measure the total network traffic induced by broadcasting blocks among all nodes. The measurement results are shown in Fig. 6(b). Benefiting from the bodyless design, the block traffic load of BBP almost keeps constant for various $n_{t}$... | In this section, we implement and evaluate our BBP scheme over a test network. For comparison, we also implement three typical block propagation protocols: the Legacy Block Propagation (LBP) and Compact Block Propagation (CBP) of Bitcoin, and the Block-Hash Propagation (BHP) of Ethereum.
| A |
In this paper, we study linear function approximation in POMDPs to address the statistical challenges amplified by infinite observation and state spaces. In particular, our contribution is fourfold. First, we define a class of POMDPs with a linear structure and identify an ill conditioning measure for sample-efficient ... | In the contexture of reinforcement learning with function approximations, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinfo... | Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2... | Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate... |
More specifically, partial observability poses both statistical and computational challenges. From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property. In particular, predicting the future often involves inferring the distribution of the ... | C |
The following keywords were used to search all the databases: speech, language, disorder, impairment, assessment, therapy, rehabilitation, treatment, AI, artificial intelligence, automated, automatic. Boolean operators were used to combine the terms as: | We presented the language distribution of the papers based on the language addressed by the AI-based automated speech therapy tools as reported in the studies (see Figure 8). The most addressed languages were English (10 studies) and Spanish (4 studies). Furthermore, two studies addressed the Cantonese language, and th... |
We conducted this systematic literature review based on a sample of 24 out of 678 research papers deriving from Scopus, IEEEXplore, and ACM DL databases. Exciting insights and trends emerged from our analysis of these papers. In recent years, we observed an increasing interest in AI-based automated speech therapy to... |
There were 91 unique authors identified from the included studies. The VOSviewer software was used to calculate the most impactful authors, generate co-authorship clusters, and perform co-occurrences of keyword analysis (Van Eck NJ, n.d.). All the authors were counted irrespective of the authorship orde... | We further report the geographical distribution of the included studies based on the
location of the study indicated in the paper (see Figure 7). We looked at the author’s affiliation and funding agency when required. Most papers reported on studies which | C |
This includes clustering of such data [XT15], visualization [TWT21], outlier detection [AM13], and others. The primary progress in understanding the denoising effect of PCA has been solely in clustering, particularly in connection to the K-Means algorithm [DH04, KK10, AS12], where PCA in combination with a K-Means base... |
In a dataset with a community structure, we show the compression ratio for intra-community pairs is higher than that of inter-community pairs even in settings where the pre-PCA inter-community and intra-community distances are very similar. We demonstrate (through a random vector mixture model) that this ratio gap ref... | This includes clustering of such data [XT15], visualization [TWT21], outlier detection [AM13], and others. The primary progress in understanding the denoising effect of PCA has been solely in clustering, particularly in connection to the K-Means algorithm [DH04, KK10, AS12], where PCA in combination with a K-Means base... | Principal component analysis, commonly known as PCA, is one of the most fundamental tools in machine learning. PCA is primarily used as a dimensionality reduction tool that transforms high-dimensional data to lower dimensions for better visualization as well as a heuristic that reduces the complexity of the algorithms ... |
However, PCA seems to have a more “general” denoising effect in data, as it improves the performance of various downstream algorithms, including clustering [VKS17] as well as community structure preserving graph embedding [HHAN+21], and this denoising effect is evident in many real-world datasets. | D
Specifically, we pre-process the visual data to provide three different levels of missingness: obfuscations on the objects (e.g., humans, cars), obfuscations on the entire images, and the semantically masked visual images.
The masked visual data has more severe missingness compared to the other two levels. | We then design the dialog formulation by allowing the model to ask natural language questions and then provide the answers to the raised questions.
Specifically, different from most existing works in the field of visual dialog [8, 14], which concentrate on answering visual context-related natural language questions [15... | We formulate the dialog with $N_{R}$ rounds of QA interactions. Specifically, given the visual input data with partially missing visions, the AI system is given $N_{R}$ chances to ask ... |
In this paper, we investigate the SGG task setting with missing input visions, and propose to supplement the missing visual information via interactive natural language dialog using the proposed SI-Dial framework. Extensive experiments on the benchmark dataset with various levels of missingness demonstrate the feasibi... | As the primary information source for various computer vision tasks, the visual input data play a significant role in most existing works to achieve competitive and promising performance. It is reasonable to expect the performance drop under the task setting with incomplete visual input. To tackle the problem, we propo... | A |
We use the same entrance fee function as that in the proof of Theorem 10 except that we set the entrance fee at the location of agent 0 to 0.
Then, in order to get an approximation ratio less than or equal to 3, one of the two facilities must always be located in the position of the new agent 0 with probability 1. T... | This paper extends the classical facility location game on the real line by incorporating entrance fee functions, adding versatility to the model. The extension prompts a reevaluation of existing facility location games, like capacitated and heterogeneous facilities, opening avenues for broader applications.
Our arbitr... | Each facility, once located, has an entrance fee determined by its location. The cost of an agent is the sum of the travel fee (distance to the facility) and the entrance fee of the facility. Each agent will use one facility at a minimum cost.
¹In this paper, we make the assumption that facilities are homogeneous in ... | However, the arbitrariness of the entrance fee function introduces new challenges in designing strategyproof mechanisms. Agent preferences may no longer adhere to single-peakedness [22, 5], and standard mechanisms for the classical model cannot be directly extended to our setting while preserving strategyproofness. To ...
Moreover, we complement the proposed mechanisms with tight or nearly tight lower bounds, also parameterized by $r_{e}$. While lower bounds for the classical model are applicable in our model, given that the classical model is a special case of ours, w... | A
Every chain alternates between edge link triangles and vertex link triangles; if we start from the variable triangle side, the first triangle is an edge link triangle. The number of each type of triangle is $2k_{2}$ (see Figure 8 for $k_{2}=1$)... |
Every chain has exactly $2k_{1}$ vertex link triangles (see Figure 7 for $k_{1}=1$). It is clear that every pair of consecutive vertices in the chain are odd-degree vertices in some induc... | It is clear that $S(G(F))$ admits a DIM if and only if the resulting graph $Q$, after the replacement of variable-clause edges by the chains described above, has a DIM. Furthermore, every variable-clause edge of $S(G(F))$... | In any case, the contact vertex is colored white in any valid bi-coloring by Theorem 2 and its neighbor in the edge link triangle should be black, again by Theorem 2. In consequence, the vertices of the edge link triangle in the chain should have different colors. All other edges of the chain form part of some induced ... | This technique does not work for our class under consideration since this operation adds 3 new vertices of degree 2 and they are not part of triangles. Instead of this operation we propose other alternatives to avoid induced cycles of size at most $k$ for any $k$. But all of them come with some cost. ... | C
Nevertheless, neural network-based methods have historically been used in uncertain system control design as a proxy for suboptimal control actions [52, 28, 38], robust feedback policies based on pole-placement [53, 36], or to solve frequently encountered linear matrix inequalities [13, 23]. More recently, instead, an algori... | We have considered the design of ReLU-based approximations of traditional controllers for polytopic systems, enabling implementation even on very fast embedded control systems. We have shown that our reliability certificate requires one to construct and solve an MILP offline, whose associated optimal value characterizes... |
Our approach parallels the development in [17], where we addressed the approximation of model predictive control policies for deterministic systems. We ask whether the training of a ReLU-based neural network to approximate a controller $\Phi(\cdot)$ has been sufficient to ensure that the network’s outp... | We show how to guarantee the (uniform, in a set) ultimate boundedness property [6] of a discrete-time polytopic system when the ReLU approximation replaces a traditional stabilizing controller. Specifically, by focusing on the approximation error between NN-based and traditional controller-based state-to-input mappings, ... |
Capitalizing on the methodology proposed in [17], we take an optimization-based approach to develop analytical tools fulfilling such a quest for theoretical guarantees of ReLU-based approximations of traditional stabilizing controllers for polytopic systems. We develop a purely offline method based on the systematic const... | A
We model the dynamics of the objects using an ordinary differential equation (ODE) and use implicit neural representations to model the appearance, where the static background and the planar dynamics allow us to model the appearance in 2D. Our objective is to estimate the unknown physical parameters, and the initial co... | Qualitative results for a single scene can be seen in Figure 5, Table 2 shows a quantitative evaluation over all sequences. For more results we refer to the appendix. We see that our model produces photorealistic renderings of the scene, even for the predicted frames. While both baselines yield similar results on the t... | For many physical phenomena, humans are able to infer (a rough estimation of) physical quantities from observing a scene, and are even capable to predict what is going to happen in the (near) future. In contrast, physical understanding from videos is an open problem in machine learning.
The physics of many real-world p... | To show the capabilities of our approach on real world data, we captured videos of three physical systems: A block sliding on an inclined plane, a thrown ball, see Figure 6, and a pendulum, see Figure 1. For the block, the initial position and velocity, the angle of the plane and the coefficient of friction are the unk... | For most of the dynamics that can be observed in nature, the temporal evolution of the state can be described by an ODE. For example, for a pendulum the state variables are the deflection angle and the angular velocity, and a two dimensional ODE can be used to describe the dynamics.
| D |
Quantum communication networks (QCNs) utilize quantum mechanics principles to enhance information transfer. QCNs transmit data using quantum states that are entangled and can exist in a superposition of multiple states simultaneously, offering greater efficiency than classical networks [1]. However, these quantum stat... | The majority of QCN models optimize the quantum resource allocation and network overall performance by embedding classical data into quantum states that are shared over quantum channels between distant nodes [3, 4, 5, 6]. Additionally, numerous approaches have been proposed to develop resource-efficient QCNs, including... | In contrast to conventional QCN frameworks, where the receiver typically aims to execute quantum gates and measurements for the precise reconstruction of the embedded data structure within quantum states, our framework distinguishes itself. Here, the receiver’s goal is to draw specific logical conclusions [13], which m... |
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework, dubbed the framework. This framework draws upon recent advancements in two key quantum information science areas. First, it utilizes high-dimensional quantum information and to extract underlying structures of classi... | Here, the stored quantum vectors are initialized to different clusters either in an arbitrary fashion or by utilizing efficient heuristic approaches. Then, multiple iterations are performed such that in each iteration, the goal is to minimize the loss function in (2), which ensures that each vector is assigned to the c... | A |
Since the set of states of $G_{D}$ is identical to (or a subset of) the set of states of $G_{v}$, we can also classify the states of $G_{D}$... | A $C$-enforcing defensive function should ensure that all possible defensive actions keep the eavesdropper confused regardless of system activity.
Thus, when the defensive function is subject to constraints, we propose a new construction by composing a verifier and a defensive verifier of a given system to cap... | Given an $E$-verifier, we can check the necessary condition for the defensive function to be $C$-enforcing by following Algorithm 1.
However, it is possible that a defensive function may not be $C$-enforcing even though the necessary condition is satisfied. | For the given system $G$, we denote the set of possible defensive actions outputted via the defensive function under deletion, insertion, and replacement constraints by $E_{D}$, which is defined as $E_{D}=\bigcup_{t\in E_{o}}D(t)$... | In this section, a verifier and a defensive verifier are constructed to respectively capture system behavior and all feasible defensive actions following system activity.
Then, an $E$-verifier is built by a special synchronization mechanism between a verifier and a defensive verifier to verify $C$-enf... | A
(2) [Lines 7-10] When the timer expires (and the warm-up period is concluded), $K$ consecutive policy function updates are performed.
(3) [Lines 11-12] Using the updated policy, the agent $i$ interacts with the environment to collect the experience trajectory data and computes the values of associated... |
The time between two updates (i.e., update-timer interval) directly impacts the convergence speed, but it is potentially arbitrary. Indeed, it can be set according to the length of an arbitrary episode, the time to process an update, or the replay-buffer sampling factor to generate a mini-batch. However, to strike a b... | A basic training cycle consists of the following steps, as shown in Fig. 5(a). The agents collect experience data by performing network operations under the existing policies, and send these data together with some local policy information to the central entity. The central entity puts the received data in the replay b... | Once the convergence is reached (f.i., after a maximum number of steps or when minimal NN weight updates are performed), the training procedure stops and distributed agents continue the interaction with the environment based on local observations and fixed local policies. However, the training procedures can be re-acti... |
The training process consists of a sequence of update instants, in each of which, multiple DNN updates are performed by sampling $K$ random mini-batches from the replay buffer and updating DNNs’ weights accordingly. Moreover, an initial transient period is needed to reach a steady state (i.e., stationary buff... | A
On the other hand, the principal receives negative utility for false positives:
$u(\theta_{0},L)<0$ for all $L>0$, and $u(\theta_{0},L)$... |
Our study of hypothesis testing in the principal-agent model has forged connections between statistical inference and ideas from the economic theory of mechanism design. The primary conclusion of this work is that the principal who does not know the distribution of agent types should deploy the menu of all $e$... | principal needs only to screen out agents who know that their product is ineffective, which the incentive-aligned statistical
contract does. Moreover, a larger menu that is incentive-aligned is more attractive to the agents, so to maximize participation the principal should offer the largest incentive-aligned menu poss... | Note that although this menu is infinite, it can still be easily implemented. Since it is simple to verify whether or not a contract is incentive-aligned, the principal asks the agent for their proposed incentive-aligned contract and then proceeds with that contract, provided it is indeed incentive-aligned. For the age... |
The principal’s expected utility depends on the distribution over agent types, $Q$, and the menu offered, $\mathcal{F}$. The principal controls the latter, but may not know the former. Thus, we will seek a menu that performs well for many distributions $Q$. Our main result in what follow... | D
This study introduces the novel paradigm of Privacy Preserving Image Registration, designed for allowing image registration in privacy-preserving scenarios where images are confidential and cannot be shared in clear. Leveraging both secure multi-party computation (MPC) and Fully Homomorphic Encryption (FHE), we propose... | In order to avoid local minima and to decrease computation time, we use a hierarchical multiresolution optimization scheme. The scheme involves $M$ resolution steps, denoted as $r_{1}\ldots r_{M}$... |
Fig. 2: Qualitative results for affine registration with MI over 3D medical images using ADNI dataset [33]. The images are presented in a $3\times 4$ grid, with the first row representing the axial axis, the second row the coronal axis, and the third row the sagittal axis. In the first column of each row, the ... | Since the registration gradient is generally driven mainly by a fraction of the image content, such as the image boundaries in the case of SSD cost, a reasonable approximation of Equations (4) and (6) can be obtained by evaluating the cost only on relevant image locations.
This idea has been introduced in medical image... | This work has been supported by the French government, through the 3IA Côte d’Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002, and by the ANR JCJC project Fed-BioMed 19-CE45-0006-01.
| D |
We conduct distillation experiments with different teacher-student model pairs, using ResNet32 / ResNet56 / VGG13 / ResNet110 / ResNet50 / ResNeXt101 as teacher models and ResNet8 / ResNet32 / VGG11 / MobileNet / ResNet34 / ResNeXt50 as student models. | Distillation performance is tested on various datasets, such as MNIST, CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet-1K, and top-1 classification accuracy is used as the evaluation metric.
The experimental results are shown in Tab. 7, Tab. 8 and Tab. 9. |
On MNIST, CIFAR, and Tiny ImageNet, we use ResNet32/56/110 and VGG13 as the teacher model and use ResNet8/32, VGG11, and MobileNet as the student model. We compare the top-1 classification accuracy (ACC) of different teacher-student pairs, the results are shown in Tab. 1. | While training B2KD methods, we randomly select 10K images (100K for ImageNet-1K) from the training set, and all images in the test set (val set for ImageNet) are used as the benchmark to calculate accuracy.
For other approaches, except DAFL [8] based on zero-shot learning, we u... |
Figure 7: Curve of top-1 classification accuracy on the datasets of CIFAR-100 (a,b) and CIFAR-10 (c,d). Using MEKD with soft (a,c) or hard (b,d) responses with or without $\mathcal{L}_{IM}$ and $\mathcal{L}_{KL}$... | A
The second reason is efficient compression of smooth functions. It is known that for functions with $m$ continuous derivatives the $n$-th coefficient is $O(n^{-m})$ for both Chebyshev [MH02, Theorem 5.14] and ... |
All these properties combined make trigonometric and Chebyshev polynomials especially suitable for PDE solutions by spectral methods [Boy01] and adaptive lossless computations with functions [Tre07]. The latter goal is fully realized in the Chebfun software (https://www.chebfun.org). Chebfun demonstrates that computati... | It is important to understand that Equation 3 implies that SNO only realizes mapping between two functions given by truncated series. If one needs to compute these functions on the finer grid, Chebyshev or trigonometric interpolation should be used. In the same vein, Chebyshev/trigonometric smoothing (the chopping of the ser... | Our solution to both of the outlined problems is to detach the process of function approximation (interpolation) from the mapping between functional spaces, and explicitly fix the highest possible resolution. To do that, for both the domain and codomain of the neural operator we consider functions represented by finite series of t...
First, the output of the neural operator is a neural network, i.e., essentially a black-box function. In classical numerical methods such as FEM [Cia02], spectral methods [Boy01] and others, the parametrization allows for extracting bounds on function, its derivatives, and any other local or global information in a co... | A |
To this end, we propose a novel one-to-all spatial matching knowledge distillation approach. In Figure 1 (c), our method distills the teacher’s features at each spatial location into all components of the student features through a parametric correlation, i.e., the distillation loss is a weighted summation of all stude... | As our method computes the correlation between feature spatial locations, it might become intractable when feature maps are large. To this end, we extend our pipeline in a two-step hierarchical fashion:
1) instead of computing correlation of all spatial locations, we split the feature maps into several groups of patche... | Therefore, we propose a hierarchical distillation approach to address this large feature map limitation.
It contains two steps: 1) patch-group distillation that splits the entire feature maps into smaller patches, so to distill local information from the teacher to the student; 2) we further summarize the local patches... | In this section, we first briefly describe the fundamental elements of feature map knowledge distillation and then introduce the general formulation of our knowledge distillation via a target-aware transformer. As our method computes the point-wise correlation of the given feature maps, the computational complexity bec... | To this end, we propose a novel one-to-all spatial matching knowledge distillation approach. In Figure 1 (c), our method distills the teacher’s features at each spatial location into all components of the student features through a parametric correlation, i.e., the distillation loss is a weighted summation of all stude... | A |
Table 1: Quantitative comparison between ENS-t-SNE and MPSE. Each pair of columns denotes what view is measured; either the full three dimensional embedding with respect to the full set of distance matrices or for the two dimensional projections with respect to the corresponding distance matrix. | In order to create a dataset with multiple perspectives, each containing multiple clusters, we propose the following procedure.
Fix the number of points and the number of projections; for each projection fix the number of clusters, and the number of points corresponding to each cluster. | Consider, for comparison, the standard t-SNE visualization of the same dataset in 2D; see Figure 6(b). The dominant factor for the embedding is the number of cylinders, resulting in three well-separated clusters in the embedding. Note, however, that the t-SNE embedding completely missed the weight information, as there... | We continue by analyzing the influence of the number of datapoints on the running time of the algorithm.
For this purpose, we create datasets containing clusters according to Section 3. We set the number of perspectives $M=2,3$ and the number of clusters per perspective $NC_{m}=2$... | In this section we consider the scalability of ENS-t-SNE with respect to the number of perspectives, the number of clusters per perspective, and the number of datapoints. In particular, the goal is to evaluate how the accuracy or the speed of ENS-t-SNE is affected as these parameters increase in value.
The results indi... | D
We analyze the sample efficiency of ETC under the future and past sufficiency assumptions. In particular, such assumptions ensure that the future and past observations are sufficient for identifying the belief state, which captures the information-theoretic difficulty of POMDPs. We prove that ETC attains an $O(1/\epsilon^{2})$... | Deep reinforcement learning demonstrates significant empirical successes in Markov decision processes (MDPs) with large state spaces (Mnih et al., 2013, 2015; Silver et al., 2016, 2017). Such empirical successes are attributed to the integration of representation learning into reinforcement learning. In other words, ma... | Related Work. Our work follows the previous studies of POMDPs. In general, solving a POMDP is intractable from both the computational and the statistical perspectives (Papadimitriou and Tsitsiklis, 1987; Vlassis et al., 2012; Azizzadenesheli et al., 2016; Guo et al., 2016; Jin et al., 2020a). Given such computational a... |
In contrast, partially observed Markov decision processes (POMDPs) with large observation and state spaces remain significantly more challenging. Due to a lack of the Markov property, the low-dimensional feature of the observation at each step is insufficient for the prediction and control of the future (Sondik, 1971;... |
In the case that maintaining a belief or conducting the prediction is intractable, previous approaches establish predictive states (Hefny et al., 2015; Sun et al., 2016), which is an embedding that is sufficient for inferring the density of future observations given the interaction history. Such approaches typically r... | B |
$\mu_{h}(S_{h},\Gamma_{h-1})\coloneqq\frac{\mathcal{P}^{\pi}_{h}(S_{h},\Gamma_{h-1})}{\mathcal{P}^{b}_{h}(S_{h},\Gamma_{h-1})}.$ ... | From a theoretical perspective, the identification result and the backward induction property of the bridge functions provide a way of decomposing the suboptimality of the learned policy in terms of statistical errors of the bridge functions.
When combined with the pessimism and the fast statistical rates enjoyed by an... | However, in many real-world applications, due to certain privacy concerns or limitations of the sensor apparatus, the states of the environment cannot be directly stored in the offline datasets. Instead, only partial observations generated from the states of the environments are stored (Dulac-Arnold et al., 2021).
For ... | The existence of such bridge functions is justified, e.g., by conditions on the rank of certain conditional probabilities or singular values of certain conditional expectation linear operators.
We present the following examples to explain the existence in the tabular case with reactive policies. | Now given Assumption 3.1 and Assumption 3.5 on the existence of proxy variables and bridge functions, we are ready to present the main identification result.
It represents the true policy value $J(\pi)$ via the value bridge functions (3.1), | C
where $\mathcal{L}(\bm{x},\bm{\lambda})=f(\bm{x})+\bm{\lambda}^{T}c(\bm{x})$ is the La... |
There are numerous methods for solving constrained optimization problems, such as projection-based methods, penalty methods, augmented Lagrangian methods, and sequential quadratic programming (SQP) methods (Nocedal2006Numerical). This paper particularly considers solving constrained stochastic optimization problems vi... | On the other hand, a growing body of literature leverages optimization procedures to facilitate online inference, starting with Robbins1951stochastic; Kiefer1952Stochastic and continuing through Robbins1971convergence; Fabian1973Asymptotically; Ermoliev1983Stochastic. To study the asymptotic distribution of stochastic ... | In particular, the StoSQP method computes a stochastic Newton direction in each iteration by solving a quadratic program, whose objective model is estimated using the new sample. Then, the method selects a proper stepsize to achieve a sufficient reduction on the merit function, which balances the optimality and feasibi... | Given the prevalence of streaming datasets in modern problems, offline methods that require dealing with a large batch set in each step are less attractive. It is desirable to design fully online methods, where only a single sample is used in each step, and to perform online statistical inference by leveraging those me... | D |
This condition was discussed by Bercovier and Pironneau in [2], which turned out as an enabler of (3), see [15]. We will refer to this inf-sup condition as the discrete BP condition.
In [2] the proof was given for $k=2$ and for meshes made of rectangles for $d=2$ and bricks for $d=3$... | The LBB condition is essential for the well-posedness of the Stokes problem. Its verification for the case (a) is typically done by an indirect proof using a compactness argument. We are aware of only one reference, namely [1], which contains a proof for the case $|\Gamma_{N}|>0$...
A popular class of finite element methods for discretizing the Stokes problem is the generalized Taylor-Hood family $P_{k}$–$P_{k-1}$ on triangular/tetrahedral meshes with cont... |
The rest of the paper is organized as follows. In Section 2 the technique of $T$-coercivity is discussed, which provides important auxiliary results for Section 3, which is the main section of the paper and contains the analysis of the discrete inf-sup conditions. In Appendix A known results on the continuous... | Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family $Q_{k}$–$Q_{k-1}$ on quadrilateral/hexahedral meshes focused on the case... | C
At the heart of WaveMix are three design elements – a stack of self-similar WaveMix blocks, a multi-level two-dimensional discrete wavelet transform (2D-DWT) in each block, and spatial resolution contraction followed by expansion back to the original size within a block. Using self-similar stacked blocks makes the arc... | We compared WaveMix models with various CNN, transformer, and token-mixing models for semantic segmentation and image classification. Ablation studies were conducted to assess the effect of the hyperparameters and the importance of each component and its placement in the WaveMix block.
|
Table 4 shows the performance of WaveMix on image classification using supervised learning on ImageNet-1K on a single GPU with limited epochs. WaveMix models outperform CNN and transformer-based models, and token-mixers. The use of non-learnable fixed weights and shallower network structure also makes inference using ... | We relate WaveMix to previous works in Section 2, where we delve further into the image priors modeled by various classes of neural architectures for vision, and the use of wavelet transform. Our key innovations – the WaveMix blocks, use of multi-level 2D-DWT in each block, channel mixing, and the preservation of featu... | For WaveMix model notation, we use the format Model Name -Embedding Dimension/ no. of blocks and mention the number of levels of DWT in brackets. We call the WaveMix model which uses only one level of 2D-DWT as WaveMix-Lite and it has been shown to perform well in small datasets with low resolution images. For other mo... | C |
$\beta=(6,5,5,3,1)$. The starred terms in the matrix below are entries $a_{i,j}$ such that $a_{i,j}=\alpha_{i}+\beta_{j}$... | $\beta=(6,5,5,3,1)$. The starred terms in the matrix below are entries $a_{i,j}$ such that $a_{i,j}=\alpha_{i}+\beta_{j}$... | the terms $\partial P_{i}/\partial x_{j}^{(a_{i,j})}$... | where $(\alpha_{i})_{i}$ and $(\beta_{j})_{j}$ define... | provide a maximal transversal sum. For all entries, one has $a_{i,j}\leq\alpha_{i}+\beta_{j}$... | D
Alternatively, welleck2021naturalproofs propose mathematical reference retrieval as an analogue of premise selection. The goal is to retrieve the set of references (theorems, lemmas, definitions) that occur in its proof, formulated as a ranking problem (retrieval). An example conjecture with supporting premises is disp... |
Informal premise selection limitations. ferreira2020premise note that the graph-based approach to premise selection as link prediction struggles to encode mathematical statements which are mostly formulae, and suggest inclusion of structural embeddings (e.g. MathBERT peng2021mathbert) and training BERT on a mathematic... |
Separate approaches for learning mathematics and language representations may lead to improved performance, as it has in other tasks. Research in neuroscience butterworth2002mathematics; amalric2016origins suggests the brain handles mathematics separately to natural language: approaches in premise selection ferreira20... | Treating mathematics and language as one modality during training hinders performance in other tasks, but is the current norm in premise selection. Regardless of the task variation, current approaches ferreira2020premise; welleck2021naturalproofs; hancontrastive; coavoux2021learning tend to jointly consider mathematics... | Formula Retrieval. Similar to identifier-definition extraction, formula retrieval suffers from issues with wildcard formula queries (e.g. $T=g_{\alpha\beta}T^{\alpha\beta}$... | C
Since the SBM is a very flexible model, it has already been adapted to multilayer networks. To name a few, Matias and Miele, (2017) model a collection of networks along a time gradient, the connectivity structure varies from time to time but they integrate a sparsity parameter, which is similar to our density paramete... |
Most contributions about collections of networks rely on some node correspondence between the networks. Recently, motivated by the analysis of fMRI data a few works extend the SBM to model population of networks (Paul and Chen, 2018; Pavlović et al., 2020). Le et al. (2018) make the assumption that the networks of ... | Dealing with networks with no node correspondence, Faust and Skvoretz (2002) compare networks involving different species and interaction types using the parameters of exponential random graph models (ERGMs or $p^{*}$ models). More recently, Yin et al. ... |
Since the SBM is a very flexible model, it has already been adapted to multilayer networks. To name a few, Matias and Miele, (2017) model a collection of networks along a time gradient, the connectivity structure varies from time to time but they integrate a sparsity parameter, which is similar to our density paramete... | Finally on partitioning a collection of networks, Mukherjee et al., (2017) use graph moments (they also propose to fit a mixture of graphon when having access to node correspondence between the networks in the collection and then to make a spectral clustering on the distance matrix between networks), while Sweet et al.... | A |
Compositionality: In this work, covariate shift is introduced in test set by intervening on only one mechanism (i.e. rotation).
In the setting where multiple mechanisms are considered, it will be ideal if multiple $E$s could leverage the knowledge learned separately and cooperate with each other. | However, some preliminary results show that $E$s will not generalize well, if the training is based on only the interventions of target mechanism and keeping the others fixed.
This is in line with [26], where the generalization improves only if more combinations of two mechanisms (category and pose) are expose... | It can also be noticed that the pre-trained modules $E$ and $D$ do not have to access MNIST during training, and do not rely on $C$ too much either.
Based on the fact that the training and test set of MNIST share the same class label space, we also explore a second architecture that only empl... | Compositionality: In this work, covariate shift is introduced in test set by intervening on only one mechanism (i.e. rotation).
In the setting where multiple mechanisms are considered, it will be ideal if multiple $E$s could leverage the knowledge learned separately and cooperate with each other. | This task is extensively studied in various computer vision topics, such as 2D spatial invariance learning [16], text detection [46, 45, 44, 43], and 3D pose estimations [26, 6], among many others.
However, in most of the existing studies, parameter estimation can only be performed restricted to object categories that ... | A |
Learner Architecture.
Recall that our PU learner is parameterized in terms of an encoder $g_{B}(\cdot)$ with parameters $B$ and a linear layer with parameters $\mathbf{v}$. On PU-CIFAR(Animal/Vehicle) we perform experiment... | Contrastive Loss Baselines.
We compare puNCE with several popular contrastive losses including unsupervised variants - InfoNCE (Chen et al., 2020b, ), Debiased Contrastive Learning (DCL) (Chuang et al., , 2020) as well as variants that can leverage explicit supervision - Supervised Contrastive Learning (abbreviated Sup... | We compare puNCE with several popular contrastive losses including unsupervised variants - InfoNCE (Chen et al., 2020b, ), Debiased Contrastive Learning (DCL) (Chuang et al., , 2020) as well as variants that can leverage explicit supervision - Supervised Contrastive Learning (abbreviated SupCon) (Khosla et al., , 2020;... | We compare puNCE with several popular contrastive losses including unsupervised variants - InfoNCE (Chen et al., 2020b, ), Debiased Contrastive Learning (DCL) (Chuang et al., , 2020) as well as variants that can leverage explicit supervision - Supervised Contrastive Learning (abbreviated SupCon) (Khosla et al., , 2020;... | Contrastive learning with supervision.
Supervised Contrastive Learning (SCL) (Khosla et al., , 2020; Zhong et al., , 2021; Graf et al., , 2021; Assran et al., , 2020) is a supervised variant of infoNCE that considers multiple positive pairs from other samples belonging to the same class as anchor in addition to the aug... | A |
Recent work has productively cast the study of multilayer community structure in the language of multilinear algebra (Wu et al., 2019), furnishing tensor-based definitions of multilayer stochastic block models (SBMs)(Schein et al., 2016; De Bacco et al., 2017; Gauvin et al., 2014; Carlen et al., 2022; Tarrés-Deulofeu... | We extend and generalize these efforts, connecting the tensorial nonnegative Tucker decomposition (NNTuck)with KL-divergence to the statistical inference of multilayer SBMs.
We show that minimizing the KL-divergence of the NNTuck is exactly equivalent to maximizing the log-likelihood of observing a multilayer network ... | The significance of this proposition is noticing that not only is minimizing KL-divergence equivalent to maximizing log-likelihood, but also that the algorithm by which to find a local minimum of the KL-divergence is the exact same as that to find a local maximum of the log-likelihood. Moreover, using EM to maximize th... | We begin this section by outlining our approach to a multilayer SBM that corresponds to a nonnegative Tucker decomposition with KL-divergence.
We will henceforth refer to the multilayer SBM developed here as just the nonnegative Tucker decomposition (NNTuck), although it’s important to note that the SBM interpretation... |
In this work we use the nonnegative Tucker decomposition (NNTuck) with KL-divergence as an extension of the stochastic block model (SBM) to multilayer networks. The NNTuck allows for layers in the network to have latent structure, just as the SBM allows for latent structure in the nodes of a single layer network. Usin... | A
In this section, we evaluate the effectiveness and efficiency of our model on widely used benchmark datasets, scrutinize the contribution of each component of the model in an ablation study, and present a case study for a better understanding of our model.
| Also, TransferNet and NSM are state-of-the-art (SOTA) reasoning-based methods in multi-relation QA.
Furthermore, considering that our answer search in the learned embedding space is similar to embedding-based methods, we also select a prominently adopted embedding-based method: EmbedKGQA [22]. | This reflects that it is indeed challenging for the simple reasoning of QAGCN to answer complex 3-hop questions.
However, the performance of QAGCN is better than most reasoning-based methods, e.g., 29.8% and 1.6% higher than the best-performing RL-based method SRN on MetaQA 3-hop and PQ-3hop, respectively. | Baselines We first evaluate the overall effectiveness of QAGCN in the multi-relation QA task.
Given that the main goal of this paper is to propose a simple method that is competitive with existing reasoning-based methods that rely on complex reasoning mechanism, we mainly choose reasoning-based QA methods as baselines:... | However, please note that QAGCN also has a large margin of 5.8% in comparison with the third-best method NSM.
This demonstrates that, on complex questions, the simple single-step reasoning of QAGCN could perform better than the SOTA methods with complex multi-step label propagation. | C |
When classifying the phrase “Do you want to play football?”, the word football is recognized and each of the football, topic
weights are connected to the sum qubit for the corresponding topic. Each of the topic qubits is measured, and the winner is the topic | similar to a realistic use case for quantum NLP. The average number of words in each review is between 228 and 229, and classification experiments were performed in batches of 50 documents: the number of word tokens involved in each experiment was around 11000 (whereas the total size of
the lambeq training and test... | The design above is clear but wasteful — a system that requires distinct bits for each word in the vocabulary would be fine in classical
but not yet in quantum computing. In machine learning terms, using a qubit for each word is an example of a ‘one-hot encoding’, | and two-qubit gates, in particular the ‘controlled-X’ or ‘controlled-NOT’ (CNOT) gate that performs an X-rotation on the second qubit
if the first qubit is in the $\lvert 1\rangle$ state. Example circuit diagram components for these are shown in Figure 2. |
Building off of an $n$-qubit feature map for $n$-dimensional word vectors, the same QSVM classification process was followed for densely encoded feature maps (alexander2022quantum). In this case, vector representations were encoded into fewer qubits in the feature map circuit, using $\log_{2}(n)$... | B
Pei et al. (2020) firstly draw attention to the limitation of GNN on less-homophilic graphs.
Since then, various GNNs have been proposed to improve performance on these graphs. H2GCN (Zhu et al. 2020) show that proper utilization of ego-embedding, higher-order neighbourhoods, and intermediate embeddings can improve res... | Pei et al. (2020) firstly draw attention to the limitation of GNN on less-homophilic graphs.
Since then, various GNNs have been proposed to improve performance on these graphs. H2GCN (Zhu et al. 2020) show that proper utilization of ego-embedding, higher-order neighbourhoods, and intermediate embeddings can improve res... | This observation shows that the eigenvectors corresponding to the leading eigenvalues do not always align well with the node labels. Particularly, in a heterophilic graph, two adjacent nodes are unlikely to have the same label, which is in contradiction with the smoothness properties of leading eigenvectors. However, w... | Law, Urtasun, and Zemel (2017) and Bach and Jordan (2003) reveal that minimizing the loss of node similarity matrix can be seen as learning the leading eigenvector representations used for spectral clustering. Bianchi, Grattarola, and Alippi (2020) train cluster assignment using a similarity matrix and further use the ... |
We propose an approach to enhance GNN performance on less-homophilic graphs by restructuring the graph to maximize homophily. Our method is inspired and closely related to Spectral Clustering (SC). It extends SC beyond the leading eigenvalues and learns the frequencies that are best suited to cluster a graph. To achie... | C |
Another set of experiments in Figure 4(b) illustrates how a larger number of learners leads to better outcomes in terms of total risk. We consider a set of $m=2$ learners and $n=50$ subpopulations. We simulate the dynamics until the market has reached equilibrium, at which point a ... | Despite these inherent difficulties, we find that the situation improves as the number of learners increases. It is straightforward to see that the maximal social welfare will increase:
any point which is optimal for $m$ learners can be trivially transformed into a feasible point for $m+1$ lear... | Another set of experiments in Figure 4(b) illustrates how a larger number of learners leads to better outcomes in terms of total risk. We consider a set of $m=2$ learners and $n=50$ subpopulations. We simulate the dynamics until the market has reached equilibrium, at which point a ... | The procedure repeats until the number of learners reaches the number of subpopulations. These simulations illustrate that more competition improves social welfare; however, the improvements are not uniform for all subpopulations, with some groups seeing their risk at equilibrium increase with the addition of new learners.
| Figure (a) illustrates a setting with 3 subpopulations and 2 learners. The solid lines correspond to the risk trajectory for the unstable balanced equilibrium at initialization. Dotted and dashed lines illustrate risk trajectories under three different slight perturbations from the initialization.
In Figure (b), the l... | C
M_{i}^{y}\cup\{\alpha_{g}^{yy_{i+1}}\}_{g\in\mathcal{G}}\cup\{0,\gamma\}... | Table 5 and Table 6 list the fairness lower bound calculated for each of these classifiers, as well as the upper bound obtained by each of the tested methods, and the ratio between the smallest upper bound and the lower bound. In some of the experiments the lower bound is equal to the upper bound, giving the exact valu... | Since the ordering of the labels in the greedy procedure is arbitrary, it is possible to attempt several different orderings and select the one that obtains the smallest unfairness value. In our experiments, we tried 10 random orderings in each upper bound calculation.
| In all of the cases above, we get an accurate result (up to tolerance γ) in the case of two labels. In the case of more than two labels, we get a heuristic result, since our minimization procedure is not guaranteed to converge to \texttt{mindisc}_{\beta}... | “Suppose that the classifier is known to be nearly fair. How accurate can it be?”. If it is known that the classifier’s unfairness is smaller than some value, then the minimal possible error of the classifier is the smallest value of the error lower bound in the part of the Pareto curve that is to the left of this unfa... | B
Firstly (Left): SotA is based on range queries, with radius r as input. There are infinitely many ranges to probe, whereas most return the same motif. A simple, yet often overlooked, improvement to only get distinct motifs is to use k-NN queries.
Secondly (Center): Consider two motif sets with 3 subsequences each.... |
Though intuitively easy to describe, the specific definitions of the MD problem for a TS T differ notably between existing works. Several tools focus only on motif pairs (Mueen et al., 2009; Yeh et al., 2016), which are defined as the most similar pair(s) of subsequences of T of user-defined leng... |
Ice Ice Baby by Vanilla Ice: This song contains one famous motif set with 20 repetitions roughly 4 s long from the introductory example. Learning-k (Section 5) took 3.4 s. Given these silver standard parameters, all competitor methods find this riff but with up to twice as large an extent. k-Motif... | Finally (Right): We introduce elbow plots for a guided extraction of meaningful motif set sizes. Here, rapid changes in similarity when increasing k represent a characteristic change from one motif to another. Overall, we will show that these improvements reduce the runtime and human effort to find motif sets consider... | In this paper, we introduce k-Motiflets, a novel definition for MD that turns the problem upside-down. k-Motiflets take the desired motif set size k as a parameter and maximize the similarity of the motif set. As we will show, this k is an integer with an easily understood inte... | C
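As a rough illustration of the elbow idea described in this row (rapid changes in similarity when increasing k mark the transition from one motif to another), here is a generic NumPy sketch that flags such jumps; it is not the k-Motiflets implementation, and the threshold rule is an assumption:

```python
import numpy as np

def elbow_points(extents):
    """Given extents[k] = extent (max pairwise distance) of the best motif set
    of size k (k = 2, 3, ...), flag sizes where the extent jumps sharply,
    i.e. candidate 'elbows' separating one motif from the next."""
    e = np.asarray(extents, dtype=float)
    # relative change when growing the motif set by one element
    growth = e[1:] / np.maximum(e[:-1], 1e-12)
    # an elbow: growth well above the typical (median) growth
    threshold = np.median(growth) * 1.5
    return [k for k, g in enumerate(growth, start=3) if g > threshold]

# toy curve: extent stays flat up to k=6, then jumps (the motif has ~6 members)
extents = [1.0, 1.05, 1.1, 1.12, 1.15, 3.0, 3.1, 3.2]   # k = 2..9
print(elbow_points(extents))   # -> [7]  (jump when going from k=6 to k=7)
```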
Different from [27]-[30], the measurement noises are only assumed to be a martingale difference sequence and independent of the graphs and regression matrices in Assumption (A2). In this paper, neither mutual independence nor spatio-temporal independence is assumed on the regression matrices and graphs.
This is applica... |
The problems of decentralized online regression over graphs have been investigated in most of the literature, including regression with time-varying unknown parameters (R.T.V.P.) (e.g. [32]-[33]) and regression with time-invariant unknown parameters (R.T.I.P.) (e.g. [27]-[30], [34], [38], [45] and [51]-[52]), where di... | Historically, Guo [41] first proposed the stochastic persistence of excitation condition for analyzing the centralized Kalman filtering algorithm, which was then refined in [42]. Whereafter, the cooperative information condition on the conditional expectations of the regression matrices over the deterministic connected... | At each time step, every node runs an online estimation algorithm consisting of an innovation term processing its own new measurement, a consensus term taking a weighted sum of estimations of its own and its neighbors with additive and multiplicative communication noises and a regularization term preventing over-fittin... | At present, the non-regularized decentralized linear regression problems have been widely studied in [27]-[38].
Xie and Guo [32]-[33] considered the time-varying linear regression with measurement noises, where the cooperative information condition on the conditional expectations of the regression matrices was proposed... | A |
To reduce the computational cost of AA, a recently proposed variant of AA called Alternating Anderson Richardson (AAR) 21, 22, 23 performs multiple fixed-point iterations between two consecutive Anderson mixing corrections. The number of fixed-point iterations between two consecutive Anderson mixing corrections can be ... | One possibility to reduce the computational cost is to perform approximate/inexact calculations in the AAR scheme. To avoid compromising the convergence of AAR, the computation accuracy reduction must be performed judiciously.
For specific problems, the accuracy has been reduced by projecting the least-squares problem ... |
Although AA has been shown to significantly improve the convergence of fixed-point iterations in several scientific applications, effectively performing AA for large-scale problems without introducing excessive computational burden still remains a challenge. One possibility to reduce the computational cost consists in... | By interpreting the accuracy reduction in AAR calculations as a perturbation to the original least-squares problem in the Anderson mixing computation, we assess how much accuracy can be sacrificed to limit the communication and computational burden without compromising convergence. Along the same lines as
previously pu... | Our theoretical results allow for accuracy reduction in different calculations performed by AAR on linear fixed-point problems. When the fixed-point operator evaluations are the dominant computational cost of AAR, one may choose to approximate the evaluations of the fixed-point operator to reduce the computational cost... | A |
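For readers unfamiliar with AAR, the following is a minimal NumPy sketch of the alternating scheme described in this row (plain Richardson fixed-point steps with an Anderson mixing correction applied every p-th iteration) for a linear system; the parameter names and defaults are illustrative and do not follow the cited papers:

```python
import numpy as np

def aar(A, b, omega=0.5, m=5, p=4, iters=200):
    """Alternating Anderson-Richardson sketch for the fixed point x = x + omega*(b - A@x)."""
    n = len(b)
    x = np.zeros(n)
    X_hist, F_hist = [], []                      # recent iterates and residuals
    for k in range(iters):
        f = b - A @ x                            # residual of the fixed-point map
        X_hist.append(x.copy()); F_hist.append(f.copy())
        X_hist, F_hist = X_hist[-(m + 1):], F_hist[-(m + 1):]
        if (k + 1) % p == 0 and len(F_hist) > 1:
            # Anderson mixing correction over the stored history
            dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
            dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + omega * f - (dX + omega * dF) @ gamma
        else:
            x = x + omega * f                    # plain Richardson step
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([5.0, 5.0])
print(aar(A, b), np.linalg.solve(A, b))   # both ≈ [1., 2.]
```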
There exist several controllable approaches that prepend information to the input source to influence the different aspects of the text such as the style [11] or the presence of a particular entity [12, 13]. Even though this technique can be readily combined with topic controllable summarization, this direction has not... | For the tagging-based method, all the words of the input document are lemmatized to their roots using NLTK [32]. Then, we tag the common words between the existing lemmatized tokens and the representative words for the desired topic, based on the top-N=100 most representative terms for each topic.
|
Going one step further, we propose another method for controlling the output of a model based on tagging the most representative terms for each thematic category using a special control token. The proposed method assumes the existence of a set of the most representative terms for each topic. Then, given a document and... | Given the set of representative words for each topic, a document, and the desired topic, the tagging mechanism works as follows. All the words of the input document are lemmatized to their roots. Then, we identify the common words between the existing lemmatized tokens and the representative words for the desired topic... |
We propose three different approaches to control the generation of the output summaries using control tokens: a) prepending the thematic category as a special token to the document, b) tagging with special tokens the representative terms for each topic, and c) combination of both control tokens. | B |
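A minimal sketch of the tagging mechanism described in this row, assuming NLTK's WordNet lemmatizer and a hypothetical control token; the actual token and the top-N term lists are defined by the cited work:

```python
import nltk
from nltk.stem import WordNetLemmatizer
# nltk.download('wordnet')  # one-time download of the WordNet data

def tag_topic_terms(document_tokens, topic_terms, control_token="<TOPIC>"):
    """Wrap every token whose lemma appears in the topic's representative
    terms with a special control token."""
    lemmatizer = WordNetLemmatizer()
    topic_terms = set(topic_terms)
    tagged = []
    for tok in document_tokens:
        if lemmatizer.lemmatize(tok.lower()) in topic_terms:
            tagged.append(f"{control_token} {tok} {control_token}")
        else:
            tagged.append(tok)
    return " ".join(tagged)

print(tag_topic_terms("The players won the final match".split(),
                      topic_terms={"player", "match", "team"}))
# -> "The <TOPIC> players <TOPIC> won the final <TOPIC> match <TOPIC>"
```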
Figure 1: The standard cell for a 3D implementation of a Toffoli gate: a) Green vertices are the control qubits of the Toffoli gate, and the orange vertex is the target. In the Clifford+T decomposition of the Toffoli gate, the orange and green qubits are CNOT controls and the grey qubits are CNOT targets; b) Pink edge... |
The optimal design of large-scale circuits – both quantum and classical – is not computationally tractable, necessitating the use of sub-optimal heuristics. Noting that the qubit layout of a quantum computer is generally regular and similar to a tiling, large-scale quantum circuit design challenges can be naturally ma... |
The naive “full-custom” approach for optimizing large-scale classical circuit design automation is to automatically explore the entire range of transistor parameters and all possible interconnected circuit structures that achieve a desired complex computational function. However, this is intractable when billions or t... |
This work therefore describes the application of this tiling method to the design of a multiplication circuit. By creating a tile and repeatedly using it in a circuit, this standard cell approach leads to the realization of quantum circuits that are superior to those developed through sophisticated algorithms. Additio... | Given the extreme complexity of VLSI circuit design, the use of “standard cells” have therefore become a mainstream technique for the efficient design of large-scale high-performance computing systems [5] within standard circuit design curricula [6]. In the conventional VLSI standard cell approach, a library of standar... | B |
The parameters stacked in between the layers are shaped to match the size of the layer they are stacked onto. The reason we pass the parameters in each block is that if we only passed them in the first block, it would be difficult for the later blocks to retain them. This problem is somewhat similar to the degradation prob... |
After training the autoencoder, the outputs of 8 encoding layers are used as input image features for the later part of our network. The encoding layers transform the input image into a low-dimensional vector space. This compression removes redundancy in the input. Providing these low dimensional representations of the i... |
In this model, the parameters of the input image are not fixed. Hence, we also pass the input image parameters along with the output image parameters. This model is more generalizable than the previous one but also has a more complex non-linearity to learn and hence, lags behind in performance compared to default-to-p... |
Our method consists of two technical modules: 1) an image reconstruction auto-encoder [14] to extract image features of the input image; and 2) a coarse-to-fine fully convolutional network which utilizes the image features from the auto-encoder and enforces the construction of output image with desired parameters. In ... | It can be observed that the default-to-param model achieves a higher mean PSNR and lower MAE compared to Param-to-Param model. We believe this is due to the more complex nonlinearity associated with the task of reparameterizing from any parameter than from a fixed parameter.
| B |
Our primary aim is to develop structure-preserving Eulerian algorithms to solve L^{2}-gradient flows and structure-preserving Lagrangian algorithms to solve generalized diffusions based on their energy-dissipation law by utilizing neural networks as a... | To overcome difficulties arising from neural network discretization, we develop a discretization approach that performs temporal discretizations before spatial discretizations. This approach leads to a computer-memory-efficient implementation of neural network-based algorithms.
Since neural networks are advantageous du... |
The goal of this paper is to combine neural network-based spatial discretization with the framework of the discrete energetic variational approach [50], to develop efficient and robust numerical schemes, termed as energetic variational neural network (EVNN) methods. |
Our primary aim is to develop structure-preserving Eulerian algorithms to solve L^{2}-gradient flows and structure-preserving Lagrangian algorithms to solve generalized diffusions based on their energy-dissipation law by utilizing neural networks as a... | In this paper, we develop structure-preserving EVNN schemes for simulating the L^{2}-gradient flows and the generalized diffusions by utilizing a neural network as a tool for spatial discretization, within the framework of the discrete energetic variat... | A
We review classical constructions of sheaf theory, such as integral transforms and kernel compositions. We recall the definition of the convolution distance between (derived) sheaves of 𝐤-vector spaces on a finite-dimensional real vector space, as developed by Kashiwara-Schapira [17] and provide pr... | The interplay between sheaves on a real vector space and persistence theory necessitates the use of a topology on a vector space introduced by Kashiwara and Schapira [16], called the γ-topology. In this section, we first recall the basic definitions associated to the γ-topology. There is... | We review the notion of γ-sheaves, and recall the precise relationship between this type of sheaves and persistence modules [3]. We then strengthen one of our previous results, asserting that the interleaving distance between persistence modules equals the convolution distance between their associated γ... | The matchings between graded barcodes of γ-sheaves are defined in the same way as between barcodes of persistence modules. Therefore, one can compute the bottleneck distance between barcodes of γ-sheaves using already existing software [29, 30]. It is nevertheless far from being true whe... | One of the challenges of multi-parameter persistence is to provide a meaningful notion of distance between persistence modules which can be computed in a reasonable time complexity. Indeed, it has been shown that the usual interleaving distance between persistence modules is NP-hard to compute in the multi-parameter ca... | B
where the symbols take the same values as in equation 3, and additionally, N𝑁Nitalic_N is the total number of instances in the dataset, and F𝐹Fitalic_F is the total number of free model parameters. The first term in equation 4 represents the log-likelihood of the learnt graph G𝐺Gitalic_G generating the dataset D𝐷D... | We compare the sensitivity of the F1 score relative to the default alphabetic ordering and two other orderings which we term “optimal” and “worst”. In optimal ordering, the variables are ordered so that they are consistent with the topological ordering of the nodes in the reference graph. This optimal ordering ensures ... | The above scores are decomposable so that the score for the whole DAG is the sum of the individual scores for each node and its parents, which facilitates computing the scores of neighbouring DAGs. The scores which we use in this study are all score equivalent which means that all DAGs in an equivalence class have the ... | We investigate the effect of variable ordering on two score-based algorithms. The first of these, HC, starts from an empty DAG and compares the scores of each neighbouring DAG, and moves to the neighbouring DAG with the highest score at each iteration. The process continues until there is no neighbouring DAG which impr... |
The results in this study are obtained using the algorithm implementations provided in version 4.7 of the bnlearn package [32, 35]. We use the default objective scores, conditional independence test functions and hyper-parameters. For score-based and hybrid algorithms, the default is to use the BIC score with a compl... | B
Firstly, we employ a lookup table (LUT) based computation technique to mitigate redundant calculations caused by digitized binary weights after BCQ.
Furthermore, since most non-uniform quantization methods involve complex operations with limited parallelism and often lack hardware support, we design LUT-GEMM to efficie... | This simple yet profound enhancement enables the representation of both non-uniform and uniform quantization methods within the extended BCQ format, providing us with the flexibility to leverage various quantization techniques based on the specific requirements of LLMs.
Finally, we further refine the implementation det... | 1) We verify that BCQ is capable of representing both uniform and non-uniform weight quantization.
2) We show that LUT-GEMM, using the BCQ format, offers a broad spectrum of latency and accuracy trade-offs, leveraging GPU-specific hardware utilization methods to implement various BCQ configurations efficiently. | In Figure 3, we can see that for q-bit quantization, the uniform quantization method employs a single scaling factor, while the non-uniform quantization technique calls for the use of q distinct scaling factors.
Consequently, due to the incorporation of the extended binary-coding quantization form... | It is worth noting that BCQ was initially proposed to support non-uniform quantization, which relies on customized hardware for bit-level operations.
To our knowledge, we are the first to show that prior uniform quantization can be reformulated in the form of BCQ, allowing LUT-GEMM to support both non-uniform and unifo... | A |
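As a small illustration of the BCQ format discussed in this row (weights approximated as a sum of signed binary patterns with per-bit scaling factors), here is a greedy NumPy sketch; it is not the LUT-GEMM kernel, and the greedy residual rule is an assumption:

```python
import numpy as np

def bcq_quantize(w, bits):
    """Greedy binary-coding quantization: approximate w ≈ sum_i alpha_i * B_i
    with B_i in {-1, +1} and per-bit scaling factors alpha_i."""
    residual = np.array(w, dtype=float)
    alphas, codes = [], []
    for _ in range(bits):
        B = np.where(residual >= 0, 1.0, -1.0)     # sign pattern of the residual
        alpha = np.mean(np.abs(residual))          # least-squares optimal scale for this B
        alphas.append(alpha)
        codes.append(B)
        residual -= alpha * B
    return np.array(alphas), np.array(codes)

def bcq_dequantize(alphas, codes):
    return np.sum(alphas[:, None] * codes, axis=0)

w = np.array([0.31, -0.12, 0.84, -0.56])
alphas, codes = bcq_quantize(w, bits=3)
print(bcq_dequantize(alphas, codes))   # close to w; quality improves with more bits
```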
Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee... | Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee... | The above reduction of Min-TSS to Dist-Nonhalt uses an auxiliary graph that contains parallel edges. Therefore, the stated complexity results hold for general (i.e. not necessarily simple) graphs. However, in [13, Corollary 22], Hladký, Kráľ, and Norine proved that if one subdivides each edge of a graph with a new vert... | Now we turn to the proof of the main result of the paper, and show that approximating the rank of a graph divisor within reasonable bounds is hard. The high level idea of the proof is the following. First, we show that the Minimum Target Set Selection problem reduces to computing the distance of a divisor on an auxilia... |
From the positive side, Luo [16] introduced the notion of rank-determining sets of metric graphs, and verified the existence of finite rank-determining sets constructively. Hladký, Král, and Norine [13] confirmed a conjecture of Baker [2] relating the ranks of a divisor on a graph and on a tropical curve, and provided... | D |
Figure 11: Attention distribution between time step and channel. The top row is the weight from the first TCJA module in TCJA-SNN working with the DVS128 Gesture dataset. We select sparse and dense attention frames in both temporal-wise (T=3,6) and channel-wise (C=33,77)... |
As mentioned above, we contend that the frame at the current time step exhibits a significant correlation with its neighboring frames in both the channel and temporal dimensions. This correlation opens up the possibility of employing a mechanism to establish a connection between these two dimensions. Initially, we emp... | To thoroughly examine the impact of the TLA and CLA modules, we conducted a series of ablation studies. The results, as presented in Tab. VII, indicate that the CLA module plays a crucial role in enhancing performance. This can be attributed to the fact that, in most SNN designs, the number of simulation time steps is ... | We initially investigate the kernel size in the TCJA module. Intuitively, when the size of the kernel rises, the receptive field of the local attention mechanism will also expand, which may aid in enhancing the performance of TCJA-SNN. However, the experimental results in Fig. 10 overturn this conjecture. As the size o... |
To make the attention mechanism easier to understand, we finally visualize the output of the first TCJA module in TCJA-SNN working with the DVS128 Gesture dataset, which can be seen in Fig. 11. Changes in attention weights are primarily accumulated among channels, verifying further the substantial role performed by th... | C |
Kondrat’ev spaces are denoted in [33] by
\mathfrak{K}^{s}_{p,a}(\Omega), and similarly we have | V^{m+1}_{\lambda+1}(\Omega)\subset V^{m}_{\lambda}(\Omega), | V^{s}_{\lambda}(\Omega)=\mathfrak{K}_{2,s-\lambda}^{s}(\Omega) | H^{1}(\Omega)\subset V^{1}_{\epsilon}(\Omega)=V^{s+(1-s)}_{\mu-1+(1-\mu...}
V^{s+\epsilon}_{\lambda+\epsilon}(\Omega)\subset V^{s}_{\lambda}(\Omega), | B
After learning special parameters, each action candidate is executed at a given block to verify its executability. An action candidate may not be executable due to various reasons: (1) the function is disabled by the owner or admin; (2) internal function calls to other contracts are disabled by the owners or admins of... | Table 1 lists each action’s token flow, along with the number of data points collected initially (without counterexamples) and the total number of data points for polynomial and interpolation, respectively. The amounts of tokens transferred in/out for each action are calculated based on its contract’s member variables ... |
For all verified smart contracts, their ABIs are made public to facilitate users to call functions and engage with the contracts. An ABI typically comprises (public or external) function names, argument names/types, function state mutability, and return types. During the process of selecting action candidates, certain... | FlashSyn does not require prior knowledge of a vulnerable location or contract. Given a set of DeFi lego user interface contracts, action candidates and their special parameters such as strings are given by the users or automatically extracted from transaction history using FlashFind. FlashSyn utilizes these action can... | FlashFind automatically collects storage read/write information during the execution of these functions and infers the Read-After-Write (RAW) dependencies101010This RAW dependency information is also employed in FlashSyn’s initial data collection to expand the range of data points. between different action candidates. ... | D |
In this section we complement our theoretical results with an empirical investigation. Our goal is to show that our main idea of learning a KDE over a low dimensional space of tasks is effective also for state-of-the-art meta-RL algorithms, for which the linearity assumption of PCA clearly does not hold, and computing ... | VariBAD Dream:
Recall that our pipeline is to learn a KDE over the task parameters θ, and then train a policy on tasks from the estimated KDE. Unfortunately, in our meta-RL setting, we do not assume that we directly know the θ representation for each task. However, the VAE in VariBAD al... |
Modern deep RL algorithms are known to be highly sensitive to many hyperparameters [13], and meta RL algorithms are not different. To demonstrate our case clearly, we chose to build on the VariBAD algorithm of Zintgraf et al. [39], for which we could implement our approach by replacing just a single algorithmic compon... | To complement our theoretical results, we demonstrate the practical potential of our approach by incorporating it in the VariBAD meta-RL method of Zintgraf et al. [39]. In VariBAD, a variational autoencoder (VAE) is used to learn a low-dimensional latent representation of the task distribution, and a deep neural networ... | We note that regularization techniques inspired by the mixup method [38] have been applied to meta learning [37], and recently also to meta RL [21], with a goal of improving generalization to out-of-distribution tasks. In our experiments, we compared our approach with an approach inspired by [21], and found that for in... | B |
MRR=\frac{1}{|Qr|}\sum^{|Qr|}_{i=1}\frac{1}{rank_{i}} |
Test Set Preparation. We evaluated our proposed framework by testing the query performance of frequently used medical entities selected from our collected healthcare Q&A corpora. Only five divisions (community groups in MedHelp), including General-Health, Women-Health, Dermatology, Ear-Nose-Throat, and Neurology, were... | \text{recall}=\frac{|\text{relevant items}\cap\text{retrieved items}|}{|\text{relevant items}|} |
Results. Table III reports the MRR performance for each model, in which EN → ZH means that we used English queries to find Chinese synonym candidates (i.e., Chinese translations), and ZH → EN was the reverse query direction. All three models outperformed the random baseline, indicating that a... |
When the system places the relevant item in the first place, its MRR will be 1. In our case, high MRR means relevant translations are identified earlier, which prevents people from reading through many irrelevant translations. Conversely, if a system hardly identifies relevant items or places those items in the latt... | D
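For reference, the two metrics used in this row can be computed with a few lines of Python; this is an illustrative sketch, not the paper's evaluation script:

```python
def mean_reciprocal_rank(ranks):
    """ranks[i] is the 1-based position of the first relevant item for query i."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def recall(relevant, retrieved):
    relevant, retrieved = set(relevant), set(retrieved)
    return len(relevant & retrieved) / len(relevant)

# If the relevant translation appears at positions 1, 3 and 2 for three queries:
print(mean_reciprocal_rank([1, 3, 2]))              # (1 + 1/3 + 1/2) / 3 ≈ 0.611
print(recall({"心脏病", "头痛"}, ["心脏病", "感冒"]))   # 0.5
```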
\mathcal{S}^{j}=\frac{-1}{\log k}\sum_{k=1}^{n}p_{k}\log p_{k}=\frac{1}{\log k}\mathbb{E}[-\log p]... | One of the primary motivations behind our work is the recognition that model complexity can be an insufficient descriptor of human interpretability as shown in Fig. 1. In this case, if model complexity is used as a proxy for human interpretability, then both linear models shown in Fig. 1(a,b) will be assigned the sam... |
Lemma 01: Similar to the concept of self-information/surprisal in information theory, the negative logarithm of p_{k} from a fitted linear model can be defined as the self-interpretability penalty of that feature. Interpretation entropy is then comp...
Furthermore, we view the overall problem of AI model explanation from the lens of classical thermodynamics (Callen, 1985). It is known in thermodynamics that the equilibrium state of a system is characterized by a minimum in its Helmholtz Free Energy F(T,V):=U−TS...
This functional form of the interpretability penalty, i.e., interpretation entropy (𝒮), encourages low values for a sharply peaked distribution of fitted weights, indicating high human interpretability, and vice versa. Furthermore, if the features are independent, 𝒮 has two... | D
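A minimal sketch of the interpretation-entropy idea in this row: the normalized entropy of a linear model's absolute weights, low for a sharply peaked weight distribution and high for an even one. Normalizing by the log of the number of features is an assumption here and may differ from the paper's exact definition:

```python
import numpy as np

def interpretation_entropy(weights):
    """Normalized entropy of the absolute weights of a fitted linear model:
    near 0 when one feature dominates (sharply peaked, easy to interpret),
    near 1 when importance is spread evenly over all features."""
    w = np.abs(np.asarray(weights, dtype=float))
    p = w / w.sum()                        # fraction of total weight per feature
    p = p[p > 0]                           # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)) / np.log(len(weights)))

print(interpretation_entropy([5.0, 0.1, 0.1]))   # ≈ 0.17 (peaked → interpretable)
print(interpretation_entropy([1.0, 1.0, 1.0]))   # 1.0 (uniform → high entropy)
```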
One of the weaknesses of pure fuzzing approaches is their inability to find test-cases that explore program code beyond complex guards, as they essentially work by randomly mutating seeds and, therefore, struggle to find inputs that satisfy the guards. |
The combination of symbolic execution and BMC with fuzzing has been used recently to combine the strengths of both techniques. For example, VeriFuzz (Chowdhury et al., 2019) is a state-of-the-art tool we have previously compared to FuSeBMC. The authors describe it as a program-aware fuzz tester that combines feedback-... | Symbolic execution and BMC have shown competence in producing high-coverage test-cases and detecting errors in complex software. One of the more popular symbolic execution engines is KLEE (Cadar et al., 2008). KLEE is a tool that explores the search space path-by-path by utilizing LLVM compiler infrastructure and dynam... | Furthermore, Tracer (Jaffar et al., 2012) is a verification tool that uses constraint logic programming (CLP) and interpolation methods. Another tool based on symbolic execution is DART (Godefroid et al., 2005). It conducts software analysis and applies automatic random testing to find software bugs. BAP (Brumley et al... |
He et al. (He et al., 2019) proposed an approach to learning a fuzzer from symbolic execution. First, it phrases the learning task in the framework of imitation learning. Then, it employs symbolic execution to generate quality inputs with high coverage while a fuzzer learns using neural networks to be used to fuzz new... | B |
Here \mathcal{D}^{m} denotes the m-th order (standard) differential operator: \mathcal{D}^{m}\varphi(x)=\frac{d^{m}}{dx^{m}}\varphi(x)... | The fractional collocation methods in [31, 48] and the fractional Galerkin method in [36] achieve spectral convergence when the solution is smooth. However, in addition to giving rise to dense linear systems, these methods revert back to algebraic convergence if the solution has a singularity at the left endpoint, whic... | The convergence and stability analysis of the JFP method for general linear FIEs (69) is a topic for future research. This analysis could prove the bounds on condition numbers that we found in Fig. 7f and reveal the dependence of the rate of convergence on the parameter p in the JFP basis (see remark 3).
|
Our work is strongly influenced by the method proposed by Hale and Olver [25], in which direct sums of appropriately weighted Jacobi polynomials are used as basis functions. We shall refer to this method as the sum space method and we note that the basis functions are related to the “generalized Jacobi functions” of [... | We shall compare the effect of λ on the rate of convergence of the sum space method of [25] and the JFP method. We let the constant λ grow quadratically in (49) since this is also the case in the time-fractional heat/wave equation that we shall consider in Example 4.
| A |
Supplementary Figure S23: The maximum capability of the topological features measured by AUC-mROC in the supervised approach.
The imbalanced positive and negative samples are generated by sample2. We use 21 indexes from four families and measure the performance of the supervised prediction by these indexes in all 550 n... | p1subscript𝑝1p_{1}italic_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT is the percentage of samples in LPsuperscript𝐿𝑃L^{P}italic_L start_POSTSUPERSCRIPT italic_P end_POSTSUPERSCRIPT that hold the topological feature, and p2subscript𝑝2p_{2}italic_p start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT is the percentage of samples in... |
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. For entities that do not hold the feature at all, they should have the same and the lowest value. If ... |
In the main text, taking the common neighbor feature as an example, we show the theoretical finding can help us to determine the feature and index selection. To further validate the preceding analysis, we here conduct a second analysis using the path feature (Table S2). When using the index SPL to make unsupervised pr... | Currently, there is no unified rule on what value should be the lowest. In the main text, we consider the lowest value to be zero. Indeed, all 21 indexes in this study assign the value 0 to entities that do not hold the feature they are designed to quantify. There could be exceptions. For example, one can design an ind... | B |
We update the last two blocks of the MCUNet [47] model and only 1/4 of the weights for each layer to compare the accuracy of different channel selection methods (larger magnitude, smaller magnitude, and random). The results are quite similar (within 0.2% accuracy difference). Channel selection is not very important for... |
We compare the performance of our searched sparse update schemes with two baseline methods: fine-tuning only biases of the last k layers; fine-tuning weights and biases of the last k layers (including fine-tuning the full model, when k equals the total number of layers). For each configurati... | Figure 9: Sparse update can achieve higher transfer learning accuracy using 4.5–7.5× smaller extra memory (analytic) compared to updating the last k layers. For classifier-only update, the accuracy is low due to limited capacity. Bias-only update can achieve a higher accuracy but plateaus soon.
| We visualize the update schedule of the MCUNet [47] model searched under 100KB extra memory (analytic) in Figure 11 (lower subfigure (b), with 10 classes). It updates the biases of the last 22 layers, and sparsely updates the weights of 6 layers (some are sub-tensor update).
The initial 20 layers are frozen and run for... | The most straightforward way is to only update the classifier layer [15, 23, 26, 62], but the accuracy is low when the domain shift is large [12].
Later studies investigate other tuning methods including updating biases [12, 71], updating normalization layer parameters [53, 25], updating small parallel branches [12, 32... | D |
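A minimal PyTorch sketch of the last-k-layers baselines mentioned in this row (bias-only versus weights-and-biases fine-tuning); the helper name and the way layers are selected are illustrative assumptions, not the authors' released code:

```python
import torch.nn as nn

def configure_sparse_update(model: nn.Module, last_k: int, bias_only: bool = True):
    """Freeze everything, then unfreeze only the last `last_k` layers
    (biases only, or weights + biases), mimicking the baselines compared above."""
    layers = [m for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]
    for p in model.parameters():
        p.requires_grad = False
    for layer in layers[-last_k:]:
        if layer.bias is not None:
            layer.bias.requires_grad = True
        if not bias_only:
            layer.weight.requires_grad = True

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3),
                      nn.ReLU(), nn.Flatten(), nn.Linear(8 * 28 * 28, 10))
configure_sparse_update(model, last_k=2, bias_only=True)
print([n for n, p in model.named_parameters() if p.requires_grad])
# ['2.bias', '5.bias'] — only the biases of the last two layers will be updated
```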
\mathcal{A}=A+\Delta A,\ \mathcal{B}=B+\Delta B,\ w=\frac{\|\Delta b\|_{2}}{\|b\|_{2}}(\|A\|_{2}+\|B\|_{2})+\|\Delta A\|_{2}+\|\Delta B\|_{2}, ... | Clearly, x^{\ast} is a solution of the AVEs (1.1) if and
only if r(x^{\ast})=0. The function r(x) is called the natural
x^{\ast}-y^{\ast}=-(A-B\tilde{D})^{-1}\Delta b, ... | relative perturbation bounds of Theorem 3.2 under Theorems 2.5, 2.7
and 2.9, x^{\ast} and y^{\ast} in order denote the solutions of | perturbed. More specifically, when \Delta A, \Delta B and \Delta b are the perturbation terms of A, B and b in (1.1) and (1.2), respectively, how do
we characterize th... | C |
To see if the insights stemming from our asymptotic analysis are visible already for realistic problems sizes, we conduct a small empirical analysis as well. We defer the details to Section 9 and note here only that the different rates of void mutations (mutations that create an offspring equal to the parent) of the d... | In summary, our results on the LeadingOnes and Jump benchmarks show that several arguments and methods from the bit-string world can easily be extended to permutation search spaces, however, the combinatorially richer structure of the set of permutations also leads to new challenges and new research problem such as wha... | With this work, we aim at contributing to the foundations of a systematic and principled analysis of permutation-based evolutionary algorithms. Noting that the theory of evolutionary algorithms for bit-string representations has massively profited from the existence of widely accepted and well-understood benchmarks suc... |
We designed a simple and general way to transfer the classic benchmarks from pseudo-Boolean optimization into permutation-based benchmarks. Our hope and long-term goal is that the theory of permutation-based EAs can profit from these in a similar manner as the classic EA theory has profited from benchmarks for bit-str... |
Our analysis for jump functions, in contrast, reveals a subtle difference to the bit-string case. Similar to the bit-string case, also in the optimization of a permutation-based jump function, the most difficult step is to mutate a local optimum (the precise definition of a local optimum depends on a neighborhood str... | A
This paper mainly focuses on the case of two-grid geometric optimization, which constitutes the core problem of multilevel optimization. Our future work will examine the multilevel case in detail, the selection of an appropriate number of resolution levels and related problems, and possible refinements. The latter inc... | A key ingredient of the approach is to replace the feasible set [0,1]nsuperscript01𝑛[0,1]^{n}[ 0 , 1 ] start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT by the interior (0,1)nsuperscript01𝑛(0,1)^{n}( 0 , 1 ) start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT that is turned into a Riemannian manifold (ℬn,g)superscrip... | Such an adaption of the constraints is not needed in our proposed approach and therefore makes our approach more flexible. We notice however that the impact of the second level in the case of the projected gradient is stronger than in the geometric case.
We believe that this is due to the fact that our coarse model loo... | We thank Jan Plier (Heidelberg University) for simulation code that efficiently evaluates
our objective. We greatly benefited from fruitful discussions with Christoph Schnörr (Heidelberg University). To Oana Curtef (University of Würzburg) we are indebted for her observation concerning line search. SM, SP and MZ gratef... | that although ABPG performed worse in both scenarios, the iterations on the fine level
are more expensive when using our method due to the line search. On the coarser level, however, the higher cost of the line search is compensated by using a much smaller number of variables for computing search directions. | C
For general networks (no restriction on the weights) with threshold activation, it has been established, in the work of Baum [4], that even with 2 layers, for any labeled data set in \mathbb{R}^{d}, there exists an interpolating network. | While it was previously proven, in [12], that monotone networks with a threshold activation function can approximate any monotone function, the depth in the approximating network given by [12] scales linearly with the dimension. Our result is thus a significant improvement, whenever d>3, which only re...
Given the above result, it may seem that, similarly to the case of monotone networks with ReLU activations, the class of monotone networks with threshold activations is too limited, in the sense that it cannot approximate any monotone function with a constant depth (allowing the depth to scale with the dimension was c... | When using a network to approximate a monotone function, one might try to “force” the network to be monotone. A natural way to achieve this is to consider only networks where every parameter (other than the biases) is non-negative. We restrict our attention to non-negative prediction problems: The domain of the functi... | In the next lemma, we demonstrate another negative result, which shows an inherent loss of expressive power when transitioning to 2-layered monotone threshold networks, provided that the dimension is at least two. We remark that when the input is real-valued (i.e., one-dimensional), an interpolating monotone network... | D
(ii) Assumption (L2) together with [21, Lem. 3.10] ensure that both a_{\max}(\cdot) and 1/a_{\min}(\cdot) are integrable with respect to the Gaussian... | In recent years, QMC methods have been demonstrated to be very effective at approximating the response statistics of PDE problems such as (1). The success of modern QMC theory for uncertainty quantification can largely be attributed to the introduction of weighted spaces (in the sense of Sloan and Woźniakowski [54] and...
for some appropriately chosen norm \|\cdot\|. We focus on the cubature error, and discuss the spatial discretization error briefly in Section 5.3. We remark that the order of the last two error contributions—the finite element error and cubature error, respectively—can be flipped when the diffusion coefficient... | In practice, the problem needs to be discretized in several ways before it is possible to approximate this quantity numerically. The infinite-dimensional input random field is first replaced by a finite-dimensional one, meaning that we end up analyzing a dimensionally-truncated PDE solution u_{s}...
Since QMC methods can only be applied to finite-dimensional integrals and the analysis of the dimension truncation error is independent of the chosen spatial discretization scheme (cf., e.g., [18, 37]), we restrict our analysis to the finite-dimensional setting in what follows. | D |
A Lower Bound for Adaptive CL Algorithms. Our second technical contribution is to prove the lower bound for adaptive CL algorithms directly, instead of via a reduction from a lower bound for non-adaptive CL algorithms. The details can be found in Section 3.3. | In order for the induction to proceed, we need to make sure that if we publish all heavy arms, the probability of the best arm being published is small, since otherwise the problem would already be solved and the round elimination process cannot continue. This is easy to do with non-adaptive algorithms, because the who... |
To facilitate the analysis, we augment the algorithm after the first round of pulls by publishing a set of arms, as well as making some additional pulls on the remaining arms so as to massage the posterior mean distribution. By publishing arm i we mean revealing its local means μ_{i}^{A}... | Let us first recall the proof for non-adaptive algorithms in (TZZ19). After the first round of pulls, we set a threshold η and publish those arms that have been pulled more than η times in the first round; we call these arms the heavy arms. By publishing an arm we mean revealing its mean to... | To handle this challenge, we choose to explicitly analyze for each heavy arm its probability of being the best arm after the first round of pulls, and then show that the sum of these probabilities is small. This analysis is much more complicated than that for the non-adaptive algorithms. We try to illustrate the main i... | C
Note that the directions computed by Broyden updates, as one of the quasi-Newton methods, provably satisfy the Dennis-Moré condition stated in (4.17) (refer to [68, Thm. 5.11] and [66, Thm. VI.8]).
In order to achieve this, Broyden updates require the aforementioned regularity conditions on r_{\hat{h}}... | The maximum number of backtracks q_{\max} is also set equal to 5.
It should be mentioned that while directions in step 1.6 can be computed using any quasi-Newton method (e.g. Broyden updates as discussed in Section 4.4), L-BFGS yields superior nu... | It is important to mention that, although it is not formally established that L-BFGS satisfies the Dennis-Moré condition, L-BFGS performs better than Broyden updates in practical scenarios.
The theoretical examination of the Dennis-Moré condition using L-BFGS updates is considered as a future research direction. | Note that the directions computed by Broyden updates, as one of the quasi-Newton methods, provably satisfy the Dennis-Moré condition stated in (4.17) (refer to [68, Thm. 5.11] and [66, Thm. VI.8]).
In order to achieve this, Broyden updates require the aforementioned regularity conditions on r_{\hat{h}}... |
As depicted in Figure 3, the quasi-Newton updates in SPIRAL significantly enhance the convergence rate compared to (low-memory) Finito/MISO, which lacks such updates. Although proxSARAH exhibits faster convergence for this problem, it performs slower in the Lasso problem and is unable to handle non-Lipschitz different... | B |
\underline{\bf Y}=(\underline{\bf X}*\underline{\bf X}^{T})^{q}*\underline{\bf X} and this should be performed sequentially using the subspace... | This decomposition has found many applications including deep learning ([25, 26]), tensor completion ([27, 28]), numerical analysis ([29, 30, 31, 32]), and image reconstruction [33]. There are several randomized algorithms ([34, 35, 36]) to decompose a tensor into the t-SVD format but all of them need an estimation ...
Our algorithm is a generalization of the randomized fixed-precision algorithm developed in [37] which is a modification and improved version of the fixed-precision algorithm proposed in [38]. Similar to the idea presented in [37], we propose to gradually increase the size of the second and first modes of \underline{\bf Q}...
The sampling approach can also be used for low tubal rank approximation besides the random projection. Indeed, a randomized slice sampling algorithm was proposed in [35] in which horizontal and lateral slices are selected and a low tubal rank approximation is computed based on them, see Figure 3 for a graphical illust... | In this paper, we proposed a new randomized fixed-precision algorithm for fast computation of tensor SVD (t-SVD). Unlike the existing randomized low tubal rank approximation methods, the proposed algorithm finds an optimal tubal rank and the corresponding low tubal rank approximation automatically given a data tensor a... | C |
The control on the amount of supervision to be used when learning the proposed semi-supervised predictive clustering trees makes them safe to use: They do not degrade the performances with respect to their supervised counterparts, i.e., they either outperform them or have the same performance. |
The feature-weighted semi-supervised method (SSL-PCT-FR) and the non-feature-weighted one (SSL-PCT) have similar trends in predictive performance. However, on some datasets, there are notable differences. Namely, on Birds and Scene datasets, feature weighting is beneficial for the predictive performance of SSL-PCTs, a... | Considering all the datasets and the various percentages of labeled data, the SL-PCTD+TD+T{}^{\text{D+T}}start_FLOATSUPERSCRIPT D+T end_FLOATSUPERSCRIPT algorithms perform better than the SL-PCT in 36% of the cases, the same in 54% of the cases, and worse in 11% of the cases. We recall that the corresponding figures fo... |
Weighting descriptive attributes by their importance may help the predictive performance of semi-supervised predictive clustering in some cases, but the advantages are not great enough to advocate the use of feature weighting by default. Thus, by the principle of Occam’s razor, the simpler solution should be preferred... |
Methods for feature weighting can be used to identify the most informative features by determining an importance score (weight), where a higher score denotes more informative features, while a lower score denotes less informative ones. The effectiveness of feature weighting with the importance scores was shown to help... | C |
Table IV shows that DeepIPC achieves the best drivability at noon where it has the lowest intervention count and intervention time. Meanwhile, DeepIPC is comparable to Huang et al.’s model in the evening where it achieves the lowest intervention time but has a higher intervention count. Keep in mind that a model with a... | Table III shows that DeepIPC achieves the best performance by having the lowest total metric score in all conditions. Moreover, it achieves the fastest inference speed (lowest latency) as it has the lowest number of parameters, yielding a very low computational load compared to the other models. However, all models inc... |
Furthermore, in a comparison of drivability in the evening, DeepIPC and AIM-MT perform worse than Huang et al.’s model. In line with the offline result, the model that mainly takes RGB images failed to perceive the environment in the evening as the provided image is not as clearly visible as when driving at noon. On t... | Based on the experimental results, we disclosed several findings as follows. First, in line with our previous work [1], the BEV semantic feature is proven can improve the model performance in predicting waypoints and navigational controls. With a better perception, the model can leverage useful information which result... |
In the navigational controls estimation task, DeepIPC also has the best performance in line with the waypoints prediction result. The MLP agent can leverage useful features encoded from both RGB and BEV semantic maps. Therefore, the MLP agent can perform as well as the PID agent in estimating steering and throttle. Wi... | B |
However, the weak spot in all these algorithmic approaches is the requirement that a tree decomposition with bounded independence number is given with the input.
Theorem 1.1 fills this gap by constructing a decomposition of bounded independence number |
Theorem 1.1 appears to be a handy tool in the subarea of computational geometry concerning optimization problems on geometric graphs. Treewidth plays a fundamental role in the design of exact and approximation algorithms on planar graphs (and more generally, H𝐻Hitalic_H-minor-free graphs) [18, 3, 28]. | The main property of such graphs is that they enjoy the bounded local treewidth property. In other words, any planar graph of a small diameter has a small treewidth. A natural research direction is to extend such methods to intersection graphs of geometric objects [24, 34]. However, even for very “simple” objects like ... | classes of intersection graphs of connected subgraphs of graphs with bounded treewidth, studied by Bodlaender, Gustedt, and Telle [6], which in particular include classes of H𝐻Hitalic_H-graphs, that is, intersection graphs of connected subgraphs of a subdivision of a fixed multigraph H𝐻Hitalic_H, introduced in 1992 b... | on geometric graphs. It is interesting to note that algorithms on geometric graphs often require geometric representation of a graph. Sometimes, like for unit disk graphs, finding such a representation is a challenging computational task [29].
In contrast, Theorem 1.1 does not need the geometric properties of objects o... | A |
As a consequence, both FedBCD and VIMADMM demonstrate a markedly better DP-utility tradeoff compared to VAFL and Split Learning, as illustrated in Table VI. Furthermore, we explicitly investigate the influence of τ on the utility of VIMADMM under ϵ=1 in Table VIII. The resul... | Particularly, in VAFL, the server aggregates local embeddings using their linear combination with learnable aggregation weights, and subsequently uses these aggregated embeddings as input for the server model. Both Split Learning and FedBCD utilize concatenated local embeddings as server model input.
Notably, in VAFL an... | Consequently, even though the server leverages these perturbed embeddings to derive ADMM-related variables, the clients will re-calculate clean embeddings during the forward pass of Eq. III-B based on the received ADMM-related variables for local model updates. This updating mechanism potentially facilitates convergence und... | In FDML, the server averages local logits, and sends aggregated logits back to clients at each communication round.
Our empirical findings suggest that our ADMM-based methods outperform the aforementioned... | In both settings, the local model is updated based on SGD with federated backward propagation fu2022usenix : a) server first computes the gradients w.r.t the local output (either embeddings or logits) from each client separately and sends the gradients back to clients; b) each client calculates the gradients of local o... | B |
However, as pointed out in [85], there is a static attention problem in GAT, where the learned layers 𝐖 and 𝐚 are not separated by a non-linearity.
GAT might collapse into a single linear layer, computing a shared ranking of attention coefficients across all nodes, which makes it ch... |
To investigate the impact of varying dimensions on the model’s performance, we conduct another experiment using our method with different dimensions on the UCI dataset. Note that in order to minimize the need for hyperparameter tuning, we set d_{n}=d_{m}...
Upon establishing the connection between v_{s} and v_{d}, we compute the intermediate states of both central nodes and neighbor nodes by leveraging the interaction message 𝐦(t)...
Thus, we conduct another experiment by leveraging the stronger attention mechanism, GATv2, to investigate the impact of the attention mechanism on the model’s performance. | D |
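To illustrate the GAT-versus-GATv2 distinction referenced in this row, here is a NumPy sketch of the two per-edge attention scores; in GAT the learnable vector a is applied right after W with no non-linearity in between, while GATv2 inserts the LeakyReLU before the dot product (shapes and initialization are illustrative):

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gat_score(h_i, h_j, W, a):
    """Original GAT: a and W act consecutively with no non-linearity in between,
    which causes the 'static attention' issue discussed above."""
    return leaky_relu(a @ np.concatenate([W @ h_i, W @ h_j]))

def gatv2_score(h_i, h_j, W, a):
    """GATv2: the non-linearity comes before the dot product with a,
    so the ranking of neighbors can depend on the query node."""
    return a @ leaky_relu(W @ np.concatenate([h_i, h_j]))

rng = np.random.default_rng(0)
d, d_out = 4, 3
h_i, h_j = rng.normal(size=d), rng.normal(size=d)
W1, a1 = rng.normal(size=(d_out, d)), rng.normal(size=2 * d_out)
W2, a2 = rng.normal(size=(d_out, 2 * d)), rng.normal(size=d_out)
print(gat_score(h_i, h_j, W1, a1), gatv2_score(h_i, h_j, W2, a2))
```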
Generating cloth-changing images with dilation can make the dataset distribution similar to CASIA-BN.
The ratio between 𝒳_{v} and 𝒳_{c} also ... |
The sequence numbers of each cloth condition in the original datasets and in our benchmarks used for training can be seen in Figure 4. It can be seen that our benchmarks have a relatively smaller dataset volume compared with previous datasets, but the accuracy can be improved with further collected data. | With the four experiments, we can demonstrate that our improvement does not lie in the increased module parameters but in the two-stage module and the design of the triplets, which improves the CL condition (GaitSet: +3.3%, GaitGL: +1.2%).
It’s important to recognize that our approach is not solely focused on a... | ProbFace [45] further re-designs the triplet loss, making it aware of uncertainty, which can be used in our gait recognition task. The triplet loss can help our network focus more on the data from the cross-view sub-dataset, which is the normal walking condition in our task.
| PFE [34] first proposes to map each face image as a Gaussian distribution, regarding the sequence feature as the mean, and adding another branch to learn the confidence for the sequence feature.
The mean of the distribution can be regarded as the most likely feature of the sequence mapped in the latent space, and the v... | A |
Replace target parameter $\theta^{-}\longleftarrow\theta$ after every $L$ iterations. Update average weight $\lambda_{t+1}\longleftarrow\lambda_{t}*d$... |
In this subsection we show in Theorem 1 that in the limit DPAV Q-learning converges to the optimal policy. The proof (footnote 4: Lemma 1 was also used to prove the convergence of SARSA (Rummery and Niranjan, 1994) and Double Q-learning (Van Hasselt et al., 2016)) of this result using Lemma 1 (Singh et al., 2000) is in the A... | Reinforcement learning (RL) algorithms, specifically Q-learning (Watkins and Dayan, 1992) based algorithms, have become a mainstream method for training the dialogue policy module (Peng et al., 2018; Zhang et al., 2020b). For each step, the policy agent updates its action value (footnote 1: This value is the expected return for ...) | Overestimation bias is more problematic in the deep Q-learning network (DQN) algorithm (Fan et al., 2020) due to the function approximation errors of DRL. Polishing estimation tricks of a single model and using ensemble models are two mainstream solutions. Double Q-learning is subsequently adapted to a neural network a... |
The value-based algorithm Q-learning, a common unit of the dialogue policy module, suffers the overestimation bias (Thrun and Schwartz, 1993; Hasselt, 2010). Prior studies addressed the problem in multiple ways, including (1) bias compensation with additive pseudo costs and (2) a variety of estimators. Bias-corrected ... | A |
In Table 2, it is shown that applying all techniques (M+I+N) obtains the best Top-1 accuracy results of our framework, but our model slightly decreases its performance regarding Top-5 predictions. We claim that by dealing with imbalance of the Ego4D dataset through Focal Loss, the model takes more risk by attempting t... |
In Table 2, it is shown that applying all techniques (M+I+N) obtains the best Top-1 accuracy results of our framework, but our model slightly decreases its performance regarding Top-5 predictions. We claim that by dealing with imbalance of the Ego4D dataset through Focal Loss, the model takes more risk by attempting t... | Our model is able to recognize verb-noun pairs with similar performance as the baseline, as reported in Table 2. However, our model is trained on pre-extracted features $\mathbf{F}$ for each clip, and not the image-based video clip $\mathbf{V}$. Due to the lower dimensionality of these features ($\mathbf{F}\in\mathbb{R}^{T\times\ldots}$... | Table 1: Edit Distance (ED) comparison of long-term human action anticipation in Ego4D dataset. Scores are obtained directly from their reported results. Here, bold fonts denote the best result and underlines denote the second-best result among all approaches.
| Table 2: Performance of H3M with different training strategies, compared to the baseline using the accuracy metric. M: multitask surrogate loss (sharing weights). I: focal loss to solve class imbalance. N: noise injection. Here, bold fonts denote the best result and underlines denote the second best result among all ap... | D |
The performance of all anomaly detectors downgrades with the increase of anomaly contamination. COUTA shows better robustness compared to its contenders, especially on datasets with a large contamination rate. It owes to the novel one-class classification loss function, which successfully masks these noisy data via unc... |
This experiment investigates the scalability of COUTA compared to its competing methods. Time efficiency w.r.t. both time series length $T$ and dimensionality $D$ is recorded. As for the scalability test w.r.t. dimensionality, a group of seven time-series datasets with a fixed length (i.e., 2,000) a... | Fig. 9 presents the execution time of COUTA and its ten competing state-of-the-art methods on time series datasets with various sizes. Note that this experiment excludes three general anomaly detectors (i.e., OCSVM, ECOD, and GOAD) that are not originally designed for time series data. COUTA has good scalability compar... | Extensive experiments show that: (1) COUTA substantially outperforms 15 state-of-the-art competing methods on 10 real-world datasets and achieves over 11% improvement on average; and (2) COUTA also has several desired properties including generalization capability in identifying different anomaly types, favorable ro... |
This experiment is to quantitatively measure the interference of anomaly contamination to time series anomaly detectors, that is, we test the robustness of each anomaly detector w.r.t. different anomaly contamination ratios in the training set. Due to the continuity of time series data, we cannot directly remove or in... | A |
In this section, we take a deeper look into the specifics of evaluation for D2T systems. Traditionally, the evaluation of D2T systems is compartmentalized into either intrinsic or extrinsic measures (Belz and Reiter, 2006). The former either uses automated metrics to compare the generated narrative to a reference text... |
With the abundance of paired datasets where each data instance is accompanied by a human generated reference text, often referred to as the gold standard, the NLG community has sought after quick, cheap, and effective metrics for evaluation of D2T systems. The adoption of automated metrics such as BLEU, NIST, and ROUG... |
The last half-decade has seen the ML community place significant emphasis on the reproducibility of academic results (Pineau et al., 2021; Sinha et al., 2020). However, the focus of these reproducibility efforts are placed on automated metrics (§6.1, §6.2.1, §6.2.2) with the reproducibility of human evaluation results... |
In this section, we take a deeper look into the specifics of evaluation for D2T systems. Traditionally, the evaluation of D2T systems is compartmentalized into either intrinsic or extrinsic measures (Belz and Reiter, 2006). The former either uses automated metrics to compare the generated narrative to a reference text... | From a review of 284 correlations reported in 34 papers, Reiter (Reiter, 2018) notes that the correlations between BLEU and human evaluations are inconsistent - even in similar tasks. While automated metrics can aid in the diagnostic evaluation of MT systems, the author showcases the weakness of BLEU in the evaluation ... | A |
NICEST was evaluated on the widely recognized scene graph generation (SGG) benchmarks, including VG [19], GQA [20], and the newly introduced VG-OOD dataset. Within NICEST, NICE plays a crucial role in enhancing the quality of annotations in original datasets, while NIST focuses on absorbing unbiased knowledge from two ... | To be more specific, NICE consists of three steps: 1) negative noisy sample detection (Neg-NSD): We reformulate the negative NSD as an out-of-distribution (OOD) detection problem, i.e., regarding all the positive samples as in-distribution (ID) training data, and all un-annotated negative samples as OOD test data. In t... |
Highlights. In the original NSC of NICE-v1 [33], we directly assign hard labels (i.e., raw predicate labels) with full probability to noisy samples. If the newly assigned labels were unreasonable, new noise might be introduced. In the new NSC of this paper, we assign soft labels to noisy samples, which takes into acco... | In previous NICE-v1 (footnote 2: For the sake of distinction, we use NICE-v1 to denote NICE in [33].), we merely assign a hard label to noisy samples in NSC, which may introduce new noise. Thus, in this paper, we propose a more robust method to generate soft labels based on the weights in wKNN.
|
Given all the detected noisy positive samples from Pos-NSD, the NSC module aims to correct and assign more robust soft labels to these noisy positive predicate labels. There are two non-zero (positive) categories in the newly assigned soft label $r^{s}$... | C
The idea of colluding with other rational miners and forming a coalition seems to be an applaudable way to mine selfishly. The overall mining power of the coalition may be enough to launch selfish mining. In practice, it is challenging to build trust among rational miners. Rational miners cannot know whether it is prof... | In PSM, an attacker follows the partial block sharing strategy and shares the partial block information with rational miners. The partial block has some data covered, e.g., nonce and part of arbitrary bytes in the coinbase transaction. Miners can mine after it to get a new block. The hidden data can be recovered by oth... |
In this paper, we propose a new block-sharing strategy in mining, called partial block sharing to solve the above challenges. Different from previous new block hiding or revealing, partial block sharing will only reveal part of a block (named partial block) while some fields are hiding, e.g., nonce and part of arbitra... | We develop a new and practical block-sharing strategy called partial block sharing. Based on it, we propose a new mining attack protocol, PSM. PSM exhibits a new paradigm of colluding with other miners to gain mining advantages. By sharing the partial block data and attracting rational miners to work on the attacker’s ... | Based on the partial block sharing strategy, we propose a new and practical mining attack called Partial Selfish Mining (PSM). As shown in Figure 1(b), PSM starts as selfish mining to withhold a newly mined block. Then, the attacker can launch the partial block-sharing strategy and finally releases the secret by broadc... | B |
On the local quadratic Taylor approximation, “frozen Adam” does evolve as a linear recurrence, and is unstable whenever the maximum eigenvalue of the preconditioned Hessian (the preconditioned sharpness) exceeds the stability threshold of EMA-style heavy ball momentum, which is $\frac{2+2\beta_{1}}{\eta(1-\beta_{1})}$... |
However, even though adaptive gradient methods train at the “Adaptive Edge of Stability” (AEoS), their behavior in this regime differs in a significant way from that of non-adaptive methods in the non-adaptive EoS regime: whereas non-adaptive optimizers in the non-adaptive EoS regime are blocked from accessing high-cu... | We will see that this behavior sometimes differs substantially from that of non-adaptive optimizers.
In particular, whereas non-adaptive optimizers at the EoS are blocked from entering high-curvature regions of parameter space, adaptive gradient methods at the AEoS can and do enter high-curvature regions via their abil... | This is especially liable to occur if the step size is small or the preconditioner decay factor (e.g. Adam’s $\beta_{2}$) is small.
Thus, adaptive gradient methods sometimes lack the implicit inductive bias [35] that blocks non-adaptive methods from converging... |
Our paper gives evidence for a different implicit bias: adaptive gradient methods are liable to find higher-curvature solutions than non-adaptive algorithms, since whereas non-adaptive algorithms are blocked from high-curvature regions, adaptive optimizers can evade this restriction. | A |
where $M$ is row-stochastic. The Markov transition matrix then models the unbiased Markov chain where each entry is the probability of the jump from $\mathbf{x}_{k}$ to $\mathbf{x}_{l}$... |
where $\rho(\mathbf{x})$ is a kernel density estimator and $\alpha\in[0,1]$ is the anisotropic diffusion parameter, which is crucial to properly include information about the data density and importance [37]. Based on the anisotropic diffusion parameter, diffusion ... | To account for the manifold density, we need to employ a density-preserving kernel. In contrast to Laplacian eigenmaps that are appropriate for data sampled uniformly [29, 35], diffusion map allows working with data sampled from any underlying probability distribution. Specifically, let us consider the pairwise transitio... |
In this work, we consider the problem of using manifold learning methods on data from enhanced sampling simulations. We provide a unified framework for manifold learning to construct CVs using biased simulation data, which we call reweighted manifold learning. To this aim, we derive a pairwise reweighting procedure in... |
We start by considering the case of diffusion maps on which we base the derivation of the reweighting factor $r(\mathbf{x}_{k},\mathbf{x}_{l})$ (Sec. 2.5). By r... | B
Such a map is useful as it could ease knowledge exchange between modellers and stakeholders in these models, such as interested clinicians.
In the specification of the AHA regions, the septum corresponds to regions 3, 4, 9, 10, & 15. Here, the septum corresponds to regions 3, 9, 14, & 17. The alignment between the subd... | Briefly, using spatial data describing the coronary arterial vasculature from a single porcine heart obtained from fluorescence cryomicrotome images (Goyal et al., 2012) and image processing techniques, we have developed algorithms to organise and search the data in order to build subtrees from the data. These subtrees... | The main drawback of the methods discussed here is the availability of data to which they can be applied. Such data comes from processed images. We have treated both the acquisition and initial processing of images to form a skeletonistaion as a black-box. In order to move towards patient specific coronary flow model, ... | The algorithms developed for the processing of the vascular data would benefit from optimisation, although they are currently sufficiently fast. It is unknown whether these procedures work well on data sets for which they were not created; this is an active area of investigation. The radius data are noisy which negativ... | In a mean radius condition, each segment is considered individually, and if the mean radius of nodes in that segment exceed some threshold then the segment is eligible for inclusion in the final tree. This method may be sensitive to spurious radius values that skew the distribution for a vessel. Comparing, for example,... | C |
In our model, we leverage the conclusions from the aforementioned works and cast the graph-size-generalization problem into a transferred graph formulation,
which is considered to be learnable by spectral graph-convolution methods. On the other hand, extrapolation towards out-of-distribution data is a fundamental chal... | Conventional deep learning (DL) methods have focused in predictive user behavior modeling on service demand and data usage. In this setting, a line of work focuses on leveraging deep neural networks for adaptive optimisation of routing schemes, as by [4], which typically require similar data distribution in the test en... | Note, that the problem of estimating the network state is EXPTIME-complete, as shown by [2], and has been extensively studied in the literature [3], where most commonly conventional deep-learning methods are employed in the context of reinforcement learning (RL) based network orchestration as well as deep-learning-base... | Finally, [5] present their work on topology size generalization for latency estimation of Origin-Destination (OD) pairs, the same problem we focus here. Through improving RouteNet by including queue occupancy state per link, beside the path and link nodes, they formulate a tripartite message passing scheme, which intro... | More recent related works focus on learning the tuned graph representation of routing networks. [18] propose RouteNet, which is regarded as a first work that introduces message passing in deep-learning-based network modeling, in a bipartite graph formulation.
[19] and [20] extend RouteNet to be adaptive to heterogeneou... | D |
$\leq 1/K\cdot\sum_{k=1}^{K}\big[V_{1}^{\mathrm{br}(\nu^{k}),\nu^{k}}(s_{1})-V_{1}^{\pi^{k},\mathrm{br}(\pi^{k})}(s_{1})\big]\leq\varepsilon$ ... | In contrast, as a special case of the low-rank model, linear MDPs have a similar form of structures but with an extra assumption that the linear representation is known a priori (Du et al., 2019b; Yang & Wang, 2019; Jin et al., 2020; Xie et al., 2020; Ayoub et al., 2020; Cai et al., 2020; Yang & Wang, 2020; Chen et al.... | This section provides the analysis of the transition kernel recovery via contrastive learning and the proofs of the main results for single-agent MDPs and zero-sum MGs. Our theoretical analysis integrates contrastive self-supervised learning for transition recovery and low-rank MDPs in a unified manner. Part of our ana... |
To focus our analysis on the contrastive learning for the transition dynamics, we only consider the setting where the reward function $r_{h}(\cdot,\cdot)$ is known. One might further modify the proposed algorithm to the unknown reward... | In addition to theoretical guarantees, we also provide numerical experiments to empirically demonstrate the efficacy of our algorithm.
Furthermore, we extend the algorithm and theory to the zero-sum MG under the low-rank setting, a multi-agent extension of MDPs to a competitive environment. | B |
Importance sampling for MV-SDEs has been studied in (dos Reis et al., 2023; Ben Rached et al., 2023). The decoupling approach developed by (dos Reis et al., 2023) defines a modified, decoupled MV-SDE with coefficients computed using a realization of the MV-SDE law estimated beforehand using a stochastic particle syste... |
We extend the DLMC estimator introduced by (Ben Rached et al., 2023) to the multilevel setting and propose a multilevel DLMC estimator for the decoupling approach (dos Reis et al., 2023) for MV-SDEs. We include a detailed discussion on the bias and variance of the proposed estimator and devise a complexity theorem, di... |
The remainder of this paper is structured as follows. In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) a... | We combine importance sampling with MLMC to reduce the relative estimator variance in the rare-event regime by extending the results in (Ben Rached et al., 2023) to the multilevel setting. Combining importance sampling with MLMC has been previously explored in other contexts, including standard SDEs (Ben Alaya et al., ... |
Importance sampling for MV-SDEs has been studied in (dos Reis et al., 2023; Ben Rached et al., 2023). The decoupling approach developed by (dos Reis et al., 2023) defines a modified, decoupled MV-SDE with coefficients computed using a realization of the MV-SDE law estimated beforehand using a stochastic particle syste... | C |
Once again, the selection of scenarios is intended to underscore the proposal’s robustness under unfavorable conditions, while also incorporating conservative assumptions. These scenarios serve to demonstrate that, from a GN&C perspective, an autonomous robotic spacecraft does not necessarily require extensive navigati... |
In the case of Eros, the results of the Monte Carlo analysis, as illustrated in Figure 6, align with the discussion in Section 5.2. Once again, the spacecraft executes its operation successfully. It is effectively inserted into orbit and completes the transfer smoothly, as depicted in Figure 6a. The histogram in Figur... |
Due to the relatively short simulation time in the Monte Carlo runs, readers may question whether problematic behavior emerges after the initial 3 days. To address this, we extended the simulation to 10 days, considering a smaller set of 100 samples, to examine any signs of major issues beyond the 3-day mark. Figure 7... | For example, Fig. 4 illustrates a more realistic operational profile where the spacecraft is inserted into a 100 km orbit (NEAR-Shoemaker was inserted into a comparable orbit a month after rendezvous). In this case, the filter exhibits excellent performance, showcasing significant improvements in estimates. The errors ... |
To illustrate this point, we conducted a Monte Carlo run with 100 samples for a more realistic scenario, the same depicted in Fig. 4. The spacecraft is inserted into a 100 km circular orbit around Eros. The results for this scenario are shown in Figure 9. As evident, the filter performance is excellent, and the estima... | D |
It remains an open question whether $\tilde{G}$ is strictly contractive with respect to the Hilbert projective norm. This would allow us to adapt our non-asymptotic convergence analysis to the map $\tilde{G}$ as well.
We will present a detailed non-asymptotic convergence analysis of a fixed-point approach based on iterating (1.4) for simple input data (see Def. 10). The key insight of our work is to analyze the nonlinear map $G$ through the lens of nonlinear Perron–Frobenius theory, which views $\mathbb{P}_{d}$... |
In this paper, we introduced a novel fixed-point approach for computing Brascamp–Lieb constants, which is grounded in nonlinear Perron–Frobenius theory. In contrast to much of the prior literature, which has analyzed the problem through a Riemannian lens, our approach utilizes a Finslerian geometry on the manifold of ... | The main contribution of this work is a novel geometric lens on Brascamp-Lieb constants, which relies on a Finsler geometry on the manifold of positive definite matrices, induced by the Thompson part metric.
Specifically, our contributions are as follows: | Our analysis leverages the Thompson part metric on the manifold of positive definite matrices to model convergence of the fixed-point iteration. To our knowledge, this is the first work that analyzes the computation of Brascamp–Lieb constants via Thompson geometry.
We note that a similar Finslerian lens can be employed... | B |
However, it is crucial to distinguish between the merging process and the act of gluing a boundary to a cycle. While merging signifies the disappearance of a topological feature upon simplex removal, gluing a boundary to a cycle essentially fills in the hole or closes the gap within the cycle. Although the resulting s... | The dynamic nature of merging classes is underscored by the observed phenomenon of information transfer and the need to resolve boundary gluing. These observations prompt further inquiry into the evolution of these classes under the filtration process.
| In this study, we propose novel centrality measures that leverage the persistence of homology classes and their merge history along the filtration. Integral to this is the development of an algorithm that captures the merge history of homology classes. These homology-based centrality measures produce, for all cycle gen... |
When two homology classes merge due to simplex removal, the elder rule comes into play. This rule selects the generator formed at the lower threshold as the natural representative of the merged class. A key observation is made: even when a different generator survives a merge, the persistence information can be "trans... |
However, it is crucial to distinguish between the merging process and the act of gluing a boundary to a cycle. While merging signifies the disappearance of a topological feature upon simplex removal, gluing a boundary to a cycle essentially fills in the hole or closes the gap within the cycle. Although the resulting s... | A |
In addition, StyleMatch is developed to address the semi-supervised domain generalization task. Compared with it, our method has better experimental results on all settings and datasets, except for the “5 labels per class” case on PACS. For example, on miniDomainNet, our method improves over StyleMatch by +3.66%... |
We show some examples from PACS and Office-home in Fig. 3. As seen, there is an obvious difference among different domains. Besides, we also visualize the features of four categories on PACS by t-SNE (van2008visualizing, ), as illustrated in Fig. 4. In this figure, different colors are different domains. We observe th... |
Table 1. Comparison between our method and different semi-supervised (DG) methods under different numbers of labeled samples on PACS and Office-Home. Note that “P”, “A”, “C” and “S” denote different domains on PACS. “Avg” is the average result of all domains. Bold indicates the best result. |
Figure 1. Comparison between the typical semi-supervised learning (SSL) and the semi-supervised domain generalization (SSDG). Note that different colors denote different domains. In the SSDG setting, there are multiple training domains with different data distributions when compared with SSL. | In this paper, our goal is to address the semi-supervised domain generalization (SSDG) task. In this task, a few labeled samples on each domain are available, while most of the samples lack the label information, thus a key task is how to generate the accurate pseudo-label. Different from the conventional semi-supervis... | B |
Similarly, TinyTL introduces extra residual blocks to MobileNet [23, 6] for memory-efficient on-device learning.
Guo et al. [17] propose re-composing a ResNet with depth-wise and point-wise convolutions, and re-training only the depth-wise part during fine-tuning. | To explore the effective adapting schemes of using Conv-Adapter to tune a ConvNet, we study it mainly from two perspectives, similar to
[18], 1) the location of adaptation in pre-trained ConvNets – which intermediate representation $\mathbf{h}$ to adapt, and 2) the insertion form of Conv-Adapter – how to set th... |
Extensive experiments demonstrate the effectiveness and efficiency of Conv-Adapter. It achieves comparable or even better performance to full fine-tuning with only around 5% backbone parameters. Conv-Adapter also well generalizes to detection and segmentation tasks that require dense predictions. | It needs a more sophisticated design on not only the Conv-Adapter architecture but also the adaptation location [59], and we empirically find that stage-wise adaptation produces inferior performance and requires much more parameters.
Conv-Adapter is flexible to be inserted into every residual block of the ConvNet backb... | RepNet [59] exploits a dedicated designed side network to re-program the intermediate features of pre-trained ConvNets.
Conv-Adapter differs from previous methods with a design that considers parameter efficiency and transferability from the internal architectures and adapting schemes. Besides, the proposed Conv-Adapte... | D |
Our method detects changepoints at 1.96s and 4.02s, offering precise parameter estimates for each segment: $\hat{\lambda}(t)=0.499$ for $t\in[0,1.96)$, $0.010$ for $t\in[1.97,4.01)$... | The left panel of Figure 2 presents the detected changepoints over the temporal axis. In comparison, the PINNs (footnote 1: Standard PINNs assume a constant coefficient over time, which performs much worse for changepoint scenarios. To ensure a fair comparison, we modify PINNs to allow for a time-dependent coefficient.) This highl... |
In this work, we present an innovative methodology that combines changepoints detection with PINNs to address changes and instabilities in the dynamics of PDEs. This approach marks the first exploration into simultaneously detecting changepoints and estimating unknown parameters within PDE dynamics based on observed d... |
We introduce a novel method for identifying changepoints in dynamic systems governed by general PDEs dynamics. Our approach works with piecewise-constant time-changing parameters and leverages total variation regularization on the first-order differences of parameters. We also propose an online learning strategy that ... | The standard PINNs model assumes that the parameters of PDEs are constant values across the entire time domain. In order to accommodate Definition 2.1, we allow for the changes in the λ(t)𝜆𝑡\lambda(t)italic_λ ( italic_t ) and introduce additional regularization term in a form of total variation penalty on the first ... | C |
For example, $babcd$ and $abcbcd$ are not primitive, because they are generated by $abcd$; but
$abcbda$ ... | (go to the $b$, then turn back to the $a$, then turn again and go to the end). For the converse, suppose that $w$ is not primitive,
let $u$ be a generator of $w$ with $|u|<|w|$, and let | Theorem 1 is relatively surprising: let $u$ and $v$ be primitive words. Now suppose we go for a walk on $u$ and, independently, go for a walk on $v$; recalling the stipulation that the two walks visit every position in the words they apply to, the theorem says that,
provided only tha... | for which such $u$, $v$, $f$ and $g$ exist. Observe that, since $u$ and $v$ are primitive, they feature
no immediately repeated letter. So suppose $w$ does, i.e. it is of the form $w=xaay$ for some ... | change our position in $u$ by more than one letter at a time, and we visit every position of $u$ at least once.
If $w=u^{f}$ for $f$ a walk, we say that $u$ generates $w$. | B
Herein, let us first provide some perspectives in Section 2 on how one formulates inverse problems, and how the different philosophical approaches inform our approach. We then address the goals mentioned in the previous paragraph by first considering a finite-dimensional, linear model problem in Section 3 that we use ... | The methods in the middle of the spectrum mentioned above and in the figure use deterministic methods to make the solution of statistical formulations more efficient.
Yet, relatively little has been done to bring information from the (expensive) Bayesian perspective to selectively enrich the deterministic solution, at ... |
Historically, parameter estimation problems were usually formulated as seeking that set of parameters $q^{\ast}$ for which model predictions fit measurements best. This approach is often called the deterministic approach to parameter estimation. Oftenti... | Parameter estimation problems – of which inverse problems are a particular kind – seek information about parameters in the model using measurements (“observations”) of the system’s state. Depending on what kind of, and how much, information one seeks, parameter estimation problems can be formulated in different ways. F... |
We qualify descriptive examples as those that anyone, without specialized knowledge, say a child, could learn to recognize. An illumination might contain ”chauve” (bald), or an ”épée” (sword) or a ”barbe” (beard). In the case that the label is quite close to contemporary image hierarchies, it would be a somewhat straig... |
In our exploration of the legacy labels of the Mandragore and Initiale databases, we realized limitations for their use in working with computer vision for tasks such as object detection or image segmentation. The original labels had been designed for search and retrieval of images for forms of close viewing, rather t... | We qualify descriptive examples as those that anyone, without specialized knowledge, say a child, could learn to recognize. An illumination might contain ”chauve” (bald), or an ”épée” (sword) or a ”barbe” (beard). In the case that the label is quite close to contemporary image hierarchies, it would be a somewhat straig... |
Categorizing these labels into different sorts is an extension of our visual thinking process discussed in Section 3. Our assumption in doing so is that different sets of labels will facilitate different distant viewing goals. One example of this might be a distant thematics of manuscript illumination in which we look... | We use a corpus of images from thirteenth- and fourteenth-century Latin bibles, exhibiting a certain degree of uniformity in their layout; among other details, one finds decorated initials at the beginning of chapters or prologues, intercolumn decoration and decorative outgrowths. These commonalities suggest that predi... | C |
Table XII: Comparisons of the number of trainable parameters and training efficiency. Here, we use the STS-B as the target task, and train the BERT-large for 10 epochs. As for PoT methods, we additionally train on the MNLI (source) task for 1 epoch (the main reason for training latency in PoT). All experiments are done... | Some readers may point out that the different metrics do not make a big difference in Table VI, and show concerns about the effectiveness of our proposed metric. Here, we discuss the contributions of our metric in detail. Firstly, we state that the main contribution of our metric is to effectively retrieve similar sour... | Some readers may point out that the different metrics do not make a big difference in Table VI, and show concerns about the effectiveness of our proposed metric. Here, we discuss the contributions of our metric in detail. Firstly, we state that the main contribution of our metric is to effectively retrieve similar sour... | As shown in Equation 4, our metric is used to control the knowledge transfer for each source-target pair. To analyze its effectiveness, we replace our metric with “constant factor (one) ”, and other prior metrics, i.e., “Eavg” and “ON”. Results of different methods are presented in Table VI.
Compared with the vanilla P... | Some readers may point out that the different metrics do not make a big difference in Table VI, and show concerns about the effectiveness of our proposed metric. Here, we discuss the contributions of our metric in detail. Firstly, we state that the main contribution of our metric is to effectively retrieve similar sour... | A |
We use several quantitative datasets collected regularly and separately, spanning 1 January to 30 June 2022 (footnote 1: No further substantial changes have been observed beyond the six-month mark. We thus decided to maintain that timeframe, which is sufficient to deliver our narratives.); timestamps are normalised to UTC. To de... | Further steps are performed to enhance data reliability. First, many on-hold submissions are valid but were never verified; we perform a semi-automatic validation using the messages left on defaced pages (see Appendix §F). Second, submissions may be reported to multiple archives to broaden their visibility. We de-dupli... |
Defacement Motives. The conflict caught the attention of existing defacers, who performed many attacks against other countries but not Russia and Ukraine until just after the invasion, suggesting their choice of targets was influenced. We also found some ‘new faces’ e.g., the second most active defacer targeting Russi... | Web Defacement Attacks.
We fully scrape the most popular active defacement archives during the period; see Table 1. We started with Zone-H, the largest and most popular one (since March 2002) providing cybersecurity news and self-reported defacements along with hacking content (Kurzmeier, 2020). We then took out the mo... | One type of attack linked with the low-level cybercrime actors is website defacement (Romagna and van den Hout, 2017), which accounted for around 20% of online attacks in 2014 (Hackmageddon, 2015) and is often organised into discrete campaigns (Maggi et al., 2018). Attackers (or defacers) gain unauthorised access using... | C |
The purpose of this experiment is to evaluate the generalization of the ranking strategies to different blackbox settings. In this experiment, we explore the transferability of the ranked samples for every combination of surrogate and victim model architecture and generate adversarial examples on $f^{\prime}$... |
The purpose of this experiment is to evaluate the generalization of the ranking strategies to different blackbox settings. In this experiment, we explore the transferability of the ranked samples for every combination of surrogate and victim model architecture and generate adversarial examples on $f^{\prime}$... |
The objective of this experiment is to see if the ranking strategies generalize to different attacks and whether there are some attacks that transfer better than others. For each dataset and combination of architectures we evaluated the transferability at $k$ of the strategies where $k$ was set to 5%... | Figure 4. E1.1 Results - The performance of ranking strategies for the CIFAR10 (top) and ImageNet (bottom) datasets. Each cell plots the transferability at $k$ success rate for adversarial examples when ranked using different strategies across varied surrogate and victim model architectures. Columns represent ... | We investigate the following ranking tasks: (Sample Ranking) where the attacker must select the top $k$ samples from $\mathscr{D}$ to use in an attack on $f$, and (Perturbation Ranking) where the attacker must select the best perturbation for a specific sample $x$ in an attack on $f$... | B
For encoded states that obey an area-law scaling, i.e. $S\left(\rho_{k}(\boldsymbol{x})\|\frac{\mathbb{1}_{k}}{2}\right), S\left(\rho_{k}(\boldsymbol{x^{\prime}})\|\frac{\mathbb{1}_{k}}{2}\right)\in\mathcal{O}(1)$ ... |
Entanglement-induced concentration can also occur in cases where the embedding is not highly expressive but still leads to states satisfying a volume-law. Here, Theorem 2 implies that the kernel values of the projected quantum kernels will exponentially concentrate. |
Theorem 2 upper bounds the deviation of kernel values from a fixed value of 1 with the relative entropy between the reduced states of the encoded data and a maximally mixed state of a single qubit. In addition, unlike the results in the previous sections, the exponential concentration bounds here are deterministic. In... | We show that analogous to the causes of BPs for QNNs there are at least three different mechanisms that can lead to the exponential concentration of the encoded quantum states, including (i) the expressivity of the encoded quantum state ensemble, (ii) the entanglement in encoded quantum states with a local observable a... | It is worth highlighting that the entanglement-induced bound in Theorem 2 is stated for a given pair of data-encoded states, and not as an average over all possible data pairs. Hence, it is thus natural to determine classes of data and embeddings where concentration will arise with high probability, e.g., cases when th... | D |
Figure 1: A lane change event where a vehicle performs a right lane change. $f_{0}$ denotes the frame at which the lane change starts and $f_{1}$ is the frame at which the rear middle part of the target ... | Most current methods [25, 1, 31, 17, 5, 4] use physical variables, e.g., driving speed, acceleration, time-gap, heading angle, yaw angle, distances, etc. for lane change recognition. Nevertheless, physical variables cannot represent the type of target objects as they do not contain enough semantic information, where... |
To handle lane change recognition problems, many existing works [1, 31, 17, 5, 4, 18] normally employ physical variables to represent the relative dynamics of a target vehicle with its surrounding vehicles, e.g., driving speed, acceleration, time-gap, heading angle, yaw angle, distances, etc. Lane change events are pr... |
Two approaches involving seven 3D action recognition networks are adopted to classify and anticipate lane change events on the PREVENTION dataset. For our RGB+3DN method, the lane change recognition problem is formulated as an action recognition task only utilising visual information collected by cameras. The best per... | Owing to the improvements in computational hardware, CNN based prediction algorithms have been adopted to understand images and videos, and even obtain superior performance than humans in some fields such as image recognition, especially for fine-grained visual categorization. To this end, some research explores the po... | A |
for $\mathcal{A}(p)=c$ holds $\max(N_{t,k})-\min(N_{t,k})\leq\epsilon$ ... | for $\mathcal{A}(p)=c$ holds $\max(N_{t,k})-\min(N_{t,k})\leq\epsilon$ ... | $c_{t}=c_{i}=\ldots=c_{i+k-1}$ and $\operatorname{argmax}\mathcal{A}(p)$ ... |
$u_{k}(t,c)=1-(\max(N_{t,k})-\min(N_{t,k}))$ ... |
$N_{t,k}=(c_{\pi[1]},c_{\pi[2]},\ldots,c_{\pi[k]})$ ... | C
The results are listed in Table 8, from which the following conclusions can be drawn: 1) the prompts of CoOp are transferable; 2) transferring prompts is able to improve the performance slightly (by comparing CoOp-CA and Oracle); 3) the proposed multi-task prompt learning is superior to the simple prompt transferring ... |
The main results of different methods on four datasets are listed in Table 10. For SoftCPT, we report the results of SoftCPT-NATA and SoftCPT-NATS as they could acquire desirable performance with lower computational cost. As a comparison, we also report a variant of SoftCPT, i.e., SoftCPT*. It is the method that learn... | Generalization From Base to New Classes. We here study the generalizability across classes following the same setting in CoCoOp [44]. From the results in Table 9, we can see that SoftCPT outperforms CoOp on both base and new classes, demonstrating the generalizability of multi-task prompt tuning. Besides, SoftCPT outpe... | For this aim, we proposed the soft context shared prompt tuning, SoftCPT, which adopts a shared meta network across all tasks to generate the context of a task based on its task description text. The meta network involves extracting task features by pre-trained language model and transforming the task features to neede... |
CoOp proposed by Zhou et al. [5] is the first method that successfully introduces prompt tuning to VLMs. Following this line, several prompt tuning methods were proposed to further improve the effectiveness and versatility for few-shot recognition task. CoCoOp [44] was proposed to address the class shift problem by in... | B |
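The rows above follow a fixed schema: a context passage, four candidate continuations in columns A–D, and a label naming the correct option. Below is a minimal sketch, assuming the data is published as a Hugging Face dataset, of how one might load and iterate rows with this schema using the `datasets` library; the repository id `user/context-choice-corpus` and the split name are placeholders, not the actual identifiers of this dataset.

```python
# Minimal sketch: iterate a context / A / B / C / D / label table with the
# Hugging Face `datasets` library. The repository id below is a placeholder.
from datasets import load_dataset

ds = load_dataset("user/context-choice-corpus", split="train")  # hypothetical repo id

OPTION_KEYS = ["A", "B", "C", "D"]

def add_choices(example):
    """Gather the four candidate continuations and the index of the labeled one."""
    choices = [example[k] for k in OPTION_KEYS]
    gold_index = OPTION_KEYS.index(example["label"])  # label is one of "A".."D"
    return {"choices": choices, "gold_index": gold_index}

ds = ds.map(add_choices)

row = ds[0]
print(row["context"][:200])                      # truncated context passage
print(row["choices"][row["gold_index"]][:200])   # the labeled continuation
```

A plausible use of rows in this form is next-passage selection: score each candidate continuation against its context and check whether the top-scoring option matches the stored label.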