context
stringlengths
250
3.86k
A
stringlengths
250
5.11k
B
stringlengths
250
3.39k
C
stringlengths
250
8.2k
D
stringlengths
250
5.02k
label
stringclasses
4 values
Our study provides insight into the features and mechanisms that would be needed in a tool for parents and teens to participate in joint oversight of their online safety and privacy. Our findings suggest additional features that designers should consider in supporting parents’ and teens’ needs.
Overall, we found that parents and teens had given little previous thought to mobile online safety and digital privacy when they installed new apps. Most of the participants (63%, N=12 Parents and 52%, N=10 Teens) said that they took no special steps to verify an app’s safety, but rather they installed apps on an as-n...
Even though parents valued the ability for joint oversight, they were uncomfortable with the fact that the app put them at an equal level to their teens. They felt that the app should somehow account for the asymmetry in the parent-teen relationship. Mostly parents are the ones who brought up this matter and expressed...
We found that teens were more inclined to check their own app safety and did not show as much of an interest in providing oversight to their parents and other family members. Thus, incentive mechanisms would need to focus in particular on motivating and engaging teens. Research should investigate which incentives would...
Participants reported that they wanted to be informed about any changes that take place in their family members’ apps and settings. For example, parents in particular, wanted to be notified whenever their teens installed a new app. Thus, mechanisms to keep users aware of app changes will be important. For them, more tr...
D
Recall the two-square dataset “David and Goliath" in the right subplot of Figure 1 in the introduction. 100 points are uniformly sampled from the bigger square annulus, and 400 from the smaller annulus. Since the dataset has no additive noise or outliers, we compare the distance filtration and the DAD filtration for t...
Figure 12, followed by the persistence diagrams of the two filtrations in Figure 13. Even without the aid of the confidence bands, one point is conspicuously far away from the diagonal in the persistence diagram of each filtration. The RDAD filtration picks up 2 more significant loops.
Recall the two-square dataset “David and Goliath" in the right subplot of Figure 1 in the introduction. 100 points are uniformly sampled from the bigger square annulus, and 400 from the smaller annulus. Since the dataset has no additive noise or outliers, we compare the distance filtration and the DAD filtration for t...
Figure 2: The distance filtration and its persistence diagrams. In the first subplot is a sample of points near a circle. Unions of balls centered at these points with different radii are shown the subsequent subplots. The last subplot shows the persistence diagrams of these unions of balls. The red diamond points corr...
Two squares are clearly visible in the scatter plot in the right subplot of Figure 1. However, the blue point corresponding to the smaller square in the persistence diagram of the distance filtration is very close to the diagonal (it is at the tip of the cluster of red diamonds near the origin). On the other hand, for ...
D
where C𝐶Citalic_C is the number of AUs, pisubscript𝑝𝑖p_{i}italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is the ground-truth binary label for the ithsuperscript𝑖thi^{\rm th}italic_i start_POSTSUPERSCRIPT roman_th end_POSTSUPERSCRIPT AU, and p^isubscript^𝑝𝑖\hat{p}_{i}over^ start_ARG italic_p end_ARG star...
To answer the whys and wherefores of subject variation problem, we use a structural causal model (Pearl et al. 2000) to illustrate the causalities among variables in AU recognition models. As shown in Fig. 2, there are four variables involved in our AU causal diagram, which are facial images X𝑋Xitalic_X, subjects S𝑆...
In our experiments, we use two AU benchmark datasets, BP4D (Zhang et al. 2014) and DISFA (Mavadati et al. 2013). BP4D involves 41 young adults, including 23 female and 18 male adults. Each subject is asked to finish 8 tasks, and 324 videos containing around 140,000 images are captured. Each frame is annotated with bina...
To show that the representations learned by CISNet are more subject-invariant, we insert C𝐶Citalic_C spatial-attention layers (Zhao and Wu 2019) between the backbone network and the classifier to obtain AU-specific features, where C𝐶Citalic_C is the number of AUs, and use t-SNE (van der Maaten and Hinton 2008) for vi...
Causal inference (Pearl et al. 2000; Rubin 2005) has been gradually applied to computer vision tasks in recent years, such as long-tailed classification (Tang, Huang, and Zhang 2020), weakly-supervised semantic segmentation (Zhang et al. 2020), few-shot learning (Yue et al. 2020), and class-incremental learning (Hu et...
B
To implement BBP in Ethereum, we put forth a Time-specific transaction Selection and Ordering (TSO) algorithm to disambiguate and align the transaction selection and ordering at all nodes. Although nodes can independently and freely select transactions for blocks, rational nodes would not do so because they can maximiz...
As shown in Fig. 1(b), the underpinning of BBP is that each node anticipates the transactions in the upcoming next block and pre-packs a blockbody based on the anticipated transactions. The node pre-executes and pre-validates the anticipated transactions before the next block is mined. To the extent that the anticipat...
As shown in Fig. 1(b), the basic idea of bodyless block propagation (BBP) is that only the blockheader is transmitted between nodes, and a new block is pre-validated so that the in situ block validation during the block propagation is just a simple comparison of the pre-computed global state and the global state embedd...
Challenge 2: The second challenge is how to deal with the as-yet unavailable information during the pre-validation process. Even if PPB is perfectly synchronized among all nodes, some required information may not be available yet. For example, some transactions may involve the Coinbase address associated with the mine...
To improve BBP in Ethereum, we put forth a pre-validation algorithm to accelerate block propagation. For block validation in Ethereum and most blockchains that support smart contracts, nodes need to execute the transactions to compute a updated global state. Since the execution order of the transactions matters to the ...
D
Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate...
Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate...
In a broader context of reinforcement learning with partial observability, our work is related to several recent works on POMDPs with special structures. For example, Kwon et al. (2021) considers latent POMDPs, where each process has only one latent state, and the proposed algorithm efficiently infers the latent state...
In the contexture of reinforcement learning with function approximations, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinfo...
More specifically, partial observability poses both statistical and computational challenges. From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property. In particular, predicting the future often involves inferring the distribution of the ...
B
Furthermore, we found that articulation disorder was the most frequent disorder addressed by the included studies, with three studies dedicated to them. This may be due to the fact that articulation disorder are commonly found in persons with other SSD (Flipsen Jr, \APACyear2015). The results show that most studies aim...
Furthermore, we found that articulation disorder was the most frequent disorder addressed by the included studies, with three studies dedicated to them. This may be due to the fact that articulation disorder are commonly found in persons with other SSD (Flipsen Jr, \APACyear2015). The results show that most studies aim...
There are some limitations in our study which is worth mentioning. We relied on three databases: ACM DL, IEEE Xplore, and Scopus; therefore, we may have missed relevant papers published in other databases. Another limitation is the inapplicability of quality appraisal methods such as the ”Risk of Bias Assessment” in ou...
We conducted this systematic literature review based on a sample of 24 out of 678 research papers deriving from Scopus, IEEEXplore, and ACM DL databases. Exciting insights and trends emerged from our analysis of these papers. In recent years, we observed an increasing interest in AI-based auto- mated speech therapy to...
were conducted in Europe (11 papers) and North America (6 papers). Studies conducted in Europe include four studies from Spain and one study each from Germany, Hungary, Romania, Portugal, the Czech Republic, and Italy. On the other hand, studies from North America include four studies from the USA, one collaborative st...
B
Here, for brevity we present the average of 𝗂𝗇𝗍𝗋𝖺X,k−1⁢(Vj)subscript𝗂𝗇𝗍𝗋𝖺𝑋𝑘1subscript𝑉𝑗{\sf intra}_{X,k-1}(V_{j})sansserif_intra start_POSTSUBSCRIPT italic_X , italic_k - 1 end_POSTSUBSCRIPT ( italic_V start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) and 𝗂𝗇𝗍𝖾𝗋X,k−1⁢(Vj)subscript𝗂𝗇𝗍𝖾𝗋𝑋𝑘1subscr...
We consider the case that some points may have significantly higher noise perturbations than others. In this setting, we randomly select some points, and we generate some points with c⋅σ⋅𝑐𝜎c\cdot\sigmaitalic_c ⋅ italic_σ coordinate-wise variance, where c=8𝑐8c=8italic_c = 8 for our experiments (recall that the noise...
In this direction, we consider the single-cell RNA sequencing datasets from a benchmark paper [DRS20]. These datasets also have moderate to highly reliable ground truth labels, that help us understand the usefulness of our metrics and our algorithm. These datasets vary in the number of cells, genes, clusters, cells per...
Next, we study the usefulness of our outlier detection method in these datasets. Unlike our simulations, there is no ground-truth labeling for outliers. Rather, we assume that in each community (as defined by the labels provided with the dataset), some points may behave more like an outlier, in that they may be a mixt...
Here we remark that in real-world data, while some points may indeed behave like outliers, they need not all be the same kind of outlier. Thus, we would like to verify our method’s performance in the presence of a different kind of outlier, which we concretize below.
C
The results for the generated scene graphs from incomplete images incorporated with the proposed SI-Dial are also presented in Table 1. We observe that dialog can serve as an effective supplementary information source in case of missing visions, especially for the semantically masked visual input with most severe missi...
In addition, we also show again as in the baseline situations, that the first two levels of visual missingness, are innocuous for the SGG tasks. It provides empirical evidence and further insights to bring deliberately obfuscations for privacy concerns as in [13].
The results for the generated scene graphs from incomplete images incorporated with the proposed SI-Dial are also presented in Table 1. We observe that dialog can serve as an effective supplementary information source in case of missing visions, especially for the semantically masked visual input with most severe missi...
The experimental results show promising performance improvement with our proposed framework compared to multiple baselines. Notably, similar to the findings from [13] where the face obfuscated images only cause trivial performance drop for classifications and object detection, we also observe empirical evidence that no...
We observe that the PredCls does not fluctuate much in case of missing visions compared to other two metrics SGCls and SGDet. It is consistent with the previous findings in [2], where the authors find that the object labels are highly predictive of relation labels but not vice-versa. In contrast, SGCls and SGDet drops ...
A
The approximation ratio for the maximum cost is defined in the same way by replacing T⁢C⁢(𝐱,f⁢(e,𝐱))O⁢P⁢Tt⁢c⁢(e,𝐱)𝑇𝐶𝐱𝑓𝑒𝐱𝑂𝑃subscript𝑇𝑡𝑐𝑒𝐱\frac{TC(\mathbf{x},f(e,\mathbf{x}))}{OPT_{tc}(e,\mathbf{x})}divide start_ARG italic_T italic_C ( bold_x , italic_f ( italic_e , bold_x ) ) end_ARG start_ARG italic_O i...
In the one-dimensional facility location problem, agents are located on the real line, and a planner’s goal is to build one or more facilities on the line to serve the agents. The cost of an agent is her distance to the nearest facility. The problem asks to locate facilities to minimize the total cost of all agents (th...
In this section we present some useful insights regarding the structure of agents’ optimal location and the solution of the optimization problems. As we shall see in the later sections, these properties are useful in designing and analyzing strategyproof mechanisms.
The rest of the paper is organized as follows. Section 2 presents the formal definitions of our model. In Section 3, we derive some useful technical properties. Sections 4 and 5 present our main results for the one-facility game with total and maximum cost objectives, respectively.
The position of each agent is private information. We want to design strategyproof mechanisms that guarantee that the agents report their true positions and locate the facilities based on the reports such that either the total or the maximum cost approximates the optimal value of the corresponding optimization problem ...
B
Every chain has alternately edge link triangles and vertex link triangles if we start from the variable triangle side, the first triangle is an edge link triangle. The number of each type of triangle is 2⁢k22subscript𝑘22k_{2}2 italic_k start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT (see Figure 8 for k2=1subscript𝑘21k_{2}=1i...
This technique does not work for our class under consideration since this operation adds 3 new vertices of degree 2 and they are not part of triangles. Instead of this operation we propose other alternatives to avoid induced cycles of size at most k𝑘kitalic_k for any k𝑘kitalic_k. But all of them come with some cost. ...
In any case, the contact vertex is colored white in any valid bi-coloring by Theorem 2 and its neighbor in the edge link triangle should be black, again by Theorem 2. In consequence, the vertices of the edge link triangle in the chain should have different colors. All other edges of the chain form part of some induced ...
Every chain has exactly 2⁢k12subscript𝑘12k_{1}2 italic_k start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT vertex link triangles (see Figure 7 for k1=1subscript𝑘11k_{1}=1italic_k start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 1). It is clear that every pair of consecutive vertices in the chain are odd-degree vertices in some induc...
It is clear that S⁢(G⁢(F))𝑆𝐺𝐹S(G(F))italic_S ( italic_G ( italic_F ) ) admits a DIM if and only if the resulting graph Q𝑄Qitalic_Q, after the replacement of variable-clause edges by the chains described above, has a DIM. Furthermore, every variable-clause edge of S⁢(G⁢(F))𝑆𝐺𝐹S(G(F))italic_S ( italic_G ( italic_F...
B
Our approach parallels the development in 17, where we addressed the approximation of model predictive control policies for deterministic systems. We ask whether the training of a ReLU-based neural network to approximate a controller Φ⁢(⋅)Φ⋅\Phi(\cdot)roman_Φ ( ⋅ ) has been sufficient to ensure that the network’s outp...
The first quantity is precisely of the type required to apply the stability result of §3.2, thus supplying a condition on the optimal value of an MILP sufficient to certify the uniform ultimate boundedness of the closed-loop system (1) under the action of ΦNN⁢(⋅)subscriptΦNN⋅\Phi_{\textrm{NN}}(\cdot)roman_Φ start_POST...
By analyzing the results in Tab. 3 – specifically, by contrasting the third and fourth column – we notice that we have always succeeded in the design of a minimum complexity, stabilizing ReLU-based surrogate ΦNN⁢(⋅)subscriptΦNN⋅\Phi_{\textrm{NN}}(\cdot)roman_Φ start_POSTSUBSCRIPT NN end_POSTSUBSCRIPT ( ⋅ ) of Φ⁢(⋅)Φ⋅\...
Our approach parallels the development in 17, where we addressed the approximation of model predictive control policies for deterministic systems. We ask whether the training of a ReLU-based neural network to approximate a controller Φ⁢(⋅)Φ⋅\Phi(\cdot)roman_Φ ( ⋅ ) has been sufficient to ensure that the network’s outp...
We will obtain a condition on the optimal value of \pglsMILP sufficient to assure that the closed-loop system (1) under the action of ΦNN⁢(⋅)subscriptΦNN⋅\Phi_{\textrm{NN}}(\cdot)roman_Φ start_POSTSUBSCRIPT NN end_POSTSUBSCRIPT ( ⋅ ) is (uniformly) ultimately bounded within a set of adjustable size and (exponential) co...
D
Qualitative results for a single sequence are presented in Figure 4. We observe similar behavior as before: Both methods fit the given training data very well, however, in case of the baseline the pendulum motion significantly slows down for unseen time steps and the predictions for unseen data are not very accurate. ...
Qualitative results for a single scene can be seen in Figure 5, Table 2 shows a quantitative evaluation over all sequences. For more results we refer to the appendix. We see that our model produces photorealistic renderings of the scene, even for the predicted frames. While both baselines yield similar results on the t...
Figure 3 shows a qualitative comparison of our results to the method of [22] trained in the two settings explained above. We observe that for this sequence all approaches yield reasonable results for the reconstruction of the training frames. However, for extrapolation to unseen points in time, the overfitted model of...
For a quantitative evaluation of the prediction quality, we report the intersection over union (IoU) averaged over all frames of the test sequences. Averaged over the first 20 sequences of the test set, the overfitted baseline achieves an IoU of 0.540.540.540.54 while our method achieves a score of 0.760.760.760.76. I...
Table 2: Reconstruction quality on the test frames for the synthetic examples. We report IoU of the predicted vs. groundtruth masks and the relative error of all estimated physical parameters in percent (“Param”) averaged over the 9 sequences of each dataset. Our method achieves excellent reconstruction quality, mask c...
C
Quantum communication networks (QCNs) utilize quantum mechanics principles to enhance information transfer. QCNs transmit data using quantum states that are entangled and can exist in a superposition of multiple states simultaneously, offering greater efficiency than classical networks [1]. However, these quantum stat...
In contrast to conventional QCN frameworks, where the receiver typically aims to execute quantum gates and measurements for the precise reconstruction of the embedded data structure within quantum states, our framework distinguishes itself. Here, the receiver’s goal is to draw specific logical conclusions [13], which m...
Here, the stored quantum vectors are initialized to different clusters either in an arbitrary fashion or by utilizing efficient heuristic approaches. Then, multiple iterations are performed such that in each iteration, the goal is to minimize the loss function in (2), which ensures that each vector is assigned to the c...
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework, dubbed the framework. This framework draws upon recent advancements in two key quantum information science areas. First, it utilizes high-dimensional quantum information and to extract underlying structures of classi...
The majority of QCN models optimize the quantum resource allocation and network overall performance by embedding classical data into quantum states that are shared over quantum channels between distant nodes [3, 4, 5, 6]. Additionally, numerous approaches have been proposed to develop resource-efficient QCNs, including...
D
To deal with the enforcement of concealability, we assume that the considered system is unconcealable in the remainder of this paper. In this section, we first introduce a defensive function to manipulate actual observations generated by the system in order to enforce concealability.
We are interested in hiding from an external observer (a curious eavesdropper) confidential information of the system that is represented as the occurrences of events from ESsubscript𝐸𝑆E_{S}italic_E start_POSTSUBSCRIPT italic_S end_POSTSUBSCRIPT, which is called the secret event set. Accordingly, the privacy of the s...
Then, the notion of C𝐶Citalic_C-enforceability is introduced, which characterizes the ability of a defensive function to manipulate the observations of output events such that the occurrences of secret events can be concealed from the eavesdropper regardless of system activity.
The defensive function proposed in this section can alter observable output events of the system G𝐺Gitalic_G by deletions, insertions, or replacements. The problem of enforcing concealability of the system aims to determine whether the defensive function is C𝐶Citalic_C-enforcing, i.e., given constraints in terms of h...
If concealability of the system does not hold, then we deal with the problem of concealability enforcement. The notion of C𝐶Citalic_C-enforceability characterizes whether an external defensive function has the capability to use an obfuscation strategy that manipulates the outputs generated by the system such that the ...
B
Each IAB-node holds a buffer storing the bits received via backhaul links from its parent. The buffers could be the bottleneck of multi-hop transmissions. Indeed, if a buffer is empty, the activated links will transmit nothing and thus it causes a downlink starvation problem. Therefore, the flow routing and link schedu...
In the access, UEs connecting to such a backhaul are associated to sectors based on their positions. A UE can belong to two sectors if it is located on a sector boundary. And UEs are expected to work in a dual-connectivity mode [polese2017improved], i.e., equipped with both legacy (3GPP FR1) and mmWave (3GPP FR2) inter...
a) UE presence - Ii,sp⁢r⁢e⁢ssubscriptsuperscript𝐼𝑝𝑟𝑒𝑠𝑖𝑠I^{pres}_{i,s}italic_I start_POSTSUPERSCRIPT italic_p italic_r italic_e italic_s end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i , italic_s end_POSTSUBSCRIPT, which takes value 1111 for sector s𝑠sitalic_s of agent i𝑖iitalic_i if there are UEs located und...
Therefore, we assume that each IAB-node is informed about associated UEs in real time and their channel status. Channel status information is typically available at each BS via Reference Signals and used for beamforming, rate adaptation, and other 5G procedures. BSs can also estimate the number and the position of conn...
The centralized critics are computed at a central entity located at the IAB-donor, while local policies are distributed at agents associated to Tx antenna panels at both IAB-donor and IAB-nodes. The centralized critics (i.e., the DNN in the green circle in Fig. 4(a)) act as a bridge among local policies and implicitly ...
A
Tetenov (2016) pursued a similar analysis of Phase III incentives, and concluded that a Type I error of up to 15% is incentive-aligned for an average drug. In contrast, when we consider unusually low Phase III costs and unusually profitable drugs, we find that the FDA could already be at risk of violating incentive-ali...
We begin with a stylized example to highlight the interaction between an agent’s incentives and the principal’s statistical protocol. Suppose there are two types of pharmaceutical companies: companies with ineffective drugs (θ=0𝜃0\theta=0italic_θ = 0) and companies with effective drugs (θ=1𝜃1\theta=1italic_θ = 1). F...
The primary implication of this analysis is that if the standard of evidence required by the FDA is loosened, it may cease to be incentive-aligned for the more profitable drugs. The right standard of evidence for the FDA is a source of ongoing debate, and some call for much looser protocols. For example, the Bayesian ...
Case 2: large profit. Suppose that companies who receive approval make $1 billion in profit, 100 times their investment. In this case, agents of type θ=0𝜃0\theta=0italic_θ = 0 would choose to run trials: their expected profit from seeking approval is $40 million. On average, 5% of such agents would receive approval, s...
There are important limitations to the above analysis. In particular, our calculation omits additional regulatory checks against approving ineffective drugs and punishments for agents who intentionally run clinical trials for drugs they believe to be ineffective. These considerations include: additional evidence standa...
D
Point Cloud Data. In Supplementary Table A1 we present the registration metrics for PPIR(MPC) and PPIR(FHE)-v1. The registration shows that PPIR(MPC) achieves the best results compared to PPIR(FHE), which exhibits not only a longer computation time but also requires higher bandwidth, thanks to its non-iterative algorit...
Table 3: Non-Linear SSD registration test comparison between Clear, PPIR(MPC), PPIR(FHE)-v1 and PPIR(FHE)-v2. The registration metrics are reported as mean and standard deviation. Efficiency metrics in terms of average across iterations. RMSE: root mean square error.
Whole body PET data: affine registration (SSD). Table 2 compares Clear, PPIR(MPC), PPIR(FHE)-v1 and v2, showcasing metrics resulting from the affine transformation of whole-body PET images. Notably, registration through PPIR(MPC) yields negligible differences compared to Clear in terms of the number of iterations, int...
Table 2: Affine SSD registration test, comparison between Clear, PPIR(MPC), PPIR(FHE)-v1 and PPIR(FHE)-v2. Registration metrics are reported as mean and standard deviation. Efficiency metrics in terms of average across iterations. RMSE: root mean square error.
Brain MRI data and whole body PET data: non-linear registration (SSD). Table 3, comparing Clear and PPIR(MPC), PPIR(FHE)-v1 and v2, showcases the metrics resulting from spline-based non-linear registration between grey matter density images without the application of gradient approximation. Additionally, the table incl...
B
We use ResNet [18], VGG [46] and MobileNet [20] as the backbone, and adopt standard data augmentation techniques (random crop and horizontal flip) and an SGD optimizer in all experiments. We consistently train the teacher and student model for 350350350350 epochs, except for 12121212 epochs for MNIST, and we adopt a mu...
The generator uses random variables as inputs that are sampled from a prior distribution with the same dimensionality as the logits. The well-trained generator is frozen and grafted behind the teacher and student model, whose output logits of the same examples are used as the inputs of the generator, as shown in Fig. 2...
After training the teacher, we train a DCGAN [42] with Gaussian noise in the same dimension as the category counts. The output logits of teacher or student for samples in the same class follow a Gaussian distribution, and the logits center is the mean of the Gaussian.
It is equivalent to a teacher assistant transferring the teacher’s knowledge to the student. Fig. 2 illustrates the architecture of MEKD. We freeze the generator and graft it behind the teacher and student model in the same way, using the softened logits of both models as the generator input.
Note that the dimensionality of z𝑧zitalic_z is the same as the output logits of the teacher model, i.e. |z|=C𝑧𝐶|z|=C| italic_z | = italic_C. The generator G𝐺Gitalic_G uses noise z𝑧zitalic_z to synthesize images, and the discriminator D𝐷Ditalic_D minimizes the Wasserstein distance between the generated μ′superscri...
B
The second reason is efficient compression of smooth functions. It is known that for functions with m𝑚mitalic_m continuous derivatives n𝑛nitalic_n-th coefficient is O⁢(n−m)𝑂superscript𝑛𝑚O(n^{-m})italic_O ( italic_n start_POSTSUPERSCRIPT - italic_m end_POSTSUPERSCRIPT ) for both Chebyshev [MH02, Theorem 5.14] and ...
It is important to understand Equation 3 implies that SNO only realizes mapping between two functions given by truncated series. If one needs to compute these functions on the finer grid, Chebyshev or trigonometric interpolation should be use. In the same vein, Chebyshev/trigonometric smoothing (the chopping of the ser...
All these properties combined make trigonometric and Chebyshev polynomials especially suitable for PDE solutions by spectral methods [Boy01] and adaptive lossless computations with functions [Tre07]. The later goal is fully realized in the Chebfun software.444https://www.chebfun.org Chebfun demonstrates that computati...
First, the output of the neural operator is a neural network, i.e., essentially a black-box function. In classical numerical methods such as FEM [Cia02], spectral methods [Boy01] and others, the parametrization allows for extracting bounds on function, its derivatives, and any other local or global information in a co...
Our solution to both of outlined problems is to detach the process of function approximation (interpolation) from the mapping between functional spaces, and explicitly fix the highest possible resolution. To do that, for both domain and codomain of neural operator we consider functions represented by finite series of t...
B
We propose the hierarchical distillation to transfer local features along with global dependency instead of the original feature maps. This allows us to apply the proposed method to applications, which are suffered from heavy computational burden because of the large size of feature maps.
To this end, we propose a one-to-all spatial matching knowledge distillation pipeline that allows the each feature location of the teacher to teach the entire student features in a dynamic manner. To make the whole student mimic a spatial component of the teacher, we propose the Target-aware Transformer (TaT) to pixel-...
We address the conundrum by the proposed anchor-point distillation. As shown in Figure 2 (c), we summarize the local area to compact representation, referred to anchor, within a local area that is representative to describe the semantic of the given area, forming the new feature map of smaller size. Since the new feat...
Figure 2: Illustration of our framework. (a) Target-aware Transformer. Conditioned on the teacher feature and the student feature, the transformation map Corr. is computed and then applied on the student feature to reconfigure itself, which is then asked to minimize the L22{}_{2}start_FLOATSUBSCRIPT 2 end_FLOATSUBSCRI...
Validating the anchor-point distillation. Then, we give more insight concerning the proposed objectives’ functionality through sensitivity analysis. Specifically, we investigate the hyper-parameters that would influence the behavior of the training process. In terms of the anchor-point distillation, this work utilizes ...
C
As the main part of the paper describes the proposed ENS-t-SNE algorithm (from the idea to the implementation), the quantitative and qualitative evaluation is just sketched out here. Nevertheless, several different types of experiments, on synthetic and real-world datasets, indicate that ENS-t-SNE can indeed simultaneo...
An interesting direction that we began to explore is to extend the objective function such that each perspective shows the t-SNE embedding for different values of perplexities; see the supplemental material. Another possible application is using ENS-t-SNE to visualize image datasets, based on different parts of the inp...
However, feature grouping is not always clear. The data might have hundreds of features or come from where the meaning of features is unclear. Subspace clustering algorithms can efficiently find subspaces of interest. The USDA food composition dataset is frequently analyzed in subspace clustering literature; we use tw...
We use separate visual channels to encode the different types of clusters. Specifically, to show the original clusters for the first perspective, we use colors (blue and orange), for the second perspective, we use the shape (circles and squares), and for the third perspective, we use texture, filled and not filled; see...
In the ENS-t-SNE embedding, each point belongs to two clusters; one for its species and one for its sex. In an interactive environment, one can follow a datapoint from one projection to the other. In other words, there is a transition between the two views in three dimensions that is missing when using small multiples....
A
}_{h}({a^{h+k}_{h-\ell}})\cup\bigl{\{}{{}^{t}}{o^{h+k+1}_{h-\ell}}\bigr{\}}.caligraphic_D start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT ( italic_a start_POSTSUPERSCRIPT italic_h + italic_k end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_h - roman_ℓ end_POSTSUBSCRIPT ) ...
where the dataset 𝒟htsubscriptsuperscript𝒟𝑡ℎ\mathcal{D}^{t}_{h}caligraphic_D start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT is updated based on the data collection procedure described in §4.1. Meanwhile, we define the following density mappings for the estimation of...
An Overview of Embedding Learning. We now summarize the learning procedure of the embedding. First, we estimate the density mappings defined in (3.6) and (3.7) under the true parameter θ∗superscript𝜃\theta^{*}italic_θ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT based on interaction history. Second, we estimate the Be...
Upon collecting the data, we follow the embedding learning procedure and fit the density mappings for the estimation of Bellman operator. In practice, various approaches are available in fitting the density by observations, including the maximum likelihood estimation (MLE), the generative adversial approaches, and the ...
We now fit the density mappings based on the density estimation oracle. For each step h∈[H]ℎdelimited-[]𝐻h\in[H]italic_h ∈ [ italic_H ] and action sequence ah−ℓh+k∈𝒜k+ℓ+1subscriptsuperscript𝑎ℎ𝑘ℎℓsuperscript𝒜𝑘ℓ1{a^{h+k}_{h-\ell}}\in\mathcal{A}^{k+\ell+1}italic_a start_POSTSUPERSCRIPT italic_h + italic_k end_POSTS...
C
We then estimate these bridge functions via minimax estimation (Dikkala et al., 2020; Chernozhukov et al., 2020; Uehara et al., 2021). More importantly, to handle the distributional shift, we propose a sequence of novel confidence regions for the bridge functions, which quantifies the uncertainty of minimax estimation ...
Such construction contrasts sharply with previous works on offline RL which build confidence regions via either least square regression or maximum likelihood estimation (Xie et al., 2021; Uehara and Sun, 2021; Liu et al., 2022). Furthermore, we develop a novel theoretical analysis to show that any function in the confi...
This sequence of new confidence regions has not been considered in the previous works on off-policy evaluation (OPE) in POMDPs (Bennett and Kallus, 2021; Shi et al., 2021) as pessimism seems unnecessary in these works. Meanwhile, the confidence regions are constructed as a level set with respect to the loss functions o...
OPE via causal inference. Our work is closely related to the line of research that employing tools from causal inference (Pearl, 2009) for studying OPE with unobserved confounders (Oberst and Sontag, 2019; Kallus and Zhou, 2020; Bennett et al., 2021; Kallus and Zhou, 2021; Mastouri et al., 2021; Shi et al., 2021; Benne...
the past and current observations, which serves as the negative control action and outcome respectively (Miao et al., 2018a, b; Cui et al., 2020; Singh, 2020; Kallus et al., 2021; Bennett and Kallus, 2021; Shi et al., 2021). Then the value of each policy can be identified by a set of confounding bridge functions corres...
B
First, they studied unconstrained regression problems with objectives in the form F⁢(𝒙T⁢ξ)𝐹superscript𝒙𝑇𝜉F({\bm{x}}^{T}\xi)italic_F ( bold_italic_x start_POSTSUPERSCRIPT italic_T end_POSTSUPERSCRIPT italic_ξ ), resulting in objective Hessians owning rank-one updates that cannot be employed for our general problem ...
The asymptotics of second-order Newton’s methods for unconstrained problems have recently been investigated. Bercu2020Efficient designed an online Newton’s method for logistic regression, and Boyer2023asymptotic generalized that method to general regression problems. Compared to first-order methods that often consider...
In this paper, we answer this question by complementing the global convergence guarantees and establishing the local asymptotic properties of existing StoSQP methods. Specifically, we focus on an Adaptive Inexact StoSQP scheme, referred to as AI-StoSQP. By adaptive we mean that the scheme inherits the critical merit of...
In particular, the StoSQP method computes a stochastic Newton direction in each iteration by solving a quadratic program, whose objective model is estimated using the new sample. Then, the method selects a proper stepsize to achieve a sufficient reduction on the merit function, which balances the optimality and feasibi...
To our knowledge, this is the first work that performs online inference by taking into account not only the randomness of samples but also the randomness of computation (i.e., sketching and stepsize); the latter is particularly important for making second-order methods computationally promising.
D
Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family Qksubscript𝑄𝑘Q_{k}italic_Q start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT–Qk−1subscript𝑄𝑘1Q_{k-1}italic_Q start_POSTSUBSCRIPT italic_k - 1 end_POSTSUBSCRIPT on quadrilateral/hexahedral meshes focused on the case...
Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family Qksubscript𝑄𝑘Q_{k}italic_Q start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT–Qk−1subscript𝑄𝑘1Q_{k-1}italic_Q start_POSTSUBSCRIPT italic_k - 1 end_POSTSUBSCRIPT on quadrilateral/hexahedral meshes focused on the case...
The discrete LBB condition could also be shown for the isogeometric generalized Taylor-Hood family, see [6], [7]. The proof there relies on a continuously differentiable parametrization of the domain ΩΩ\Omegaroman_Ω on each of a fixed number of patches, which does not cover general quadrilateral/hexahedral meshes.
In this paper we focus on the related generalized Taylor-Hood family Qksubscript𝑄𝑘Q_{k}italic_Q start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT–Qk−1subscript𝑄𝑘1Q_{k-1}italic_Q start_POSTSUBSCRIPT italic_k - 1 end_POSTSUBSCRIPT on quadrilateral/hexahedral meshes under the following assumptions.
The present paper suffers from the same rather severe restrictions on hexahedral meshes in 3D as in previous work. The analysis of discrete inf-sup conditions for general hexahedral meshes remains an open problem. Another open problem is the analysis of isoparametric generalized Taylor-Hood families in 2D and 3D to co...
D
We relate WaveMix to previous works in Section 2, where we delve further into the image priors modeled by various classes of neural architectures for vision, and the use of wavelet transform. Our key innovations – the WaveMix blocks, use of multi-level 2D-DWT in each block, channel mixing, and the preservation of featu...
What makes DWT an attractive tool for analysis of natural signals are its multi-resolution properties and treatment of spatio-temporally sparse discontinuities (edges). A 1D-DWT splits an input 1-D signal x of length H𝐻Hitalic_H into two sub-bands roughly of length H/2𝐻2H/2italic_H / 2 each [36]. The first one is ca...
We performed ablation studies using ImageNet-1k and CIFAR-10 datasets on WaveMix to understand the effect of each type of layer on performance by removing the 2D-DWT layer, replacing it with Fourier transform or random filters, as well as learnable wavelets. All of these led to a decrease in accuracy. Those methods tha...
Statistical properties of natural images that have been well-studied include shift-invariance, scale-invariance, high spatial auto-correlation and preponderance of certain colors, as well as spatial sparseness of edges, [18, 19, 20, 21]. Shift-invariance, which is a form of stationarity of signals, arises due to the a...
Natural images have a number of priors that are not comprehensively exploited in any single type of neural network architecture. For instance, (1) convolutional neural networks (CNNs) only model shift-invariance using convolutional design elements [5, 6, 7, 8, 9, 10, 11, 12], (2) vision transformers (ViT) model long-ra...
C
Over U2={ζ2′≠0}subscript𝑈2superscriptsubscript𝜁2′0U_{2}=\{\zeta_{2}^{\prime}\neq 0\}italic_U start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = { italic_ζ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ≠ 0 }, we take again Z(1)={ζ1:=x−ℓ⁢cos⁡(θ),ζ2:=y−ℓ⁢sin⁡(θ)}subscript𝑍1formulae-sequence...
θ=cotan−1⁢(ζ1′/ζ2′)𝜃superscriptcotan1superscriptsubscript𝜁1′superscriptsubscript𝜁2′\theta=\mathrm{cotan}^{-1}(\zeta_{1}^{\prime}/\zeta_{2}^{\prime})italic_θ = roman_cotan start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ( italic_ζ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT / ita...
y=cos⁡(ζ1)⁢ζ2−sin⁡(ζ1)⁢(ζ2′−ℓ⁢ζ1′)/ζ1′𝑦subscript𝜁1subscript𝜁2subscript𝜁1superscriptsubscript𝜁2′ℓsuperscriptsubscript𝜁1′superscriptsubscript𝜁1′y=\cos(\zeta_{1})\zeta_{2}-\sin(\zeta_{1})(\zeta_{2}^{\prime}-\ell\zeta_{1}^{% \prime})/\zeta_{1}^{\prime}italic_y = roman_cos ( italic_ζ start_POSTSUBSCRIPT 1 end_POSTSUB...
x=−sin⁡(ζ1)⁢ζ2−cos⁡(ζ1)⁢(ζ2′−ℓ⁢ζ1′)/ζ1′𝑥subscript𝜁1subscript𝜁2subscript𝜁1superscriptsubscript𝜁2′ℓsuperscriptsubscript𝜁1′superscriptsubscript𝜁1′x=-\sin(\zeta_{1})\zeta_{2}-\cos(\zeta_{1})(\zeta_{2}^{\prime}-\ell\zeta_{1}^{% \prime})/\zeta_{1}^{\prime}italic_x = - roman_sin ( italic_ζ start_POSTSUBSCRIPT 1 end_POS...
θ=tan−1⁡(ζ2′/ζ1′)𝜃superscript1superscriptsubscript𝜁2′superscriptsubscript𝜁1′\theta=\tan^{-1}(\zeta_{2}^{\prime}/\zeta_{1}^{\prime})italic_θ = roman_tan start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ( italic_ζ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT / italic_ζ start_POSTSUB...
A
Math Word Problems. It has been suggested zhang2022hgen to ensemble multiple encoders to learn different relations present within problem text, and to incorporate out-of-problem external knowledge structures to inject relational and domain knowledge. As noted in cobbe2021training, high-quality training data of appropri...
There is a clear evolution in mathematical text processing overall, from roots in explicit discourse representation zinn2003computational; cramer2009naproche to the present day, where graph-based and transformer-based models produce leading metrics in a few related tasks peng2021mathbert; ferreira2021star; liang2021mwp...
We have described the path to the state-of-the-art for five representative areas considering the relationship between natural and mathematical language, either through necessity of the task or efficacy of approach. We describe the details, limitations and successes within each area and find that informal methods strug...
Math Word Problems. It has been suggested zhang2022hgen to ensemble multiple encoders to learn different relations present within problem text, and to incorporate out-of-problem external knowledge structures to inject relational and domain knowledge. As noted in cobbe2021training, high-quality training data of appropri...
Formula Retrieval. Similar to identifier-definition extraction, formula retrieval suffers from issues with wildcard formula queries (e.g. T=gα⁢β⁢Tα⁢β𝑇subscript𝑔𝛼𝛽superscript𝑇𝛼𝛽T=g_{\alpha\beta}T^{\alpha\beta}italic_T = italic_g start_POSTSUBSCRIPT italic_α italic_β end_POSTSUBSCRIPT italic_T start_POSTSUPERSCRIP...
B
This dissimilarity is a squared distance weighted by the block proportions between the connectivity matrices of the two networks. The parameters (block proportions and connectivity matrices) are computed separately on the two networks with the node grouping provided by the inference on the whole sub-collection.
This dissimilarity is a squared distance weighted by the block proportions between the connectivity matrices of the two networks. The parameters (block proportions and connectivity matrices) are computed separately on the two networks with the node grouping provided by the inference on the whole sub-collection.
We then use 2222-medoids clustering to split the sub-collection of networks based on the dissimilarity measures. A split is validated if it increases the score of Equation (12). The mathematical definition of the dissimilarity measure and details on the recursive clustering algorithm are given in Appendix A.
Section 2 recalls the definition of the Stochastic Block Model on a single network. We motivate our new approach by inferring it independently on a collection of food webs. Then in Section 3, we present the various variants of the c⁢o⁢l⁢S⁢B⁢M𝑐𝑜𝑙𝑆𝐵𝑀colSBMitalic_c italic_o italic_l italic_S italic_B italic_M. The ...
Figure 4: Above: Clustering and connectivity structures of a collection of 67676767 predation networks from the Mangal database into 5555 sub-collections. The length of the dendrogram is given by the difference in BIC-L to the best model. Below: Contingency table of the clustering found by π⁢-⁢c⁢o⁢l⁢S⁢B⁢M𝜋-𝑐𝑜𝑙𝑆𝐵�...
B
This generalization is also referred to as systematicity [12]. Based on the above analysis, it can be inferred that it is the systematicity of acquired knowledge that enables human beings to take mechanisms into consideration in visual perception, and thus achieve excellent o.o.d. generalization ability.
This is based on the observation of the learning curves of the three mechanisms (in Fig. 12 in Appendix A). Fast learning on translation and scaling and a slow one on rotation can be noticed for all models, which indicates that CNN models have greater difficulty learning the mechanism of rotation.
To our best knowledge, this is the first work that utilize the systematic knowledge about other mechanisms in classifying images. As a result, in addition to answer questions like “Is there a ‘5’ in the image? ”, the proposed architecture is also able to answer “Why do you think it is a ‘5’? ”, based on the knowledge i...
How an image is perceived relies on our knowledge of various mechanisms, rather than knowledge of images that are previously seen (which is the way that existing machines operate). It can also be noticed that our knowledge about occlusion or notching is universal and independent of the domain of variables.
While children have plenty of time to gain systematic knowledge and physical mechanisms through observations and experiments [38, 35, 9], which build foundations for object perception and future knowledge acquisition [33, 37, 22], existing machine learning models rarely have opportunities to do so. One of the main reas...
D
Unlike recent supervised variants of infoNCE (Khosla et al., , 2020; Assran et al., , 2020; Zhong et al., , 2021) which can only leverage explicit (strong) supervision (e.g in form of labeled data), puNCE is also able to leverage implicit (weak) supervision from the unlabeled data. The main idea is to use the fact that...
Our experiments across PU and binary semi-supervised settings suggest that in settings with limited supervision puNCE can be particularly effective and often produce stronger representations than both infoNCE and its supervised variants (Khosla et al., , 2020; Assran et al., , 2020). Our experiments on standard PU lear...
Contrastive Loss Baselines. We compare puNCE with several popular contrastive losses including unsupervised variants - InfoNCE (Chen et al., 2020b, ), Debiased Contrastive Learning (DCL) (Chuang et al., , 2020) as well as variants that can leverage explicit supervision - Supervised Contrastive Learning (abbreviated Sup...
Table 4: Linear Evaluation of different contrastive losses under Semi-Supervised (PNU) setting - with 1%, 5% and 10% labeled training data. puNCE proves to be superior than infoNCE (Chen et al., 2020b, ) and semi-supervised SCL (Assran et al., , 2020) especially in low supervision regime.
While self supervised contrastive losses e.g. InfoNCE (Gutmann and Hyvärinen, , 2010) have gained remarkable success in unsupervised representation learning; recent works (He et al., , 2020; Kolesnikov et al., , 2019) have pointed out that purely self-supervised approaches often produce inferior visual representation ...
A
De Domenico et al. (2015) and De Domenico and Biamonte (2016) develop information-theoretic tools to identify layer dependency and cluster similar layers. In Stanley et al. (2016), the authors study layer interdependence by categorizing layers into groups such that all layers were drawn from the same SBM. In the MULTIT...
These various approaches to studying layer interdependence have been applied to various disciplinary contexts, and have resulted in varying discipline-specific conclusions, as well. In particular: De Domenico et al. (2015) identifies and interprets layer dependence in varying contexts—from the worldwide food import/exp...
We build upon these motivations from previous work (Schein et al., 2016; De Domenico et al., 2015; Stanley et al., 2016; De Domenico and Biamonte, 2016; De Bacco et al., 2017; Kao and Porter, 2018) and develop the NNTuck as a natural way to identify a latent space in the dimension of the layers. Analogous to how the fa...
De Domenico et al. (2015) and De Domenico and Biamonte (2016) develop information-theoretic tools to identify layer dependency and cluster similar layers. In Stanley et al. (2016), the authors study layer interdependence by categorizing layers into groups such that all layers were drawn from the same SBM. In the MULTIT...
The vocabulary around assessing interdependence amongst the layers of a multilayer network is scattered across the literature (Battiston et al., 2014; De Domenico et al., 2015; Stanley et al., 2016). In this work, we use the term interdependence colloquially, to refer to the concept of dependence between layers in an a...
A
Specifically, in this paper, we propose a novel Question-Aware GCN-based QA method, called QAGCN, which encodes questions and KG entities in a joint embedding space where questions are close to correct answers (entities). The intuition of our method is as follows:
Given a question, if an entity is the correct answer, the reasoning chain of this question would be part of the KG context (i.e., the neighboring triples) of that entity. For example, considering the above question, the reasoning chain is part of the context of Jess Talamantes within three hops in the KG.
In this case, the representation of the entity would contain information that is semantically consistent with the given question and can be aligned with the question in the embedding space. The GCN directly generates semantic representations of the question and KG entities in an end-to-end fashion.
An example is “who is the mayor of the city where the director of Sleepy Hollow was born,” which mentions one topic entity (“Sleepy Hollow”) and three relations (“mayor of”, “director of”, and “was born”). Correspondingly, starting from the entity Sleepy Hollow, the expected answer Jess Talamantes can be inferred from ...
For this question, three GCN layers are used in the graph encoder to encode entities in the subgraph that covers all paths of length one, two, and three starting from the topic entity Isabella of Portugal. The question is accordingly encoded by the question encoder with three layers of LSTMs.
A
The QCBM uses parametric quantum circuits to generate different superpositions. The training challenge is then to find the proper set of parameters that lead to the generation of the specific superposition that represents the target joint probability distribution. For this, a Simultaneous Perturbation Stochastic Approx...
The percentage of samples correctly classified peaked when using 4 qubits, where average accuracy was 57% with the ZZFeatureMap and 62% using the densely encoded feature map. The QSVM experiments were on par with classical SVM on average, and classified some sample batches with perfect accuracy. This, in contrast with ...
The initial results were disappointing (middle column) — while finding some peaks in the distribution, the optimizer entirely missed others, leading to a poor KL-divergence score of 1.1311.1311.1311.131. Our hypothesis was that the target distribution
Figure 7: Attempts to fit the target distribution, with and without introducing noise. The goodness-of-fit is measured by Kullback-Liebler (KL) divergence, DKL⁢(P∥Q)=∑x∈𝒳P⁢(x)⁢log⁡(P⁢(x)Q⁢(x))subscript𝐷KLconditional𝑃𝑄subscript𝑥𝒳𝑃𝑥𝑃𝑥𝑄𝑥D_{\text{KL}}(P\parallel Q)=\sum_{x\in\mathcal{X}}P(x)\log\left(\frac{P(x)...
The QCBM uses parametric quantum circuits to generate different superpositions. The training challenge is then to find the proper set of parameters that lead to the generation of the specific superposition that represents the target joint probability distribution. For this, a Simultaneous Perturbation Stochastic Approx...
B
We generate synthetic graphs from a base graph where the node features are kept while all edges removed. As a result, the initial graph is totally disconnected. We then introduce initial edges by randomly connecting nodes until a minimum density threshold is met. Afterwards, edges are added one-by-one following the rul...
As homophily and performance are correlated, in the restructuring process, number of edges are chosen based on homophily level on the validation set. As shown in Equation 5, we chose 48000480004800048000 edges for Chameleon and 26000260002600026000 edges for Squirrel, each corresponds to the first peak of homophily on...
Hyperparameters are tuned using grid search for all models on the unmodified and restructured graphs of each dataset. We record prediction accuracy on the test set averaged over 10 runs with different random initializations. We use the same split setting as Pei et al. (2020); Zhu et al. (2020). The results are average...
Following Zhu et al. (2021a), we report the results for different homophily levels under the same set of hyperparameters for each model. We adopt the same training and early stopping configuration as reported in Section 5. Results are in Figure 4. We also report the homophily score on the rewired graph for comparison ...
Hyper-parameter tuning is conducted separately using grid search for the graph structuring and node classification phases. In the graph restructuring phase, we fix m=4𝑚4m=4italic_m = 4 in Equation 9 and search for spectrum slicer width s𝑠sitalic_s in {10, 20, 40}. ϵitalic-ϵ\epsilonitalic_ϵ in the loss function Equati...
C
Similarly, define μjt:ℝd×ℝn→ℝd:subscriptsuperscript𝜇𝑡𝑗→superscriptℝ𝑑superscriptℝ𝑛superscriptℝ𝑑\mu^{t}_{j}:\mathbb{R}^{d}\times\mathbb{R}^{n}\to\mathbb{R}^{d}italic_μ start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT : blackboard_R start_POSTSUPERSCRIPT italic_d end_...
First note that the total risk can be decomposed into either a weighted sum of average subpopulation risk or average learner risk. Thus the fact that learner and subpopulation dynamics are risk reducing ensures that the total risk is decreasing after the sequential updates. ∎
We remark that the notion of risk minimizing in the limit is reasonable for subpopulations because their average risk is linear in αi,:subscript𝛼𝑖:\alpha_{i,:}italic_α start_POSTSUBSCRIPT italic_i , : end_POSTSUBSCRIPT. It is also reasonable for learners because their average risk is convex in θjsubscript𝜃𝑗\theta_{...
The second observation is that the basis for the updates is the average loss, i.e. risk. This motivates the following definition: given participation α𝛼\alphaitalic_α and parameters ΘΘ\Thetaroman_Θ, the average risk experienced by each subpopulation i𝑖iitalic_i and each
The total risk objective can be viewed as an instance of the k𝑘kitalic_k-means clustering problem with k=m𝑘𝑚k=mitalic_k = italic_m. In the language of this literature (e.g., Selim and Ismail (1984)), each subpopulation is a data point and the parameter selected by each learner is a cluster center.
C
For each cancer type, we constructed a classifier that predicted which anonymous US-based Bing users were likely ill with a specific cancer. Note that we did not have individual validation data connecting users to a medically-validated diagnostic status. A user was predicted positive (having cancer) if they mentioned i...
In the next experiment of this section, we study pre-election polls, and use them to provide the value of 𝚖𝚒𝚗𝚍𝚒𝚜𝚌βsubscript𝚖𝚒𝚗𝚍𝚒𝚜𝚌𝛽\texttt{mindisc}_{\beta}mindisc start_POSTSUBSCRIPT italic_β end_POSTSUBSCRIPT for the classifiers that they might represent. Statistics on 10 pre-election polls of the 2016...
We started with 10101010 evenly-spaced values for β𝛽\betaitalic_β, and added more values where the changes with respect to β𝛽\betaitalic_β were large. The pairs of (𝚞𝚗𝚏𝚊𝚒𝚛𝚗𝚎𝚜𝚜,𝚎𝚛𝚛𝚘𝚛)𝚞𝚗𝚏𝚊𝚒𝚛𝚗𝚎𝚜𝚜𝚎𝚛𝚛𝚘𝚛(\texttt{unfairness},\texttt{error})( unfairness , error ) that minimized 𝚍𝚒𝚜𝚌βsubscrip...
In the next set of experiments for binary classifiers, we demonstrate possible uses and outcomes of the calculation of 𝚖𝚒𝚗𝚍𝚒𝚜𝚌βsubscript𝚖𝚒𝚗𝚍𝚒𝚜𝚌𝛽\texttt{mindisc}_{\beta}mindisc start_POSTSUBSCRIPT italic_β end_POSTSUBSCRIPT. We study several classifiers for which we only have aggregate statistics, and ge...
For each classifier, we wish to discover a best-case accuracy-fairness trade-off, so as to identify classifiers that might be useful for a future, more detailed study. We note that unfairness may ensue using this classification method due to differences in health literacy between states. Out of the 18 cancer types, cl...
D
Recall that Valmod will return the Pair Motif for k=2𝑘2k=2italic_k = 2. Thus, this experiment also shows that the special case of the 2222-Motiflets is equal to Pair Motif discovery, as the results are always equal to those of Valmod for k=2𝑘2k=2italic_k = 2.
Semi-Synthetic Data Sets with Gold Standard Labels: To measure the precision of the different MD methods we generated a semi-synthetic 25252525 dataset benchmark from (Dau et al., 2019) with implanted motif sets. For each method, we used the gold standard parameters as inputs, i.e. the size k𝑘kitalic_k for k𝑘kitalic...
We first compare the results of the approximate k𝑘kitalic_k-Motiflet algorithm to that of four state-of-the-art competitors using the six real TS. For these comparisons we performed an unbiased computation of extents and cardinalities of found motif sets at equivalent values of r𝑟ritalic_r (respectively d=2⋅r𝑑⋅2𝑟d=...
In this section we discuss the quality of the discovered motif sets. The purpose is to compare methods not only by the size and extent of found motifs as in the previous section, but also to consider whether these motifs are actually meaningful, i.e., correspond to important events in the process producing the TS. We ...
In the previous section, we evaluated the quality of the approximate k𝑘kitalic_k-Motiflet algorithm compared to four state-of-the-art MD methods. We did, however, not yet consider the exact k𝑘kitalic_k-Motiflet algorithm, because (a) its runtime is exponential in the size of the motif set and thus probably infeasibl...
C
Chen et al. [35] proposed a saturated innovation update algorithm for the decentralized estimation under sensor attacks, where the interagent communication is noiseless. They proved that if the communication graph is undirected and fixed, the nodes are locally observable, and the number of attacked nodes is less than h...
Wang et al. [38] investigated a consensus plus innovation based decentralized linear regression algorithm over random networks with random regression matrices. They proved that if the regression matrices and communication graphs satisfy the stochastic spatio-temporal persistence of excitation condition, properly choosi...
At each time step, every node runs an online estimation algorithm consisting of an innovation term processing its own new measurement, a consensus term taking a weighted sum of estimations of its own and its neighbors with additive and multiplicative communication noises and a regularization term preventing over-fittin...
Chen et al. [35] proposed a saturated innovation update algorithm for the decentralized estimation under sensor attacks, where the interagent communication is noiseless. They proved that if the communication graph is undirected and fixed, the nodes are locally observable, and the number of attacked nodes is less than h...
To overcome the difficulties mentioned above, we develop the nonnegative supermartingale inequality of the estimation error, and further establish the sample path spatio-temporal persistence of excitation condition by combining information of the regression matrices, graphs and algorithm gains, under which sufficient c...
A
The Anderson acceleration (AA) 9 is a multi-secant method 10, 11 that has been widely used either to improve the convergence rate of convergent fixed-point schemes or to restore convergence when the original fixed-point scheme is not convergent 12, 13, 14, 15, 16. In particular, the convergence of AA has been studied w...
AA requires solving a least-squares problem at each fixed-point iteration, and this can be computationally expensive for scientific applications that involve large-scale calculations, especially when such calculations are distributed on high-performance computing (HPC) platforms.
In large-scale distributed computational environments, solving the global least-squares problem at periodic intervals may still not be sufficient to avoid severe bottlenecks in the computation. To further reduce the computational cost, one could choose to adopt less expensive, approximate fixed-point operator evaluatio...
Solving a least-squares problem at each iteration is computationally expensive and requires global communications which introduce severe bottlenecks for the parallelization in HPC environments. P. Suryanarayana and collaborators 23 recently proposed to compute
When compared to the fixed-point iteration, AA often requires fewer iterations to converge thus resulting in a shorter computational time. On the other hand, AA also introduces the overhead of solving the least-squares problem (5) at each iteration. This computation overhead is outweighed by the benefit from fewer iter...
A
Going one step further, we propose another method for controlling the output of a model based on tagging the most representative terms for each thematic category using a special control token. The proposed method assumes the existence of a set of the most representative terms for each topic. Then, given a document and...
For the tagging-based method, all the words of the input document are lemmatized to their roots using NLTK [32]. Then, we tag the words between the existing lemmatized tokens and the representative words for the desired topic, based on the top-N𝑁Nitalic_N=100 most representative terms for each topic.
Going one step further, we propose another method for controlling the output of a model based on tagging the most representative terms for each thematic category using a special control token. The proposed method assumes the existence of a set of the most representative terms for each topic. Then, given a document and...
For example, suppose that we pre-process the sentence below, as a part of an input document, from which we aim to guide the generation towards the topic “Business & Finance”. Following the aforementioned procedure, we will enclose with the special token [TAG], the words “businesses”, “billion” and “tax” since they belo...
Given the set of representative words for each topic, a document, and the desired topic, the tagging mechanism works as follows. All the words of the input document are lemmatized to their roots. Then, we identify the common words between the existing lemmatized tokens and the representative words for the desired topic...
D
We illustrate the gate scheduling of the tiled multiplication circuit. The goal is to parallelise as many gates as possible: T gates, CNOTs and SWAPs. We analyse the cost of long range interactions in terms of SWAP gate counts. In the following, we present one of the algorithms used for extracting the schedules. In th...
Throughout the schedule, gates are typically applied to either some combination of c⁢t⁢r⁢l𝑐𝑡𝑟𝑙ctrlitalic_c italic_t italic_r italic_l, z𝑧zitalic_z, Ajsubscript𝐴𝑗A_{j}italic_A start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT, and pjsubscript𝑝𝑗p_{j}italic_p start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT, where j i...
Figure 4: SWAP schedule for the Toffoli step of the 3D multiplier circuit. Red bars indicates the application of a SWAP gate between two qubits. The initial mapping of the qubit registers to be multiplied (A𝐴Aitalic_A and B𝐵Bitalic_B) is indicated with labels of the form Aisubscript𝐴𝑖A_{i}italic_A start_POSTSUBSCRI...
We start from the following definitions. A standard cell is a pattern that represents the 2D/3D abstraction of the qubits and the gates that form a sub-circuit (e.g. the Clifford+T decomposition of the Toffoli gate). Tiling is the procedure by which circuits are designed in a manner that is compatible with the underlyi...
In the listings, “upper” and “lower” refer to the top and bottom of the Toffoli-cube being worked in respectively; e.g. “lower W” refers to the ancillary qubit in the West corner of the bottom four corners of the current Toffoli-cube. “N” is for North. Multiple gates listed on a line are in the same moment (a moment is...
D
For our training, we require the MRI scans in two different parameter settings of {TE, TR}. One serves as input to the model, and the other as the ground truth corresponding to the desired parameter setting to compute the loss. We use MRiLab [7] which is an MRI Simulator to generate these synthetic brain scans in diff...
After training the autoencoder, the outputs of 8888 encoding layers are used as input image features for later part of our network. The encoding layers transform the input image into low dimensional vector space. This compression removes redundancy in the input. Providing these low dimensional representations of the i...
The training process comprises of two sequential phases: training of the auto-encoder, and training of the Param-Net. For the first phase, the auto-encoder was trained on a subset of Places-365 dataset [19] and fine-tuned using MR Image dataset. For the second phase, we train the Param-Net on MRiLab dataset. The weight...
The param-net is a coarse-to-fine model which uses a series of expansive layers to construct the output image from input image features along with the parameters. It consists of eight blocks where each block consists of a transposed convolutional layer for upsampling followed by three convolutional layers that acts as ...
Our method consists of two technical modules: 1) an image reconstruction auto-encoder [14] to extract image features of the input image; and 2) a coarse-to-fine fully convolutional network which utilizes the image features from the auto-encoder and enforces the construction of output image with desired parameters. In ...
B
It’s important to note that, with the exception of DRM, all existing neural-network-based methods are developed based on either the strong or weak forms of PDEs. While DRM utilizes the free energy functional, it is only suitable for solving static problems, i.e., finding the equilibrium of the system. Additionally, mos...
The rest of the paper is organized as follows. Section 2 reviews the EnVarA and some existing neural network-based numerical approaches for solving PDEs. Section 3 of the paper is devoted to the development of the proposed EVNN schemes for L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-gradi...
In this section, we present the structure-preserving EVNN discretization for solving both L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-gradient flows and generalized diffusions, As mentioned earlier, the goal is to construct a neural-network discretization based on the energy-dissipation l...
Our primary aim is to develop structure-preserving Eulerian algorithms to solve L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-gradient flows and structure-preserving Lagrangian algorithms to solve generalized diffusions based on their energy-dissipation law by utilizing neural networks as a...
In this paper, we develop structure-preserving EVNN schemes for simulating the L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-gradient flows and the generalized diffusions by utilizing a neural network as a tool for spatial discretization, within the framework of the discrete energetic variat...
B
We apply our previous results to the case of n𝑛nitalic_n-parameters persistence modules (seen as γ𝛾\gammaitalic_γ-sheaves over ℝnsuperscriptℝ𝑛\mathbb{R}^{n}blackboard_R start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT). In particular, we show that the γ𝛾\gammaitalic_γ-linear ISM can be obtained by optimizing an ...
This section introduces the necessary background on sheaf theory, convolution distance, and its links with persistence. The main reference for general results on sheaves is [16]. The convolution distance for sheaves has been introduced in [17] and generalized in [24]. A useful notion for our purpose is the one of const...
We review the notion of γ𝛾\gammaitalic_γ-sheaves, and recall the precise relationship between this type of sheaves and persistence modules [3]. We then strengthen one of our previous results, asserting that the interleaving distance between persistence modules equals the convolution distance between their associated γ...
We review classical constructions of sheaf theory, such as integral transforms and kernel compositions. We recall the definition of the convolution distance between (derived) sheaves of 𝐤𝐤{\mathbf{k}}bold_k-vector spaces on a finite-dimensional real vector space, as developed by Kashiwara-Schapira [17] and provide pr...
The interplay between sheaves on a real vector space and persistence theory necessitates the use of a topology on a vector space introduced by Kashiwara and Schapira [16], called the γ𝛾\gammaitalic_γ-topology. In this section, we first recall the basic definitions associated to the γ𝛾\gammaitalic_γ-topology. There is...
A
Three constraint-based algorithms are also investigated. The first of these is the PC-Stable algorithm [8], which starts with a complete undirected graph and performs marginal and Conditional Independence (CI) tests to remove edges from the graph. For efficiency, the algorithm starts with marginal independence tests a...
Three constraint-based algorithms are also investigated. The first of these is the PC-Stable algorithm [8], which starts with a complete undirected graph and performs marginal and Conditional Independence (CI) tests to remove edges from the graph. For efficiency, the algorithm starts with marginal independence tests a...
Figure 5 shows the sensitivity to variable ordering for the algorithms described in section 2 and compares it with their sensitivity to other selected factors (details are given in the figure caption). TABU is a variant of HC and is, as one might expect, sensitive to variable ordering, but the mean F1 change of 0.278 i...
Two hybrid algorithms are investigated as well. These have an initial local constraint-based algorithm which determines the skeleton of the graph, and then a second HC phase which only considers adding arcs consistent with that skeleton. The first hybrid algorithm MMHC [41] uses a local constraint-based algorithm MMPC ...
The results in this study are obtained using the algorithm implementations provided in version 4.7 of the bnlearn package [32, 35]. We use the default objective scores, conditional independence test functions and hyper-parameters222For score-based and hybrid algorithms, the default is to use the BIC score with a compl...
C
However, for the LLaMA-65B model with FP16 weights, the model size exceeds the memory capacity of a single GPU (80GB for A100), necessitating model parallelism techniques. Nevertheless, when the weights of the LLaMA-65B model are quantized to 3 or 4 bits, as demonstrated to be a viable solution in (Frantar et al., 2022...
To examine the latency variance of LUT-GEMM with respect to group size g𝑔gitalic_g, we perform matrix multiplications (using an (m×n)𝑚𝑛(m\times n)( italic_m × italic_n ) matrix and an (n×1)𝑛1(n\times 1)( italic_n × 1 ) matrix) when g𝑔gitalic_g values vary as shown in Figure 4(a). We observe a sufficiently large g�...
However, for the LLaMA-65B model with FP16 weights, the model size exceeds the memory capacity of a single GPU (80GB for A100), necessitating model parallelism techniques. Nevertheless, when the weights of the LLaMA-65B model are quantized to 3 or 4 bits, as demonstrated to be a viable solution in (Frantar et al., 2022...
Assuming 3-bit quantization and the implementation of LUT-GEMM with g𝑔gitalic_g=128, a speed-up of 2.41×\times× for LLaMA-30B (using one GPU) and 2.04×\times× for LLaMA-66B (using two GPUs) is achievable. Note that when fine-tuning is performed after constructing a pre-trained model, more efficient quantization techni...
However, upon utilizing the BCQ format for quantization, LUT-GEMM is able to perform inference using just a single GPU, while maintaining a comparable overall latency. It should be noted that, when comparing identical 3-bit (weight-only and row-wise) quantization scenarios, the latency for token generation using LUT-GE...
C
Assuming the Planted Dense Subgraph Conjecture, Min-TSS cannot be approximated to within a factor of O⁢(n1/2−ε)𝑂superscript𝑛12𝜀O(n^{1/2-\varepsilon})italic_O ( italic_n start_POSTSUPERSCRIPT 1 / 2 - italic_ε end_POSTSUPERSCRIPT ) by a probabilistic polynomial-time algorithm for any ε>0𝜀0\varepsilon>0italic_ε > 0.
As computing the rank of a divisor is NP-hard in general, it is natural to ask whether it can be approximated within any reasonable factor. Our main contribution is establishing a connection between computing the rank and finding a so-called minimum target set in an undirected graph, a central problem in combinatorial ...
Now we turn to the proof of the main result of the paper, and show that approximating the rank of a graph divisor within reasonable bounds is hard. The high level idea of the proof is the following. First, we show that the Minimum Target Set Selection problem reduces to computing the distance of a divisor on an auxilia...
The above reduction of Min-TSS to Dist-Nonhalt uses an auxiliary graph that contains parallel edges. Therefore, the stated complexity results hold for general (i.e. not necessarily simple) graphs. However, in [13, Corollary 22], Hladký, Kráľ, and Norine proved that if one subdivides each edge of a graph with a new vert...
Using this notation, the rank of a divisor can be formulated as rankG⁡(f)=distGnh⁡(dG−𝟏−f)−1subscriptrank𝐺𝑓subscriptsuperscriptdistnh𝐺subscript𝑑𝐺1𝑓1\operatorname{rank}_{G}(f)=\operatorname{dist^{nh}_{\mathnormal{G}}}(d_{G}-% \mathbf{1}-f)-1roman_rank start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT ( italic_f ) = ...
B
To make the attention mechanism easier to understand, we finally visualize the output of the first TCJA module in TCJA-SNN working with the DVS128 Gesture dataset, which can be seen in Fig. 11. Changes in attention weights are primarily accumulated among channels, verifying further the substantial role performed by th...
In the realm of ANNs, the Squeeze and Excitation (SE) block, introduced by Hu et al. [15], has proven to be a highly effective module for enhancing representation. The SE block can be seamlessly incorporated into a network, requiring only a minimal increase in parameters to recalibrate channel information. By employing...
Spiking Neural Networks (SNNs) have emerged as a promising research area, offering lower energy consumption and superior robustness compared to conventional Artificial Neural Networks (ANNs) [1, 2]. These characteristics make SNNs highly promising for temporal data processing and power-critical applications [1, 3]. In...
TABLE IX: The spiking rate, FLOPS, and SNN single operation energy cost of each layer in the network for classifying the DVS128 dataset, where Convx𝑥xitalic_x denotes x−t⁢h𝑥𝑡ℎx-thitalic_x - italic_t italic_h 2-D convolutional layer, Atty𝑦yitalic_y denotes y−t⁢h𝑦𝑡ℎy-thitalic_y - italic_t italic_h TCJA module, and...
Compared to the ANNs, SNNs consumes less energy due to its sparser firing and poorer processing accuracy. Owing to the binary spikes, each operation in SNNs consists of a single floating-point (FP) addition. In ANNs, on the other hand, each operation computes a dot product as a multiply-accumulate (MAC) calculation con...
D
(uh,ph,u^h)∈Vh×Qh×V^hsubscript𝑢ℎsubscript𝑝ℎsubscript^𝑢ℎsubscript𝑉ℎsubscript𝑄ℎsubscript^𝑉ℎ(u_{h},p_{h},\hat{u}_{h})\in V_{h}\times Q_{h}\times\hat{V}_{h}( italic_u start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT , italic_p start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT , over^ start_ARG italic_u end_ARG start_POSTS...
(∇⋅uh,∇⋅vh)K+(∇×uh,∇×vh)K+α⁢(uh,vh)K+⟨ph+γ⁢uh,vh⟩∂Ksubscript⋅∇subscript𝑢ℎ⋅∇subscript𝑣ℎ𝐾subscript∇subscript𝑢ℎ∇subscript𝑣ℎ𝐾𝛼subscriptsubscript𝑢ℎsubscript𝑣ℎ𝐾subscriptsubscript𝑝ℎ𝛾subscript𝑢ℎsubscript𝑣ℎ𝐾\displaystyle(\nabla\cdot u_{h},\nabla\cdot v_{h})_{K}+(\nabla\times u_{h},% \nabla\times v_{h})_{K}+\alpha...
(∇⋅uh,∇⋅vh)𝒯h+(∇×uh,∇×vh)𝒯h+α⁢(uh,vh)𝒯h+⟨p^h,vh⟩∂𝒯hsubscript⋅∇subscript𝑢ℎ⋅∇subscript𝑣ℎsubscript𝒯ℎsubscript∇subscript𝑢ℎ∇subscript𝑣ℎsubscript𝒯ℎ𝛼subscriptsubscript𝑢ℎsubscript𝑣ℎsubscript𝒯ℎsubscriptsubscript^𝑝ℎsubscript𝑣ℎsubscript𝒯ℎ\displaystyle(\nabla\cdot u_{h},\nabla\cdot v_{h})_{\mathcal{T}_{h}}+(\nabla...
∥u−uh∥Ω2superscriptsubscriptdelimited-∥∥𝑢subscript𝑢ℎΩ2\displaystyle\lVert u-u_{h}\rVert_{\Omega}^{2}∥ italic_u - italic_u start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT ∥ start_POSTSUBSCRIPT roman_Ω end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT =ah⁢(u−uh,z)+⟨uh⋅n,∇⋅z⟩∂𝒯h−⟨uh×n,∇×z⟩∂𝒯h,absentsubscrip...
≔(∇⋅uh,∇⋅vh)K+(∇×uh,∇×vh)K+α⁢(uh,vh)K+⟨γ⁢uh,vh⟩∂K,≔absentsubscript⋅∇subscript𝑢ℎ⋅∇subscript𝑣ℎ𝐾subscript∇subscript𝑢ℎ∇subscript𝑣ℎ𝐾𝛼subscriptsubscript𝑢ℎsubscript𝑣ℎ𝐾subscript𝛾subscript𝑢ℎsubscript𝑣ℎ𝐾\displaystyle\coloneqq(\nabla\cdot u_{h},\nabla\cdot v_{h})_{K}+(\nabla\times u% _{h},\nabla\times v_{h})_{K}+\al...
B
{s}_{2}-0.85915531\times v+27351279.416023515sansserif_s start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT = 0.73244455 × sansserif_s start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - 0.23655202 × sansserif_s start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT - 0.85915531 × italic_v + 27351279.416023515 ...
Second, FlashSyn fetches the actual state 𝗊a′subscriptsuperscript𝗊′𝑎\mathsf{q}^{\prime}_{a}sansserif_q start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT reached by executing 𝐂𝐂\mathbf{C}bold_C until reaching the action indexed k𝑘kitalic_k (line 5) on the actual smart contr...
The optimization sub-procedure might explore parts of the states space not explored during the initial data points collection. This might challenge the accuracy of the approximations and result in mismatch between the estimated and the actual values. Thus, it is necessary to collect new data points based on the counter...
the sub-procedure Construct constructs the optimization framework 𝒫𝒫\mathcal{P}caligraphic_P for the actions vector (line 9). Then, FlashSyn uses the optimization sub-procedure Optimize (line 10) to find the optimal concrete values to pass as input parameters to the methods in the actions vector that satisfy the cons...
After obtaining a list of parameters that maximize the estimated profit of an action sequence, FlashSyn proceeds to verify the synthesized attack vectors by executing them on a private blockchain and check their actual profits. If the difference between the actual profit and the
D
We propose a model-based scheme for meta-RL with a finite sample of training tasks, where we first estimate the prior distribution of tasks, and train a Bayes optimal policy on the estimated prior. Using KDE for density estimation, we obtain state-of-the-art PAC bounds. Further, our approach can exploit low dimensional...
We propose a model-based scheme for meta-RL with a finite sample of training tasks, where we first estimate the prior distribution of tasks, and train a Bayes optimal policy on the estimated prior. Using KDE for density estimation, we obtain state-of-the-art PAC bounds. Further, our approach can exploit low dimensional...
This insight provides a rule-of-thumb of when meta RL approaches based on task inference, such as VariBAD, are expected to work well. Indeed, recent empirical work by Mandi et al. [23] claimed that in benchmarks such as RLBench [15], where tasks are very diverse, simpler meta RL methods based on fine-tuning a policy tr...
For an agent to learn fast, additional structure of the problem is required. In Meta RL [4; 39; 7; 26], agents are allowed to train on a set of training tasks, sampled from the same task distribution as the task they will eventually be tested on. The hope is that similar structure between the tasks could be identified ...
While significant empirical progress in meta RL has been made, the theoretical understanding of the problem is still limited. A central question, which we focus on in this work, is the probably approximately correct (PAC) analysis of meta RL, namely, how many training tasks are required to guarantee performance that is...
B
The increasing popularity of social media has facilitated discussion between Internet users and encourages us to share information and experiences. Such development contributes to the emergence of online health communities (OHCs). It stimulates a new channel for exchanging healthcare knowledge, including personal expe...
We collected the healthcare Q&A corpus from OHCs, the platforms for laypeople to exchange health-related information, where people tend to use colloquial expressions rather than professional jargon. Hence, we can exploit the usage of health vocabulary from such HCGCs on the OHCs.
Our proposed method can be generalized to align word vector spaces of two more languages to compare consumer-oriented expressions across multiple languages. The framework can be used for exploring different vocabulary usage patterns in different countries and generate a language-agnostic CHV for facilitating cross-lin...
This study presents two implications for practitioners. First, the induced non-English CHV connects to the existing English CHV thanks to the bilingual word space. Such connections help the induced non-English medical terms conform to the existing medical terminology, such as concept unique identifiers (CUI) in UMLS. T...
However, analyzing HCGC is challenging because the vocabulary used by consumers is very different from that used in the medical literature and electronic health records. For example, consumers tend to use colloquial expressions like watery stool rather than professional jargon such as diarrhea for describing their bowe...
D
Large-scale CelebFaces Attributes (CelebA) Dataset Liu et al. (2018) contains 202,599202599202,599202 , 599 celebrity images, each annotated with 40404040 binary attributes. CelebA offers the dataset in two different formats: (a) actual raw images, (b) processed data with aligned facial images. In this work, we employ...
Furthermore, we compute saliency map (Sec. IV.3) explanations for ‘Eyeglasses’ prediction as baseline. As shown in Fig. 4 (k), we see the limitations of the saliency explanation e.g, a lot of pixels irrelevant to ‘Eyeglasses’ are detected to have high absolute values of the probability gradient across the RGB channels....
To establish that TERP indeed takes both the input data and the black-box model into account when generating explanations we subject our protocol to the sanity tests developed by Adebayo et al. Adebayo et al. (2018). We achieve this by taking the fine-tuned ViT model and randomizing the model parameters in a top-to-bo...
To explain the ViT prediction ‘Eyeglasses’ (prediction probability of 0.9980.9980.9980.998) for the image shown in Fig. 4 (a) using TERP, we first construct human-understandable representative features by dividing the image into 196196196196 superpixels (collection of pixels) corresponding to the 196196196196 ViT patch...
Training and inference using ViT was implemented using pytorch-lightning 1.5 and python 3.9. The pre-trained ViT model was pulled from the timm python library. For saliency analysis, the absolute values of the gradients of prediction probabilities with respect to input pixels were calculated using the backward() metho...
D
Since the seed store is empty at this point, FuSeBMC performs primary seed generation (Line 1 Algorithim 2) to enable the fuzzing process. This procedure involves generating binary seeds (i.e., a stream of bytes) based on Consumed Input Size and the input constraints collected during static analysis. In detail, it gene...
FuSeBMC uses ESBMC to check for the reachability of a given goal label within the instrumented program (lines 1—25 of Algorithm 1). If it concludes that the current goal is reachable it produces a counterexample that can be turned into a witness – a sequence of inputs that leads the program’s execution to that goal la...
When the seed generation by fuzzing is finished, FuSeBMC executes the BMC engine for each goal label in the Goal Queue. To minimize the execution time, it is run with "lighter" settings: all implicit checks (i.e., memory safety, arithmetic overflows) and assertion checks are disabled, and the bound for loop unwinding ...
In this paper, we presented FuSeBMC v4, a test generator that relies on smart seed generation to improve the state-of-the-art in hybrid fuzzing and achieve high coverage for C programs. First, FuSeBMC analyses and injects goal labels into the given C program. Then, it ranks these goal labels according to the given stra...
FuSeBMC begins by analyzing C code and then injecting goal labels into the given C program (based on the code coverage criteria that we introduce in Section 3.2.1) and ranking them according to one of the strategies described in Section 3.2.2 (i.e., depending on the goal’s origin or depth in the PUT). From then on, FuS...
B
Another strategy to stabilize Algorithm 2, which we shall pursue in future work, is to derive asymptotic approximations to the entries of Ib,p,μ(α,β)subscriptsuperscript𝐼𝛼𝛽𝑏𝑝𝜇I^{(\alpha,\beta)}_{b,p,\mu}italic_I start_POSTSUPERSCRIPT ( italic_α , italic_β ) end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_b , itali...
Our work is strongly influenced by the method proposed by Hale and Olver [25], in which direct sums of appropriately weighted Jacobi polynomials are used as basis functions. We shall refer to this method as the sum space method and we note that the basis functions are related to the “generalized Jacobi functions” of [...
We first test our methods by applying them to problems that have solutions expressible in terms of Mittag–Leffler functions in Section 6.1 before tackling more challenging problems in Section 6.2. In Example 3 of Section 6.1 we shall consider a striking example in which the performance of the JFP method and the sum sp...
The superior performance of the JFP method in Example 3 of Section 6.1 suggests it could be an effective method for computing Mittag–Leffler functions. Just as the ultraspherical spectral method was shown in [10] to be an effective method for the global computation of a special function (the Gauss hypergeometric funct...
We shall compare the effect of λ𝜆\lambdaitalic_λ on the rate of convergence of the sum space method of [25] and the JFP method. We let the constant λ𝜆\lambdaitalic_λ grow quadratically in (49) since this is also the case in the time-fractional fractional heat/wave equation that we shall consider in Example 4.
B
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. For entities that do not hold the feature at all, they should have the same and the lowest value. If ...
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. For entities that do not hold the feature at all, they should have the same and the lowest value. If ...
Currently, there is no unified rule on what value should be the lowest. In the main text, we consider the lowest value to be zero. Indeed, all 21 indexes in this study assign the value 0 to entities that do not hold the feature they are designed to quantify. There could be exceptions. For example, one can design an ind...
Let us first consider the best index value ranking in the unsupervised approach (Fig. 1c presented in the main text and Fig. S20), in which the lowest index value of L1subscript𝐿1L_{1}italic_L start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT is greater than the highest index value of L2subscript𝐿2L_{2}italic_L start_POSTSUBS...
In the main text, taking the common neighbor feature as an example, we show the theoretical finding can help us to determine the feature and index selection. To further validate the preceding analysis, we here conduct a second analysis using the path feature (Table S2). When using the index SPL to make unsupervised pr...
B
(c) Sparse update with TTE operators achieves 23-25×\times× faster training speed compared to the full update with TF-Lite Micro operators, leading to less energy usage. Note: for sparse update, we choose the config that achieves the same accuracy as full update.
Figure 10: Measured peak memory and latency: (a) Sparse update with TTE graph optimization can reduce the measured peak memory by 20-21×\times× for different models, making training feasible on tiny edge devices. (b) Graph optimization consistently reduces the peak memory for different sparse update schemes (denoted by...
We also compare the memory saving of reordering under different update schemes on MCUNet (Figure 9(b), indicated by different accuracy levels). Reordering consistently reduces the peak memory for different sparse update schemes of varying learning capacities.
We measure the training memory of three models on STM32F746 MCU to compare the memory saving from TTE. We measure the peak SRAM usage under three settings: general full update, sparse update, and sparse update with TTE graph reordering (Figure 10(a)). The sparse update effectively reduces peak memory by 7-9×\times× com...
In this paper, we aim to bridge the gap and enable tiny on-device training with algorithm-system co-design. We investigate tiny on-device training and find two unique challenges: (1) the model is quantized on edge devices. A real quantized graph is difficult to optimize due to low-precision tensors and the lack of Batc...
C
where β⁢(A)=max⁡{‖(I−Λ+Λ⁢A)−1⁢Λ‖:Λ=d⁢i⁢a⁢g⁢(λi)⁢with⁢λi∈[0,1]}𝛽𝐴:normsuperscript𝐼ΛΛ𝐴1ΛΛ𝑑𝑖𝑎𝑔subscript𝜆𝑖withsubscript𝜆𝑖01\beta(A)=\max\{\|(I-\Lambda+\Lambda A)^{-1}\Lambda\|:\Lambda=diag(\lambda_{i})% ~{}\mbox{with}~{}\lambda_{i}\in[0,1]\}italic_β ( italic_A ) = roman_max { ∥ ( italic_I - roman_Λ + roman_Λ it...
~{}\mbox{with}~{}\lambda_{i}\in[0,1]\}italic_β ( italic_B ) = roman_max { ∥ ( italic_I - roman_Λ + roman_Λ italic_B ) start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT roman_Λ ∥ : roman_Λ = italic_d italic_i italic_a italic_g ( italic_λ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) with italic_λ start_POSTSUBSCRIPT ital...
β⁢(A)=max⁡{‖(I−Λ+Λ⁢A)−1⁢Λ‖:Λ=d⁢i⁢a⁢g⁢(λi)⁢with⁢λi∈[0,1]}𝛽𝐴:normsuperscript𝐼ΛΛ𝐴1ΛΛ𝑑𝑖𝑎𝑔subscript𝜆𝑖withsubscript𝜆𝑖01\beta(A)=\max\{\|(I-\Lambda+\Lambda A)^{-1}\Lambda\|:\Lambda=diag(\lambda_{i})% ~{}\mbox{with}~{}\lambda_{i}\in[0,1]\}italic_β ( italic_A ) = roman_max { ∥ ( italic_I - roman_Λ + roman_Λ italic_A...
where β⁢(A)=max⁡{‖(I−Λ+Λ⁢A)−1⁢Λ‖:Λ=d⁢i⁢a⁢g⁢(λi)⁢with⁢λi∈[0,1]}𝛽𝐴:normsuperscript𝐼ΛΛ𝐴1ΛΛ𝑑𝑖𝑎𝑔subscript𝜆𝑖withsubscript𝜆𝑖01\beta(A)=\max\{\|(I-\Lambda+\Lambda A)^{-1}\Lambda\|:\Lambda=diag(\lambda_{i})% ~{}\mbox{with}~{}\lambda_{i}\in[0,1]\}italic_β ( italic_A ) = roman_max { ∥ ( italic_I - roman_Λ + roman_Λ it...
~{}\mbox{with}~{}\lambda_{i}\in[0,1]\}.italic_β ( italic_B ) = roman_max { ∥ ( italic_I - roman_Λ + roman_Λ italic_B ) start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT roman_Λ ∥ : roman_Λ = italic_d italic_i italic_a italic_g ( italic_λ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) with italic_λ start_POSTSUBSCRIPT ita...
A
We proceed with a runtime analysis of the permutation variant of the Jump benchmark. In contrast to our analysis for LeadingOnes, where mild adaptations of the proofs for the bit-string case were sufficient, we now observe substantially new phenomena, which require substantially more work in the analysis. In particula...
In this section, we describe the most relevant previous works. In the interest of brevity, we only concentrate on runtime analysis works, knowing well that other theoretical aspects have been studied for permutation problems as well. Since the theory of evolutionary algorithms using bit-string representations has start...
Stagnation detection: Stagnation detection was proposed in [RW22] (and further developed in [RW21, RW23, DR23]) as a natural way to improve the performance of evolutionary algorithms when they get stuck in a local optimum. Given the power of this approach, it would be interesting to extend it to permuation-based optim...
As discussed in the introduction, the theory of evolutionary computation has massively profited from having a small, but diverse set of benchmark problems. These problems are simple enough to admit mathematical runtime analyses for a broad range of algorithms including more sophisticated ones such as ant colony optimiz...
The Jump benchmark as pseudo-Boolean optimization problem was proposed in [DJW02]. It is the by far most studied multimodal benchmark in the theory of evolutionary algorithms and has led to a broad set of interesting insights, mostly on crossover and on how evolutionary algorithms cope with local optima [DJW02, JW02, ...
D
If the left-hand side ‖R⁢∂f⁢(y0)‖norm𝑅𝑓subscript𝑦0\|R\partial f(y_{0})\|∥ italic_R ∂ italic_f ( italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ) ∥ is too small at the current iterate y0subscript𝑦0y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, then there is nothing to optimize on the coarse grid – as ‖∂ψ⁢...
In order to apply two-level optimization to the regularized inverse problem (1.2), the coarse grid model function ψ𝜓\psiitalic_ψ given by (3.8) has to be computed. Similar to the evaluation of the objective function at both levels, we assume that the operator A𝐴Aitalic_A can be directly evaluated at both levels. For...
The core ingredient of two level optimization is a coarse grid model, that is a coarse grid representation of the fine grid problem in terms of the objective function f𝑓fitalic_f evaluated at the coarse grid, the current iterate y0subscript𝑦0y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and its restriction x0...
As motivated in Section 1, we assume a generic objective function f𝑓fitalic_f to be given that can be evaluated at different discretization levels. In this section, we consider two discretization levels called fine grid and coarse grid, respectively, and use the following notation.
the projection matrix A𝐴Aitalic_A corresponds to the length of the line segment of the i𝑖iitalic_i-th projection ray passing through the j𝑗jitalic_j-th pixel in the image domain (Figure 1.1). At every level the width of the detector-array was set to the grid size, so that at each scale every pixel intersects with at...
A
Furthermore, any Boolean circuit with only AND and OR gates (monotone circuit) that computes fTarsubscript𝑓Tarf_{\mathrm{Tar}}italic_f start_POSTSUBSCRIPT roman_Tar end_POSTSUBSCRIPT has size eΩ⁢(dα),superscript𝑒Ωsuperscript𝑑𝛼e^{\Omega(d^{\alpha})},italic_e start_POSTSUPERSCRIPT roman_Ω ( italic_d start_POSTSUPERSC...
The Tardos function, introduced in [43], will satisfy these conditions. The Tardos function builds upon the seminal work of Razborov, [35], who studied the hardness of monotone computation for perfect matching. The function is constructed as a graph invariant and is always sandwiched between the clique and chromatic nu...
To supply some further intuition, beyond the equivalent definitions, it is instructive to think about graph properties. In this case, for a graph with vertex set [n]delimited-[]𝑛[n][ italic_n ], the domain is the adjacency matrix of the graph {0,1}n×nsuperscript01𝑛𝑛\{0,1\}^{n\times n}{ 0 , 1 } start_POSTSUPERSCRIPT ...
The function hℎhitalic_h we use is the harmonic extension of a graph invariant function, introduced by Éva Tardos in [43]. The Tardos function and its properties build upon the seminal works of Razborov [35], and Alon and Boppana [1]. The mentioned works constitute a highly influential line of work, about the limitatio...
Razborov’s original work [35] proves a similar result for finding a perfect matching. However, the bound is super-polynomial instead of truly exponential. In what comes next, we could have also used the indicator function of a perfect matching in a graph to obtain another separation, albeit weaker, result.
D
}-{\boldsymbol{m}}}u_{h}(\cdot,\boldsymbol{y})\|_{V_{h}}.start_ROW start_CELL ∥ ∂ start_POSTSUBSCRIPT bold_italic_y end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_ν end_POSTSUPERSCRIPT italic_u start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT ( ⋅ , bold_italic_y ) ∥ start_POSTSUBSCRIPT italic_V start_POSTSUBSCRIPT i...
In Figures 1 and 2 we recognize that the error plots of all stable methods are almost identical. Thus we conclude that the cubature error dominates the discretization error and hence the particular choice of the discretization appears to be almost irrelevant.
Figure 2 deals with the lognormal case. The left picture uses linear approximations of SIPG, NIPG, and the conforming finite element method, while the right picture shows the results for second order SIPG and NIPG. In the left picture, we see that, again, all three methods work fine, and similarly well. Their convergen...
for some appropriately chosen norm ∥⋅∥\|\cdot\|∥ ⋅ ∥. We focus on the cubature error, and discuss the spatial discretization error briefly in Section 5.3. We remark that the order of the last two error contributions—the finite element error and cubature error, respectively—can be flipped when the diffusion coefficient...
The error estimate does not significantly deviate from standard DG error estimates. Thus, we keep it short and refer to the already mentioned references [11, 34] for the details. We start introducing the broken H2superscript𝐻2H^{2}italic_H start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT space
D
As data continue to grow, multi-agent learning has emerged as an important direction in scalable machine learning and has attracted much attention under the name of federated learning (KMR15, ; KMRR16, ; MMR+17, ), where multiple agents try to learn an objective function in parallel via communication. While the majorit...
As data continue to grow, multi-agent learning has emerged as an important direction in scalable machine learning and has attracted much attention under the name of federated learning (KMR15, ; KMRR16, ; MMR+17, ), where multiple agents try to learn an objective function in parallel via communication. While the majorit...
In this paper, we investigate heterogeneous collaborative learning. We will use a basic problem in bandit theory named best arm identification in multi-armed bandits (BAI) as a vehicle to deliver the following message: Collaborative learning in the heterogeneous environment is provably more difficult than that in the ...
In the CL model studied by (TZZ19, ) and (KZZ20, ), each agent interacts with the same environment; for the BAI problem in particular, by pulling the same arm, the agents sample from the same data distribution. However, as mentioned earlier, heterogeneous environments are inherent in many real-world collaborative learn...
The Collaborative Learning Model.   Most study for BAI has been done in the centralized model, in which just one agent pulls the set of arms sequentially. (TZZ19, ; KZZ20, ) studied BAI in the collaborative learning (CL) model, where there are K𝐾Kitalic_K agents, who try to learn the best arm in parallel via communica...
B
Figure 3: Performance of different algorithms for the NN-PCA problem of (5.4). MNIST (left) with N=60000𝑁60000N=60000italic_N = 60000, n=784𝑛784n=784italic_n = 784, covtype (left center) with N=581012𝑁581012N=581012italic_N = 581012, n=54𝑛54n=54italic_n = 54, a9a (right center) with N=32561𝑁32561N=32561italic_N =...
Despite an upsurge in developing optimization methods to address such a problem, the potential of low-memory quasi-Newton methods has largely been neglected which can be partially attributed to the absence of theoretical foundations for handling nonsmooth settings. In the smooth strongly convex settings, competitive co...
For the nonconvex nonnegative principal component analysis problem, we compare against [40], which addresses Finito/MISO in the general nonsmooth nonconvex case. Additionally, we compare SPIRAL against SMD and the Bregman Finito/MISO method [39] for the phase retrieval problem, where the cost function lacks a Lipschitz...
As depicted in Figure 3, the quasi-Newton updates in SPIRAL significantly enhance the convergence rate compared to (low-memory) Finito/MISO, which lacks such updates. Although proxSARAH exhibits faster convergence for this problem, it performs slower in the Lasso problem and is unable to handle non-Lipschitz different...
As depicted in the figure, SPIRAL demonstrates significantly faster performance compared to the other algorithms. Even though the cost function in this scenario does not possess a Lipschitz continuous gradient, we evaluate the performance of adaSPIRAL, both in Bregman and Euclidean versions, in Figure 0(a). It is impor...
C
where 1≤i,j,k≤500formulae-sequence1𝑖𝑗𝑘5001\leq i,j,k\leq 5001 ≤ italic_i , italic_j , italic_k ≤ 500. The (numerical) tubal rank of the tensor Case I is 21 and it is 14141414, for the tensor Case II. We have used the Matlab code tubalrank.m included in Tensor-Tensor Product Toolbox 2.0 to estimate the tubal rank of...
Then, we applied the truncated t-SVD with the mentioned tubal ranks to the underlying data tensors. The running times of the proposed algorithm and the truncated t-SVD are reported in Table II. It is seen that the Algorithm 4 outperforms the truncated t-SVD in terms of running time.
Then to examine the speed-up of our algorithm, we used the truncated t-SVD with the estimated tubal rank 10. The running times of the proposed algorithm and the truncated t-SVD algorithm for different dimensions for Cases I-III are reported in Figure 4. The linear scaling of Algorithm 4 compared to the truncated t-SVD ...
In this example, we apply Algorithm 4 to the COIL-100 dataset [51] which is the extension of the COIL-20 dataset. This data tensor consists of 7200 color images (100 objects under 72 rotations per object, see Figure 6 for some samples of this data tensor). The size of each image is 128×128×31281283128\times 128\times 3...
In this example, we apply our proposed algorithm to compress the Yale B dataset. This dataset includes images of size 192×168192168192\times 168192 × 168 for 38383838 persons under 30303030 different illumination conditions and as a results a tensor of size 192×168×30×381921683038192\times 168\times 30\times 38192 × 1...
A
Interpretability of the predictive models is often a desirable property of machine learning algorithms. Since the models produced by the SSL-PCTs are in the form of a decision tree, they are readily interpretable. To the best of our knowledge, in the literature, no other semi-supervised method for MLC and HMLC produces...
Interpretability of the predictive models is often a desirable property of machine learning algorithms. Since the models produced by the SSL-PCTs are in the form of a decision tree, they are readily interpretable. To the best of our knowledge, in the literature, no other semi-supervised method for MLC and HMLC produces...
To exemplify the interpretability and to highlight the possible differences between SL-PCTs and SSL-PCTs, we provide an example of supervised and semi-supervised predictive clustering trees obtained for the Emotions dataset with 100 labeled examples (Figure 5) where the task is to predict an emotion evoked by music on...
The degree of interpretability of the tree-based models is typically expressed in terms of their size. A large tree can be more difficult to interpret, and vice versa, a small tree can be easier to interpret. The tree size is often a trade-off between accuracy and interpretability. Small trees are easy to interpret, b...
A closer analysis of the results is shown in Figure 6, where it is possible to evaluate the influence of parameter w𝑤witalic_w on the tree size. The analysis reveals that unsupervised trees (w=0𝑤0w=0italic_w = 0) are much bigger than semi-supervised (0<w<10𝑤10<w<10 < italic_w < 1) or supervised (w=1𝑤1w=1italic_w = ...
C
In the navigational controls estimation task, DeepIPC also has the best performance in line with the waypoints prediction result. The MLP agent can leverage useful features encoded from both RGB and BEV semantic maps. Therefore, the MLP agent can perform as well as the PID agent in estimating steering and throttle. Wi...
The best drivability is defined by the lowest intervention count and intervention time. In the online test, there is no need to measure the inference speed as we limit the observation sampling to 4 Hz (the same configuration as the data gathering process used for training and validation) to perform a fair evaluation f...
The purpose of the online test is to evaluate the model’s drivability in driving the vehicle. The model must drive the vehicle safely by following a set of route points while avoiding obstacles (e.g., a vehicle stopped on the left side of the road). The experiment is conducted three times for each condition and on dif...
Table IV shows that DeepIPC achieves the best drivability at noon where it has the lowest intervention count and intervention time. Meanwhile, DeepIPC is comparable to Huang et al.’s model in the evening where it achieves the lowest intervention time but has a higher intervention count. Keep in mind that a model with a...
In imitation learning and behavior cloning, a considerable amount of expert driving records is needed for training and validation (train-val) [52][53][54][55]. To create the dataset, we drive the vehicle at a speed of 1.25 m/s in a certain area inside Toyohashi University of Technology, Japan. As shown in Fig. 3, the ...
B
First, one could ask what is the most general width-parameter defined by a min-max formula over the bags of a tree decomposition (see, e.g. [2, 36]) that allows us to solve problems like Maximum Independent Set in polynomial time when bounded? For parameters where the width of a bag depends only on the induced subgraph...
In particular, we recall that Maximum Independent Set is NP-hard on graphs with each edge subdivided twice, but such graphs admit a tree decomposition where one bag is a large independent set, and the induced subgraphs of the other bags are isomorphic to 4444-vertex paths. It follows that if the width-measure of a bag ...
However, it is not always the size of a bag that matters. For example, suppose that every bag of the decomposition is a clique, that is, the graph is chordal. Since every independent set intersects each of the clique-bags in at most one vertex, dynamic programming still computes maximum weight independent sets in such...
A similar example shows that this type of parameters where the width of a bag Xtsubscript𝑋𝑡X_{t}italic_X start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT depends on G⁢[N⁢[Xt]]𝐺delimited-[]𝑁delimited-[]subscript𝑋𝑡G[N[X_{t}]]italic_G [ italic_N [ italic_X start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ] ] cannot be g...
minor-matching hypertree-width, escapes this argument because it does not only depend on the subgraphs induced by the bags, but also on the neighborhoods of the bags. In particular, for 𝗍𝗋𝖾𝖾⁢-⁢μ𝗍𝗋𝖾𝖾-𝜇\mathsf{tree}\textnormal{-}\musansserif_tree - italic_μ the width of a bag Xtsubscript𝑋𝑡X_{t}italic_X start_P...
A
Table VI shows that (1) the accuracy of ADMM-based methods under DP is on par with the non-private accuracy (ϵ=∞italic-ϵ\epsilon=\inftyitalic_ϵ = ∞) on MNIST, NUS-WIDE and ModelNet40. Nevertheless, there is a discernible decrease of 13.6% for VIMADMM on CIFAR when ϵ=1italic-ϵ1\epsilon=1italic_ϵ = 1, which underscores t...
Methods w/o model splitting (FDML, VIMADMM-J) generally performs better than methods w/ model splitting. This is mainly because the logits have a smaller dimension than the embeddings, and the total amount of noise added to the logits output is smaller than the embedding output; thus VFL w/o model splitting methods re...
our ADMM-based methods converge faster and achieve higher accuracy than gradient-based baselines, especially on CIFAR. This is because the multiple local updates enabled by ADMM lead to higher-quality local models at each round, thereby speeding up the convergence.
We note that multiple local updates of Eq. III-B enabled by ADMM lead to better local models at each communication round compared to gradient-based methods, thus VIMADMM requires fewer communication rounds to converge as we will show in Section VI-A. These six steps of VIMADMM are summarized in Algorithm 1.
(2) Our ADMM-based methods reach significantly higher utility than gradient-based methods, especially under small ϵitalic-ϵ\epsilonitalic_ϵ. We attribute this to the fact that ADMM-based methods converge in fewer rounds than gradient-based methods at each round, which is also evident in the non-DP setting as shown in F...
D
Graph structured data are ubiquitous in a variety of domains, such as the Internet and the world-wide web [1, 2, 3], social networks [4, 5, 6], scientific citation network [7, 8, 9], bioinformatics [10, 11, 12], and so on. To better model graph structured data, graph neural networks have recently attracted increasing ...
Dynamic graph neural networks aim to capture the temporal dynamics for updating the node embeddings, when new connections or links between nodes are established. Based on the properties of dynamic graphs, current dynamic graph neural networks can be roughly divided into two categories [54, 55, 56, 57]: discrete-time ba...
Graph neural networks mentioned above are originally designed for static graphs. However, graph structured data are often dynamic in nature in many real-world applications [24, 25, 26, 27, 28, 29, 30]. Thus, these static graph neural network models often fail in handling such graph data, due to their oversight of the ...
The aforementioned temporal graph neural network models have achieved promising performance on dynamic graphs of various domains. Typically, these models assume that the embeddings of neighbor nodes need to be updated to capture temporal dynamics once new links are added.
Graph structured data are ubiquitous in a variety of domains, such as the Internet and the world-wide web [1, 2, 3], social networks [4, 5, 6], scientific citation network [7, 8, 9], bioinformatics [10, 11, 12], and so on. To better model graph structured data, graph neural networks have recently attracted increasing ...
B
DistributionNet [35] uses uncertainty learning to deal with noisy samples, such as the samples with the wrong label or outliers’ data. SCFU [46] exploits spatial-wise and channel-wise uncertainty to solve the occluded person re-identification problem.
According to the sequence quantity proportion in 𝒳vsubscript𝒳𝑣\mathcal{X}_{v}caligraphic_X start_POSTSUBSCRIPT italic_v end_POSTSUBSCRIPT and 𝒳csubscript𝒳𝑐\mathcal{X}_{c}caligraphic_X start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT, there is a data imbalance in benchmarks, which will make the model biased if it l...
Progressive Uncertainty maps each sequence as a Cross-View Gaussian distribution and a Cross-Cloth Gaussian distribution in the feature spaces instead of a point, which can relax the constraints for the model. The identity-branch accepts features output from the backbone and learns the mean feature, which is used to re...
DUL [43] expands PFE to learn the feature and uncertainty simultaneously. Therefore, the learned uncertainty can affect feature learning by adaptively reducing the negative influence of noisy training samples. Shi et al.[44] learns a universal representation by splitting the feature representation into sub-embeddings a...
Inspired by their works, we use uncertainty learning to solve the cross-cloth gait recognition problem at both the distribution and feature levels. The variance generated by our framework can model the optimization difficulty caused by the quantity imbalance and silhouettes variation diversity.
D
Q-learning is a common unit of RL-based dialogue policies. The overestimation bias of ME propagates into model action values Q⁢(st,⋅)𝑄subscript𝑠𝑡⋅Q\left(s_{t},\cdot\right)italic_Q ( italic_s start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT , ⋅ ). In dialogue Q⁢(st,⋅)𝑄subscript𝑠𝑡⋅Q\left(s_{t},\cdot\right)italic_Q (...
We use a dialogue turn to show the negative effects of the overestimation bias. In Figure 1(b), the dialog state tracker module outputs state embedding, dialogue policy processes state embedding and predicts the wrong dialogue action B instead of the correct action A based on the biased action values.
Task-completion dialogue systems are commonly implemented in two schemes. One is by end-to-end training, such as Zhang et al. (2020a). The other is a pipeline framework (Chen et al., 2017), which typically consists of four modules that are independently trained, as shown in Figure 1(a): natural language understanding (...
Reinforcement learning (RL) algorithms, specifically Q-learning (Watkins and Dayan, 1992) based algorithms, have become a mainstream method for training the dialogue policy module (Peng et al., 2018; Zhang et al., 2020b). For each step, the policy agent updates its action value 111This value is the expected return for ...
Figure 4: The learning curves of the averaged maximal action value of the dialogue starting state when dialogue policies are evaluated on the movie test set during the training. The Y-axis means the averaged maximal action value of the starting state.
A
Following the division of our methodology, we define our framework as a two-step. First, we propose a Hierarchical Multitask Multi-Layer Perceptrons (MLP) Mixer (H3M), to classify each observed video to an action label, as well as to extract the overall intention of the human. The MLP Mixer-based architecture [32] has ...
The fundamental challenge of LTA of human actions is the inherent uncertainty of the future. The uniqueness of the human being results in high variability of how each of us executes a certain task. Moreover, this behaviour may vary for the same individual at different moments and also depending on the environment. How...
Therefore, we develop a methodology that aims to constrain the variability of future actions based on the human intention estimated from past observations. We predict a hierarchical structure from a sequence of videos, each depicting a particular human action. From this given video clip sequence, we define two differe...
Finally, we investigate the performance of our whole framework based on the end-to-end evaluation. First, H3M classifies the actions and the intention from the observed clips. Then, based on these predictions, our I-CVAE model anticipates the Z=20𝑍20Z=20italic_Z = 20 actions in the future. In Table 4 we evaluate the L...
To demonstrate the effectiveness of our approach, we make use of the most diverse dataset of human videos currently available, Ego4D [13], and in particular we evaluate our results in the LTA benchmark. Ego4D provides first-person videos of humans experiencing everyday activities around the world. In the case of the L...
D
ℒcanonical=𝔼𝒔∼𝒮⁢[‖ψ⁢(ϕ⁢(𝒔))−𝒄‖2],subscriptℒcanonicalsubscript𝔼similar-to𝒔𝒮delimited-[]superscriptnorm𝜓italic-ϕ𝒔𝒄2\mathcal{L}_{\text{canonical}}=\mathbb{E}_{\bm{s}\sim\mathcal{S}}\Big{[}\big{% \|}\psi\big{(}\phi(\bm{s})\big{)}-\bm{c}\big{\|}^{2}\Big{]},caligraphic_L start_POSTSUBSCRIPT canonical end_POSTSUBSC...
We resort to model uncertainty to tackle this problem. These anomalies are typically accompanied by rare and inconsistent behaviors, and as a result, the one-class learning model tends to make predictions unconfidently. As shown in Fig. 1 (c), we aim to use this type of uncertainty to weaken the contribution of anomaly...
Our one-class learning objective ℒUMCsubscriptℒUMC\mathcal{L}_{\text{UMC}}caligraphic_L start_POSTSUBSCRIPT UMC end_POSTSUBSCRIPT is calibrated to adaptively penalize uncertain predictions and simultaneously encourage confident predictions, thus accomplishing the masking of anomaly contamination in the training set. Be...
We address two key challenges in the current one-class learning pipeline, i.e., the presence of anomaly contamination and the absence of knowledge about anomalies. COUTA achieves this goal through two novel calibration methods – uncertainty modeling-based calibration (UMC) and native anomaly-based calibration (NAC). In...
Therefore, to address the anomaly contamination problem, we can give a relatively mild penalty to predictions that are with high model uncertainty, thus masking anomaly contamination in a soft manner. On the other hand, to ensure effective optimization of hard normal samples, the one-class classification model should a...
D
Often, appending contextual examples from outer sources to the training set, or permuting the training samples themselves to append variation, helps mitigate data sparsity. This is known as data augmentation. Nayak et. al. (Nayak et al., 2017) propose the creation of pseudo-samples by permuting the slot orderings of th...
Similar to regularization in the greater deep learning landscape (Goodfellow et al., 2016), regularization practices in D2T append additional constraints to the loss function to enhance generation fidelity. As such, Mei et. al. (Mei et al., 2016) introduce a coarse-to-fine aligner to the seq-to-seq framework that uses...
In the D2T premise, language-conditional reinforcement learning (RL) (Luketina et al., 2019) often aids in model optimization through its role as auxiliary loss functions. While traditionally, the BLEU (see §6.1) and TF-IDF (Ramos et al., 2003) scores of generated texts were used as the basis for reinforcement learnin...
Following the successful applications of knowledge-grounded language models (Ahn et al., 2016; Logan et al., 2019), Konstas et. al. (Konstas et al., 2017) propose a domain-specific pretraining strategy inspired by Sennrich et. al. (Sennrich et al., 2016) to combat the challenges in data sparsity, wherein self-training ...
Chen et. al. (Chen et al., 2019b) append knowledge-graphs representing external context to the table-text pairs and quantify its efficacy through their metric KBGain - the ratio of tokens unique to the external context to the total number of tokens in the narrative. Similarly, Ma et. al. (Ma et al., 2019) augment the ...
D
Fig. 10 demonstrates some qualitative results generated by Motifs, Motifs+NICE, and Motifs+NICEST. From Fig. 10, we can observe that Motifs tends to predict coarse-grained (i.e., head) predicates, such as near, while Motifs+NICE tends to predict fine-grained (i.e., tail) predicates, such as sitting on and covering. Th...
In this paper, we argued that two plausible assumptions about the ground-truth annotations are inapplicable to existing SGG datasets. To this end, we reformulated SGG as a noisy label learning problem and proposed a novel model-agnostic noisy label correction and sample training strategy: NICEST. It is composed of NICE...
Datasets. To ensure a comprehensive evaluation of our proposed method, we conducted all experiments on three datasets: the challenging VG [19], our newly split VG-OOD and GQA [20]. 1) VG: VG is the most widely utilized benchmark for SGG with over 108k images. We selected VG to thoroughly evaluate NICEST and ensure a f...
Learning with Noisy Labels. Existing noisy label learning methods can be roughly grouped into two categories: 1) Utilizing an explicit or implicit noisy model to estimate the distributions of noisy and clean labels, and then deleting or correcting these noisy samples. These models can be in different formats, such as n...
In this paper, we try to get rid of these two questionable assumptions and propose to reformulate SGG as a noisy label learning problem. Specifically, we propose a novel model-agnostic NoIsy label CorrEction and Sample Training strategy for SGG, dubbed NICEST. NICEST mitigates the noisy label learning problem from two...
A
Selfish mining is not reported frequently in existing cryptocurrencies because selfish miners cannot find practical ways to launch the attack easily. It is widely accepted that the attacker’s computational power needs to exceed 33% to gain higher profits than honest mining. Attackers need to either occupy more than 33%...
Selfish Mining: Selfish mining was first proposed by Eyal et al. [9eyal2014majority]. A selfish mining attacker can earn extra rewards by intentionally generating a fork. When an attacker discovers a new block in selfish mining, it will keep the block as its private branch and keeps mining after it. When other miners ...
Selfish mining is not reported frequently in existing cryptocurrencies because selfish miners cannot find practical ways to launch the attack easily. It is widely accepted that the attacker’s computational power needs to exceed 33% to gain higher profits than honest mining. Attackers need to either occupy more than 33%...
It is impractical to launch selfish mining by a single miner for large-scale public blockchains. On the one hand, it is hard to occupy more than 33% of mining power to ensure successful selfish mining, e.g., Bitcoin. According to [blockexplorer.com], the largest mining pool in Bitcoin only occupies 16% of overall mini...
Based on the partial block sharing strategy, we propose a new and practical mining attack called Partial Selfish Mining (PSM). As shown in Figure 1(b), PSM starts as selfish mining to withhold a newly mined block. Then, the attacker can launch the partial block-sharing strategy and finally releases the secret by broadc...
C
The fact that non-adaptive gradient descent is blocked from entering sharp (as quantified by maximum Hessian eigenvalue) regions of the loss landscape constitutes one implicit bias [35] of non-adaptive gradient descent. It is plausible that this implicit bias could impact generalization (e.g. see [33, 19]).
We now demonstrate that the behavior of the preconditioned sharpness during minibatch Adam parallels that of the sharpness during minibatch SGD. Namely, we observe that during minibatch Adam, the preconditioned sharpness (1) never rises more than a bit beyond the stability threshold of the full-batch algorithm, provid...
We now move beyond the full-batch setting to the more general setting of minibatch training. In the case of gradient descent and minibatch SGD, it is clear from prior work [20, 21] that during minibatch training, the sharpness is subject to similar effects as during full-batch training. For one, provided that training ...
At the EoS, gradient descent would still be moving into regions of higher curvature were it not being constantly repelled from these high-curvature regions by unstable dynamics. As we confirm below, these findings also generalize to preconditioned gradient descent (with a static preconditioner).
Beyond formal convergence analyses, [4, 10] proposed to model Adam as an system of ordinary differential equations in continuous time; this approach also cannot explain the unstable dynamics that we observe. [3] argued that Adam should be viewed as a variant of sign gradient descent.
D
An example of such data-driven approaches is manifold learning 28. The core of most manifold learning methods is having a notion of similarity between high-dimensional data samples, usually through a distance metric 29, 30, 31. The distances are integrated into a global parametrization of the data using kernels to repr...
Consider data obtained from enhanced sampling simulations in which we record or select samples of the high-dimensional configuration variables 𝐱𝐱\mathbf{x}bold_x. These data define the training set from which manifold learning methods construct a low-dimensional manifold. The training data set can be generally expres...
We can circumvent this issue by using learning data set from enhanced sampling simulations where transitions between metastable states are more frequently observed and are no longer rare events. However, in this case, the simulation data set is biased and does not correspond to the real system, as it is sampled from a ...
When using manifold learning on dynamical data resulting from atomistic simulations, these data must contain statistically sufficient information about the sampled chemical process. If a high-dimensional data set used in manifold learning does not capture the rare transitions between metastable states, the learned low...
Among the main challenges in atomistic simulations of chemical systems is the significant temporal disparity between the timescales explored in standard atomistic simulations and the long timescales observed in experiments. Atomistic simulations can only reach timescales of up to milliseconds and thus cannot exhaustive...
C
The American Heart Association division of the left ventricle (Manuel et al., 2002) is well established and was developed to standardise the nomenclature of the various subregions of the left ventricle. For consistency with the figures here, the longitudinal axis is taken to run up the centre of the ventricular cavity,...
Given that this division is geometrically based, and the one discussed here is physiologically based, an exact correspondence should not be expected between the AHA subdivision and any seen here. However, the AHA division is physiologically motivated and is essentially intended as a tool to standardise and clarify dis...
In order to compare the divisions, we must make the AHA regions. This is straight forward with the definitions given by Manuel et al. (Manuel et al., 2002). The 17 subregions can be seen in Fig. 13a in ℝ3superscriptℝ3\mathbb{R}^{3}blackboard_R start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT with the same view as is seen in...
Such a map is useful as it could ease knowledge exchange between modellers and stakeholders in these models, such as interested clinicians. In the specification of the AHA regions, the septum corresponds to regions 3, 4, 9, 10, & 15. Here, the septum corresponds to regions 3, 9, 14, & 17. The alignment between the subd...
Other than the physiologically motivated subdivision augmented with what is essentially a nearest neighbour algorithm based on a 3D analogue of a taxicab metric, it is possible to divide the ventricle with other metrics. Perhaps the easiest to implement is a Euclidean distance based algorithm.
A
A limitation in our approach is that we do not incorporate the scenarios of peak hours, where unseen amount and duration of burst traffic triggers the network orchestration scheme, which eventually results in a dynamic-routing topology. Generalization to dynamic and size-variant graphs with GNN models remain to be stud...
In our model, we leverage the conclusions from the aforementioned works and cast the graph-size-generalization problem to a transferred formulation of graph, which is considered to be learnable by spectral graph-convolution methods. On the other hand, extrapolation towards out-of-distribution data is a fundamental chal...
Software-defined networking (SDN) is a state-of-the-art approach to network management, which permits dynamic and programmatic network configurations (e.g., routing), aimed at improved network performance and monitoring, abstracted away from individual network elements into a centralized network control layer. Consequ...
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. We thank Dr. Pedro Batista and Dr. Alessandro Previti from Ericsson Research for the insightful discussions.
Dataset. We verify the performance of the model on a simulated dataset provided by [24]. The training, validation and test sets are produced by a packet level simulator (OMNeT++ v5.5.1 [25]). Though lacking of other benchmark dataset, this dataset includes a summary of patterns of real-world network topologies from The...
C
(1) SPR considers multi-step consistency in addition to the one-step prediction of our proposed contrastive objective, namely, SPR incorporates the information of multiple steps ahead of (sh,ah)subscript𝑠ℎsubscript𝑎ℎ(s_{h},a_{h})( italic_s start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT , italic_a start_POSTSUBSCRIPT ...
In particular, we adopt the same hyper-parameters as that of SPR (Schwarzer et al., 2021). Meanwhile, we adopt the last layer of the Q-network as our learned representation ϕ^^italic-ϕ\widehat{\phi}over^ start_ARG italic_ϕ end_ARG which is linear in the estimated Q-function.
To improve the sample efficiency of RL algorithms, recent works propose to learn low-dimensional representations of the states via solving auxiliary problems (Jaderberg et al., 2016; Hafner et al., 2019a, b; Gelada et al., 2019; François-Lavet et al., 2019; Bellemare et al., 2019; Srinivas et al., 2020; Zhang et al., ...
For our algorithm, we make the following assumption for the negative sampling distribution 𝒫𝒮−⁢(⋅)superscriptsubscript𝒫𝒮⋅\mathcal{P}_{\mathcal{S}}^{-}(\cdot)caligraphic_P start_POSTSUBSCRIPT caligraphic_S end_POSTSUBSCRIPT start_POSTSUPERSCRIPT - end_POSTSUPERSCRIPT ( ⋅ ).
We remark that we adopt the architecture of SPR as an empirical simplification to our proposed contrastive objective, which does not require explicit negative sampling and the corresponding parameter tuning (Schwarzer et al., 2021). This leads to better computational efficiency and avoidance of defining an improper neg...
D
In (Ben Rached et al., 2023), we introduced the DLMC estimator, based on a decoupling approach (dos Reis et al., 2023) to provide a simple importance sampling scheme implementation minimizing the estimator variance. The current paper extends the DLMC estimator to the multilevel setting, achieving better complexity tha...
The remainder of this paper is structured as follows. In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) a...
The decoupling approach was developed for importance sampling in MV-SDEs (dos Reis et al., 2023; Ben Rached et al., 2023), where the idea is to approximate the MV-SDE law empirically as in (4), use the approximation as input to define a decoupled MV-SDE and apply a change of measure to it. We decouple the computation o...
Importance sampling for MV-SDEs has been studied in (dos Reis et al., 2023; Ben Rached et al., 2023). The decoupling approach developed by (dos Reis et al., 2023) defines a modified, decoupled MV-SDE with coefficients computed using a realization of the MV-SDE law estimated beforehand using a stochastic particle syste...
The decoupled MV-SDE (8) for the given empirical law {μtP:t∈[0,T]}conditional-setsubscriptsuperscript𝜇𝑃𝑡𝑡0𝑇\left\{\mu^{P}_{t}:t\in[0,T]\right\}{ italic_μ start_POSTSUPERSCRIPT italic_P end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT : italic_t ∈ [ 0 , italic_T ] } is a standard SDE, making it po...
B
{5.5⁢ m,if ⁢r≥6,km0.1⁢ m,otherwisecases5.5 mif 𝑟6km0.1 motherwise\begin{cases}5.5\text{ m},&\text{if }r\geq 6,\text{km}\\ 0.1\text{ m},&\text{otherwise}\end{cases}{ start_ROW start_CELL 5.5 m , end_CELL start_CELL if italic_r ≥ 6 , km end_CELL end_ROW start_ROW start_CELL 0.1 m , end_CELL start_CELL otherwise end_CELL...
We assume that the spacecraft is equipped with a LiDAR (Light Detection and Ranging), two optical navigation cameras and a set of accelerometers, for navigation with respect to the asteroid. A summary of the values used in the simulation is presented in Table 2. We consider that no radiometric data is available for the...
in which I⁢F⁢O⁢V=F⁢O⁢V/Np𝐼𝐹𝑂𝑉𝐹𝑂𝑉subscript𝑁𝑝IFOV=FOV/N_{p}italic_I italic_F italic_O italic_V = italic_F italic_O italic_V / italic_N start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT is the instantaneous field of view of the camera, with Np=1024subscript𝑁𝑝1024N_{p}=1024italic_N start_POSTSUBSCRIPT italic_p end...
Hayabusa 2 spacecraft has three optical navigation cameras, ONC-T, ONC-W1, and ONC-W2 [48]. For the optical navigation of our analysis scenario, we consider the ONC-T and ONC-W1. The ONC-T is a telescopic camera with a FOV of 6.27°°\degree° and pixel size of 1024×\times×1024, while the ONC-W1 has a wide FOV of 69.71°°...
By “far-approach”, we consider the period of the mission when the spacecraft changes from heliocentric to relative navigation about the small-body, which is the same as phase 1 of the Hayabusa 2 campaign [49]. That phase ends with the spacecraft at an arbitrarily far distance from the small body, when a preliminary as...
C
A shortcoming of the present approach lies in the difficulty of characterizing the structural dependency on the input datum more explicitly. In particular, while (Bennett et al., 2007, Proposition 5.2) guarantees the existence of suitable δ,Δ>0𝛿Δ0\delta,\Delta>0italic_δ , roman_Δ > 0 for each (feasible, simple) Brasca...
In future work, we hope to further investigate approaches for computing δ,Δ𝛿Δ\delta,\Deltaitalic_δ , roman_Δ explicitly. In addition, we hope to apply the techniques presented here to geodesically convex optimization problems similar to  (1.3), such as the operator scaling problem (Garg et al., 2018) and difference o...
A growing body of literature has analyzed a formulation of the problem, which links to the more general operator scaling problem (Allen-Zhu et al., 2018; Garg et al., 2015, 2018; Kwok et al., 2019; Franks, 2018; Bürgisser et al., 2018, 2019). We briefly recall some of the key results in this line of work and comment on...
The map G𝐺Gitalic_G resembles the alternate scaling algorithm for Brascamp-Lieb constants (Garg et al., 2018, Alg. 1). The resemblance of both approaches derives from an exploitation of the difference-of-convex structure of problem 1.3 (see also (Weber and Sra, 2023)). However, the Thompson geometry perspective employ...
A shortcoming of the present approach lies in the difficulty of characterizing the structural dependency on the input datum more explicitly. In particular, while (Bennett et al., 2007, Proposition 5.2) guarantees the existence of suitable δ,Δ>0𝛿Δ0\delta,\Delta>0italic_δ , roman_Δ > 0 for each (feasible, simple) Brasca...
A
The latter remark can be further visualized as follows. Consider two 1111-cycles colored red and blue in the simplicial complex shown in Figure 2. These cycles may appear different, but if their difference can be expressed as the boundary of a 2222-simplex in the complex, they are considered homologous. This implies t...
This section lays the groundwork for extracting multiscale topological signatures from weighted graphs. First, we model higher-order interactions among vertices in the graph using simplicial complexes, similar to the approach used in loop centrality [centrality]. This captures interactions beyond simple pairwise connec...
In this study, we propose novel centrality measures that leverage the persistence of homology classes and their merge history along the filtration. Integral to this is the development of an algorithm that captures the merge history of homology classes. These homology-based centrality measures produce, for all cycle gen...
When two homology classes merge due to simplex removal, the elder rule comes into play. This rule selects the generator formed at the lower threshold as the natural representative of the merged class. A key observation is made: even when a different generator survives a merge, the persistence information can be "trans...
The generators of the homology group play a crucial role. They represent the distinct topological cycles embedded within our combinatorial model of the graph. Collectively, these cycles characterize the overall topology of the graph. Intriguingly, it is these generators and their corresponding homology classes that we ...
D
Inspired by the theory of multi-domain learning, we extend the FixMatch (DBLP:conf/nips/SohnBCZZRCKL20, ) 111FixMatch is an excellent baseline in SSDG, which will be validated in the experimental section. to a multi-task learning method, named MultiMatch, for semi-supervised domain generalization. Specifically, we bui...
In this part, we will describe our multi-task learning framework. Since there are multiple different domains in the SSDG task, we consider training each domain as an independent task (i.e., the local task for each domain), which can effectively reduce the interference between different domains during pseudo-labeling. ...
Based on the upper bound of the generalization error in Eq. 4, the proposed method needs to satisfy two requirements: 1) most of modules in the model are shared for all domains, which can be sufficiently trained by all samples, and 2) the model can reduce the interference of domain gap between different domains. Theref...
We propose a simple yet effective multi-task learning (i.e., MultiMatch) for semi-supervised domain generalization, which can effectively reduce the interference from different domains during pseudo-labeling. Also, most of the modules in the model are shared for all domains, which can be sufficiently trained by all sa...
Remark. In our multi-task learning framework, using the independent BN can effectively mitigate the interference of different domains, as shown in Eq. 4. In addition, in our method, most of the modules are shared for all domains, which can sufficiently exploit all samples to reduce the third item in Eq. 4. Hence, our m...
C
We evaluate 5 FGVC datasets using 1, 2, 4, 8 shots for each class following following previous studies [45, 25, 63] including Food101 [4], Oxford Flowers [41], Oxford Pets [42], Stanford Cars [29], and Aircraft [38]. Averaged top-1 accuracy is reported in Tab. 3. We search from the same range as before and adopt the sa...
Meanwhile, Conv-Adapter provides a better accuracy-efficiency trade-off than Visual Prompt Tuning on few-shot classifications. It surpasses VPT with an average margin of 1.35% with ResNet50 Bit-M and 3.69% with ConvNext-B. In the 8-shot case, VPT drops around 8% performance compared with Fine-tuning due to limited capa...
For large models such as ConvNext-L and Swin-L, conducting traditional fine-tuning requires training nearly 196M parameters, whereas Conv-Adapter improves the parameter efficiency with only 7.8% and 4.5% of the fine-tuning parameters on ConvNext-L and Swin-L respectively. Although the transfer performance of Conv-Adapt...
This section verifies the transferability and parameter efficiency of Conv-Adapter from various aspects, including image classification, few-shot classification, object detection, and semantic segmentation. Additionally, we provide an ablation study of Conv-Adapter for its design choices and an analysis of its performa...
Compared with Fine-tuning, Conv-Adapter boosts few-shot classifications with an average 3.39% margin over different shots using only around 5% trainable parameters. Especially for 1/2-shot cases, Conv-Adapter shows supreme performance compared with Fine-tuning and VPT [25] (11.07% on 1-shot and 6.99% on 2-shot with la...
D
Existing PINNs methods face challenges in managing abrupt variations or discontinuities in dynamical systems. Such changes often signal shifts in system dynamics or the influence of external factors. For example, detecting leakages in pipelines using limited sensor data [18]; traffic flow management by predicting conge...
While changepoints detection methods have shown promise in identifying significant shifts in data characteristics across various fields—from high-dimensional time series data [31, 32, 33], computer vision [34, 35], speech recognition [36, 37], real-time medical monitoring [38, 39], to disturbance localization in power...
Deep learning and machine learning methods are widely studied and used in academia and industry. They perform successfully in tasks such as dimensionality reduction [1], computer vision [2, 3], multimodal learning [4, 5], and time series analysis [6]. Recent advancements have further expanded the applicability of deep ...
We introduce a novel method for identifying changepoints in dynamic systems governed by general PDEs dynamics. Our approach works with piecewise-constant time-changing parameters and leverages total variation regularization on the first-order differences of parameters. We also propose an online learning strategy that ...
Existing PINNs methods face challenges in managing abrupt variations or discontinuities in dynamical systems. Such changes often signal shifts in system dynamics or the influence of external factors. For example, detecting leakages in pipelines using limited sensor data [18]; traffic flow management by predicting conge...
A
For example, b⁢a⁢b⁢c⁢d𝑏𝑎𝑏𝑐𝑑babcditalic_b italic_a italic_b italic_c italic_d and a⁢b⁢c⁢b⁢c⁢d𝑎𝑏𝑐𝑏𝑐𝑑abcbcditalic_a italic_b italic_c italic_b italic_c italic_d are not primitive, because they are generated by a⁢b⁢c⁢d𝑎𝑏𝑐𝑑abcditalic_a italic_b italic_c italic_d; but a⁢b⁢c⁢b⁢d⁢a𝑎𝑏𝑐𝑏𝑑𝑎abcbdaitalic_a ital...
for which such u𝑢uitalic_u, v𝑣vitalic_v, f𝑓fitalic_f and g𝑔gitalic_g exist. Observe that, since u𝑢uitalic_u and v𝑣vitalic_v are primitive, they feature no immediately repeated letter. So suppose w𝑤witalic_w does—i.e. is of the form w=x⁢a⁢a⁢y𝑤𝑥𝑎𝑎𝑦w=xaayitalic_w = italic_x italic_a italic_a italic_y for some ...
change our position in u𝑢uitalic_u by more than one letter at a time, and we visit every position of u𝑢uitalic_u at least once. If w=uf𝑤superscript𝑢𝑓w=u^{f}italic_w = italic_u start_POSTSUPERSCRIPT italic_f end_POSTSUPERSCRIPT for f𝑓fitalic_f a walk, we say that u𝑢uitalic_u generates w𝑤witalic_w.
(go to the b𝑏bitalic_b, then turn back to the a𝑎aitalic_a, then turn again and go to the end). For the converse, suppose that w𝑤witalic_w is not primitive, let u𝑢uitalic_u be a generator of w𝑤witalic_w with |u|<|w|𝑢𝑤|u|<|w|| italic_u | < | italic_w |, and let
Theorem 1 is relatively surprising: let u𝑢uitalic_u and v𝑣vitalic_v be primitive words. Now suppose we go for a walk on u𝑢uitalic_u and, independently, go for a walk on v𝑣vitalic_v; recalling the stipulation that the two walks visit every position in the words they apply to, the theorem says that, provided only tha...
D
A practical question is how to create this subdivision. Oftentimes, the subdivision is chosen fine enough to resolve the features of interest but coarse enough to keep the computational cost in check. Regularization is frequently used to ensure that an overly fine mesh does not lead to unwanted oscillations in the reco...
The relevant question to ask is whether this mesh is better suited to the task than any other potential mesh. Answering this question is notoriously difficult in inverse problems because, in general, the exact solution of the problem is unknown if only finitely many measurements are available and if regularization is ...
accuracy of measurements on the one hand, and the uncertainty in the recovered parameters of the inverse problem on the other. But, none of the studies mentioned go on to specifically identify the role of information in the spatially variable ability to recover parameters in inverse problems in a systematic way. Let us...
Section 2.3 of [50] and the references therein), but, as with regularization, no overarching scheme is available to guide this choice of the mesh. [17] is also an example where the mesh is made part of what needs to be estimated in a Bayesian inversion scheme, which in practice appears to lead to meshes that are more r...
However, [17] is concerned with choosing the mesh used for the solution of the state equation, not the discretization of the parameter we seek – although it seems reasonable to assume that the scheme could be adapted to the latter as well. At the same time, the scheme described in [17] requires the solution of a Bayesi...
C
Important in this process in the initial rounds were the indications of frequency, which allowed for commonly occurring labels to be explored with priority. In the cases of synonymous or near synonymous labels, the hierarchy was useful in the beginning as a way of linking and establishing an order between them.
The annotation space (Figure 5) allows one to see the current annotation status of a number of images as well as to add and remove labels and zoom in on the details of the images. It is also possible to add new labels that are not part of the dataset and to filter images based on metadata.
The space allows to progressively check and extend the metadata at will, as well as remove any extraneous labels. On the other hand, depending on the frequency of the labels or subjects that can be used to filter at the previous stage, the annotation space can become crowded with aligned images that do not exhibit rel...
Clicking on a circle, a label can be added or removed. All saved changes can be inspected in a history pop-up showing the timestamp, the user, and the changes. This allows annotators to keep track of their interactions as well as those of others. Images can be viewed in high resolution in a pop-up If a specific word i...
On mouseover, the image is shown. A lasso selection can be used at the points to select a set of images for the labeling process. The selected points are increased in size to better highlight them. The re-annotation space is accessed from a button in the left-hand drawer. The current state of the graph and the point c...
B
In this part, we conduct a case study to verify the effectiveness of PanDa’s knowledge transfer ability and reveal its potential limitations in detail. First, we analyze why the vanilla PoT (i.e., SPoT) works well (even outperforms the model-tuning and PanDa) in some settings, but fails in others. We use the MNLI and ...
In this part, we conduct a case study to verify the effectiveness of PanDa’s knowledge transfer ability and reveal its potential limitations in detail. First, we analyze why the vanilla PoT (i.e., SPoT) works well (even outperforms the model-tuning and PanDa) in some settings, but fails in others. We use the MNLI and ...
Second, we would like to see whether PanDa can effectively transfer the knowledge between the source and target tasks. In the Example #2 of Figure 8, it can be seen that, although the individual source models (MNLI and CoNLL05) make the wrong predictions, PanDa can benefit from their useful general knowledge and help t...
Second, we would like to see whether PanDa can effectively transfer the knowledge between the source and target tasks. In the Example #2 of Figure 8, it can be seen that, although the individual source models (MNLI and CoNLL05) make the wrong predictions, PanDa can benefit from their useful general knowledge and help t...
Second, we would like to see whether PanDa can effectively transfer the knowledge between the source and target tasks. In the Example #2 of Figure 8, it can be seen that, although the individual source models (MNLI and CoNLL05) make the wrong predictions, PanDa can benefit from their useful general knowledge and help t...
B
We use categories linked with targets by default; when unavailable, we rely on root domains e.g., .tv and .gov are likely news and government sites. Categories of generic domains (e.g., .net, .com) are identified by direct visits (via Russian IP relays) or querying Internet archives if they are down. Some targets were...
Researchers, politicians, and journalists have long been fascinated by ‘cyberwar’ – the spectre of armed conflict between nations spilling over into attacks conducted over the Internet (Rid, 2012). ‘Colder’ forms of inter-state conflict are characterised by espionage and intelligence gathering, which may facilitate the...
Information warfare has long been part of ‘hybrid’ modern conflicts, especially around the control of communications (Hoffman, 2007; Libiseller, 2023). The enemy’s ability to spread news and propaganda can be degraded by targeting crucial sites, public services, broadcast and telecom infrastructure. Censorship is ofte...
Categories vary, yet five dominate 80.21% of all targets, see Figure 8. ‘News, media and propaganda’, including TV broadcasting, has been consistently promoted since the war began but only became the most common one in May when it overtook ‘IT solutions and services’. ‘Government and public services’, which includes mi...
Web Defacement Attacks. We fully scrape the most popular active defacement archives during the period; see Table 1. We started with Zone-H, the largest and most popular one (since March 2002) providing cybersecurity news and self-reported defacements along with hacking content (Kurzmeier, 2020). We then took out the mo...
C
When dissecting performance relative to the datasets, HET exhibits robust results for ImageNet, CIFAR10, and X-Ray. Nonetheless, there are instances within the Road Sign dataset where HET does not perform optimally at lower k𝑘kitalic_k values but recovers effectiveness at higher k𝑘kitalic_ks. This could be attributed...
Table 2. The comparative performance of HET across different attack algorithms and architecture combinations for the X-Ray and Road Sign datasets. Columns categorize the various attack algorithms employed, while rows detail the architecture pairings, with surrogate models (F0subscript𝐹0F_{0}italic_F start_POSTSUBSCRI...
In Tables 3 and 4 we present the results when ranking images for different k𝑘kitalic_k after applying the best perturbation to each image. Table 3 presents the findings for the CIFAR10 and ImageNet datasets, while Table 4 provides the results for the X-Ray and Road Sign datasets. Each cell within these tables indicate...
We direct the reader’s attention to the results presented in Figures 4 and 5, which presents the performance of our proposed ranking strategy, HET, across various datasets and model architecture pairings. Figure 4 details the outcomes for CIFAR10 and ImageNet, while Figure 5 delves into the X-Ray and Road Sign datasets...
Table 1. The comparative performance of HET across different attack algorithms and architecture combinations for the ImageNet and CIFAR10 datasets. Columns categorize the various attack algorithms employed, while rows detail the architecture pairings, with surrogate models (F0subscript𝐹0F_{0}italic_F start_POSTSUBSCRI...
D
In addition, with probability exponentially close to 1111, the model prediction on unseen data is statistically indistinguishable from the data-independent random variables that result from measuring K^N(rand)subscriptsuperscript^𝐾rand𝑁\widehat{K}^{\rm(rand)}_{N}over^ start_ARG italic_K end_ARG start_POSTSUPERSCRIP...
Figure 2: Schematic of effect of exponential concentration and shot noise on training and generalization performance. For the unseen (test) data, the behavior depends on how kernel values are statistically estimated. In the case of the Loschmidt Echo test, the model predictions are zero with high probability. On using...
On the other hand, the data independence of the kernel values means that the predictions of the trained model are completely independent of the training data and so the trained model in general performs trivially on unseen data. That is, the model generalizes terribly. By incorporating the effect of shot noise, this h...
Corollary 1 shows that, regardless of the measurement strategy to estimate the kernel value, exponential concentration leads to a trained model where the predictions on unseen inputs are independent of the training data. A visual illustration of the effect of exponential concentration in the presence of shot noise on ...
Finally, the analysis in the case of the projected quantum kernel is slightly more complicated as estimating the kernel requires us to first obtain the statistical estimates of the 2-norms between the reduced data encoding states on all individual qubits from quantum computers. Two common strategies to to do so include...
C
We extract the samples of each class using the annotation provided by the PREVENTION dataset. The samples are initially centre-cropped from 1920 × 600 to 1600 × 600 pixels, then resized to 400400400400 × 400400400400 pixels in spatial resolution. In order to reduce the computational cost, the data is further downsample...
To address this problem, we propose a novel end-to-end framework involving two approaches for lane change recognition. Specifically, our method takes advantage of the recent powerful action recognition models and mimics a human drivers’ behaviours by only utilizing the visual information. The first approach (RGB+3DN) ...
: The second method is designed over the first one. It uses the same 3D action recognition networks as the first method. Bounding box information is embedded to each frame of the RGB video data to improve classification and prediction accuracy. This method assumes that a separate vehicle prediction method has been used...
The data used for the first approach, RGB+3DN, and the second approach, RGB+BB+3DN, are RGB video data and video combined bounding box data respectively. Because of the limited annotation available in the PREVENTION data, training and validation data used for the second approach is more limited than used for the first...
To generate video combined bounding box data, we firstly use the same method of generating RGB video data to extract raw video clips of each class. Then the vehicle bounding boxes of vehicles in each frame are rendered into colour channels of each frame: the red channel is used to store the scene appearance as a gray ...
D
semantically equivalent. Then, as long as 𝒴⁢(pa)𝒴subscript𝑝𝑎\mathcal{Y}(p_{a})caligraphic_Y ( italic_p start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT ) and 𝒴⁢(pb)𝒴subscript𝑝𝑏\mathcal{Y}(p_{b})caligraphic_Y ( italic_p start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT ) are not identical, there always exists an
attribution method 𝒜δsubscript𝒜𝛿\mathcal{A}_{\delta}caligraphic_A start_POSTSUBSCRIPT italic_δ end_POSTSUBSCRIPT that can differentiate the developers. This 𝒜δsubscript𝒜𝛿\mathcal{A}_{\delta}caligraphic_A start_POSTSUBSCRIPT italic_δ end_POSTSUBSCRIPT can be constructed as follows: We
describe the difference between the anonymized programs as δ=𝒴⁢(pa)\𝒴⁢(pb)𝛿\𝒴subscript𝑝𝑎𝒴subscript𝑝𝑏\delta=\mathcal{Y}(p_{a})\,\backslash\,\mathcal{Y}(p_{b})italic_δ = caligraphic_Y ( italic_p start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT ) \ caligraphic_Y ( italic_p start_POSTSUBSCRIPT italic_b end_POSTSUBSC...
the method 𝒴𝒴\mathcal{Y}caligraphic_Y is forced to normalize the programs to the same representation, such that we have 𝒴⁢(pa)=𝒴⁢(pb)𝒴subscript𝑝𝑎𝒴subscript𝑝𝑏\mathcal{Y}(p_{a})=\mathcal{Y}(p_{b})caligraphic_Y ( italic_p start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT ) = caligraphic_Y ( italic_p start_POSTSUBSC...
be changed and API functions can be substituted. Given a program pasubscript𝑝𝑎p_{a}italic_p start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT, there typically exist many pb∈Psubscript𝑝𝑏𝑃p_{b}\in Pitalic_p start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT ∈ italic_P, such that
A
Figure 1: A conceptual comparison of different prompt tuning methods. (a) class-agnostic CoOp, (b) class-specified CoOp, (c) hard prompt sharing for CoOp, (d) our soft prompt sharing, (e) average performances on four datasets. In (a)-(d), we assume there are 2 classes per task and shared prompt contexts are in the sam...
CoOp was proposed by Zhou et al. [5], which enables the few-shot adaption of pre-trained CLIP model for image recognition. CoOp inherits the two-stream structure from CLIP to bridge the gap between pre-training and fine-tuning. In other words, it has an image encoder denoted as e⁢(⋅)𝑒⋅e(\cdot)italic_e ( ⋅ ) to extract...
Provided with pre-trained VLMs, how to adapt their implicit knowledge to various downstream tasks becomes an essential problem. Recently, Zhou et al. [5] introduced CoOp, a method that incorporates prompt tuning, originally developed in NLP [13, 14, 15], into computer vision to tackle the challenge of few-shot image re...
CoOp proposed by Zhou et al. [5] is the first method that successfully introduces prompt tuning to VLMs. Following this line, several prompt tuning methods were proposed to further improve the effectiveness and versatility for few-shot recognition task. CoCoOp [44] was proposed to address the class shift problem by in...
For few-shot classification, assume that there are C𝐶Citalic_C classes, and the associated C𝐶Citalic_C class name texts are available. For each class name, CoOp prepends a learnable prompt context to each class name before feeding it to text encoder. Actually, this context consists of a sequence of vectors. These ve...
B