Dataset columns: context (string, 250–7.19k characters), A (250–4.62k), B (250–4.17k), C (250–4.99k), D (250–8.2k), and label (string, 4 classes).
”Like if I read a news about an app or something, I would send community message probably to all, I’d say I’m concerned about an app. I will be probably cautioning everyone not to use that app because it’s using something fishy accessing private information or something like that. That’s how everyone could also take pa...
Almost half of the participants (42%, N=8 Parents and 53%, N=10 Teens) pointed out that this feature may affect the transparent relationship in their families. They primarily believed in a bi-directional transparency-based relationship and therefore also expected their teens/parents not to hide any apps from them. Many...
Overall, we found that most parents and teens made few considerations toward their own online safety or privacy when installing new apps or granting permissions to the apps they installed (RQ1). Meanwhile, parents often manually monitored the apps their teens installed but gave little thought to the permissions granted...
Although most participants disapproved of the feature that allowed them to hide apps from one another, some (37%, N=7 Parents and 47%, N=9 Teens) mentioned a positive aspect of this feature. They identified that this feature enabled users to have personal privacy on their app usage and a sense of independence. For exa...
The feature that garnered the most discussion among parents and teens was the ability to hide or show apps to one another. Overall, parents and teens were both concerned about this feature because it promoted secrecy and negated some of the purpose behind the app. One important thing to be noted here is that when we a...
D
We experiment with Voronoi diagrams in which the cells in the center of the diagram tend to be smaller. A point is sampled by first choosing a random cell and then choosing a uniform point on its boundary. This results in a higher sampling density on boundaries of smaller cells. We further inject additive noise. Furthe...
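A minimal sketch of the sampling scheme described above, using scipy's Voronoi diagram: pick a random (bounded) cell, then a uniform point on its finite boundary, then add Gaussian noise. The function name and the noise level noise_std are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.spatial import Voronoi

def sample_voronoi_boundary(seeds, n_samples, noise_std=0.01, seed=0):
    """Pick a random cell, then a uniform point on its finite boundary,
    then add Gaussian noise (higher density on small-cell boundaries)."""
    rng = np.random.default_rng(seed)
    vor = Voronoi(seeds)
    # collect the finite boundary segments of each cell
    cell_edges = {i: [] for i in range(len(seeds))}
    for (p, q), rv in zip(vor.ridge_points, vor.ridge_vertices):
        if -1 in rv:
            continue  # skip ridges extending to infinity
        seg = vor.vertices[rv]  # (2, 2): segment endpoints
        cell_edges[int(p)].append(seg)
        cell_edges[int(q)].append(seg)
    cells = [np.array(e) for e in cell_edges.values() if e]
    samples = []
    while len(samples) < n_samples:
        edges = cells[rng.integers(len(cells))]          # random cell
        lengths = np.linalg.norm(edges[:, 1] - edges[:, 0], axis=1)
        a, b = edges[rng.choice(len(edges), p=lengths / lengths.sum())]
        samples.append(a + rng.uniform() * (b - a))      # uniform on edge
    return np.asarray(samples) + rng.normal(0.0, noise_std, (n_samples, 2))

points = sample_voronoi_boundary(np.random.default_rng(1).uniform(-1, 1, (30, 2)), 500)
```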
Consider, again, the additive noise-corrupted “Antman” two-square dataset on the right of Figure 5. The persistence diagrams for the distance-to-measure filtration and the RDAD filtration are shown in Figure 8 with different confidence bands. Note that in both figures the bands constructed by oracle bootstrapping and b...
Figure 12, followed by the persistence diagrams of the two filtrations in Figure 13. Even without the aid of the confidence bands, one point is conspicuously far away from the diagonal in the persistence diagram of each filtration. The RDAD filtration picks up 2 more significant loops.
We compare the performance of the proposed filtration against that of the distance-to-measure filtration. The sample points are shown in Figure 9, the persistence diagrams are shown in Figure 11, and the significant loops found by oracle and subsample bootstrapping are shown in Figure 10.
Recall the two-square dataset “David and Goliath” in the right subplot of Figure 1 in the introduction. 100 points are uniformly sampled from the bigger square annulus, and 400 from the smaller annulus. Since the dataset has no additive noise or outliers, we compare the distance filtration and the DAD filtration for t...
C
Q3. Does the CIS module truly assist the model to estimate $Y$ based only on what is in $X$? Q4. Are the AU representations extracted by our CISNet invariant to subjects? Q5. What are the differences between models approximating $P(Y|X)$ or $P(Y|do(X))$...
Figure 5: PCC among AUs for different subjects. From left to right, PCC matrices are computed based on the ground-truth AU labels, predicted ones using CISNet (w/ CIS), and predicted ones using the baseline model (w/o CIS), respectively. Numbers under PCC heatmaps are cosine similarities between themselves and the corr...
Conventional AU recognition models aim to estimate AU occurrence probabilities $Y$ as precisely as possible. From our causal diagram, we can see that $Y$ is the effect of two causal paths, which are $X \rightarrow Y$ and $R \rightarrow Y$. The first cau...
To illustrate that the CIS module acts as a causal intervention on Subject and makes the model estimate $Y$ based only on $X$, without unnecessary or even harmful priors from the training data, we visualize the Pearson Correlation Coefficient (PCC) matrices computed based on ground-truth or predicted AU la...
Fig. 5 shows that the PCC heatmaps computed for the baseline model and CISNet differ from each other, and that the PCC heatmaps for CISNet are more similar to those computed from the ground-truth AU labels according to their cosine similarities. This demonstrates that by using CIS to deconfound Subject, the ...
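A small sketch of the comparison described here: compute PCC matrices among AU occurrences with numpy and compare two heatmaps by the cosine similarity of their flattened entries. The AU matrices below are synthetic stand-ins for ground-truth and predicted labels, not the paper's data.

```python
import numpy as np

def pcc_matrix(au_labels):
    """PCC among AUs; au_labels has shape (n_frames, n_AUs)."""
    return np.corrcoef(au_labels, rowvar=False)

def heatmap_cosine(P, Q):
    """Cosine similarity between two flattened PCC matrices."""
    p, q = P.ravel(), Q.ravel()
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

# synthetic stand-ins for ground-truth and predicted AU occurrences
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, (1000, 12)).astype(float)
y_pred = np.clip(y_true + rng.normal(0, 0.3, y_true.shape), 0, 1).round()
print(heatmap_cosine(pcc_matrix(y_true), pcc_matrix(y_pred)))
```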
C
We implemented BBP and conducted experiments over a large-scale blockchain network with many nodes to evaluate its performance. Based on the experimental results, we compare BBP with other block propagation schemes; the results show that BBP achieves the shortest block propagation time. Compared with the current prot...
The rest of this paper is organized as follows. Section 2 analyzes the theoretical scalability of our basic BBP scheme. Section 3 discusses the design of a practical BBP scheme based on Ethereum. Section 4 follows by elaborating on the systematic design of BBP. Section 5 analyzes the block propagation delays of differe...
While the basic BBP scheme can achieve scalability in theory, its applicability in real blockchain systems, e.g., Ethereum, needs further discussion. This section delves into experiments on block validation and transmission time in Ethereum. The results highlight both the prospects and technical challenges in designin...
This section presents the detailed design of BBP in Ethereum. We assume the transaction forwarding and PoW consensus protocols of Ethereum as the building components of the blockchain. Since the implementation aspect of our work is based on Ethereum, we give a brief technical overview of the Ethereum blockchain in Ap...
In this section, we implement and evaluate our BBP scheme over a test network. For comparison, we also implement three typical block propagation protocols: the Legacy Block Propagation (LBP) and Compact Block Propagation (CBP) of Bitcoin, and the Block-Hash Propagation (BHP) of Ethereum.
A
In this paper, we study linear function approximation in POMDPs to address the statistical challenges amplified by infinite observation and state spaces. In particular, our contribution is fourfold. First, we define a class of POMDPs with a linear structure and identify an ill-conditioning measure for sample-efficient ...
More specifically, partial observability poses both statistical and computational challenges. From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property. In particular, predicting the future often involves inferring the distribution of the ...
Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate...
In the context of reinforcement learning with function approximation, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinfo...
Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2...
B
SSD result from an underlying motor/neurological, structural, or sensory/perceptual cause, there is no known cause for functional SSD (ASHA, n.d.) (see Figure 1). The prevalence of SSD varies significantly according to different studies; however, these studies reflect the magnitude of the problem (cite t...
SSD. Personalized speech therapy and practice monitored by SLPs can improve the acquisition of speech skills (Duval et al., 2018). However, the accessibility of SLPs is crucial for such intervention. A report suggests that up to 70% of SLPs have waiting lists, which indicates
Furthermore, we found that articulation disorder was the most frequent disorder addressed by the included studies, with three studies dedicated to it. This may be due to the fact that articulation disorders are commonly found in persons with other SSD (Flipsen Jr, 2015). The results show that most studies aim...
The effectiveness of AI-based automated speech therapy tools depends on their performance compared to the conventional mode of speech therapy provided by SLPs. Moreover, automated speech therapy tools providing wrong feedback can be disastrous to children’s speech improvement. Few studies (4 out of 24) compared the re...
a shortage in the workforce (Duval et al., 2018; V. Robles-Bykbaev et al., 2017). Furthermore, according to the United Nations Children’s Fund (UNICEF), there are not adequate speech language therapy services for children with communication disorders and disabilities (Lansdown et al., 20...
A
Finally, we note that the compression ratio is not overly sensitive to the choice of PCA dimension, and if we use more dimensions than the number of communities, we still get favorable results. For theoretical support, we show in Section E.4 of the appendix that the compression ratios of most points change only mildly ...
Finally, we note a few limitations with our outlier removal algorithm. First, the algorithm is dependent on selecting a reasonable removal percentage. While we observed greater NMI improvement with greater removal rates, it is important to understand what is a suitable choice for different datasets. Another concern is ...
LOF has the best performance in the Zheng4eq and Zheng4uneq datasets. We also add the results for 5% removal in the Appendix, in Section E.3. We also add the improvements in the purity index for the 5% and 10% point removal cases, which is another popular measure of clustering accuracy. As aggregate infor...
Finally, we note that the compression ratio is not overly sensitive to the choice of PCA dimension, and if we use more dimensions than the number of communities, we still get favorable results. For theoretical support, we show in Section E.4 of the appendix that the compression ratios of most points change only mildly ...
Next, we study the usefulness of our outlier detection method in these datasets. Unlike our simulations, there is no ground-truth labeling for outliers. Rather, we assume that in each community (as defined by the labels provided with the dataset), some points may behave more like an outlier, in that they may be a mixt...
A
In addition, we show again, as in the baseline situations, that the first two levels of visual missingness are innocuous for the SGG tasks. This provides empirical evidence and further insight for introducing deliberate obfuscations for privacy concerns, as in [13].
We conduct experiments on the Scene Graph Generation (SGG) task to test the feasibility of the task setting with missing visual input and to demonstrate the effectiveness of our proposed method. The SGG task aims to generate a graphical representation of the scene from given images.
In this paper, we investigate the SGG task setting with missing input visions, and propose to supplement the missing visual information via interactive natural language dialog using the proposed SI-Dial framework. Extensive experiments on the benchmark dataset with various levels of missingness demonstrate the feasibi...
2) We propose a model-agnostic dialog framework, SI-Dial, which can be jointly trained with various existing models and endows AI systems with interactive communication abilities. 3) We perform extensive experiments and analysis with insufficient visual input at three different levels of data missingness, and d...
As the primary information source for various computer vision tasks, visual input data play a significant role in most existing works in achieving competitive and promising performance. It is reasonable to expect a performance drop under the task setting with incomplete visual input. To tackle the problem, we propo...
B
This paper extends the classical facility location game on the real line by incorporating entrance fee functions, adding versatility to the model. The extension prompts a reevaluation of existing facility location games, like capacitated and heterogeneous facilities, opening avenues for broader applications. Our arbitr...
However, the arbitrariness of the entrance fee function introduces new challenges in designing strategyproof mechanisms. Agent preferences may no longer adhere to single-peakedness [22, 5], and standard mechanisms for the classical model cannot be directly extended to our setting while preserving strategyproofness. To ...
Moreover, we complement the proposed mechanisms with tight or nearly tight lower bounds, also parameterized by $r_e$. While lower bounds for the classical model are applicable in our model, given that the classical model is a special case of ours, w...
A notable open problem is to narrow the gaps between our bounds in Table 3. In the classical model, randomized mechanisms such as the left-right-middle and proportional mechanisms achieve better ratios than deterministic mechanisms. However, these do not extend to our models while remaining strategyproof. Designing imp...
When the entrance fee is consistently 0, our model reduces to the classical model. Therefore, our mechanisms must also encompass those of the classical model: By letting $r_e = 1$, one can verify that the ratios in Table 3 match those in Table 1...
C
It is clear that after the execution of the above procedure, the number of connected components of $G(F^*)$ has decreased by 1, since there is a path connecting $v_1$ and $v_2$...
The polynomial reduction given in [30] transforms connected instances of PLANAR POSITIVE 1in3SAT to connected instances of CUBIC PLANAR POSITIVE 1in3SAT. The NP-Completeness of this variant is guaranteed by the NP-Completeness of CONNECTED CUBIC PLANAR POSITIVE 1in3SAT and the correctness of this reduction.
Firstly, we describe a polynomial-time reduction from an input instance of Connected cubic planar positive 1in3SAT, a formula $F$, to an input instance of Connected subcubic planar $C_4$-free positive 1in3SAT, a formula $F'$...
The NP-Completeness proof of this more restricted variant can be found in [30], where a polynomial reduction is presented to transform an input instance $F$ of PLANAR POSITIVE 1in3SAT to an instance $F'$ of CUBIC PLANAR POSITIVE 1in3SAT....
We prove this restricted variant of 1in3SAT is also NP-Complete. Let $F$ be an input instance of PLANAR POSITIVE 1in3SAT; we will construct a positive formula $F^*$ in CNF, an input instance of CONNECTED PLANAR POSITIVE 1in3SAT, and show that $F$...
A
We use the examples in Tab. 1 to verify Theorem 5.1 numerically, also discussing potential limitations of the proposed methodology to design fast ReLU-based proxies for ultimate boundedness control of polytopic systems. Simulations are run in Matlab using Gurobi 24 as an MILP solver on a laptop with a Quad-Core Intel ...
The NN complexity, characterized by its depth $L$ (the number of hidden layers) and width $\bar{N}$ (the number of neurons, for simplicity identical through the layers), plays a fundamental role in reproducing a given $\Phi(\cdot)$ as faithfully as possible. Gi...
For all the examples in Tab. 1, we have $n=2$ and $m=1$, while $\bar{k}=2$, which coincides with the dimension of the state space. Thus, to reproduce any of our controllers $\Phi(\cdot)$ exactly, the ReLU ne...
To set the complexity of each ReLU network, we have followed the indications reported in the corresponding rows of Tab. 2, where we have considered the case of equally distributed neurons across the $L$ hidden layers. Note that, in accordance with Tab. 2 indeed, we are implicitly designing minimum-complexity Re...
On the other hand, a well-known drawback of MI optimization is poor scalability with increasing problem size. While it has already been observed in [17] that the computation time of the worst-case error $\bar{e}_\infty$ is only weakly ...
C
We model the dynamics of the objects using an ordinary differential equation (ODE) and use implicit neural representations to model the appearance, where the static background and the planar dynamics allow us to model the appearance in 2D. Our objective is to estimate the unknown physical parameters, and the initial co...
To show the capabilities of our approach on real-world data, we captured videos of three physical systems: a block sliding on an inclined plane, a thrown ball (see Figure 6), and a pendulum (see Figure 1). For the block, the initial position and velocity, the angle of the plane and the coefficient of friction are the unk...
For many physical phenomena, humans are able to infer (a rough estimate of) physical quantities from observing a scene, and are even capable of predicting what is going to happen in the (near) future. In contrast, physical understanding from videos is an open problem in machine learning. The physics of many real-world p...
Qualitative results for a single scene can be seen in Figure 5, Table 2 shows a quantitative evaluation over all sequences. For more results we refer to the appendix. We see that our model produces photorealistic renderings of the scene, even for the predicted frames. While both baselines yield similar results on the t...
For most of the dynamics that can be observed in nature, the temporal evolution of the state can be described by an ODE. For example, for a pendulum the state variables are the deflection angle and the angular velocity, and a two dimensional ODE can be used to describe the dynamics.
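As a concrete instance of the pendulum example, a hedged sketch integrating the two-dimensional ODE with scipy; the parameter values (g_over_l, damping, initial deflection) are illustrative, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, state, g_over_l=9.81, damping=0.1):
    """State = (deflection angle, angular velocity): a 2D ODE."""
    theta, omega = state
    return [omega, -g_over_l * np.sin(theta) - damping * omega]

# 30-degree initial deflection, at rest; integrate for 5 seconds
sol = solve_ivp(pendulum, (0.0, 5.0), [np.deg2rad(30), 0.0],
                t_eval=np.linspace(0.0, 5.0, 200))
theta_t = sol.y[0]  # angle trajectory over time
```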
D
Quantum communication networks (QCNs) utilize quantum mechanics principles to enhance information transfer. QCNs transmit data using quantum states that are entangled and can exist in a superposition of multiple states simultaneously, offering greater efficiency than classical networks [1]. However, these quantum stat...
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework. This framework draws upon recent advancements in two key quantum information science areas. First, it utilizes high-dimensional quantum information to extract underlying structures of classi...
In contrast to conventional QCN frameworks, where the receiver typically aims to execute quantum gates and measurements for the precise reconstruction of the embedded data structure within quantum states, our framework distinguishes itself. Here, the receiver’s goal is to draw specific logical conclusions [13], which m...
The majority of QCN models optimize the quantum resource allocation and network overall performance by embedding classical data into quantum states that are shared over quantum channels between distant nodes [3, 4, 5, 6]. Additionally, numerous approaches have been proposed to develop resource-efficient QCNs, including...
Here, the stored quantum vectors are initialized to different clusters either in an arbitrary fashion or by utilizing efficient heuristic approaches. Then, multiple iterations are performed such that in each iteration, the goal is to minimize the loss function in (2), which ensures that each vector is assigned to the c...
C
Since the set of states of $G_D$ is identical to (or a subset of) the set of states of $G_v$, we can also classify the states of $G_D$...
In this section, a verifier and a defensive verifier are constructed to respectively capture system behavior and all feasible defensive actions following system activity. Then, an $E$-verifier is built by a special synchronization mechanism between a verifier and a defensive verifier to verify $C$-enf...
Given an $E$-verifier, we can check the necessary condition for the defensive function to be $C$-enforcing by following Algorithm 1. However, it is possible that a defensive function may not be $C$-enforcing even though the necessary condition is satisfied.
For the given system $G$, we denote the set of possible defensive actions outputted via the defensive function under deletion, insertion, and replacement constraints by $E_D$, which is defined as $E_D = \bigcup_{t \in E_o} D(t)$...
A $C$-enforcing defensive function should ensure that all possible defensive actions keep the eavesdropper confused regardless of system activity. Thus, when the defensive function is subject to constraints, we propose a new construction by composing a verifier and a defensive verifier of a given system to cap...
D
$o^{(i)}_t = \big[\,[I^{\mathrm{pres}}_{i,s}]_{s \in \mathcal{I}_s},\ [L_n]_{n \in \mathcal{R}_i}\big],$ ...
Therefore, we add a fourth sub-vector, $[B^N_n]_{n \in \mathcal{R}_i}$...
The transmissions must satisfy the rules and constraints below. A parent IAB-node – more specifically, its RL agent(s) – will have to choose between (1) sending data bits to a child IAB-node via a backhaul link to refill its buffer, and (2) directly transmitting to a UE via an access link to myopically improve i...
Figs. 8(c) and 9(c) provide an insight into the average fraction of the traffic delivered to UEs through the IAB wireless backhaul per frame. The upper translucent part of each bar represents the average percentage of the data volume received by UEs from IAB-nodes via multi-hops, while the lower opaque part reports the...
HD Mode: All the above three sub-vectors also appear in the observation vector for the HD mode. However, differently from FD mode, operating in HD mode introduces restrictions between parent and child nodes: if a parent node transmits to a child node, the receiving child node cannot simultaneously transmit. This may hi...
D
We begin with a stylized example to highlight the interaction between an agent’s incentives and the principal’s statistical protocol. Suppose there are two types of pharmaceutical companies: companies with ineffective drugs ($\theta=0$) and companies with effective drugs ($\theta=1$). F...
Is this a good statistical protocol? The answer depends on how much money the pharmaceutical company will make, among other things. In particular, depending on the total profit the company earns when they are approved, even companies with ineffective drugs may be incentivized to run a trial.
We begin with a stylized example to highlight the interaction between an agent’s incentives and the principal’s statistical protocol. Suppose there are two types of pharmaceutical companies: companies with ineffective drugs ($\theta=0$) and companies with effective drugs ($\theta=1$). F...
We report on the expected value of a placebo drug under the three protocols above in Table 1. We find that for typical drugs with $1-10B profit if approved, the standard protocol requiring two trials is incentive-aligned. For extremely profitable drugs earning $100B or more, the protocol ceases to be incentive-aligned....
Conversely, the statistical protocol changes the incentives of the agents. Consider again the large profit case above, where agents receive 100 times their initial investment if they receive approval. Now, however, suppose the principal changes to a stricter protocol such that the probability of approval is only 0.005...
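A back-of-envelope sketch of this incentive calculation, under the illustrative assumptions that an ineffective drug passes a single trial with probability 0.05 (the significance level) and that each trial costs $50M; neither number is taken from the paper. It reproduces the qualitative pattern above: negative expected value at $1-10B, positive at $100B.

```python
# False-positive rate per trial and trial cost are illustrative assumptions.
def expected_value(profit, trial_cost=50e6, n_trials=2, fp_rate=0.05):
    """Expected value for a company with an ineffective drug (theta = 0):
    it is approved only if every trial yields a false positive."""
    return fp_rate ** n_trials * profit - n_trials * trial_cost

for profit in (1e9, 10e9, 100e9):
    print(f"${profit/1e9:.0f}B profit: EV = {expected_value(profit)/1e6:+.1f}M")
# -> negative EV at $1B and $10B (incentive-aligned), positive at $100B
```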
A
Image Registration is a crucial task in medical imaging applications, allowing imaging features to be spatially aligned between two or multiple scans. Registration methods are today a central component of state-of-the-art methods for atlas-based segmentation [41, 14], morphological and functional analysis [17, 4], multi-mod...
Image registration is the workhorse of many real-life medical imaging software and applications, including public web-based services for automated segmentation and labelling of medical images. Using these services generally requires uploading and exchanging medical images over the Internet, to subsequently perform imag...
Image Registration is a crucial task in medical imaging applications, allowing imaging features to be spatially aligned between two or multiple scans. Registration methods are today a central component of state-of-the-art methods for atlas-based segmentation [41, 14], morphological and functional analysis [17, 4], multi-mod...
While PPIR focuses on the privacy-preserving formulation of classical image registration methods based on gradient-based optimization, in recent years the research community has been steering its attention towards deep learning (DL)-based image registration [42, 27, 46, 9]. Among the medical imaging applicati...
This work presents privacy-preserving image registration (PPIR), a new methodological framework allowing image registration under privacy constraints. To this end, we reformulate the image registration problem to integrate cryptographic tools, namely MPC or FHE, thus preserving the privacy of the image data. Due to the...
A
In order to verify the effectiveness of our method, we compare several methods of response-based KD and black-box KD. We select KD [19] proposed by Hinton et al. and ML [3] proposed by Ba and Caruana as the baselines, and we also compare the recently published DKD [58] based on decoupled KLD.
We conduct distillation experiments with different teacher-student model pairs, using ResNet32 / ResNet56 / VGG13 / ResNet110 / ResNet50 / ResNeXt101 as teacher models and ResNet8 / ResNet32 / VGG11 / MobileNet / ResNet34 / ResNeXt50 as student models.
Distillation performance is tested on various datasets, such as MNIST, CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet-1K, with top-1 classification accuracy used as the evaluation metric. The experimental results are shown in Tab. 7, Tab. 8 and Tab. 9.
On MNIST, CIFAR, and Tiny ImageNet, we use ResNet32/56/110 and VGG13 as the teacher model and use ResNet8/32, VGG11, and MobileNet as the student model. We compare the top-1 classification accuracy (ACC) of different teacher-student pairs; the results are shown in Tab. 1.
Figure 7: Curve of top-1 classification accuracy on the datasets of CIFAR-100 (a,b) and CIFAR-10 (c,d), using MEKD with soft (a,c) or hard (b,d) responses, with or without $\mathcal{L}_{IM}$ and $\mathcal{L}_{KL}$...
C
$E_a(\cos(\pi N x), \mathsf{ReLU}) = E_a(T_N(x), \mathsf{ReLU}) = \frac{1}{\pi}\sqrt{\frac{\pi^2}{2} - 4} \simeq 0.31$
Theorem 1 predicts that in certain situations FNO has a systematic bias. Namely, if one trains FNO on an insufficiently fine grid, activation functions introduce distortions that FNO will learn to mitigate. When the grid is sufficiently refined, aliasing errors disappear, but since FNO was trained to mitigate them, it ...
Our solution to the problem of aliasing errors is to decouple interpolation from the mapping between functional spaces. It is described in Section 4. However, it is possible to decrease aliasing error even when FNO is used for interpolation. One simply needs to train (or fine-tune) on a sufficiently fine grid. The dec...
Suppose our function $f(x)$ is (exactly) represented as a Fourier series with $|k| < N$ terms. We can equivalently store values of the function on the uniform grid with $2N+1$ points. However, when we apply an activation function $\sigma(x)$...
The proof can be found in Appendix A. More results on aliasing for composition with smooth functions can be found in [Ber+06]. The aliasing error is quite substantial, but since all the energy in the theorem above is confined to the highest possible harmonics, in practice one can expect a milder discrepancy.
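A numerical illustration of the aliasing mechanism described above (not the paper's experiment): apply ReLU to a band-limited signal with top mode N, measure how much energy lands above mode N, and compare the coarse-grid spectrum against a fine-grid reference.

```python
import numpy as np

N = 8
def relu_coeffs(n_grid):
    """One-sided Fourier coefficients of ReLU(cos(2*pi*N*x)) on n_grid points."""
    x = np.arange(n_grid) / n_grid
    return np.fft.rfft(np.maximum(np.cos(2 * np.pi * N * x), 0.0)) / n_grid

fine = relu_coeffs(4096)          # effectively alias-free reference
coarse = relu_coeffs(2 * N + 1)   # only modes |k| <= N are representable

# ReLU pushes part of the energy above mode N ...
print("energy above N:", (np.abs(fine[N+1:])**2).sum() / (np.abs(fine)**2).sum())
# ... which on the coarse grid folds back onto modes |k| <= N (aliasing):
print("max coefficient error:", np.abs(coarse - fine[:N + 1]).max())
```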
D
The success of the Transformer [44] in NLP has attracted much attention from the computer vision community [13, 24, 29, 45]. While the original ViT [13] suffered from a heavy computation burden, Liu et al. [29] proposed the shifted-window scheme, which computes attention at the patch level. The Pyramid ViT [45] proposed a progressive...
As our method computes the correlation between feature spatial locations, it might become intractable when feature maps are large. To this end, we extend our pipeline in a two-step hierarchical fashion: 1) instead of computing correlation of all spatial locations, we split the feature maps into several groups of patche...
Figure 2: Illustration of our framework. (a) Target-aware Transformer. Conditioned on the teacher feature and the student feature, the transformation map Corr. is computed and then applied on the student feature to reconfigure itself, which is then asked to minimize the L$_2$...
In this section, we first briefly describe the fundamental elements of feature map knowledge distillation and then introduce the general formulation of our knowledge distillation via a target-aware transformer. As our method computes the point-wise correlation of the given feature maps, the computational complexity bec...
To this end, we propose a novel one-to-all spatial matching knowledge distillation approach. In Figure 1 (c), our method distills the teacher’s features at each spatial location into all components of the student features through a parametric correlation, i.e., the distillation loss is a weighted summation of all stude...
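A hedged, non-parametric sketch of the one-to-all matching idea described here: each teacher location is compared to all student locations, the correlations are softmax-normalized, and the loss is a weighted reconstruction error. The paper's transformer is parametric; the plain dot-product correlation below is only illustrative.

```python
import numpy as np

def one_to_all_loss(f_t, f_s):
    """f_t, f_s: (H*W, C) teacher/student features, flattened spatially.
    Each teacher location is matched to all student locations through a
    softmax-normalized correlation; the loss is a weighted reconstruction."""
    corr = f_t @ f_s.T                          # (HW, HW) similarities
    w = np.exp(corr - corr.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)           # one-to-all weights
    return np.mean((f_t - w @ f_s) ** 2)

rng = np.random.default_rng(0)
print(one_to_all_loss(rng.normal(size=(64, 32)), rng.normal(size=(64, 32))))
```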
C
Although the original paper proposes default values and ranges for the t-SNE hyperparameters (perplexity, learning rate, etc.), automatically selecting these parameters is also a topic of interest [9, 7]. A recent paper reviews t-SNE and applications thereof [15].
However, feature grouping is not always clear. The data might have hundreds of features or come from a domain where the meaning of features is unclear. Subspace clustering algorithms can efficiently find subspaces of interest. The USDA food composition dataset is frequently analyzed in the subspace clustering literature; we use tw...
Tatu et al. [34] use the SURFING [5] algorithm to prune away uninteresting subspaces, while the interesting subspaces are embedded with MDS and incorporated into a visual analytics tool for further filtering and exploration. Fujiwara et al. provide a feature learning method and visualization tool to explore non-axis al...
Compare this to how MPSE performs; see Fig. 3. Clusters are more mixed and not clearly separable. For example, the projection that is supposed to separate the data by color mixes up the blue and red clusters, and both of them are too close to the green. If the cluster identities were not given as part of the input, w...
A single projection or perspective may not be sufficient to understand diverse patterns in high-dimensional data. The goal of subspace clustering is to find multiple embeddings, each capturing a different aspect of the data [27]. Indeed, two different sets of dimensions may hold different, or even conflicting, patterns...
D
An Overview of Embedding Learning. We now summarize the learning procedure of the embedding. First, we estimate the density mappings defined in (3.6) and (3.7) under the true parameter $\theta^*$ based on interaction history. Second, we estimate the Be...
In the sequel, we describe the procedure of ETC. In summary, ETC iteratively (i) interacts with the environment to collect observations, (ii) fits the density mappings defined in (3.6) and (3.7), respectively, from observations, (iii) identifies a confidence set of parameters by fitting the Bellman equations according to ...
In this paper, we propose Embed to Control (ETC) as a unified framework for embedding and control in POMDPs. In particular, by exploiting the low-rank transition and the future sufficiency condition, we decompose the embedding learning into the learning of Bellman operators across multiple steps. By assembling the Bell...
An Overview of Embedding Learning. We now summarize the learning procedure of the embedding. First, we estimate the density mappings defined in (3.6) and (3.7) under the true parameter $\theta^*$ based on interaction history. Second, we estimate the Be...
We analyze the sample efficiency of ETC under the future and past sufficiency assumptions. In particular, such assumptions ensure that the future and past observations are sufficient for identifying the belief state, which captures the information-theoretic difficulty of POMDPs. We prove that ETC attains an $O(1/\epsilon^2)$...
A
where $\mathcal{P}_h^b(O_h \mid A_h = a,\, O_{h-k-1})$ ...
From a theoretical perspective, the identification result and the backward induction property of the bridge functions provide a way of decomposing the suboptimality of the learned policy in terms of statistical errors of the bridge functions. When combined with the pessimism and the fast statistical rates enjoyed by an...
Now, combining the confidence region (3.14) and the identification formula (3.6), for any policy $\pi \in \Pi(\mathcal{H})$, we adopt a pessimistic estimate of the value $J(\pi)$ as
Now given Assumption 3.1 and Assumption 3.5 on the existence of proxy variables and bridge functions, we are ready to present the main identification result. It represents the true policy value $J(\pi)$ via the value bridge functions (3.1),
According to Theorem 3.8 and Assumption 3.5, to estimate the value $J(\pi)$ of $\pi \in \Pi(\mathcal{H})$, it suffices to estimate the value bridge functions $\{b_h^\pi\}_{h=1}^{H}$...
C
where $\mathcal{L}(\bm{x}, \bm{\lambda}) = f(\bm{x}) + \bm{\lambda}^{T} c(\bm{x})$ is the La...
Given the prevalence of streaming datasets in modern problems, offline methods that require dealing with a large batch set in each step are less attractive. It is desirable to design fully online methods, where only a single sample is used in each step, and to perform online statistical inference by leveraging those me...
There are numerous methods for solving constrained optimization problems, such as projection-based methods, penalty methods, augmented Lagrangian methods, and sequential quadratic programming (SQP) methods (Nocedal and Wright, 2006). This paper particularly considers solving constrained stochastic optimization problems vi...
On the other hand, a growing body of literature leverages optimization procedures to facilitate online inference, starting with Robbins and Monro (1951) and Kiefer and Wolfowitz (1952) and continuing through Robbins and Siegmund (1971), Fabian (1973), and Ermoliev (1983). To study the asymptotic distribution of stochastic ...
In particular, the StoSQP method computes a stochastic Newton direction in each iteration by solving a quadratic program, whose objective model is estimated using the new sample. Then, the method selects a proper stepsize to achieve a sufficient reduction on the merit function, which balances the optimality and feasibi...
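A deterministic sketch of the Newton direction computed in each (Sto)SQP iteration, obtained by solving the KKT system of the quadratic program. In StoSQP the gradient, Jacobian, and Hessian below would be single-sample estimates, and a merit-function stepsize selection would follow; here exact quantities are used for illustration.

```python
import numpy as np

def sqp_step(grad_f, jac_c, c_val, hess_lag):
    """Solve the KKT system for the Newton direction dx and the
    multiplier step dlambda:
        [H  J^T][dx     ]   [-grad_f]
        [J   0 ][dlambda] = [-c     ]"""
    n, m = hess_lag.shape[0], jac_c.shape[0]
    kkt = np.block([[hess_lag, jac_c.T],
                    [jac_c, np.zeros((m, m))]])
    sol = np.linalg.solve(kkt, -np.concatenate([grad_f, c_val]))
    return sol[:n], sol[n:]

# toy problem: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0, starting at x = (0, 0)
x = np.zeros(2)
dx, dlam = sqp_step(grad_f=2 * x,
                    jac_c=np.array([[1.0, 1.0]]),
                    c_val=np.array([x.sum() - 1.0]),
                    hess_lag=2 * np.eye(2))
x = x + dx   # lands on the optimum (0.5, 0.5) in one step for this QP
```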
A
Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family $Q_k$–$Q_{k-1}$ on quadrilateral/hexahedral meshes focused on the case...
In this paper we focus on the related generalized Taylor-Hood family $Q_k$–$Q_{k-1}$ on quadrilateral/hexahedral meshes under the following assumptions.
The present paper suffers from the same rather severe restrictions on hexahedral meshes in 3D as in previous work. The analysis of discrete inf-sup conditions for general hexahedral meshes remains an open problem. Another open problem is the analysis of isoparametric generalized Taylor-Hood families in 2D and 3D to co...
The discrete LBB condition could also be shown for the isogeometric generalized Taylor-Hood family, see [6], [7]. The proof there relies on a continuously differentiable parametrization of the domain $\Omega$ on each of a fixed number of patches, which does not cover general quadrilateral/hexahedral meshes.
Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family $Q_k$–$Q_{k-1}$ on quadrilateral/hexahedral meshes focused on the case...
B
ConvMixer [17], MLP-Mixer [15] and PoolFormer [50] as token-mixers. Top-1 accuracy on the test set for the best of three runs with random initialization is reported as a generalization metric, based on prevailing protocols [44]. For the ImageNet-1k, Places-365 and iNAT-mini datasets, we have reported Top-1 accuracy on the validation...
To evaluate the performance of WaveMix on domain-specific datasets where data availability is low, we choose the task of galaxy morphology classification on the Galaxy 10 DECals dataset [64]. There is a lack of efficient methods that can extract information from astronomical surveys to classify galaxies, and creating large a...
For Cityscapes, we used the full-resolution 1024×2048 images for training and inference. Images were resized to 256×256 for Places-365 and 224×224 for iNAT-mini and ImageNet-1k datasets. Only horizontal flip was used as data augm...
Table 3 shows the performance of WaveMix for image classification on multiple datasets with image sizes ranging from small to large. WaveMix achieved state-of-the-art (SOTA) accuracy of 56.45% on the Places-365 standard (365 classes) validation set (256×256) among the models that were not pre-tr...
Table 3: WaveMix outperforms all previous models for image classification and achieves state-of-the-art (SOTA) results on Galaxy 10 DECals, all five EMNIST datasets, Places-365 validation set (365 classes) and INAT-mini validation set (10,000 classes). See Table 7 for architectural details. *TrivialAugment [78] was use...
A
model, which is defined by $\mathbb{R}^{12} \times (\mathbb{R}^{\mathbb{N}})^4$, and a d...
$\eth_1$ is the trivial derivation on $(\mathbb{R}^{\mathbb{N}})^4$: $\eth_1 := \sum_{z \in \{F, \delta l, \delta m, \delta n\}} \sum_{k \in \mathbb{N}} z^{(k+1)}\, \partial/\partial...
model, which is defined by $\mathbb{R}^{12} \times (\mathbb{R}^{\mathbb{N}})^4$, and a d...
$\mathbb{R}^n$ and $u_{i,k}$, $(i,k) \in \mathbb{N} \times [1,m]$, for $(\mathbb{R}^{\mathbb{N}})^m$...
The trivial diffiety $\mathbf{T}^m$ is $(\mathbb{R}^{\mathbb{N}})^m$ equipped wit...
A
The evolution of the overall area can be traced from an early ranking task (Pagael and Schubotz, 2014) reliant on heuristics and rules (Alexeeva et al., 2020), through edge classification with machine learning (Stathopoulos et al., 2018), to language modelling with Transformers (Jo et al., 2021). There are separate datasets propose...
There is high variability in scoping definitions. The scope from which identifiers are linked to descriptions varies significantly, and this is one of the reasons it is difficult to compare the performance of methods even when they tackle the same variant of the task (Schubotz et al., 2017; Alexeeva et al., 2020). At the smalles...
Identifier-definition extraction limitations. Methods considering the specific link between identifiers and their definitions have split off into at least three recent tasks: identifier-definition extraction (Schubotz et al., 2017; Alexeeva et al., 2020), variable typing (Stathopoulos et al., 2018), and notation auto-sugge...
Identifier-Definition Extraction. Leading work in premise selection (Ferreira and Freitas, 2020; Ferreira and Freitas, 2021) and informal theorem proving (Welleck et al., 2021) has explicitly highlighted the need for improved pairing of variables with descriptions. The varied tasks related to identifier-definition extraction lack communally...
A significant proportion of variables or identifiers in formulae or text are explicitly defined in the context (Wolska, 2010). Descriptions are usually local to the first instance of the identifiers in the discourse. It is the broad goal of identifier-definition extraction and related tasks to pair up identifiers wi...
A
A classical tool to infer such a mesoscale structure of a single network is the Stochastic Block Model (SBM; Holland et al., 1983; Snijders and Nowicki, 1997). In the SBM, a latent variable is associated with each node giving its group/block membership. Nodes belonging to the same block share the same connectivity pa...
In practice, Luczkovich et al. (2003) find that species are grouped into blocks by trophic level, and some separation might occur based on trophic chains. Other papers lead to a similar interpretation of the blocks for stochastic equivalence when fitting SBMs on food webs. It is also noticed that communities (blocks of...
However, assuming that the blocks are represented in the same proportions in each network is a strong assumption that may lead to the model being of little practical use. In food webs, the proportion of species at a given trophic level may differ between networks that nevertheless share the same structure, for example...
We obtain respectively $\widehat{Q}_1 = 5$ blocks for Martins, $\widehat{Q}_2 = 3$ blocks for Cooper and $\widehat{Q}_3 = 4$...
In social (resp. ecological) networks, individuals (resp. species) with the same block membership play the same social/ecological role in their system (Boorman and White, 1976; Luczkovich et al., 2003). In food webs, species playing the same ecological role are said to be ecologically equivalent (see Cirtwill et al.,...
D
To illustrate this, if we look at Fig. 1(a) [24], at least two interpretations can be made (Fig. 1(b) and 1(c)), based on the same observation. This simple example illustrates a typical process of image perception, in which causal inference (in the anti-causal direction) is made by utilizing the mechanisms of either oc...
Figure 2: The CED architecture. Potential classes are hypothesized by the classifier $C$, and verification of these classes is made by the estimator $E$ and the identifier $D$ through the pipeline of (1) analyzing possible transformations, (2) reconstructing from candidates and (3) matching ...
Figure 1: What is in image (a)? There are at least two ways to interpret it, i.e., (b) three black circles partly covered by a white triangle, or (c) three black circles with a notch on each of them. (The former one may have a stronger tendency in perception, according to the Gestalt principles [19].)
Specifically, the process consists of a hypothesis (of the content of three circles and a triangle) and the verification (whether a figure like this can be generated by covering the triangle over the circles). If another hypothesis (e.g. of just three circles) and a corresponding verification (by making a notch in each...
To illustrate this, if we look at Fig. 1(a) [24], at least two interpretations can be made (Fig. 1(b) and 1(c)), based on the same observation. This simple example illustrates a typical process of image perception, in which causal inference (in the anti-causal direction) is made by utilizing the mechanisms of either oc...
C
In real settings, when we do not have knowledge about the dataset, the exact $\pi$ cannot be computed and needs to be estimated; this is referred to as the mixture proportion estimation (MPE) problem. Formally speaking, MPE refers to the task of estimating the weight of a component distribu...
Unlike recent supervised variants of InfoNCE (Khosla et al., 2020; Assran et al., 2020; Zhong et al., 2021), which can only leverage explicit (strong) supervision (e.g., in the form of labeled data), puNCE is also able to leverage implicit (weak) supervision from the unlabeled data. The main idea is to use the fact that...
Owing to its importance in several real-world problems (e.g., recommendation), developing specialized learning algorithms for the PU setting has received renewed impetus in the machine learning community. Most of the recent research in this area can be broadly categorized into two major classes of algorithms, based on how t...
PU Learning.  Owing to its importance in several real-world problems (e.g., recommendation), developing specialized learning algorithms for the PU setting has received renewed impetus in the machine learning community. Most of the recent research in this area can be broadly categorized into two major classes of algorithms, ...
Note that this is a standard assumption made in the PU learning literature and is at the heart of most classical cost-sensitive PU learning algorithms (Elkan and Noto, 2008; Kiryo et al., 2017; Du Plessis et al., 2014; Chen et al., 2020a; Niu et al., 2016).
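For concreteness, a sketch of a classical cost-sensitive estimator built on this mixture assumption, the non-negative PU risk of Kiryo et al. (2017); this is not puNCE itself, and the class prior pi would come from an MPE procedure.

```python
import numpy as np

def nnpu_risk(scores_p, scores_u, pi, loss=lambda z: np.log1p(np.exp(-z))):
    """Non-negative PU risk estimator in the style of Kiryo et al. (2017):
    unlabeled data is treated as a (pi, 1 - pi) mixture of the positive and
    negative class-conditionals; the negative part is clipped at zero."""
    r_p_pos = loss(scores_p).mean()    # positives scored as class +1
    r_p_neg = loss(-scores_p).mean()   # positives scored as class -1
    r_u_neg = loss(-scores_u).mean()   # unlabeled scored as class -1
    return pi * r_p_pos + max(0.0, r_u_neg - pi * r_p_neg)

rng = np.random.default_rng(0)
print(nnpu_risk(rng.normal(1, 1, 200), rng.normal(0, 1, 800), pi=0.4))
```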
C
Analyzing the third factor matrix is a significant focus of our work, and we propose three methods for interpreting it to quantify layer interdependence based on the structure of that factor matrix. Furthermore, we propose definitions of layer interdependence based on likelihood ratio tests (LRTs) between different mo...
Situated in this related work, the contributions of our work are as follows: (i) we use and expand the motivation of the nonnegative Tucker decomposition with KL-divergence as a natural extension of the dc-mm-SBM to multilayer networks by allowing for distinct latent structure in the nodes and layers; (ii) we propose ...
In this work we use the nonnegative Tucker decomposition (NNTuck) with KL-divergence as an extension of the stochastic block model (SBM) to multilayer networks. The NNTuck allows for layers in the network to have latent structure, just as the SBM allows for latent structure in the nodes of a single-layer network. Usin...
The structure of this work is as follows. In Section 2 we discuss and define the notation of stochastic block models (SBMs), multilayer networks, and previous and related approaches to multilayer SBMs. In Section 3 we define the nonnegative Tucker decomposition (NNTuck) and its notation, discuss the connection of its...
In this section we discuss related work and define notation and vocabulary. For easy reference, the primary notation is organized in Table 1. We present stochastic block models (SBMs) in Section 2.1 and nonnegative matrix factorization (NMF) in Section 2.2. In Section 2.3 we introduce tensor vocabulary and notation use...
C
Inspired by the promising performance that graph convolutional networks (GCNs) [21] achieved in learning semantic representations of KG entities, we propose to adapt GCNs for answering multi-relation questions with single-step implicit reasoning that is simpler, more efficient, and easier to adopt than existing reason...
Also, TransferNet and NSM are state-of-the-art (SOTA) reasoning-based methods in multi-relation QA. Furthermore, considering that our answer search in the learned embedding space is similar to embedding-based methods, we also select a prominently adopted embedding-based method: EmbedKGQA [22].
Specifically, in this paper, we propose a novel Question-Aware GCN-based QA method, called QAGCN, which encodes questions and KG entities in a joint embedding space where questions are close to correct answers (entities). The intuition of our method is as follows:
Baselines. We first evaluate the overall effectiveness of QAGCN in the multi-relation QA task. Given that the main goal of this paper is to propose a simple method that is competitive with existing reasoning-based methods that rely on complex reasoning mechanisms, we mainly choose reasoning-based QA methods as baselines:...
The model consists of a question encoder and a graph encoder that compute semantic representations (embeddings) of the question and subgraph entities, respectively, through several layers of encoding. We select answers from subgraph entities according to their distances to the question in the output embedding space and...
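A minimal sketch of the answer-selection step under this design, with the question and entity encoders omitted; the embeddings below are random stand-ins.

```python
import numpy as np

def select_answers(q_emb, entity_embs, top_k=1):
    """Rank subgraph entities by Euclidean distance to the question in the
    joint embedding space; the closest entities are the predicted answers."""
    dists = np.linalg.norm(entity_embs - q_emb, axis=1)
    return np.argsort(dists)[:top_k]

# hypothetical embeddings: one question, five candidate entities, dim 8
rng = np.random.default_rng(0)
print(select_answers(rng.normal(size=8), rng.normal(size=(5, 8))))
```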
B
When running the densely encoded version of the word embeddings classifier from Section 3.3, 100% accuracy was achieved using 16 embedding dimensions and only 4 qubits (Alexander and Widdows, 2022). This model achieves perfect accuracy on the lambeq set and in the fewest qubits of all the methods covered.
and to avoid redundancy and get more information out of each dimension, more compact distributional vector embeddings are often preferred. The work by Alexander and Widdows (2022) details the method used to classify words from their vector embeddings using a quantum support vector machine (QSVM), and dem...
and results accuracy can be considered in light of what problems users are trying to solve. For example, the work by Alexander and Widdows (2022) investigates solely the effects of decreasing space in the QSVM using a densely encoded feature map. Improved accuracy from 90% to 100% in fewer qubits on ...
For the problem of correctly classifying general text data samples using quantum computation, properly reflecting the complexity of language in quantum representations is challenging. The accuracies from quantum circuits are dependent on the compatibility between the language dataset, type of classes (sentiment, topic...
To draw deeper conclusions on scalability, Alexander and Widdows (2022) tested the QSVM classifiers on a more complex dataset of IMDb movie reviews (Maas et al., 2011). Actual reviews taken from the database incorporated varied word combinations and colloquial language,
D
We propose an approach to enhance GNN performance on less-homophilic graphs by restructuring the graph to maximize homophily. Our method is inspired by and closely related to Spectral Clustering (SC). It extends SC beyond the leading eigenvalues and learns the frequencies that are best suited to cluster a graph. To achie...
where $\mathcal{N}_Y(k)$ is a set of negative samples whose labels are different from node $i$, i.e. $y_i \neq y_k$...
We propose an approach to enhance GNN performance on less-homophilic graphs by restructuring the graph to maximize homophily. Our method is inspired by and closely related to Spectral Clustering (SC). It extends SC beyond the leading eigenvalues and learns the frequencies that are best suited to cluster a graph. To achie...
Node classification tasks predict labels of nodes based on graph structure and node features. We aim to improve the prediction accuracy of GNN models by restructuring edges via the adaptive SC method, particularly for heterophilic graphs. The evaluation results are shown in Equation 1. On average, the performance of GN...
This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-01906, Artificial Intelligence Graduate School Program(POSTECH)) and National Research Foundation of Korea (NRF) grant funded by the Korea government...
D
$(\alpha^\star, \Theta^\star) \in \arg\min_{\alpha, \Theta} \mathcal{R}^{\mathsf{total}}(\alpha, \Theta)$...
The second observation is that the basis for the updates is the average loss, i.e. risk. This motivates the following definition: given participation $\alpha$ and parameters $\Theta$, the average risk experienced by each subpopulation $i$ and each
First note that the total risk can be decomposed into either a weighted sum of average subpopulation risk or average learner risk. Thus the fact that learner and subpopulation dynamics are risk reducing ensures that the total risk is decreasing after the sequential updates. ∎
The allocations described by $\alpha$ correspond to (fuzzy) cluster assignment, and each risk function $\mathcal{R}_i(\theta_j)$ corresponds to a measure of “...
The total risk objective can be viewed as an instance of the $k$-means clustering problem with $k=m$. In the language of this literature (e.g., Selim and Ismail (1984)), each subpopulation is a data point and the parameter selected by each learner is a cluster center.
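A minimal instance of this correspondence, assuming scalar subpopulations and squared-loss risks so that the allocation and learner updates are exactly Lloyd's k-means steps:

```python
import numpy as np

# Two scalar subpopulation clusters and m = 2 learners; squared-loss risk
# makes the subpopulation/learner updates exactly Lloyd's k-means steps.
rng = np.random.default_rng(0)
points = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(3, 0.5, 50)])
centers = rng.normal(size=2)

for _ in range(20):
    # allocation step: each subpopulation joins the learner with lowest risk
    alloc = np.argmin((points[:, None] - centers[None, :]) ** 2, axis=1)
    # learner step: each learner minimizes its members' average risk
    for j in range(len(centers)):
        if np.any(alloc == j):
            centers[j] = points[alloc == j].mean()

print(np.sort(centers))  # approximately [-2, 3], the subpopulation means
```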
D
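The $k$-means view above suggests a Lloyd-style alternation: each subpopulation (data point) picks the learner (cluster center) with the lowest risk, then each learner re-fits to the subpopulations it serves. A minimal sketch with a scalar squared-loss risk, not the paper's exact dynamics:

```python
import numpy as np

def risk(theta, x):
    """Squared-loss risk of parameter theta on a subpopulation located at x."""
    return (theta - x) ** 2

def alternating_risk_minimization(x, m=3, iters=20, seed=0):
    """Alternate a participation step (argmin-risk assignment alpha) with a
    learner step (each learner minimizes its average risk), which is exactly
    Lloyd's k-means iteration for this choice of risk."""
    rng = np.random.default_rng(seed)
    theta = rng.choice(x, size=m, replace=False)   # initial learner parameters
    for _ in range(iters):
        alpha = np.argmin(risk(theta[None, :], x[:, None]), axis=1)
        for j in range(m):
            if np.any(alpha == j):
                theta[j] = x[alpha == j].mean()    # risk-minimizing parameter
    return theta, alpha, risk(theta[alpha], x).mean()

x = np.concatenate([np.random.normal(0, 0.5, 50), np.random.normal(5, 0.5, 50)])
theta, alpha, total_risk = alternating_risk_minimization(x, m=2)
```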
We report experiments that demonstrate the success of these tools in various scenarios, including real-world applications such as election prediction, disease diagnosis, and analysis of access to health care. An implementation of the procedures propose...
We discuss related work in Section 2. The setting and notation are defined in Section 3. We extend the equalized odds criterion to multiclass classifiers in Section 4. In Section 5, we discuss lower-bounding the error using label proportions when the classifier is known to be fair. In Section 6, we discuss possible way...
In this section, we report experiments demonstrating the various algorithms proposed in this work, as well as the diversity of possible applications of bounding fairness from aggregate data. In Section 9.1, we report experiments with binary classifiers. In Section 9.2, we report experiments with multiclass classifiers....
In this section we report experiments on multiclass classifiers. In Section 9.2.1, we report experiments in which we calculate a lower bound and an upper bound on the unfairness of a classifier when confusion matrices are available, using the algorithm proposed in Section 7.2. In Section 9.2.2, we report experiments in...
In the previous section, we discussed the calculation of unfairness based on the confusion matrices of the classifier in each sub-population. However, as discussed in the Introduction, these confusion matrices might be unavailable, especially if we do not have access to the classifier or to individual-level validation...
A
In this paper, we introduce $k$-Motiflets, a novel definition for MD that turns the problem upside-down. $k$-Motiflets take the desired motif set size $k$ as parameter and maximize the similarity of the motif set. As we will show, this $k$ is an integer with an easily understood inte...
We argue that guessing $k$ is almost always easier, as the concept of how many repetitions of a motif one expects is much easier to understand - though the guess itself need not be easy, and thus we will also offer algorithms to learn $k$. Furthermore, as $k$ is an integer, there is only a very limit...
Both algorithms expect parameters $l$ and $k$ to be given. While we share the necessity to set $l$ manually with all other methods except VALMOD, we replace the usual parameter $r$ (distance threshold) with $k$ (size of the motif set). Although $k$ is much easier t...
All of the aforementioned MD definitions have in common that their motif sets depend on two parameters, i.e., the length $l$ of subsequences and the distance threshold $r$. The threshold $r$ in particular is very hard to set in practice, as it is very difficult to get an intuition regarding a threshold on the...
In this paper, we introduce $k$-Motiflets, a novel definition for MD that turns the problem upside-down. $k$-Motiflets take the desired motif set size $k$ as parameter and maximize the similarity of the motif set. As we will show, this $k$ is an integer with an easily understood inte...
A
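A naive brute-force sketch of the $k$-Motiflet idea described above: fix the window length $l$ and set size $k$, seed a candidate set at every offset with its $k-1$ nearest non-overlapping subsequences, and keep the set with the smallest extent (maximal pairwise distance). This illustrates the objective only, not the authors' efficient algorithms:

```python
import numpy as np

def sliding_windows(ts, l):
    """All z-normalized subsequences of length l."""
    w = np.lib.stride_tricks.sliding_window_view(ts, l).astype(float)
    return (w - w.mean(axis=1, keepdims=True)) / (w.std(axis=1, keepdims=True) + 1e-8)

def k_motiflet_naive(ts, l, k):
    """O(n^2) sketch: for every seed, greedily collect its k-1 nearest
    subsequences while excluding trivial (overlapping) matches, then keep
    the candidate set with the smallest extent."""
    W = sliding_windows(ts, l)
    D = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=2)  # pairwise dists
    best_ext, best_set = np.inf, None
    for i in range(len(W)):
        cand = [i]
        for j in np.argsort(D[i]):
            if all(abs(j - c) >= l for c in cand):  # no trivial matches
                cand.append(j)
            if len(cand) == k:
                break
        if len(cand) == k:
            ext = D[np.ix_(cand, cand)].max()       # extent of the motif set
            if ext < best_ext:
                best_ext, best_set = ext, sorted(cand)
    return best_set, best_ext
```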
The stochastic persistence of excitation condition was first proposed in the analysis of the centralized Kalman filter algorithm in [41] and then refined in [42]. For the decentralized adaptive filtering algorithms in [32]-[33], the cooperative information condition on the conditional expectations of the regression mat...
Historically, Guo [41] first proposed the stochastic persistence of excitation condition for analyzing the centralized Kalman filtering algorithm, and this condition was then refined in [42]. Thereafter, the cooperative information condition on the conditional expectations of the regression matrices over the deterministic connected...
To overcome the difficulties mentioned above, we develop a nonnegative supermartingale inequality for the estimation error, and further establish the sample path spatio-temporal persistence of excitation condition by combining information of the regression matrices, graphs, and algorithm gains, under which sufficient c...
Therefore, the sample path spatio-temporal persistence of excitation condition weakens the stochastic spatio-temporal persistence of excitation condition in [38]. To the best of our knowledge, this is the most general persistence of excitation condition obtained to date.
Here, in Corollary 1, we show that this is actually not needed. Therefore, the sample path spatio-temporal persistence of excitation condition in this paper is more general than the stochastic spatio-temporal persistence of excitation condition in [38].
D
In the numerical sections, we assess the effectiveness of our heuristics using two different approaches to inject inaccuracies into AAR. The first approach injects error in the evaluations of the fixed-point operator, and the heuristics are used to dynamically adjust the magnitude of the injected error. The second app...
The numerical results with ILU(0) and ILUT($\tau$) when AA uses the full history ($m=k$) are shown in Figures 2a and 2b, respectively. Randomized Alternating AA($m=k$) outperforms standard Alternating AA($m=k$) and Subselected Alte...
Both Subselected Alternating AA and Randomized Alternating AA effectively reduce the computational time to solve the least-squares problem by projecting it onto a reduced space. The reduction of dimensionality of the least-squares problem precludes Subselected Alternating AA from reaching convergen...
Numerical results confirm that using approximate calculations to solve the least-squares problem for AA saves computational time without sacrificing convergence to a desired accuracy. The proposed method has appealing properties for HPC since it reduces computational requirements, inter-process communications, and stor...
Our theoretical results allow for accuracy reduction in different calculations performed by AAR on linear fixed-point problems. When the fixed-point operator evaluations are the dominant computational cost of AAR, one may choose to approximate the evaluations of the fixed-point operator to reduce the computational cost...
C
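A generic illustration of solving the Anderson-acceleration least-squares problem in a reduced space via a Gaussian sketch; the paper's actual Subselected and Randomized variants and their sampling schemes may differ, and the matrix names here are hypothetical:

```python
import numpy as np

def sketched_lstsq(F, r, s, rng=None):
    """Approximately solve min_gamma ||F gamma - r|| by projecting both
    sides onto a random s-dimensional subspace, reducing the cost of the
    least-squares solve. F: (n, m) residual history, r: (n,) residual."""
    rng = np.random.default_rng(rng)
    S = rng.standard_normal((s, F.shape[0])) / np.sqrt(s)  # Gaussian sketch
    gamma, *_ = np.linalg.lstsq(S @ F, S @ r, rcond=None)
    return gamma

# toy check: the sketched solution approaches the full one as s grows
n, m = 2000, 5
F, r = np.random.randn(n, m), np.random.randn(n)
full, *_ = np.linalg.lstsq(F, r, rcond=None)
approx = sketched_lstsq(F, r, s=200, rng=0)
```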
For example, suppose that we pre-process the sentence below, as part of an input document, from which we aim to guide the generation towards the topic “Business & Finance”. Following the aforementioned procedure, we enclose the words “businesses”, “billion”, and “tax” with the special token [TAG], since they belo...
In this section, we present the proposed topic-controllable summarization methods that fall into two different categories: a) incorporating topic embeddings into the Transformer architecture and b) employing control tokens before feeding the input to the model. Note that similar to the STAS measure, for all the propos...
There exist several controllable approaches that prepend information to the input source to influence the different aspects of the text such as the style [11] or the presence of a particular entity [12, 13]. Even though this technique can be readily combined with topic controllable summarization, this direction has not...
More specifically, [5] create a topic-oriented dataset which contains new super-articles by combining two different articles of the original dataset and keeping the summary of only one of them. First, they extract BoW vector representations for each topic from the Vox dataset [28]. Then, they compute the dot-product b...
All the aforementioned methods assume the existence of a training dataset where each summary is associated with a particular topic. However, no large-scale training datasets for abstractive summarization currently exist that contain summaries according to the different topical aspects of the text. Thus, w...
D
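A minimal pre-processing sketch of the control-token scheme above, assuming one [TAG] token before and after each topic word (the exact placement in the paper may differ) and a hypothetical topic lexicon:

```python
def tag_topic_words(text: str, topic_lexicon: set, tag: str = "[TAG]") -> str:
    """Enclose every word belonging to the target topic's vocabulary with
    a special control token, so the model can attend to topic words."""
    out = []
    for word in text.split():
        stripped = word.strip('.,;:!?"()').lower()
        out.append(f"{tag} {word} {tag}" if stripped in topic_lexicon else word)
    return " ".join(out)

business_lexicon = {"businesses", "billion", "tax"}  # toy topic vocabulary
print(tag_topic_words("Small businesses paid 2 billion in tax.", business_lexicon))
# -> Small [TAG] businesses [TAG] paid 2 [TAG] billion [TAG] in [TAG] tax. [TAG]
```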
Our tiles are useful, for example, for Toffoli+H circuits. Toffoli gates cannot, in general, be executed natively, and are decomposed into sequences of Clifford+T gates. The latter are compatible with the NISQ device or the error-correcting code. The Hadamard gate is comparatively straightforward to execute on almost ...
Figure 1: The standard cell for a 3D implementation of a Toffoli gate: a) Green vertices are the control qubits of the Toffoli gate, and the orange vertex is the target. In the Clifford+T decomposition of the Toffoli gate, the orange and green qubits are CNOT controls and the grey qubits are CNOT targets; b) Pink edge...
Our tiles are useful, for example, for Toffoli+H circuits. Toffoli gates cannot, in general, be executed natively, and are decomposed into sequences of Clifford+T gates. The latter are compatible with the NISQ device or the error-correcting code. The Hadamard gate is comparatively straightforward to execute on almost ...
Figure 7: The Toffoli gate from a) can be implemented using: b) an AND gate and a controlled-S gate, c) an ancilla initialised in $|0\rangle$, two AND gates and a CNOT, d) an AND, an ancilla, and a CNOT; here the uncomputation is measurement-based. The wire labels denote [c]ontrol, [t]arget, ...
Critically, the fact that quantum circuits are often formed by repeating patterns of sub-circuits inspires an opportunity to use this information for speeding up the compilation and the routing of the qubits. For example, this is the case for many arithmetic circuits which were imported from classical computing (e.g. a...
C
Our encoder consists of 8 $3\times 3$ convolutional layers with a stride of 2 for downsampling, each of which is followed by a batch normalization layer [15] and a Leaky-ReLU [16] activation function with a slope of 0.2. Our decoder consists of 7 $3\times 3$ transposed convolutional layers...
After training the autoencoder, the outputs of the 8 encoding layers are used as input image features for the later part of our network. The encoding layers transform the input image into a low-dimensional vector space. This compression removes redundancy in the input. Providing these low-dimensional representations of the i...
In our work, we propose a coarse-to-fine fully convolutional network for MR image re-parameterization mainly for Repetition Time (TR) and Echo Time (TE) parameters. As the model is coarse-to-fine, we use image features extracted from an image reconstruction auto-encoder as input instead of directly using the raw image....
In this model, the parameters of the input image are not fixed. Hence, we also pass the input image parameters along with the output image parameters. This model is more generalizable than the previous one but also has a more complex non-linearity to learn and hence, lags behind in performance compared to default-to-p...
Our method consists of two technical modules: 1) an image reconstruction auto-encoder [14] to extract image features of the input image; and 2) a coarse-to-fine fully convolutional network which utilizes the image features from the auto-encoder and enforces the construction of the output image with the desired parameters. In ...
A
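A PyTorch sketch of the stated encoder/decoder layout (8 stride-2 $3\times 3$ convolutions with BatchNorm and LeakyReLU(0.2); 7 transposed $3\times 3$ convolutions). The channel widths and the output head are illustrative assumptions, not the paper's values:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Encoder: 8 stride-2 3x3 convs, each with BatchNorm + LeakyReLU(0.2).
    Decoder: 7 stride-2 3x3 transposed convs; the final 3x3 conv head is an
    assumption, so the output here is at half the input resolution."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        enc, ch = [], in_ch
        for i in range(8):                          # 8 downsampling convs
            nxt = base * min(2 ** i, 8)             # assumed channel schedule
            enc += [nn.Conv2d(ch, nxt, 3, stride=2, padding=1),
                    nn.BatchNorm2d(nxt),
                    nn.LeakyReLU(0.2, inplace=True)]
            ch = nxt
        self.encoder = nn.Sequential(*enc)
        dec = []
        for _ in range(7):                          # 7 upsampling layers
            nxt = max(ch // 2, base)
            dec += [nn.ConvTranspose2d(ch, nxt, 3, stride=2,
                                       padding=1, output_padding=1),
                    nn.BatchNorm2d(nxt),
                    nn.LeakyReLU(0.2, inplace=True)]
            ch = nxt
        dec.append(nn.Conv2d(ch, in_ch, 3, padding=1))  # assumed output head
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))

out = ConvAutoencoder()(torch.randn(2, 1, 256, 256))  # -> (2, 1, 128, 128)
```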
It’s important to note that, with the exception of DRM, all existing neural-network-based methods are developed based on either the strong or weak forms of PDEs. While DRM utilizes the free energy functional, it is only suitable for solving static problems, i.e., finding the equilibrium of the system. Additionally, mos...
Our primary aim is to develop structure-preserving Eulerian algorithms to solve $L^{2}$-gradient flows and structure-preserving Lagrangian algorithms to solve generalized diffusions based on their energy-dissipation law by utilizing neural networks as a...
The rest of the paper is organized as follows. Section 2 reviews the EnVarA and some existing neural network-based numerical approaches for solving PDEs. Section 3 of the paper is devoted to the development of the proposed EVNN schemes for $L^{2}$-gradi...
In this paper, we develop structure-preserving EVNN schemes for simulating $L^{2}$-gradient flows and generalized diffusions by utilizing a neural network as a tool for spatial discretization, within the framework of the discrete energetic variat...
In this section, we present the structure-preserving EVNN discretization for solving both $L^{2}$-gradient flows and generalized diffusions. As mentioned earlier, the goal is to construct a neural-network discretization based on the energy-dissipation l...
D
This section is devoted to the study of linear dimensionality reduction of sheaves on a finite-dimensional real vector space $\mathbb{V}$ through pushforwards along linear forms. Using projective duality, we prove that a sheaf $F$ on $\mathbb{V}$ is zero if and only if its pushforwa...
One of the challenges of multi-parameter persistence is to provide a meaningful notion of distance between persistence modules which can be computed in a reasonable time complexity. Indeed, it has been shown that the usual interleaving distance between persistence modules is NP-hard to compute in the multi-parameter ca...
In [17], Masaki Kashiwara and Pierre Schapira provide a sheaf-theoretic construction of sublevel sets multi-parameter persistence. The aim of this section is to prove that the sheaf encoding the sublevel sets multi-parameter persistence of a pair $(S,f)$, where $S$ is a good compac...
This section is devoted to the study of linear dimensionality reduction of sheaves on a finite-dimensional real vector space $\mathbb{V}$ through pushforwards along linear forms. Using projective duality, we prove that a sheaf $F$ on $\mathbb{V}$ is zero if and only if its pushforwa...
We emphasize that the right-hand side is nothing but the sublevel sets persistence of the real-valued function $u\circ f$, which can be computed with already existing software packages dedicated to TDA. Then, we provide a counter-example to the above isomorphism when the positivity assumption is...
D
We show that the impact on some commonly used and competitive approximate score-based algorithms which search in DAG-space is considerable, and noteworthy effects are also found in some hybrid and constraint-based algorithms. We recognise that this sensitivity is unlikely to arise in score-based algorithms which search...
By examining the way a DAG develops iteration by iteration in the simple HC algorithm, we find that arbitrary decisions about edge modifications play an important role in determining the accuracy of the learnt graph and thus, in judging the structure learning capability of an algorithm. This is particularly so when HC...
We first examine the individual changes - arc addition, reversal or removal - which the HC algorithm makes at each iteration as it learns the DAG structure. In particular, we note where changes are arbitrary; that is, where two neighbouring DAGs are Markov equivalent. Figure 2 shows the proportion of graphical modific...
We can contrast this behaviour with that of Hailfinder in Figure 2(b). In that case, the first eight highest-scoring arc additions are all between completely separate pairs of nodes, so that the DAG at iteration eight consists of eight unconnected arcs. Thus, at that iteration, the proportion of arbitrary arcs remains...
Figure 2 thus provides an overview of the HC learning process and the proportion of edge modifications whose orientation is determined arbitrarily by the variable order. Based on these results, it is reasonable to conclude that variable ordering has a significant effect in the initial iterations and that part of this e...
A
Accordingly, in recent years, several large-scale generative language models, including GPT-3 (175B) (Brown et al., 2020), HyperCLOVA (204B) (Kim et al., 2021a), Gopher (280B) (Rae et al., 2021), Chinchilla (70B) (Hoffmann et al., 2022), Megatron Turing NLG (530B) (Smith et al., 2022), PaLM (540B) (Chowdhery et al., 20...
From our observations, we can conclude the following: 1) Reducing the group size ($g$) effectively decreases perplexity, even when employing a simple RTN quantization scheme, at the cost of a marginal increase in latency, 2) Increasing the number of GPUs (and, consequently, parallelism) does not significantly ...
Note that due to the limited memory capacity of a single GPU, large LMs may need multiple GPUs, resulting in increased communication latency. GPUs are commonly adopted to accelerate inference as they embed many arithmetic units and support multiple threads, critical for speeding up matrix multiplications (Narayanan ...
However, for the LLaMA-65B model with FP16 weights, the model size exceeds the memory capacity of a single GPU (80GB for A100), necessitating model parallelism techniques. Nevertheless, when the weights of the LLaMA-65B model are quantized to 3 or 4 bits, as demonstrated to be a viable solution in (Frantar et al., 2022...
To address such a concern, researchers have proposed to use model parallelism, which distributes computations over multiple GPUs through GPU-to-GPU communication (Shoeybi et al., 2019; Narayanan et al., 2021). Nevertheless, it is worth noting that model parallelism introduces additional overheads, stemming from the int...
D
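A small sketch of group-wise round-to-nearest (RTN) quantization illustrating the first observation above: shrinking the group size $g$ gives each group its own scale and reduces quantization error (parameter names hypothetical):

```python
import numpy as np

def rtn_quantize_groupwise(w, bits=4, g=128):
    """Round-to-nearest weight quantization with group size g: each
    contiguous group of g weights shares one scale, so a smaller g tracks
    the local weight range more closely (lower error, more stored scales)."""
    qmax = 2 ** (bits - 1) - 1
    w = w.reshape(-1, g)
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax   # per-group scale
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)     # integer codes
    return (q * scale).reshape(-1)                        # dequantized view

w = np.random.randn(1024).astype(np.float32)
for g in (1024, 256, 64):
    err = np.mean((w - rtn_quantize_groupwise(w, bits=3, g=g)) ** 2)
    print(f"group size {g:5d}: MSE {err:.5f}")   # smaller g -> smaller error
```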
Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee...
Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee...
From the positive side, Luo [16] introduced the notion of rank-determining sets of metric graphs, and verified the existence of finite rank-determining sets constructively. Hladký, Král, and Norine [13] confirmed a conjecture of Baker [2] relating the ranks of a divisor on a graph and on a tropical curve, and provided...
The above reduction of Min-TSS to Dist-Nonhalt uses an auxiliary graph that contains parallel edges. Therefore, the stated complexity results hold for general (i.e. not necessarily simple) graphs. However, in [13, Corollary 22], Hladký, Kráľ, and Norine proved that if one subdivides each edge of a graph with a new vert...
Now we turn to the proof of the main result of the paper, and show that approximating the rank of a graph divisor within reasonable bounds is hard. The high level idea of the proof is the following. First, we show that the Minimum Target Set Selection problem reduces to computing the distance of a divisor on an auxilia...
B
In the realm of ANNs, the Squeeze and Excitation (SE) block, introduced by Hu et al. [15], has proven to be a highly effective module for enhancing representation. The SE block can be seamlessly incorporated into a network, requiring only a minimal increase in parameters to recalibrate channel information. By employing...
As mentioned above, we contend that the frame at the current time step exhibits a significant correlation with its neighboring frames in both the channel and temporal dimensions. This correlation opens up the possibility of employing a mechanism to establish a connection between these two dimensions. Initially, we emp...
In the realm of ANNs, the Squeeze and Excitation (SE) block, introduced by Hu et al. [15], has proven to be a highly effective module for enhancing representation. The SE block can be seamlessly incorporated into a network, requiring only a minimal increase in parameters to recalibrate channel information. By employing...
In this paper, we propose the TCJA mechanism, which innovatively recalibrates temporal and channel information in SNN. Specifically, instead of utilizing a generic fully connected network, we use 1-D convolution to build the correlation between frames, reducing the computation and improving model performance. Moreover...
Based on the aforementioned analysis, the utilization of a temporal-wise attention mechanism in SNNs has exhibited substantial progress in effectively processing time-related data streams. Moreover, it has been observed in both biological neural networks [32] and ANNs [15] that recalibrating channel features within con...
D
$=\tfrac{1}{2}\bigl\langle\llbracket\hat{p}_{h}\rrbracket,\llbracket v_{h}\rrbracket\bigr\rangle_{\mathcal{E}_{h}^{\circ}}+\langle\hat{p}_{h}\times n,\,v_{h}\times n\rangle_{\mathcal{E}_{h}^{\partial}}$...
$=\tfrac{1}{2}\bigl\langle\llbracket p_{h}+\gamma u_{h}\rrbracket,\llbracket v_{h}\rrbracket\bigr\rangle_{\mathcal{E}_{h}^{\circ}}+\cdots_{\mathcal{E}_{h}^{\partial}},$...
$\langle u_{h}-\hat{u}_{h},\,q_{h}\rangle_{\partial\mathcal{T}_{h}}$...
$2\langle\{\!\{p_{h}\}\!\},\hat{v}_{h}\rangle_{\mathcal{E}_{h}^{\circ}}+\langle p_{h}\cdot n,\,\hat{v}_{h}\cdot n\rangle_{\mathcal{E}_{h}^{\partial}}=0,$
$2\langle\{\!\{p_{h}\}\!\}+\gamma(\{\!\{u_{h}\}\!\}-\hat{u}_{h}),\,\hat{v}_{h}\rangle_{\mathcal{E}_{h}^{\partial}}=0.$
A
12:              answerlist.add(p⋆, profit);
Algorithm 1 Attack vectors synthesis procedure. Its inputs are actions $\mathbf{Act}$, the maximum length $\mathsf{len}$, a blockchain state $\mathsf{q}$, and a threshold number of iterations $\mathsf{n}$. Its outputs are attack vectors that yield posit...
An attack vector by an adversary $\mathsf{adr}$ consists of a symbolic actions vector $\mathbf{S}$ where the symbolic arguments are replaced by concrete values (integer values) and $\mathbf{S}$ transforms a blockchain state $\mathsf{q}$ to another state $\mathsf{q}'$...
Algorithm 1 gives the overall synthesis procedure of FlashSyn. FlashSyn first collects initial data points to approximate the actions in $\mathbf{Act}$ (line 3), where FlashSyn uses the state $\mathsf{q}$ as a starting blockchain state. Then, using the sub-procedure Approximate, FlashSyn g...
$\mathcal{B}(\mathsf{q},\mathsf{adr})=\sum_{\mathsf{t}\in\mathbf{T}}\mathbf{M}(\mathsf{q},\mathsf{adr},\mathsf{t})\cdot\mathbf{P}(\mathsf{t})$, where $\mathbf{T}$ represents the tokens held by $\mathsf{adr}$...
A
While the assumption of a finite parametric space volume is reasonable in practice, the volume size, appearing in the $\lVert\cdot\rVert_{\infty}$ bound that we shall use in the following analysis, grows exponentially with the dimension of...
We now consider the special case of the KDE (cf. Section 2.2) with a Gaussian kernel function and an optimal selection of bandwidth, as calculated in the following lemma. We focus on the Gaussian kernel both for simplicity and because it is a popular choice in practice. Our method can be extended to any kernel function...
VariBAD Dream: Recall that our pipeline is to learn a KDE over the task parameters $\theta$, and then train a policy on tasks from the estimated KDE. Unfortunately, in our meta-RL setting, we do not assume that we directly know the $\theta$ representation for each task. However, the VAE in VariBAD al...
We start by showing that we can bound the regret of an estimated Bayes optimal policy, as a function of the estimation error of the prior itself. The proof, detailed in Section A.2 in the supplementary, is a simple application of norm inequalities, and exploiting the fact that the total cost is bounded.
To complement our theoretical results, we demonstrate the practical potential of our approach by incorporating it in the VariBAD meta-RL method of Zintgraf et al. [39]. In VariBAD, a variational autoencoder (VAE) is used to learn a low-dimensional latent representation of the task distribution, and a deep neural networ...
A
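A toy stand-in for the prior-estimation step, assuming direct access to task parameters $\theta$ (which, as noted above, VariBAD only provides through its latent space): fit a Gaussian KDE, sample new tasks, and compute a classic closed-form bandwidth. The paper's optimal bandwidth may differ from the Silverman rule shown here.

```python
import numpy as np
from scipy.stats import gaussian_kde

theta = np.random.randn(2, 500) * [[1.0], [0.3]]  # (dim, n) observed tasks
kde = gaussian_kde(theta)          # bandwidth via Scott's rule by default
new_tasks = kde.resample(16)       # sample tasks to train the policy on

# Silverman-style 1-D bandwidth for a Gaussian kernel (classic choice)
x = theta[0]
h = 1.06 * x.std() * len(x) ** (-1 / 5)
```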
Our proposed method can be generalized to align the word vector spaces of two or more languages to compare consumer-oriented expressions across multiple languages. The framework can be used for exploring different vocabulary usage patterns in different countries and generating a language-agnostic CHV to facilitate cross-lin...
Fig. 1 illustrates our cross-lingual ATR framework for expanding CHV across languages. Our framework consists of four main modules. With collected health Q&A corpora, the first pre-processing module employs existing NLP toolkits to clean the raw texts and transforms them into processed contents. Subsequently, using a w...
Our proposed method can be generalized to align the word vector spaces of two or more languages to compare consumer-oriented expressions across multiple languages. The framework can be used for exploring different vocabulary usage patterns in different countries and generating a language-agnostic CHV to facilitate cross-lin...
This study aims at proposing a cross-lingual ATR framework that helps extend the English CHV into other languages. The framework starts with collecting HCGC corpora of two different languages. With the corpora, the framework adopts word2vec techniques [27] on each monolingual corpus to learn the word associations used ...
To automatically expand the English OAC CHV into other languages, we proposed a cross-lingual ATR framework that captures word semantics from HCGC within each language using the word embedding technique. The words of different languages are then aligned into a cross-lingual word space by referring to a few pairs of medical wo...
D
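One standard way to realize the alignment step is orthogonal Procrustes over a handful of seed translation pairs; whether the framework uses exactly this solver is an assumption, and the data below are synthetic:

```python
import numpy as np

def procrustes_align(X_src, Y_tgt):
    """Orthogonal Procrustes: find the rotation W minimizing
    ||X_src W - Y_tgt||_F, where rows are embeddings of paired seed words
    (e.g., a few medical word pairs across the two languages)."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt            # maps the source space into the target space

# toy usage: 20 seed pairs of 300-d word vectors (synthetic stand-ins)
X = np.random.randn(20, 300)                       # source-language vectors
true_W = np.linalg.qr(np.random.randn(300, 300))[0]
Y = X @ true_W                                     # target-language vectors
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y, atol=1e-6))            # aligned seed pairs match
```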
We showcased the effectiveness of this approach in various AI applications, including image classification, text analysis, and molecular simulations. While several methods (Fisher, Rudin, and Dominici 2019; Lundberg and Lee 2017; Sundararajan, Taly, and Yan 2017; Wachter, Mittelstadt, and Russell 2017) have been p...
Recent applications of our framework (TERP) have been instrumental in uncovering key mechanisms behind crystal nucleation (Wang et al. 2024) and hydrophobic ligand dissociation (Beyerle and Tiwary 2024). Given the critical role of molecular sciences in uncovering chemical reaction pathways (Yang et al. 2017), understand...
We call our approach Thermodynamics-inspired Explainable Representations of AI and other black-box Paradigms (TERP). Owing to its model-agnostic implementation, TERP can be used for explaining predictions from any AI classifier. We demonstrate this generality by explaining the following black-box models in this work: ...
We showcased the effectiveness of this approach in various AI applications, including image classification, text analysis, and molecular simulations. While several methods (Fisher, Rudin, and Dominici 2019; Lundberg and Lee 2017; Sundararajan, Taly, and Yan 2017; Wachter, Mittelstadt, and Russell 2017) have been p...
Performing predictions based on observed data is a general problem of interest in a wide range of scientific disciplines. Traditionally, scientists have tackled this problem by developing mathematical models that connect observations with predictions using their knowledge of the underlying physical processes. However, ...
A
Additionally, we compared the performance of FuSeBMC v4 utilizing smart seeds against the version using only primary seeds (i.e., all zeros, all ones, and randomly chosen values) on the ECA (event-condition-action systems) subcategory in Cover-Error (where FuSeBMC v4 demonstrated 28% improvem...
FuSeBMC v4 achieved the overall first place at Test-Comp 2022, obtaining a score of 3003 out of 4236; the closest competitor, VeriFuzz (Chowdhury et al., 2019), scored 2971, and FuSeBMC significantly outperformed several state-of-the-art tools such as LibKluzzer (Le, 2020), KLEE (Cadar et al., 2008), CPAchecker (Beyer and...
Symbolic execution and BMC have shown competence in producing high-coverage test-cases and detecting errors in complex software. One of the more popular symbolic execution engines is KLEE (Cadar et al., 2008). KLEE is a tool that explores the search space path-by-path by utilizing LLVM compiler infrastructure and dynam...
The combination of symbolic execution and BMC with fuzzing has been used recently to combine the strengths of both techniques. For example, VeriFuzz (Chowdhury et al., 2019) is a state-of-the-art tool we have previously compared to FuSeBMC. The authors describe it as a program-aware fuzz tester that combines feedback-...
Table 6 demonstrates the code coverage capabilities of FuSeBMC v4 in comparison to other state-of-the-art software testing tools. It can be seen that FuSeBMC achieved first place with an overall score of 2104 out of 3460. FuSeBMC participated in all 16 subcategories, in 9 of which (i.e. Arrays, BitVectors, Floats, Hea...
A
where $X_{b,p}^{(\alpha,\beta)}$ and $I_{b,p}^{(\alpha,\beta)}$...
The bandwidths of the matrices follow from Lemmas 7 and 8. The first equation (28) follows immediately from the commutativity of fractional (and integer-order) integration matrices stated in (3). To derive (29), we apply Proposition 6 to the JFP basis, then
The following is an outline of the paper: We introduce the basic constituents of the JFP method in Sections 2 and 3 (matrix representations of operators on quasimatrices, high-precision floating-point numbers, the JFP basis, etc.). We then focus on the properties (Section 4) and computation (Section 5) of fractional in...
Our pseudo-stabilization technique discussed in Appendix A is essential to the scalability of the JFP method and thus also for its application to practical computational problems. We emphasize that high-precision computations are required only for the computation of fractional integration matrices and not for the solu...
We consider our method to be the successor of that of Bhrawy and Zaky [7]. They applied a change of variables to classical Jacobi polynomials such that the algebraic singularities of the resulting basis, the JFP basis (which is called thus for reasons we explain in Section 3), conform to those of the solution.³ The met...
A
Supplementary Figure S23: The maximum capability of the topological features measured by AUC-mROC in the supervised approach. The imbalanced positive and negative samples are generated by sample2. We use 21 indexes from four families and measure the performance of the supervised prediction by these indexes in all 550 n...
$p_{1}$ is the percentage of samples in $L^{P}$ that hold the topological feature, and $p_{2}$ is the percentage of samples in...
In the main text, taking the common neighbor feature as an example, we show the theoretical finding can help us to determine the feature and index selection. To further validate the preceding analysis, we here conduct a second analysis using the path feature (Table S2). When using the index SPL to make unsupervised pr...
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. Entities that do not hold the feature at all should share the same, lowest value. If ...
Currently, there is no unified rule on what value should be the lowest. In the main text, we consider the lowest value to be zero. Indeed, all 21 indexes in this study assign the value 0 to entities that do not hold the feature they are designed to quantify. There could be exceptions. For example, one can design an ind...
C
For sparse layer update, we prune away the gradient nodes of the frozen weights, only keeping the nodes for bias update. Afterwards, we traverse the graph to find unused intermediate nodes due to pruning (e.g., saved input activation) and apply dead-code elimination (DCE) to remove the redundancy. For sparse tensor upd...
Traditional training frameworks usually derive the gradients of all the trainable parameters before applying the update. Such a practice leads to significant memory waste for storing the gradients. By reordering operators, we can immediately apply the gradient update to a specific tensor (in-place update) before back-p...
We measure the training memory of three models on STM32F746 MCU to compare the memory saving from TTE. We measure the peak SRAM usage under three settings: general full update, sparse update, and sparse update with TTE graph reordering (Figure 10(a)). The sparse update effectively reduces peak memory by 7-9× com...
In this paper, we aim to bridge the gap and enable tiny on-device training with algorithm-system co-design. We investigate tiny on-device training and find two unique challenges: (1) the model is quantized on edge devices. A real quantized graph is difficult to optimize due to low-precision tensors and the lack of Batc...
We visualize the update schedule of the MCUNet [47] model searched under 100KB extra memory (analytic) in Figure 11 (lower subfigure (b), with 10 classes). It updates the biases of the last 22 layers, and sparsely updates the weights of 6 layers (some are sub-tensor update). The initial 20 layers are frozen and run for...
A
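A sketch of the operator-reordering idea in plain PyTorch: apply each parameter's update as soon as its gradient is produced, and free the gradient immediately instead of materializing all gradients first. TTE performs this at the graph/compile level; the per-parameter hook used here assumes a recent PyTorch version (2.1+):

```python
import torch

lr = 0.01
model = torch.nn.Linear(32, 4)

def apply_sgd_and_free(param):
    """Runs during backward, right after this parameter's gradient is
    accumulated: take an in-place SGD step, then drop the gradient."""
    with torch.no_grad():
        param.add_(param.grad, alpha=-lr)
    param.grad = None          # free the gradient buffer immediately

for p in model.parameters():
    p.register_post_accumulate_grad_hook(apply_sgd_and_free)

loss = model(torch.randn(8, 32)).pow(2).mean()
loss.backward()   # parameters are updated during the backward pass itself
```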
$A=\begin{bmatrix}2&0\\0&3\end{bmatrix},\qquad B=\begin{bmatrix}1&0\\ \ldots\end{bmatrix}$...
$\rho(|A^{-1}B|)<1$; (2) $\sigma_{\min}(A)>\sigma_{\max}(B)$...
$\sigma_{\max}(A^{-1}B)\leq\sigma_{\max}(A^{-1})\,\sigma_{\max}(B)=\dfrac{\sigma_{\max}(B)}{\sigma_{\min}(A)}.$
$\sigma_{\max}(A^{-1}B)=0.75<1,\qquad\sigma_{\min}(A)=2>\sigma_{\max}(B)=1.5.$
$\sigma_{\max}(A^{-1}B)=0.5<1,\qquad\sigma_{\min}(A)=2>\sigma_{\max}(B)=1.5.$
D
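A quick numerical check of the inequality above, with the diagonal $A$ from the example and a hypothetical diagonal $B$ chosen to reproduce the stated values $\sigma_{\max}(A^{-1}B)=0.5$, $\sigma_{\min}(A)=2$, $\sigma_{\max}(B)=1.5$:

```python
import numpy as np

A = np.diag([2.0, 3.0])
B = np.diag([1.0, 1.5])

smax = lambda M: np.linalg.svd(M, compute_uv=False).max()
smin = lambda M: np.linalg.svd(M, compute_uv=False).min()

lhs = smax(np.linalg.inv(A) @ B)   # 0.5
rhs = smax(B) / smin(A)            # 1.5 / 2 = 0.75
assert lhs <= rhs and smin(A) > smax(B)
```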
A precise runtime analysis of the classic $(1+1)$ EA on the bit-string Jump benchmark [DLMN17] has shown (i) that the classic mutation rate of $\frac{1}{n}$ is far from optimal for this benchmark, (ii) that the optimal mutation rate asymptotically is...
Concentrating on the plots ignoring easy-to-detect void mutations, that is, the dotted lines in Figure 1 (note that there are no void mutations for the heavy-tailed swap operator, hence this line is identical to (and thus covered by) the corresponding solid line), we see that the Poisson scramble operator leads to the ...
For reasons of brevity, we shall concentrate on the most interesting result in [DLMN17], namely that a heavy-tailed choice of the mutation strength gives a significant speed-up for all jump functions. We note in passing that heavy-tailed parameter choices subsequently found ample use and often overcame in an elegant man...
Since our analyses so far suggest that the scramble mutation operator is more natural than the one based on swaps, we shall only regard a heavy-tailed version of the former. So we proceed by defining a heavy-tailed scramble mutation operator. We say that an integer random variable $X$ follows a power-law distr...
Finally, we analyze the performance of a heavy-tailed variant of the scramble mutation operator. For bit-string representations, it was observed in [DLMN17] that heavy-tailed mutation operators, and more generally heavy-tailed parameter choices [ABD21], can greatly speed up the runtime of evolutionary algorithms. In p...
B
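A minimal sampler for the power-law mutation strength described above, with $P(X=i)\propto i^{-\beta}$ on $\{1,\dots,n_{\max}\}$; the cutoff and exponent here are illustrative:

```python
import numpy as np

def sample_power_law(n_max, beta, size=1, rng=None):
    """Draw integers X in {1, ..., n_max} with P(X = i) proportional to
    i^(-beta), the heavy-tailed choice used for mutation strengths."""
    rng = np.random.default_rng(rng)
    i = np.arange(1, n_max + 1)
    p = i ** (-float(beta))
    return rng.choice(i, size=size, p=p / p.sum())

# heavy-tailed strengths: mostly small, occasionally very large
strengths = sample_power_law(n_max=50, beta=1.5, size=10, rng=42)
```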
We introduced a novel approach to geometric multilevel optimization. The approach employs information geometry in order to devise all ingredients of the iterative multilevel scheme. Invoking coarse level representations for computing descent directions effectively accelerates convergence. Experiments conducted for a r...
Although we consider the specific problem (1.1) in this paper, we believe that our approach generalizes to other constrained convex programs, analogous to the way open convex parameter sets of probability distributions are turned into statistical manifolds [Lau87, AN00].
We introduced a novel approach to geometric multilevel optimization. The approach employs information geometry in order to devise all ingredients of the iterative multilevel scheme. Invoking coarse level representations for computing descent directions effectively accelerates convergence. Experiments conducted for a r...
Results for non-convex optimization problems using a multilevel approach are not yet available in the Euclidean setting. Recently, a geometric multilevel optimization approach has been proposed by [SV21] for the specific case of low-rank matrix manifolds. Our approach differs in that we focus on information geometry fo...
The derivation of the approach for box-constrained convex programs can be transferred to other convex programs with simply structured feasible sets, analogous to turning parameter spaces of probability distributions into Riemannian manifolds in information geometry. Simplices instead of boxes as feasible sets provide...
D
One aspect we did not consider here is learning neural networks with positive parameters using gradient descent. It would be interesting to examine the efficacy of gradient methods both empirically and theoretically. Such a study could lead to further insights regarding methods that ensure that a neural network approxi...
We focused on the threshold activation function. It is an interesting direction to extend our results for other activation functions such as sigmoids. For the universality result of depth 4 monotone networks it seems plausible that one could approximate thresholds by sigmoids and prove that monotone networks of depth ...
We thank Arkadev Chattopadhyay for helpful feedback and Todd Millstein for discussing [42] with us which led us to think about monotone neural networks. We are grateful to David Kim for implementing our construction of a monotone neural network and testing it over several monotone data sets. Finally, we thank Bruno Pa...
We are unaware of previous work studying the interpolation problem for monotone data sets using monotone networks. There is extensive research regarding the size and depth needed for general data sets and networks to achieve interpolation [49, 7, 13, 45] starting with the seminal work of Baum [4]. Known constructions ...
Given the above result, it may seem that, similarly to the case of monotone networks with ReLU activations, the class of monotone networks with threshold activations is too limited, in the sense that it cannot approximate any monotone function with a constant depth (allowing the depth to scale with the dimension was c...
B
We use in all experiments an off-the-shelf generating vector [35, lattice-39101-1024-1048576.3600] with totals of 16384, 32768, 65536, 131072, 262144, and 524288 cubature points and $R=16$ random shifts. Although this generating vector has not been obtained using the CBC algorithm with the ...
In this study, we assumed the data to allow for solutions $u\in H^{2}$ with respect to the spatial variable. At the cost of some technical but standard extensions of the DG analysis, we can extend our results to the case that $u\in H^{3/2+\epsilon}$...
A common feature of virtually all of the aforementioned QMC literature related to PDE uncertainty quantification is that the QMC rule is designed for the non-discretized PDE problem (1) whereas, in practical computations, one only has access to a discrete approximation of the PDE system. Of course, as long as one uses ...
Thus, the spatial grid of the discontinuous Galerkin method is significantly coarser than the conforming finite element grid. We do so to underline that it is generally considered ‘unfair’ to compare discontinuous Galerkin and conforming finite elements on the same grid, since DG usually has many more degrees of freed...
Figure 2 deals with the lognormal case. The left picture uses linear approximations of SIPG, NIPG, and the conforming finite element method, while the right picture shows the results for second order SIPG and NIPG. In the left picture, we see that, again, all three methods work fine, and similarly well. Their convergen...
C
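A sketch of a randomly shifted rank-1 lattice rule as used above; the generating-vector entries below are placeholders, not the cited vector [35], and the spread over the $R=16$ shifts also yields a simple error indicator:

```python
import numpy as np

def shifted_lattice_points(z, N, shift):
    """Rank-1 lattice rule: points frac(i * z / N + shift), i = 0..N-1,
    for generating vector z and one uniform random shift."""
    i = np.arange(N)[:, None]
    return np.mod(i * z[None, :] / N + shift[None, :], 1.0)

def qmc_estimate(f, z, N, R=16, rng=None):
    """Average the lattice-rule estimate over R random shifts; the sample
    standard deviation across shifts estimates the integration error."""
    rng = np.random.default_rng(rng)
    ests = [f(shifted_lattice_points(z, N, rng.random(len(z)))).mean()
            for _ in range(R)]
    return np.mean(ests), np.std(ests) / np.sqrt(R)

z = np.array([1, 182667, 469891])   # placeholder generating-vector entries
mean, err = qmc_estimate(lambda P: np.prod(1 + (P - 0.5), axis=1),
                         z, N=4096, R=16, rng=0)   # exact integral is 1
```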
Explicit forms of distribution classes like those used in (TZZ19) in the homogeneous setting are difficult to obtain in the heterogeneous setting due to the intricate structures of the hard input distributions $\mu^{A}$ and $\mu^{B}$...
Implicit Forms of Distribution Classes.   Our first technical innovation is that we implicitly define the classes of distributions for the generalized round elimination by quantifying the relationship between each distribution in the class and the original hard input distribution. The discussion below is again a simpli...
The key to the analysis for each individual arm is that, because of the interleaved mean structure, Alice misses most of the information about half of the terms held by Bob. Without this information, her adaptivity cannot help much in the task of identifying which arm is more likely to be the best arm. On the other hand, the ti...
In each round, each agent takes a sequence of pulls (one at each time step) and observes the outcomes. At the end of each round there is a communication phase; the agents communicate with each other to exchange newly observed information and determine the number of time steps for the next round (the length of the first...
In the rest of this section, we first introduce the hard input distribution that we use to prove the lower bound and discuss its properties. We then introduce the classes of distributions on which we will perform the generalized round elimination. After these preparation steps, we present our main lower bound proof for...
A
One distinguishing characteristic of the proposed algorithm's outer loop, which sets it apart from stochastic algorithms like SVRG and SARAH, is its utilization of quasi-Newton directions integrated with a linesearch while preserving the advantageous low-memory characteristic. In the context of this study, variou...
Even though in Algorithm 1 quasi-Newton directions based on the residual mapping were suggested (cf. (3.5)), any superlinear direction can be employed in the algorithm. As a result, our theory provides a direct globalization strategy for works that employ quasi-Newton direction with only local convergence guarantees. F...
One distinguishing characteristic of the proposed algorithm's outer loop, which sets it apart from stochastic algorithms like SVRG and SARAH, is its utilization of quasi-Newton directions integrated with a linesearch while preserving the advantageous low-memory characteristic. In the context of this study, variou...
To attain a superlinear convergence rate, the IQN method [45] has integrated quasi-Newton directions with incremental updates, albeit with only local convergence guarantees. Conversely, the approach introduced in [59] also exhibits a superlinear convergence rate but necessitates Hessian evaluation. It is noteworthy tha...
In the finite sum setting, approaches proposed by [47] and [60] have utilized quasi-Newton updates with global convergence guarantees and linear convergence rates. Furthermore, [74] has extended the utilization of quasi-Newton directions to decentralized learning scenarios.
D
In this paper, we proposed a new randomized fixed-precision algorithm for fast computation of tensor SVD (t-SVD). Unlike the existing randomized low tubal rank approximation methods, the proposed algorithm finds an optimal tubal rank and the corresponding low tubal rank approximation automatically given a data tensor a...
Then, we applied the truncated t-SVD with the mentioned tubal ranks to the underlying data tensors. The running times of the proposed algorithm and the truncated t-SVD are reported in Table II. It is seen that Algorithm 4 outperforms the truncated t-SVD in terms of running time.
This decomposition has found many applications including deep learning ([25, 26]), tensor completion ([27, 28]), numerical analysis ([29, 30, 31, 32]), and image reconstruction [33]. Several randomized algorithms exist ([34, 35, 36]) to decompose a tensor into the t-SVD format, but all of them need an estimation ...
After acceptance of the paper, the author found similar incremental algorithms for the computation of the t-SVD in [39, 40]. The author thanks Dr. Ugochukwu Ugwu for bringing these works to his attention, and Stanislav Abukhovich for his help in the implementation of the algorithms. He introdu...
Then to examine the speed-up of our algorithm, we used the truncated t-SVD with the estimated tubal rank 10. The running times of the proposed algorithm and the truncated t-SVD algorithm for different dimensions for Cases I-III are reported in Figure 4. The linear scaling of Algorithm 4 compared to the truncated t-SVD ...
C
However, only a few approaches are suited for the more complex tasks of multi-label classification (MLC) or hierarchical multi-label classification (HMLC), even though many applications (including the ones listed above) have an inherent complexity suitable for MLC and HMLC. Multi-label classification is a predictive ma...
Hierarchical multi-label classification is a particular case of MLC, where the output space is structured so that it accommodates dependencies between labels. In particular, labels are organized in a hierarchy: An example labeled with label $c$ is also labeled with all parent/super-labels of $c$. MLC ...
The definition of $\omega(c)$ is general enough to represent classes that are organized as a directed acyclic graph (DAG). Generally, a DAG-like hierarchy can be interpreted in two ways: an example belonging to a class $c$ either i) belongs to all super-classes of $c$, or...
is proposed to exploit both the feature distribution and the label relation between examples. Therefore, the optimization simultaneously takes into account instance-level relations across labeled and unlabeled samples in feature space and the relations across labels. This approach has only been applied in the image mul...
However, only a few approaches are suited for the more complex tasks of multi-label classification (MLC) or hierarchical multi-label classification (HMLC), even though many applications (including the ones listed above) have an inherent complexity suitable for MLC and HMLC. Multi-label classification is a predictive ma...
A
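A small sketch of the hierarchy constraint stated above: closing a label set under parent/super-labels in a DAG (toy hierarchy, hypothetical label names):

```python
def label_closure(labels, parents):
    """Hierarchy-consistent labeling: an example annotated with class c
    also carries every super-label of c (all ancestors in a DAG given as
    a child -> list-of-parents map)."""
    closed = set(labels)
    stack = list(labels)
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in closed:
                closed.add(p)
                stack.append(p)
    return closed

# toy hierarchy: "leukemia" -> "cancer" -> "disease"
parents = {"leukemia": ["cancer"], "cancer": ["disease"]}
print(label_closure({"leukemia"}, parents))
# -> {'leukemia', 'cancer', 'disease'}
```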
Mounting two different sensors can be another issue, as more space, equipment, and extra budget are needed. Therefore, using an equivalent sensor that is cheaper and performs a similar function may be preferable to tackle this problem. For example, a LiDAR can be replaced with a depth camera (merged with an RGB camera) ...
Together with AIM-MT [27], we deploy Huang et al.’s model [34] for a comparative study with the objective of comparing the performance of different sensor fusion strategies. Huang et al. fuse the information by processing RGB and depth at the early stage to extract a deeper relation on each pixel. Meanwhile, DeepIPC fu...
Similar to Ishihara et al. [26] and Chitta et al. [27], the perception parts of DeepIPC are guided by completing a vision task to provide better features. However, it only uses semantic segmentation as auxiliary supervision since the depth is considered as an input. Then, the controller is equipped with two decision-ma...
In addition, we conduct a comparative study with some recent models to get a clearer performance justification. Table II shows the specification of the models evaluated in this study where our DeepIPC is considered to be the smallest model as it has the lowest number of parameters. We evaluate a model proposed by Huan...
Furthermore, in a comparison of drivability in the evening, DeepIPC and AIM-MT perform worse than Huang et al.'s model. In line with the offline result, the model that mainly takes RGB images fails to perceive the environment in the evening, as the captured images are not as clearly visible as those taken at noon. On t...
A
$\mathcal{O}(n^{6k})+4^{6k}\cdot 2^{\mathcal{O}(k^{2})}\,n^{\mathcal{O}(k)}$...
Tree decompositions are among the most popular tools in graph algorithms. The crucial property of tree decompositions exploited in the majority of dynamic programming algorithms is that each bag of the decomposition can interact with an optimal solution only in a bounded number of vertices. The common measure of a tree...
The algorithm will be based on recursively constructing the decomposition, using $W$ as the interface in the recursion. First, if $\alpha(G)\leq 6k$, we return the trivial tree decomposition with only one bag $V(G)$.
Both our algorithm and the algorithm of Yolov [44] are based on the general approach of constructing a tree decomposition by repeatedly finding balanced separators, originating from Graph Minors XIII [41]. The main difference between our algorithm and that of Yolov is the approach for finding separators with bounde...
Our algorithm constructs a tree decomposition from the root to the leaves by maintaining an “interface” $W$ and breaking it with balanced separators. This is a common strategy used for various algorithms for constructing tree decompositions and branch decompositions.
D
From Table IV, we observe that (1) for each round, all methods under the w/ model splitting setting have the same number of parameters sent from each client to the server (i.e., 0.23 MB for a batch of embeddings), and VIMADMM has a smaller number of parameters sent from the server to each client (i.e., 0.08 MB in total for a ...
TABLE IV: Communication costs (in megabytes) comparison. VIMADMM requires lower communication costs per round than baselines under w/ model splitting setting. ADMM-based methods require lower communication costs to achieve the same target accuracy performance.
our ADMM-based methods converge faster and achieve higher accuracy than gradient-based baselines, especially on CIFAR. This is because the multiple local updates enabled by ADMM lead to higher-quality local models at each round, thereby speeding up the convergence.
(2) With a smaller # of communicated parameters at each round and faster convergence (i.e., a smaller # of communication rounds to achieve a target accuracy), VIMADMM requires significantly lower communication costs than baselines. For example, to achieve 65.0% accuracy on CIFAR, VAFL needs 5381.4 MB while VIMADMM only requ...
Our methods (VIMADMM and VIMADMM-J) outperform baselines due to the multiple local updates enabled by ADMM ($\tau>1$). Compared with FedBCD under different numbers of local steps $\tau$, VIMADMM also achieves faster convergence and higher accuracy, which shows that the strategic utilization of ADMM...
C
When new connections between nodes are established, unlike previous approaches that blindly update the embeddings of related nodes (e.g., neighbors), Ada-DyGNN dynamically and adaptively distinguishes which nodes should be influenced and updated. Since determining whether one node should be updated will influence the sub...
In addition, Actor-Critic methods [78, 79, 80, 81] are a hybrid of these two kinds of methods, which make decisions according to a policy network and estimate the reward by a value function. In our method, we attempt to explore reinforcement learning to effectively capture temporal evolution for dynamic graph learnin...
The basic idea of reinforcement learning is to train an agent for decision making by interacting with the environment [67, 68, 69, 70, 71]. There are mainly two lines of methods in reinforcement learning [72, 73]: policy-based methods and value-based methods. Value-based methods, such as DQN [74] and SARSA [75], aim t...
Note that, since sampling which neighbors to update is discrete, we cannot optimize it through stochastic gradient descent based methods [42, 82]. More importantly, the process of deciding whether neighbor nodes should be updated or retained can be regarded as a sequential decision problem. Thus, we address this pr...
Moreover, the process of sampling which neighbors to update is discrete, posing a challenge for direct optimization through stochastic gradient descent-based methods [42]. In light of these, we attempt to address this problem via reinforcement learning, which excels in optimizing discrete sampling problems and can capt...
D
The number of sequences for each clothing condition in the original datasets and in our benchmarks used for training can be seen in Figure 4. Our benchmarks have a relatively smaller dataset volume compared with previous datasets, but the accuracy can be improved as further data are collected.
During the test, NM-1,2,3,4 are grouped into the gallery, while NM-5,6, BG-1,2, and CL-1,2 are divided into three subsets according to the walking conditions and are regarded as the probe. However, since the segmentation of CASIA-B is relatively rough, we collected some pedestrian images and trained a new segmentation mod...
TABLE II: The rank-1 accuracy (%) on CASIA-BN-RCC for different probe views excluding the identical-view cases. For evaluation, the sequences of NM-01,02,03,04 for each subject are taken as the gallery. The probe sequences are divided into three subsets according to the walking conditions (i.e. NM, BG, CL).
The performance comparison on OUMVLP-RCC is shown in Table III. Notice that we also divide the probe into two subsets according to the walking conditions, NM and CL; they are evaluated separately, and we take the accuracy for CL as the main criterion.
TABLE III: The rank-1 accuracy (%) on OUMVLP-RCC for different probe views excluding the identical-view cases. For evaluation, the sequences of NM-01 for each subject are taken as the gallery and the probe sequences of the two walking conditions, NM-00 and CL-00, are respectively evaluated.
B
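The gallery/probe protocol in the row above (rank-1 accuracy per probe view, excluding identical-view gallery entries) can be made concrete with a short generic sketch; the feature, identity, and view arrays below are hypothetical stand-ins, not the benchmark's evaluation code:

```python
import numpy as np

def rank1_accuracy(gallery_feat, gallery_id, gallery_view,
                   probe_feat, probe_id, probe_view):
    """Rank-1 accuracy excluding identical-view gallery entries.
    Features are (N, D) arrays; ids/views are (N,) arrays."""
    correct, total = 0, 0
    for f, pid, pview in zip(probe_feat, probe_id, probe_view):
        keep = gallery_view != pview            # exclude identical view
        dists = np.linalg.norm(gallery_feat[keep] - f, axis=1)
        nearest_id = gallery_id[keep][np.argmin(dists)]
        correct += int(nearest_id == pid)
        total += 1
    return correct / total
```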
$E\left[\hat{\mu}_{*}^{DE}(S^{A},S^{B})\right]=E\left[S_{a^{*}}^{B}\right]\approx\mu_{*}$...
is used by deep Q-learning to solve the overestimation problem of ME in DQN. Double DQN has two estimators: one decides the action index while the other evaluates the action value of the selected action. Double DQN (DDQN) then uses the evaluated action value to estimate the ground truth maxi...
ME uses the maximum of sample means to estimate the ground truth maximal expected value (MEV), while DPAV takes the partial average over the maximum and minimum of sample means. The minimum will shift the DPAV estimation towards the ground truth, so the upper bound of the DPAV estimator bias will be smaller than that o...
We start by presenting the main results about the bias of the Maximum Estimator (ME) and the Double Estimator (DE) reported in (Van Hasselt, 2013). As for the direction of the bias, ME is positively biased, while DE is negatively biased. ME is bounded by:
Since the MSE of an estimator is the sum of its squared bias and its variance, we should also consider the variance when evaluating an estimator. Van Hasselt (2013) proved that the variances of both ME and DE can be upper bounded by the sum of the variances of the sample means: $\operatorname{Var}(\hat{\mu}_{*}^{ME/DE})\leq\sum_{i=1}^{M}\frac{\sigma_{i}^{2}}{|S_{i}|}$.
C
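The bias directions stated in this row (ME positive, DE negative) are easy to verify numerically. Below is a small, self-contained Monte Carlo check under assumed Gaussian rewards with slightly different true means; the means, sample sizes, and trial count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, trials = 5, 10, 20000          # actions, samples per action, repeats
true_means = np.linspace(0.0, 0.2, M)
mev = true_means.max()               # ground-truth maximal expected value
me_est, de_est = [], []
for _ in range(trials):
    samples = rng.normal(true_means[:, None], 1.0, size=(M, n))
    means = samples.mean(axis=1)
    me_est.append(means.max())                 # Maximum Estimator
    mu_a = samples[:, : n // 2].mean(axis=1)   # Double Estimator: set A
    mu_b = samples[:, n // 2 :].mean(axis=1)   # set B
    de_est.append(mu_b[np.argmax(mu_a)])       # choose with A, evaluate with B
print(f"ME bias: {np.mean(me_est) - mev:+.4f}")   # positive
print(f"DE bias: {np.mean(de_est) - mev:+.4f}")   # negative
```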
Our model is able to recognize verb-noun pairs with performance similar to the baseline, as reported in Table 2. However, our model is trained on pre-extracted features $\mathbf{F}$ for each clip, and not on the image-based video clip $\mathbf{V}$. Due to the lower dimensionality of these features ($\mathbf{F}\in\mathbb{R}^{T\times...}$
Table 2 shows that applying all techniques (M+I+N) yields the best Top-1 accuracy for our framework, although performance on Top-5 predictions decreases slightly. We claim that by addressing the imbalance of the Ego4D dataset through Focal Loss, the model takes more risk by attempting t...
Our model is able to recognize verb-noun pairs with performance similar to the baseline, as reported in Table 2. However, our model is trained on pre-extracted features $\mathbf{F}$ for each clip, and not on the image-based video clip $\mathbf{V}$. Due to the lower dimensionality of these features ($\mathbf{F}\in\mathbb{R}^{T\times...}$
We also compare the influence of predicting intention correctly to the accuracy of action classification, illustrated in Table 3. The results show that there is a significant and direct relationship between noun and intention. By conditioning the action-level prediction framework through the intention, it is shown tha...
Finally, we investigate the performance of our whole framework based on the end-to-end evaluation. First, H3M classifies the actions and the intention from the observed clips. Then, based on these predictions, our I-CVAE model anticipates the $Z=20$ actions in the future. In Table 4 we evaluate the L...
A
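One option in the row above attributes the Top-1/Top-5 trade-off to using Focal Loss against class imbalance. For reference, a minimal multi-class focal loss in the standard formulation of Lin et al.; the $\gamma$ value below is an illustrative default, not the paper's setting:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss: down-weights easy, well-classified examples by
    (1 - p_t)^gamma so that rare classes contribute more to the loss."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # log p_t
    pt = log_pt.exp()
    return (-((1 - pt) ** gamma) * log_pt).mean()
```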
Let $\mathcal{X}=\langle\bm{x}_{1},\bm{x}_{2},\cdots,\bm{x}_{N}\rangle$...
The bottom panel of Fig. 7 shows the anomaly scores produced by our method COUTA. COUTA successfully identifies all of these anomaly cases, with markedly higher scores on true anomalies and consistently lower scores at normal moments. We use three pre-defined, fixed operations to create native anomaly examples. ...
This experiment quantitatively measures the interference of anomaly contamination with time series anomaly detectors; that is, we test the robustness of each anomaly detector w.r.t. different anomaly contamination ratios in the training set. Due to the continuity of time series data, we cannot directly remove or in...
Let $\mathcal{X}=\langle\bm{x}_{1},\bm{x}_{2},\cdots,\bm{x}_{N}\rangle$...
Unsupervised time series anomaly detection $f$ measures the abnormal degree of each observation and gives anomaly scores without accessing any label information, i.e., $f:\mathcal{X}\mapsto\mathbb{R}$. Higher anomaly scores indicate a higher likelihoo...
D
Binary variable that selects either $p_{gen}$ or $p_{copy}$
Contrary to other facets of NLG, such as chatbots, for which large-scale data can be harvested (Lowe et al., 2015; Abbott et al., 2016), D2T datasets are often smaller in scale and task-specific. Ferreira et al. (Ferreira et al., 2017) note that phrase-based translation models (Koehn et al., 2007) can outperform n...
Seq2Seq models (see §3.3) serve as the basis for neural NLG (Sutskever et al., 2011; Cho et al., 2014; Vaswani et al., 2017). As such, to compare the efficacy of neural architectures for long-form D2T, Wiseman et al. (Wiseman et al., 2017) compare the performance of various seq2seq models to their templated counterpa...
While human judgment is often considered the ultimate D2T evaluation measure, it is subject to a high degree of inconsistency (even for the same utterance), which may be attributed to judges' individual preferences (Walker et al., 2007; Dethlefs et al., 2014) - an issue that could be circumvented through...
Frameworks for MR (and graph) narration that shy away from dedicated graph encoders rely on effective linearization techniques - the representation of graphs as linear sequences, as illustrated in Figure 6b. While Ferreira et al. (Ferreira et al., 2017) note improvements in neural models with the adoption of a 2-step class...
A
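The first option in the row above describes a binary variable selecting between $p_{gen}$ and $p_{copy}$. Pointer-generator style models commonly relax this hard selector into a soft gate that mixes a vocabulary distribution with a copy distribution projected onto the vocabulary; a minimal sketch, where the constant gate value stands in for a learned predictor and is purely illustrative:

```python
import torch

def pointer_generator_mix(vocab_logits, copy_attn, src_token_ids):
    """Soft selector between generating and copying.
    vocab_logits: (B, V) decoder scores; copy_attn: (B, S) attention over
    source positions; src_token_ids: (B, S) source ids; returns (B, V)."""
    p_vocab = torch.softmax(vocab_logits, dim=-1)
    # p_gen would normally be predicted from the decoder state; a constant
    # stands in for it here purely for illustration.
    p_gen = torch.full((vocab_logits.size(0), 1), 0.7)
    p_copy_dist = torch.zeros_like(p_vocab)
    p_copy_dist.scatter_add_(1, src_token_ids, copy_attn)  # project onto vocab
    return p_gen * p_vocab + (1 - p_gen) * p_copy_dist
```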
For the second assumption, although almost all prior SGG work has noticed that ground-truth visual relations in existing datasets are always sparsely identified and annotated [32] (cf., Fig. 1(c)), these works still train their models by treating all un-annotated pairs as background, i.e., there is no visual relation...
To be more specific, NICE consists of three steps: 1) negative noisy sample detection (Neg-NSD): We reformulate the negative NSD as an out-of-distribution (OOD) detection problem, i.e., regarding all the positive samples as in-distribution (ID) training data, and all un-annotated negative samples as OOD test data. In t...
In this paper, we try to get rid of these two questionable assumptions and propose to reformulate SGG as a noisy label learning problem. Specifically, we propose a novel model-agnostic NoIsy label CorrEction and Sample Training strategy for SGG, dubbed NICEST. NICEST mitigates the noisy label learning problem from two...
Learning with Noisy Labels. Existing noisy label learning methods can be roughly grouped into two categories: 1) Utilizing an explicit or implicit noisy model to estimate the distributions of noisy and clean labels, and then deleting or correcting these noisy samples. These models can be in different formats, such as n...
In this paper, we argued that two plausible assumptions about the ground-truth annotations are inapplicable to existing SGG datasets. To this end, we reformulated SGG as a noisy label learning problem and proposed a novel model-agnostic noisy label correction and sample training strategy: NICEST. It is composed of NICE...
B
Selfish mining is not reported frequently in existing cryptocurrencies because selfish miners cannot find practical ways to launch the attack easily. It is widely accepted that the attacker’s computational power needs to exceed 33% to gain higher profits than honest mining. Attackers need to either occupy more than 33%...
Selfish mining is not reported frequently in existing cryptocurrencies because selfish miners cannot find practical ways to launch the attack easily. It is widely accepted that the attacker’s computational power needs to exceed 33% to gain higher profits than honest mining. Attackers need to either occupy more than 33%...
Based on the partial block sharing strategy, we propose a new and practical mining attack called Partial Selfish Mining (PSM). As shown in Figure 1(b), PSM starts as selfish mining to withhold a newly mined block. Then, the attacker can launch the partial block-sharing strategy and finally releases the secret by broadc...
Selfish Mining: Selfish mining was first proposed by Eyal et al. [9]. A selfish mining attacker can earn extra rewards by intentionally generating a fork. When an attacker discovers a new block in selfish mining, it will keep the block as its private branch and keep mining on top of it. When other miners ...
It is impractical for a single miner to launch selfish mining on large-scale public blockchains. On the one hand, it is hard to occupy more than 33% of mining power to ensure successful selfish mining, e.g., in Bitcoin. According to [blockexplorer.com], the largest mining pool in Bitcoin only occupies 16% of overall mini...
D
Finally, in Figure 1(c), we vary $\beta_{2}\in\{0.9,0.99,0.999\}$ as we fix $\beta_{1}=0.9$ and $\eta\in\{$5e-5, 4e-4$\}$...
Figure 10: Adam finds sharper solutions when $\eta$ or $\beta_{2}$ is small. We train a WRN on CIFAR-100 until reaching train loss 0.1, and we record the sharpness $\lambda_{1}(\mathbf{H}(\mathbf{x}^{*}))$...
To test this hypothesis, in Figure 10(a), we use full-batch Adam at a range of learning rates to train a Wide ResNet on CIFAR-100 until reaching a training loss value of 0.1. We plot the sharpness $\lambda_{1}(\mathbf{H}(\mathbf{x}^{*}))$...
In particular, we consider (a) a CNN on CIFAR-10; (b) an un-normalized Wide ResNet (WRN) [47] on CIFAR-10; and (c) a (batch-normalized) WRN on CIFAR-100. Furthermore, Figure 7 in the subsequent section verifies that our minibatch findings apply to transformers on WMT machine translation.
In Figure 11, we train a WRN on CIFAR-100 (left two panes) and a fully-connected network on CIFAR-10 (right two panes) using both Adam and momentum GD. It is impossible to make a blanket claim that Adam finds sharper solutions than momentum GD, since momentum at a small learning rate finds sharper solutions than Adam a...
C
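The sharpness measure $\lambda_{1}(\mathbf{H}(\mathbf{x}^{*}))$ used throughout this row is the largest Hessian eigenvalue, which is typically estimated without forming $\mathbf{H}$ by power iteration on Hessian-vector products. A generic sketch under assumed inputs (the loss, parameter list, and iteration count are illustrative, not the paper's exact procedure):

```python
import torch

def sharpness(loss, params, iters=50):
    """Estimate lambda_1 of the loss Hessian via power iteration on
    Hessian-vector products (double backprop). `loss` is a scalar built
    from differentiable ops; `params` is a list of trainable tensors."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        norm = torch.sqrt(sum((u * u).sum() for u in v))
        v = [u / norm for u in v]
        gv = sum((g * u).sum() for g, u in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)  # H v
        eig = sum((h * u).sum() for h, u in zip(hv, v))  # Rayleigh quotient
        v = [h.detach() for h in hv]
    return eig.item()
```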
We can observe that the free-energy landscape in the low-dimensional manifold calculated by mrse is highly heterogeneous, with multiple partially unfolded intermediate states and many possible reaction pathways, as shown in Fig. 4(a). Such a complex free-energy landscape shows that the dynamics of CLN025 is...
Comparing the free-energy barriers between the different embeddings in Fig. 4, we can see that they are similar, particularly for the mrse embedding and the free-energy surface spanned by the distance and the radius of gyration, i.e., from 10 to 15 kJ/mol. We can compare our results to the unbiased simulation data from...
In practice, free-energy landscapes for systems severely affected by the sampling problem are characterized by many metastable states separated by high kinetic barriers that impede transitions between metastable states. Consequently, on the timescales we can simulate, the system stays kinetically trapped in a single fr...
We can observe that the free-energy landscape in the low-dimensional manifold calculated by mrse is highly heterogeneous, with multiple partially unfolded intermediate states and many possible reaction pathways, as shown in Fig. 4(a). Such a complex free-energy landscape shows that the dynamics of CLN025 is...
In Fig. 4, we can see the lower-lying free-energy basins in the reweighted stochastic embeddings are captured by both mrse and stke. We can also notice a slight difference between the metastable states lying higher in free energy. Specifically, mrse captures more states below a threshold of 25 kJ/mol in comparison to t...
D
The main drawback of the methods discussed here is the availability of data to which they can be applied. Such data come from processed images. We have treated both the acquisition and the initial processing of images to form a skeletonisation as a black box. In order to move towards patient-specific coronary flow models, ...
The data discussed here do not include a right coronary arterial tree nor a venous side, yet such networks are vital for coronary flow models. As has been seen, large data sets detailing the porcine coronary arterial and cardiac venous morphometry have been presented by Kassab and colleagues (Kassab et al., 1993, 1994...
The goals of the study were to develop algorithms that robustly and reliably allow a user to generate arterial trees from a large graph representing the left coronary vascular tree, and to determine the region(s) of the ventricle that are perfused from the terminal arteries of a generated subtree. Both goals have been...
Briefly, using spatial data describing the coronary arterial vasculature from a single porcine heart obtained from fluorescence cryomicrotome images (Goyal et al., 2012) and image processing techniques, we have developed algorithms to organise and search the data in order to build subtrees from the data. These subtrees...
Kassab et al. (Kassab et al., 1993) find the Strahler order of the left porcine coronary arterial tree to be 11. In their corrosion casting study, Kassab et al. captured images of vessels with radii in the range 4.23 µm – 1.7 mm. The ra...
A
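The Strahler order cited in this row is computed recursively: a terminal segment has order 1, and a parent takes the maximum child order, incremented by one only when that maximum is attained by two or more children. A small sketch on an assumed adjacency-list tree (the data structure is hypothetical, not the authors' pipeline):

```python
def strahler(tree, node):
    """Strahler order of `node` in `tree`, a dict mapping each node to a
    list of child nodes (terminal vessels have no entry)."""
    children = tree.get(node, [])
    if not children:
        return 1                      # terminal segment
    orders = [strahler(tree, c) for c in children]
    top = max(orders)
    # Increment only when the highest order meets itself at this junction.
    return top + 1 if orders.count(top) >= 2 else top

# Toy bifurcating tree: root splits into a and b; a bifurcates again.
tree = {"root": ["a", "b"], "a": ["a1", "a2"]}
print(strahler(tree, "root"))   # 2: a has order 2, b has order 1
```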
In a survey by [7], the extrapolation ability of GNNs mainly concerns two aspects: the ability to extrapolate to unseen structural features and to out-of-distribution attributes. [14] conclude that size generalization is only possible on graphs with certain structural features. Mo...
Based on the above results, one can see that our model outperforms all baselines in terms of accuracy, inference speed, and generalization to open-world input. For Table IV, we observed that our model outperforms the baselines in terms of accuracy. We consider this to verify the validity of both our modified problem formulatio...
In our model, we leverage the conclusions from the aforementioned works and cast the graph-size-generalization problem into a transferred formulation of the graph, which is considered learnable by spectral graph-convolution methods. On the other hand, extrapolation towards out-of-distribution data is a fundamental chal...
To handle out-of-distribution per-node attribute values in larger graphs, we use NALU [16] to replace the MLP in the common GNN setup. Another motivation is to build equivalence in the embedding space between a node's out-traffic attribute and the in-traffic attributes from its neighbors. This is because the in-tra...
In a survey by [7], the extrapolation ability of GNNs mainly concerns two aspects: the ability to extrapolate to unseen structural features and to out-of-distribution attributes. [14] conclude that size generalization is only possible on graphs with certain structural features. Mo...
B
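NALU, mentioned in this row as an MLP replacement for out-of-distribution numeric attributes, combines an additive path with a log-space multiplicative path under a learned gate (Trask et al., 2018). A compact sketch of the standard cell; the dimensions and initialization scale are illustrative:

```python
import torch
import torch.nn as nn

class NALU(nn.Module):
    """Neural Arithmetic Logic Unit: W = tanh(W_hat) * sigmoid(M_hat) biases
    weights toward {-1, 0, 1}, supporting extrapolation on arithmetic."""
    def __init__(self, d_in, d_out, eps=1e-8):
        super().__init__()
        self.W_hat = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.M_hat = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.G = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.eps = eps

    def forward(self, x):
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        a = x @ W.t()                                          # additive path
        m = torch.exp(torch.log(x.abs() + self.eps) @ W.t())   # multiplicative
        g = torch.sigmoid(x @ self.G.t())                      # learned gate
        return g * a + (1 - g) * m
```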
We compare the SPR-UCB method with several baselines on the Atari 100K benchmark, including (1) SimPLe (Kaiser et al., 2020), which learns an environment model based on the video prediction task and trains a policy under the learned model; (2) DER (van Hasselt et al., 2019) and (3) OTR (Kielak, 2020), which improve Rainbow ...
We compare the SPR-UCB method with several baselines on the Atari 100K benchmark, including (1) SimPLe (Kaiser et al., 2020), which learns an environment model based on the video prediction task and trains a policy under the learned model; (2) DER (van Hasselt et al., 2019) and (3) OTR (Kielak, 2020), which improve Rainbow ...
(1) SPR considers multi-step consistency in addition to the one-step prediction of our proposed contrastive objective; namely, SPR incorporates the information of multiple steps ahead of $(s_{h},a_{h})$ ...
We illustrate the aggregated mean of human normalized scores among all tasks in Figure 1. We report the score for each task in Appendix F. In our experiments, we observe that (1) both SPR and SPR-UCB significantly outperform baselines that do not learn temporally consistent representations, including DER, OTR, SimPLe, CU...
the state-of-the-art RL approach with contrastive learning on the benchmark Atari 100K (Kaiser et al., 2020). SPR utilizes temporal information and learns the representation by maximizing the similarity between future state representations and the corresponding predicted next state representations based on the...
C
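The similarity objective described in this row (maximizing agreement between predicted and actual future state representations) is commonly implemented as a negative cosine similarity against a stop-gradient target. A generic sketch of that pattern, not the exact SPR code; tensor shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(pred_latents, target_latents):
    """Negative cosine similarity between predicted future representations
    and stop-gradient target representations, averaged over batch and steps.
    pred_latents, target_latents: (B, K, D) for K future steps."""
    pred = F.normalize(pred_latents, dim=-1)
    target = F.normalize(target_latents.detach(), dim=-1)  # no grad to target
    return -(pred * target).sum(dim=-1).mean()
```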
We combine importance sampling with MLMC to reduce the relative estimator variance in the rare-event regime by extending the results in (Ben Rached et al., 2023) to the multilevel setting. Combining importance sampling with MLMC has been previously explored in other contexts, including standard SDEs (Ben Alaya et al., ...
We combine importance sampling with MLMC to reduce the relative estimator variance in the rare-event regime by extending the results in (Ben Rached et al., 2023) to the multilevel setting. Combining importance sampling with MLMC has been previously explored in other contexts, including standard SDEs (Ben Alaya et al., ...
We extend the DLMC estimator introduced by (Ben Rached et al., 2023) to the multilevel setting and propose a multilevel DLMC estimator for the decoupling approach (dos Reis et al., 2023) for MV-SDEs. We include a detailed discussion on the bias and variance of the proposed estimator and devise a complexity theorem, di...
In (Ben Rached et al., 2023), we introduced the DLMC estimator, based on a decoupling approach (dos Reis et al., 2023) to provide a simple importance sampling scheme implementation minimizing the estimator variance. The current paper extends the DLMC estimator to the multilevel setting, achieving better complexity tha...
The remainder of this paper is structured as follows. In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) a...
B
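As background for the multilevel construction discussed in this row, the generic MLMC estimator telescopes the finest-level expectation into level-wise corrections, $E[P_L]=E[P_0]+\sum_{l=1}^{L}E[P_l-P_{l-1}]$, each estimated from coupled samples. A bare-bones sketch in which `sample_level` is an assumed stand-in payoff simulator, not the paper's MV-SDE/DLMC scheme:

```python
import numpy as np

def mlmc_estimate(sample_level, num_levels, samples_per_level, rng):
    """Generic MLMC: sum over levels of the mean coupled difference
    Y_l = P_l - P_{l-1} (with Y_0 = P_0). `sample_level(l, coarse, rng)`
    returns one payoff sample at level l; coarse=True uses the coarser
    discretization with the same underlying randomness (the coupling)."""
    total = 0.0
    for l in range(num_levels):
        diffs = []
        for _ in range(samples_per_level[l]):
            seed = rng.integers(1 << 31)     # shared randomness -> coupling
            fine = sample_level(l, False, np.random.default_rng(seed))
            coarse = 0.0 if l == 0 else sample_level(
                l, True, np.random.default_rng(seed))
            diffs.append(fine - coarse)
        total += float(np.mean(diffs))
    return total
```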
Suppose we extrapolate the ≈0.05 m/s spent by the spacecraft in the Hohmann-like transfer plus orbital maintenance in the 800 m orbit (tighter than the tightest 1 km orbit of OSIRIS-REx [54]). In that case, the spacecraft could still orbit Bennu, and make similar orbital transfers, for about 227 days before re...
Well-designed guidance and control laws can allow an autonomous spacecraft to operate more boldly, even with a higher level of navigation uncertainty. On top of that, there is no significant compromise in the $\Delta V$ budget, as one might expect. Therefore, a fully autonomous mission in c...
We would like to emphasize that our intention is not to advocate for a universal approach of “rapid exploration” in all asteroid missions. Instead, our objective is to illustrate the lack of necessity in minimizing uncertainties to an excessively low level for autonomous robotic spacecraft. We aim to demonstrate that ...
It is also crucial to emphasize that the comparison of these magnitudes with the OSIRIS-REx mission and other missions hereafter serves only to provide a notion of the order of magnitude of the $\Delta V$ budget in real mission cases. The intention is only to showcase that the architecture proposed...
In addition to these benefits, and more importantly, an autonomous and rapid approach to exploration can shape current scientific asteroid missions to be more cost-effective and time-efficient. Current missions have a conservative and cautious operational profile, often taking months of surveying and slowly approaching...
A
A shortcoming of the present approach lies in the difficulty of characterizing the structural dependency on the input datum more explicitly. In particular, while (Bennett et al., 2007, Proposition 5.2) guarantees the existence of suitable $\delta,\Delta>0$ for each (feasible, simple) Brasca...
A growing body of literature has analyzed a formulation of the problem, which links to the more general operator scaling problem (Allen-Zhu et al., 2018; Garg et al., 2015, 2018; Kwok et al., 2019; Franks, 2018; Bürgisser et al., 2018, 2019). We briefly recall some of the key results in this line of work and comment on...
A shortcoming of the present approach lies in the difficulty of characterizing the structural dependency on the input datum more explicitly. In particular, while (Bennett et al., 2007, Proposition 5.2) guarantees the existence of suitable $\delta,\Delta>0$ for each (feasible, simple) Brasca...
In future work, we hope to further investigate approaches for computing $\delta,\Delta$ explicitly. In addition, we hope to apply the techniques presented here to geodesically convex optimization problems similar to (1.3), such as the operator scaling problem (Garg et al., 2018) and difference o...
The map $G$ resembles the alternate scaling algorithm for Brascamp-Lieb constants (Garg et al., 2018, Alg. 1). The resemblance of both approaches derives from an exploitation of the difference-of-convex structure of problem 1.3 (see also (Weber and Sra, 2023)). However, the Thompson geometry perspective employ...
C
An (abstract) simplicial complex is a collection $\mathscr{C}$ of subsets of a finite set $V$ such that $\tau\subseteq\sigma\in\mathscr{C}$ implies $\tau\in\mathscr{C}$. An element $\sigma\in\mathscr{C}$ is cal...
We begin by introducing the concept of weights in simplicial complexes. A weight function $w:\mathscr{C}\to\mathbb{R}_{\geq 0}$ assigns weights (nonnegative real numbers) to each simplex $\sigma$ in the com...
Mathematically, the boundary operator acts on a $k$-simplex by summing over all its $(k-1)$-dimensional faces. For each face, it creates a new $(k-1)$-dimensional chain with the opposite coefficient. Intuitively, this captures the idea that the boundary of a $k$-...
In Euclidean space, the lower-dimensional simplices are called vertex, edge, triangle, and tetrahedron for a 0-simplex, a 1-simplex, a 2-simplex, and a 3-simplex, respectively. Furthermore, the higher-dimensional simplices are polytopes analogous to triangles and tetrahedra.
By modding out the cycle space by the boundary space, we effectively exclude cycles that are simply the boundaries of higher-dimensional simplices. This ensures that we focus on nontrivial cycles that capture the true topological features of the complex. Lastly, homologous cycles represent $k$-dimensional loo...
C
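The cycles-mod-boundaries quotient described in this row becomes a plain rank computation over $\mathbb{Z}_2$ coefficients, where $\beta_k=\dim\ker\partial_k-\dim\operatorname{im}\partial_{k+1}$. A small sketch for a hollow triangle (no 2-simplex, so the expected $\beta_1$ is 1); the matrix encoding is an illustrative convention:

```python
import numpy as np

def z2_rank(mat):
    """Rank of a binary matrix over Z_2 via Gaussian elimination."""
    m = mat.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]       # XOR row elimination over Z_2
        rank += 1
    return rank

# Hollow triangle: vertices {0,1,2}, edges {01, 02, 12}, no filled face.
# Boundary matrix d1: rows = vertices, cols = edges (signs vanish over Z_2).
d1 = np.array([[1, 1, 0],
               [1, 0, 1],
               [0, 1, 1]], dtype=np.int64)
n_edges = d1.shape[1]
betti1 = (n_edges - z2_rank(d1)) - 0  # dim ker d1 minus dim im d2 (no faces)
print(betti1)                          # 1: the single nontrivial loop
```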
where $\phi$ is the domain set, and $M_{\mu}(\phi)$ and $M_{\sigma}(\phi)$ represent the statistics for the $\phi$...
Based on the upper bound of the generalization error in Eq. 4, the proposed method needs to satisfy two requirements: 1) most of the modules in the model are shared across all domains, so they can be sufficiently trained by all samples, and 2) the model can reduce the interference caused by the domain gap between different domains. Theref...
Remark. In our multi-task learning framework, using independent BN layers can effectively mitigate the interference between different domains, as shown in Eq. 4. In addition, in our method, most of the modules are shared across all domains, which sufficiently exploits all samples to reduce the third term in Eq. 4. Hence, our m...
In this part, we will describe our multi-task learning framework. Since there are multiple different domains in the SSDG task, we consider training each domain as an independent task (i.e., the local task for each domain), which can effectively reduce the interference between different domains during pseudo-labeling. ...
In this paper, our goal is to address the semi-supervised domain generalization (SSDG) task. In this task, a few labeled samples in each domain are available, while most samples lack label information; thus, a key problem is how to generate accurate pseudo-labels. Different from the conventional semi-supervis...
B
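The shared-backbone-with-independent-BN design described in this row can be sketched as a module holding one BatchNorm per domain while all other parameters stay shared. This is a generic illustration of the idea, not the authors' code; shapes and domain count are assumptions:

```python
import torch
import torch.nn as nn

class DomainBN(nn.Module):
    """One BatchNorm2d per domain: each domain keeps its own normalization
    statistics and affine parameters while conv weights remain shared."""
    def __init__(self, num_features, num_domains):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_domains))

    def forward(self, x, domain_idx):
        return self.bns[domain_idx](x)

# Usage: a shared conv followed by domain-specific normalization.
conv = nn.Conv2d(3, 16, 3, padding=1)   # shared across domains
dbn = DomainBN(16, num_domains=3)
x = torch.randn(8, 3, 32, 32)
y = dbn(conv(x), domain_idx=1)          # batch drawn from domain 1
```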
Prior art on fine-tuning ConvNets for multiple visual domains is restrictive in generalization and parameter efficiency. Bias Tuning [2], which tunes only the bias terms of the backbone, might fail on domains with significant distribution shifts from the pre-training tasks.
Residual Adapter [48] and TinyTL [7] are mainly designed for small networks such as ResNet-26 [19] and MobileNet [23, 6]. It is prohibitive to scale these previous designs to larger ConvNets [36] or more diverse domains [60]. Besides, previous PET methods [21, 24, 31, 30, 18] are mainly designed with Transformer [56] a...
It is non-trivial to design effective PET methods for ConvNets because previous PET modules are mainly developed on Transformers rather than ConvNets. Besides, the components of the architecture and computation dynamics of ConvNets and Transformers are inherently different.
Previous PET methods insert the adapting modules into Self-Attention blocks, Feed-Forward blocks, or both [18] of Transformers, which have a relatively unified architecture. In contrast, modern ConvNets usually stack either residual blocks [51, 19, 61] or inverted residual blocks [23, 52, 53, 36], which consist of a se...
Recent works [25, 1, 10] that attempt to use Prompt Tuning [30] and Adapters [21] on CV tasks are also designed for Vision Transformers rather than ConvNets. Furthermore, the downstream CV tasks are usually more diverse with a larger domain gap compared with NLP [45].
A
The left panel of Figure 2 presents the detected changepoints over the temporal axis. In comparison, the PINNs (footnote: standard PINNs assume a constant coefficient over time, which performs much worse in changepoint scenarios; to ensure a fair comparison, we modify PINNs to allow for a time-dependent coefficient). This highl...
Figure 3 shows the performance of CP-PINNs in discovering changepoints and solving (16). Specifically, the leftmost panel illustrates the precise solution across a uniform temporal scale. Identifying the locations of changepoints remains challenging even when the solutions are known. In the second panel, the identical ...
The left panel of Figure 2 presents the detected changepoints over the temporal axis. In comparison, the PINNs (footnote: standard PINNs assume a constant coefficient over time, which performs much worse in changepoint scenarios; to ensure a fair comparison, we modify PINNs to allow for a time-dependent coefficient). This highl...
Figure 1: The dataset is partitioned based on spatial information, with each batch encompassing the full temporal information. In the online learning approach, the network is trained using the previous distribution of loss weights and updated based on the data from the subsequent batch.
In this work, we present an innovative methodology that combines changepoints detection with PINNs to address changes and instabilities in the dynamics of PDEs. This approach marks the first exploration into simultaneously detecting changepoints and estimating unknown parameters within PDE dynamics based on observed d...
A
For example, $babcd$ and $abcbcd$ are not primitive, because they are generated by $abcd$; but $abcbda$...
Theorem 1 is relatively surprising: let $u$ and $v$ be primitive words. Now suppose we go for a walk on $u$ and, independently, go for a walk on $v$; recalling the stipulation that the two walks visit every position in the words they apply to, the theorem says that, provided only tha...
change our position in $u$ by more than one letter at a time, and we visit every position of $u$ at least once. If $w=u^{f}$ for $f$ a walk, we say that $u$ generates $w$.
for which such $u$, $v$, $f$ and $g$ exist. Observe that, since $u$ and $v$ are primitive, they feature no immediately repeated letter. So suppose $w$ does—i.e. is of the form $w=xaay$ for some ...
(go to the $b$, then turn back to the $a$, then turn again and go to the end). For the converse, suppose that $w$ is not primitive, let $u$ be a generator of $w$ with $|u|<|w|$, and let
A
In the examples below, we will consider the situation in which we have discretized the source term $q(\mathbf{x})=\sum_{k}q_{k}s_{k}(\mathbf{x})$...
The methods in the middle of the spectrum mentioned above and in the figure use deterministic methods to make the solution of statistical formulations more efficient. Yet, relatively little has been done to bring information from the (expensive) Bayesian perspective to selectively enrich the deterministic solution, at ...
For many inverse problems – such as ultrasound or seismic imaging, or electrical impedance tomography – the quantity we would like to identify is not a right-hand source term, but a coefficient in the operator on the left side of the equation. In these cases, the definition of the information density will have to be l...
As a first example of where we believe that information densities could be used, we consider the regularization of inverse problems. In a large number of practical applications, one regularizes inverse problems by adding a penalty term to the misfit function for the purpose of penalizing undesirable aspects of the rec...
Herein, let us first provide some perspectives in Section 2 on how one formulates inverse problems, and how the different philosophical approaches inform our approach. We then address the goals mentioned in the previous paragraph by first considering a finite-dimensional, linear model problem in Section 3 that we use ...
C
Indeed, working with large collections of cultural material is not always a linear process. This is especially true for data that were previously structured and labeled by several people with different goals in mind, since such data can have inconsistencies and contradictions that are not always visible to the research...
The first computational step consists of processing images, textual metadata, and labels. For image pre-processing, we tested multiple methods to extract the illuminations from the images. Although the method in Grana et al. (Grana et al., 2011) based on the Otsu algorithm (Otsu, 1979) showed good results on subsets o...
We did not extract the illuminations from the scanned manuscripts prior to processing because the state-of-the-art methods for doing so were not robust. The complexity of page structures, backgrounds, and preservation statuses led to insufficient and unusable results. Depending on the content of the raw image, this can...
The label hierarchy view shows the current state of the underlying label hierarchy. We use the Sugiyama framework (Sugiyama et al., 1981) to draw a directed acyclic graph for the label hierarchy. In the first step, a depth-first search checks for each node whether the graph contains cycles. If there is one, the ...
For the next image processing steps, we apply the EfficientNet B7 (Tan & Le, 2019) that was pre-trained on ImageNet  (Deng et al., 2009). We use the top layer of the network to compute the image embeddings for each image in the dataset. These embeddings are used to compute nearest neighbor similarities between the imag...
A
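The embedding-plus-nearest-neighbor step in this row maps onto a few lines of standard tooling. A hedged sketch using torchvision's pre-trained EfficientNet-B7 (the model choice matches the row; using the pooled features before the classifier as the "top layer", and the stand-in input tensors, are assumptions):

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pre-trained EfficientNet-B7; strip the classifier to get pooled features.
net = models.efficientnet_b7(weights=models.EfficientNet_B7_Weights.DEFAULT)
net.classifier = torch.nn.Identity()
net.eval()

@torch.no_grad()
def embed(batch):              # batch: (N, 3, H, W), ImageNet-normalized
    return F.normalize(net(batch), dim=-1)

images = torch.randn(4, 3, 224, 224)   # stand-in for preprocessed scans
emb = embed(images)
sims = emb @ emb.t()                    # cosine similarities between images
nearest = sims.fill_diagonal_(-1).argmax(dim=1)  # nearest neighbor per image
```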
Table I: Details of all datasets that we study in this paper. Note that “|Train|” is the training corpus size and “Training Details” are the hyper-parameters of prompt tuning for each dataset. Here, for clarification, we refer to tasks containing fewer than 20K training samples as small tasks, and the others as large...
Table XV: Results (%) of cross-task prompt transfer on BERT-small. The red-colored row shows the results of full-tuning BERT-small model, while orange-colored ones denote prompt tuning without any prompt transfer. Notably, positive transfers are in green and “Avg.” denotes the average performance of all target task...
Table III: Average performance (%) of all target datasets (full results can be seen in Appendix). Note that the column “Vanilla” denotes the results of vanilla PoT. Positive prompt transfers are in bold. Numbers in the subscript show the corresponding model parameters of these backbone models.
Table XV: Results (%) of cross-task prompt transfer on BERT-small. The red-colored row shows the results of full-tuning BERT-small model, while orange-colored ones denote prompt tuning without any prompt transfer. Notably, positive transfers are in green and “Avg.” denotes the average performance of all target task...
Table XV: Results (%) of cross-task prompt transfer on BERT-small. The red-colored row shows the results of full-tuning BERT-small model, while orange-colored ones denote prompt tuning without any prompt transfer. Notably, positive transfers are in green and “Avg.” denotes the average performance of all target task...
B
Targets Promoted by the IT Army of Ukraine. Many announcements and targeted domains were posted in the first two weeks after the invasion, beginning on 26 February, peaking on 27 February with 40 announcements and 45 domains promoted (IP addresses were not regularly included until later), see Figure 6. Yet they quickl...
Target Selection. Targets were often themed, patterned around particular weekdays; e.g., online news and propaganda, food delivery, and entertainment were often attacked at weekends to maximise impact, as people spend more time online then. Themes were also occasionally set with re-promoted old targets, leading to wide variat...
Crossover with Observed Attacks. The IT Army of Ukraine maintains a dashboard of targets’ status, claiming many are down due to their actions. To find whether the attacks involved reflected DDoS or defacement, we correlate our attack records with promoted targets since the Telegram group started. We consider a defacem...
While the number of announcements dropped, the number of targets steadily increased, particularly in May and June 2022 with multiple-target posting. Activities were unstable at that time; targets got promoted less frequently and occasional days had no targets. Targets were mostly fresh in the first two weeks, but then ...
Targets Promoted by the IT Army of Ukraine. Many announcements and targeted domains were posted in the first two weeks after the invasion, beginning on 26 February, peaking on 27 February with 40 announcements and 45 domains promoted (IP addresses were not regularly included until later), see Figure 6. Yet they quickl...
C
Using many attack steps may be preferable in a white-box scenario, since it allows targeting less prominent features in the victim model and thus creating less noticeable perturbations. Nevertheless, in the black-box scenario, this might hinder attack transferability, since these features might only be present in th...
We direct the reader’s attention to the results presented in Figures 4 and 5, which present the performance of our proposed ranking strategy, HET, across various datasets and model architecture pairings. Figure 4 details the outcomes for CIFAR10 and ImageNet, while Figure 5 delves into the X-Ray and Road Sign datasets...
Our experiments also shed light on the vulnerabilities of various victim models. Among them, EfficientNet, a popular deep learning architecture, was found to be the most susceptible victim model across all datasets. This discovery emphasizes the need for robustness enhancements in EfficientNet and simila...
In this strategy, the attacker utilizes multiple surrogate models ($F_{0}$) to approximate the expected transferability of $x'$ to $f$, as expressed in (4). Here, the performanc...
For all of our experiments, we consider a black-box adversary that has no knowledge of the victim’s architecture. To simulate this setting, we ensured that the architectures used for $f$, $f'$, and those in $F_{0}$...
B
Consequently, the fidelity kernel should only be used for datasets where the data-embedded states are ‘not too distant’ in the Hilbert space. Finally, our study of noise suggests that polynomial-depth data embeddings in noisy hardware suffer from exponential concentration, thus presenting a serious barrier to achieve a...
There are a number of causes that can lead to barren plateaus, including using variational ansätze that are too expressive [20, 33, 34] or too entangling [23, 35]. However, barren plateaus can even arise for inexpressive and low-entangling QNNs if the cost function relies on measuring global properties of the system [2...
$\operatorname{Var}_{\boldsymbol{\theta}}[\kappa_{\boldsymbol{\theta}}(\boldsymbol{x}_{i},\boldsymbol{x}_{j})]$ and find that the same ones that lead to BP...
Here we argue that quantum kernel methods experience a similar barrier to barren plateaus. Crucially, the trainability guarantees enjoyed by kernel methods only become meaningful when the values of the kernel can be efficiently estimated to a sufficient precision such that the statistical estimates contain information...
In addition, we show that training parametrized quantum kernels using kernel target alignment suffers from an exponentially flat training landscape under similar conditions to those leading to barren plateaus in QNNs. That is, when constructing the parametrized part of the data embeddings, one should avoid features tha...
D
The 3D CNNs we employ in this work are initially designed for recognising general human behaviours and trained on human behaviour datasets such as Kinetics-400 and Kinetics-600. These datasets consist of video clips with relatively high frame rates (25 fps) [3]. Therefore, in order to efficiently extract motion cl...
Human drivers predict lane change intentions mainly using visual cues rather than physical variables. However, existing works that utilize appearance features for lane change are surprisingly few. In [19], two appearance features, the state of brake indicators and the state of turn indicators, are used for lane change re...
In this work, we propose an end-to-end framework involving two approaches for lane change recognition (classification) and prediction of surrounding vehicles in highway scenarios. Seven state-of-the-art 3D action recognition models are investigated, including one I3D model, two SlowFast models, and four X3D models, for bot...
To address this problem, we propose a novel end-to-end framework involving two approaches for lane change recognition. Specifically, our method takes advantage of recent powerful action recognition models and mimics human drivers' behaviour by only utilizing visual information. The first approach (RGB+3DN) ...
Two approaches involving seven 3D action recognition networks are adopted to classify and anticipate lane change events on the PREVENTION dataset. For our RGB+3DN method, the lane change recognition problem is formulated as an action recognition task only utilising visual information collected by cameras. The best per...
D
7:  $c^{s}\leftarrow\mathcal{A}(\mathcal{Y}(p^{s}))$
have $|c_{\pi[i]}-c_{t}|\leq|c_{\pi[j]}-c_{t}|$...
7:  $c^{s}\leftarrow\mathcal{A}(\mathcal{Y}(p^{s}))$
$\mathcal{A}:P\rightarrow(0,1)^{n},\; p\mapsto c=(c_{1},\ldots,c_{n})$
8:  $R_{s}\leftarrow(c_{t}^{*}-c_{t}^{s})$
D
CoOp, proposed by Zhou et al. [5], is the first method that successfully introduces prompt tuning to VLMs. Following this line, several prompt tuning methods were proposed to further improve the effectiveness and versatility for the few-shot recognition task. CoCoOp [44] was proposed to address the class shift problem by in...
For this aim, we proposed soft context shared prompt tuning, SoftCPT, which adopts a meta network shared across all tasks to generate the context of a task based on its task description text. The meta network extracts task features with a pre-trained language model and transforms the task features into the neede...
The above “pre-train then prompt-tune” paradigm suggests that for some specialized areas we only need to keep one copy of the pre-trained model and activate it to accommodate various downstream tasks. In real-life applications, it is very natural to assume there are relationships between these tasks. From prior re...
CoOp, proposed by Zhou et al. [5], is the first method that successfully introduces prompt tuning to VLMs. Following this line, several prompt tuning methods were proposed to further improve the effectiveness and versatility for the few-shot recognition task. CoCoOp [44] was proposed to address the class shift problem by in...
Multi-task learning (MTL) is an important subfield in machine learning [48, 17, 49]. By exploiting task relatedness, it is able to improve the performance over single-task learning. There are two dominant methods for deep multi-task learning, hard and soft parameter sharing, which learn identical and similar features, ...
D
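The shared meta network described in this row (task description text, to task feature, to soft prompt context vectors) can be sketched generically. Everything below is an illustrative assumption about shapes and components, not the SoftCPT implementation; the task features are presumed to come from a frozen text encoder:

```python
import torch
import torch.nn as nn

class PromptMetaNet(nn.Module):
    """Shared across tasks: maps a task-description feature to M soft
    context vectors of width D that condition the VLM's text prompt."""
    def __init__(self, feat_dim, ctx_len, ctx_dim):
        super().__init__()
        self.ctx_len, self.ctx_dim = ctx_len, ctx_dim
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, ctx_len * ctx_dim))

    def forward(self, task_feat):            # (B, feat_dim)
        ctx = self.net(task_feat)
        return ctx.view(-1, self.ctx_len, self.ctx_dim)

meta = PromptMetaNet(feat_dim=512, ctx_len=16, ctx_dim=512)
task_feat = torch.randn(2, 512)   # stand-in for encoded task descriptions
context = meta(task_feat)         # (2, 16, 512) soft prompt contexts
```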