context: string (lengths 250 – 6.18k)
A: string (lengths 250 – 3.82k)
B: string (lengths 250 – 8.2k)
C: string (lengths 250 – 4.99k)
D: string (lengths 250 – 4.17k)
label: string (4 classes)
“I have an app named Google family and I use that one. And there’s another one that I use it for his game console. So that’s pretty much it. I monitor through that. And then I put some kind of restrictions sometimes. There are certain stuffs that are not for him that are very adult contents that I cannot but block.”-P...
Only one parent, P4, expressed strong concern about his mobile data privacy; he said he would therefore deny all permission requests. Next, we discuss how the participants described engaging in monitoring their family members’ online app safety.
”I usually don’t monitor what he does, like on the apps, because I kind of trust him to know his boundary and also because he is already 16, I know how I have raised him. So I usually don’t feel the need to have to monitor him what he does on his phones and his apps.”-P5, Mother of T5 (Male, 16 years)
Only 11% (N=2) of the teens identified some of the apps that their parents had been using, and these teens knew that the apps had some privacy issues. For example, T6 expressed his concern about the Facebook app that his mother had on her phone, and his mother P6 agreed with him.
”I would mostly use it [CO-oPS] to check my family’s apps because that’s one way specifically to point at the permissions that they’re allowing or they’re not allowing to apps? Because, I don’t know, most of the time, how they installed those apps, what they’re allowing what, they’re sharing what. Because I see a coup...
B
Figure 12, followed by the persistence diagrams of the two filtrations in Figure 13. Even without the aid of the confidence bands, one point is conspicuously far away from the diagonal in the persistence diagram of each filtration. The RDAD filtration picks up 2 more significant loops.
The two filtrations pick up completely different homology classes. The class picked up by the distance-to-measure filtration is near Steens Mountain Wilderness in Oregon. The 3 classes picked up by the RDAD filtration are Lake Michigan; Dallas, Texas; and the Texan region surrounded by Houston, Austin and San Antonio....
The homology class picked up by the distance-to-measure filtration is a large, sparsely populated area with few cellular towers, if any. Those picked up by the RDAD filtration are comparatively smaller regions with an abrupt drop in density. The distance-to-measure filtration fails to pick up the smaller homology classes...
Two squares are clearly visible in the scatter plot in the right subplot of Figure 1. However, the blue point corresponding to the smaller square in the persistence diagram of the distance filtration is very close to the diagonal (it is at the tip of the cluster of red diamonds near the origin). On the other hand, for ...
We also apply our method to real data. The distance-to-measure filtration and the RDAD filtration are applied to an open dataset HIFLD21_cellurlar_towers of cellular tower locations recorded by the Federal Communications Commission (FCC). The two filtrations reveal uninhabited regions in the United States and regions...
A
CISNet is implemented on the PyTorch platform (Paszke et al. 2017). We use Stochastic Gradient Descent (SGD) with a momentum of 0.9 and weight decay of 0.0005 as the optimizer. The learning rate is set to 0.001 and the batch size to 4. The number of training epochs is set to 15, and an early stopping strategy is employed for tr...
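The early-stopping strategy mentioned above can be sketched as a generic patience-based criterion (the class, patience value, and thresholds below are illustrative assumptions, not CISNet's actual implementation):

```python
class EarlyStopping:
    """Stop training when the validation loss has not improved by at
    least `min_delta` for `patience` consecutive epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopping(patience=2)
losses = [0.9, 0.8, 0.81, 0.82]  # improvement stalls after epoch 1
stop_flags = [stopper.step(l) for l in losses]
```

A training loop would call `stopper.step(val_loss)` once per epoch and break out when it returns True.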
We compare the performance of CISNet with the previous state-of-the-art methods including DRML (Zhao, Chu, and Zhang 2016), EAC-Net (Li et al. 2017), ROI-Net (Li, Abtahi, and Zhu 2017), DSIN (Corneanu, Madadi, and Escalera 2018), JAA-Net (Shao et al. 2018), LP-Net (Niu et al. 2019), SRERL (Li et al. 2019), UGN-B (Song...
Considering the semantic relations among AUs, some works (Wang et al. 2013; Walecki et al. 2017) model such relations via probabilistic graphical models or graph neural networks. Wang et al. (Wang et al. 2013) introduced a restricted Boltzmann machine to model facial action units, thereby capturing n...
Recent works have made progress in capturing high-level AU semantic relations either implicitly (Corneanu, Madadi, and Escalera 2018; Niu et al. 2019), by exploiting correlations between AUs via probabilistic graphical models, or explicitly (Li et al. 2019; Shao et al. 2020), by constructing an AU semantic graph ac...
Considering the locality of AUs, methods such as (Zhao, Chu, and Zhang 2016; Li, Abtahi, and Zhu 2017; Li et al. 2017; Song et al. 2021a; Chen et al. 2021a) attempt to learn better facial appearance features by emphasizing important local facial regions. Zhao et al. (Zhao, Chu, and Zhang 2016) proposed Deep Region...
A
BHP is a hybrid block propagation protocol adopted in Ethereum. When a node receives a new block, it selects √X neighbor nodes to forward the full block to after it simply verifies the block header, where X is the number of neighbor nodes that have not received the...
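The √X forwarding rule can be illustrated with a small sketch (the helper name and the random tie-breaking are our assumptions; real Ethereum clients implement this differently — the remaining neighbors would only receive a hash/header announcement):

```python
import math
import random

def select_full_block_peers(uninformed_neighbors, seed=None):
    """Hybrid propagation sketch: forward the full block to ~sqrt(X) of
    the X neighbors that have not yet received it; the rest get only an
    announcement (block hash/header)."""
    x = len(uninformed_neighbors)
    k = max(1, round(math.sqrt(x))) if x else 0
    rng = random.Random(seed)
    full = set(rng.sample(uninformed_neighbors, k)) if k else set()
    announce = [n for n in uninformed_neighbors if n not in full]
    return sorted(full), announce

# With 16 uninformed neighbors, 4 receive the full block.
full, announce = select_full_block_peers(list(range(16)), seed=0)
```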
An intuitive but naïve way to improve TPS is to enlarge the block size so that each block can carry more transactions. But large blocks decelerate their propagation on the network, thereby compromising the blockchain’s security and integrity [10]. Fig. 1(a) illustrates the time spent in block propagation. In Fig. 1(a),...
We implemented BBP and conducted experiments over a large-scale blockchain network with many nodes to evaluate its performance, comparing BBP with other block propagation schemes. The experimental results show that BBP has the shortest block propagation time. Compared with the current prot...
Experiment Testbed: The direct deployment of thousands of physical machines over a large blockchain network topology is complicated and expensive, if not impossible. To run our experiments, we built a multi-node network environment based on Docker container technology [26] on a Linux server and physically connected ...
A weakness of present blockchains is their low data processing capability, as measured by transactions per second (TPS). For example, as shown in [7, 8], the TPS of Bitcoin and Ethereum is 7 and 15, respectively, significantly lower than that of centralized systems. Such low TPS cannot meet the needs of large-scal...
C
Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2...
In the context of reinforcement learning with function approximation, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinfo...
Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate...
Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2...
More specifically, partial observability poses both statistical and computational challenges. From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property. In particular, predicting the future often involves inferring the distribution of the ...
D
We did not restrict the types of intervention and included all studies that proposed automated speech therapy using different intervention methods. The included studies covered robotics-based, mobile-based, and computer-based interventions, along with gamified and story-based intervention methods.
Furthermore, we found that articulation disorder was the most frequent disorder addressed by the included studies, with three studies dedicated to it. This may be because articulation disorders are commonly found in persons with other SSDs (Flipsen Jr, 2015). The results show that most studies aim...
We presented the language distribution of the papers based on the language addressed by the AI-based automated speech therapy tools as reported in the studies (see Figure 8). The most addressed languages were English (10 studies) and Spanish (4 studies). Furthermore, two studies addressed the Cantonese language, and th...
There were 91 unique authors identified from the included studies. The VOSviewer software was used to identify the most impactful authors, generate co-authorship clusters, and perform co-occurrence keyword analysis (Van Eck NJ, n.d.). All the authors were counted irrespective of the authorship orde...
The following keywords were used to search all the databases: speech, language, disorder, impairment, assessment, therapy, rehabilitation, treatment, AI, artificial intelligence, automated, automatic. Boolean operators were used to combine the terms as:
D
Next, we study the usefulness of our outlier detection method in these datasets. Unlike our simulations, there is no ground-truth labeling for outliers. Rather, we assume that in each community (as defined by the labels provided with the dataset), some points may behave more like an outlier, in that they may be a mixt...
LOF has the best performance on the Zheng4eq and Zheng4uneq datasets. We also report the results for 5% removal in the Appendix, in Section E.3, along with the improvements in the purity index, another popular measure of clustering accuracy, for the 5% and 10% point removal cases. As aggregate infor...
For each of the datasets, we do the following. Let k be the number of communities. We first apply a (k−1)-dimensional PCA and then run a standard implementation of K-Means with k centers on the post-PCA data, and record the normalized mutual information (NMI) value of the o...
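This evaluation pipeline can be sketched with scikit-learn on synthetic data (the toy communities below are our own illustration; the paper applies the same steps to its real datasets):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
k = 3  # number of communities (toy example)
# 50 points per community, well separated in 10 dimensions.
X = np.vstack([rng.normal(loc=c * 8.0, scale=1.0, size=(50, 10))
               for c in range(k)])
labels = np.repeat(np.arange(k), 50)

# (k-1)-dimensional PCA, then K-Means with k centers on the post-PCA data.
X_pca = PCA(n_components=k - 1).fit_transform(X)
pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_pca)

# NMI between the recovered clustering and the ground-truth communities.
nmi = normalized_mutual_info_score(labels, pred)
```

On well-separated toy communities like these, the NMI is close to 1.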
Finally, we note that the compression ratio is not overly sensitive to the choice of PCA dimension, and if we use more dimensions than the number of communities, we still get favorable results. For theoretical support, we show in Section E.4 of the appendix that the compression ratios of most points change only mildly ...
This provides an average measurement of the compression ratio in the dataset. In this regard, we find that for each of the 8 datasets and each of the communities in a dataset, the intra-community compression ratio is higher than the inter-community compression ratio. We provide the results in Appendix E.2.
B
We formulate the dialog as N_R rounds of QA interactions. Specifically, given the visual input data with partially missing visions, the AI system is given N_R chances to ask ...
Fig. 1: The overall architecture of our proposed SI-Dial framework. We first obtain the preliminary objects from the object detector based on the incomplete visual input, and propose to conduct an interactive dialog process. Note that the dashed lines denote the operations performed only after the dialog is completed for the f...
Having obtained the interactive dialog x_{his,N_R} as the supplementary source to the missing visual input, we update the preliminary objects O′...
Overall, our proposed SI-Dial takes the preliminary object representations (i.e., node and edge features from the object detector) as input, and outputs the updated representations with supplementary information incorporated from the dialog interactions:
The results for the generated scene graphs from incomplete images incorporated with the proposed SI-Dial are also presented in Table 1. We observe that dialog can serve as an effective supplementary information source in case of missing visions, especially for the semantically masked visual input with most severe missi...
C
This paper extends the classical facility location game on the real line by incorporating entrance fee functions, adding versatility to the model. The extension prompts a reevaluation of existing facility location games, like capacitated and heterogeneous facilities, opening avenues for broader applications. Our arbitr...
When the entrance fee is consistently 0, our model reduces to the classical model. Therefore, our mechanisms must also encompass those of the classical model: by letting r_e = 1, one can verify that the ratios in Table 3 match those in Table 1...
Moreover, we complement the proposed mechanisms with tight or nearly tight lower bounds, also parameterized by r_e. While lower bounds for the classical model are applicable in our model, given that the classical model is a special case of ours, w...
A notable open problem is to narrow the gaps between our bounds in Table 3. In the classical model, randomized mechanisms such as the left-right-middle and proportional mechanisms achieve better ratios than deterministic mechanisms. However, these do not extend to our models while remaining strategyproof. Designing imp...
However, the arbitrariness of the entrance fee function introduces new challenges in designing strategyproof mechanisms. Agent preferences may no longer adhere to single-peakedness [22, 5], and standard mechanisms for the classical model cannot be directly extended to our setting while preserving strategyproofness. To ...
C
H_k is obtained by replacing the middle edge (v, u) of H by a path P_{k+1} that connects v and u. Exactly k...
We wonder whether there is some algorithmic relation between efficient and perfect edge domination. More specifically, we remark that there are graph classes which admit polynomial-time solutions for the efficient edge domination problem while being hard for the perfect edge domination problem. However, we ...
We say that G = (V, E) is a neighborhood star-free graph, NSF graph for short, if for every vertex v ∈ V with degree at least 2, G[N[v]] is not a star. In other words, every ...
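The definition translates directly into a check (a plain-Python sketch over adjacency sets; the function names are ours, not from the paper):

```python
def is_star(vertices, adj):
    """A graph on `vertices` is a star K_{1,n} (n >= 1) if some center is
    adjacent to all other vertices and the leaves induce no edges."""
    if len(vertices) < 2:
        return False
    for c in vertices:
        leaves = vertices - {c}
        if leaves <= adj[c] and all(not (adj[u] & leaves) for u in leaves):
            return True
    return False

def is_nsf(adj):
    """Neighborhood star-free: G[N[v]] is not a star for every vertex v
    of degree at least 2."""
    for v, nbrs in adj.items():
        if len(nbrs) >= 2:
            closed = nbrs | {v}                      # closed neighborhood N[v]
            sub = {u: adj[u] & closed for u in closed}  # induced subgraph
            if is_star(closed, sub):
                return False
    return True

# A triangle is NSF (each G[N[v]] is a triangle, not a star);
# a path on 3 vertices is not (G[N[1]] is the star K_{1,2}).
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
```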
There is some more bibliography to add to the already vast literature [3, 10, 23, 31, 33, 35] on dominating induced matchings. As we have seen before, papers on perfect edge domination are less frequent. There is a paper [16] where the authors describe ILP formulations for the PED problem, together with some experim...
Since connected NSF graphs do not have proper perfect dominating sets, the existence of a DIM is equivalent to asking whether there exists a perfect edge dominating set with at most m − 1 edges, where |E| = m (the trivial perfect edge dominating set is the set of all edges E).
C
As in the case of (4), the optimization-based controller Φ(·) defined in (5) is stabilizing but may require a prohibitive amount of computation for use in real-time applications with fast dynamics. The practical limitations of characterizing a controller Φ(·) desig...
We have considered the design of ReLU-based approximations of traditional controllers for polytopic systems, enabling implementation even on very fast embedded control systems. We have shown that our reliability certificate requires one to construct and solve an MILP offline, whose associated optimal value characterizes...
Our approach parallels the development in [17], where we addressed the approximation of model predictive control policies for deterministic systems. We ask whether the training of a ReLU-based neural network to approximate a controller Φ(·) has been sufficient to ensure that the network’s outp...
To address this question, we focus on controllers implemented using ReLU neural networks [21], which provide a natural means of approximating a continuous PWA function Φ(·), since the output mapping of such a network is itself PWA continuous [42].
Since the existence of a polyhedral CLF for a polytopic system is a necessary and sufficient condition for its exponential stabilizability inside 𝒮 [9, Prop. 7.39], and since Φ(·) as defined by any of the methods described in §2 makes Ψ(·) a...
C
More recently, several end-to-end learning approaches have been proposed. [27] combine the state prediction of an LSTM from an image with the prediction of a graph neural network from the previous state to propagate the state in time. Using the Sum-Product Attend-Infer-Repeat (SuPAIR) model ([49]) they render images fr...
[60] and [53] use a variational autoencoder (VAE) to predict posterior information about the initial state and combine this with an energy based representation of the dynamics and a final decoding stage. [51] improve the VAE based approach by using known explicit physical models as prior knowledge. [6] combine Mask R-C...
More recently, several end-to-end learning approaches have been proposed. [27] combine the state prediction of an LSTM from an image with the prediction of a graph neural network from the previous state to propagate the state in time. Using the Sum-Product Attend-Infer-Repeat (SuPAIR) model ([49]) they render images fr...
An exception is the work by Song et al., who use the solution of an ODE as regularization of a motion network to create dynamic NeRFs [47]. In contrast to our work, this approach does not enforce the physics to be exact. While the majority of works on implicit representations focus on shape, [45] show the generality of...
For example, [16, 10] and [53] use a neural network to parameterize the Hamiltonian of a system, which relates the total energy to the change of the state. This approach allows inferring the dynamics of systems with conserved energy, like an undamped pendulum. [48] augment the Hamiltonian by a learned Rayleigh dissipati...
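The core mechanism behind these Hamiltonian parameterizations can be illustrated in a few lines: given any callable H(q, p), learned or not, the dynamics follow from its partial derivatives via Hamilton's equations. For self-containment this sketch uses central differences on a hand-written H rather than a neural network with autodiff:

```python
def hamiltonian_vector_field(H, q, p, eps=1e-5):
    """Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq,
    approximated here by central finite differences."""
    dHdp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    dHdq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    return dHdp, -dHdq

# Undamped pendulum in the small-angle limit: H = p^2/2 + q^2/2,
# so the exact vector field is (dq/dt, dp/dt) = (p, -q).
H = lambda q, p: 0.5 * p ** 2 + 0.5 * q ** 2
dq, dp = hamiltonian_vector_field(H, q=0.3, p=0.7)
```

Because the vector field is derived from H, trajectories integrated with a suitable scheme conserve the energy the network encodes.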
A
Consider a QCN in which a transmitting quantum node, S, has large amounts of raw classical data, in a dataset 𝒳 of images, text, etc., with |𝒳| samples each of dimension N, which it embeds into quantum states. The transmitting no...
As shown in Fig. 1, the constructed quantum semantic representations in the form of d₂-dimensional quantum states must be transmitted through quantum channels to the receiving quantum node. The quantum communication process must preserve the accuracy of ...
In contrast to conventional QCN frameworks, where the receiver typically aims to execute quantum gates and measurements for the precise reconstruction of the embedded data structure within quantum states, in our framework the receiver’s goal is to draw specific logical conclusions [13], which m...
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework. This framework draws upon recent advancements in two key quantum information science areas. First, it utilizes high-dimensional quantum information to extract underlying structures of classi...
Finally, the quantum teleportation protocol is applied to transfer the semantics to the receiver, and both end nodes perform quantum measurements and apply quantum gates to reconstruct the embedded semantics and recover the context of the raw data. The receiver extracts the semantic concepts using the mapping g: 𝒮 → 𝒴...
B
In this section, a verifier and a defensive verifier are constructed to respectively capture system behavior and all feasible defensive actions following system activity. Then, an E-verifier is built by a special synchronization mechanism between a verifier and a defensive verifier to verify C-enf...
Given an E-verifier, we can check the necessary condition for the defensive function to be C-enforcing by following Algorithm 1. However, it is possible that a defensive function may not be C-enforcing even though the necessary condition is satisfied.
(ii) We consider the problem of concealability enforcement when the system is unconcealable, using a so-called defensive function placed at the output of the system to obfuscate the eavesdropper observations by appropriately modifying the original observations generated by the system (via event deletions, insertions, o...
One can also check whether the sufficient condition in Theorem 2 is violated via the reduced E-verifier RV_E constructed by following Algorithm 2 (in this case, we already know that the sufficient condition will be violated ...
Specifically, the E-verifier can be used to obtain, with polynomial complexity, one necessary and one sufficient condition for C-enforceability; in case the sufficient condition is satisfied, the trimmed version of the E-verifier leads to a strategy to enforce concealability, also ...
D
where Q^ψ̄ and π_θ̄, respectively called target action-value functions an...
The logarithmic term in Eq. (11) is the policy entropy. It encourages action-space exploration by promoting random selections, thus reducing the probability of converging to deterministic policies with poor local optima. Parameter τ is the trade-off weight used to balance the importance of reward maximizati...
If agent i chooses the ACC action and succeeds in serving a UE, it gets a positive reward equal to the number of bits B^P_i sent by the T...
Note that action â_t^(i) is the same as the one sent to the central entity. Indeed, it is generated by the agents to be sent to the central en...
A basic training cycle consists of the following steps, as shown in Fig. 5(a). The agents collect experience data by performing network operations under the existing policies, and send these data together with some local policy information to the central entity. The central entity puts the received data in the replay b...
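The experience-collection step of this cycle can be sketched with a minimal uniform replay buffer (a generic construction; the transition fields and sampling details are our assumptions, not the paper's exact design):

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal central replay buffer: agents push transition tuples,
    the learner samples uniform mini-batches."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old data evicted when full

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size, seed=None):
        rng = random.Random(seed)
        return rng.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(10):
    buf.push(t, t % 2, float(t), t + 1)  # toy transitions
batch = buf.sample(4, seed=0)
```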
A
Here, we consider a distribution Q over Θ with a fraction π₀ of nulls, i.e., Q(θ = 0) = π₀, for various valu...
In this section, we derive the agent’s best response (2) when the principal offers the menu ℱ of all rescaled e-values. We will show that the agent’s best response is to implement the classical likelihood ratio test. Turning to the details, suppose there is some maximal amount ...
The connection of incentives with e-values also allows us to handle a multi-round interaction between the principal and agent. In each step, the agent can invest money to collect more data, with evidence accumulating. In Appendix B, we show how to design incentive-aligned statistical contracts in this multip...
In Appendix B.3, we study the agent’s best response when evidence can be accumulated over T rounds of an experiment. The agent can recruit and test subjects in batches and stop the experiment at any stage, for any reason, exiting with the current total license value. We apply dynamic programming to show how ...
The menu of all e-values extends to this setting without modification. In the single-round setting, the principal offers menu ℱ = C · ℰ, and then the agent will respond by choosing the optimal license function as derived in Section 3. This...
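The menus above rest on the defining property of an e-value: its expectation under the null is at most 1, and a likelihood ratio achieves exactly 1. A quick Monte Carlo check (a toy Gaussian example of ours, not the paper's contract construction):

```python
import math
import random

def likelihood_ratio(x, mu=0.5):
    """Density ratio of N(mu, 1) over N(0, 1) at x — a valid e-value
    for the null H0: X ~ N(0, 1)."""
    return math.exp(mu * x - 0.5 * mu ** 2)

# Under H0 the expectation of the likelihood ratio is exactly 1.
rng = random.Random(0)
n = 200_000
avg = sum(likelihood_ratio(rng.gauss(0.0, 1.0)) for _ in range(n)) / n
```

The simulated average lands very close to 1, confirming the e-value property that makes these menus incentive-compatible.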
C
This study introduces the novel paradigm of Privacy Preserving Image Registration, designed to allow image registration in privacy-preserving scenarios where images are confidential and cannot be shared in the clear. Leveraging both secure multi-party computation (MPC) and Fully Homomorphic Encryption (FHE), we propose...
This work has been supported by the French government, through the 3IA Côte d’Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002, and by the ANR JCJC project Fed-BioMed 19-CE45-0006-01.
In order to avoid local minima and to decrease computation time, we use a hierarchical multiresolution optimization scheme. The scheme involves M resolution steps, denoted r₁ … r_M...
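Such a coarse-to-fine scheme could be organized around a simple image pyramid; in this sketch plain subsampling stands in for proper smoothing and downsampling, and the helper is illustrative only, not the paper's implementation:

```python
import numpy as np

def pyramid(image, num_levels):
    """Build a coarse-to-fine schedule: level r_1 is the coarsest,
    r_M the full resolution, each level doubling the sampling density
    of the previous one (naive strided subsampling)."""
    return [image[::2 ** (num_levels - 1 - m), ::2 ** (num_levels - 1 - m)]
            for m in range(num_levels)]

# A registration loop would optimize at each level in turn, initializing
# each step with the transform estimated at the coarser level.
img = np.zeros((64, 64))
levels = pyramid(img, num_levels=3)
shapes = [l.shape for l in levels]
```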
Fig. 2: Qualitative results for affine registration with MI over 3D medical images using the ADNI dataset [33]. The images are presented in a 3 × 4 grid, with the first row representing the axial axis, the second row the coronal axis, and the third row the sagittal axis. In the first column of each row, the ...
Since the registration gradient is generally driven mainly by a fraction of the image content, such as the image boundaries in the case of SSD cost, a reasonable approximation of Equations (4) and (6) can be obtained by evaluating the cost only on relevant image locations. This idea has been introduced in medical image...
A
We use an effective teacher-student pair of ResNet56 - MobileNet for the experiments. The results show that B2KD methods are generally more robust than traditional KD methods for small data sizes, and that they can maximally exploit the information in the available samples for model compression in extreme cases.
We use ResNet [18], VGG [46] and MobileNet [20] as the backbones, and adopt standard data augmentation techniques (random crop and horizontal flip) and an SGD optimizer in all experiments. We consistently train the teacher and student models for 350 epochs, except for 12 epochs on MNIST, and we adopt a mu...
We conduct different teacher-student model pairs for distillation experiments, and use ResNet32 / ResNet56 / VGG13 / ResNet110 / ResNet50 / ResNeXt101 as teacher models and use ResNet8 / ResNet32 / VGG11 / MobileNet / ResNet34 / ResNeXt50 as student models.
The initial learning rate is 0.1, except 0.01 for MNIST, and we use a multi-step learning rate schedule which decreases the learning rate by 0.1 at the 116th and 233rd...
In all experiments, teacher and student models are trained for 350 epochs, except 12 epochs for MNIST. We use Nesterov SGD with momentum 0.9 and weight decay 0.0005 for training and use a mini-batch size of 128 images on a single NVIDIA GeForce RTX 3090 GPU.
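The training hyperparameters described above amount to a piecewise-constant learning-rate schedule, which can be written as a plain function (the milestones and decay factor are taken from the text; the helper itself is our sketch, not the authors' code):

```python
def multistep_lr(epoch, base_lr=0.1, milestones=(116, 233), gamma=0.1):
    """Multi-step schedule: multiply the base rate by `gamma` once for
    every milestone epoch that has been reached."""
    return base_lr * gamma ** sum(epoch >= m for m in milestones)

# Rate over the 350-epoch run: 0.1 -> 0.01 at epoch 116 -> 0.001 at 233.
schedule = [multistep_lr(e) for e in (0, 115, 116, 232, 233, 349)]
```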
D
\[
\sigma(f(x)) \equiv \sum_i c_i \phi_i, \qquad
E_a = \frac{\left\|\sum_{|i|>N} c_i \phi_i\right\|_2}{\left\|\sigma(f(x))\right\|_2}.
\]
Suppose our function f(x) is (exactly) represented as a Fourier series with |k| < N terms. We can equivalently store the values of the function on the uniform grid with 2N + 1 points. However, when we apply the activation function σ(x...
Our solution to the problem of aliasing errors is to decouple interpolation from the mapping between functional spaces. It is described in Section 4. However, it is possible to decrease aliasing error even when FNO is used for interpolation. One simply needs to train (or fine-tune) on a sufficiently fine grid. The dec...
Figure 1: (a) the output of neural network N(x) computed on coarse and fine grids. On each subgrid, loss and gradients are zero, so the network provides the best (alas, pathological) approximation to f(x) = 2x on the interval [−1, 1]...
Aliasing error, defined in Equation 1, measures the norm of the harmonics we cannot possibly resolve on the given grid, relative to the norm of the function transformed by the activation. The following result gives the aliasing error for the rectifier and two extreme basis functions.
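The effect is easy to reproduce numerically: take a band-limited f with |k| < N, pass it through a nonlinearity, and measure how much spectral mass appears beyond the original band. The band limit N, the test function, and ReLU as σ below are our illustrative choices; the fine grid stands in for the "exact" expansion:

```python
import numpy as np

N = 4     # band limit of the original function
M = 512   # fine grid, resolves far more harmonics than |k| < N
x = 2 * np.pi * np.arange(M) / M

f = np.cos(2 * x) + 0.5 * np.sin(3 * x)   # band-limited: |k| <= 3 < N
sf = np.maximum(f, 0.0)                   # sigma = ReLU introduces kinks

c = np.fft.rfft(sf) / M                   # Fourier coefficients of sigma(f)
high = np.linalg.norm(c[N:])              # mass in harmonics with k >= N
total = np.linalg.norm(c)
aliasing_ratio = high / total             # rough estimate of E_a
```

Although f itself has no harmonics with k ≥ N, the ReLU output does, so `aliasing_ratio` is strictly positive: exactly the content a 2N+1-point grid cannot resolve.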
D
This work develops a framework for knowledge distillation through a target-aware transformation that enables the student to aggregate useful semantics over itself to enhance the expressivity of each pixel. This allows the student to act as a whole to mimic the teacher, rather than minimizing each partial divergence in...
We address this conundrum with the proposed anchor-point distillation. As shown in Figure 2 (c), we summarize each local area into a compact representation, referred to as an anchor, that is representative of the semantics of the given area, forming a new feature map of smaller size. Since the new feat...
However, they overestimated the prior of spatial order while neglecting the issue of semantic mismatch, i.e., the pixels of the teacher feature map often contain richer semantics than those of the student at the same spatial location. We found that some works [33, 34, 35, 43, 48, 20, 27], though unintended, have been pro...
We choose the DeepLabV3+ [6] as the base architecture, where it contains a backbone to extract feature and a head to generate the segmentation results. For the teacher, we follow [6] to use the ResNet101 as the backbone model. For the student, we select two networks, ResNet18 which shares a similar architecture design ...
Limitations. There are some issues of interest that we would like to explore in the future: (1) Currently, we only select the last layer of the backbone network for distillation. It would be interesting to see the efficacy when multiple layers are involved in distillation, which has been explored by some works [50...
D
While MPSE [19] focuses on simultaneously capturing global distances between objects and ENS-t-SNE aims to capture local neighborhoods, other approaches for dimension reduction, such as UMAP [29], optimize both at the same time. It would be worthwhile to quantitatively verify the extent to which these goals can be real...
In the ENS-t-SNE embedding, each point belongs to two clusters; one for its species and one for its sex. In an interactive environment, one can follow a datapoint from one projection to the other. In other words, there is a transition between the two views in three dimensions that is missing when using small multiples....
Consider, for comparison, the standard t-SNE visualization of the same dataset in 2D; see Figure 6(b). The dominant factor for the embedding is the number of cylinders, resulting in three well-separated clusters in the embedding. Note, however, that the t-SNE embedding completely missed the weight information, as there...
While MDS, t-SNE and UMAP can produce 3D embeddings, they cannot optimize per-projection views as MPSE and ENS-t-SNE can. We could use single-projection techniques to obtain subspace embeddings, but the resulting plots would be largely unrelated. Thus the only available technique that can be directly compared to ENS-t-...
We described ENS-t-SNE, a generalization of t-SNE, which computes a 3D embedding of a dataset along with a set of 2D projections that optimize subspace clustering information. We note that while our paper describes ENS-t-SNE in 3D, the technique can be applied to higher dimensions (lower than the number of input dimens...
D
To this end, we identify a class of POMDPs with a low-rank structure on the state transition kernel (but not on the observation emission kernel), which allows prediction and control in a sample-efficient manner. More specifically, the transition admits a low-rank factorization into two unknown features, whose dimension...
By integrating the two levels of representation learning, that is, (i) feature learning at each step and (ii) embedding learning across multiple steps, we propose a sample-efficient algorithm, namely Embed to Control (ETC), for POMDPs with infinite observation and state spaces. The key to ETC is balancing exploitation...
In this paper, we propose Embed to Control (ETC) as a unified framework for embedding and control in POMDPs. In particular, by exploiting the low-rank transition and the future sufficiency condition, we decompose the embedding learning into the learning of Bellman operators across multiple steps. By assembling the Bell...
We analyze the sample efficiency of ETC under the future and past sufficiency assumptions. In particular, such assumptions ensure that the future and past observations are sufficient for identifying the belief state, which captures the information-theoretic difficulty of POMDPs. We prove that ETC attains an $O(1/\epsilon^{2})$...
To this end, we identify a class of POMDPs with a low-rank structure on the state transition kernel (but not on the observation emission kernel), which allows prediction and control in a sample-efficient manner. More specifically, the transition admits a low-rank factorization into two unknown features, whose dimension...
A
$\mathcal{L}_{h}^{\pi}(b_{h},b_{h+1})\coloneqq\mathbb{E}_{\pi^{b}}\bigl[\bigl(\ell_{h}^{\pi}(b_{h},b_{h+1})(A_{h},Z_{h})\bigr)^{2}\bigr].$
According to Theorem 3.8 and Assumption 3.5, to estimate the value $J(\pi)$ of $\pi\in\Pi(\mathcal{H})$, it suffices to estimate the value bridge functions $\{b_{h}^{\pi}\}_{h=1}^{H}$...
The reason is that the quantity defined by (3.2) is a conditional expectation, and therefore the RMSE defined by (3.8) cannot be estimated from data directly and without bias (Farahmand et al., 2016). In the sequel, we adopt the technique of minimax estimation to circumvent this issue.
we utilize the fact that these functions satisfy a sequence of conditional moment equations that resembles the Bellman equations in classical MDPs (Bellman and Kalaba, 1965). Then we adopt the idea of minimax estimation (Dikkala et al., 2020; Kallus et al., 2021; Uehara et al., 2021; Duan et al., 2021), which formu...
In equation (3.1) all the random variables involved are observed by the learner and distributed according to the data distribution $\mathcal{P}^{b}$, by which the value bridge functions can be estimated from data. This overcomes the confoun...
B
First, they studied unconstrained regression problems with objectives of the form $F(\bm{x}^{T}\xi)$, resulting in objective Hessians with rank-one updates that cannot be employed for our general problem ...
In this paper, we answer this question by complementing the global convergence guarantees and establishing the local asymptotic properties of existing StoSQP methods. Specifically, we focus on an Adaptive Inexact StoSQP scheme, referred to as AI-StoSQP. By adaptive we mean that the scheme inherits the critical merit of...
To our knowledge, this is the first work that performs online inference by taking into account not only the randomness of samples but also the randomness of computation (i.e., sketching and stepsize); the latter is particularly important for making second-order methods computationally promising.
In particular, the StoSQP method computes a stochastic Newton direction in each iteration by solving a quadratic program, whose objective model is estimated using the new sample. Then, the method selects a proper stepsize to achieve a sufficient reduction on the merit function, which balances the optimality and feasibi...
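The quadratic-program step described above reduces, in the equality-constrained case, to one saddle-point (KKT) linear solve. A deterministic toy sketch (our own example, not the stochastic AI-StoSQP scheme, which adds sampling, sketching, and stepsize selection):

```python
import numpy as np

# Illustrative sketch of one SQP step: minimize a quadratic model of the
# objective subject to linearized constraints by solving the KKT system.
# The toy problem (min x^2+y^2 s.t. x+y=1) is ours, not from the paper.
def sqp_step(grad, hess, c, J):
    """Solve  [H  J^T][d  ]   [-grad]
              [J   0 ][lam] = [-c   ]  for the Newton direction d."""
    m = J.shape[0]
    kkt = np.block([[hess, J.T], [J, np.zeros((m, m))]])
    rhs = -np.concatenate([grad, c])
    return np.linalg.solve(kkt, rhs)[: grad.size]

x = np.array([2.0, -1.0])
grad = 2 * x                      # gradient of f(x) = ||x||^2
hess = 2 * np.eye(2)
c = np.array([x.sum() - 1.0])     # residual of constraint x1 + x2 = 1
J = np.ones((1, 2))               # constraint Jacobian
x_new = x + sqp_step(grad, hess, c, J)
print(x_new)
```

Because the toy objective is quadratic and the constraint linear, a single step lands on the constrained minimizer (0.5, 0.5); the stochastic scheme replaces the exact gradient and Hessian with sampled estimates.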
The asymptotics of second-order Newton’s methods for unconstrained problems have recently been investigated. Bercu2020Efficient designed an online Newton’s method for logistic regression, and Boyer2023asymptotic generalized that method to general regression problems. Compared to first-order methods that often consider...
B
This condition was discussed by Bercovier and Pironneau in [2], and it turned out to be an enabler of (3); see [15]. We will refer to this inf-sup condition as the discrete BP condition. In [2] the proof was given for $k=2$ and for meshes made of rectangles for $d=2$ and bricks for $d=3$...
The LBB condition is essential for the well-posedness of the Stokes problem. Its verification for case (a) is typically done by an indirect proof using a compactness argument. We are aware of only one reference, namely [1], which contains a proof for the case $|\Gamma_{N}|>0$...
The rest of the paper is organized as follows. In Section 2 the technique of $T$-coercivity is discussed, which provides important auxiliary results for Section 3, the main section of the paper, containing the analysis of the discrete inf-sup conditions. In Appendix A known results on the continuous...
A popular class of finite element methods for discretizing the Stokes problem is the generalized Taylor-Hood family $P_{k}$–$P_{k-1}$ on triangular/tetrahedral meshes with cont...
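For context, the discrete inf-sup (LBB) condition referred to throughout takes the following standard textbook form (notation assumed here: $V_h$ and $Q_h$ denote the discrete velocity and pressure spaces; this is the generic statement, not a result specific to this paper):

```latex
\exists\,\beta>0\ \text{independent of the mesh size } h:\quad
\inf_{q_h\in Q_h\setminus\{0\}}\;
\sup_{\mathbf{v}_h\in V_h\setminus\{\mathbf{0}\}}
\frac{(\operatorname{div}\mathbf{v}_h,\,q_h)}
     {\|\mathbf{v}_h\|_{1}\,\|q_h\|_{0}}
\;\geq\;\beta .
```

Uniformity of $\beta$ in $h$ is the crux: it guarantees stable pressure approximation as the mesh is refined.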
Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family $Q_{k}$–$Q_{k-1}$ on quadrilateral/hexahedral meshes focused on the case...
B
At the heart of WaveMix are three design elements – a stack of self-similar WaveMix blocks, a multi-level two-dimensional discrete wavelet transform (2D-DWT) in each block, and spatial resolution contraction followed by expansion back to the original size within a block. Using self-similar stacked blocks makes the arc...
We compared WaveMix models with various CNN, transformer, and token-mixing models for semantic segmentation and image classification. Ablation studies were conducted to assess the effect of the hyperparameters and the importance of each component and its placement in the WaveMix block.
We relate WaveMix to previous works in Section 2, where we delve further into the image priors modeled by various classes of neural architectures for vision, and the use of wavelet transform. Our key innovations – the WaveMix blocks, use of multi-level 2D-DWT in each block, channel mixing, and the preservation of featu...
Table 4 shows the performance of WaveMix on image classification using supervised learning on ImageNet-1K on a single GPU with limited epochs. WaveMix models outperform CNN and transformer-based models, and token-mixers. The use of non-learnable fixed weights and shallower network structure also makes inference using ...
For the WaveMix model notation, we use the format Model Name-Embedding Dimension/Number of Blocks and mention the number of levels of DWT in brackets. We call the WaveMix model that uses only one level of 2D-DWT WaveMix-Lite; it has been shown to perform well on small datasets with low-resolution images. For other mo...
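The spatial contraction at the heart of a WaveMix-style block can be illustrated with a hand-rolled one-level 2D Haar DWT (our minimal stand-in; the actual model uses multi-level DWT followed by learned channel mixing, which is omitted here):

```python
import numpy as np

# Minimal one-level 2D Haar DWT: halves H and W and stacks the four
# subbands along channels — the "contraction" step of a WaveMix-style
# block. The learned channel mixing that follows in the real block is
# not shown.
def haar_dwt2(x):                       # x: (H, W), H and W even
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2            # approximation
    lh = (a - b + c - d) / 2            # horizontal detail
    hl = (a + b - c - d) / 2            # vertical detail
    hh = (a - b - c + d) / 2            # diagonal detail
    return np.stack([ll, lh, hl, hh])   # (4, H/2, W/2)

img = np.arange(16.0).reshape(4, 4)
bands = haar_dwt2(img)
print(bands.shape)                      # (4, 2, 2)
```

The transform is orthogonal, so the fixed (non-learnable) weights preserve signal energy while trading spatial resolution for channels.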
B
$(\partial P/\partial\xi\,|\,(P,\xi)\in\Sigma_{2}\times\Xi_{2})$, points that do not satisfy the regularity condition of
$\Xi_{1}=F_{1}(Z)$, where $F_{1}$ is a differential function of the flat outputs $Z$; then $\Xi_{2}=F_{2}(Z,\Xi_{1})$...
this ō-system has 12 possible sets of linearizing outputs: first, we take $\xi_{1}=z$, then $\xi_{2}\in\bar{\Xi}_{1}$...
$\{\xi_{3},\xi_{4}\}\in\bar{\Xi}_{2}$ (6 possibilities).
$\bar{\Xi}_{2}:=\{\alpha,\beta,\mu,F\}=\Xi_{2}$, $\bar{\Xi}_{3}:=\{p,q,r\}=\Xi_{3}$...
B
We discuss approaches related to the NTCIR-11/12 MathIR Wikipedia Formula Retrieval/Browsing Tasks zanibbi2016ntcir. Similar to NTCIR-11, the NTCIR-12 MathIR Task objective is to build math information retrieval (MIR) systems that enable users to search for a particular math concept using math formulae. Given a query w...
MathBERT peng2021mathbert and Tangent-CFT mansouri2019tangent, both combined with Approach0 zhong2019structural, are the current state of the art for non-wildcard formula retrieval; however, MathBERT does not explicitly account for SLTs. The authors claim that LaTeX code accounts for SLTs to some extent but not for OPTs, and ther...
Purely explicit methods still deliver competitive results. Of the methods discussed here, only Tangent-CFT mansouri2019tangent and MathBERT peng2021mathbert employ learning techniques beyond the level of linear regression. To outperform Approach0 zhong2019structural in full Bpref these models combine their scores with...
Figure 3: Taxonomy of approaches related to formula retrieval (math information retrieval). In the “SLT + OPT” category (top right), the asterisks in MCAT* and MathBERT* indicate that SLTs and/or OPTs are not encoded directly from trees as in any of the Tangent approaches or Approach0. MCAT encodes SLTs implicitly through ...
Approaches reliant solely on SLTs, such as the early versions of the Tangent retrieval system pattaniyil2014combining; zanibbi2015tangent; zanibbi2016multi, or solely on OPTs zhong2019structural; zhong2020accelerating, tend to return less useful results for queries and are associated with lower performance compared to tho...
D
Another heterogeneous sub-collection of 23 networks. The 10 networks from dataset 157 (stream food webs from New Zealand) are divided between sub-collections B and D based on the type of ecosystem. The data from sub-collection B were collected in creeks, while those from sub-collection D were co...
The last sub-collection consists of 6 networks from various datasets. In the 7-block structure, the species of block 1 (represented in 4 of the 6 networks) prey on species from all other blocks with the exception of block 7. The basal species are separated between blocks 6 and 7 depending on w...
We obtain respectively $\widehat{Q}_{1}=5$ blocks for Martins, $\widehat{Q}_{2}=3$ blocks for Cooper and $\widehat{Q}_{3}=4$...
A small sub-collection of 6 networks with density ranging from .06 to .11. All networks are represented in 5 or 6 of the 7 blocks, including the first three blocks. The sub-collection consists of 3 of the 5 networks of dataset 48, the separation being based on the collecting s...
Another heterogeneous sub-collection of 23 networks. The 10 networks from dataset 157 (stream food webs from New Zealand) are divided between sub-collections B and D based on the type of ecosystem. The data from sub-collection B were collected in creeks, while those from sub-collection D were co...
A
Then, $\mathbf{x}^{(y_{i})}_{j,t}$ is tested on how likely it is to match $\mathbf{x}^{\mathrm{test}}_{t}$...
The results show that, by leveraging the learned knowledge of mechanisms, the estimator and the identifier, used as auxiliaries, can improve the classification accuracy significantly with extra explainability. When the two modules operate independently of the classifier, without accessing any data of hand-written dig...
Figure 2: The CED architecture. Potential classes are hypothesized by the classifier $C$, and verification of these classes is made by the estimator $E$ and the identifier $D$ through the pipeline of (1) analyzing possible transformations, (2) reconstructing from candidates and (3) matching ...
In the above process, potential classes are hypothesized by $C$, and verification of these classes is made by modules $E$ and $D$ through the pipeline of (a) analyzing possible transformations, (b) reconstructing from candidates and (c) matching them with the sample.
ED outperforms the basic classifier by classifying with an accuracy of more than 75%. It is worth noting that this performance is achieved without any knowledge of the handwritten digits (since both $E$ and $D$ are trained in Exp_NOISE), but only through the processes of analyzing, rec...
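The hypothesize-then-verify loop can be sketched end-to-end with stand-in modules (all toys of our own; the actual $C$, $E$, $D$ are trained networks and the reconstruction involves learned transformations):

```python
import numpy as np

# Schematic of the hypothesize-then-verify pipeline: the classifier proposes
# candidate classes and a verifier re-scores them by reconstructing a
# candidate and matching it against the sample. Every module here is a toy
# stand-in, not the paper's trained C/E/D networks.
rng = np.random.default_rng(0)
prototypes = {c: rng.normal(size=8) for c in range(3)}   # one template per class

def classifier(x, k=2):                 # C: top-k class hypotheses by similarity
    scores = {c: -np.linalg.norm(x - p) for c, p in prototypes.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def reconstruct(c):                     # E/D stand-in: rebuild a candidate for class c
    return prototypes[c]

def verify(x, candidates):              # keep the candidate whose reconstruction matches best
    return min(candidates, key=lambda c: np.linalg.norm(x - reconstruct(c)))

x = prototypes[1] + 0.05 * rng.normal(size=8)            # noisy sample of class 1
pred = verify(x, classifier(x))
print(pred)
```

The division of labor mirrors the figure: $C$ narrows the hypothesis space cheaply, while the verifier pays the reconstruction cost only for the shortlisted classes.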
C
In the standard ID setting, i.e., when finetuning data and test data come from the same distribution, it is well known that finetuning results in better ID generalization performance than linear probing (He et al., 2020; Zhai et al., 2019). However, in the PU setting we observe that finetuning consistently under-performs...
Stability of PU Learning. The classical cost-sensitive PU learning algorithms are notoriously unstable and tend to overfit the data. These methods need careful hyperparameter tuning and early stopping. Interestingly, we observe that when the model is initialized with parameters obtained from puNCE pretraining, the PU ...
Stability of PU Learning. The classical cost-sensitive PU learning algorithms are notoriously unstable and tend to overfit the data. These methods need careful hyperparameter tuning and early stopping. Interestingly, we observe that when the model is initialized with parameters obtained from puNCE pretraining, the PU ...
We further evaluate puNCE in the binary semi-supervised setting. The training data contains samples from both classes and a set of unlabeled samples. In particular, we perform experiments where only 1%, 5% and 10% of the data is available (Figure 5.2). It is important to note that, unlike PU learning settings, here we p...
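For background on the cost-sensitive baselines discussed above (this is not puNCE itself), the non-negative PU risk estimator clips the unlabeled-minus-positive term at zero, which is the standard remedy for the overfitting instability; a sketch with synthetic scores and an assumed known class prior `pi`:

```python
import numpy as np

# Generic non-negative PU risk estimator (Kiryo et al.-style correction),
# shown only as background for the cost-sensitive baselines; not puNCE.
# `pi` is the (assumed known) positive class prior.
def nn_pu_risk(scores_pos, scores_unl, pi,
               loss=lambda s, y: np.log1p(np.exp(-y * s))):
    r_pos = pi * loss(scores_pos, +1).mean()            # positive risk
    # negative risk estimated from unlabeled data, clipped at zero to
    # avoid the overfitting that destabilizes the plain estimator
    r_neg = loss(scores_unl, -1).mean() - pi * loss(scores_pos, -1).mean()
    return r_pos + max(r_neg, 0.0)

rng = np.random.default_rng(1)
pos = rng.normal(2.0, 1.0, 500)                          # scores of labeled positives
unl = np.concatenate([rng.normal(2.0, 1.0, 300),         # hidden positives
                      rng.normal(-2.0, 1.0, 700)])       # hidden negatives
risk = nn_pu_risk(pos, unl, pi=0.3)
print(round(risk, 3))
```

Without the `max(..., 0)` clip, the negative-risk term can go negative on flexible models, which is precisely the instability that makes early stopping necessary.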
Stability of PU Learning. The classical cost-sensitive PU learning algorithms are notoriously unstable and tend to overfit the data. These methods need careful hyper parameter tuning and early stopping. Interestingly, we observe that when the model is initialized with parameters obtained from puNCE pretraining, the PU...
D
Written this way, we see that MT fits an SBM to each layer of the network, holding the outgoing and incoming group memberships fixed across layers. Parameters $\bm{U}$, $\bm{V}$, and $\bm{G}_{\ell}$...
This work also lays the groundwork for diverse future work and applications. Given the observation in Section 6.2.4, that for many of the village multilayer networks we study the layers seem to be noisy observations from the same SBM, it would be interesting to explore how other models of network formation (e.g., the ...
In Carlen et al. (2022) and De Bacco et al. (2017), a node’s membership vectors are held fixed across layers, but a new affinity matrix is fit for each layer. A similar model is proposed in Paul and Chen (2016) but with node membership vectors constrained to take on binary values and with a Bernoulli distribution assum...
Understanding how the layers of a multilayer network interact with, represent, or are different from one another has been a relevant question ever since multilayer networks started being studied. As such, there have been a multitude of proposed methods to study and assess layer interdependence. Krackhardt (1987) sugge...
De Domenico et al. (2015) and De Domenico and Biamonte (2016) develop information-theoretic tools to identify layer dependency and cluster similar layers. In Stanley et al. (2016), the authors study layer interdependence by categorizing layers into groups such that all layers in a group are drawn from the same SBM. In the MULTIT...
C
Inspired by the promising performance that graph convolutional networks (GCNs) [21] achieved in learning semantic representations of KG entities, we propose to adapt GCNs for answering multi-relation questions with single-step implicit reasoning that is simpler, more efficient, and easier to adopt than existing reason...
Specifically, in this paper, we propose a novel Question-Aware GCN-based QA method, called QAGCN, which encodes questions and KG entities in a joint embedding space where questions are close to correct answers (entities). The intuition of our method is as follows:
Also, TransferNet and NSM are state-of-the-art (SOTA) reasoning-based methods in multi-relation QA. Furthermore, considering that our answer search in the learned embedding space is similar to embedding-based methods, we also select a prominently adopted embedding-based method: EmbedKGQA [22].
Baselines. We first evaluate the overall effectiveness of QAGCN in the multi-relation QA task. Given that the main goal of this paper is to propose a simple method that is competitive with existing reasoning-based methods that rely on complex reasoning mechanisms, we mainly choose reasoning-based QA methods as baselines:...
The model consists of a question encoder and a graph encoder that compute semantic representations (embeddings) of the question and subgraph entities, respectively, through several layers of encoding. We select answers from subgraph entities according to their distances to the question in the output embedding space and...
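The final answer-selection step (distance ranking in the joint embedding space) can be illustrated with a minimal sketch; the embeddings below are random stand-ins for the encoder outputs, and all names are ours rather than QAGCN's:

```python
import numpy as np

# Sketch of answer selection in a joint embedding space: the entities whose
# embeddings lie closest to the question embedding are returned as answers.
# Embeddings are random stand-ins for the question/graph encoder outputs.
rng = np.random.default_rng(0)
dim = 16
question = rng.normal(size=dim)
entities = {f"e{i}": rng.normal(size=dim) for i in range(50)}
entities["answer"] = question + 0.01 * rng.normal(size=dim)  # a well-trained match

def top_answers(q, ents, k=3):
    dists = {name: np.linalg.norm(q - v) for name, v in ents.items()}
    return sorted(dists, key=dists.get)[:k]

print(top_answers(question, entities))
```

Training pulls correct answers toward the question in this space, so at inference the search reduces to a nearest-neighbor lookup over subgraph entities.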
A
Building off of an $n$-qubit feature map for $n$-dimensional word vectors, the same QSVM classification process was followed for densely encoded feature maps (alexander2022quantum). In this case, vector representations were encoded into fewer qubits in the feature map circuit, using $\log_{2}(n)$...
When running the densely encoded version of the word embeddings classifier from Section 3.3, 100% accuracy was achieved using 16 embedding dimensions and only 4 qubits (alexander2022quantum). This model achieves perfect accuracy on the lambeq set using the fewest qubits of all the methods covered.
As observed in the results below, the classical embeddings preprocessing step from Section 3.2 enabled the quantum circuit to achieve relatively high accuracy with fewer qubits. Further improvements to space using densely encoded feature maps achieved similar results.
and the resulting accuracy can be considered in light of the problems users are trying to solve. For example, the work by Alexander and Widdows alexander2022quantum investigates solely the effects of decreasing space in the QSVM using a densely encoded feature map. Improved accuracy from 90% to 100% in fewer qubits on ...
The percentage of samples correctly classified peaked when using 4 qubits, where average accuracy was 57% with the ZZFeatureMap and 62% using the densely encoded feature map. The QSVM experiments were on par with classical SVM on average, and classified some sample batches with perfect accuracy. This, in contrast with ...
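The space saving behind dense encoding can be illustrated by the amplitude-encoding bookkeeping alone: a unit-normalized length-$n$ vector supplies the amplitudes of a $\log_2(n)$-qubit state. A sketch in plain state-vector arithmetic (no quantum SDK; `amplitude_encode` is our own name, not a library call):

```python
import numpy as np

# Sketch of dense (amplitude) encoding: a length-n real vector, normalized
# to unit norm, becomes the amplitude vector of a log2(n)-qubit state.
# This shows only the bookkeeping, not an actual feature map circuit.
def amplitude_encode(v):
    v = np.asarray(v, dtype=float)
    n_qubits = int(np.log2(v.size))
    assert 2 ** n_qubits == v.size, "length must be a power of two"
    state = v / np.linalg.norm(v)       # amplitudes must have unit 2-norm
    return n_qubits, state

embedding = np.arange(1.0, 17.0)        # a 16-dimensional word vector
n_qubits, state = amplitude_encode(embedding)
print(n_qubits, round(float(np.sum(state ** 2)), 6))
```

This is why 16-dimensional embeddings fit in 4 qubits: the qubit count grows logarithmically in the embedding dimension, at the cost of a more involved state-preparation circuit.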
B
Node classification tasks predict labels of nodes based on graph structure and node features. We aim to improve the prediction accuracy of GNN models by restructuring edges via the adaptive SC method, particularly for heterophilic graphs. The evaluation results are shown in Equation 1. On average, the performance of GN...
We run GCN and SGC on the synthetic dataset with controlled homophily ranging from 0 to 1. Model performance against homophily is plotted in Figure 4. As expected, a higher homophily level corresponds to better performance for both GCN and SGC. Both models reach 100% accuracy where homophily is la...
As homophily and performance are correlated, in the restructuring process the number of edges is chosen based on the homophily level on the validation set. As shown in Figure 5, we chose 48000 edges for Chameleon and 26000 edges for Squirrel, each corresponding to the first peak of homophily on...
We propose an approach to enhance GNN performance on less-homophilic graphs by restructuring the graph to maximize homophily. Our method is inspired by and closely related to Spectral Clustering (SC). It extends SC beyond the leading eigenvalues and learns the frequencies that are best suited to cluster a graph. To achie...
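Plain spectral embedding already suggests the rewiring mechanism (this sketch uses only the leading low-frequency Laplacian eigenvectors; the adaptive method described above goes beyond them and learns which frequencies to use):

```python
import numpy as np

# Plain spectral-embedding sketch (not the adaptive method itself): embed
# nodes with low-frequency Laplacian eigenvectors, then rewire each node
# to its nearest neighbors in that embedding to raise homophily.
def spectral_rewire(adj, n_vecs=2, k=2):
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj
    _, vecs = np.linalg.eigh(lap)              # eigenvalues ascending
    emb = vecs[:, 1 : 1 + n_vecs]              # skip the constant eigenvector
    new_adj = np.zeros_like(adj)
    for i in range(adj.shape[0]):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                          # no self-loops
        for j in np.argsort(d)[:k]:
            new_adj[i, j] = new_adj[j, i] = 1
    return new_adj

# toy graph: two 4-cliques joined by a single bridge edge
a = np.kron(np.eye(2), np.ones((4, 4))) - np.eye(8)
a[3, 4] = a[4, 3] = 1
print(spectral_rewire(a).sum(axis=1))          # degrees after rewiring
```

On a graph whose clusters align with node labels, edges recreated within spectral clusters tend to connect same-label nodes, which is the homophily-raising effect the method exploits.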
The correlation between homophily and GNN performance has been studied by Zhu et al. (2021a); Luan et al. (2021); Ma et al. (2022). In general higher homophily yields better prediction performance but, as shown by Luan et al. (2021); Ma et al. (2022), in some cases, heterophily can also contribute to better model perf...
A
Another set of experiments in Figure 4(b) illustrates how a larger number of learners leads to better outcomes in terms of total risk. We consider a set of $m=2$ learners and $n=50$ subpopulations. We simulate the dynamics until the market has reached equilibrium, at which point a ...
Figure (a) illustrates a setting with 3 subpopulations and 2 learners. The solid lines correspond to the risk trajectory for the unstable balanced equilibrium at initialization. Dotted and dashed lines illustrate risk trajectories under three different slight perturbations of the initialization. In Figure (b), the l...
Despite these inherent difficulties, we find that the situation improves as the number of learners increases. It is straightforward to see that the maximal social welfare will increase: any point which is optimal for $m$ learners can be trivially transformed into a feasible point for $m+1$ lear...
The procedure repeats until the number of learners reaches the number of subpopulations. These simulations illustrate that more competition improves social welfare; however, the improvements are not uniform across subpopulations, with some groups seeing their risk at equilibrium increase with the addition of new learners.
Another set of experiments in Figure 4(b) illustrates how a larger number of learners leads to better outcomes in terms of total risk. We consider a set of $m=2$ learners and $n=50$ subpopulations. We simulate the dynamics until the market has reached equilibrium, at which point a ...
C
In the previous section, we considered a fair classifier, which means that the classifier satisfies the equalized odds requirements exactly. However, in many cases, exact fairness may not be guaranteed. For instance, the classifier may have been constructed using heuristic methods that attempt to obtain fairness (see, ...
We discuss related work in Section 2. The setting and notation are defined in Section 3. We extend the equalized odds criterion to multiclass classifiers in Section 4. In Section 5, we discuss lower-bounding the error using label proportions when the classifier is known to be fair. In Section 6, we discuss possible way...
The guiding principle is that the measure is derived from its intended meaning, which is rooted in a probabilistic interpretation. We provide this derivation in Section 6.1 below. In Section 6.2, we compare the proposed unfairness measure to possible measures that naturally arise from p...
We propose a measure of unfairness which is based on a clear probabilistic meaning, by defining the amount of unfairness as the fraction of the population on which the classifier behaves differently from its baseline behavior. This is analogous to the definition of error, which indicates the fraction of the population ...
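The "fraction of the population treated differently" idea can be made concrete with a toy computation (our own illustration of the analogy to error, not the paper's formal measure, which is derived from the equalized-odds baseline):

```python
import numpy as np

# Toy illustration of the "fraction behaving differently" idea: given a
# classifier's predictions and a reference baseline predictor, report the
# share of the population where they disagree — the analogue of error as
# the share of disagreement with the true labels. Not the paper's exact
# measure.
def disagreement_fraction(pred, baseline):
    pred, baseline = np.asarray(pred), np.asarray(baseline)
    return float(np.mean(pred != baseline))

pred     = np.array([0, 1, 1, 0, 1, 0, 1, 1])
baseline = np.array([0, 1, 0, 0, 1, 1, 1, 1])
print(disagreement_fraction(pred, baseline))   # 0.25
```

Like error, the quantity lives in [0, 1] and has a direct population-level reading, which is what gives the proposed measure its probabilistic interpretation.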
In this section, we compare the proposed unfairness measure to possible measures based on previous works. We are not aware of works that explicitly suggest a measure of unfairness for existing classifiers. However, several works propose ways to obtain a near-fair (binary) classifier based on constraints that relax the ...
B
Table 1. Overview of state-of-the-art Pair Motif and Motif Set discovery definitions and implementations, given a motif length $l$ and a TS of length $n$. Notably, no Range Motif discovery algorithm has been published to date. There are four different formal MD definitions for Motif Sets that we are aware of.
Table 1. Overview of state-of-the-art Pair Motif and Motif Set discovery definitions and implementations, given a motif length $l$ and a TS of length $n$. Notably, no Range Motif discovery algorithm has been published to date. There are four different formal MD definitions for Motif Sets that we are aware of.
Time series (TS) are sequences of real values ordered along a specific dimension, with time as the most important dimension. The concept of time series motif discovery (TSMD, or MD in short) was first described in (Patel et al., 2002) and has since then emerged as an important primitive for exploring and analyzing TS i...
Though intuitively easy to describe, the specific definitions of the MD problem for a TS $T$ differ notably between existing works. Several tools focus only on motif pairs (Mueen et al., 2009; Yeh et al., 2016), which are defined as the most similar pair(s) of subsequences of $T$ of user-defined leng...
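The motif-pair definition can be made concrete with a brute-force search (illustrative only; the cited tools use far faster algorithms such as the matrix profile):

```python
import numpy as np

# Brute-force motif-pair discovery: the most similar non-overlapping pair
# of length-l subsequences under z-normalized Euclidean distance.
# O(n^2 l) — meant only to make the definition concrete.
def znorm(s):
    sd = s.std()
    return (s - s.mean()) / sd if sd > 0 else s - s.mean()

def motif_pair(ts, l):
    best, pair = np.inf, None
    for i in range(len(ts) - l + 1):
        for j in range(i + l, len(ts) - l + 1):   # exclude trivial overlaps
            d = np.linalg.norm(znorm(ts[i:i+l]) - znorm(ts[j:j+l]))
            if d < best:
                best, pair = d, (i, j)
    return pair, best

rng = np.random.default_rng(0)
ts = rng.normal(size=120)
ts[10:20] = ts[70:80] = np.sin(np.linspace(0, 2 * np.pi, 10))  # planted motif
print(motif_pair(ts, 10))
```

Excluding overlapping start offsets (`j >= i + l`) matters: without it, the "most similar pair" degenerates to a subsequence paired with a shifted copy of itself.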
MD in TS has been researched intensively for approximately 20 years. The first publication we are aware of studied MD in the context of summarizing and visualizing massive TS datasets (Lin et al., 2002). In the following, we shall first discuss recent approaches to pair MD and then focus on methods for the disco...
D
Xie and Guo [32]-[33] considered tracking time-varying parameters by using constant algorithm gains and obtained the $L_{p}$-boundedness of the tracking errors. In [32]-[33], the homogeneous part of the estimation error equation is $L_{p}$...
At present, the non-regularized decentralized linear regression problems have been widely studied in [27]-[38]. Xie and Guo [32]-[33] considered the time-varying linear regression with measurement noises, where the cooperative information condition on the conditional expectations of the regression matrices was proposed...
Liu et al. [25] studied the decentralized regularized gossip gradient descent algorithm for linear regression models, where the method is applicable to the case in which only two nodes exchange information at each instant. In addition, they require that the graphs be strongly connected and that the observation vectors and the ...
Chen et al. [35] proposed a saturated innovation update algorithm for the decentralized estimation under sensor attacks, where the interagent communication is noiseless. They proved that if the communication graph is undirected and fixed, the nodes are locally observable, and the number of attacked nodes is less than h...
We use the regret to evaluate the performance of the decentralized optimization algorithm, which has been investigated in [36] and [43]-[44]. Yuan et al. [36] studied the non-regularized decentralized online linear regression problem over the fixed graph. Bedi et al. [43] considered the multi-agent stochastic optimiza...
D
For more details about ILU(0) and ILUT($\tau$) we refer to the literature 40, 28 pp. 287–307. A symmetric reverse Cuthill-McKee reordering 41 has been applied to some matrices to allow a stable construction of the ILU factors. The matrices that needed a reordering for the ILU preconditioner are marked with an aste...
The experiments compare the performance of GMRES with restart parameter equal to 50, Alternating AA, Subselected Alternating AA and Randomized Alternating AA in solving linear systems with the matrices described in Table 1. The exact solution of each linear system is chosen as a random vector whose entries foll...
For more details about ILU(0) and ILUT($\tau$) we refer to the literature 40, 28 pp. 287–307. A symmetric reverse Cuthill-McKee reordering 41 has been applied to some matrices to allow a stable construction of the ILU factors. The matrices that needed a reordering for the ILU preconditioner are marked with an aste...
Since the matrices xenon1 and xenon2 were poorly scaled, the ILU factors for these matrices have been computed only after a diagonal scaling, applied via a diagonal preconditioner. Situations where the matrices had zeros on the main diagonal were treated by setting ...
when using Alternating AA with the ILU(0) preconditioner, solving the least-squares problems still takes longer than performing the Richardson steps for all matrices except bcsstk29 and QC2534. These results validate the claim that the computational time to solve the least-squares problems is general...
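For reference, the least-squares subproblem at the core of an AA step is small (its size is set by the mixing window, not the matrix dimension); here is a textbook Anderson mixing step on a toy linear fixed point (our own example, not the alternating/subselected/randomized variants studied here):

```python
import numpy as np

# One Anderson-acceleration mixing step in its textbook form (illustrative;
# not the paper's alternating variants): combine recent iterates with
# coefficients from a small least-squares problem over the residuals.
def anderson_mix(X, R):
    """X: (n, m) iterates; R: (n, m) residuals r_k = g(x_k) - x_k."""
    m = R.shape[1]
    # minimize ||R @ alpha||^2 subject to sum(alpha) = 1, via a KKT system
    A = np.block([[R.T @ R, np.ones((m, 1))],
                  [np.ones((1, m)), np.zeros((1, 1))]])
    alpha = np.linalg.solve(A, np.concatenate([np.zeros(m), [1.0]]))[:m]
    return (X + R) @ alpha          # mixed g-values, i.e. g(sum_k alpha_k x_k)

# toy linear fixed point g(x) = Mx + b with a strongly contractive M
n = 20
rng = np.random.default_rng(0)
M = 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)
b = rng.normal(size=n)
g = lambda x: M @ x + b

x1 = g(np.zeros(n))
x2 = g(x1)
x3 = anderson_mix(np.stack([x1, x2], axis=1),
                  np.stack([g(x1) - x1, g(x2) - x2], axis=1))
x_star = np.linalg.solve(np.eye(n) - M, b)
print(np.linalg.norm(x3 - x_star))  # below the plain iterate's error here
```

The KKT system is only (m+1)×(m+1); the expensive part in practice is forming and applying the residual history, which is why the least-squares cost can still dominate cheap Richardson sweeps.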
C
The results are shown in Table 5. All models perform quite similarly in terms of ROUGE score in the oracle setup, while the best performance is achieved when tagging is combined with prepending, which outperforms all the evaluated methods. We do not compute ROUGE scores in the non-oracle setup, as we lack a gold summary fo...
In terms of STAS, in both setups prepending leads to much better results compared to token embeddings and tagging, which have similar scores. The best results are again obtained when tagging is combined with prepending. The high STAS score of the combined BARTpre+tag model in the non-oracle (70.09%) setup shows that t...
The results are shown in Table 5. All models perform quite similarly in terms of ROUGE score in the oracle setup, while the best performance is achieved when tagging is combined with prepending, outperforming all the evaluated methods. We do not compute ROUGE scores in the non-oracle setup, as we lack a gold summary fo...
Even though the models have not seen the zero-shot topics during training, they can successfully generate topic-oriented summaries for these topics achieving similar results in terms of both ROUGE-1 score and STAS metric, with the BARTpre+tag method outperforming all the other methods. In addition, the results indicate...
The experimental results reported in Table 2 show that topic control methods perform significantly better compared to the corresponding baseline methods that do not take into account the topic requested by the user. Furthermore, the proposed BART-based formulation significantly outperforms the topic-oriented PG approac...
A
Neutral-atom computers, particularly those employing AODs and SLMs, enable architectures with designated zones for specific functions [24]. These zones include memory (storing unused qubits), execution (performing gate operations), and measurement. Zone-based architectures have demonstrably facilitated pipelining in ne...
This paper is organised as follows: Section 1.1 introduces the goals of our approach, and Section 1.2 connects and differentiates our method from the state-of-the-art. Section 1.5 describes the natural application of our approach to the scalable compilation of neutral atom quantum circuits. Section 2 describes the ...
Within the framework of specialized zones, standard cells represent the execution zones. Our approach automates the pipelining of circuit execution across sequences of zones (tiles), as detailed in Section 2.1 and Figure 4. Furthermore, our cell design allows for the pre-computation of optimal shuttling routes, leadin...
There exist multiple applications of standard cells and tiling. First, tiling quantum circuits can inform the co-design of computing architectures, where the qubit layout, for example, is developed in parallel to the circuits to execute. Such 2D and 3D architectural co-design can be implemented with neutral atoms [10]...
Neutral-atom computers, particularly those employing AODs and SLMs, enable architectures with designated zones for specific functions [24]. These zones include memory (storing unused qubits), execution (performing gate operations), and measurement. Zone-based architectures have demonstrably facilitated pipelining in ne...
B
While most current MRI simulators [7] rely on complex biophysical models to simulate nonlinearities, recent advances in deep learning techniques in different scientific areas have motivated us to develop a DL-based model for MR image re-parameterization.
After training the autoencoder, the outputs of 8 encoding layers are used as input image features for the later part of our network. The encoding layers transform the input image into a low dimensional vector space. This compression removes redundancy in the input. Providing these low dimensional representations of the i...
In our work, we propose a coarse-to-fine fully convolutional network for MR image re-parameterization mainly for Repetition Time (TR) and Echo Time (TE) parameters. As the model is coarse-to-fine, we use image features extracted from an image reconstruction auto-encoder as input instead of directly using the raw image....
In summary, our simulation study showed that DL-based methods can be used for MR image re-parameterization. Based on our preliminary results, we suggest that DL-based methods hold the potential to generate, via simulations, MR imaging scans with a new set of parameters.
Our method consists of two technical modules: 1) an image reconstruction auto-encoder [14] to extract image features of the input image; and 2) a coarse-to-fine fully convolutional network which utilizes the image features from the auto-encoder and enforces the construction of output image with desired parameters. In ...
B
In this paper, we develop structure-preserving EVNN schemes for simulating the L^2-gradient flows and the generalized diffusions by utilizing a neural network as a tool for spatial discretization, within the framework of the discrete energetic variat...
In the numerical simulation, we take τ = 0.01 and use a 1-block ResNet with 20 nodes in each layer. The nonlinear activation function is chosen to be tanh. The total number of parameters is 501. To achieve energy stability, we fix the training samples to eliminate the n...
Given that the proposed numerical scheme employs neural networks with time-dependent parameters to approximate the solution of a gradient flow, there is no need to employ a deep neural network. In all numerical experiments for L^2-gradient flows, we...
Various numerical experiments are presented to demonstrate the accuracy and energy stability of the proposed numerical scheme. In our future work, we will explore the effects of different neural network architectures, sampling strategies, and optimization methods, followed by a detailed numerical analysis. Additionally...
In principle, the proposed numerical framework is independent of the choice of neural network architectures. However, different neural network architectures may lead to different numerical performances, arising from a balance of approximation (representation power), optimization, and generalization. In this subsection...
C
One of the key aspects of sheaf theory is that any continuous map $f\colon Y\longrightarrow X$ between topological spaces induces a pair of adjoint functors $(f^{-1},\mathrm{R}f_{*})$ ...
In this work, we provide a detailed study of the pushforward operation on sheaves and persistence modules, both from a theoretical and computational perspective. Following the same strategy of reducing the study of multi-parameter persistence modules to the study of families of one-dimensional persistence modules, we i...
In this section, we elaborate on our study of the pushforward operation and introduce in Definition 5.1 the notion of 𝔉-projected barcodes, associated to a family 𝔉 of subanalytic functions up to infinity from 𝕍 to ℝ. We mo...
We review the notion of γ-sheaves and recall the precise relationship between this type of sheaves and persistence modules [3]. We then strengthen one of our previous results, asserting that the interleaving distance between persistence modules equals the convolution distance between their associated γ...
We develop the theory of 𝔉-Integral Sheaf Metrics (𝔉-ISM), which are well-behaved distances between 𝔉-projected barcodes, obtained by taking the supremum over each function f ∈ 𝔉 of the pushforward (possib...
A
As discussed, correct arc orientation is particularly important for CBNs. Besides the problem of identifying arc orientation in equivalent DAGs, another problem is that different MECs may have rather similar scores, so the confidence in choosing one over the other is low. [23] therefore recommend consider...
This study examines the impact that the arbitrary variable ordering within the dataset has on the accuracy of graphs learnt by commonly used structure learning algorithms using discrete categorical data. Whilst the importance of some aspects of variable ordering is well known, we are unaware of any other study that qu...
We evaluate the effect of variable ordering on 16 discrete networks ranging from 8 to 109 variables. Many of these networks are widely used in the literature as case studies to evaluate structure learning algorithms. Table 1 lists them and their key characteristics. The Sports, Property, Formed and Diarrhoea networks a...
A second group of causes of inaccuracy relates to the assumptions that many algorithms rely on, and which frequently do not apply in the real world. These include assuming that there are no missing data values, latent confounders or measurement noise, as well as assumptions about the underlying statistical distributio...
We focus on networks with discrete categorical variables in this study since these are common in many domain areas such as healthcare, epidemiological data and survey data, for example. There is also a wide range of expert-specified discrete variable networks which provide a basis for making a structural evaluation of...
D
From our observations, we can conclude the following: 1) Reducing the group size (g) effectively decreases perplexity, even when employing a simple RTN quantization scheme, at the cost of a marginal increase in latency, 2) Increasing the number of GPUs (and, consequently, parallelism) does not significantly ...
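The effect of the group size g on a simple round-to-nearest scheme can be illustrated with a minimal sketch (per-group symmetric scales; the names and defaults are our assumptions, not a specific kernel's API):

```python
import numpy as np

def rtn_quantize(w, bits=4, g=128):
    """Per-group round-to-nearest (RTN) weight quantization sketch:
    split w into groups of size g, use one symmetric scale per group,
    and round to signed (bits)-bit integer codes. Names and defaults
    are our assumptions, not a specific kernel's API."""
    qmax = 2 ** (bits - 1) - 1
    w = np.asarray(w, dtype=np.float64)
    pad = (-len(w)) % g
    wp = np.pad(w, (0, pad)).reshape(-1, g)
    scale = np.abs(wp).max(axis=1, keepdims=True) / qmax
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero groups
    q = np.round(wp / scale)                 # integer codes in [-qmax, qmax]
    deq = (q * scale).reshape(-1)[: len(w)]  # dequantized weights
    return q, scale, deq
```

Smaller groups give finer scales and hence lower reconstruction error, at the cost of storing more scales, which matches the accuracy/latency trade-off noted above.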
However, for the LLaMA-65B model with FP16 weights, the model size exceeds the memory capacity of a single GPU (80GB for A100), necessitating model parallelism techniques. Nevertheless, when the weights of the LLaMA-65B model are quantized to 3 or 4 bits, as demonstrated to be a viable solution in (Frantar et al., 2022...
Accordingly, in recent years, several large-scale generative language models, including GPT-3 (175B) (Brown et al., 2020), HyperCLOVA (204B) (Kim et al., 2021a), Gopher (280B) (Rae et al., 2021), Chinchilla (70B) (Hoffmann et al., 2022), Megatron Turing NLG (530B) (Smith et al., 2022), PaLM (540B) (Chowdhery et al., 20...
To address such a concern, researchers have proposed to use model parallelism, which distributes computations over multiple GPUs through GPU-to-GPU communication (Shoeybi et al., 2019; Narayanan et al., 2021). Nevertheless, it is worth noting that model parallelism introduces additional overheads, stemming from the int...
Recent studies have focused on the inefficiency of the generation step and, in response, proposed the utilization of the W4A16 format (Frantar et al., 2022; Zeng et al., 2022; Dettmers et al., 2023; Kim et al., 2023), which compresses model weights into 4-bit integers without quantizing the activations as weights typic...
A
Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee...
Now we turn to the proof of the main result of the paper, and show that approximating the rank of a graph divisor within reasonable bounds is hard. The high level idea of the proof is the following. First, we show that the Minimum Target Set Selection problem reduces to computing the distance of a divisor on an auxilia...
From the positive side, Luo [16] introduced the notion of rank-determining sets of metric graphs, and verified the existence of finite rank-determining sets constructively. Hladký, Král, and Norine [13] confirmed a conjecture of Baker [2] relating the ranks of a divisor on a graph and on a tropical curve, and provided...
Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee...
The above reduction of Min-TSS to Dist-Nonhalt uses an auxiliary graph that contains parallel edges. Therefore, the stated complexity results hold for general (i.e. not necessarily simple) graphs. However, in [13, Corollary 22], Hladký, Kráľ, and Norine proved that if one subdivides each edge of a graph with a new vert...
B
To effectively incorporate both temporal and channel attention dimensions while minimizing parameter usage and maximizing the receptive field, we present a novel attention mechanism termed Temporal-Channel Joint Attention (TCJA). This attention mechanism is distinguished by its global cross-receptive field and its abil...
Figure 4: The framework of the SNN with the TCJA module. In SNNs, information is transmitted in the form of spike sequences, encompassing both temporal and spatial dimensions. Temporal-wise, the spiking neuron with a threshold feeds forward in membrane potential (V) and spike (S) ...
As mentioned above, we contend that the frame at the current time step exhibits a significant correlation with its neighboring frames in both the channel and temporal dimensions. This correlation opens up the possibility of employing a mechanism to establish a connection between these two dimensions. Initially, we emp...
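The idea of jointly gating the temporal and channel dimensions can be sketched in a toy form: one 1-D convolution along time and one along channels, fused into a sigmoid gate. Kernel sizes and random weights here are placeholders of ours, not the trained TCJA parameters.

```python
import numpy as np

def tcja(x, kt=3, kc=3, seed=0):
    """Toy temporal-channel joint attention on a (T, C) spike-rate map:
    one 1-D convolution along time and one along channels produce score
    maps whose fused sigmoid recalibrates the input. Kernel sizes and
    random weights are placeholders, not the trained TCJA parameters."""
    T, C = x.shape
    rng = np.random.default_rng(seed)
    wt, wc = rng.standard_normal(kt), rng.standard_normal(kc)
    s_t = np.stack([np.convolve(x[:, c], wt, mode='same') for c in range(C)], axis=1)
    s_c = np.stack([np.convolve(x[t, :], wc, mode='same') for t in range(T)], axis=0)
    attn = 1.0 / (1.0 + np.exp(-(s_t * s_c)))  # joint gate in (0, 1)
    return x * attn
```

Because each score map is produced by a small 1-D kernel, the gate sees a cross-receptive field over both dimensions while keeping the parameter count linear in kt + kc.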
To make the attention mechanism easier to understand, we finally visualize the output of the first TCJA module in TCJA-SNN working with the DVS128 Gesture dataset, which can be seen in Fig. 11. Changes in attention weights are primarily accumulated among channels, verifying further the substantial role performed by th...
Based on the aforementioned analysis, the utilization of a temporal-wise attention mechanism in SNNs has exhibited substantial progress in effectively processing time-related data streams. Moreover, it has been observed in both biological neural networks [32] and ANNs [15] that recalibrating channel features within con...
A
the restriction $t<\mu_{\min}$ in the $L^2$ estimate (30). However, when $s>k$, it appears that a
$\lVert u-u_h\rVert_h \lesssim h^{\min(k,s)}\bigl(\lVert u\rVert_{s+1,1-\mu}+\lVert\nabla\cdot u\rVert_{s,\mu-1}+\lVert\nabla\times u\rVert_{s,\mu-1}\bigr),$
$\lVert u-u_h\rVert_\Omega \lesssim h^{\min(k,s)+t}\bigl(\lVert u\rVert_{s+1,1-\mu}+\lVert\nabla\cdot u\rVert_{s,\mu-1}+\lVert\nabla\times u\rVert_{s,\mu-1}\bigr).$
$\lVert u-u_h\rVert_h \lesssim h^{s}\bigl(\lVert u\rVert_{s+1,1-\mu}+\lVert\nabla\cdot u\rVert_{s,\mu-1}+\lVert\nabla\times u\rVert_{s,\mu-1}\bigr).$
$\lVert u-u_h\rVert_\Omega \lesssim h^{\min(k+1,s+t)}\bigl(\lVert u\rVert_{s+1,1-\mu}+\lVert\nabla\cdot u\rVert_{s,\mu-1}+\lVert\nabla\times u\rVert_{s,\mu-1}\bigr).$
D
Specifically, we manually inspected all benchmarks, whose relevant smart contracts are all open-source, and for each benchmark we allocated more than 4 manual analysis hours to extract the precise mathematical summaries. The baseline synthesizer then
Table 1 lists each action’s token flow, along with the number of data points collected initially (without counterexamples) and the total number of data points for polynomial and interpolation, respectively. The amounts of tokens transferred in/out for each action are calculated based on its contract’s member variables ...
If the estimation is accurate, this indicates that the transition functions of the action at the index k of 𝐂 and its predecessors are accurate; so the procedure breaks the loop and returns the data points computed in the previous iterations (line 7). Otherwise, it indicates inaccurate tra...
the sub-procedure Construct constructs the optimization framework 𝒫 for the actions vector (line 9). Then, FlashSyn uses the optimization sub-procedure Optimize (line 10) to find the optimal concrete values to pass as input parameters to the methods in the actions vector that satisfy the cons...
We assume that the flash loan providers are generally available, and we do not consider the borrow and the return as part of the synthesis task. To facilitate FlashSyn experimentation, we manually annotated the prestates and poststates for each action. The details of this annotation process are described in Appendi...
D
$\lVert \hat{f}_{\mathbf{H}_0,h}(x)-f(x)\rVert_\infty < C'\cdot\Bigl(\frac{\,\cdots\,}{n\,h^{d}}\Bigr),$
The sample complexity in the general bound of Theorem 6 grows exponentially with the dimension of the parameter space Θ. In many practical cases however, such as the HalfCircle domain of Example 3, there may be a low dimensional representation that encodes most of the important information in the tasks, e...
The first term on the right hand side of (1) is a bias term, which can be reduced by reducing h. However, the second term will grow when h is reduced. In general, the sample complexity under an optimal bandwidth scales exponentially in the dimension d (see Lemma 4 for a specific example)...
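The bias/variance trade-off in the bandwidth h can be made concrete with a schematic version of such a bound; the constants and exponents below are illustrative placeholders, not the paper's.

```python
import numpy as np

def kde_error_bound(h, n, d, c_bias=1.0, c_var=1.0):
    """Schematic shape of a KDE bound like (1): a bias term shrinking
    with h plus a variance-type term growing as h shrinks. Constants
    and exponents are illustrative placeholders, not the paper's."""
    return c_bias * h ** 2 + c_var / np.sqrt(n * h ** d)

# The optimal bandwidth balances the two terms; for fixed n the attainable
# bound degrades quickly as the dimension d grows.
n, d = 10_000, 2
hs = np.logspace(-3, 0, 200)
bounds = kde_error_bound(hs, n, d)
h_star = hs[np.argmin(bounds)]
```

The minimizer sits strictly inside the bandwidth range: very small h blows up the variance-type term, very large h blows up the bias term.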
$q=\sup_{M,M'\in\mathcal{M},\;s,s'\in\mathcal{S},\;a\in\mathcal{A},\;c\in\mathcal{C}}\ \frac{P_{M}(s',c\mid s,a)}{P_{M'}(s',c\mid s,a)}$
The first term in the bound of Theorem 9 is the KDE error. Note that, compared to the KDE error in Theorem 6, the exponential dependence is on the low dimension d′, and not on the higher dimension d. The second term in the bound is d...
B
The increasing popularity of social media has facilitated discussion among Internet users and encouraged them to share information and experiences. This development has contributed to the emergence of online health communities (OHCs), which provide a new channel for exchanging healthcare knowledge, including personal expe...
Our proposed method can be generalized to align the word vector spaces of two or more languages to compare consumer-oriented expressions across multiple languages. The framework can be used for exploring different vocabulary usage patterns in different countries and for generating a language-agnostic CHV for facilitating cross-lin...
However, analyzing HCGC is challenging because the vocabulary used by consumers is very different from that used in the medical literature and electronic health records. For example, consumers tend to use colloquial expressions like watery stool rather than professional jargon such as diarrhea for describing their bowe...
This study presents two implications for practitioners. First, the induced non-English CHV connects to the existing English CHV thanks to the bilingual word space. Such connections help the induced non-English medical terms conform to the existing medical terminology, such as concept unique identifiers (CUI) in UMLS. T...
We collected the healthcare Q&A corpus from OHCs, the platforms for laypeople to exchange health-related information, where people tend to use colloquial expressions rather than professional jargon. Hence, we can exploit the usage of health vocabulary from such HCGCs on the OHCs.
B
The widespread adoption of AI-based black-box models has become a standard practice across various fields due to their ability to be deployed without requiring an in-depth understanding of the underlying processes. However, this advantage also poses challenges regarding trustworthiness and the explanation of AI models...
We call our approach Thermodynamics-inspired Explainable Representations of AI and other black-box Paradigms (TERP). Owing to its model-agnostic implementation, TERP can be used for explaining predictions from any AI classifier. We demonstrate this generality by explaining the following black-box models in this work: ...
We showcased the effectiveness of this approach in various AI applications, including image classification, text analysis, and molecular simulations. While several methods (Fisher, Rudin, and Dominici, 2019; Lundberg and Lee, 2017; Sundararajan, Taly, and Yan, 2017; Wachter, Mittelstadt, and Russell, 2017) have been p...
Recently there has been significant progress in addressing this issue, and the proposed approaches can be classified into two categories: (a) AI models that are inherently explainable, or (b) post-hoc explanation schemes for AI models that are not inherently explainable (XAI) (Rudin, 2019). Since most of the existing bl...
Recent applications of our framework (TERP) have been instrumental in uncovering key mechanisms behind crystal nucleation (Wang et al., 2024) and hydrophobic ligand dissociation (Beyerle and Tiwary, 2024). Given the critical role of molecular sciences in uncovering chemical reaction pathways (Yang et al., 2017), understand...
B
An important FuSeBMC subsystem discussed in this paper is the Tracer, which coordinates the bounded model checker and the various fuzzing engines. The Tracer monitors the test-cases produced by the fuzzers. It selects those with the highest impact (as measured by the metrics discussed in Section 3) to act as s...
Bounded model checking can be slow and resource-intensive. To mitigate this, FuSeBMC does not use an off-the-shelf fuzzer for its grey-box fuzzing but instead uses a modified version of the popular American Fuzzy Lop tool. One of the features of this modified fuzzer is its ability to carry out lightweight stati...
In this paper, we presented FuSeBMC v4, a test generator that relies on smart seed generation to improve the state-of-the-art in hybrid fuzzing and achieve high coverage for C programs. First, FuSeBMC analyses and injects goal labels into the given C program. Then, it ranks these goal labels according to the given stra...
FuSeBMC begins by analyzing C code and then injecting goal labels into the given C program (based on the code coverage criteria that we introduce in Section 3.2.1) and ranking them according to one of the strategies described in Section 3.2.2 (i.e., depending on the goal’s origin or depth in the PUT). From then on, FuS...
GTFuzz (Li et al., 2020) is a tool that prioritizes inputs based on extracting the syntax tokens that guard the target place; these tokens are extracted by a backward static analysis technique. GTFuzz also benefits from this extraction by improving the mutation algorithm. Smart grey-box fuzzing (SGF) (Pham et al., 2019) is a fuz...
A
where $X_{b,p}^{(\alpha,\beta)}$ and $I_{b,p}^{(\alpha,\beta)}$ ...
Our pseudo-stabilization technique discussed in Appendix A is essential to the scalability of the JFP method and thus also for its application to practical computational problems. We emphasize that high-precision computations are required only for the computation of fractional integration matrices and not for the solu...
We consider our method to be the successor of that of Bhrawy and Zaky [7]. They applied a change of variables to classical Jacobi polynomials such that the algebraic singularities of the resulting basis, the JFP basis (which is called thus for reasons we explain in Section 3), conform to those of the solution.³ The met...
The bandwidths of the matrices follow from Lemmas 7 and 8. The first equation (28) follows immediately from the commutativity of fractional (and integer-order) integration matrices stated in (3). To derive (29), we apply Proposition 6 to the JFP basis, then
The following is an outline of the paper: We introduce the basic constituents of the JFP method in Sections 2 and 3 (matrix representations of operators on quasimatrices, high-precision floating-point numbers, the JFP basis, etc.). We then focus on the properties (Section 4) and computation (Section 5) of fractional in...
C
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. Entities that do not hold the feature at all should have the same, lowest value. If ...
Let us first consider the best index value ranking in the unsupervised approach (Fig. 1c presented in the main text and Fig. S20), in which the lowest index value of L₁ is greater than the highest index value of L₂ ...
In the main text, taking the common neighbor feature as an example, we show that the theoretical finding can help determine feature and index selection. To further validate the preceding analysis, we here conduct a second analysis using the path feature (Table S2). When using the index SPL to make unsupervised pr...
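As a concrete instance of the convention that entities without a feature all receive the lowest value 0, a minimal common-neighbor index can be sketched on a toy graph (the graph and its values are our hypothetical example):

```python
def common_neighbors_index(adj, u, v):
    """Common-neighbor index of a node pair: the number of shared
    neighbors. Pairs holding none of the feature all receive the same
    lowest value, 0, matching the convention discussed above."""
    return len(adj[u] & adj[v])

# toy undirected graph as adjacency sets (a hypothetical example)
adj = {
    0: {1, 2, 4},
    1: {0, 2, 3},
    2: {0, 1, 3},
    3: {1, 2},
    4: {0},
}
```

Here the pair (0, 3) scores 2, while (4, 3) has no common neighbor and gets the lowest value 0, so any monotone use of the index ranks it last.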
Currently, there is no unified rule on what value should be the lowest. In the main text, we consider the lowest value to be zero. Indeed, all 21 indexes in this study assign the value 0 to entities that do not hold the feature they are designed to quantify. There could be exceptions. For example, one can design an ind...
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. Entities that do not hold the feature at all should have the same, lowest value. If ...
C
where 𝐢 is a collection of layer indices whose weights are updated, and 𝐫 is the corresponding update ratios (1/8, 1/4, 1/2, 1). Intuitively, by solving this optimization problem, we find the combination of (#layers for bias update, the subset of weights to update) such that the ...
(a,b) Our engine traces the forward graph for a given model and derives the corresponding backward graph at compile time. The red cycles denote the gradient descent operators. (c) To reduce memory requirements, nodes related with frozen weights (colored in light blue) are pruned from backward computation.
TTE offloads the auto-differentiation from the runtime to the compile-time, generating a static backward graph which can be pruned and optimized (see below) to reduce the memory and computation. TTE is based on code generation: it compiles the optimized graphs to executable binaries on the target hardware, which minim...
For sparse layer update, we prune away the gradient nodes of the frozen weights, only keeping the nodes for bias update. Afterwards, we traverse the graph to find unused intermediate nodes due to pruning (e.g., saved input activation) and apply dead-code elimination (DCE) to remove the redundancy. For sparse tensor upd...
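The pruning pass described above can be sketched as a reachability traversal over a toy backward graph; node names like dW1, db1, and a0 are our illustrative placeholders, not TTE's actual IR.

```python
def prune_backward_graph(deps, outputs):
    """Keep only nodes reachable from the requested gradient outputs
    (e.g., bias gradients under sparse layer update); gradients of
    frozen weights and their saved activations become dead code and
    are dropped. `deps` maps node -> list of input nodes."""
    live, stack = set(), list(outputs)
    while stack:
        node = stack.pop()
        if node in live:
            continue
        live.add(node)
        stack.extend(deps.get(node, []))
    return {n: ins for n, ins in deps.items() if n in live}

# hypothetical backward graph: dW1 needs the saved activation a0, db1 does not
deps = {"db1": ["dy"], "dW1": ["dy", "a0"], "a0": [], "dy": []}
pruned = prune_backward_graph(deps, ["db1"])
```

With only the bias gradient requested, the frozen-weight gradient dW1 and the saved activation a0 it depends on are both eliminated, which is where the memory savings come from.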
(d) To minimize memory footprint, the gradient descent operators are re-ordered to be interlaced with backward computations (colored in yellow). (e) TTE compiles forward and backward graphs using code generation and deploys training on tiny IoT devices (best viewed in colors).
A
$\tfrac{1}{2}\bigl((M+I)z+q\bigr)=\tfrac{1}{2}\bigl\lvert(M-I)z+q\bigr\rvert$
$\tfrac{1}{2}\bigl((A+I)x^{\ast}+b\bigr)=\tfrac{1}{2}\bigl\lvert(A-I)x^{\ast}+b\bigr\rvert$
$\tfrac{1}{2}\bigl((B+I)y^{\ast}+c\bigr)=\tfrac{1}{2}\bigl\lvert(B-I)y^{\ast}+c\bigr\rvert.$
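The identity above characterizes LCP solutions: writing w = Mz + q, the equation (1/2)((M+I)z+q) = (1/2)|(M-I)z+q| says w + z = |w - z| componentwise, which holds exactly when z >= 0, w >= 0 and z^T w = 0. A quick numerical check, with a hand-picked M and q whose LCP solution is known:

```python
import numpy as np

def lcp_residual(M, q, z):
    """Residual of the absolute-value identity: with w = Mz + q, the
    identity (1/2)((M+I)z + q) = (1/2)|(M-I)z + q| reads w + z = |w - z|
    componentwise, which holds iff z >= 0, w >= 0 and z^T w = 0."""
    I = np.eye(len(q))
    lhs = 0.5 * ((M + I) @ z + q)
    rhs = 0.5 * np.abs((M - I) @ z + q)
    return np.max(np.abs(lhs - rhs))

# M = I, q = (-1, 1): the LCP solution is z = (1, 0) (hand-picked example)
M, q = np.eye(2), np.array([-1.0, 1.0])
```

For the solution z = (1, 0) the residual vanishes, while a non-complementary point such as z = (1, 1) leaves a strictly positive residual.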
$\dfrac{\max\lVert(A-BD)^{-1}\rVert\,\lVert\Delta b\rVert}{\lVert x^{\ast}\rVert}=\dfrac{\max\lVert(A-BD)^{-1}\rVert\,\lVert\Delta b\rVert}{\lVert b\rVert}\cdot\dfrac{\lVert b\rVert}{\lVert x^{\ast}\rVert}$
$\tfrac{1}{2}\bigl((M+I)z+q\bigr)=\tfrac{1}{2}\bigl\lvert(M-I)z+q\bigr\rvert$
A
Let m ≥ 3, possibly depending on n. The expected runtime of the permutation-based (1+1) EA with heavy-tailed scramble mutation with power-law exponent β on PJump_{n,m} ...
Concentrating on the plots ignoring easy-to-detect void mutations, that is, the dotted lines in Figure 1 (note that there are no void mutations for the heavy-tailed swap operator, hence this line is identical to (and thus covered by) the corresponding solid line), we see that the Poisson scramble operator leads to the ...
Again, we also determine the runtime of the (1+1) EA with heavy-tailed scramble mutation on the permutation version of the LeadingOnes benchmark. This analysis will, naturally, very roughly follow the lines of the analysis for the standard scramble operator. However, the now much higher probability to ...
Both from the complicated analysis and the slightly odd result that jump functions with jump size m and m+1, m odd, have the same asymptotic optimization time, we were led to wonder if the mutation operator regarded in [STW04] is really the most appropriate one. We therefore also...
Since the heavy-tailed scramble mutation differs from the standard scramble operator only in the probability distribution used to sample the size k of the set of items randomly permuted, we can reuse large parts of the analysis for the standard scramble operator.
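The only changed ingredient, sampling the scramble size k from a power law instead of a Poisson-type law, can be sketched as follows; the exponent default beta = 1.5 is our illustrative choice, not the tuned value from the analysis.

```python
import random

def power_law_k(n, beta=1.5, rng=random):
    """Sample the scramble size k in {2, ..., n} from a truncated
    discrete power law Pr[k] ~ k^(-beta); beta is an illustrative
    default, not the tuned exponent from the analysis."""
    ks = range(2, n + 1)
    weights = [k ** (-beta) for k in ks]
    return rng.choices(ks, weights=weights)[0]

def scramble_mutation(perm, beta=1.5, rng=random):
    """Heavy-tailed scramble: pick k positions uniformly at random and
    randomly permute the items found there (a sketch)."""
    k = power_law_k(len(perm), beta, rng)
    idx = rng.sample(range(len(perm)), k)
    items = [perm[i] for i in idx]
    rng.shuffle(items)
    child = list(perm)
    for i, it in zip(idx, items):
        child[i] = it
    return child
```

The heavy tail makes large scramble sizes polynomially rather than super-exponentially unlikely, which is what raises the probability of the large jumps discussed above.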
D
$\exp_{\eta}(tv):=\dfrac{e^{\theta(\eta)}\,e^{t\,u(\eta,v)}}{1+e^{\theta(\eta)}\,e^{t\,u(\eta,v)}}$
The dual nature of the exponential parametrization (4.8) is also highlighted by recovering the metric tensor (4.2) as a Hessian metric from the potential ϕ(η) that is conjugate to the log-partition function (4.9),
A key concept of information geometry is to replace the metric connection by a pair of connections that are dual to each other with respect to the Riemannian metric g [AN00, Section 3.1]. In particular, under suitable assumptions, the parameter space of a probability distribution becomes a Riemannian manifo...
Retractions [AMS08, Def. 4.1.1] and their inverses are basic ingredients of first-order optimization algorithms on Riemannian manifolds. The main motivation is to replace the exponential map with respect to the metric (Levi Civita) connection by an approximation that can be efficiently evaluated or even in closed form....
Such an adaptation of the constraints is not needed in our proposed approach, which therefore makes our approach more flexible. We note, however, that the impact of the second level in the case of the projected gradient is stronger than in the geometric case. We believe that this is because our coarse model loo...
A
Monotone functions arise in several fields such as economics, operations research, statistics, computational complexity theory, healthcare, and engineering. For example, larger houses typically result in larger prices, and certain features are monotonically related to option pricing [15] and bond rating [12]. As monoto...
Given the above result, it may seem that, similarly to the case of monotone networks with ReLU activations, the class of monotone networks with threshold activations is too limited, in the sense that it cannot approximate any monotone function with a constant depth (allowing the depth to scale with the dimension was c...
Later, the problem of approximating arbitrary monotone functions with networks having non-negative parameters using more standard activation functions such as thresholds or sigmoids has been studied in [12]. In particular, [12] gives a recursive construction showing how to approximate in the $\ell_{\infty}$...
Several works have studied approximating monotone (real) functions over a bounded domain using a monotone network. Sill [39] provides a construction of a monotone network (all parameters are non-negative) with depth 3 where the first layer consists of linear units divided into groups, the second layer consists of ma...
When using a network to approximate a monotone function, one might try to “force” the network to be monotone. A natural way to achieve this is to consider only networks where every parameter (other than the biases) is non-negative² (²We restrict our attention to non-negative prediction problems: The domain of the functi...
D
In this study, we assumed the data to allow for solutions $u \in H^{2}$ concerning the spatial variable. At the cost of some technical but standard extensions of the DG analysis, we can extend our results to the case that $u \in H^{3/2+\epsilon}$...
A. Rupp has been supported by the Academy of Finland’s grant number 350101 Mathematical models and numerical methods for water management in soils, grant number 354489 Uncertainty quantification for PDEs on hypergraphs, grant number 359633 Localized orthogonal decomposition for high-order, hybrid finite elements, Busi...
Thus, the spatial grid of the discontinuous Galerkin method is significantly coarser than the conforming finite element grid. We do so to underline that it is generally considered ‘unfair’ to compare discontinuous Galerkin and conforming finite elements on the same grid, since DG usually has many more degrees of freed...
A common feature of virtually all of the aforementioned QMC literature related to PDE uncertainty quantification is that the QMC rule is designed for the non-discretized PDE problem (1) whereas, in practical computations, one only has access to a discrete approximation of the PDE system. Of course, as long as one uses ...
Figure 1 shows the numerical results for the affine case and locally linear DG or conforming finite element approximation. We recognize that the green plot for the SIPG method with $\eta = 10$ does not show the expected convergence behavior (with respect to the number of cubature points). This nicely r...
A
The (homogeneous) CL model was first used in the work (HKK+13, ) for studying multi-agent BAI, but the model was not formally defined there. The results for fixed-time BAI in (HKK+13, ) only consider the special case where there is only one communication phase (i.e., $R = 2$). The CL model was rigorously ...
The authors of (RVK22, ) studied BAI and regret minimization in multi-armed bandits in a model similar to the CL model, but mainly in the fixed-confidence setting. That is, their algorithm takes a confidence parameter $\delta$ (instead of a time horizon $T$) as an input, and tries to use the smallest p...
We aim at identifying the arm whose associated distribution has the largest mean by a sequence of $T$ pulls. In each arm pull, we choose an arm based on the previous pulls and outcomes, and obtain a sample from the arm’s associated distribution. Assuming that each pull takes unit time, we call $T$ the...
In each round, each agent takes a sequence of pulls (one at each time step) and observes the outcomes. At the end of each round there is a communication phase; the agents communicate with each other to exchange newly observed information and determine the number of time steps for the next round (the length of the first...
B
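The fixed-time pull/observe protocol described above can be sketched in a few lines. This is a minimal illustration with uniform round-robin allocation over Bernoulli arms; all names are hypothetical and this is not the algorithm of any of the cited works.

```python
import random

def best_arm_uniform(means, T, seed=0):
    """Fixed-time BAI sketch: spread T pulls uniformly over the arms,
    observe Bernoulli outcomes, and return the arm with the highest
    empirical mean after the budget is spent."""
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n
    sums = [0.0] * n
    for t in range(T):
        arm = t % n                      # uniform (round-robin) allocation
        outcome = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += outcome
    estimates = [s / c for s, c in zip(sums, counts)]
    return max(range(n), key=lambda i: estimates[i])

# with well-separated means and T = 3000, the best arm (index 2)
# is identified with overwhelming probability
best = best_arm_uniform([0.2, 0.5, 0.8], T=3000)
```

Communication-efficient variants in the CL model distribute these pulls across agents and synchronize the empirical means only during the communication phases.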
Despite an upsurge in developing optimization methods to address such a problem, the potential of low-memory quasi-Newton methods has largely been neglected, which can be partially attributed to the absence of theoretical foundations for handling nonsmooth settings. In the smooth strongly convex setting, competitive co...
Even though in Algorithm 1 quasi-Newton directions based on the residual mapping were suggested (cf. (3.5)), any superlinear direction can be employed in the algorithm. As a result, our theory provides a direct globalization strategy for works that employ quasi-Newton direction with only local convergence guarantees. F...
Stochastic gradient descent (SGD) is commonly employed for finite-sum minimization problems. Despite its simple iterations, SGD requires a diminishing stepsize and, even in the strongly convex setting, can only achieve sublinear rates of convergence. These limitations have prompted the development of several s...
Notably, algorithms such as SAGA, SVRG, and SARAH employ an outer loop to incorporate full gradients as the deterministic enhancement, along with an inner loop that incorporates stochastic gradients using randomized sampling with replacement. Furthermore, these algorithms adopt fixed stepsizes, in contrast to SGD which...
One distinguishing characteristic of the proposed algorithm's outer loop, which sets it apart from stochastic algorithms like SVRG and SARAH, is its use of quasi-Newton directions integrated with a linesearch while preserving the advantageous low-memory characteristic. In the context of this study, variou...
B
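As a concrete reference point for the outer/inner-loop structure attributed to SVRG above, here is a minimal pure-Python sketch on a toy scalar quadratic finite sum. All names are illustrative; this shows the generic SVRG recursion (full gradient at a snapshot, variance-reduced inner steps with a fixed stepsize, sampling with replacement), not the proposed quasi-Newton algorithm.

```python
import random

def svrg(grad_i, n, x0, step, outer=20, inner=50, seed=0):
    """SVRG sketch for min (1/n) sum_i f_i(x), scalar x for brevity.
    Outer loop: full gradient at a snapshot. Inner loop: stochastic
    variance-reduced steps with a FIXED stepsize."""
    rng = random.Random(seed)
    x = x0
    for _ in range(outer):
        snapshot = x
        full_grad = sum(grad_i(i, snapshot) for i in range(n)) / n
        for _ in range(inner):
            i = rng.randrange(n)  # sampling with replacement
            # variance-reduced gradient estimate
            g = grad_i(i, x) - grad_i(i, snapshot) + full_grad
            x -= step * g
    return x

# f_i(x) = (x - a_i)^2 / 2, so grad f_i(x) = x - a_i; minimizer is mean(a)
a = [1.0, 2.0, 3.0, 6.0]
xstar = svrg(lambda i, x: x - a[i], n=4, x0=0.0, step=0.1)
```

On this toy problem the variance-reduced estimate collapses to the exact full gradient (all $f_i$ share the same curvature), so the iterates contract deterministically to the mean $3.0$; in general, the correction term only removes variance progressively as the snapshot approaches the optimum.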
In this example, we apply Algorithm 4 to the COIL-100 dataset [51], which is the extension of the COIL-20 dataset. This data tensor consists of 7200 color images (100 objects under 72 rotations per object, see Figure 6 for some samples of this data tensor). The size of each image is $128 \times 128 \times 3$...
As discussed earlier, the randomized algorithms proposed in [34, 35, 36, 50] require an estimation of the tubal rank, which may be a difficult task. To overcome this limitation, we propose a new randomized fixed-precision (adaptive) algorithm which, for a given approximation error bound, can find an optimal tubal rank...
In this paper, we proposed a new randomized fixed-precision algorithm for fast computation of tensor SVD (t-SVD). Unlike the existing randomized low tubal rank approximation methods, the proposed algorithm finds an optimal tubal rank and the corresponding low tubal rank approximation automatically given a data tensor a...
The sampling approach can also be used for low tubal rank approximation besides the random projection. Indeed, a randomized slice sampling algorithm was proposed in [35] in which horizontal and lateral slices are selected and a low tubal rank approximation is computed based on them, see Figure 3 for a graphical illust...
This decomposition has found many applications, including deep learning ([25, 26]), tensor completion ([27, 28]), numerical analysis ([29, 30, 31, 32]), and image reconstruction [33]. Several randomized algorithms ([34, 35, 36]) exist to decompose a tensor into the t-SVD format, but all of them need an estimation ...
B
For each of the 10 runs, we optimize the parameter $w$ (weight) by an internal 3-fold cross-validation procedure performed on the labeled portion of the training set. The semi-supervised methods also use the available unlabeled examples. The values of the parameter $w$ vary from 0 to 1 with a step of...
The algorithms are evaluated by means of the area under the Precision-Recall curve (AUPRC). Since the considered tasks are MLC and HMLC, we use a variant of the AUPRC – the area under the micro-averaged average Precision-Recall curve ($\mathrm{AU}\overline{\mathrm{PRC}}$)...
The statistical test is applied to the predictive performances ($\mathrm{AU}\overline{\mathrm{PRC}}$) of the supervised and semi-supervised single trees (SL-PCT, SSL-PCT and SSL-PCT-FR) on the datasets considered in this study: 12 for multi-label classification and 1...
Figure 3 presents the learning curves in terms of the predictive performance ($\mathrm{AU}\overline{\mathrm{PRC}}$) of semi-supervised (SSL-PCT, SSL-PCT-FR, SSL-RF and SSL-RF-FR) and supervised methods (SL-PCT and CLUS-RF) on the 12 hierarchical multi-label classifi...
Figure 2 presents the predictive performance ($\mathrm{AU}\overline{\mathrm{PRC}}$) of semi-supervised (SSL-PCT, SSL-PCT-FR, SSL-RF and SSL-RF-FR) and supervised methods (SL-PCT and CLUS-RF) on the 12 MLC datasets, with an increasing amount of labeled data.
A
Furthermore, in a comparison of drivability in the evening, DeepIPC and AIM-MT perform worse than Huang et al.’s model. In line with the offline result, the model that mainly takes RGB images fails to perceive the environment in the evening, as the provided image is not as clearly visible as when driving at noon. On t...
Based on the experimental results, we disclosed several findings as follows. First, in line with our previous work [1], the BEV semantic feature is proven to improve the model's performance in predicting waypoints and navigational controls. With better perception, the model can leverage useful information, which result...
In the navigational controls estimation task, DeepIPC also has the best performance in line with the waypoints prediction result. The MLP agent can leverage useful features encoded from both RGB and BEV semantic maps. Therefore, the MLP agent can perform as well as the PID agent in estimating steering and throttle. Wi...
Table III shows that DeepIPC achieves the best performance by having the lowest total metric score in all conditions. Moreover, it achieves the fastest inference speed (lowest latency) as it has the lowest number of parameters, yielding a very low computational load compared to the other models. However, all models inc...
B
For the opposite direction, if there is a tree decomposition $\mathcal{T} = (T, \{X_{t}\}_{t \in V(T)})$ of $G'$...
We supplement our main results with a second NP-completeness proof for a problem closely related to computing the tree-independence number and the algorithm of Theorem 1.1. We consider the problem where we are given a graph $G$, two non-adjacent vertices $u$ and $v$, and an integer $k$...
we demonstrated that for every fixed integer $k \geq 3$, deciding if two given vertices of a graph can be separated by removing a set of vertices that induces a graph with independence number at most $k$ is NP-complete. This result indicates that it may be already NP-complete to decide whether ...
In this section we show that, for every fixed integer $k \geq 3$, deciding if two given vertices of a graph can be separated by removing a set of vertices that induces a graph with independence number at most $k$ is NP-complete.
We show in Theorem 6.1 that this problem is NP-complete for any fixed integer $k \geq 3$. This hardness result is motivated by the fact that the algorithm of Theorem 1.1 finds separators with bounded independence number as a subroutine.
C
To solve the above challenges, in this work, we propose an efficient VFL optimization framework with multiple heads (VIM), where each head corresponds to one local client. VIM takes the individual contribution of clients into consideration and facilitates a thorough decomposition of the VFL optimization problem into mu...
the server needs to calculate training loss based on the labels and then send gradients to clients for each training step to update their local models vepakomma2018split ; chen2020vafl ; kang2020fedmvt , which incurs high communication cost and leads to potential rapid consumption of the privacy budget.
(i) In the model splitting setting, each client trains a feature extractor as the local model that outputs local embeddings, and the server owns a model which predicts the final results based on the aggregated embeddings. (ii) In the VFL without model splitting setting, the clients host the entire model that outputs th...
Due to the privacy protection requirement of VFL, each client $k$ does not share its raw local feature set $X_{k}$ with other clients or the server. Instead, VFL consists of two steps: (1) local processing step: each client learns a local model th...
This leads to faster model convergence and significantly reduces the communication cost, which is crucial to preserve privacy because the privacy cost of clients increases when the number of communication rounds increases abadi2016deep ; brendan2018learning , due to the continuous transmission of sensitive local inform...
D
$\alpha_{si} = \dfrac{\exp\big(\mathbf{a}^{\mathrm{T}}\sigma(\mathbf{W}_{g}[\mathbf{x}_{s}(t)\,\|\,\phi(\Delta t_{i})\,\mathbf{x}_{i}(t)])\big)}{\sum_{j \in \mathcal{N}_{s}(t)} \exp\big(\mathbf{a}^{\mathrm{T}}\sigma(\mathbf{W}_{g}[\mathbf{x}_{s}(t)\,\|\,\phi(\Delta t_{j})\,\mathbf{x}_{j}(t)])\big)}.$
In this subsection, we perform an ablation study to verify the effectiveness of our two components: 1) the time-aware attentional aggregating module, which consists of one aggregation process for calculating the interaction message and one information propagation process for calculating the intermediate embeddings, as shown in ...
As a trade-off between speed and performance, we set a limit on the neighborhood size $k$ in our reinforcement-based agent; i.e., we only send the most recent $k$ neighbors to the agent for selection. Thus, we study the impact of different numbers of neighbors on the model performance.
To investigate the impact of varying dimensions on the model’s performance, we conduct another experiment using our method with different dimensions on the UCI dataset. Note that in order to minimize the need for hyperparameter tuning, we set $d_{n} = d_{m}$...
For the sake of efficiency and parallel processing ability, our reinforced neighbor selection module only samples the $k = 200$ most recent interaction neighbors. We also perform experiments to show the performance of our method with different values of $k$. In our experiment, we s...
B
TABLE VII: Progressive Uncertainty performance in our framework. The results are reported in rank-1 accuracy. e denotes re-sampling an additional embedding from Gaussian distributions. Blue color indicates the comprehensive best result. ‡ means we abandon the residual connection.
From the CL baseline results, we can learn that when adding a new cloth condition, the baseline model has difficulty learning cross-view and cross-cloth information from different identities, consistent with the result on CASIA-BN-RCC. And our framework can further improve the model's performance in the CL condition with both back...
We propose a new framework that utilizes progressive feature learning to fully expand the gait representations in discriminative space. In this framework, we design a Progressive Mapping and a Progressive Uncertainty to extract features used for cross-view and cross-cloth in a cascaded way.
In this paper, we propose a new task called Realistic Cloth-Changing Gait Recognition, which focuses on the cloth-changing problem in practice. In this section, we will formally define the problem and then introduce a framework with Progressive Feature Learning to fully extract the cross-view and cross-cloth informatio...
In this work, we tackle the cloth-changing gait recognition problem in realistic applications with automatic data collection, as one of the first works to do so. We propose a new framework that can be applied with off-the-shelf gait recognition backbones to boost performance in the RCC-GR task.
D
The evaluation metrics are success rate and averaged reward. Success rate is the ratio of the number of tasks successfully completed by the dialogue system in evaluation to the total number of dialogues in the test set. Averaged reward refers to the average of the cumulative rewards obtained by the dialogue system for ...
To benchmark our method's performance, we use different DQN variants as baselines in the dialogue policy module for comparison: (1) DQN policy is learned with the standard DQN algorithm Mnih et al. (2015). (2) Duel DQN policy is learned by the duel network structure (Wang et al., 2016). (3) Double DQN policy uses Double Estimator...
The value-based algorithm Q-learning, a common unit of the dialogue policy module, suffers from overestimation bias (Thrun and Schwartz, 1993; Hasselt, 2010). Prior studies addressed the problem in multiple ways, including (1) bias compensation with additive pseudo costs and (2) a variety of estimators. Bias-corrected ...
Overestimation bias is more problematic in the deep Q-learning network (DQN) algorithm (Fan et al., 2020) due to the function approximation errors of DRL. Polishing estimation tricks of a single model and using ensemble models are two mainstream solutions. Double Q-learning is subsequently adapted to a neural network a...
Reinforcement learning (RL) algorithms, specifically Q-learning (Watkins and Dayan, 1992) based algorithms, have become a mainstream method for training the dialogue policy module (Peng et al., 2018; Zhang et al., 2020b). For each step, the policy agent updates its action value¹ (¹This value is the expected return for ...
A
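The two evaluation metrics defined above, success rate and averaged reward, amount to simple aggregations over the evaluated dialogues. A minimal sketch with a hypothetical dialogue-log format (a pair of success flag and per-turn rewards per dialogue; all names illustrative):

```python
def evaluate(dialogues):
    """Each dialogue is (success, per_turn_rewards).
    Success rate: fraction of dialogues the system completed successfully.
    Averaged reward: mean of the cumulative (summed) reward per dialogue."""
    n = len(dialogues)
    success_rate = sum(1 for ok, _ in dialogues if ok) / n
    avg_reward = sum(sum(rewards) for _, rewards in dialogues) / n
    return success_rate, avg_reward

# hypothetical log: small per-turn penalties, terminal bonus or penalty
log = [(True, [-1, -1, 20]), (False, [-1, -1, -1, -10]), (True, [-1, 20])]
sr, ar = evaluate(log)  # sr = 2/3, ar = (18 - 13 + 19) / 3 = 8.0
```

Reward shaping of this kind (per-turn penalty plus a terminal success bonus) is a common choice in task-oriented dialogue RL, but the exact reward scheme used in the paper is not specified here.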
In Table 2, it is shown that applying all techniques (M+I+N) obtains the best Top-1 accuracy results of our framework, but our model slightly decreases its performance regarding Top-5 predictions. We claim that by dealing with imbalance of the Ego4D dataset through Focal Loss, the model takes more risk by attempting t...
Table 2: Performance of H3M with different training strategies, compared to the baseline using the accuracy metric. M: multitask surrogate loss (sharing weights). I: focal loss to solve class imbalance. N: noise injection. Here, bold fonts denote the best result and underlines denote the second best result among all ap...
Our model is able to recognize verb-noun pairs with similar performance to the baseline, as reported in Table 2. However, our model is trained on pre-extracted features $\mathbf{F}$ for each clip, and not the image-based video clip $\mathbf{V}$. Due to the lower dimensionality of these features ($\mathbf{F} \in \mathbb{R}^{T \times ...}$
Table 1: Edit Distance (ED) comparison of long-term human action anticipation in the Ego4D dataset. Scores are obtained directly from their reported results. Here, bold fonts denote the best result and underlines denote the second-best result among all approaches.
A
Two calibration methods, i.e., uncertainty modeling-based calibration and native anomaly-based calibration, are respectively removed from COUTA in two ablated variants, w/o UMC and w/o NAC. These two components are simultaneously excluded in w/o UMC&NAC. The remaining parts of these ablated versions remain the same as ...
According to the comparison results, COUTA successfully achieves state-of-the-art performance by addressing two key limitations in the current one-class learning pipeline. The superiority of COUTA can be attributed to the synergy of our two novel one-class calibration components, which achieves contamination-tolerant, ...
It is mainly because COUTA is essentially a one-class classification model, and by definition, it can alarm all the observations that deviate from the learned normality according to one-class distances. The data normality is modeled based on the majority of the training data instead of these created native anomalies. T...
The comparison of COUTA and its variants verifies the significant contribution of the two calibration methods to one-class classification. COUTA outperforms w/o UMC, w/o NAC, and w/o UMC&NAC by 8%, 2%, and 7%, respectively.
D
Apart from the importance of coherence and linguistic diversity in surface realization, data fidelity is a crucial aspect of D2T systems - the narrative should neither hallucinate contents absent from the data instance nor omit contents present in the data instance. Often, the divergence present in benchmark training d...
Architectural Interventions: The sections §5.1.1 Entity Encoders, §5.1.2 Hierarchical Encoders, §5.1.3 Plan Encoders & Autoencoders, §5.1.5 Graph Encoders, §5.1.6 Reconstruction & Hierarchical Decoders, and §5.1.10 Supplemental Frameworks suggest modifications/augmentations to the seq2seq architecture such that it fos...
Loss-function Interventions: An alternative avenue to achieving a balance between conflicting optimization objectives is to directly model the objective functions to perform multi-task learning: as such sections §5.1.7 Regularization Techniques and §5.1.8 Reinforcement Learning suggest modifications/augmentations to t...
Hierarchical Decoding: Similar to hierarchical encoding (see §5.1.2), hierarchical decoding intends to designate granular roles to each decoder in the hierarchy. Serban et al. (Serban et al., 2017) show that injecting variations at the conditional output distribution does not capture high-level variations. As such, to...
The use of explicit graph encoders in D2T stems from the intuition that neural graph encoders such as Graph Convolutional Networks (GCNs) (Kipf and Welling, 2016) have strong relational inductive biases that produce better representations of input graphs (Battaglia et al., 2018) as an effective alternative to lineariz...
A
Results. The results on the GQA dataset are summarized in TABLE V. From the table, we have the following observations: 1) The incorporation of NICE and NICEST can significantly improve the mR@K scores of the two strong baselines (e.g., 12.6% and 12.6% absolute gains on metric mR@100 over Motifs and VCTree, respectively...
As shown in TABLE VI, it is evident that both NICE and NICEST achieve the highest performance in terms of the Mean metric (e.g., 44.3% and 45.3% under PredCls over Motifs, respectively). This far exceeds the performance of using only label correction or label smoothing methods (e.g., 3.9% ∼ 4.7% absolute...
Results. From the results in TABLE III and TABLE VII, we have the following observations: 1) Compared to the two strong baselines (i.e., Motifs and VCTree), our NICE can consistently improve model performance on metric mR@K over all three tasks (e.g., 5.9% ∼ 14.3% and 3.7% ∼ 14.7% absolute...
Results. From the results in TABLE IV, we can observe that: 1) Compared with the two common baselines (i.e., Motifs and VCTree), the mR@K of our NICE has been significantly improved in all three tasks (e.g., 4.4% ∼ 17.0% and 4.7% ∼ 17.0% absolute gains on metric mR@100 over Motifs and VCT...
A
We denote the rushing ability of the attacker by $\gamma$. If the attacker publishes a new whole block from its private branch to race with other miners (e.g., a new block broadcast in the public branch), $\gamma$ is the expected ratio of public miners that receive the attacker’s block first. $\gamma$...
In Figure 10, we show simulation results for the attacker's RER when following PSM instead of the selfish mining strategy. The PSM attacker can get a higher reward than the selfish miner when its mining power is relatively small. When the attacker's mining power is large enough, the probability of finding more ...
The workflow of the PSM-DoS attack is shown in Figure 8. First, the attacker distributes the partial block data to all the miners and attracts miners to join the attacker's private branch. In the meantime, the attacker leaves the private branch and puts all its mining power back into the public branch. Then, ...
We also extend the assumption that the attacker can promise that it will release the secret based on the length of both the private and the public branches. This assumption is reasonable because when launching mining attacks, attackers are motivated to maximize their revenue, and the secret computation mechanism in Se...
A miner’s expected normalized reward equals the probability of finding a valid block in each round. This assumption is reasonable because according to [gervais2016security], the probability of unintentional forks is around 0.41%, which is negligible.
D
How is this tension resolved? In the full-batch special case, gradient descent spends the bulk of training in a regime called the Edge of Stability (EoS) [8] in which the sharpness — the maximum eigenvalue of the training Hessian — hovers right at, or just above, the stability threshold.
These works have empirically demonstrated that dynamical instability plays a key role in the training process. When training neural networks, gradient descent is constantly attracted to regions of parameter space with increasingly high curvature [20, 21, 8]; yet, up to a quadratic Taylor approximation, gradient descent...
At the EoS, gradient descent would still be moving into regions of higher curvature were it not being constantly repelled from these high-curvature regions by unstable dynamics. As we confirm below, these findings also generalize to preconditioned gradient descent (with a static preconditioner).
At the EoS, gradient descent is constantly being repelled from regions of the loss landscape with sharpness exceeding the stability threshold. Prior work has not discussed preconditioned gradient descent, but in the next section we confirm that the results of [8] carry over to the non-adaptive preconditioned setting.
We will see that this behavior sometimes differs substantially from that of non-adaptive optimizers. In particular, whereas non-adaptive optimizers at the EoS are blocked from entering high-curvature regions of parameter space, adaptive gradient methods at the AEoS can and do enter high-curvature regions via their abil...
B
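The stability threshold underlying the EoS picture can be seen on a one-dimensional quadratic, where the Hessian eigenvalue (the sharpness) is a single number: gradient descent with learning rate lr is stable exactly when the sharpness is at most 2/lr. A self-contained sketch (illustrative only, not from the cited works):

```python
def gd_diverges(sharpness, lr, steps=200, x0=1.0):
    """On the quadratic f(x) = sharpness * x^2 / 2, whose Hessian eigenvalue
    IS the sharpness, gradient descent iterates x <- x - lr * sharpness * x.
    The update is stable iff |1 - lr * sharpness| <= 1, i.e. iff
    sharpness <= 2 / lr (the stability threshold)."""
    x = x0
    for _ in range(steps):
        x -= lr * sharpness * x
    return abs(x) > abs(x0)

lr = 0.1  # stability threshold on the sharpness is 2 / lr = 20
print(gd_diverges(19.0, lr))  # → False: below threshold, iterates shrink
print(gd_diverges(21.0, lr))  # → True: above threshold, oscillation grows
```

At the EoS, the sharpness hovers right at this threshold: any move into higher-curvature territory triggers exactly the growing oscillation seen in the second call, which pushes the iterates back out.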
$D_{\mathrm{KL}}(M, Q; \boldsymbol{\theta}) = \sum_{k} \sum_{l} p_{kl}\, \log\!\left(\frac{p_{kl}}{q_{kl}}\right),$
Use the Kullback-Leibler divergence $D_{\mathrm{KL}}(\boldsymbol{\theta})$ to estimate the statistical distance between $Q$ and $M$ given the current parameters $\boldsymbol{\theta}$ [Eq. (27)].
where $M$ is row-stochastic. The Markov transition matrix then models the unbiased Markov chain where each entry is the probability of the jump from $\mathbf{x}_{k}$ to $\mathbf{x}_{l}$...
As biased CVs to enhance transitions between the folded and unfolded conformations of CLN025 in the metadynamics simulation, we choose the distance between the C$\alpha$ atoms of residues Y1 and Y10 ($d$) and the radius of gyration ($r_{g}$)...
where, in contrast to the standard formulation of the Kullback-Leibler divergence that compares two probability distributions, Eq. (27) is computed for every pair of rows from $M$ and $Q$, and then summed. Equivalently, we can minimize the cross-entropy:
D
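The pairwise-row form of the KL divergence described above, comparing each row of M with the corresponding row of Q and summing, can be sketched directly in a few lines of pure Python (names illustrative; the eps guard against zero entries is an added assumption, not from the source):

```python
import math

def kl_rows(M, Q, eps=1e-12):
    """Sum of row-wise KL divergences between two row-stochastic matrices:
    D = sum_k sum_l p_kl * log(p_kl / q_kl), matching the pairwise-row
    formulation; eps guards the log against zero entries in Q."""
    return sum(
        p * math.log((p + eps) / (q + eps))
        for row_p, row_q in zip(M, Q)
        for p, q in zip(row_p, row_q)
        if p > 0.0
    )

M = [[0.9, 0.1], [0.2, 0.8]]
U = [[0.5, 0.5], [0.5, 0.5]]
print(kl_rows(M, M))  # → 0.0 (identical matrices)
print(kl_rows(M, U))  # positive: M diverges from the uniform chain
```

Minimizing this quantity over the parameters θ of Q then drives the parametrized chain toward the unbiased transition matrix M.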
Currently, there is no clear optimal criterion for the additional removal of short segments from a tree. Further work is needed to set such a condition. At present, a segment is removed if it contains fewer than 5 nodes and gives rise to no further segments.
With the analysis described above, it is possible to generate coronary arterial networks that adhere to given conditions. These networks have terminal segments which should reasonably give rise to a small vessel tree. Such small vessel trees can be modeled in a variety of ways. It is natural to ask which region of the ...
Further, the data discussed here seem rarely to be available. However, even without access to the data it may be desirable to subdivide the ventricle for perfusion studies. Future research will focus on the development of mappings from a given left ventricle to the one discussed here that will allow a ventricle without...
Briefly, using spatial data describing the coronary arterial vasculature from a single porcine heart obtained from fluorescence cryomicrotome images (Goyal et al., 2012) and image processing techniques, we have developed algorithms to organise and search the data in order to build subtrees from the data. These subtrees...
The work presented here is part of a larger effort to create detailed computational models of, for example, myocardial infarction and vascular rarefaction. A myocardial infarction is the blockage or partial occlusion of a coronary artery or cardiac vein leading to local damage. In order to model such pathological condi...
A
Figure 1: Schematic Representation of SDN Network State Estimation Problem. Directed lines denote network traffic flows with different throughput in the given topology, where one can see that there exist different distributions of line type between training and validation/test time, as mentioned in (i)...
In this paper we consider the problem of estimating the latency between a source and a destination network node (denoted as OD pair) in a routing network. The problem setting is to learn a model on smaller networks and extrapolate the predictions on larger networks assuming open-world input, as illustrated in Fig. 1.
In our model, we build on the conclusions of the aforementioned works and cast the graph-size-generalization problem as a transferred graph formulation, which is considered learnable by spectral graph-convolution methods. On the other hand, extrapolation to out-of-distribution data is a fundamental chal...
We consider the pair-wise similarity between the role of a node and that of its neighbors to be crucial, since an OD pair's source or destination influences the close neighbors of the nodes of interest; we therefore formulate their embedding as an "augmented source." GAT is reported to be a well-suited solution to capture suc...
Our study is among the first approaches that attempt to model the routing network state snapshot by learning the given topology structure for the task of estimating network latency using open-world input. Our proposed solution transforms the task of estimating the latency between source and destination nodes to a ...
As for theoretical results, we prove that the proposed algorithm provably recovers the true representations under the low-rank MDP setting. Moreover, we show that our algorithm achieves an $\widetilde{\mathcal{O}}(1/\varepsilon^{2})$ ...
This section provides the analysis of the transition kernel recovery via contrastive learning and the proofs of the main results for single-agent MDPs and zero-sum MGs. Our theoretical analysis integrates contrastive self-supervised learning for transition recovery and low-rank MDPs in a unified manner. Part of our ana...
To focus our analysis on contrastive learning for the transition dynamics, we only consider the setting where the reward function $r_{h}(\cdot,\cdot)$ is known. One might further extend the proposed algorithm to the unknown-reward...
In addition to theoretical guarantees, we also provide numerical experiments to empirically demonstrate the efficacy of our algorithm. Furthermore, we extend the algorithm and theory to the zero-sum MG under the low-rank setting, a multi-agent extension of MDPs to a competitive environment.
We study contrastive-learning empowered RL for MDPs and MGs with low-rank transitions. We propose novel online RL algorithms that incorporate such a contrastive loss with temporal information for MDPs or MGs. We further theoretically prove that our algorithms recover the true representations and simultaneously achieve...
In (Ben Rached et al., 2023), we introduced the DLMC estimator, based on a decoupling approach (dos Reis et al., 2023) to provide a simple importance sampling scheme implementation minimizing the estimator variance. The current paper extends the DLMC estimator to the multilevel setting, achieving better complexity tha...
The remainder of this paper is structured as follows. In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) a...
Importance sampling for MV-SDEs has been studied in (dos Reis et al., 2023; Ben Rached et al., 2023). The decoupling approach developed by (dos Reis et al., 2023) defines a modified, decoupled MV-SDE with coefficients computed using a realization of the MV-SDE law estimated beforehand using a stochastic particle syste...
The decoupled MV-SDE (8) for the given empirical law $\left\{\mu^{P}_{t}:t\in[0,T]\right\}$ is a standard SDE, making it po...
The decoupling approach was developed for importance sampling in MV-SDEs (dos Reis et al., 2023; Ben Rached et al., 2023), where the idea is to approximate the MV-SDE law empirically as in (4), use the approximation as input to define a decoupled MV-SDE and apply a change of measure to it. We decouple the computation o...
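The two-stage idea can be sketched on a toy mean-field SDE, $dX_t=(\mathbb{E}[X_t]-X_t)\,dt+\sigma\,dW_t$, which is my own illustrative choice rather than the paper's model: first estimate the law (here, only its mean) with an Euler–Maruyama particle system, then simulate the decoupled equation with that estimate frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_mean_path(P=2000, n=100, T=1.0, sigma=0.3):
    """Stage 1: approximate the MV-SDE law with a particle system.
    For this toy drift, only the empirical mean path is needed."""
    dt = T / n
    X = rng.normal(1.0, 0.1, size=P)      # initial particle cloud
    means = np.empty(n + 1)
    means[0] = X.mean()
    for i in range(n):
        X = X + (X.mean() - X) * dt + sigma * np.sqrt(dt) * rng.normal(size=P)
        means[i + 1] = X.mean()
    return means

def decoupled_paths(means, M=500, T=1.0, sigma=0.3):
    """Stage 2: with the empirical law frozen, the decoupled equation
    is a standard SDE, so fresh paths can be sampled independently
    (and a change of measure could then be applied to them)."""
    n = len(means) - 1
    dt = T / n
    X = rng.normal(1.0, 0.1, size=M)
    for i in range(n):
        X = X + (means[i] - X) * dt + sigma * np.sqrt(dt) * rng.normal(size=M)
    return X
```

The key point of the decoupling is visible in stage 2: the simulated paths no longer interact, which is what makes standard importance sampling machinery applicable.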
$$\begin{cases}
5.5\text{ m}, & \text{if } r \geq 6\text{ km}\\
0.1\text{ m}, & \text{otherwise}
\end{cases}$$
The Hayabusa 2 spacecraft has three optical navigation cameras: ONC-T, ONC-W1, and ONC-W2 [48]. For the optical navigation in our analysis scenario, we consider the ONC-T and ONC-W1. The ONC-T is a telescopic camera with a FOV of 6.27° and a resolution of 1024×1024 pixels, while the ONC-W1 has a wide FOV of 69.71°...
We assume that the spacecraft is equipped with a LiDAR (Light Detection and Ranging), two optical navigation cameras and a set of accelerometers, for navigation with respect to the asteroid. A summary of the values used in the simulation is presented in Table 2. We consider that no radiometric data is available for the...
By "far approach", we mean the period of the mission when the spacecraft transitions from heliocentric to relative navigation about the small body, corresponding to phase 1 of the Hayabusa 2 campaign [49]. That phase ends with the spacecraft at an arbitrarily far distance from the small body, when a preliminary as...
in which $IFOV = FOV/N_{p}$ is the instantaneous field of view of the camera, with $N_{p}=1024$ ...
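For concreteness, a small helper illustrating the $IFOV = FOV/N_p$ relation and the resulting per-pixel footprint at range (a small-angle approximation; the helper names are mine, not the paper's):

```python
import math

def ifov(fov_deg, n_pixels=1024):
    """Instantaneous field of view in rad/pixel: IFOV = FOV / N_p."""
    return math.radians(fov_deg) / n_pixels

def ground_sample_distance(fov_deg, distance_m, n_pixels=1024):
    """Approximate footprint of one pixel at a given range, using the
    small-angle approximation size ≈ distance * IFOV."""
    return distance_m * ifov(fov_deg, n_pixels)
```

With the ONC-T values quoted above (FOV 6.27°, 1024 pixels), the per-pixel footprint at a 6 km range comes out to roughly 0.64 m.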
There exists a deterministic polynomial-time algorithm which approximates problem (5.14) within a factor of $e^{d^{\prime}+o(d^{\prime})}$ ...
Our convergence guarantees rely on the existence of suitable parameters $\Delta,\delta>0$ for a given Brascamp–Lieb datum $(\mathcal{A},\bm{w})$, such that the global optimum $X^{*}$ ...
A shortcoming of the present approach lies in the difficulty of characterizing the structural dependency on the input datum more explicitly. In particular, while (Bennett et al., 2007, Proposition 5.2) guarantees the existence of suitable $\delta,\Delta>0$ for each (feasible, simple) Brasca...
For a detailed analysis of the algorithmic aspects of this problem and Nikolov’s approach, see also (Ebrahimi et al., 2017). An interesting direction for future work is the question of whether a suitable approximation algorithm for the parameter c𝑐citalic_c can be found, which, in turn, may provide insight into explic...
We conjecture that there is a close relation between our $\delta$ and the parameter $c$ in Theorem 33 (via Eq. (5.10)), which is characterized by the bit complexity of the input datum $(\mathcal{A},w)$. Notably, $c$ is closely related to solving a constrain...
Figure 3. The $J_{3}$ centrality plot, with $f_{\sigma}=1$, of dimension $1$, produced by the Rips filtration of the point cloud sampled around a wedge sum of two annuli.
In this study, we propose novel centrality measures that leverage the persistence of homology classes and their merge history along the filtration. Integral to this is the development of an algorithm that captures the merge history of homology classes. These homology-based centrality measures produce, for all cycle gen...
The centrality plots suggest the presence of a relatively important signal within the point cloud data. This is evidenced by the large difference in the maximum centrality value for the highest ranked hole compared to the others. Notably, this hole coincides with the one previously identified using the hypothesis test...
This subsection investigates the stability of the centrality measures we defined earlier. Stability ensures that small changes in the network data, such as slight adjustments to edge weights, will not lead to drastic changes in the calculated centrality of cycles.
Many complex networks, such as social networks [socnet] and telecommunication networks [telnet], use graph-based centrality measures to determine the relative significance of nodes or cycles in the network. The derivations of measures of central tendency in statistics reflect the idea that a single value can represe...
Laine and Aila (2017) develop Temporal Ensembling, which maintains an exponential moving average of label predictions for each training example and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwiel...
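The moving-average target idea can be sketched as follows; this is a simplification (the original Temporal Ensembling also bias-corrects the average during early epochs, which is omitted here):

```python
import numpy as np

def update_ema_targets(ema, predictions, alpha=0.6):
    """One per-epoch target update: an exponential moving average of a
    training example's predicted class probabilities."""
    return alpha * ema + (1.0 - alpha) * predictions

def consistency_penalty(ema, predictions):
    """Mean squared difference between current predictions and the
    moving-average targets; this is the term the training loss adds."""
    return float(np.mean((predictions - ema) ** 2))
```

The drawback noted above is visible here: `update_ema_targets` is called once per epoch per example, so the targets lag the model by a full pass over the data.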
However, the success of typical SSL largely depends on the assumption that the labeled and unlabeled data share an identical class distribution, which is hard to meet in real-world applications. The distribution mismatch between the labeled and unlabeled sets can cause severe bias in the pseudo-labels of SSL, re...
In this paper, we aim to tackle the semi-supervised domain generalization (SSDG) task. Different from the typical semi-supervised task, the challenge of SSDG is that there exist multiple different domains with latent distribution discrepancy. To address this issue, we first explore the theory of multi-domain learning ...
In this paper, our goal is to address the semi-supervised domain generalization (SSDG) task. In this task, a few labeled samples are available in each domain, while most samples lack label information; a key task is therefore how to generate accurate pseudo-labels. Different from the conventional semi-supervis...
Inspired by the theory of multi-domain learning, we extend FixMatch (Sohn et al., 2020) (an excellent baseline in SSDG, as will be validated in the experimental section) to a multi-task learning method, named MultiMatch, for semi-supervised domain generalization. Specifically, we bui...
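The FixMatch selection rule that MultiMatch builds on can be sketched as follows (the threshold value and function name are illustrative): pseudo-labels are taken from predictions on weakly augmented inputs, but only where the model is sufficiently confident.

```python
import numpy as np

def fixmatch_pseudo_labels(weak_probs, tau=0.95):
    """Core FixMatch selection: keep a pseudo-label only if the max
    predicted probability on the weakly augmented view exceeds tau.
    Returns (mask of retained rows, argmax labels for all rows)."""
    confidence = weak_probs.max(axis=1)
    return confidence >= tau, weak_probs.argmax(axis=1)
```

Under distribution mismatch across domains, the bias discussed above enters exactly here: confidently wrong predictions pass the threshold and become training targets.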
Figure 3: Four schemes for adapting Conv-Adapter to ResNet50: Convolution Parallel, Convolution Sequential, Residual Parallel, and Residual Sequential. The schemes differ in the position of the modified representation and the corresponding insertion form. Other networks can be adapted similarly following the il...
Given the above challenges, we design our Conv-Adapter as a bottleneck structure, which is also widely used by PET methods in NLP tasks [19, 21]. However, Conv-Adapter designs the bottleneck specifically for ConvNets. Precisely, it consists of two convolutional layers with a non-linearity in between. The ...
Compared to channel down-sampling with a bottleneck in Conv-Adapter, spatial down-sampling introduces nearly 27 times the parameters with inferior accuracy. We also validate the adapting scheme of applying a $1\times 1$ convolution to all convolutional layers [48], which introduces nearly 16 times the parameters to...
For simplicity, we adopt the same activation function used in the backbone as the non-linearity at the middle of the bottleneck. The effective receptive field of the modulated feature maps produced by Conv-Adapter is thus similar to that of the adapted blocks in the backbone.
We show the performance of Conv-Adapter on the VTAB-1k validation set in Fig. 5, using different kernel sizes for the depth-wise convolution, to verify our argument about the loss of locality. One can observe that, for both ResNet50 and ConvNeXt-B, using a smaller kernel size results in inferior performance. When setting the ke...
$$L^{\mathfrak{s}} = \frac{1}{N}\sum_{i=1}^{N}\left|f_{NN}(\mathbf{x}_{i})\right|^{2}.$$
We introduce a novel method for identifying changepoints in dynamic systems governed by general PDEs dynamics. Our approach works with piecewise-constant time-changing parameters and leverages total variation regularization on the first-order differences of parameters. We also propose an online learning strategy that ...
The standard PINNs model assumes that the parameters of the PDEs are constant over the entire time domain. In order to accommodate Definition 2.1, we allow $\lambda(t)$ to change and introduce an additional regularization term in the form of a total-variation penalty on the first-order differences, $\sum_{i}\left|\Delta\lambda(t^{i})\right|$,
where $\Delta\lambda(t^{i}) = \lambda(t^{i+1}) - \lambda(t^{i})$ ...
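A direct numpy sketch of the total-variation penalty on first-order differences of a piecewise-constant parameter sequence:

```python
import numpy as np

def tv_penalty(lam):
    """Total variation of a sampled parameter path: sum over i of
    |lambda(t^{i+1}) - lambda(t^i)|. It is zero for a constant path
    and counts the total magnitude of the jumps, which is what makes
    it suitable for detecting changepoints."""
    lam = np.asarray(lam, dtype=float)
    return float(np.sum(np.abs(np.diff(lam))))
```

Because the penalty is a sum of absolute differences (an $\ell_1$ norm on the jumps), minimizing it drives most differences exactly to zero, leaving a few sharp changepoints.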
The left panel of Figure 2 presents the detected changepoints over the temporal axis. In comparison, the PINNs (standard PINNs assume a constant coefficient over time, which performs much worse in changepoint scenarios; to ensure a fair comparison, we modify PINNs to allow a time-dependent coefficient) ... This highl...
A number $h$ which forms the boundary between two consecutive legs will be called a waypoint. We count the numbers $1$ and $m$ as waypoints by courtesy, and refer to them as terminal waypoints; all other waypoints are internal. Thus, a walk consists of a sequence of legs from one waypoint to another.
that $w^{\prime} = u^{f^{\prime}} = v^{g^{\prime}}$ ...
This deals with all cases in which the minimal leg $[V,W]$ is internal. We turn finally to the few remaining cases where it is terminal. Without loss of generality, we may assume that we are dealing with an initial leg, i.e. $V=1$; hence $W$ is an internal waypoint...
By a leg of $f$, we mean a maximal interval $[i,j]\subseteq[1,m]$ such that, for $h$ in the range $i\leq h<j$, the difference $d=f(h+1)-f(h)$ ...
If $h$ is an internal waypoint where the change is from an increasing to a decreasing leg, we call $h$ a peak; if the change is from a decreasing to an increasing leg, we call it a trough. Not all waypoints need be peaks or troughs, because some legs may be flat; however, it is these waypoints that will...
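These definitions translate directly into code. The sketch below uses 0-based indexing and my own function name: legs are maximal runs of constant step difference, internal waypoints are the positions where the difference changes, and peaks/troughs are classified by the sign of the difference on either side.

```python
def waypoints_and_extrema(f):
    """Return (internal waypoints, peaks, troughs) of the sequence f.

    A waypoint is classified a peak when an increasing leg (positive
    step) meets a decreasing one (negative step), and a trough in the
    opposite case; waypoints adjacent to a flat leg are neither."""
    diffs = [f[i + 1] - f[i] for i in range(len(f) - 1)]
    internal = [i + 1 for i in range(len(diffs) - 1)
                if diffs[i] != diffs[i + 1]]
    peaks = [h for h in internal if diffs[h - 1] > 0 and diffs[h] < 0]
    troughs = [h for h in internal if diffs[h - 1] < 0 and diffs[h] > 0]
    return internal, peaks, troughs
```

For example, in `[1, 2, 3, 2, 1, 1, 2]` the flat leg `1, 1` produces a waypoint that is neither peak nor trough, matching the remark above.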
$$\frac{\beta}{2}\left\|\nabla^{\alpha}q\right\|_{L_{2}}^{2}$$
The methods in the middle of the spectrum mentioned above and in the figure use deterministic methods to make the solution of statistical formulations more efficient. Yet, relatively little has been done to bring information from the (expensive) Bayesian perspective to selectively enrich the deterministic solution, at ...
accuracy of measurements on the one hand, and the uncertainty in the recovered parameters of the inverse problem on the other. But, none of the studies mentioned go on to specifically identify the role of information in the spatially variable ability to recover parameters in inverse problems in a systematic way. Let us...
For many inverse problems – such as ultrasound or seismic imaging, or electrical impedance tomography – the quantity we would like to identify is not a right-hand source term, but a coefficient in the operator on the left side of the equation. In these cases, the definition of the information density will have to be l...
The bottom right panel of the figure also shows the information density $j(\mathbf{x})$ that corresponds to this problem, as defined in (29). It illustrates that, given the location of detectors and the nature of the equation, information is primarily available upstream of detector locations....
We use the label hierarchy creation view to create child and parent relationships between related labels. Creating relationships to link metadata in both datasets works relatively well, especially for contemporary objects, such as flora, fauna, body parts, body postures, or furniture. More specific elements of medieval...
The Image Point Cloud (Figure 4) is an entry point to the labeling process, showing two-dimensional representations of the images of the selected manuscripts so that similar images are presented close to each other in space. In it, each circle represents an image. One can select and combine embeddings based on the...
The label hierarchy view shows the current state of the underlying label hierarchy. We use the Sugiyama framework (Sugiyama et al., 1981) to draw a directed acyclic graph for the label hierarchy. In the first step, a depth-first search checks from each node whether the graph contains cycles. If there is one, the ...
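The cycle check described above is a standard depth-first search with three node colors; a minimal sketch (the view's actual implementation is not shown in the text):

```python
def find_cycle(graph):
    """Depth-first search for a cycle in a directed graph given as
    {node: [children]}. Returns one back edge (u, v) that closes a
    cycle, or None if the graph is acyclic; a hierarchy editor would
    then ask the user to break such an edge before layout."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {u: WHITE for u in graph}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:
                return (u, v)             # back edge: cycle found
            if color.get(v, WHITE) == WHITE:
                hit = dfs(v)
                if hit:
                    return hit
        color[u] = BLACK
        return None

    for u in list(graph):
        if color[u] == WHITE:
            hit = dfs(u)
            if hit:
                return hit
    return None
```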
We designed visualizations as part of our research thinking process to combine two (partially) annotated image datasets representing the same genre but originating in two different research initiatives, exhibiting differences and inconsistencies in their vocabulary. The visualizations were used for labeling purposes a...
User 2, for example, started the visual analytics process not by labeling the images with missing labels but by working on the hierarchy. Starting with the hierarchy helped to understand the variety of labels from a holistic point of view and to understand how they relate to each other. Annotating the illuminations wi...
Some readers may note that the different metrics do not make a big difference in Table VI and may question the effectiveness of our proposed metric. Here, we discuss its contributions in detail. Firstly, the main contribution of our metric is to effectively retrieve similar sour...
In this part, we conduct a case study to verify the effectiveness of PanDa's knowledge transfer ability and to reveal its potential limitations in detail. First, we analyze why the vanilla PoT (i.e., SPoT) works well in some settings (even outperforming model-tuning and PanDa) but fails in others. We use the MNLI and ...
Secondly, to investigate the effectiveness of our metric intuitively and clearly, we report the performance of the target tasks with the most similar source tasks, as measured by different metrics in the vanilla PoT settings. Figure 7 illustrates the results across different PLMs respectively.
Russia and Ukraine have a long history of electronic information warfare (Margarita Jaitner, 2015) and are among the most active cybercrime hubs (Lusthaus et al., 2020). When Russia invaded Ukraine on 24 February 2022, war-related attacks on the two countries were regularly reported (New York Times, 2022). A popular n...
We do not dispute claims about the prevalence of state-sponsored attacks such as malware and phishing (Google, 2023; Microsoft Threat Intelligence, 2023), but rather provide additional perspectives on the role of low-level cybercrime actors. Some cybercrime-related activities may indeed contribute to the war effort. Le...
The role of the low-level cybercrime actors studied in this paper amounts to essentially trivial acts of solidarity and opportunistic competition. Their primary impact is probably to disseminate political propaganda, with little measurable evidence to suggest these actors are making any persistent contribution to the ...
Government-backed cyber operations (Google, 2023; Microsoft Threat Intelligence, 2023) and destructive attacks have continued (Wired, 2023; Microsoft, 2022). However, data about nation-state attacks is hard for academics to access, and actors behind significant real-world attacks tend to take steps to avoid scrutiny. W...
We also observe that, similar to the results in E1.1, the road sign dataset is challenging to perform ranking on. We believe this is because the road sign dataset is relatively small resulting in surrogate models with unaligned loss surfaces (Demontis et al., 2019). However, when the attacker has a large enough datase...
Table 4. The average transferability of a sample when ranking its potential perturbations for the X-Ray and Road Sign datasets over 100 trials. Columns represent the various ranking methods, and rows indicate the combination of victim and surrogate model architectures, ensuring that $F_{0}$ ...
In Tables 3 and 4 we present the results when ranking images for different $k$ after applying the best perturbation to each image. Table 3 presents the findings for the CIFAR10 and ImageNet datasets, while Table 4 provides the results for the X-Ray and Road Sign datasets. Each cell within these tables indicates...
Table 3. The average transferability of a sample when ranking its potential perturbations for the CIFAR10 and ImageNet datasets over 100 trials. Columns represent the various ranking methods, and rows indicate the combination of victim and surrogate model architectures, ensuring that $F_{0}$ ...
In this part of the evaluation, we present the findings of our comparative analysis on the transferability of three adversarial attacks: FGSM, PGD and Momentum, conducted across four diverse datasets. These results offer insights into the effectiveness of these attacks, the vulnerability of different victim models and ...
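The ranking protocol behind these tables can be sketched abstractly: score candidate adversarial samples on the surrogate side, take the top-$k$, and measure what fraction actually transfers to the victim (array names here are illustrative, not from the paper):

```python
import numpy as np

def topk_transfer_rate(surrogate_scores, victim_success, k):
    """Rank candidates by a surrogate-side score (higher = expected to
    transfer better) and return the fraction of the top-k that in
    fact fooled the victim model (1 = success, 0 = failure)."""
    order = np.argsort(surrogate_scores)[::-1]   # best-scored first
    return float(np.mean(victim_success[order[:k]]))
```

A ranking method is useful exactly when this top-$k$ rate is higher than the unranked average success rate, which is what the tables compare across methods and architecture pairs.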
In a similar manner, we stress that our results should not be understood as condemning quantum kernel methods, but rather as a prompt to develop exponential-concentration-free embeddings for quantum kernels. Crucially, incorporating quantum aspects into machine learning does not always lead to better performance. Indeed, often i...
program for Academic Promotion. SW is supported by the Samsung GRP grant. M.C. was initially supported by ASC Beyond Moore’s Law project at Los Alamos National Laboratory (LANL). This work was also supported by the Quantum Science Center (QSC), a National Quantum Information Science Research Center of the U.S. Departme...
We thank the reviewers at Nature Communications and QIP for their valuable feedback and Jonas M. Kübler for his comments on Appendix A. ST is supported by the National Research Foundation, Prime Minister’s Office, Singapore and the Ministry of Education, Singapore under the Research Centres of Excellence programme and...
We show that analogous to the causes of BPs for QNNs there are at least three different mechanisms that can lead to the exponential concentration of the encoded quantum states, including (i) the expressivity of the encoded quantum state ensemble, (ii) the entanglement in encoded quantum states with a local observable a...
Fig. 6 shows results for the scaling of the kernel variance as a function of the number of qubits $n$ and HEE layers $L$. As $L$ increases, the expressivity of the ansatz increases, and for sufficiently large $L$ we observe exponential concentration of both the fidelity and projected...
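The underlying effect can be reproduced with a toy numerical experiment on Haar-like random states (my own illustration, not the paper's HEE ansatz): the variance of the fidelity kernel $|\langle\psi|\phi\rangle|^{2}$ shrinks rapidly with the number of qubits, since random overlaps scale like $1/2^{n}$.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_state(dim):
    """Haar-like random pure state: normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def fidelity_variance(n_qubits, samples=300):
    """Empirical variance of the fidelity kernel |<psi|phi>|^2 over
    random state pairs in a 2^n-dimensional Hilbert space."""
    d = 2 ** n_qubits
    vals = [abs(np.vdot(random_state(d), random_state(d))) ** 2
            for _ in range(samples)]
    return float(np.var(vals))
```

Already between 2 and 6 qubits the variance drops by orders of magnitude, mirroring the exponential concentration that the paper attributes to highly expressive embeddings.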
Most current methods [25, 1, 31, 17, 5, 4] use physical variables, e.g., driving speed, acceleration, time gap, heading angle, yaw angle, and distances, for lane change recognition. Nevertheless, physical variables cannot represent the type of target objects, as they do not contain enough semantic information, where...
Two approaches involving seven 3D action recognition networks are adopted to classify and anticipate lane change events on the PREVENTION dataset. For our RGB+3DN method, the lane change recognition problem is formulated as an action recognition task only utilising visual information collected by cameras. The best per...
: The second method is designed over the first one. It uses the same 3D action recognition networks as the first method. Bounding box information is embedded to each frame of the RGB video data to improve classification and prediction accuracy. This method assumes that a separate vehicle prediction method has been used...
To address this problem, we propose a novel end-to-end framework involving two approaches for lane change recognition. Specifically, our method takes advantage of recent powerful action recognition models and mimics human drivers' behaviour by utilizing only the visual information. The first approach (RGB+3DN) ...
: The first method utilises only the visual information collected by the front-facing cameras, which is the same kind of information and approach that human drivers would use to predict manoeuvres. We test this approach with seven 3D action recognition networks involving I3D networks, SlowFast networks, X3D networks an...
be changed and API functions can be substituted. Given a program $p_{a}$, there typically exist many $p_{b}\in P$ such that
representation and semantics: If we have $p_{a}=p_{b}$, it directly follows that $p_{a}\equiv p_{b}$ ...
$k$-anonymity. Let $p_{a}$ and $p_{b}$ be two programs written by developers $a$ and $b$ with $p_{a}\neq p_{b}$ ...
$p_{a}\equiv p_{b}$ but $p_{a}\neq p_{b}$ ...
$$\mathcal{Y}(p_{a})=\mathcal{Y}(p_{b})\;\Longleftrightarrow\;p_{a}\equiv p_{b}.$$
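The distinction between textual equality ($p_a = p_b$) and semantic equivalence ($p_a \equiv p_b$) can be made concrete with a toy check; here callables stand in for programs, and agreement on a test suite stands in for true equivalence (which is undecidable in general):

```python
def behaviorally_equivalent(pa, pb, inputs):
    """Crude proxy for semantic equivalence: the two programs agree on
    every input in a test suite. Textually different programs that
    pass this check are exactly what k-anonymous rewriting exploits."""
    return all(pa(x) == pb(x) for x in inputs)

# Two textually different implementations with identical semantics.
def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)
```

Here `sum_loop != sum_builtin` as source text, yet they are behaviorally indistinguishable, which is the situation the equivalence class $\mathcal{Y}$ above is meant to capture.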
With the rapid development of pre-training techniques, many pre-trained models are available, which poses a new challenge: how to fine-tune these models in a parameter-efficient way on new tasks. Traditional fine-tuning methods [24, 25, 26, 27] add task-specific heads and tune all parameters. Although it is simple a...
Multi-task learning (MTL) is an important subfield in machine learning [48, 17, 49]. By exploiting task relatedness, it is able to improve the performance over single-task learning. There are two dominant methods for deep multi-task learning, hard and soft parameter sharing, which learn identical and similar features, ...
CoOp was proposed by Zhou et al. [5], which enables few-shot adaptation of the pre-trained CLIP model for image recognition. CoOp inherits the two-stream structure from CLIP to bridge the gap between pre-training and fine-tuning. In other words, it has an image encoder denoted as $e(\cdot)$ to extract...
For this aim, we propose soft context shared prompt tuning, SoftCPT, which adopts a shared meta network across all tasks to generate the context of a task based on its task description text. The meta network extracts task features with a pre-trained language model and transforms them into the neede...
CoOp, proposed by Zhou et al. [5], is the first method that successfully introduces prompt tuning to VLMs. Following this line, several prompt tuning methods were proposed to further improve effectiveness and versatility for the few-shot recognition task. CoCoOp [44] was proposed to address the class shift problem by in...